From k-assem84 at hotmail.com Sun May 1 05:02:01 2011
From: k-assem84 at hotmail.com (suzana8447)
Date: Sun, 1 May 2011 02:02:01 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] fit program.
In-Reply-To:
References: <31503548.post@talk.nabble.com>
Message-ID: <31516125.post@talk.nabble.com>

Dear all,

This is the program I use to fit a function called EoverA to certain data:

_______________________________________________________________________________

from scipy.optimize import *
from scipy.optimize import fmin
from numpy import array, pi
import numpy as np

de = array([ 0.0405, 0.081, 0.1215, 0.162, 0.243])
en = array([ -7.9, -12.71, -15.17, -15.954, -13.54 ])

def EoverA(parameters, rho):
    'From PRB 28, 5480 (1983)'
    t0 = parameters[0]
    t3 = parameters[1]
    alpha = parameters[2]
    amass = 939.
    hbar = 197.3
    ainte = array([ 1.23349198, 0.960749725, 0.827850618, 0.743755769, 0.638012883])
    E = <analytical expression for EoverA>
    return E

def objective(pars, y, x):
    # we will minimize this function
    err = y - EoverA(pars, x)
    return sum(err**2)

x0 = [ -2931.70, 18700. , 0.16667]  # initial guess of parameters
plsq = fmin(objective, x0, args=(en, de))
print plsq

_____________________________________________________________________________________

Now I have the analytical expressions for the first and second derivatives of the function EoverA. I also want to fit these two expressions to some data (at least at one point). In essence, I want the program to give me new parameters such that EoverA and its first and second derivatives are all fitted to certain data at the same time.

Thanks in advance.

Charles R Harris wrote:
>
> On Fri, Apr 29, 2011 at 2:37 AM, suzana8447 wrote:
>
>>
>> Hello,
>>
>> I am using right now a program called least square root fit (python
>> language) and I am fitting a function to some data.
>> But I urgently need to fit also the first and second derivatives of the
>> same
>> function at the same time to some data. However I do not know how to deal
>> with. Can you help me.
>> Thanks in advance.
>>
>>
> Probably, but you need to be a bit more precise about what you are trying
> to
> do. Also, can you supply a link to the program you are using?
>
> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
View this message in context: http://old.nabble.com/fit-program.-tp31503548p31516125.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From vanleeuwen.martin at gmail.com Sun May 1 14:34:20 2011
From: vanleeuwen.martin at gmail.com (Martin van Leeuwen)
Date: Sun, 1 May 2011 11:34:20 -0700
Subject: [SciPy-User] confusion around return value from numpy.max
Message-ID:

Hi all,

I have a 2-dimensional array f that contains floating point values. I believe this array contains no NaNs.

Is it possible to have nan returned by numpy.max() if an array contains no NaNs?

To further illustrate my problem, I have the following two lines right after each other in my larger block of code:

print numpy.sum(f==numpy.nan)
print numpy.max(f)

The first line prints 0 (zero), the second nan.

I can use numpy.nanmax instead of numpy.max and that works okay. I am just confused as to whether there are or aren't nan values in my array f, and why numpy.max returns nan. I have never experienced a problem like this with the simple numpy.max function... Unfortunately, I can't reproduce the problem with a small script using an array of random values. If I do that, it works okay.
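A minimal sketch that reproduces the distinction with a small made-up array (the array below is hypothetical, not the real f from the message above):

import numpy as np

f = np.array([[1.0, 2.0], [np.nan, 4.0]])
print np.sum(f == np.nan)    # 0 -- nan never compares equal to anything, so this check misses NaNs
print np.sum(np.isnan(f))    # 1 -- a reliable way to count NaNs
print np.max(f)              # nan, because the array really does contain a NaN
print np.nanmax(f)           # 4.0, ignoring the NaN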
Martin From kwgoodman at gmail.com Sun May 1 14:43:32 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 1 May 2011 11:43:32 -0700 Subject: [SciPy-User] confusion around return value from numpy.max In-Reply-To: References: Message-ID: On Sun, May 1, 2011 at 11:34 AM, Martin van Leeuwen wrote: > HI All, > > I have a 2-dimensional array f that contains floating point values. I > believe this array contains no NaNs. > > Is it possible to have nan returned by numpy.max() if an array > contains no NaNs?? > > To further illustrate my problem, I have the following two lines right > after each other in my larger block of code: > > > print numpy.sum(f==numpy.nan) > print numpy.max(f) > > > the first line prints 0 (zero), the second nan. f == np.nan is alway False even if f contains NaNs (because a NaN is not equal to a NaN). Try: np.sum(np.isnan(f)) From vanleeuwen.martin at gmail.com Sun May 1 14:52:13 2011 From: vanleeuwen.martin at gmail.com (Martin van Leeuwen) Date: Sun, 1 May 2011 11:52:13 -0700 Subject: [SciPy-User] confusion around return value from numpy.max In-Reply-To: References: Message-ID: Hi Keith, Thanks. I didn't know f == numpy.nan always return False, but it's certainly good to know (and probably makes a lot of sense too). numpy.isnan(f) works fine indeed. Thanks again, Martin 2011/5/1 Keith Goodman : > On Sun, May 1, 2011 at 11:34 AM, Martin van Leeuwen > wrote: >> HI All, >> >> I have a 2-dimensional array f that contains floating point values. I >> believe this array contains no NaNs. >> >> Is it possible to have nan returned by numpy.max() if an array >> contains no NaNs?? >> >> To further illustrate my problem, I have the following two lines right >> after each other in my larger block of code: >> >> >> print numpy.sum(f==numpy.nan) >> print numpy.max(f) >> >> >> the first line prints 0 (zero), the second nan. > > f == np.nan is alway False even if f contains NaNs (because a NaN is > not equal to a NaN). Try: > > np.sum(np.isnan(f)) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From kwgoodman at gmail.com Sun May 1 15:08:55 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 1 May 2011 12:08:55 -0700 Subject: [SciPy-User] Moving median code In-Reply-To: <4DB593D5.4010506@cs.wisc.edu> References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: On Mon, Apr 25, 2011 at 8:31 AM, J. David Lee wrote: > In working on a project recently, I wrote a moving median code that is > about 10x faster than scipy.ndimage.median_filter. It utilizes a linked > list structure to store values and track the median value. If anyone is > interested, I've posted the code here (about 150 lines): > > http://pages.cs.wisc.edu/~johnl/median_code/median_code.c I wrapped your moving window median code in cython. Here is how the timing scales with window size: >> a = np.random.rand(1e6) >> timeit move_median(a, window=10) 10 loops, best of 3: 44.4 ms per loop >> timeit move_median(a, window=100) 10 loops, best of 3: 179 ms per loop >> timeit move_median(a, window=1000) 1 loops, best of 3: 1.96 s per loop >> timeit move_median(a, window=10000) 1 loops, best of 3: 48.3 s per loop It would be interesting to compare the timings with a double heap algorithm [1] and with a C version of Wes's skiplist. I'll try to wrap the double heap next when I get a chance. BTW, what license are you using for your code? Here's the code I used. It's the first wrap I've done so suggestions welcomed. 
Now that I sort of know how, I'm going to wrap everything I see! import numpy as np cimport numpy as np import cython cdef extern from "cmove_median.c": struct mm_node struct mm_list: np.npy_int64 len mm_node *head mm_node *tail mm_node *min_node mm_node *med_node void mm_init_median(mm_list *mm) void mm_insert_init(mm_list *mm, np.npy_float64 val) void mm_update(mm_list *mm, np.npy_float64 val) np.npy_float64 mm_get_median(mm_list *mm) void mm_free(mm_list *mm) np.npy_float64 mm_get_median(mm_list *mm) mm_list mm_new(np.npy_int64 len) def move_median(np.ndarray[np.float64_t, ndim=1] a, int window): cdef int n = a.size, i if window > n: raise ValueError("`window` must be less than a.size.") if window < 2: raise ValueError("I get a segfault when `window` is 1.") cdef np.ndarray[np.float64_t, ndim=1] y = np.empty(n) cdef mm_list mm = mm_new(window) for i in range(window): mm_insert_init(cython.address(mm), a[i]) y[i] = np.nan mm_init_median(cython.address(mm)) y[window-1] = mm_get_median(cython.address(mm)) for i in range(window, n): mm_update(cython.address(mm), a[i]) y[i] = mm_get_median(cython.address(mm)) mm_free(cython.address(mm)) return y [1] http://home.tiac.net/~cri_a/source_code/index.html#winmedian_pkg From wesmckinn at gmail.com Sun May 1 15:11:22 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 1 May 2011 15:11:22 -0400 Subject: [SciPy-User] Moving median code In-Reply-To: References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: On Sun, May 1, 2011 at 3:08 PM, Keith Goodman wrote: > On Mon, Apr 25, 2011 at 8:31 AM, J. David Lee wrote: >> In working on a project recently, I wrote a moving median code that is >> about 10x faster than scipy.ndimage.median_filter. It utilizes a linked >> list structure to store values and track the median value. If anyone is >> interested, I've posted the code here (about 150 lines): >> >> http://pages.cs.wisc.edu/~johnl/median_code/median_code.c > > I wrapped your moving window median code in cython. Here is how the > timing scales with window size: > >>> a = np.random.rand(1e6) >>> timeit move_median(a, window=10) > 10 loops, best of 3: 44.4 ms per loop >>> timeit move_median(a, window=100) > 10 loops, best of 3: 179 ms per loop >>> timeit move_median(a, window=1000) > 1 loops, best of 3: 1.96 s per loop >>> timeit move_median(a, window=10000) > 1 loops, best of 3: 48.3 s per loop > > It would be interesting to compare the timings with a double heap > algorithm [1] and with a C version of Wes's skiplist. I'll try to wrap > the double heap next when I get a chance. BTW, what license are you > using for your code? > > Here's the code I used. It's the first wrap I've done so suggestions > welcomed. Now that I sort of know how, I'm going to wrap everything I > see! > > import numpy as np > cimport numpy as np > import cython > > cdef extern from "cmove_median.c": > ? ?struct mm_node > ? ?struct mm_list: > ? ? ? ?np.npy_int64 len > ? ? ? ?mm_node *head > ? ? ? ?mm_node *tail > ? ? ? ?mm_node *min_node > ? ? ? ?mm_node *med_node > ? ?void mm_init_median(mm_list *mm) > ? ?void mm_insert_init(mm_list *mm, np.npy_float64 val) > ? ?void mm_update(mm_list *mm, np.npy_float64 val) > ? ?np.npy_float64 mm_get_median(mm_list *mm) > ? ?void mm_free(mm_list *mm) > ? ?np.npy_float64 mm_get_median(mm_list *mm) > ? ?mm_list mm_new(np.npy_int64 len) > > def move_median(np.ndarray[np.float64_t, ndim=1] a, int window): > ? ?cdef int n = a.size, i > ? ?if window > n: > ? ? ? ?raise ValueError("`window` must be less than a.size.") > ? ?if window < 2: > ? ? ? 
?raise ValueError("I get a segfault when `window` is 1.") > ? ?cdef np.ndarray[np.float64_t, ndim=1] y = np.empty(n) > ? ?cdef mm_list mm = mm_new(window) > ? ?for i in range(window): > ? ? ? ?mm_insert_init(cython.address(mm), a[i]) > ? ? ? ?y[i] = np.nan > ? ?mm_init_median(cython.address(mm)) > ? ?y[window-1] = mm_get_median(cython.address(mm)) > ? ?for i in range(window, n): > ? ? ? ?mm_update(cython.address(mm), a[i]) > ? ? ? ?y[i] = mm_get_median(cython.address(mm)) > ? ?mm_free(cython.address(mm)) > ? ?return y > > [1] http://home.tiac.net/~cri_a/source_code/index.html#winmedian_pkg > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > assuming a compatible license do you want to make a little github repo like you mentioned? I have been interested in this problem for a while From wesmckinn at gmail.com Sun May 1 15:17:39 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 1 May 2011 15:17:39 -0400 Subject: [SciPy-User] Moving median code In-Reply-To: References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: On Sun, May 1, 2011 at 3:11 PM, Wes McKinney wrote: > On Sun, May 1, 2011 at 3:08 PM, Keith Goodman wrote: >> On Mon, Apr 25, 2011 at 8:31 AM, J. David Lee wrote: >>> In working on a project recently, I wrote a moving median code that is >>> about 10x faster than scipy.ndimage.median_filter. It utilizes a linked >>> list structure to store values and track the median value. If anyone is >>> interested, I've posted the code here (about 150 lines): >>> >>> http://pages.cs.wisc.edu/~johnl/median_code/median_code.c >> >> I wrapped your moving window median code in cython. Here is how the >> timing scales with window size: >> >>>> a = np.random.rand(1e6) >>>> timeit move_median(a, window=10) >> 10 loops, best of 3: 44.4 ms per loop >>>> timeit move_median(a, window=100) >> 10 loops, best of 3: 179 ms per loop >>>> timeit move_median(a, window=1000) >> 1 loops, best of 3: 1.96 s per loop >>>> timeit move_median(a, window=10000) >> 1 loops, best of 3: 48.3 s per loop >> >> It would be interesting to compare the timings with a double heap >> algorithm [1] and with a C version of Wes's skiplist. I'll try to wrap >> the double heap next when I get a chance. BTW, what license are you >> using for your code? >> >> Here's the code I used. It's the first wrap I've done so suggestions >> welcomed. Now that I sort of know how, I'm going to wrap everything I >> see! >> >> import numpy as np >> cimport numpy as np >> import cython >> >> cdef extern from "cmove_median.c": >> ? ?struct mm_node >> ? ?struct mm_list: >> ? ? ? ?np.npy_int64 len >> ? ? ? ?mm_node *head >> ? ? ? ?mm_node *tail >> ? ? ? ?mm_node *min_node >> ? ? ? ?mm_node *med_node >> ? ?void mm_init_median(mm_list *mm) >> ? ?void mm_insert_init(mm_list *mm, np.npy_float64 val) >> ? ?void mm_update(mm_list *mm, np.npy_float64 val) >> ? ?np.npy_float64 mm_get_median(mm_list *mm) >> ? ?void mm_free(mm_list *mm) >> ? ?np.npy_float64 mm_get_median(mm_list *mm) >> ? ?mm_list mm_new(np.npy_int64 len) >> >> def move_median(np.ndarray[np.float64_t, ndim=1] a, int window): >> ? ?cdef int n = a.size, i >> ? ?if window > n: >> ? ? ? ?raise ValueError("`window` must be less than a.size.") >> ? ?if window < 2: >> ? ? ? ?raise ValueError("I get a segfault when `window` is 1.") >> ? ?cdef np.ndarray[np.float64_t, ndim=1] y = np.empty(n) >> ? ?cdef mm_list mm = mm_new(window) >> ? ?for i in range(window): >> ? ? ? 
?mm_insert_init(cython.address(mm), a[i]) >> ? ? ? ?y[i] = np.nan >> ? ?mm_init_median(cython.address(mm)) >> ? ?y[window-1] = mm_get_median(cython.address(mm)) >> ? ?for i in range(window, n): >> ? ? ? ?mm_update(cython.address(mm), a[i]) >> ? ? ? ?y[i] = mm_get_median(cython.address(mm)) >> ? ?mm_free(cython.address(mm)) >> ? ?return y >> >> [1] http://home.tiac.net/~cri_a/source_code/index.html#winmedian_pkg >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > assuming a compatible license do you want to make a little github repo > like you mentioned? I have been interested in this problem for a while > you can see that the skiplist method is very much O(N log W). a lot of python overhead though, obviously In [16]: timeit rolling_median(arr, 10) 1 loops, best of 3: 2.95 s per loop In [17]: timeit rolling_median(arr, 100) 1 loops, best of 3: 4.06 s per loop In [18]: timeit rolling_median(arr, 1000) 1 loops, best of 3: 5.56 s per loop In [19]: timeit rolling_median(arr, 10000) 1 loops, best of 3: 7.78 s per loop probably the most optimal solution would be to have 2 algorithms: one for small window sizes (where linear updates are OK) and then switch over when your big-O constants "cross over" for large enough window size From josef.pktd at gmail.com Sun May 1 15:21:13 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 1 May 2011 15:21:13 -0400 Subject: [SciPy-User] Moving median code In-Reply-To: References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: On Sun, May 1, 2011 at 3:08 PM, Keith Goodman wrote: > On Mon, Apr 25, 2011 at 8:31 AM, J. David Lee wrote: >> In working on a project recently, I wrote a moving median code that is >> about 10x faster than scipy.ndimage.median_filter. It utilizes a linked >> list structure to store values and track the median value. If anyone is >> interested, I've posted the code here (about 150 lines): >> >> http://pages.cs.wisc.edu/~johnl/median_code/median_code.c > > I wrapped your moving window median code in cython. Here is how the > timing scales with window size: > >>> a = np.random.rand(1e6) >>> timeit move_median(a, window=10) > 10 loops, best of 3: 44.4 ms per loop >>> timeit move_median(a, window=100) > 10 loops, best of 3: 179 ms per loop >>> timeit move_median(a, window=1000) > 1 loops, best of 3: 1.96 s per loop >>> timeit move_median(a, window=10000) > 1 loops, best of 3: 48.3 s per loop > > It would be interesting to compare the timings with a double heap > algorithm [1] and with a C version of Wes's skiplist. I'll try to wrap > the double heap next when I get a chance. BTW, what license are you > using for your code? > > Here's the code I used. It's the first wrap I've done so suggestions > welcomed. Now that I sort of know how, I'm going to wrap everything I > see! Is this an invitation to everyone to show you a function written in C? Josef :) From kwgoodman at gmail.com Sun May 1 15:48:46 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 1 May 2011 12:48:46 -0700 Subject: [SciPy-User] Moving median code In-Reply-To: References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: On Sun, May 1, 2011 at 12:11 PM, Wes McKinney wrote: > On Sun, May 1, 2011 at 3:08 PM, Keith Goodman wrote: >> On Mon, Apr 25, 2011 at 8:31 AM, J. David Lee wrote: >>> In working on a project recently, I wrote a moving median code that is >>> about 10x faster than scipy.ndimage.median_filter. 
It utilizes a linked >>> list structure to store values and track the median value. If anyone is >>> interested, I've posted the code here (about 150 lines): >>> >>> http://pages.cs.wisc.edu/~johnl/median_code/median_code.c >> >> I wrapped your moving window median code in cython. Here is how the >> timing scales with window size: >> >>>> a = np.random.rand(1e6) >>>> timeit move_median(a, window=10) >> 10 loops, best of 3: 44.4 ms per loop >>>> timeit move_median(a, window=100) >> 10 loops, best of 3: 179 ms per loop >>>> timeit move_median(a, window=1000) >> 1 loops, best of 3: 1.96 s per loop >>>> timeit move_median(a, window=10000) >> 1 loops, best of 3: 48.3 s per loop >> >> It would be interesting to compare the timings with a double heap >> algorithm [1] and with a C version of Wes's skiplist. I'll try to wrap >> the double heap next when I get a chance. BTW, what license are you >> using for your code? >> >> Here's the code I used. It's the first wrap I've done so suggestions >> welcomed. Now that I sort of know how, I'm going to wrap everything I >> see! >> >> import numpy as np >> cimport numpy as np >> import cython >> >> cdef extern from "cmove_median.c": >> ? ?struct mm_node >> ? ?struct mm_list: >> ? ? ? ?np.npy_int64 len >> ? ? ? ?mm_node *head >> ? ? ? ?mm_node *tail >> ? ? ? ?mm_node *min_node >> ? ? ? ?mm_node *med_node >> ? ?void mm_init_median(mm_list *mm) >> ? ?void mm_insert_init(mm_list *mm, np.npy_float64 val) >> ? ?void mm_update(mm_list *mm, np.npy_float64 val) >> ? ?np.npy_float64 mm_get_median(mm_list *mm) >> ? ?void mm_free(mm_list *mm) >> ? ?np.npy_float64 mm_get_median(mm_list *mm) >> ? ?mm_list mm_new(np.npy_int64 len) >> >> def move_median(np.ndarray[np.float64_t, ndim=1] a, int window): >> ? ?cdef int n = a.size, i >> ? ?if window > n: >> ? ? ? ?raise ValueError("`window` must be less than a.size.") >> ? ?if window < 2: >> ? ? ? ?raise ValueError("I get a segfault when `window` is 1.") >> ? ?cdef np.ndarray[np.float64_t, ndim=1] y = np.empty(n) >> ? ?cdef mm_list mm = mm_new(window) >> ? ?for i in range(window): >> ? ? ? ?mm_insert_init(cython.address(mm), a[i]) >> ? ? ? ?y[i] = np.nan >> ? ?mm_init_median(cython.address(mm)) >> ? ?y[window-1] = mm_get_median(cython.address(mm)) >> ? ?for i in range(window, n): >> ? ? ? ?mm_update(cython.address(mm), a[i]) >> ? ? ? ?y[i] = mm_get_median(cython.address(mm)) >> ? ?mm_free(cython.address(mm)) >> ? ?return y >> >> [1] http://home.tiac.net/~cri_a/source_code/index.html#winmedian_pkg >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > assuming a compatible license do you want to make a little github repo > like you mentioned? I have been interested in this problem for a while I made a repo: https://github.com/kwgoodman/roly Add yourself to the license in the readme once you upload some stuff. Anyone interest in moving window medians, wrapping C code in cython, etc, join us. From johnl at cs.wisc.edu Sun May 1 19:38:39 2011 From: johnl at cs.wisc.edu (J. David Lee) Date: Sun, 01 May 2011 18:38:39 -0500 Subject: [SciPy-User] Moving median code In-Reply-To: References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: <4DBDEEFF.8010205@cs.wisc.edu> On 05/01/2011 02:17 PM, Wes McKinney wrote: > On Sun, May 1, 2011 at 3:11 PM, Wes McKinney wrote: >> On Sun, May 1, 2011 at 3:08 PM, Keith Goodman wrote: >>> On Mon, Apr 25, 2011 at 8:31 AM, J. 
David Lee wrote: >>>> In working on a project recently, I wrote a moving median code that is >>>> about 10x faster than scipy.ndimage.median_filter. It utilizes a linked >>>> list structure to store values and track the median value. If anyone is >>>> interested, I've posted the code here (about 150 lines): >>>> >>>> http://pages.cs.wisc.edu/~johnl/median_code/median_code.c >>> I wrapped your moving window median code in cython. Here is how the >>> timing scales with window size: >>> >>>>> a = np.random.rand(1e6) >>>>> timeit move_median(a, window=10) >>> 10 loops, best of 3: 44.4 ms per loop >>>>> timeit move_median(a, window=100) >>> 10 loops, best of 3: 179 ms per loop >>>>> timeit move_median(a, window=1000) >>> 1 loops, best of 3: 1.96 s per loop >>>>> timeit move_median(a, window=10000) >>> 1 loops, best of 3: 48.3 s per loop >>> >>> It would be interesting to compare the timings with a double heap >>> algorithm [1] and with a C version of Wes's skiplist. I'll try to wrap >>> the double heap next when I get a chance. BTW, what license are you >>> using for your code? >>> >>> Here's the code I used. It's the first wrap I've done so suggestions >>> welcomed. Now that I sort of know how, I'm going to wrap everything I >>> see! >>> >>> import numpy as np >>> cimport numpy as np >>> import cython >>> >>> cdef extern from "cmove_median.c": >>> struct mm_node >>> struct mm_list: >>> np.npy_int64 len >>> mm_node *head >>> mm_node *tail >>> mm_node *min_node >>> mm_node *med_node >>> void mm_init_median(mm_list *mm) >>> void mm_insert_init(mm_list *mm, np.npy_float64 val) >>> void mm_update(mm_list *mm, np.npy_float64 val) >>> np.npy_float64 mm_get_median(mm_list *mm) >>> void mm_free(mm_list *mm) >>> np.npy_float64 mm_get_median(mm_list *mm) >>> mm_list mm_new(np.npy_int64 len) >>> >>> def move_median(np.ndarray[np.float64_t, ndim=1] a, int window): >>> cdef int n = a.size, i >>> if window> n: >>> raise ValueError("`window` must be less than a.size.") >>> if window< 2: >>> raise ValueError("I get a segfault when `window` is 1.") >>> cdef np.ndarray[np.float64_t, ndim=1] y = np.empty(n) >>> cdef mm_list mm = mm_new(window) >>> for i in range(window): >>> mm_insert_init(cython.address(mm), a[i]) >>> y[i] = np.nan >>> mm_init_median(cython.address(mm)) >>> y[window-1] = mm_get_median(cython.address(mm)) >>> for i in range(window, n): >>> mm_update(cython.address(mm), a[i]) >>> y[i] = mm_get_median(cython.address(mm)) >>> mm_free(cython.address(mm)) >>> return y >>> >>> [1] http://home.tiac.net/~cri_a/source_code/index.html#winmedian_pkg >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> assuming a compatible license do you want to make a little github repo >> like you mentioned? I have been interested in this problem for a while >> > you can see that the skiplist method is very much O(N log W). 
a lot of > python overhead though, obviously > > In [16]: timeit rolling_median(arr, 10) > 1 loops, best of 3: 2.95 s per loop > > In [17]: timeit rolling_median(arr, 100) > 1 loops, best of 3: 4.06 s per loop > > In [18]: timeit rolling_median(arr, 1000) > 1 loops, best of 3: 5.56 s per loop > > In [19]: timeit rolling_median(arr, 10000) > 1 loops, best of 3: 7.78 s per loop > > probably the most optimal solution would be to have 2 algorithms: one > for small window sizes (where linear updates are OK) and then switch > over when your big-O constants "cross over" for large enough window > size > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user I don't know what the end goal of this is, but if we're looking at scipy itself, I think it would be nice to have a single function call that would select the best method based on the window size, but it would also be nice to have the option to call a specific implementation if desired, because the speed of the median will depend on both the implementation and the data in question. David From glen.shennan at gmail.com Mon May 2 04:23:24 2011 From: glen.shennan at gmail.com (Glen Shennan) Date: Mon, 2 May 2011 18:23:24 +1000 Subject: [SciPy-User] Completing source install of numpy/scipy Message-ID: Hi, This question may be slightly outside the scope of the list but I have been unable to find a solution myself and no forums have provided answers. I'm trying to install the standard python/numpy/scipy/matplotlib combination. I'm using Ubuntu 11.04 and am using python 2.7 obtained from the repos. I have numpy and scipy (and BLAS/ATLAS/LAPACK) compiled from source and installed in $HOME/local/[lib|include|bin]/ I now want to install matplotlib. I'm happy to do this from the repos, I only want the number-crunching parts from source for speed. However matplotlib depends on the packages python-numpy, libblas3gf and liblapack, so whenever I try to install python-matplotlib these packages are installed as well. I've tried "aptitude hold ..." and locking the version in synaptic but for reasons that are beyond me the packages continue to be installed no matter what I do. I realise I could just install matplotlib too but I run more than a couple of other programs that have the same dependencies so I need a general solution. How might I go about convincing Linux that the packages are installed? I assume someone else around here has come up against the same problem. Can anyone offer suggestions? Cheers, Glen Shennan -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Mon May 2 05:33:52 2011 From: david at silveregg.co.jp (David) Date: Mon, 02 May 2011 18:33:52 +0900 Subject: [SciPy-User] Completing source install of numpy/scipy In-Reply-To: References: Message-ID: <4DBE7A80.5070808@silveregg.co.jp> On 05/02/2011 05:23 PM, Glen Shennan wrote: > Hi, > > This question may be slightly outside the scope of the list but I have > been unable to find a solution myself and no forums have provided answers. > > I'm trying to install the standard python/numpy/scipy/matplotlib > combination. I'm using Ubuntu 11.04 and am using python 2.7 obtained > from the repos. I have numpy and scipy (and BLAS/ATLAS/LAPACK) compiled > from source and installed in $HOME/local/[lib|include|bin]/ > > I now want to install matplotlib. I'm happy to do this from the repos, > I only want the number-crunching parts from source for speed. 
However > matplotlib depends on the packages python-numpy, libblas3gf and > liblapack, so whenever I try to install python-matplotlib these packages > are installed as well. I've tried "aptitude hold ..." and locking the > version in synaptic but for reasons that are beyond me the packages > continue to be installed no matter what I do. > > I realise I could just install matplotlib too but I run more than a > couple of other programs that have the same dependencies so I need a > general solution. > > How might I go about convincing Linux that the packages are installed? This is impossible. If you want to use ubuntu packages which depend on numpy, you have to install numpy from ubuntu. What you *can* do is to build your own numpy/scipy and make sure it still works with the ubuntu matplotlib package (i.e. your custom numpy will be used). Still not so easy, but doable. But if you only care about "speed", then using the ubuntu packages should work relatively well, unless you want to build a multithreaded blas/lapack by yourself, which is not for the faint of the heart. cheers, David From pav at iki.fi Mon May 2 06:10:08 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 2 May 2011 10:10:08 +0000 (UTC) Subject: [SciPy-User] Completing source install of numpy/scipy References: <4DBE7A80.5070808@silveregg.co.jp> Message-ID: Mon, 02 May 2011 18:33:52 +0900, David wrote: [clip] >> How might I go about convincing Linux that the packages are installed? > > This is impossible. Not really -- you can use the `equivs` tool to create dummy Debian packages that fake real packages, http://blog.andrewbeacock.com/2005/09/creating-dummy-debian-package-for.html It's fiddly, though, so don't do it if you do not exactly understand what you are doing. (And as it's off-topic for this list, debian or ubuntu lists would be a better place to ask...) -- Pauli Virtanen From pav at iki.fi Mon May 2 06:11:57 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 2 May 2011 10:11:57 +0000 (UTC) Subject: [SciPy-User] Completing source install of numpy/scipy References: <4DBE7A80.5070808@silveregg.co.jp> Message-ID: Mon, 02 May 2011 10:10:08 +0000, Pauli Virtanen wrote: > Mon, 02 May 2011 18:33:52 +0900, David wrote: [clip] >>> How might I go about convincing Linux that the packages are installed? >> >> This is impossible. > > Not really -- you can use the `equivs` tool to create dummy Debian > packages that fake real packages, > > http://blog.andrewbeacock.com/2005/09/creating-dummy-debian-package-for.html > > It's fiddly, though, so don't do it if you do not exactly understand > what you are doing. (And as it's off-topic for this list, debian or > ubuntu lists would be a better place to ask...) And also, it's the wrong thing to do. Your local installation of Numpy will override the system one (at least if you installed it via setupegg.py) From tloramus at gmail.com Mon May 2 06:31:27 2011 From: tloramus at gmail.com (Miha Marolt) Date: Mon, 2 May 2011 10:31:27 +0000 (UTC) Subject: [SciPy-User] Completing source install of numpy/scipy References: Message-ID: I was trying to do something similar, not because of speed though, but because packages in Debian stable are outdated. Here is what i found out: You could make .deb packages of your customly build scipy and numpy. It is very easy to do so with the checkinstall program (i use it on Debian, but i guess it should work on Ubuntu too). 
All you have to do is install scipy/numpy with the following commands:

su root
checkinstall python setup.py install --prefix=$HOME/local

This will automatically make a .deb package and install it. But be aware that this can break packages from the repository that depend on numpy and scipy. I personally built and installed numpy 1.5.1 like this, but had to uninstall it, because some other packages that I installed from the repository (mayavi2, glade, ...) depended on numpy < 1.5! So I think it is generally a bad idea to mess with the package management system like this.

What is the alternative?

1. When writing scripts, you could change sys.path at the beginning of your script. That way only the script would use the numpy and scipy in $HOME/local, while all other programs would use numpy and scipy from the repositories (in /usr/lib/...).

2. For interactive work with ipython, for example, you have to go to the directory containing your scipy/numpy and launch it from there. When you import scipy, your custom version will be loaded. This doesn't work with "ipython -pylab" unfortunately.

I only recently started building my own versions, so my solutions are probably not that good. Any advice is very welcome.

From glen.shennan at gmail.com Mon May 2 06:36:14 2011
From: glen.shennan at gmail.com (Glen Shennan)
Date: Mon, 2 May 2011 20:36:14 +1000
Subject: [SciPy-User] Completing source install of numpy/scipy
In-Reply-To:
References: <4DBE7A80.5070808@silveregg.co.jp>
Message-ID:

"And also, it's the wrong thing to do. Your local installation of Numpy will override the system one (at least if you installed it via setupegg.py)"

-- Perfect! I actually just discovered this too. Is this because python searches for modules at locations in the order contained in sys.path? I didn't install using setupegg.py but I've set PYTHONPATH in /etc/environment and it's coming up first in the sys.path list, and numpy.__path__ points to my install.

Thanks to everyone for the answers, you've saved me a lot of head-banging, very much appreciated.

Glen

On 2 May 2011 20:11, Pauli Virtanen wrote:
> Mon, 02 May 2011 10:10:08 +0000, Pauli Virtanen wrote:
>
> > Mon, 02 May 2011 18:33:52 +0900, David wrote: [clip]
> >>> How might I go about convincing Linux that the packages are installed?
> >>
> >> This is impossible.
> >
> > Not really -- you can use the `equivs` tool to create dummy Debian
> > packages that fake real packages,
> >
> > http://blog.andrewbeacock.com/2005/09/creating-dummy-debian-package-for.html
> >
> > It's fiddly, though, so don't do it if you do not exactly understand
> > what you are doing. (And as it's off-topic for this list, debian or
> > ubuntu lists would be a better place to ask...)
>
> And also, it's the wrong thing to do. Your local installation of Numpy
> will override the system one (at least if you installed it via setupegg.py)
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From vmatos at dei.uminho.pt Mon May 2 09:08:01 2011
From: vmatos at dei.uminho.pt (Vitor Matos)
Date: Mon, 02 May 2011 14:08:01 +0100
Subject: [SciPy-User] multiple ODE problem with scipy.integrate.ode.ode
Message-ID: <1304341681.14176.7.camel@a1520>

Hi,

I've been trying to perform the integration of multiple ODEs.

The problem is that only the first ODE being integrated produces the correct solution. The subsequent ODE solutions come out as gibberish.
To demonstrate the problem, I've written a simple code that solves two RC circuit ODEs: ---------------------------------------------------------------------- from scipy.integrate import * def f(t, y, arg1): # dV/dt = -1/RC (V - Vi) return -1/(arg1[0]*arg1[1])*(y - arg1[2]) y0 = 0 t0 = 0 # 0- R parameter # 1- C parameter # 2- Vin step arg = [100, 0.01, 5] arg2 = [200, 0.005, 5] r = ode(f).set_integrator('vode') r.set_initial_value(y0,t0) r.set_f_params(arg) r2 = ode(f).set_integrator('vode') r2.set_initial_value(y0,t0) r2.set_f_params(arg2) t_final = 10 dt = 0.1 t = [t0] v1 = [y0] v2 = [y0] while r.successful and r.t < t_final \ and r2.successful and r2.t < t_final: r.integrate(r.t + dt) r2.integrate(r2.t + dt) #r2.integrate(r2.t + dt) #r.integrate(r.t + dt) print "1: ", r.t, r.y print "2: ", r2.t, r2.y t.append(r.t) v1.append(r.y) v2.append(r2.y) ## plot the simulation from matplotlib import * from pylab import * figure(1) #Using matplotlib plot(t,v1,t,v2) title('RC circuit') xlabel('Time') show() ---------------------------------------------------------------------- What could be the problem? Also, I'm new to Python and I'm still trying to grasp how to best use it. Thanks in advance, Vitor Matos From tsyu80 at gmail.com Mon May 2 09:47:06 2011 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 2 May 2011 09:47:06 -0400 Subject: [SciPy-User] multiple ODE problem with scipy.integrate.ode.ode In-Reply-To: <1304341681.14176.7.camel@a1520> References: <1304341681.14176.7.camel@a1520> Message-ID: On Mon, May 2, 2011 at 9:08 AM, Vitor Matos wrote: > Hi, > I've been trying to perform the integration of multiple ODEs. > > The problem is that only the first ODE being integrated produces the > correct solution. The subsequent ODE solution come out as gibberish. > > To demonstrate the problem, I've written a simple code that solves two > RC circuit ODEs: > > ---------------------------------------------------------------------- > > from scipy.integrate import * > > def f(t, y, arg1): > # dV/dt = -1/RC (V - Vi) > return -1/(arg1[0]*arg1[1])*(y - arg1[2]) > > y0 = 0 > t0 = 0 > > # 0- R parameter > # 1- C parameter > # 2- Vin step > arg = [100, 0.01, 5] > arg2 = [200, 0.005, 5] > > r = ode(f).set_integrator('vode') > r.set_initial_value(y0,t0) > r.set_f_params(arg) > > r2 = ode(f).set_integrator('vode') > r2.set_initial_value(y0,t0) > r2.set_f_params(arg2) > > t_final = 10 > dt = 0.1 > > t = [t0] > v1 = [y0] > v2 = [y0] > > while r.successful and r.t < t_final \ > and r2.successful and r2.t < t_final: > > r.integrate(r.t + dt) > r2.integrate(r2.t + dt) > > #r2.integrate(r2.t + dt) > #r.integrate(r.t + dt) > > print "1: ", r.t, r.y > print "2: ", r2.t, r2.y > > t.append(r.t) > v1.append(r.y) > v2.append(r2.y) > > ## plot the simulation > from matplotlib import * > from pylab import * > figure(1) #Using matplotlib > plot(t,v1,t,v2) > title('RC circuit') > xlabel('Time') > show() > > ---------------------------------------------------------------------- > > What could be the problem? > > Also, I'm new to Python and I'm still trying to grasp how to best use > it. > > Thanks in advance, > Vitor Matos > I'm not sure why this is happening, but if you separate the integration loops (one for r followed by one for r2), the problem goes away. 
In other words while r.successful and r.t < t_final: r.integrate(r.t + dt) print "1: ", r.t, r.y t.append(r.t) v1.append(r.y) while r2.successful and r2.t < t_final: r2.integrate(r2.t + dt) print "2: ", r2.t, r2.y t2.append(r2.t) v2.append(r2.y) There's some weird sharing between the two calls to integrate (e.g. with a single loop, moving r2.integrate before r.integrate results in r2 integrating smoothly, but r not). I don't know what's going on under the hood of ode.integrate, so that's all the help I can give. Best, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Mon May 2 14:10:04 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Mon, 02 May 2011 20:10:04 +0200 Subject: [SciPy-User] Completing source install of numpy/scipy In-Reply-To: References: Message-ID: <1304359804.2607.26.camel@newpride> On Mon, 2011-05-02 at 10:31 +0000, Miha Marolt wrote: > I was trying to do something similar, not because of speed though, but because > packages in Debian stable are outdated. Here is what i found out: > > su root > checkinstall python setup.py install --prefix=$HOME/local Yes, a very bad way of doing it... > So i think it is generally a bad idea to mess with package management system > like this. What is the alternative? Use virtualenv! (this also answers the question of the OP) -- Sincerely yours, Yury V. Zaytsev From pav at iki.fi Mon May 2 16:08:08 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 2 May 2011 20:08:08 +0000 (UTC) Subject: [SciPy-User] multiple ODE problem with scipy.integrate.ode.ode References: <1304341681.14176.7.camel@a1520> Message-ID: On Mon, 02 May 2011 09:47:06 -0400, Tony Yu wrote: [clip] > There's some weird sharing between the two calls to integrate (e.g. with > a single loop, moving r2.integrate before r.integrate results in r2 > integrating smoothly, but r not). I don't know what's going on under the > hood of ode.integrate, so that's all the help I can give. The underlying Fortran code uses SAVE statements, and is therefore not reentrant. The author of the Python wrappers apparently forgot about that. Damnation. So in short, it just doesn't work; the underlying integration engine does not support this kind of usage. Short of rewriting considerable parts of ancient Fortran 77 code, the only thing the wrappers should do is to raise errors. The ODE integration stuff in Scipy has been waiting for a proper cleanup anyway for a long time. -- Pauli Virtanen From kwgoodman at gmail.com Mon May 2 16:32:51 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 2 May 2011 13:32:51 -0700 Subject: [SciPy-User] Moving median code In-Reply-To: References: <4DB593D5.4010506@cs.wisc.edu> Message-ID: On Mon, Apr 25, 2011 at 9:01 AM, Keith Goodman wrote: > On Mon, Apr 25, 2011 at 8:31 AM, J. David Lee wrote: >> In working on a project recently, I wrote a moving median code that is >> about 10x faster than scipy.ndimage.median_filter. It utilizes a linked >> list structure to store values and track the median value. If anyone is >> interested, I've posted the code here (about 150 lines): >> >> http://pages.cs.wisc.edu/~johnl/median_code/median_code.c > > Does anyone have a feel for how the speed of a linked list approach > would compare to a double heap? 
Here's the speed comparison between using a link list and a double heap for a moving window median: >> a = np.random.rand(1e5) >> >> window = 11 >> timeit roly.slow.move_median(a, window) 1 loops, best of 3: 2.57 s per loop >> timeit roly.linkedlist.move_median(a, window) 100 loops, best of 3: 4.57 ms per loop >> timeit roly.doubleheap.move_median(a, window) 100 loops, best of 3: 4.87 ms per loop >> >> window = 101 >> timeit roly.linkedlist.move_median(a, window) 10 loops, best of 3: 19.4 ms per loop >> timeit roly.doubleheap.move_median(a, window) 100 loops, best of 3: 6.55 ms per loop >> >> window = 1001 >> timeit roly.linkedlist.move_median(a, window) 1 loops, best of 3: 206 ms per loop >> timeit roly.doubleheap.move_median(a, window) 100 loops, best of 3: 7.76 ms per loop >> >> window = 10001 >> timeit roly.linkedlist.move_median(a, window) 1 loops, best of 3: 4.56 s per loop >> timeit roly.doubleheap.move_median(a, window) 100 loops, best of 3: 10.2 ms per loop The double heap is much faster except at small window widths. And even then it is not far behind. The code can be found here: https://github.com/kwgoodman/roly and discussed here: http://groups.google.com/group/bottle-neck From tmp50 at ukr.net Tue May 3 05:05:09 2011 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 03 May 2011 12:05:09 +0300 Subject: [SciPy-User] questions about splines Message-ID: hi all, 1) in the page http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.BivariateSpline.html I see the entry SmoothUnivariateSpline to create a BivariateSpline through the given points Isn't it a bug? 2) Are there any tools for N-dimensional splines with derivatives in scipy? I thought it's quite simple to be implemented and rather essential, but I noticed only Rbf, and derivatives are absent there. Do you know a soft with Python API and N-dimensional splines and related derivatives? Or maybe anyone has any plans of implementing something like that in future? D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anand.prabhakar.patil at gmail.com Tue May 3 08:51:59 2011 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 3 May 2011 05:51:59 -0700 (PDT) Subject: [SciPy-User] Scikits.sparse build issue Message-ID: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Hi all, I was excited to notice scikits.sparse this morning. I've been waiting for something like that to appear for a while now! I'm trying to install the HG head on a Mac, with EPD 7.0-2 and Numpy 1.5.1. I've installed SuiteSparse from MacPorts. 
First, the following line from cholmod.pyx doesn't work with MacPorts' SuiteSparse: cdef extern from "suitesparse/cholmod.h": It needs to be cdef extern from "ufsparse/cholmod.h": Having done that, I get a sequence of errors like the following: python setup.py build running build running build_py running build_ext skipping 'scikits/sparse/cholmod.c' Cython extension (up-to-date) building 'scikits.sparse.cholmod' extension gcc -DNDEBUG -g -O3 -arch x86_64 -isysroot /Developer/SDKs/ MacOSX10.5.sdk -I/Library/Frameworks/EPD64.framework/Versions/7.0/ include -I/opt/local/include -I/Library/Frameworks/EPD64.framework/ Versions/7.0/lib/python2.7/site-packages/numpy/core/include -I/Library/ Frameworks/EPD64.framework/Versions/7.0/include/python2.7 -c scikits/ sparse/cholmod.c -o build/temp.macosx-10.6-x86_64-2.7/scikits/sparse/ cholmod.o scikits/sparse/cholmod.c: In function ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? isn?t known I've never used Cython and am having a hard time figuring this out. Thanks in advance for any help, Anand From tloramus at gmail.com Tue May 3 11:17:54 2011 From: tloramus at gmail.com (Miha Marolt) Date: Tue, 3 May 2011 15:17:54 +0000 (UTC) Subject: [SciPy-User] Completing source install of numpy/scipy References: <1304359804.2607.26.camel@newpride> Message-ID: Thanks, i will check out virtualenv! From kwgoodman at gmail.com Tue May 3 12:38:00 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 3 May 2011 09:38:00 -0700 Subject: [SciPy-User] get best few of many: argsort( few= ) using std::partial_sort ? In-Reply-To: References: Message-ID: On Thu, Jul 8, 2010 at 10:17 AM, denis wrote: > Folks, > ?to get the best few of a large number of objects, > e.g. vectors near a given one, or small distances in > spatial.distance.cdist or .pdist, > argsort( bigArray )[: a few ] is not so hot. ?It would be nice if > ? ?argsort( bigArray, few= ) > did this -- faster, save mem too. Would anyone else find this useful ? > > I recently stumbled across partial_sort in stl; fwiw, > std:partial_sort( A, A + sqrt(N), A + N ) is ~ 10 times faster than > std:sort > on my old mac ppc, even for N 100. > Also fwiw, nth_element alone is ~ twice as slow as partial_sort -- > odd. I recently added partial sorting to bottleneck (0.5.0dev): partsort() and argpartsort(). Make an array: >> import bottleneck as bn >> a = np.random.rand(1000000) Timing: >> timeit np.sort(a) 10 loops, best of 3: 114 ms per loop >> timeit bn.partsort(a, n=1000) 100 loops, best of 3: 3.95 ms per loop >> timeit np.argsort(a) 10 loops, best of 3: 161 ms per loop >> timeit bn.argpartsort(a, n=1000) 100 loops, best of 3: 11.4 ms per loop The speed up is more like a factor 2 to 5 for 2d input and n = a.shape[axis] / 2. You mentioned memory usage. I can reduce that from my current implementation by one copy of the input array if it is a problem. bn.partsort doc string: partsort(arr, n, axis=-1) Partial sorting of array elements along given axis. A partially sorted array is one in which the `n` smallest values appear (in any order) in the first `n` elements. The remaining largest elements are also unordered. Due to the algorithm used (Wirth's method), the nth smallest element is in its sorted position (at index `n-1`). Shuffling the input array may change the output. The only guarantee is that the first `n` elements will be the `n` smallest and the remaining element will appear in the remainder of the output. This functions is not protected against NaN. 
Therefore, you may get unexpected results if the input contains NaN. Parameters ---------- arr : array_like Input array. If `arr` is not an array, a conversion is attempted. n : int The `n` smallest elements will appear (unordered) in the first `n` elements of the output array. axis : {int, None}, optional Axis along which the partial sort is performed. The default (axis=-1) is to sort along the last axis. Returns ------- y : ndarray A partially sorted copy of the input array where the `n` smallest elements will appear (unordered) in the first `n` elements. See Also -------- bottleneck.argpartsort: Indices that would partially sort an array Notes ----- Unexpected results may occur if the input array contains NaN. Examples -------- Create a numpy array: >>> a = np.array([1, 0, 3, 4, 2]) Partially sort array so that the first 3 elements are the smallest 3 elements (note, as in this example, that the smallest 3 elements may not be sorted): >>> bn.partsort(a, n=3) array([1, 0, 2, 4, 3]) From njs at pobox.com Tue May 3 13:10:49 2011 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 3 May 2011 10:10:49 -0700 Subject: [SciPy-User] Scikits.sparse build issue In-Reply-To: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> References: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Message-ID: On Tue, May 3, 2011 at 5:51 AM, Anand Patil wrote: > First, the following line from cholmod.pyx doesn't work with MacPorts' > SuiteSparse: > > ? cdef extern from "suitesparse/cholmod.h": > > It needs to be > > ? cdef extern from "ufsparse/cholmod.h": It should probably be: cdef extern from "cholmod.h": and then the build system should figure out in what idiosyncratic way cholmod has been installed on the system, and set appropriate compiler flags so it can be found. I really have no idea how to do this with distutils, though -- does anyone else? > scikits/sparse/cholmod.c: In function > ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: > scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? > isn?t known > > I've never used Cython and am having a hard time figuring this out. Could you send me the file 'scikits/sparse/cholmod.c'? This means that there's some C type that was forward-declared, but never actually defined, and then we tried to instantiate an instance of it. But I'll need to see the generated code to figure out which type '__pyx_t_10' is supposed to be. -- Nathaniel From ralf.gommers at googlemail.com Tue May 3 14:18:31 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 3 May 2011 20:18:31 +0200 Subject: [SciPy-User] ANN: Numpy 1.6.0 release candidate 2 Message-ID: Hi, I am pleased to announce the availability of the second release candidate of NumPy 1.6.0. Compared to the first release candidate, one segfault on (32-bit Windows + MSVC) and several memory leaks were fixed. If no new problems are reported, the final release will be in one week. Sources and binaries can be found at http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc2/ For (preliminary) release notes see below. Enjoy, Ralf ========================= NumPy 1.6.0 Release Notes ========================= This release includes several new features as well as numerous bug fixes and improved documentation. It is backward compatible with the 1.5.0 release, and supports Python 2.4 - 2.7 and 3.1 - 3.2. Highlights ========== * Re-introduction of datetime dtype support to deal with dates in arrays. * A new 16-bit floating point type. 
* A new iterator, which improves performance of many functions.

New features
============

New 16-bit floating point type
------------------------------

This release adds support for the IEEE 754-2008 binary16 format, available as the data type ``numpy.half``. Within Python, the type behaves similarly to `float` or `double`, and C extensions can add support for it with the exposed half-float API.

New iterator
------------

A new iterator has been added, replacing the functionality of the existing iterator and multi-iterator with a single object and API. This iterator works well with general memory layouts different from C or Fortran contiguous, and handles both standard NumPy and customized broadcasting. The buffering, automatic data type conversion, and optional output parameters, offered by ufuncs but difficult to replicate elsewhere, are now exposed by this iterator.

Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial``
-------------------------------------------------------------------------

Extend the number of polynomials available in the polynomial package. In addition, a new ``window`` attribute has been added to the classes in order to specify the range the ``domain`` maps to. This is mostly useful for the Laguerre, Hermite, and HermiteE polynomials, whose natural domains are infinite, and provides a more intuitive way to get the correct mapping of values without playing unnatural tricks with the domain.

Fortran assumed shape array and size function support in ``numpy.f2py``
-----------------------------------------------------------------------

F2py now supports wrapping Fortran 90 routines that use assumed shape arrays. Before, such routines could be called from Python, but the corresponding Fortran routines received the assumed shape arrays as zero-length arrays, which caused unpredictable results. Thanks to Lorenz Hüdepohl for pointing out the correct way to interface routines with assumed shape arrays.

In addition, f2py interprets the Fortran expression ``size(array, dim)`` as ``shape(array, dim-1)``, which makes it possible to automatically wrap Fortran routines that use the two-argument ``size`` function in dimension specifications. Before, users were forced to apply this mapping manually.

Other new functions
-------------------

``numpy.ravel_multi_index`` : Converts a multi-index tuple into an array of flat indices, applying boundary modes to the indices.

``numpy.einsum`` : Evaluate the Einstein summation convention. Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way to compute such summations.

``numpy.count_nonzero`` : Counts the number of non-zero elements in an array.

``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose the underlying type promotion used by the ufuncs and other operations to determine the types of outputs. These improve upon ``numpy.common_type`` and ``numpy.mintypecode``, which provide similar functionality but do not match the ufunc implementation.

Changes
=======

Changes and improvements in the numpy core
------------------------------------------

``default error handling``
--------------------------

The default error handling has been changed from ``print`` to ``warn`` for all except ``underflow``, which remains ``ignore``.
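For example, a minimal sketch of inspecting and overriding the new defaults (the exact dictionary returned may vary between builds):

import numpy as np

print np.geterr()                  # e.g. {'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'}
old = np.seterr(divide='raise')    # opt back in to hard errors for division by zero
try:
    np.array([1.0]) / 0.0          # now raises FloatingPointError instead of emitting a warning
except FloatingPointError:
    pass
np.seterr(**old)                   # restore the previous settings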
``numpy.distutils`` ------------------- Several new compilers are supported for building Numpy: the Portland Group Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C compiler on Linux. ``numpy.testing`` ----------------- The testing framework gained ``numpy.testing.assert_allclose``, which provides a more convenient way to compare floating point arrays than `assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`. ``C API`` --------- In addition to the APIs for the new iterator and half data type, a number of other additions have been made to the C API. The type promotion mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``, ``PyArray_ResultType``, and ``PyArray_MinScalarType``. A new enumeration ``NPY_CASTING`` has been added which controls what types of casts are permitted. This is used by the new functions ``PyArray_CanCastArrayTo`` and ``PyArray_CanCastTypeTo``. A more flexible way to handle conversion of arbitrary python objects into arrays is exposed by ``PyArray_GetArrayParamsFromObject``. Deprecated features =================== The "normed" keyword in ``numpy.histogram`` is deprecated. Its functionality will be replaced by the new "density" keyword. Removed features ================ ``numpy.fft`` ------------- The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`, which were aliases for the same functions without the 'e' in the name, were removed. ``numpy.memmap`` ---------------- The `sync()` and `close()` methods of memmap were removed. Use `flush()` and "del memmap" instead. ``numpy.lib`` ------------- The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``, ``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed. ``numpy.ma`` ------------ Several deprecated items were removed from the ``numpy.ma`` module:: * ``numpy.ma.MaskedArray`` "raw_data" method * ``numpy.ma.MaskedArray`` constructor "flag" keyword * ``numpy.ma.make_mask`` "flag" keyword * ``numpy.ma.allclose`` "fill_value" keyword ``numpy.distutils`` ------------------- The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include`` instead. From anand.prabhakar.patil at gmail.com Wed May 4 05:39:55 2011 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Wed, 4 May 2011 02:39:55 -0700 (PDT) Subject: [SciPy-User] Scikits.sparse build issue In-Reply-To: References: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Message-ID: <8e3b19c3-5d5e-44f3-b651-71d3629290f2@w21g2000yqm.googlegroups.com> Hi Nathaniel, Thanks for getting back to me. I'll send you cholmod.c by email. Anand On May 3, 6:10?pm, Nathaniel Smith wrote: > On Tue, May 3, 2011 at 5:51 AM, Anand Patil > > wrote: > > First, the following line from cholmod.pyx doesn't work with MacPorts' > > SuiteSparse: > > > ? cdef extern from "suitesparse/cholmod.h": > > > It needs to be > > > ? cdef extern from "ufsparse/cholmod.h": > > It should probably be: > ? cdef extern from "cholmod.h": > and then the build system should figure out in what idiosyncratic way > cholmod has been installed on the system, and set appropriate compiler > flags so it can be found. > > I really have no idea how to do this with distutils, though -- does anyone else? > > > scikits/sparse/cholmod.c: In function > > ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: > > scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? > > isn?t known > > > I've never used Cython and am having a hard time figuring this out. 
> > Could you send me the file 'scikits/sparse/cholmod.c'? This means that > there's some C type that was forward-declared, but never actually > defined, and then we tried to instantiate an instance of it. But I'll > need to see the generated code to figure out which type '__pyx_t_10' > is supposed to be. > > -- Nathaniel > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From opossumnano at gmail.com Wed May 4 06:28:54 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Wed, 4 May 2011 12:28:54 +0200 Subject: [SciPy-User] [ANN] EuroScipy 2011 - deadline approaching Message-ID: <20110504102854.GA820@tulpenbaum.cognition.tu-berlin.de> ===================================== EuroScipy 2011 - Deadline Approaching ===================================== Beware: talk submission deadline is approaching. You can submit your contribution until Sunday May 8. --------------------------------------------- The 4th European meeting on Python in Science --------------------------------------------- **Paris, Ecole Normale Sup?rieure, August 25-28 2011** We are happy to announce the 4th EuroScipy meeting, in Paris, August 2011. The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic research and state of the art industry. Main topics =========== - Presentations of scientific tools and libraries using the Python language, including but not limited to: - vector and array manipulation - parallel computing - scientific visualization - scientific data flow and persistence - algorithms implemented or exposed in Python - web applications and portals for science and engineering. - Reports on the use of Python in scientific achievements or ongoing projects. - General-purpose Python tools that can be of special interest to the scientific community. Tutorials ========= There will be two tutorial tracks at the conference, an introductory one, to bring up to speed with the Python language as a scientific tool, and an advanced track, during which experts of the field will lecture on specific advanced topics such as advanced use of numpy, scientific visualization, software engineering... Keynote Speaker: Fernando Perez =============================== We are excited to welcome Fernando Perez (UC Berkeley, Helen Wills Neuroscience Institute, USA) as our keynote speaker. Fernando Perez is the original author of the enhanced interactive python shell IPython and a very active contributor to the Python for Science ecosystem. Important dates =============== Talk submission deadline: Sunday May 8 Program announced: Sunday May 29 Tutorials tracks: Thursday August 25 - Friday August 26 Conference track: Saturday August 27 - Sunday August 28 Call for papers =============== We are soliciting talks that discuss topics related to scientific computing using Python. These include applications, teaching, future development directions, and research. We welcome contributions from the industry as well as the academic world. Indeed, industrial research and development as well academic research face the challenge of mastering IT tools for exploration, modeling and analysis. We look forward to hearing your recent breakthroughs using Python! Submission guidelines ===================== - We solicit talk proposals in the form of a one-page long abstract. 
- Submissions whose main purpose is to promote a commercial product or service will be refused. - All accepted proposals must be presented at the EuroSciPy conference by at least one author. The one-page long abstracts are for conference planing and selection purposes only. We will later select papers for publication of post-proceedings in a peer-reviewed journal. How to submit an abstract ========================= To submit a talk to the EuroScipy conference follow the instructions here: http://www.euroscipy.org/card/euroscipy2011_call_for_papers Organizers ========== Chairs: - Ga?l Varoquaux (INSERM, Unicog team, and INRIA, Parietal team) - Nicolas Chauvat (Logilab) Local organization committee: - Emmanuelle Gouillart (Saint-Gobain Recherche) - Jean-Philippe Chauvat (Logilab) Tutorial chair: - Valentin Haenel (MKP, Technische Universit?t Berlin) Program committee: - Chair: Tiziano Zito (MKP, Technische Universit?t Berlin) - Romain Brette (ENS Paris, DEC) - Emmanuelle Gouillart (Saint-Gobain Recherche) - Eric Lebigot (Laboratoire Kastler Brossel, Universit? Pierre et Marie Curie) - Konrad Hinsen (Soleil Synchrotron, CNRS) - Hans Petter Langtangen (Simula laboratories) - Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute) - Mike M?ller (Python Academy) - Didrik Pinte (Enthought Inc) - Marc Poinot (ONERA) - Christophe Pradal (CIRAD/INRIA, Virtual Plantes team) - Andreas Schreiber (DLR) - St?fan van der Walt (University of Stellenbosch) Website ======= http://www.euroscipy.org/conference/euroscipy_2011 From laserson at mit.edu Tue May 3 20:20:25 2011 From: laserson at mit.edu (Uri Laserson) Date: Tue, 3 May 2011 20:20:25 -0400 Subject: [SciPy-User] Why does the scipy.stats documentation not include the scipy.stats.entropy function? Message-ID: It's missing from: http://docs.scipy.org/doc/scipy/reference/stats.html Uri ................................................................................... Uri Laserson Graduate Student, Biomedical Engineering Harvard-MIT Division of Health Sciences and Technology M +1 917 742 8019 laserson at mit.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed May 4 09:42:28 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 4 May 2011 09:42:28 -0400 Subject: [SciPy-User] Why does the scipy.stats documentation not include the scipy.stats.entropy function? In-Reply-To: References: Message-ID: On Tue, May 3, 2011 at 8:20 PM, Uri Laserson wrote: > It's missing from: > http://docs.scipy.org/doc/scipy/reference/stats.html > Uri Because of an oversight, there might be other things missing. I don't think we checked for completeness. I added it to http://docs.scipy.org/scipy/docs/scipy-docs/stats.rst/ but the docstring of entropy is not up to standard http://docs.scipy.org/scipy/docs/scipy.stats.distributions.entropy/#scipy-stats-entropy Thanks for reporting this, If you want to make changes to the documentation, then you can edit it in the online doc editor (after getting an account). Josef > ................................................................................... 
> Uri Laserson > Graduate Student, Biomedical Engineering > Harvard-MIT Division of Health Sciences and Technology > M +1 917 742 8019 > laserson at mit.edu > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From njs at pobox.com Wed May 4 15:16:44 2011 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 4 May 2011 12:16:44 -0700 Subject: [SciPy-User] Scikits.sparse build issue In-Reply-To: References: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Message-ID: On Tue, May 3, 2011 at 10:10 AM, Nathaniel Smith wrote: > On Tue, May 3, 2011 at 5:51 AM, Anand Patil > wrote: >> scikits/sparse/cholmod.c: In function >> ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: >> scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? >> isn?t known >> >> I've never used Cython and am having a hard time figuring this out. > > Could you send me the file 'scikits/sparse/cholmod.c'? This means that > there's some C type that was forward-declared, but never actually > defined, and then we tried to instantiate an instance of it. But I'll > need to see the generated code to figure out which type '__pyx_t_10' > is supposed to be. Huh, this appears to be some bad interaction between numpy and cython, rather than anything to do with my code. The offending variable comes from doing 'cimport numpy as np' and then referring to 'np.NPY_F_CONTIGUOUS' -- this is being translated to: enum requirements __pyx_t_10; __pyx_t_10 = NPY_F_CONTIGUOUS; and then gcc is complaining that 'enum requirements' is an undefined type. What version of Numpy and Cython do you have installed? -- Nathaniel From anand.prabhakar.patil at gmail.com Thu May 5 06:03:01 2011 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 5 May 2011 03:03:01 -0700 (PDT) Subject: [SciPy-User] Scikits.sparse build issue In-Reply-To: References: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Message-ID: On May 4, 8:16?pm, Nathaniel Smith wrote: > On Tue, May 3, 2011 at 10:10 AM, Nathaniel Smith wrote: > > On Tue, May 3, 2011 at 5:51 AM, Anand Patil > > wrote: > >> scikits/sparse/cholmod.c: In function > >> ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: > >> scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? > >> isn?t known > > >> I've never used Cython and am having a hard time figuring this out. > > > Could you send me the file 'scikits/sparse/cholmod.c'? This means that > > there's some C type that was forward-declared, but never actually > > defined, and then we tried to instantiate an instance of it. But I'll > > need to see the generated code to figure out which type '__pyx_t_10' > > is supposed to be. > > Huh, this appears to be some bad interaction between numpy and cython, > rather than anything to do with my code. The offending variable comes > from doing 'cimport numpy as np' and then referring to > 'np.NPY_F_CONTIGUOUS' -- this is being translated to: > ? enum requirements __pyx_t_10; > ? __pyx_t_10 = NPY_F_CONTIGUOUS; > and then gcc is complaining that 'enum requirements' is an undefined type. > > What version of Numpy and Cython do you have installed? Cython 0.14.1, Numpy 1.5.1. Which versions do you have? Thanks, Anand > > -- Nathaniel > _______________________________________________ > SciPy-User mailing list > SciPy-U... 
at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From ralf.gommers at googlemail.com Thu May 5 13:10:35 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 5 May 2011 19:10:35 +0200 Subject: [SciPy-User] questions about splines In-Reply-To: References: Message-ID: 2011/5/3 Dmitrey > hi all, > 1) in the page > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.BivariateSpline.html > I see the entry > > SmoothUnivariateSpline to create a BivariateSpline through the given > points > Isn't it a bug? > fixed in the wiki. > > 2) Are there any tools for N-dimensional splines with derivatives in scipy? > I thought it's quite simple to be implemented and rather essential, but I > noticed only Rbf, and derivatives are absent there. Do you know a soft with > Python API and N-dimensional splines and related derivatives? Or maybe > anyone has any plans of implementing something like that in future? > > D. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu May 5 14:42:08 2011 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 5 May 2011 11:42:08 -0700 Subject: [SciPy-User] Scikits.sparse build issue In-Reply-To: References: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Message-ID: On Thu, May 5, 2011 at 3:03 AM, Anand Patil wrote: > > On May 4, 8:16?pm, Nathaniel Smith wrote: >> On Tue, May 3, 2011 at 10:10 AM, Nathaniel Smith wrote: >> > On Tue, May 3, 2011 at 5:51 AM, Anand Patil >> > wrote: >> >> scikits/sparse/cholmod.c: In function >> >> ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: >> >> scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? >> >> isn?t known >> >> >> I've never used Cython and am having a hard time figuring this out. >> >> > Could you send me the file 'scikits/sparse/cholmod.c'? This means that >> > there's some C type that was forward-declared, but never actually >> > defined, and then we tried to instantiate an instance of it. But I'll >> > need to see the generated code to figure out which type '__pyx_t_10' >> > is supposed to be. >> >> Huh, this appears to be some bad interaction between numpy and cython, >> rather than anything to do with my code. The offending variable comes >> from doing 'cimport numpy as np' and then referring to >> 'np.NPY_F_CONTIGUOUS' -- this is being translated to: >> ? enum requirements __pyx_t_10; >> ? __pyx_t_10 = NPY_F_CONTIGUOUS; >> and then gcc is complaining that 'enum requirements' is an undefined type. >> >> What version of Numpy and Cython do you have installed? > > Cython 0.14.1, Numpy 1.5.1. Which versions do you have? It looks like with Cython 0.12.1, which is what I was using before, it happens not to generate a temporary variable in this case, but Cython 0.14.1 generates the temporary variable. I've just committed a workaround to the scikits.sparse repository: https://code.google.com/p/scikits-sparse/source/detail?r=ad106e9c2c2d55f2022a3fb8b9282003b55666fc# (I believe it works -- it does compile -- but technically I can't guarantee it since for me the tests are now failing with an "illegal instruction" error inside BLAS. But I think this must be an unrelated Ubuntu screwup. Yay software.) And I'll see about poking Cython upstream to get this fixed... 
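In Cython terms, the shape of such a workaround is to keep Cython from creating a temporary typed ``enum requirements`` in the generated C at all, for instance by routing the flag through a plain C int. A rough sketch only, under that assumption -- the actual committed change may well look different:

cimport numpy as np

# Assigning through (or casting to) a C int makes the generated C store the
# flag in a plain 'int' rather than in a temporary of the undefined type
# 'enum requirements'.
cdef int requirements = <int> np.NPY_F_CONTIGUOUS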
-- Nathaniel From klonuo at gmail.com Thu May 5 17:58:58 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Thu, 05 May 2011 23:58:58 +0200 Subject: [SciPy-User] scikits nightmares on Windows Message-ID: <20110505235856.E677.B1C76292@gmail.com> Hi, I'm trying to install some scikits on Windows PC (XP3 32b) scikits.samplerate: ~~~~~~~~~~~~~~~~~~ After running 'easy_install scikits.samplerate' or 'pip install scikits.samplerate' I get error about missing 'SRC (http://www.mega-nerd.com/SRC/) library' > libraries samplerate not found in C:\Python26\lib > libraries samplerate not found in C:\ > libraries samplerate not found in C:\Python26\libs On a 'home page' (http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/samplerate/) I read: > Sources and windows binaries can be found on Pypi. but provided link does not offer any build, just gziped sources. OK, lets try *nix building nightmares on Windows. I downloaded mega-nerd's source and tried to build on MSYS/mingw (which BTW I have as part of Octave install, as I'm completely not interested in building source packages) and to my surprise everything went smooth and I got 'libsndfile.a' library without any issue. I put it in 'C:\Python26\libs', then run those install commands again, but it just repeats same error The same situation if I try to install 'scikits.audiolab', but luckily for me 'scikits.audiolab' has Windows binaries provided. scikits.image: ~~~~~~~~~~~~~ After running 'easy_install' I get this: > Running scikits.image-0.2.2\setup.py -q bdist_egg --dist-dir c:\docume~1\sava\locals~1\temp\easy_install-ghy8pd\scikits.image-0.2.2\egg-dist-tmp-2lzyfc > Warning: Assuming default configuration (scikits/{setup_scikits,setup}.py was not found) Appending scikits configuration to > Ignoring attempt to set 'name' (from '' to 'scikits') error: > ..\locals~1\temp\easy_install-ghy8pd\scikits.image-0.2.2\scikits\image\opencv\setup.py: The process cannot access the file because it is being used by another process OK, I downloaded the source and tried to install it with 'python setup.py install' then got bunch of warnings about opencv (which is optional package), ending with this: > reading manifest template 'MANIFEST.in' > warning: no files found matching '*.h' under directory 'scikits' > Traceback (most recent call last): > File "setup.py", line 80, in > zip_safe=False, # the package can run out of an .egg file > File "C:\Python26\lib\site-packages\numpy\distutils\core.py", line 186, in setup > return old_setup(**new_attr) > File "C:\Python26\lib\distutils\core.py", line 152, in setup > dist.run_commands() > File "C:\Python26\lib\distutils\dist.py", line 975, in run_commands > self.run_command(cmd) > File "C:\Python26\lib\distutils\dist.py", line 995, in run_command > cmd_obj.run() > File "C:\Python26\lib\site-packages\numpy\distutils\command\install.py", line 57, in run > r = self.setuptools_run() > File "C:\Python26\lib\site-packages\numpy\distutils\command\install.py", line 51, in setuptools_run > self.do_egg_install() > File "C:\Python26\lib\site-packages\setuptools\command\install.py", line 96, in do_egg_install > self.run_command('bdist_egg') > File "C:\Python26\lib\distutils\cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "C:\Python26\lib\distutils\dist.py", line 995, in run_command > cmd_obj.run() > File "C:\Python26\lib\site-packages\setuptools\command\bdist_egg.py", line 167, in run > self.run_command("egg_info") > File "C:\Python26\lib\distutils\cmd.py", line 333, in run_command > 
self.distribution.run_command(command) > File "C:\Python26\lib\distutils\dist.py", line 995, in run_command > cmd_obj.run() > File "C:\Python26\lib\site-packages\numpy\distutils\command\egg_info.py", line 9, in run > _egg_info.run(self) > File "C:\Python26\lib\site-packages\setuptools\command\egg_info.py", line 177, in run > self.find_sources() > File "C:\Python26\lib\site-packages\setuptools\command\egg_info.py", line 252, in find_sources > mm.run() > File "C:\Python26\lib\site-packages\setuptools\command\egg_info.py", line 308, in run > self.read_template() > File "C:\Python26\lib\site-packages\setuptools\command\sdist.py", line 157, in read_template > _sdist.read_template(self) > File "C:\Python26\lib\distutils\command\sdist.py", line 336, in read_template > self.filelist.process_template_line(line) > File "C:\Python26\lib\distutils\filelist.py", line 129, in process_template_line > (action, patterns, dir, dir_pattern) = self._parse_template_line(line) > File "C:\Python26\lib\distutils\filelist.py", line 104, in _parse_template_line > dir = convert_path(words[1]) > File "C:\Python26\lib\distutils\util.py", line 201, in convert_path > raise ValueError, "path '%s' cannot end with '/'" % pathname > ValueError: path 'doc/source/_templates/' cannot end with '/' Then I downloaded latest code from Git, and tried again. It stacked here: > Creating library build\temp.win32-2.6\Release\scikits\image\transform\_hough_transform.lib and object build\temp.win3 > 2-2.6\Release\scikits\image\transform\_hough_transform.exp > _hough_transform.obj : error LNK2019: unresolved external symbol _round referenced in function ___pyx_pf_7scikits_5image > _9transform_16_hough_transform__hough > build\lib.win32-2.6\scikits\image\transform\_hough_transform.pyd : fatal error LNK1120: 1 unresolved externals > error: Command "c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Py > thon26\libs /LIBPATH:C:\Python26\PCbuild /EXPORT:init_hough_transform build\temp.win32-2.6\Release\scikits\image\transfo > rm\_hough_transform.obj /OUT:build\lib.win32-2.6\scikits\image\transform\_hough_transform.pyd /IMPLIB:build\temp.win32-2 > .6\Release\scikits\image\transform\_hough_transform.lib /MANIFESTFILE:build\temp.win32-2.6\Release\scikits\image\transfo > rm\_hough_transform.pyd.manifest" failed with exit status 1120 Is this hopeless to install on Windows, or it's just me? Thanks From josef.pktd at gmail.com Thu May 5 18:15:58 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 5 May 2011 18:15:58 -0400 Subject: [SciPy-User] scikits nightmares on Windows In-Reply-To: <20110505235856.E677.B1C76292@gmail.com> References: <20110505235856.E677.B1C76292@gmail.com> Message-ID: On Thu, May 5, 2011 at 5:58 PM, Klonuo Umom wrote: > > Hi, > > I'm trying to install some scikits on Windows PC (XP3 32b) > > scikits.samplerate: > ~~~~~~~~~~~~~~~~~~ > After running 'easy_install scikits.samplerate' or 'pip install scikits.samplerate' I get error about missing 'SRC (http://www.mega-nerd.com/SRC/) library' > >> libraries samplerate not found in C:\Python26\lib >> libraries samplerate not found in C:\ >> libraries samplerate not found in C:\Python26\libs > > On a 'home page' (http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/samplerate/) I read: > >> Sources and windows binaries can be found on Pypi. > > but provided link does not offer any build, just gziped sources. > > OK, lets try *nix building nightmares on Windows. 
I downloaded mega-nerd's source and tried to build on MSYS/mingw (which BTW I have as part of Octave install, as I'm completely not interested in building source packages) and to my surprise everything went smooth and I got 'libsndfile.a' library without any issue. I put it in 'C:\Python26\libs', then run those install commands again, but it just repeats same error > > The same situation if I try to install 'scikits.audiolab', but luckily for me 'scikits.audiolab' has Windows binaries provided. > > scikits.image: > ~~~~~~~~~~~~~ > After running 'easy_install' I get this: > >> Running scikits.image-0.2.2\setup.py -q bdist_egg --dist-dir c:\docume~1\sava\locals~1\temp\easy_install-ghy8pd\scikits.image-0.2.2\egg-dist-tmp-2lzyfc >> Warning: Assuming default configuration (scikits/{setup_scikits,setup}.py was not found) Appending scikits configuration to >> Ignoring attempt to set 'name' (from '' to 'scikits') error: >> ..\locals~1\temp\easy_install-ghy8pd\scikits.image-0.2.2\scikits\image\opencv\setup.py: The process cannot access the file because it is being used by another process > > OK, I downloaded the source and tried to install it with 'python setup.py install' then got bunch of warnings about opencv (which is optional package), ending with this: > >> reading manifest template 'MANIFEST.in' >> warning: no files found matching '*.h' under directory 'scikits' >> Traceback (most recent call last): >> ? File "setup.py", line 80, in >> ? ? zip_safe=False, # the package can run out of an .egg file >> ? File "C:\Python26\lib\site-packages\numpy\distutils\core.py", line 186, in setup >> ? ? return old_setup(**new_attr) >> ? File "C:\Python26\lib\distutils\core.py", line 152, in setup >> ? ? dist.run_commands() >> ? File "C:\Python26\lib\distutils\dist.py", line 975, in run_commands >> ? ? self.run_command(cmd) >> ? File "C:\Python26\lib\distutils\dist.py", line 995, in run_command >> ? ? cmd_obj.run() >> ? File "C:\Python26\lib\site-packages\numpy\distutils\command\install.py", line 57, in run >> ? ? r = self.setuptools_run() >> ? File "C:\Python26\lib\site-packages\numpy\distutils\command\install.py", line 51, in setuptools_run >> ? ? self.do_egg_install() >> ? File "C:\Python26\lib\site-packages\setuptools\command\install.py", line 96, in do_egg_install >> ? ? self.run_command('bdist_egg') >> ? File "C:\Python26\lib\distutils\cmd.py", line 333, in run_command >> ? ? self.distribution.run_command(command) >> ? File "C:\Python26\lib\distutils\dist.py", line 995, in run_command >> ? ? cmd_obj.run() >> ? File "C:\Python26\lib\site-packages\setuptools\command\bdist_egg.py", line 167, in run >> ? ? self.run_command("egg_info") >> ? File "C:\Python26\lib\distutils\cmd.py", line 333, in run_command >> ? ? self.distribution.run_command(command) >> ? File "C:\Python26\lib\distutils\dist.py", line 995, in run_command >> ? ? cmd_obj.run() >> ? File "C:\Python26\lib\site-packages\numpy\distutils\command\egg_info.py", line 9, in run >> ? ? _egg_info.run(self) >> ? File "C:\Python26\lib\site-packages\setuptools\command\egg_info.py", line 177, in run >> ? ? self.find_sources() >> ? File "C:\Python26\lib\site-packages\setuptools\command\egg_info.py", line 252, in find_sources >> ? ? mm.run() >> ? File "C:\Python26\lib\site-packages\setuptools\command\egg_info.py", line 308, in run >> ? ? self.read_template() >> ? File "C:\Python26\lib\site-packages\setuptools\command\sdist.py", line 157, in read_template >> ? ? _sdist.read_template(self) >> ? 
File "C:\Python26\lib\distutils\command\sdist.py", line 336, in read_template >> ? ? self.filelist.process_template_line(line) >> ? File "C:\Python26\lib\distutils\filelist.py", line 129, in process_template_line >> ? ? (action, patterns, dir, dir_pattern) = self._parse_template_line(line) >> ? File "C:\Python26\lib\distutils\filelist.py", line 104, in _parse_template_line >> ? ? dir = convert_path(words[1]) >> ? File "C:\Python26\lib\distutils\util.py", line 201, in convert_path >> ? ? raise ValueError, "path '%s' cannot end with '/'" % pathname >> ValueError: path 'doc/source/_templates/' cannot end with '/' > > > Then I downloaded latest code from Git, and tried again. It stacked here: > >> ? ?Creating library build\temp.win32-2.6\Release\scikits\image\transform\_hough_transform.lib and object build\temp.win3 >> 2-2.6\Release\scikits\image\transform\_hough_transform.exp >> _hough_transform.obj : error LNK2019: unresolved external symbol _round referenced in function ___pyx_pf_7scikits_5image >> _9transform_16_hough_transform__hough >> build\lib.win32-2.6\scikits\image\transform\_hough_transform.pyd : fatal error LNK1120: 1 unresolved externals >> error: Command "c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Py >> thon26\libs /LIBPATH:C:\Python26\PCbuild /EXPORT:init_hough_transform build\temp.win32-2.6\Release\scikits\image\transfo >> rm\_hough_transform.obj /OUT:build\lib.win32-2.6\scikits\image\transform\_hough_transform.pyd /IMPLIB:build\temp.win32-2 >> .6\Release\scikits\image\transform\_hough_transform.lib /MANIFESTFILE:build\temp.win32-2.6\Release\scikits\image\transfo >> rm\_hough_transform.pyd.manifest" failed with exit status 1120 > > > Is this hopeless to install on Windows, or it's just me? windows xp with MingW I managed to build scikits.samplerate a year ago but haven't tried since. I also managed some of the other scikits without much problems e.g. scikits.timeseries I never managed to build scikits.sparse scikits.image binaries are available at http://www.lfd.uci.edu/~gohlke/pythonlibs/ I never tried to build it myself. Josef > > Thanks > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From klonuo at gmail.com Thu May 5 18:46:44 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Fri, 06 May 2011 00:46:44 +0200 Subject: [SciPy-User] scikits nightmares on Windows In-Reply-To: References: <20110505235856.E677.B1C76292@gmail.com> Message-ID: <20110506004642.E67C.B1C76292@gmail.com> Thanks for the link Ironically, I've been there for Cython binary just hour ago, but did not browse the site :D It's in my bookmarks now Cheers From ondrej at certik.cz Thu May 5 19:23:40 2011 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 5 May 2011 16:23:40 -0700 Subject: [SciPy-User] should one put "." into PYTHONPATH Message-ID: Hi, is it a good practice to have the following in .bashrc: export PYTHONPATH=$PYTHONPATH:. I know that Ubuntu long time ago had the "." in PYTHONPATH by default, and then dropped it. The reason why I want it is so that I can develop in the current directory, by doing things like: python examples/a.py where 'a.py' imports something from the current directory. I googled a bit, and found that some people recommend to use "setup.py develop" instead. I don't use setup.py in my project (I use cmake to mix Fortran and Python together). 
So one option for me is to always install it, and then import it like any other package from examples/a.py. I am currently undecided, so I'd be interested in any opinions on this. Ondrej From jrennie at gmail.com Thu May 5 21:08:15 2011 From: jrennie at gmail.com (Jason Rennie) Date: Thu, 5 May 2011 21:08:15 -0400 Subject: [SciPy-User] should one put "." into PYTHONPATH In-Reply-To: References: Message-ID: On Thu, May 5, 2011 at 7:23 PM, Ondrej Certik wrote: > is it a good practice to have the following in .bashrc: > > export PYTHONPATH=$PYTHONPATH:. > My opinion is "no" since import behavior then depends on the directory from which you run your script. I know that Ubuntu long time ago had the "." in PYTHONPATH by default, > and then dropped it. The reason why I want it is so that I can develop > in the current directory, by doing things like: > > python examples/a.py > Python does automatically include the script directory in sys.path, so if you put your scripts at the base of your library hierarchy (or put your library in your bin/ directory), you won't have to worry about setting PYTHONPATH. http://pythonquirks.blogspot.com/2010/07/absolutely-relative-import.html http://docs.python.org/library/sys.html#sys.path Cheers, Jason -- Jason Rennie Research Scientist/Software Engineer Google/ITA Software 617-714-2645 -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu May 5 22:22:33 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 5 May 2011 21:22:33 -0500 Subject: [SciPy-User] should one put "." into PYTHONPATH In-Reply-To: References: Message-ID: On Thu, May 5, 2011 at 18:23, Ondrej Certik wrote: > Hi, > > is it a good practice to have the following in .bashrc: > > export PYTHONPATH=$PYTHONPATH:. I don't recommend it. You will get unexpected behavior and waste time chasing down problems that don't actually exist. > I know that Ubuntu long time ago had the "." in PYTHONPATH by default, > and then dropped it. The reason why I want it is so that I can develop > in the current directory, by doing things like: > > python examples/a.py > > where 'a.py' imports something from the current directory. I googled a > bit, and found that some people recommend to use "setup.py develop" > instead. I don't use setup.py in my project (I use cmake to mix > Fortran and Python together). So one option for me is to always > install it, and then import it like any other package from > examples/a.py. You don't need to do "python setup.py develop" every time. Nothing actually depends on there being a setup.py. Just add a .pth file into your site-packages listing the directory you want added. E.g. if you have your sympy checkout in /home/ondrej/git/sympy/ you would have a file named sympy.pth (or any other name ending in .pth) in your site-packages directory with just the following contents (without the triple quotes: """ /home/ondrej/git/sympy """ Then you can run /home/ondrej/git/sympy/examples/a.py however you like, from whichever directory you like. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From luca.giacomel at gmail.com Fri May 6 04:03:47 2011 From: luca.giacomel at gmail.com (Luca Giacomel) Date: Fri, 6 May 2011 10:03:47 +0200 Subject: [SciPy-User] Kmeans, the correct role of whitening? 
Message-ID: <961140CA-F25C-4767-9EB4-B332CA39CD4E@gmail.com> Hello, I'm developing a piece of code which uses the kmeans algorithm to cluster huge amounts of financial data. I've got a few problems with the whitening function though. How and when should I use it? Let's make it simpler, my initial idea was this: whitened_points=whiten(array([point.getDimensionalFields().values() for point in Point.objects.all()])) #This is simply a django code for retrieving values from the db. clusters, variance = kmeans(whitened_points,number_of_clusters) And now I'm happy because scipy generated some wonderful clusters for me. Next step is to take another observation and check to which cluster it belongs using vector quantization. But how should I use the whitening function to get unit-variance for the observation? I propose three approaches, which is correct? 1 #Take the observation and add it to all the points, so variance is correctly calculated. #Problem: I'm going to use A LOT of memory for every single check which doesn't seem correct. whitened_points=whiten(array([point.getDimensionalFields().values() for point in Point.objects.all()]+obs)) #add the obs to all the other values, terrible approach. code, distance = vq(whitened_points[-1],codebook) #use the -1 to get the obs normalized 2 #Take the observation and add it to all the centroids. #Problem: Has it any sense? whitened_points=whiten(array([cluster.centroid.getDimensionalFields().values() for cluster in Cluster.objects.all()]+obs)) #add the obs to all the centroids, uses way less ram as I've got only 50 clusters code, distance = vq(whitened_points[-1],codebook) #use the -1 to get the obs normalized 3 #Whiten the observation with itself #Problem: Has it any sense? (again) whitened_points=whiten(array(obs)) #fast :) code, distance = vq(whitened_points,codebook) #use the -1 to get the obs normalized Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From Pierre.RAYBAUT at CEA.FR Fri May 6 04:28:56 2011 From: Pierre.RAYBAUT at CEA.FR (Pierre.RAYBAUT at CEA.FR) Date: Fri, 6 May 2011 10:28:56 +0200 Subject: [SciPy-User] [ANN] guiqwt v2.1.3 Message-ID: Hi all, I am pleased to announce that `guiqwt` v2.1.3 has been released. Based on PyQwt (plotting widgets for PyQt4 graphical user interfaces) and on the scientific modules NumPy and SciPy, guiqwt is a Python library providing efficient 2D data-plotting features (curve/image visualization and related tools) for interactive computing and signal/image processing application development. Complete change log is now available here: http://code.google.com/p/guiqwt/wiki/ChangeLog Documentation with examples, API reference, etc. 
is available here: http://packages.python.org/guiqwt/ This version of `guiqwt` includes a demo software, Sift (for Signal and Image Filtering Tool), based on `guidata` and `guiqwt`: http://packages.python.org/guiqwt/sift.html Windows users may even download the portable version of Sift 0.2.3 to test it without having to install anything: http://code.google.com/p/guiqwt/downloads/detail?name=sift023_portable.zip When compared to the excellent module `matplotlib`, the main advantages of `guiqwt` are: * Performance: see http://packages.python.org/guiqwt/overview.html#performances * Interactivity: see for example http://packages.python.org/guiqwt/_images/plot.png * Powerful signal processing tools: see for example http://packages.python.org/guiqwt/_images/fit.png * Powerful image processing tools: * Real-time contrast adjustment: http://packages.python.org/guiqwt/_images/contrast.png * Cross sections (line/column, averaged and oblique cross sections!): http://packages.python.org/guiqwt/_images/cross_section.png * Arbitrary affine transforms on images: http://packages.python.org/guiqwt/_images/transform.png * Interactive filters: http://packages.python.org/guiqwt/_images/imagefilter.png * Geometrical shapes/Measurement tools: http://packages.python.org/guiqwt/_images/image_plot_tools.png * Perfect integration of `guidata` features for image data editing: http://packages.python.org/guiqwt/_images/simple_window.png But `guiqwt` is more than a plotting library; it also provides: * Framework for signal/image processing application development: see http://packages.python.org/guiqwt/examples.html * And many other features like making executable Windows programs easily (py2exe helpers): see http://packages.python.org/guiqwt/disthelpers.html guiqwt has been successfully tested on GNU/Linux and Windows platforms. Python package index page: http://pypi.python.org/pypi/guiqwt/ Documentation, screenshots: http://packages.python.org/guiqwt/ Downloads (source + Python(x,y) plugin): http://guiqwt.googlecode.com Cheers, Pierre --- Dr. Pierre Raybaut CEA - Commissariat ? l'Energie Atomique et aux Energies Alternatives From anand.prabhakar.patil at gmail.com Fri May 6 05:05:12 2011 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Fri, 6 May 2011 02:05:12 -0700 (PDT) Subject: [SciPy-User] Scikits.sparse build issue In-Reply-To: References: <565dfdb6-56e8-41df-b69f-eaabc02002a7@k22g2000yqh.googlegroups.com> Message-ID: <17eb4775-e05b-400a-8dee-098a5a9e2669@p6g2000vbn.googlegroups.com> Thanks, Nathaniel! I had to make a few more changes to get the tests to run on my setup, which is: - Mac - Python, Numpy, SciPy from EPD 7.0.1 - SuiteSparse from MacPorts. The patch follows. 
Cheers, Anand ----------- patch ----------- diff -r 0fd458997533 scikits/sparse/cholmod.pyx --- a/scikits/sparse/cholmod.pyx Thu May 05 11:16:24 2011 -0700 +++ b/scikits/sparse/cholmod.pyx Fri May 06 10:01:09 2011 +0100 @@ -46,7 +46,7 @@ hack.base = base return arr -cdef extern from "suitesparse/cholmod.h": +cdef extern from "ufsparse/cholmod.h": cdef enum: CHOLMOD_INT CHOLMOD_PATTERN, CHOLMOD_REAL, CHOLMOD_COMPLEX diff -r 0fd458997533 setup.py --- a/setup.py Thu May 05 11:16:24 2011 -0700 +++ b/setup.py Fri May 06 10:01:09 2011 +0100 @@ -70,11 +70,11 @@ ext_modules = [ Extension("scikits.sparse.cholmod", ["scikits/sparse/cholmod.pyx"], - libraries=["cholmod"], - include_dirs=[np.get_include()], + libraries=["cholmod","amd","camd","colamd","metis","ccolamd"], + include_dirs=[np.get_include(),'/opt/local/ include'], # If your CHOLMOD is in a funny place, you may need to # add something like this: - #library_dirs=["/opt/suitesparse/lib"], + library_dirs=["/opt/local/lib"] # And modify include_dirs above in a similar way. ), ], ------------- end of patch ---------------- On May 5, 7:42?pm, Nathaniel Smith wrote: > On Thu, May 5, 2011 at 3:03 AM, Anand Patil > > > > > > > > > > wrote: > > > On May 4, 8:16?pm, Nathaniel Smith wrote: > >> On Tue, May 3, 2011 at 10:10 AM, Nathaniel Smith wrote: > >> > On Tue, May 3, 2011 at 5:51 AM, Anand Patil > >> > wrote: > >> >> scikits/sparse/cholmod.c: In function > >> >> ?__pyx_f_7scikits_6sparse_7cholmod__py_sparse?: > >> >> scikits/sparse/cholmod.c:1713: error: storage size of ?__pyx_t_10? > >> >> isn?t known > > >> >> I've never used Cython and am having a hard time figuring this out. > > >> > Could you send me the file 'scikits/sparse/cholmod.c'? This means that > >> > there's some C type that was forward-declared, but never actually > >> > defined, and then we tried to instantiate an instance of it. But I'll > >> > need to see the generated code to figure out which type '__pyx_t_10' > >> > is supposed to be. > > >> Huh, this appears to be some bad interaction between numpy and cython, > >> rather than anything to do with my code. The offending variable comes > >> from doing 'cimport numpy as np' and then referring to > >> 'np.NPY_F_CONTIGUOUS' -- this is being translated to: > >> ? enum requirements __pyx_t_10; > >> ? __pyx_t_10 = NPY_F_CONTIGUOUS; > >> and then gcc is complaining that 'enum requirements' is an undefined type. > > >> What version of Numpy and Cython do you have installed? > > > Cython 0.14.1, Numpy 1.5.1. Which versions do you have? > > It looks like with Cython 0.12.1, which is what I was using before, it > happens not to generate a temporary variable in this case, but Cython > 0.14.1 generates the temporary variable. > > I've just committed a workaround to the scikits.sparse repository: > ?https://code.google.com/p/scikits-sparse/source/detail?r=ad106e9c2c2d... > (I believe it works -- it does compile -- but technically I can't > guarantee it since for me the tests are now failing with an "illegal > instruction" error inside BLAS. But I think this must be an unrelated > Ubuntu screwup. Yay software.) > > And I'll see about poking Cython upstream to get this fixed... > > -- Nathaniel > _______________________________________________ > SciPy-User mailing list > SciPy-U... 
at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From JRadinger at gmx.at Fri May 6 05:46:37 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Fri, 06 May 2011 11:46:37 +0200 Subject: [SciPy-User] matplotlib: print to postscript file and some other questions Message-ID: <20110506094637.180850@gmx.net> Hello I don't know if there is a special matplotlib-user list but anyway: I managed to draw a very simple plot of a probability density function (actually two superimposed pdfs). Now I'd like to draw vertical lines fat the points of the scale-parameters (only under the curve of the function) and label them. How can I do that? And what is the command to print the result into a postscript-file? Here is what I've got so far: import matplotlib.pyplot as plt import numpy as np from scipy import stats p=0.3 m=0 x = np.arange(-100, 100, 0.2) def pdf(x,s1,s2): return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) * stats.norm.pdf(x, loc=m, scale=s2) #plt.axis([-100, 100, 0, 0.03])#probably not necessary plt.plot(x, pdf(x,12,100), color='red') plt.title('Pdf of dispersal kernel') plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') plt.ylabel('probabilty of occurance') plt.xlabel('distance from starting point') plt.grid(True) plt.show() thanks /johannes -- NEU: FreePhone - kostenlos mobil telefonieren und surfen! Jetzt informieren: http://www.gmx.net/de/go/freephone From k-assem84 at hotmail.com Thu May 5 10:50:57 2011 From: k-assem84 at hotmail.com (suzana8447) Date: Thu, 5 May 2011 07:50:57 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] How to get access to Array elements Message-ID: <31551042.post@talk.nabble.com> Hello for all, I am still a beginner in the Python language. I have a problem and hope that some can help me. I my program I have an array called for example densities= array([1,2,8,10,50)] note that the third element in the array is 8. I want to form an if statement as this: if(x==8): print x # just as an example. But how to declare x as the third elemnt of the above array. Thanks in advance. -- View this message in context: http://old.nabble.com/How-to-get-access-to-Array-elements-tp31551042p31551042.html Sent from the Scipy-User mailing list archive at Nabble.com. From kwgoodman at gmail.com Fri May 6 09:45:41 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 6 May 2011 06:45:41 -0700 Subject: [SciPy-User] [SciPy-user] How to get access to Array elements In-Reply-To: <31551042.post@talk.nabble.com> References: <31551042.post@talk.nabble.com> Message-ID: On Thu, May 5, 2011 at 7:50 AM, suzana8447 wrote: > array([1,2,8,10,50)] > note that the third element in the array is 8. ?I want to form an if > statement as this: > if(x==8): print x # just as an example. > But how to declare x as the third elemnt of the above array. Here's an example: >> import numpy as np >> a = np.array([1,2,8,10,50]) >> a[0] 1 >> a[1] 2 >> a[2] 8 >> a[3] 10 >> a[4] 50 From ptittmann at gmail.com Fri May 6 09:51:21 2011 From: ptittmann at gmail.com (Peter Tittmann) Date: Fri, 6 May 2011 06:51:21 -0700 Subject: [SciPy-User] [SciPy-user] How to get access to Array elements In-Reply-To: <31551042.post@talk.nabble.com> References: <31551042.post@talk.nabble.com> Message-ID: Hi, At=array([2,3,8,10]) x=At[2] #remember that indexing begins with 0. if x==8: print x hth Peter On May 6, 2011 6:42 AM, "suzana8447" wrote: > > Hello for all, > > I am still a beginner in the Python language. > I have a problem and hope that some can help me. 
> I my program I have an array called for example densities= > array([1,2,8,10,50)] > note that the third element in the array is 8. I want to form an if > statement as this: > if(x==8): print x # just as an example. > But how to declare x as the third elemnt of the above array. > Thanks in advance. > > -- > View this message in context: http://old.nabble.com/How-to-get-access-to-Array-elements-tp31551042p31551042.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephsmidt at gmail.com Fri May 6 11:56:24 2011 From: josephsmidt at gmail.com (Joseph Smidt) Date: Fri, 6 May 2011 08:56:24 -0700 Subject: [SciPy-User] Is there anything like polyval2? Message-ID: Hi, Matlab has a function called polyval2 that evaluates a 2D polynomial from a a least-squares fit of 2D data: http://www.mathworks.com/matlabcentral/fx_files/13719/1/content/polyfitweighted2/publishpolyfitweighted2.html Does numpy or scipy have anything like this? Can it be done with existing routines in a straight forward way? I only see poly1d for one-dimensional polynomials. Thanks. -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 2165 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-9025 From josef.pktd at gmail.com Fri May 6 12:32:14 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 6 May 2011 12:32:14 -0400 Subject: [SciPy-User] Is there anything like polyval2? In-Reply-To: References: Message-ID: On Fri, May 6, 2011 at 11:56 AM, Joseph Smidt wrote: > Hi, > > ? Matlab has a function called polyval2 that evaluates a 2D > polynomial from a a least-squares fit of 2D data: > http://www.mathworks.com/matlabcentral/fx_files/13719/1/content/polyfitweighted2/publishpolyfitweighted2.html > > ? ?Does numpy or scipy have anything like this? ?Can it be done with > existing routines in a straight forward way? ?I only see poly1d for > one-dimensional polynomials. I have written something like this several times (I often limit the cross product to a simplex) >>> nobs = 100 >>> x1 = np.random.randn(nobs) >>> x2 = np.random.randn(nobs) generate polynomial like a 2d np.vander: >>> design = (x1[:,None, None]**np.arange(3)[None,:,None] * x2[:,None, None]**np.arange(3)[None,None,:]).reshape(nobs,-1) >>> y = design.sum(1) + np.random.randn(nobs) >>> params = np.linalg.lstsq(design, y)[0] >>> predicted = np.dot(design, params) Josef > > ? Thanks. > > -- > ------------------------------------------------------------------------ > Joseph Smidt > > Physics and Astronomy > 2165 Frederick Reines Hall > Irvine, CA 92697-4575 > Office: 949-824-9025 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Fri May 6 13:06:08 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 6 May 2011 13:06:08 -0400 Subject: [SciPy-User] Is there anything like polyval2? In-Reply-To: References: Message-ID: On Fri, May 6, 2011 at 12:32 PM, wrote: > On Fri, May 6, 2011 at 11:56 AM, Joseph Smidt wrote: >> Hi, >> >> ? 
Matlab has a function called polyval2 that evaluates a 2D >> polynomial from a a least-squares fit of 2D data: >> http://www.mathworks.com/matlabcentral/fx_files/13719/1/content/polyfitweighted2/publishpolyfitweighted2.html >> >> ? ?Does numpy or scipy have anything like this? ?Can it be done with >> existing routines in a straight forward way? ?I only see poly1d for >> one-dimensional polynomials. > > I have written something like this several times (I often limit the > cross product to a simplex) > >>>> nobs = 100 >>>> x1 = np.random.randn(nobs) >>>> x2 = np.random.randn(nobs) > > generate polynomial like a 2d np.vander: > >>>> design = (x1[:,None, None]**np.arange(3)[None,:,None] * x2[:,None, None]**np.arange(3)[None,None,:]).reshape(nobs,-1) > >>>> y = design.sum(1) + np.random.randn(nobs) > >>>> params = np.linalg.lstsq(design, y)[0] >>>> predicted = np.dot(design, params) or something like this >>> alli = np.column_stack(np.tril_indices(3,3)) >>> ind = alli[alli.sum(1)<3,:].T >>> x = np.random.randn(nobs,2) >>> design = (x[:,:,None] ** ind[None,:,:] ).prod(1) >>> ind array([[0, 0, 0, 1, 1, 2], [0, 1, 2, 0, 1, 0]]) Josef > > Josef > >> >> ? Thanks. >> >> -- >> ------------------------------------------------------------------------ >> Joseph Smidt >> >> Physics and Astronomy >> 2165 Frederick Reines Hall >> Irvine, CA 92697-4575 >> Office: 949-824-9025 >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From rajs2010 at gmail.com Sat May 7 06:07:16 2011 From: rajs2010 at gmail.com (Rajeev Singh) Date: Sat, 7 May 2011 15:37:16 +0530 Subject: [SciPy-User] matplotlib: print to postscript file and some other questions In-Reply-To: <20110506094637.180850@gmx.net> References: <20110506094637.180850@gmx.net> Message-ID: On Fri, May 6, 2011 at 3:16 PM, Johannes Radinger wrote: > Hello > > I don't know if there is a special matplotlib-user list but anyway: > > I managed to draw a very simple plot of a probability density function > (actually two superimposed pdfs). Now I'd like to draw vertical lines fat > the points of the scale-parameters (only under the curve of the function) > and label them. How can I do that? > > And what is the command to print the result into a postscript-file? > > Here is what I've got so far: > > import matplotlib.pyplot as plt > import numpy as np > from scipy import stats > > p=0.3 > m=0 > > x = np.arange(-100, 100, 0.2) > > def pdf(x,s1,s2): > return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) * > stats.norm.pdf(x, loc=m, scale=s2) > > #plt.axis([-100, 100, 0, 0.03])#probably not necessary > plt.plot(x, pdf(x,12,100), color='red') > plt.title('Pdf of dispersal kernel') > plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') > plt.ylabel('probabilty of occurance') > plt.xlabel('distance from starting point') > plt.grid(True) > > plt.show() > > thanks > /johannes > -- > NEU: FreePhone - kostenlos mobil telefonieren und surfen! 
> Jetzt informieren: http://www.gmx.net/de/go/freephone > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi, You can try the following hack - import matplotlib.pyplot as plt import numpy as np from scipy import stats p=0.3 m=0 x = np.arange(-100, 100, 0.2) def pdf(x,s1,s2): return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) * stats.norm.pdf(x, loc=m, scale=s2) #plt.axis([-100, 100, 0, 0.03])#probably not necessary s1, s2 = 12, 100 plt.plot(x, pdf(x,s1, s2), color='red') plt.plot(np.array([s1,s1]), np.array([plt.ylim()[0],pdf(s1,s1,s2)]), '--k') plt.plot(np.array([s2,s2]), np.array([plt.ylim()[0],pdf(s2,s1,s2)]), '--k') plt.title('Pdf of dispersal kernel') plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') plt.ylabel('probabilty of occurance') plt.xlabel('distance from starting point') #plt.grid(True) #plt.show() plt.savefig('filename.eps') # this will save the figure in an eps file in current directory I am not sure if this is what you want! Rajeev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajs2010 at gmail.com Sat May 7 06:14:29 2011 From: rajs2010 at gmail.com (Rajeev Singh) Date: Sat, 7 May 2011 15:44:29 +0530 Subject: [SciPy-User] [SciPy-user] How to get access to Array elements In-Reply-To: <31551042.post@talk.nabble.com> References: <31551042.post@talk.nabble.com> Message-ID: On Thu, May 5, 2011 at 8:20 PM, suzana8447 wrote: > > Hello for all, > > I am still a beginner in the Python language. > I have a problem and hope that some can help me. > I my program I have an array called for example densities= > array([1,2,8,10,50)] > note that the third element in the array is 8. I want to form an if > statement as this: > if(x==8): print x # just as an example. > But how to declare x as the third elemnt of the above array. > Thanks in advance. > > -- > View this message in context: > http://old.nabble.com/How-to-get-access-to-Array-elements-tp31551042p31551042.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi, Try - densities = array([1,2,8,10,50)] for i in range(5): if densities[i] == 8: print densities[i] Hope it helps. Rajeev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Sat May 7 06:19:29 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 07 May 2011 12:19:29 +0200 Subject: [SciPy-User] matplotlib: print to postscript file and some other questions In-Reply-To: References: <20110506094637.180850@gmx.net> Message-ID: On Sat, 7 May 2011 15:37:16 +0530 Rajeev Singh wrote: > On Fri, May 6, 2011 at 3:16 PM, Johannes Radinger > wrote: > >> Hello >> >> I don't know if there is a special matplotlib-user list >>but anyway: >> >> I managed to draw a very simple plot of a probability >>density function >> (actually two superimposed pdfs). Now I'd like to draw >>vertical lines fat >> the points of the scale-parameters (only under the curve >>of the function) >> and label them. How can I do that? >> >> And what is the command to print the result into a >>postscript-file? 
>> >> Here is what I've got so far: >> >> import matplotlib.pyplot as plt >> import numpy as np >> from scipy import stats >> >> p=0.3 >> m=0 >> >> x = np.arange(-100, 100, 0.2) >> >> def pdf(x,s1,s2): >> return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) >>* >> stats.norm.pdf(x, loc=m, scale=s2) >> >> #plt.axis([-100, 100, 0, 0.03])#probably not necessary >> plt.plot(x, pdf(x,12,100), color='red') >> plt.title('Pdf of dispersal kernel') >> plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') >> plt.ylabel('probabilty of occurance') >> plt.xlabel('distance from starting point') >> plt.grid(True) >> >> plt.show() >> >> thanks >> /johannes >> -- >> NEU: FreePhone - kostenlos mobil telefonieren und >>surfen! >> Jetzt informieren: http://www.gmx.net/de/go/freephone >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > Hi, > > You can try the following hack - > > import matplotlib.pyplot as plt > import numpy as np > from scipy import stats > > p=0.3 > m=0 > > x = np.arange(-100, 100, 0.2) > > def pdf(x,s1,s2): > return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) >* stats.norm.pdf(x, > loc=m, scale=s2) > > #plt.axis([-100, 100, 0, 0.03])#probably not necessary > s1, s2 = 12, 100 > plt.plot(x, pdf(x,s1, s2), color='red') > plt.plot(np.array([s1,s1]), >np.array([plt.ylim()[0],pdf(s1,s1,s2)]), '--k') > plt.plot(np.array([s2,s2]), >np.array([plt.ylim()[0],pdf(s2,s1,s2)]), '--k') > plt.title('Pdf of dispersal kernel') > plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') > plt.ylabel('probabilty of occurance') > plt.xlabel('distance from starting point') > #plt.grid(True) > > #plt.show() > plt.savefig('filename.eps') # this will save the figure >in an eps file in > current directory > > I am not sure if this is what you want! > > Rajeev See http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.savefig http://matplotlib.sourceforge.net/users/customizing.html for details. HTH Nils From klonuo at gmail.com Sat May 7 08:34:00 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Sat, 07 May 2011 14:34:00 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn Message-ID: <20110507143358.2838.B1C76292@gmail.com> Hi, forgive my ignorance, but I wanted to express my point. I'm graduate physics student and fell in love with Python, IPython, NumPy, SciPy etc. Lately I started to play around and wanted to actually produce something out of it. Looking at matplotlib examples gallery everything seemed so perfect and possible, but to be honest I spent too much time to plot just one stacked histogram the way I wanted, and then went to easier to me gnuplot to save some time and finish my plots. In the same manner, but on much higher level, I got interested, out of pure curiosity, in hand-writing recognition which is apparently possible with scikits.learn (among other older packages). AFAIK almost all scikits lack documentation beyond bare docstrings inside modules, but scikits.learn however has user guide ( http://scikit-learn.sourceforge.net/user_guide.html ) with examples inside. One of them is 'Recognizing hand-written digits' in 'Machine Learning' chapter. This example has no explanation - source code is presented along with plot output. There is no hint what you can do except to run this code and see it plot. Images and appropriate data are apparently read from some zipped csv table in unknown formatting. 
Data is probably retrieved from NIST example database, but that's it. There is no hint how to start experimenting with own examples. That's the point I wanted to express - there seem to be possibility to do various things with common sense effort and interest, but if I want to go there and try to do something then I'll whiteness steep curve unlike general Python comfortability. Maybe this package is meant to exist for experienced Python users and those working in the field, but I really can't see how I can do what I want (try to process my samples with some hand-written data) or how can common user benefit from such examples when they are far from real usage. Thanks From pav at iki.fi Sat May 7 12:10:00 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 7 May 2011 16:10:00 +0000 (UTC) Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn References: <20110507143358.2838.B1C76292@gmail.com> Message-ID: On Sat, 07 May 2011 14:34:00 +0200, Klonuo Umom wrote: [clip] > Looking at matplotlib examples gallery everything seemed so > perfect and possible, but to be honest I spent too much time to plot > just one stacked histogram the way I wanted, and then went to easier to > me gnuplot to save some time and finish my plots. For me, achieving things like that with Gnuplot would be incredibly frustrating. The gnuplot learning curve is steep :) For your feedback to be more constructive, it might be good if you also provided suggestions on how to improve the situation. What do you want to have? Did you read the existing beginner-level documentation of Matplotlib (i.e. User's Guide)? If yes, was it useful or not? What in the specific example you cite was difficult to do? [clip] > In the same manner, but on much higher level, I got interested, out of > pure curiosity, in hand-writing recognition which is apparently possible > with scikits.learn (among other older packages). AFAIK almost all > scikits lack documentation beyond bare docstrings inside modules, but > scikits.learn however has user guide ( > http://scikit-learn.sourceforge.net/user_guide.html ) with examples > inside. One of them is 'Recognizing hand-written digits' in 'Machine > Learning' chapter. This example has no explanation - source code is > presented along with plot output. The best place to start would be to read the tutorial part of the user guide http://scikit-learn.sourceforge.net/tutorial.html It seems to explain exactly the example you mention in detail. [clip] > That's the point I wanted to express - there seem to be possibility to > do various things with common sense effort and interest, but if I want > to go there and try to do something then I'll whiteness steep curve > unlike general Python comfortability. Maybe this package is meant to > exist for experienced Python users and those working in the field, but I > really can't see how I can do what I want (try to process my samples > with some hand-written data) or how can common user benefit from such > examples when they are far from real usage. That's a valid point -- examples with datasets that are not prepackaged could be more useful. However, the tutorial does show what the datasets are supposed to be. Of course, it's supposed you already know how to load data files from disk with Python, which seems a reasonable thing to presuppose in an extension package. But certainly, the documentation coverage in many packages is lacking, especially at the beginner level. However, learning to use any tool takes time. 
Good documentation can only reduce how long this takes. -- Pauli Virtanen From ralf.gommers at googlemail.com Sat May 7 16:30:43 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 7 May 2011 22:30:43 +0200 Subject: [SciPy-User] ANN: Numpy 1.6.0 release candidate 3 Message-ID: Hi, I am pleased to announce the availability of the third release candidate of NumPy 1.6.0. Compared to the second release candidate, two issues in f2py and one in loadtxt were fixed. If no new problems are reported, the final release will be in one week. Sources and binaries can be found at http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc3/ For (preliminary) release notes see below. Enjoy, Ralf ========================= NumPy 1.6.0 Release Notes ========================= This release includes several new features as well as numerous bug fixes and improved documentation. It is backward compatible with the 1.5.0 release, and supports Python 2.4 - 2.7 and 3.1 - 3.2. Highlights ========== * Re-introduction of datetime dtype support to deal with dates in arrays. * A new 16-bit floating point type. * A new iterator, which improves performance of many functions. New features ============ New 16-bit floating point type ------------------------------ This release adds support for the IEEE 754-2008 binary16 format, available as the data type ``numpy.half``. Within Python, the type behaves similarly to `float` or `double`, and C extensions can add support for it with the exposed half-float API. New iterator ------------ A new iterator has been added, replacing the functionality of the existing iterator and multi-iterator with a single object and API. This iterator works well with general memory layouts different from C or Fortran contiguous, and handles both standard NumPy and customized broadcasting. The buffering, automatic data type conversion, and optional output parameters, offered by ufuncs but difficult to replicate elsewhere, are now exposed by this iterator. Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial`` ------------------------------------------------------------------------- Extend the number of polynomials available in the polynomial package. In addition, a new ``window`` attribute has been added to the classes in order to specify the range the ``domain`` maps to. This is mostly useful for the Laguerre, Hermite, and HermiteE polynomials whose natural domains are infinite and provides a more intuitive way to get the correct mapping of values without playing unnatural tricks with the domain. Fortran assumed shape array and size function support in ``numpy.f2py`` ----------------------------------------------------------------------- F2py now supports wrapping Fortran 90 routines that use assumed shape arrays. Before such routines could be called from Python but the corresponding Fortran routines received assumed shape arrays as zero length arrays which caused unpredicted results. Thanks to Lorenz H?depohl for pointing out the correct way to interface routines with assumed shape arrays. In addition, f2py supports now automatic wrapping of Fortran routines that use two argument ``size`` function in dimension specifications. Other new functions ------------------- ``numpy.ravel_multi_index`` : Converts a multi-index tuple into an array of flat indices, applying boundary modes to the indices. ``numpy.einsum`` : Evaluate the Einstein summation convention. 
Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way compute such summations. ``numpy.count_nonzero`` : Counts the number of non-zero elements in an array. ``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose the underlying type promotion used by the ufuncs and other operations to determine the types of outputs. These improve upon the ``numpy.common_type`` and ``numpy.mintypecode`` which provide similar functionality but do not match the ufunc implementation. Changes ======= ``default error handling`` -------------------------- The default error handling has been change from ``print`` to ``warn`` for all except for ``underflow``, which remains as ``ignore``. ``numpy.distutils`` ------------------- Several new compilers are supported for building Numpy: the Portland Group Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C compiler on Linux. ``numpy.testing`` ----------------- The testing framework gained ``numpy.testing.assert_allclose``, which provides a more convenient way to compare floating point arrays than `assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`. ``C API`` --------- In addition to the APIs for the new iterator and half data type, a number of other additions have been made to the C API. The type promotion mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``, ``PyArray_ResultType``, and ``PyArray_MinScalarType``. A new enumeration ``NPY_CASTING`` has been added which controls what types of casts are permitted. This is used by the new functions ``PyArray_CanCastArrayTo`` and ``PyArray_CanCastTypeTo``. A more flexible way to handle conversion of arbitrary python objects into arrays is exposed by ``PyArray_GetArrayParamsFromObject``. Deprecated features =================== The "normed" keyword in ``numpy.histogram`` is deprecated. Its functionality will be replaced by the new "density" keyword. Removed features ================ ``numpy.fft`` ------------- The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`, which were aliases for the same functions without the 'e' in the name, were removed. ``numpy.memmap`` ---------------- The `sync()` and `close()` methods of memmap were removed. Use `flush()` and "del memmap" instead. ``numpy.lib`` ------------- The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``, ``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed. ``numpy.ma`` ------------ Several deprecated items were removed from the ``numpy.ma`` module:: * ``numpy.ma.MaskedArray`` "raw_data" method * ``numpy.ma.MaskedArray`` constructor "flag" keyword * ``numpy.ma.make_mask`` "flag" keyword * ``numpy.ma.allclose`` "fill_value" keyword ``numpy.distutils`` ------------------- The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include`` instead. 
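As a quick, unofficial illustration of two of the additions mentioned above (this snippet is not part of the release notes, just a sketch):

import numpy as np
from numpy.testing import assert_allclose

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

# a plain matrix product written with the new einsum function
c = np.einsum('ij,jk->ik', a, b)

# and the new, more convenient floating point comparison helper
assert_allclose(c, np.dot(a, b), rtol=1e-12)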
Checksums ========= 295c456e5008873a14a56c11bce7cb60 release/installers/numpy-1.6.0rc3-py2.7-python.org-macosx10.6.dmg faaa5e8ec8992263ad444d809a511f07 release/installers/numpy-1.6.0rc3-win32-superpack-python2.5.exe 3a27307c4a5dcd56f8c8a1ae1d858f93 release/installers/numpy-1.6.0rc3-win32-superpack-python2.6.exe 6bf42735d54f7043a0d0b5a9652d3dbb release/installers/numpy-1.6.0rc3-win32-superpack-python2.7.exe 450e39f4666805e53fd750d1d0d07243 release/installers/numpy-1.6.0rc3-win32-superpack-python3.1.exe 500214e83b4823f90569ccf2f947ff6b release/installers/numpy-1.6.0rc3-win32-superpack-python3.2.exe 09efcfdb987899d0ff6b79d79e707ac9 release/installers/numpy-1.6.0rc3.tar.gz 466fe879964de5e9ecb562f48ba18131 release/installers/numpy-1.6.0rc3.zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From davide_fiocco at yahoo.it Sat May 7 19:15:54 2011 From: davide_fiocco at yahoo.it (davide_fiocco at yahoo.it) Date: Sat, 7 May 2011 16:15:54 -0700 (PDT) Subject: [SciPy-User] Vectorize matrix inversion Message-ID: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> Hey all, I have a NxNxP array. Let's call it foo. foo[:,:,p] contains a matrix I want to invert. Is there a pure python/scipy way to compute an array bar without loops such that it would be equivalent to the following? import scipy.linalg as la import numpy as np bar = np.zeros((N, N, P)) + 0j for i in range(0,P): bar[:,:,i] = la.inv(foo[:,:,i]) I realize I could write some fortran code to do this or even use cython, but it would be nice if I could do this without needing to compile some extra code. As a summary, does anyone know how to compute the above (either example, but preferably both) without using loops? Cheers, Davide P.S. This is almost necrobumping a question that was never answered on this very same forum, on a post by Josh Lawrence, Nov 12 11:23:26 CST 2008. From njs at pobox.com Sat May 7 19:39:11 2011 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 May 2011 16:39:11 -0700 Subject: [SciPy-User] Vectorize matrix inversion In-Reply-To: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> References: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> Message-ID: On Sat, May 7, 2011 at 4:15 PM, davide_fiocco at yahoo.it wrote: > Hey all, > > I have a NxNxP array. Let's call it foo. foo[:,:,p] contains a > matrix > I want to invert. Is there a pure python/scipy way to compute an array > bar without loops such that it > would be equivalent to the following? > > import scipy.linalg as la > import numpy as np > > bar = np.zeros((N, N, P)) + 0j > > for i in range(0,P): > ? ? ? ?bar[:,:,i] = la.inv(foo[:,:,i]) I don't think there is any such way. But, note that unless N is very small, there won't be much advantage to doing this. The cost of doing a loop iteration + Python function call is higher than it is in C, but it still isn't *that* high when compared to the cost of actually doing matrix inversions. If speed is terribly important to you then you might want to get into the habit of using 'xrange' instead of 'range', though. 
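If you want to convince yourself, something along these lines (an untested sketch, sizes picked arbitrarily) lets you measure how much the Python-level loop actually costs for your N and P:

import time
import numpy as np
import scipy.linalg as la

N, P = 50, 1000
foo = np.random.rand(N, N, P) + 0j
bar = np.empty((N, N, P), dtype=complex)

t0 = time.time()
for i in xrange(P):
    bar[:, :, i] = la.inv(foo[:, :, i])
print time.time() - t0
# compare against the same loop with the la.inv call commented out;
# the total is dominated by the P matrix inversions, not by the loop itself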
-- Nathaniel From josef.pktd at gmail.com Sat May 7 19:42:30 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 7 May 2011 19:42:30 -0400 Subject: [SciPy-User] Vectorize matrix inversion In-Reply-To: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> References: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> Message-ID: On Sat, May 7, 2011 at 7:15 PM, davide_fiocco at yahoo.it wrote: > Hey all, > > I have a NxNxP array. Let's call it foo. foo[:,:,p] contains a > matrix > I want to invert. Is there a pure python/scipy way to compute an array > bar without loops such that it > would be equivalent to the following? > > import scipy.linalg as la > import numpy as np > > bar = np.zeros((N, N, P)) + 0j > > for i in range(0,P): > ? ? ? ?bar[:,:,i] = la.inv(foo[:,:,i]) > > I realize I could write some fortran code to do this or even use > cython, but it would be nice if I could do this without needing to > compile some extra code. As a summary, does anyone know how to > compute > the above (either example, but preferably both) without using loops? Maybe the answer to this question is still the same as before, "no", nobody knows how to compute this without using loops. If I cannot reformulate the problem in situations like this, I use a loop. For very small N and large P a blockdiagonal matrix might be possible, but building it might be more expensive than the loop. (The standard recommendation is "avoid that inv", which is however not always avoidable.) Josef > > Cheers, > > Davide > > P.S. This is almost necrobumping a question that was never answered on > this very same forum, on a post by Josh Lawrence, Nov 12 11:23:26 CST > 2008. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From wesmckinn at gmail.com Sat May 7 19:47:33 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Sat, 7 May 2011 19:47:33 -0400 Subject: [SciPy-User] Vectorize matrix inversion In-Reply-To: References: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> Message-ID: On Sat, May 7, 2011 at 7:42 PM, wrote: > On Sat, May 7, 2011 at 7:15 PM, davide_fiocco at yahoo.it > wrote: >> Hey all, >> >> I have a NxNxP array. Let's call it foo. foo[:,:,p] contains a >> matrix >> I want to invert. Is there a pure python/scipy way to compute an array >> bar without loops such that it >> would be equivalent to the following? >> >> import scipy.linalg as la >> import numpy as np >> >> bar = np.zeros((N, N, P)) + 0j >> >> for i in range(0,P): >> ? ? ? ?bar[:,:,i] = la.inv(foo[:,:,i]) >> >> I realize I could write some fortran code to do this or even use >> cython, but it would be nice if I could do this without needing to >> compile some extra code. As a summary, does anyone know how to >> compute >> the above (either example, but preferably both) without using loops? > > Maybe the answer to this question is still the same as before, "no", > nobody knows how to compute this without using loops. > > If I cannot reformulate the problem in situations like this, I use a > loop. For very small N and large P a blockdiagonal matrix might be > possible, but building it might be more expensive than the loop. > (The standard recommendation is "avoid that inv", which is however not > always avoidable.) > > Josef > > >> >> Cheers, >> >> Davide >> >> P.S. 
This is almost necrobumping a question that was never answered on >> this very same forum, on a post by Josh Lawrence, Nov 12 11:23:26 CST >> 2008. >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > This is a bit of an aside, but this "Matlab-style" indexing, let's call it: foo[:,:,p] does not yield a contiguous array in Python. You would be better off ordering the data like: bar = np.zeros((P, N, N)) and indexing lik bar[i] to retrieve each matrix. - Wes From klonuo at gmail.com Sun May 8 02:11:16 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Sun, 08 May 2011 08:11:16 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: References: <20110507143358.2838.B1C76292@gmail.com> Message-ID: <20110508081114.638F.B1C76292@gmail.com> Re-reading what I wrote yesterday may seem un-balanced, but please consider that it is not with intend to be disrespectful toward any work been made. Also consider that English is not my native language and I can't always describe things the way I wanted. > For me, achieving things like that with Gnuplot would be incredibly > frustrating. The gnuplot learning curve is steep :) > > For your feedback to be more constructive, it might be good if you also > provided suggestions on how to improve the situation. > > What do you want to have? Did you read the existing beginner-level > documentation of Matplotlib (i.e. User's Guide)? If yes, was it useful > or not? What in the specific example you cite was difficult to do? It is useful. I have it together with PACKT's 'Matplotlib for Python Developers'. Both are useful and web page is just as beautiful as useful, showing state of the art structuring - I can find anything I'm interested in, in 2 clicks. And I feel terrible complaining about it, but that's what happened. I don't have example data right here, but it was data from 30 participants divided in 5 bins. I looked at 'table_demo.py' from gallery page and 'bars stacked' graph from Chapter 9 in PACKT book. IIRC I could not make bars to be presented at 100% each (so all at same overall level) instead as appended absolute values, as shown in both examples I followed. It sounds trivial but I had problem with 'yoff' IIRC and getting to know mechanism behind stacked bars, then just left it as I had to make those plots in time. I have gnuplot since I know I made my first plot, and that could suggest I need to spend more time with matplotlib to gain advantage of using advanced plotting library in python code. And that's fine as I plan to do that, and I used what I experienced as a prolog to my scikits.learn problem. > The best place to start would be to read the tutorial part of > the user guide > > http://scikit-learn.sourceforge.net/tutorial.html > > It seems to explain exactly the example you mention in detail. Been there. First I thought, that this is too advanced for me and I never liked statistics really, but when I saw 'Recognizing hand-written digits' I got interested for some reason and went on trying, nonetheless. If I open the csv table I see 64 column and many rows presented by 0-16 digits. Each row presents image. 
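At least I can load the file and look at one row, though I am only guessing what the columns mean:

import numpy as np

# loadtxt can read the gzipped csv directly
# (the file lives under scikits/learn/datasets/data/)
data = np.loadtxt('digits.csv.gz', delimiter=',')
print data.shape

# my guess: each row is one image, flattened to 64 grey values in 0-16
img = data[0, :64].reshape(8, 8)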
If I follow link for the source from which this table is distilled, it seems it's some data based on output from 'WACOM PL-100V pressure sensitive tablet with an integrated LCD display and a cordless stylus'. I don't see connection between 'http://archive.ics.uci.edu/ml/machine-learning-databases/pendigits/' and '..\scikits\learn\datasets\data\digits.csv.gz'. Or I don't see how can I make my test model. Cheers From opossumnano at gmail.com Sun May 8 07:18:47 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Sun, 8 May 2011 13:18:47 +0200 Subject: [SciPy-User] [ANN] EuroScipy 2011 - deadline extended Message-ID: <20110508111847.GB19748@multivac.zonafranca> =================================== EuroScipy 2011 - Deadline Extended! =================================== Deadline extended! You can submit your contribution until Friday May 13. --------------------------------------------- The 4th European meeting on Python in Science --------------------------------------------- **Paris, Ecole Normale Sup?rieure, August 25-28 2011** We are happy to announce the 4th EuroScipy meeting, in Paris, August 2011. The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic research and state of the art industry. Main topics =========== - Presentations of scientific tools and libraries using the Python language, including but not limited to: - vector and array manipulation - parallel computing - scientific visualization - scientific data flow and persistence - algorithms implemented or exposed in Python - web applications and portals for science and engineering. - Reports on the use of Python in scientific achievements or ongoing projects. - General-purpose Python tools that can be of special interest to the scientific community. Tutorials ========= There will be two tutorial tracks at the conference, an introductory one, to bring up to speed with the Python language as a scientific tool, and an advanced track, during which experts of the field will lecture on specific advanced topics such as advanced use of numpy, scientific visualization, software engineering... Keynote Speaker: Fernando Perez =============================== We are excited to welcome Fernando Perez (UC Berkeley, Helen Wills Neuroscience Institute, USA) as our keynote speaker. Fernando Perez is the original author of the enhanced interactive python shell IPython and a very active contributor to the Python for Science ecosystem. Important dates =============== Talk submission deadline: Sunday May 8 Program announced: Sunday May 29 Tutorials tracks: Thursday August 25 - Friday August 26 Conference track: Saturday August 27 - Sunday August 28 Call for papers =============== We are soliciting talks that discuss topics related to scientific computing using Python. These include applications, teaching, future development directions, and research. We welcome contributions from the industry as well as the academic world. Indeed, industrial research and development as well academic research face the challenge of mastering IT tools for exploration, modeling and analysis. We look forward to hearing your recent breakthroughs using Python! Submission guidelines ===================== - We solicit talk proposals in the form of a one-page long abstract. - Submissions whose main purpose is to promote a commercial product or service will be refused. 
- All accepted proposals must be presented at the EuroSciPy conference by at least one author. The one-page long abstracts are for conference planing and selection purposes only. We will later select papers for publication of post-proceedings in a peer-reviewed journal. How to submit an abstract ========================= To submit a talk to the EuroScipy conference follow the instructions here: http://www.euroscipy.org/card/euroscipy2011_call_for_papers Organizers ========== Chairs: - Ga?l Varoquaux (INSERM, Unicog team, and INRIA, Parietal team) - Nicolas Chauvat (Logilab) Local organization committee: - Emmanuelle Gouillart (Saint-Gobain Recherche) - Jean-Philippe Chauvat (Logilab) Tutorial chair: - Valentin Haenel (MKP, Technische Universit?t Berlin) Program committee: - Chair: Tiziano Zito (MKP, Technische Universit?t Berlin) - Romain Brette (ENS Paris, DEC) - Emmanuelle Gouillart (Saint-Gobain Recherche) - Eric Lebigot (Laboratoire Kastler Brossel, Universit? Pierre et Marie Curie) - Konrad Hinsen (Soleil Synchrotron, CNRS) - Hans Petter Langtangen (Simula laboratories) - Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute) - Mike M?ller (Python Academy) - Didrik Pinte (Enthought Inc) - Marc Poinot (ONERA) - Christophe Pradal (CIRAD/INRIA, Virtual Plantes team) - Andreas Schreiber (DLR) - St?fan van der Walt (University of Stellenbosch) Website ======= http://www.euroscipy.org/conference/euroscipy_2011 _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion From wilson.andrew.j at gmail.com Fri May 6 12:35:24 2011 From: wilson.andrew.j at gmail.com (Andy Wilson) Date: Fri, 6 May 2011 11:35:24 -0500 Subject: [SciPy-User] matplotlib: print to postscript file and some other questions In-Reply-To: <20110506094637.180850@gmx.net> References: <20110506094637.180850@gmx.net> Message-ID: On Fri, May 6, 2011 at 4:46 AM, Johannes Radinger wrote: > > I don't know if there is a special matplotlib-user list... > Hi Johannes. There is a matplotlib users list: https://lists.sourceforge.net/lists/listinfo/matplotlib-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon May 9 05:23:55 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 9 May 2011 11:23:55 +0200 Subject: [SciPy-User] should one put "." into PYTHONPATH In-Reply-To: References: Message-ID: <20110509092355.GE1931@phare.normalesup.org> On Thu, May 05, 2011 at 04:23:40PM -0700, Ondrej Certik wrote: > is it a good practice to have the following in .bashrc: > export PYTHONPATH=$PYTHONPATH:. I am against it. It is non standard. > where 'a.py' imports something from the current directory. I googled a > bit, and found that some people recommend to use "setup.py develop" > instead. I don't use setup.py in my project (I use cmake to mix > Fortran and Python together). So one option for me is to always > install it, and then import it like any other package from > examples/a.py. 'setup.py' is standard. It will work on any platform with no additional soft to install. While it may be awkward to use with a complex build chain, people are working on fixing that (eg David C). My 2 cents, Gael From ondrej at certik.cz Mon May 9 05:49:21 2011 From: ondrej at certik.cz (Ondrej Certik) Date: Mon, 9 May 2011 02:49:21 -0700 Subject: [SciPy-User] should one put "." 
into PYTHONPATH In-Reply-To: References: Message-ID: Hi Robert, Jason and Gael, On Thu, May 5, 2011 at 7:22 PM, Robert Kern wrote: > On Thu, May 5, 2011 at 18:23, Ondrej Certik wrote: >> Hi, >> >> is it a good practice to have the following in .bashrc: >> >> export PYTHONPATH=$PYTHONPATH:. > > I don't recommend it. You will get unexpected behavior and waste time > chasing down problems that don't actually exist. The answer is 100% clear: don't fiddle with PYTHONPATH. Thanks for that, I was undecided. Now I can see, that I need to use other solutions to the problem. > >> I know that Ubuntu long time ago had the "." in PYTHONPATH by default, >> and then dropped it. The reason why I want it is so that I can develop >> in the current directory, by doing things like: >> >> python examples/a.py >> >> where 'a.py' imports something from the current directory. I googled a >> bit, and found that some people recommend to use "setup.py develop" >> instead. I don't use setup.py in my project (I use cmake to mix >> Fortran and Python together). So one option for me is to always >> install it, and then import it like any other package from >> examples/a.py. > > You don't need to do "python setup.py develop" every time. Nothing > actually depends on there being a setup.py. Just add a .pth file into > your site-packages listing the directory you want added. E.g. if you > have your sympy checkout in /home/ondrej/git/sympy/ you would have a > file named sympy.pth (or any other name ending in .pth) in your > site-packages directory with just the following contents (without the > triple quotes: > > """ > /home/ondrej/git/sympy > """ > > Then you can run /home/ondrej/git/sympy/examples/a.py however you > like, from whichever directory you like. Wow, thanks for this tip! This works like a charm. That's exactly what I was looking for. Gael, I use lots of Fortran and I tried to figure out how to use it together with C and Cython in setup.py, and it's not easy. With cmake, everything works as it should (and in a standard way, e.g. anyone who knows cmake knows what to do). Ondrej From gael.varoquaux at normalesup.org Mon May 9 06:02:13 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 9 May 2011 12:02:13 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: <20110507143358.2838.B1C76292@gmail.com> References: <20110507143358.2838.B1C76292@gmail.com> Message-ID: <20110509100212.GF1931@phare.normalesup.org> On Sat, May 07, 2011 at 02:34:00PM +0200, Klonuo Umom wrote: > Looking at matplotlib examples gallery everything seemed so > perfect and possible, but to be honest I spent too much time to plot > just one stacked histogram the way I wanted, and then went to easier to > me gnuplot to save some time and finish my plots. That's just a simple learning issue: you know gnuplot, thus you find it easier to use than matplotlib. I was there at some point. I don't think that out of the documentations of gnuplot or matplotlib, any one is significantly easier to grasp. > In the same manner, but on much higher level, I got interested, out of > pure curiosity, in hand-writing recognition which is apparently possible > with scikits.learn (among other older packages). > AFAIK almost all scikits lack documentation beyond bare docstrings > inside modules, but scikits.learn however has user guide > ( http://scikit-learn.sourceforge.net/user_guide.html ) with examples > inside. One of them is 'Recognizing hand-written digits' in 'Machine > Learning' chapter. 
This example has no explanation - source code is > presented along with plot output. Fair enough, this example could be improved (darn, I have my name on it :$). I won't reply to the detailed points of message. If you feel that specific issues must be addressed, feel free to point them out (preferably on the scikits.learn mailing list https://lists.sourceforge.net/lists/listinfo/scikit-learn-general). We will do our best to integrate your feedback. However keep in mind that what you are trying to do is actually a challenging task and the subject of ongoing research. There will be a learning curve. The scikits.learn is trying to do explain as much as possible machine learning to non specialists, but it a full-blown scientific field. You will not be able to have a functioning hand-written digits pipeline without investing time in it. I wouldn't be able to solve that problem quickly either. That said, we fight as hard as we can to make it easier. People usually tell us that the scikits.learn is amongst the easiest machine learning package to use. It has a 300-pages long manual, with many different examples, and external references. All methods have fully-fledged docstrings. Achieving this alone is a huge amount of work, on top of providing binaries for various platforms, making sure that it gives correct and controled results. We can do better, of course; we want to do better, but we are a bunch of volunteers, loosing sleep on the scikit, just like most of the developers of open source packages. My back is telling me that I can't sacrifice more sleep to open source development. To have better packages, we need more help. > I'll whiteness steep curve unlike general Python comfortability. I think that scientific packages tend to have a steeper learning curve partly because there is the science to learn on top of the software. To end on a positive note, I must say that I understand you frustration, and I agree with you that usability of software is premium. As a community we must keep this in mind, and do our best. Cheers, Ga?l From davide_fiocco at yahoo.it Mon May 9 06:08:39 2011 From: davide_fiocco at yahoo.it (Davide Fiocco) Date: Mon, 9 May 2011 03:08:39 -0700 (PDT) Subject: [SciPy-User] Vectorize matrix inversion In-Reply-To: References: <1b146d09-8189-4422-a07b-f9bfa928acfb@j31g2000yqe.googlegroups.com> Message-ID: <322cb2b7-b52e-490f-b0ee-e330bd902da7@g12g2000yqd.googlegroups.com> Thanks to all of you! > (The standard recommendation is "avoid that inv", which is however not > always avoidable.) As _most_ of the time I'll be dealing with 2x2xP matrices, I think I'll stick to something det = foo[0,0,:]*foo[1,1,:] - foo[1,0,:]*foo[0,1,:] bar[0,0,:] = foo[1,1,:]/det bar[0,1,:] = - foo[0,1,:]/det bar[1,1,:] = foo[0,0,:]/det bar[1,0,:] = - foo[1,0,:]/det and change my indexing habit in the meantime. Thanks again! From klonuo at gmail.com Mon May 9 07:46:45 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 09 May 2011 13:46:45 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: <20110509100212.GF1931@phare.normalesup.org> References: <20110507143358.2838.B1C76292@gmail.com> <20110509100212.GF1931@phare.normalesup.org> Message-ID: <20110509134639.2043.B1C76292@gmail.com> > I won't reply to the detailed points of message. If you feel that > specific issues must be addressed, feel free to point them out > (preferably on the scikits.learn mailing list > https://lists.sourceforge.net/lists/listinfo/scikit-learn-general). 
We > will do our best to integrate your feedback. However keep in mind that > what you are trying to do is actually a challenging task and the subject > of ongoing research. There will be a learning curve. The scikits.learn is > trying to do explain as much as possible machine learning to non > specialists, but it a full-blown scientific field. You will not be able > to have a functioning hand-written digits pipeline without investing time > in it. I wouldn't be able to solve that problem quickly either. Hi Gael :) thank you for your patient reply. I suspect it's very advanced topic, and not sure if I'll gain anything, but at least I'll know how far I am from it. If you could point to some source how 'digits.csv.gz' was distilled from 'http://archive.ics.uci.edu/ml/machine-learning-databases/pendigits/' data, or some similar example, I could probably start wondering around and maybe ask smarter questions at scikits.learn mailing list I tried to look from other side, like 'reusing of existing data from http://mlcomp.org', but I can't find my common denominator with their provided datasets. Best wishes, Klonuo From brockp at umich.edu Mon May 9 09:34:22 2011 From: brockp at umich.edu (Brock Palen) Date: Mon, 9 May 2011 09:34:22 -0400 Subject: [SciPy-User] SciPy Featured on Podcast Message-ID: Thank you to part of the SciPy crew for taking time out to talk about the project. http://www.rce-cast.com/Podcast/rce-54-scipy-scientific-tools-for-python.html Feel free to spread this link around it should be good for people who are curious about SciPy. If you have topics you would like to hear on the podcast please contact me off list. Thanks again Travis, Anthony and Warren. Brock Palen www.umich.edu/~brockp Center for Advanced Computing brockp at umich.edu (734)936-1985 From denis-bz-gg at t-online.de Mon May 9 09:20:38 2011 From: denis-bz-gg at t-online.de (denis) Date: Mon, 9 May 2011 06:20:38 -0700 (PDT) Subject: [SciPy-User] Kmeans, the correct role of whitening? In-Reply-To: <961140CA-F25C-4767-9EB4-B332CA39CD4E@gmail.com> References: <961140CA-F25C-4767-9EB4-B332CA39CD4E@gmail.com> Message-ID: <6f3c5d71-b4c7-4bb0-8058-f0f40ee8af52@24g2000yqk.googlegroups.com> On May 6, 10:03 am, Luca Giacomel wrote: > Hello, > I'm developing a piece of code which uses the kmeans algorithm to cluster huge amounts of financial data. I've got a few problems with the whitening function thogh ... Luca, there's "cheap" and "full" whitening: cheap per-component, white(X) = (X - allthedata.mean()) / allthedata.std() and full, divide by sqrt of the full covariance matrix. Which do you mean ? (There must also be tridiagonal whitening, O(N) -- stats people ?) Also, can you not check many observations at once, much faster than one at a time ? 
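Roughly, the two variants I mean (a quick sketch only; the function names are mine, this is not the scipy.cluster.vq API):

import numpy as np

def whiten_cheap(X):
    # per-component: centre each column and divide by its std
    return (X - X.mean(axis=0)) / X.std(axis=0)

def whiten_full(X):
    # full: rotate into the eigenbasis of the covariance matrix and
    # rescale, so the whitened data has (approximately) unit covariance
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=0))
    return np.dot(np.dot(Xc, evecs), np.diag(1.0 / np.sqrt(evals)))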
cheers -- denis From gael.varoquaux at normalesup.org Mon May 9 09:32:46 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 9 May 2011 15:32:46 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: <20110509134639.2043.B1C76292@gmail.com> References: <20110507143358.2838.B1C76292@gmail.com> <20110509100212.GF1931@phare.normalesup.org> <20110509134639.2043.B1C76292@gmail.com> Message-ID: <20110509133246.GA4651@phare.normalesup.org> On Mon, May 09, 2011 at 01:46:45PM +0200, Klonuo Umom wrote: > If you could point to some source how 'digits.csv.gz' was distilled from > 'http://archive.ics.uci.edu/ml/machine-learning-databases/pendigits/' > data, or some similar example, I could probably start wondering around > and maybe ask smarter questions at scikits.learn mailing list Your wishes are https://github.com/scikit-learn/scikit-learn/commit/348d9aa6cab8fe0c0819514fc0cc00c32f6abba1 https://github.com/scikit-learn/scikit-learn/commit/1c7c01145eb997ae3a95513ac0854d9db8105b1e (I did spend an hour on this). > I tried to look from other side, like 'reusing of existing data from > http://mlcomp.org', but I can't find my common denominator with their > provided datasets. Yes, and this is no surprise. In general, one can face data with arbitrary shape, size, structure... In my experience of years of data processing, there is always at least an hour or so to spend to massage new data into shape before being able to use it. Cheers, Gael From jjstickel at vcn.com Mon May 9 10:43:11 2011 From: jjstickel at vcn.com (Jonathan Stickel) Date: Mon, 09 May 2011 08:43:11 -0600 Subject: [SciPy-User] Vectorize matrix inversion In-Reply-To: References: Message-ID: <4DC7FD7F.5010303@vcn.com> On 5/8/11 00:11 , scipy-user-request at scipy.org wrote: > Date: Sat, 7 May 2011 16:15:54 -0700 (PDT) > From:"davide" > Subject: [SciPy-User] Vectorize matrix inversion > > Hey all, > > I have a NxNxP array. Let's call it foo. foo[:,:,p] contains a > matrix > I want to invert. Is there a pure python/scipy way to compute an array > bar without loops such that it > would be equivalent to the following? > > import scipy.linalg as la > import numpy as np > > bar = np.zeros((N, N, P)) + 0j > > for i in range(0,P): > bar[:,:,i] = la.inv(foo[:,:,i]) > > I realize I could write some fortran code to do this or even use > cython, but it would be nice if I could do this without needing to > compile some extra code. As a summary, does anyone know how to > compute > the above (either example, but preferably both) without using loops? > > Cheers, > > Davide Recently I wanted to invert a block diagonal matrix where each block was a vandermonde matrix. I found it reasonably fast to put each block in a list and then use list comprehension to invert each block: import numpy as np import scipy.linalg # (x, nn, N, and n come from my particular problem setup) # form the block matrices B = [np.vander(x[j:j+nn],nn) for j in xrange(N-n)] # compute the inverse for each block matrix Binv = [scipy.linalg.inv(B[j]) for j in xrange(N-n)] Not exactly "vectorized", but faster than an explicit for loop.
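(And if you need the result back as one array, scipy.linalg.block_diag -- available since scipy 0.8, if I remember right -- can reassemble the inverted blocks; untested sketch:)

from scipy.linalg import block_diag

# block-diagonal array built from the list of inverted blocks
BDinv = block_diag(*Binv)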
HTH, Jonathan From klonuo at gmail.com Mon May 9 10:43:25 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 09 May 2011 16:43:25 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: <20110509133246.GA4651@phare.normalesup.org> References: <20110509134639.2043.B1C76292@gmail.com> <20110509133246.GA4651@phare.normalesup.org> Message-ID: <20110509164323.2048.B1C76292@gmail.com> > Your whishes are > > https://github.com/scikit-learn/scikit-learn/commit/348d9aa6cab8fe0c0819514fc0cc00c32f6abba1 > https://github.com/scikit-learn/scikit-learn/commit/1c7c01145eb997ae3a95513ac0854d9db8105b1e > > (I did spend an hour on this). Thank you Gael, you are real gentleman It looks interesting and overwhelming. I'll try to find my way in, and address possible questions on scikits.learn mailing list Thanks again for your time, and cheers Klonuo From william.ratcliff at gmail.com Mon May 9 10:51:19 2011 From: william.ratcliff at gmail.com (william ratcliff) Date: Mon, 9 May 2011 10:51:19 -0400 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: <20110509164323.2048.B1C76292@gmail.com> References: <20110509134639.2043.B1C76292@gmail.com> <20110509133246.GA4651@phare.normalesup.org> <20110509164323.2048.B1C76292@gmail.com> Message-ID: Is the manual open sourced so the community can edit it? On May 9, 2011 10:43 AM, "Klonuo Umom" wrote: >> Your whishes are >> >> https://github.com/scikit-learn/scikit-learn/commit/348d9aa6cab8fe0c0819514fc0cc00c32f6abba1 >> https://github.com/scikit-learn/scikit-learn/commit/1c7c01145eb997ae3a95513ac0854d9db8105b1e >> >> (I did spend an hour on this). > > Thank you Gael, you are real gentleman > > It looks interesting and overwhelming. I'll try to find my way in, and > address possible questions on scikits.learn mailing list > > > Thanks again for your time, and cheers > > Klonuo > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon May 9 11:09:22 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 9 May 2011 17:09:22 +0200 Subject: [SciPy-User] Trying hand-writing recognition with scikits.learn In-Reply-To: References: <20110509134639.2043.B1C76292@gmail.com> <20110509133246.GA4651@phare.normalesup.org> <20110509164323.2048.B1C76292@gmail.com> Message-ID: <20110509150922.GC4651@phare.normalesup.org> On Mon, May 09, 2011 at 10:51:19AM -0400, william ratcliff wrote: > Is the manual open sourced so the community can edit it? Of course. Everything is BSD licensed. The source code of the manual can be found in the 'doc' directory of the scikit-learn's source code. It is compiled with sphinx, which is pretty standard in the scipy community. The source code is hosted on github: https://github.com/scikit-learn/scikit-learn The standard way to contribute is to fork the project on github, modify the code (including documentation) and send pull requests. The process is described on http://scikit-learn.sourceforge.net/dev/developers/index.html The development version of the manual, including any changes contributed, is compiled nightly, and visible on: http://scikit-learn.sourceforge.net/dev/ As can be seen from https://github.com/scikit-learn/scikit-learn/graphs/impact we are very inclusive with external contributions. This is a community driven project. 
And the community can certainly help improving the documentation. Gael From pbajk at yahoo.co.uk Mon May 9 15:09:43 2011 From: pbajk at yahoo.co.uk (P B) Date: Mon, 9 May 2011 20:09:43 +0100 (BST) Subject: [SciPy-User] problem with installing on osx 10.6.6 Message-ID: <456683.27562.qm@web132303.mail.ird.yahoo.com> Hi, I', having trouble with scipy.? I have followed the instructions at scipy website and have installed the following on my mac osx 10.6.6 (taken from the sourceforge binarys) NumPy version 1.5.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy SciPy version 0.8.0 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple Inc. build 5493)] nose version 1.0.0 When I run the test scipy.test('1','10') some items seem to pass: test_streams.test_make_stream(True,) ... ok Some tests seem to be skipped: nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so is executable; skipped some seem to fail: /Users/user/.python26_compiled/m7/module_multi_function.cpp:13:19: error: complex: No such file or directory and ====================================================================== ERROR: test_string_and_int (test_ext_tools.TestExtModule) ---------------------------------------------------------------------- Traceback (most recent call last): ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/tests/test_ext_tools.py", line 72, in test_string_and_int ??? mod.compile(location = build_dir) ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/ext_tools.py", line 367, in compile ??? verbose = verbose, **kw) ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/build_tools.py", line 273, in build_extension ??? setup(name = module_name, ext_modules = [ext],verbose=verb) ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/distutils/core.py", line 186, in setup ??? return old_setup(**new_attr) ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", line 169, in setup ??? raise SystemExit, "error: " + str(msg) CompileError: error: Command "c++ -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp -o /var/folders/4b/4bhByeH9HSuDIezfnSZ6G++++TI/-Tmp-/user/python26_intermediate/compiler_7ca1591dfd3261e140e707030a00840e/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.o" failed with exit status 1 with the final result being: FAILED (KNOWNFAIL=15, SKIP=40, errors=242, failures=2) I'm assuming I have the wrong version of something, so I upgraded to scipy 0.9. ? Sadly that led to essentially the same results.? Can anyone advise me what to do next? 
Thanks, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From bastian.weber at gmx-topmail.de Mon May 9 15:13:07 2011 From: bastian.weber at gmx-topmail.de (Bastian Weber) Date: Mon, 09 May 2011 21:13:07 +0200 Subject: [SciPy-User] unexpected behavior when reverting twice ([::-1, 0]) Message-ID: <4DC83CC3.1060002@gmx-topmail.de> Hello, I found an unexpected behavior when reverting an array and then reverting a part of it again (numpy version 1.3.0): In [1]: import numpy as np In [2]: a = np.arange(5) In [3]: a Out[3]: array([0, 1, 2, 3, 4]) In [4]: b = np.c_[a, a*3] In [5]: b Out[5]: array([[ 0, 0], [ 1, 3], [ 2, 6], [ 3, 9], [ 4, 12]]) In [6]: c = b[::-1, :] In [7]: c Out[7]: array([[ 4, 12], [ 3, 9], [ 2, 6], [ 1, 3], [ 0, 0]]) In [8]: c[:,0] = c[::-1, 0] In [9]: c Out[9]: array([[ 0, 12], [ 1, 9], [ 2, 6], [ 1, 3], [ 0, 0]]) In [10]: b Out[10]: array([[ 0, 0], [ 1, 3], [ 2, 6], [ 1, 9], [ 0, 12]]) I would have expected the first column of c to be [0,1,2,3,4] an b to be the same as in step 5. This is how it continues with side effects which seem magical to me: In [11]: c[:,0] = np.arange(5) In [12]: c Out[12]: array([[ 0, 12], [ 1, 9], [ 2, 6], [ 3, 3], [ 4, 0]]) In [13]: b Out[13]: array([[ 4, 0], [ 3, 3], [ 2, 6], [ 1, 9], [ 0, 12]]) In [14]: np.version.version Out[14]: '1.3.0' Do I misinterpret something or is this an version issue? Maybe its just on my machine.. Best regards, Bastian. From robert.kern at gmail.com Mon May 9 15:27:47 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 9 May 2011 14:27:47 -0500 Subject: [SciPy-User] unexpected behavior when reverting twice ([::-1, 0]) In-Reply-To: <4DC83CC3.1060002@gmx-topmail.de> References: <4DC83CC3.1060002@gmx-topmail.de> Message-ID: On Mon, May 9, 2011 at 14:13, Bastian Weber wrote: > Hello, > > I found an unexpected behavior when reverting an array and then > reverting a part of it again (numpy version 1.3.0): > > > In [1]: import numpy as np > > In [2]: a = np.arange(5) > > In [3]: a > Out[3]: array([0, 1, 2, 3, 4]) > > In [4]: b = np.c_[a, a*3] > > In [5]: b > Out[5]: > array([[ 0, ?0], > ? ? ? ?[ 1, ?3], > ? ? ? ?[ 2, ?6], > ? ? ? ?[ 3, ?9], > ? ? ? ?[ 4, 12]]) > > In [6]: c = b[::-1, :] > > In [7]: c > Out[7]: > array([[ 4, 12], > ? ? ? ?[ 3, ?9], > ? ? ? ?[ 2, ?6], > ? ? ? ?[ 1, ?3], > ? ? ? ?[ 0, ?0]]) > > In [8]: c[:,0] = c[::-1, 0] > > In [9]: c > Out[9]: > array([[ 0, 12], > ? ? ? ?[ 1, ?9], > ? ? ? ?[ 2, ?6], > ? ? ? ?[ 1, ?3], > ? ? ? ?[ 0, ?0]]) > > In [10]: b > Out[10]: > array([[ 0, ?0], > ? ? ? ?[ 1, ?3], > ? ? ? ?[ 2, ?6], > ? ? ? ?[ 1, ?9], > ? ? ? ?[ 0, 12]]) > > > I would have expected the first column of c to be [0,1,2,3,4] an b to be > the same as in step 5. c[::-1,0] makes a view on c, not a copy. When you assign back into c[:,0], you end up modifying the elements near the end of c[::-1,0] (i.e. near the beginning of c[:,0]). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From ralf.gommers at googlemail.com Mon May 9 16:15:21 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 9 May 2011 22:15:21 +0200 Subject: [SciPy-User] problem with installing on osx 10.6.6 In-Reply-To: <456683.27562.qm@web132303.mail.ird.yahoo.com> References: <456683.27562.qm@web132303.mail.ird.yahoo.com> Message-ID: On Mon, May 9, 2011 at 9:09 PM, P B wrote: > Hi, > I', having trouble with scipy. 
> I have followed the instructions at scipy website and have installed the > following on my mac osx 10.6.6 (taken from the sourceforge binarys) > > NumPy version 1.5.1 > NumPy is installed in > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy > SciPy version 0.8.0 > SciPy is installed in > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy > Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple > Inc. build 5493)] > nose version 1.0.0 > > When I run the test scipy.test('1','10') > > some items seem to pass: > > test_streams.test_make_stream(True,) ... ok > > Some tests seem to be skipped: > > nose.selector: INFO: > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so > is executable; skipped > > some seem to fail: > /Users/user/.python26_compiled/m7/module_multi_function.cpp:13:19: error: > complex: No such file or directory > > and > > ====================================================================== > ERROR: test_string_and_int (test_ext_tools.TestExtModule) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/tests/test_ext_tools.py", > line 72, in test_string_and_int > mod.compile(location = build_dir) > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/ext_tools.py", > line 367, in compile > verbose = verbose, **kw) > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/build_tools.py", > line 273, in build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/distutils/core.py", > line 186, in setup > return old_setup(**new_attr) > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", > line 169, in setup > raise SystemExit, "error: " + str(msg) > CompileError: error: Command "c++ -fno-strict-aliasing -fno-common -dynamic > -isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch ppc -arch i386 -g -O2 > -DNDEBUG -g -O3 > -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave > -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx > -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp > -o > /var/folders/4b/4bhByeH9HSuDIezfnSZ6G++++TI/-Tmp-/user/python26_intermediate/compiler_7ca1591dfd3261e140e707030a00840e/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.o" > failed with exit status 1 > > with the final result being: > > FAILED (KNOWNFAIL=15, SKIP=40, errors=242, failures=2) > > > I'm assuming I have the wrong version of something, so I upgraded to scipy > 0.9. > > Sadly that led to essentially the same results. Can anyone advise me what > to do next? > > Hmm, this is a little unusual, if you used the binary installers from SF for numpy 1.5.1 and scipy 0.8 / 0.9, and the Python binary from python.org, that should just work. 
Can you provide the complete output of "scipy.test(verbose=2)" (if it's too large put it on a pastebin site)? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From bastian.weber at gmx-topmail.de Mon May 9 16:06:51 2011 From: bastian.weber at gmx-topmail.de (Bastian Weber) Date: Mon, 09 May 2011 22:06:51 +0200 Subject: [SciPy-User] unexpected behavior when reverting twice ([::-1, 0]) In-Reply-To: References: <4DC83CC3.1060002@gmx-topmail.de> Message-ID: <4DC8495B.9080100@gmx-topmail.de> >> >> >> I would have expected the first column of c to be [0,1,2,3,4] an b to be >> the same as in step 5. > > c[::-1,0] makes a view on c, not a copy. When you assign back into > c[:,0], you end up modifying the elements near the end of c[::-1,0] > (i.e. near the beginning of c[:,0]). > Ah, OK. As I expected: the bug was in front of the monitor. ;) I already guessed, that it might have to do with views at first... but then I tried it with np.flipud(...) and got the same results. Does this mean flipud creates a view too? And is there a short syntax for creating a revered copy? Is there some document about the view-concept, its properties and possible pitfalls? Thanks, Bastian. From robert.kern at gmail.com Mon May 9 16:40:55 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 9 May 2011 15:40:55 -0500 Subject: [SciPy-User] unexpected behavior when reverting twice ([::-1, 0]) In-Reply-To: <4DC8495B.9080100@gmx-topmail.de> References: <4DC83CC3.1060002@gmx-topmail.de> <4DC8495B.9080100@gmx-topmail.de> Message-ID: On Mon, May 9, 2011 at 15:06, Bastian Weber wrote: > >>> I would have expected the first column of c to be [0,1,2,3,4] an b to be >>> the same as in step 5. >> >> c[::-1,0] makes a view on c, not a copy. When you assign back into >> c[:,0], you end up modifying the elements near the end of c[::-1,0] >> (i.e. near the beginning of c[:,0]). >> > > Ah, OK. As I expected: the bug was in front of the monitor. ;) > > I already guessed, that it might have to do with views at first... but > then I tried it with np.flipud(...) and got the same results. Does this > mean flipud creates a view too? Yes. > And is there a short syntax for creating a revered copy? b = c[::-1].copy() > Is there some document about the view-concept, its properties and > possible pitfalls? http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From pbajk at yahoo.co.uk Mon May 9 17:01:43 2011 From: pbajk at yahoo.co.uk (P B) Date: Mon, 9 May 2011 22:01:43 +0100 (BST) Subject: [SciPy-User] problem with installing on osx 10.6.6 In-Reply-To: Message-ID: <352107.67118.qm@web132303.mail.ird.yahoo.com> Hi,this is odd. ?IF I put in the command?>>> import numpy>>> import scipy>>> scipy.test(verbose=2) I get the output Running unit tests for scipy NumPy version 1.5.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy SciPy version 0.9.0 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple Inc. build 5493)] nose version 1.0.0 ******* ?many test results which then finish with: OK (KNOWNFAIL=12, SKIP=42) Could it be there is a problem with the "scipy.test('1','10')" command? 
The full output is: Python 2.6.6 (r266:84374, Aug 31 2010, 11:00:51)? [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "copyright", "credits" or "license()" for more information. ? ? **************************************************************** ? ? Personal firewall software may warn about the connection IDLE ? ? makes to its subprocess using this computer's internal loopback ? ? interface.? This connection is not visible on any external ? ? interface and no data is sent to or received from the Internet. ? ? **************************************************************** ?? ? IDLE 2.6.6 ? ? ? >>> import numpy >>> import scipy >>> scipy.test(verbose=2) Running unit tests for scipy NumPy version 1.5.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy SciPy version 0.9.0 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple Inc. build 5493)] nose version 1.0.0 Tests cophenet(Z) on tdist data set. ... ok Tests cophenet(Z, Y) on tdist data set. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) with empty linkage and condensed distance matrix. ... ok Tests num_obs_linkage with observation matrices of multiple sizes. ... ok Tests fcluster(Z, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests from_mlab_linkage on empty linkage array. ... ok Tests from_mlab_linkage on linkage array with multiple rows. ... ok Tests from_mlab_linkage on linkage array with single row. ... ok Tests inconsistency matrix calculation (depth=1) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=1, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=2, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=3, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=4, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=1) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a single linkage. ... ok Tests is_isomorphic on test case #1 (one flat cluster, different labellings) ... ok Tests is_isomorphic on test case #2 (two flat clusters, different labelings) ... ok Tests is_isomorphic on test case #3 (no flat clusters) ... 
ok Tests is_isomorphic on test case #4A (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #4B (3 flat clusters, different labelings, nonisomorphic) ... ok Tests is_isomorphic on test case #4C (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling, slightly non-isomorphic.) Run 3 times. ... ok Tests is_monotonic(Z) on 1x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting False. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 1). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 2). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 3). Expecting False ... ok Tests is_monotonic(Z) on 3x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on an empty linkage. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on Iris data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Perturbing. Expecting False. ... ok Tests is_valid_im(R) on im over 2 observations. ... ok Tests is_valid_im(R) on im over 3 observations. ... ok Tests is_valid_im(R) with 3 columns. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link counts. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height means. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height standard deviations. ... ok Tests is_valid_im(R) with 5 columns. ... ok Tests is_valid_im(R) with empty inconsistency matrix. ... ok Tests is_valid_im(R) with integer type. ... ok Tests is_valid_linkage(Z) on linkage over 2 observations. ... ok Tests is_valid_linkage(Z) on linkage over 3 observations. ... ok Tests is_valid_linkage(Z) with 3 columns. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative counts. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative distances. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (left). ... 
ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (right). ... ok Tests is_valid_linkage(Z) with 5 columns. ... ok Tests is_valid_linkage(Z) with empty linkage. ... ok Tests is_valid_linkage(Z) with integer type. ... ok Tests leaders using a flat clustering generated by single linkage. ... ok Tests leaves_list(Z) on a 1x4 linkage. ... ok Tests leaves_list(Z) on a 2x4 linkage. ... ok Tests leaves_list(Z) on the Iris data set using average linkage. ... ok Tests leaves_list(Z) on the Iris data set using centroid linkage. ... ok Tests leaves_list(Z) on the Iris data set using complete linkage. ... ok Tests leaves_list(Z) on the Iris data set using median linkage. ... ok Tests leaves_list(Z) on the Iris data set using single linkage. ... ok Tests leaves_list(Z) on the Iris data set using ward linkage. ... ok Tests linkage(Y, 'average') on the tdist data set. ... ok Tests linkage(Y, 'centroid') on the Q data set. ... ok Tests linkage(Y, 'complete') on the Q data set. ... ok Tests linkage(Y, 'complete') on the tdist data set. ... ok Tests linkage(Y) where Y is a 0x4 linkage matrix. Exception expected. ... ok Tests linkage(Y, 'single') on the Q data set. ... ok Tests linkage(Y, 'single') on the tdist data set. ... ok Tests linkage(Y, 'weighted') on the Q data set. ... ok Tests linkage(Y, 'weighted') on the tdist data set. ... ok Tests maxdists(Z) on the Q data set using centroid linkage. ... ok Tests maxdists(Z) on the Q data set using complete linkage. ... ok Tests maxdists(Z) on the Q data set using median linkage. ... ok Tests maxdists(Z) on the Q data set using single linkage. ... ok Tests maxdists(Z) on the Q data set using Ward linkage. ... ok Tests maxdists(Z) on empty linkage. Expecting exception. ... ok Tests maxdists(Z) on linkage with one cluster. ... ok Tests maxinconsts(Z, R) on the Q data set using centroid linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using complete linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using median linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using single linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using Ward linkage. ... ok Tests maxinconsts(Z, R) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxinconsts(Z, R) on empty linkage. Expecting exception. ... ok Tests maxinconsts(Z, R) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 0) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 0) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 1) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. 
Expecting exception. ... ok Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok Tests maxRstat(Z, R, -1). Expecting exception. ... ok Tests maxRstat(Z, R, 4). Expecting exception. ... ok Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests num_obs_linkage(Z) with empty linkage. ... ok Tests to_mlab_linkage on linkage array with multiple rows. ... ok Tests to_mlab_linkage on empty linkage array. ... ok Tests to_mlab_linkage on linkage array with single row. ... ok test_hierarchy.load_testing_files ... ok Ticket #505. ... ok Testing that kmeans2 init methods work. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 and its results. ... ok Regression test for #546: fail when k arg is 0. ... ok This will cause kmean to have a cluster with no points. ... ok test_kmeans_simple (test_vq.TestKMean) ... ok test_large_features (test_vq.TestKMean) ... ok test_py_vq (test_vq.TestVq) ... ok test_py_vq2 (test_vq.TestVq) ... ok test_vq (test_vq.TestVq) ... ok Test special rank 1 vq algo, python implementation. ... ok test_definition (test_basic.TestDoubleFFT) ... ok test_djbfft (test_basic.TestDoubleFFT) ... ok test_n_argument_real (test_basic.TestDoubleFFT) ... ok test_definition (test_basic.TestDoubleIFFT) ... ok test_definition_real (test_basic.TestDoubleIFFT) ... ok test_djbfft (test_basic.TestDoubleIFFT) ... ok test_random_complex (test_basic.TestDoubleIFFT) ... ok test_random_real (test_basic.TestDoubleIFFT) ... ok test_size_accuracy (test_basic.TestDoubleIFFT) ... ok test_axes_argument (test_basic.TestFftn) ... ok test_definition (test_basic.TestFftn) ... ok test_shape_argument (test_basic.TestFftn) ... ok Test that fftn raises ValueError when s.shape is longer than x.shape ... ok test_shape_axes_argument (test_basic.TestFftn) ... ok test_shape_axes_argument2 (test_basic.TestFftn) ... ok test_definition (test_basic.TestFftnSingle) ... ok test_size_accuracy (test_basic.TestFftnSingle) ... 
ok test_definition (test_basic.TestIRFFTDouble) ... ok test_djbfft (test_basic.TestIRFFTDouble) ... ok test_random_real (test_basic.TestIRFFTDouble) ... ok test_size_accuracy (test_basic.TestIRFFTDouble) ... ok test_definition (test_basic.TestIRFFTSingle) ... ok test_djbfft (test_basic.TestIRFFTSingle) ... ok test_random_real (test_basic.TestIRFFTSingle) ... ok test_size_accuracy (test_basic.TestIRFFTSingle) ... ok test_definition (test_basic.TestIfftnDouble) ... ok test_random_complex (test_basic.TestIfftnDouble) ... ok test_definition (test_basic.TestIfftnSingle) ... ok test_random_complex (test_basic.TestIfftnSingle) ... ok test_complex (test_basic.TestLongDoubleFailure) ... ok test_real (test_basic.TestLongDoubleFailure) ... ok test_basic.TestOverwrite.test_fft ... ok test_basic.TestOverwrite.test_fftn ... ok test_basic.TestOverwrite.test_ifft ... ok test_basic.TestOverwrite.test_ifftn ... ok test_basic.TestOverwrite.test_irfft ... ok test_basic.TestOverwrite.test_rfft ... ok test_definition (test_basic.TestRFFTDouble) ... ok test_djbfft (test_basic.TestRFFTDouble) ... ok test_definition (test_basic.TestRFFTSingle) ... ok test_djbfft (test_basic.TestRFFTSingle) ... ok test_definition (test_basic.TestSingleFFT) ... ok test_djbfft (test_basic.TestSingleFFT) ... ok test_n_argument_real (test_basic.TestSingleFFT) ... ok test_notice (test_basic.TestSingleFFT) ... KNOWNFAIL: single-precision FFT implementation is partially disabled, until accuracy issues with large prime powers are resolved test_definition (test_basic.TestSingleIFFT) ... ok test_definition_real (test_basic.TestSingleIFFT) ... ok test_djbfft (test_basic.TestSingleIFFT) ... ok test_random_complex (test_basic.TestSingleIFFT) ... ok test_random_real (test_basic.TestSingleIFFT) ... ok test_size_accuracy (test_basic.TestSingleIFFT) ... ok fft returns wrong result with axes parameter. ... ok test_definition (test_helper.TestFFTFreq) ... ok test_definition (test_helper.TestFFTShift) ... ok test_inverse (test_helper.TestFFTShift) ... ok test_definition (test_helper.TestRFFTFreq) ... ok test_definition (test_pseudo_diffs.TestDiff) ... ok test_expr (test_pseudo_diffs.TestDiff) ... ok test_expr_large (test_pseudo_diffs.TestDiff) ... ok test_int (test_pseudo_diffs.TestDiff) ... ok test_period (test_pseudo_diffs.TestDiff) ... ok test_random_even (test_pseudo_diffs.TestDiff) ... ok test_random_odd (test_pseudo_diffs.TestDiff) ... ok test_sin (test_pseudo_diffs.TestDiff) ... ok test_zero_nyquist (test_pseudo_diffs.TestDiff) ... ok test_definition (test_pseudo_diffs.TestHilbert) ... ok test_random_even (test_pseudo_diffs.TestHilbert) ... ok test_random_odd (test_pseudo_diffs.TestHilbert) ... ok test_tilbert_relation (test_pseudo_diffs.TestHilbert) ... ok test_definition (test_pseudo_diffs.TestIHilbert) ... ok test_itilbert_relation (test_pseudo_diffs.TestIHilbert) ... ok test_definition (test_pseudo_diffs.TestITilbert) ... ok test_pseudo_diffs.TestOverwrite.test_cc_diff ... ok test_pseudo_diffs.TestOverwrite.test_cs_diff ... ok test_pseudo_diffs.TestOverwrite.test_diff ... ok test_pseudo_diffs.TestOverwrite.test_hilbert ... ok test_pseudo_diffs.TestOverwrite.test_itilbert ... ok test_pseudo_diffs.TestOverwrite.test_sc_diff ... ok test_pseudo_diffs.TestOverwrite.test_shift ... ok test_pseudo_diffs.TestOverwrite.test_ss_diff ... ok test_pseudo_diffs.TestOverwrite.test_tilbert ... ok test_definition (test_pseudo_diffs.TestShift) ... ok test_definition (test_pseudo_diffs.TestTilbert) ... ok test_random_even (test_pseudo_diffs.TestTilbert) ... 
ok test_random_odd (test_pseudo_diffs.TestTilbert) ... ok test_axis (test_real_transforms.TestDCTIDouble) ... ok test_definition (test_real_transforms.TestDCTIDouble) ... ok test_axis (test_real_transforms.TestDCTIFloat) ... ok test_definition (test_real_transforms.TestDCTIFloat) ... ok test_axis (test_real_transforms.TestDCTIIDouble) ... ok test_definition (test_real_transforms.TestDCTIIDouble) ... ok Test correspondance with matlab (orthornomal mode). ... ok test_axis (test_real_transforms.TestDCTIIFloat) ... ok test_definition (test_real_transforms.TestDCTIIFloat) ... ok Test correspondance with matlab (orthornomal mode). ... ok test_axis (test_real_transforms.TestDCTIIIDouble) ... ok test_definition (test_real_transforms.TestDCTIIIDouble) ... ok Test orthornomal mode. ... ok test_axis (test_real_transforms.TestDCTIIIFloat) ... ok test_definition (test_real_transforms.TestDCTIIIFloat) ... ok Test orthornomal mode. ... ok test_definition (test_real_transforms.TestIDCTIDouble) ... ok test_definition (test_real_transforms.TestIDCTIFloat) ... ok test_definition (test_real_transforms.TestIDCTIIDouble) ... ok test_definition (test_real_transforms.TestIDCTIIFloat) ... ok test_definition (test_real_transforms.TestIDCTIIIDouble) ... ok test_definition (test_real_transforms.TestIDCTIIIFloat) ... ok test_real_transforms.TestOverwrite.test_dct ... ok test_real_transforms.TestOverwrite.test_idct ... ok Check the dop853 solver ... ok Check the dopri5 solver ... ok Check the vode solver ... ok Check the dop853 solver ... ok Check the dopri5 solver ... ok Check the vode solver ... ok Check the zvode solver ... ok test_odeint (test_integrate.TestOdeint) ... ok test_algebraic_log_weight (test_quadpack.TestQuad) ... ok test_cauchypv_weight (test_quadpack.TestQuad) ... ok test_cosine_weighted_infinite (test_quadpack.TestQuad) ... ok test_double_integral (test_quadpack.TestQuad) ... ok test_indefinite (test_quadpack.TestQuad) ... ok test_sine_weighted_finite (test_quadpack.TestQuad) ... ok test_sine_weighted_infinite (test_quadpack.TestQuad) ... ok test_singular (test_quadpack.TestQuad) ... ok test_triple_integral (test_quadpack.TestQuad) ... ok test_typical (test_quadpack.TestQuad) ... ok Test the first few degrees, for evenly spaced points. ... ok Test newton_cotes with points that are not evenly spaced. ... ok test_non_dtype (test_quadrature.TestQuadrature) ... ok test_quadrature (test_quadrature.TestQuadrature) ... ok test_quadrature_rtol (test_quadrature.TestQuadrature) ... ok test_romb (test_quadrature.TestQuadrature) ... ok test_romberg (test_quadrature.TestQuadrature) ... ok test_romberg_rtol (test_quadrature.TestQuadrature) ... ok test_bilinearity (test_fitpack.TestLSQBivariateSpline) ... ok Test whether empty inputs returns an empty output. Ticket 1014 ... ok test_integral (test_fitpack.TestLSQBivariateSpline) ... ok test_linear_constant (test_fitpack.TestLSQBivariateSpline) ... ok test_defaults (test_fitpack.TestRectBivariateSpline) ... ok test_evaluate (test_fitpack.TestRectBivariateSpline) ... ok test_integral (test_fitpack.TestSmoothBivariateSpline) ... ok test_linear_1d (test_fitpack.TestSmoothBivariateSpline) ... ok test_linear_constant (test_fitpack.TestSmoothBivariateSpline) ... ok Test whether empty input returns an empty output. Ticket 1014 ... ok test_linear_1d (test_fitpack.TestUnivariateSpline) ... ok test_linear_constant (test_fitpack.TestUnivariateSpline) ... ok test_preserve_shape (test_fitpack.TestUnivariateSpline) ... ok test_subclassing (test_fitpack.TestUnivariateSpline) ... 
ok test_interpnd.TestCloughTocher2DInterpolator.test_dense ... ok test_interpnd.TestCloughTocher2DInterpolator.test_linear_smoketest ... ok test_interpnd.TestCloughTocher2DInterpolator.test_quadratic_smoketest ... ok test_interpnd.TestEstimateGradients2DGlobal.test_smoketest ... ok test_interpnd.TestLinearNDInterpolation.test_complex_smoketest ... ok test_interpnd.TestLinearNDInterpolation.test_smoketest ... ok test_interpnd.TestLinearNDInterpolation.test_smoketest_alternate ... ok test_interpnd.TestLinearNDInterpolation.test_square ... ok test_interpolate.TestInterp1D.test_bounds ... ok test_interpolate.TestInterp1D.test_complex ... ok Check the actual implementation of spline interpolation. ... ok Check that the attributes are initialized appropriately by the ... ok Check the actual implementation of linear interpolation. ... ok test_interpolate.TestInterp1D.test_nd ... ok test_interpolate.TestInterp1D.test_nd_zero_spline ... KNOWNFAIL: zero-order splines fail for the last point Check the actual implementation of nearest-neighbour interpolation. ... ok Make sure that appropriate exceptions are raised when invalid values ... ok Check the actual implementation of zero-order spline interpolation. ... KNOWNFAIL: zero-order splines fail for the last point test_interp2d (test_interpolate.TestInterp2D) ... ok test_interp2d_meshgrid_input (test_interpolate.TestInterp2D) ... ok test_lagrange (test_interpolate.TestLagrange) ... ok test_block_average_above (test_interpolate_wrapper.Test) ... ok test_linear (test_interpolate_wrapper.Test) ... ok test_linear2 (test_interpolate_wrapper.Test) ... ok test_logarithmic (test_interpolate_wrapper.Test) ... ok test_nearest (test_interpolate_wrapper.Test) ... ok test_ndgriddata.TestGriddata.test_1d ... ok test_ndgriddata.TestGriddata.test_alternative_call ... ok test_ndgriddata.TestGriddata.test_complex_2d ... ok test_ndgriddata.TestGriddata.test_fill_value ... ok test_ndgriddata.TestGriddata.test_multipoint_2d ... ok test_ndgriddata.TestGriddata.test_multivalue_2d ... ok test_append (test_polyint.CheckBarycentric) ... ok test_delayed (test_polyint.CheckBarycentric) ... ok test_lagrange (test_polyint.CheckBarycentric) ... ok test_scalar (test_polyint.CheckBarycentric) ... ok test_shapes_1d_vectorvalue (test_polyint.CheckBarycentric) ... ok test_shapes_scalarvalue (test_polyint.CheckBarycentric) ... ok test_shapes_vectorvalue (test_polyint.CheckBarycentric) ... ok test_vector (test_polyint.CheckBarycentric) ... ok test_wrapper (test_polyint.CheckBarycentric) ... ok test_derivative (test_polyint.CheckKrogh) ... ok test_derivatives (test_polyint.CheckKrogh) ... ok test_empty (test_polyint.CheckKrogh) ... ok test_hermite (test_polyint.CheckKrogh) ... ok test_high_derivative (test_polyint.CheckKrogh) ... ok test_lagrange (test_polyint.CheckKrogh) ... ok test_low_derivatives (test_polyint.CheckKrogh) ... ok test_scalar (test_polyint.CheckKrogh) ... ok test_shapes_1d_vectorvalue (test_polyint.CheckKrogh) ... ok test_shapes_scalarvalue (test_polyint.CheckKrogh) ... ok test_shapes_scalarvalue_derivative (test_polyint.CheckKrogh) ... ok test_shapes_vectorvalue (test_polyint.CheckKrogh) ... ok test_shapes_vectorvalue_derivative (test_polyint.CheckKrogh) ... ok test_vector (test_polyint.CheckKrogh) ... ok test_wrapper (test_polyint.CheckKrogh) ... ok test_construction (test_polyint.CheckPiecewise) ... ok test_derivative (test_polyint.CheckPiecewise) ... ok test_derivatives (test_polyint.CheckPiecewise) ... ok test_incremental (test_polyint.CheckPiecewise) ... 
ok test_scalar (test_polyint.CheckPiecewise) ... ok test_shapes_scalarvalue (test_polyint.CheckPiecewise) ... ok test_shapes_scalarvalue_derivative (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue_1d (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue_derivative (test_polyint.CheckPiecewise) ... ok test_vector (test_polyint.CheckPiecewise) ... ok test_wrapper (test_polyint.CheckPiecewise) ... ok test_exponential (test_polyint.CheckTaylor) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_regularity('multiquadric', 0.050000000000000003) ... ok test_rbf.test_rbf_regularity('inverse multiquadric', 0.02) ... ok test_rbf.test_rbf_regularity('gaussian', 0.01) ... ok test_rbf.test_rbf_regularity('cubic', 0.14999999999999999) ... ok test_rbf.test_rbf_regularity('quintic', 0.10000000000000001) ... ok test_rbf.test_rbf_regularity('thin-plate', 0.10000000000000001) ... ok test_rbf.test_rbf_regularity('linear', 0.20000000000000001) ... ok Check that the Rbf class can be constructed with the default ... ok Check that the Rbf class can be constructed with function=callable. ... ok Ticket #629 ... ok Parsing trivial file with nothing. ... ok Parsing trivial file with some comments in the data section. ... ok Test parsing wrong type of attribute from their value. ... ok Parsing trivial header with nothing. ... ok Test parsing type of attribute from their value. ... ok test_missing (test_arffread.MissingDataTest) ... ok test_byteordercodes.test_native ... ok test_byteordercodes.test_to_numpy ... ok test_mio.test_load('double', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.4_GLNX86.mat'], {'testdouble': array([[ 0.? ? ? ? ,? 0.78539816,? 1.57079633,? 2.35619449,? 3.14159265, ... 
ok test_mio.test_load('string', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_7.4_GLNX86.mat'], {'teststring': array([u'"Do nine men interpret?" "Nine men," I nod.'], ... ok test_mio.test_load('complex', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_7.4_GLNX86.mat'], {'testcomplex': array([[? 1.00000000e+00 +0.00000000e+00j, ... ok test_mio.test_load('matrix', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_7.4_GLNX86.mat'], {'testmatrix': array([[ 1.,? 2.,? 3.,? 4.,? 5.], ... ok test_mio.test_load('sparse', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.4_GLNX86.mat'], {'testsparse': <3x5 sparse matrix of type '' ... 
ok test_mio.test_load('sparsecomplex', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.4_GLNX86.mat'], {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_load('multi', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmulti_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmulti_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmulti_7.4_GLNX86.mat'], {'a': array([[ 1.,? 2.,? 3.,? 4.,? 5.], ... ok test_mio.test_load('minus', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_7.4_GLNX86.mat'], {'testminus': array([[-1]])}) ... ok test_mio.test_load('onechar', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_7.4_GLNX86.mat'], {'testonechar': array([u'r'], ... ok test_mio.test_load('cell', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_7.4_GLNX86.mat'], {'testcell': array([[[u'This cell contains this string and 3 arrays of increasing length'], ... 
ok test_mio.test_load('scalarcell', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testscalarcell_7.4_GLNX86.mat'], {'testscalarcell': array([[[[1]]]], dtype=object)}) ... ok test_mio.test_load('emptycell', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_5.3_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_7.4_GLNX86.mat'], {'testemptycell': array([[[[1]], [[2]], [], [], [[3]]]], dtype=object)}) ... ok test_mio.test_load('stringarray', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_7.4_GLNX86.mat'], {'teststringarray': array([u'one? ', u'two? ', u'three'], ... ok test_mio.test_load('3dmatrix', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_7.4_GLNX86.mat'], {'test3dmatrix': array([[[ 1,? 7, 13, 19], ... ok test_mio.test_load('struct', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_7.4_GLNX86.mat'], {'teststruct': array([[ ([u'Rats live on no evil star.'], [[1.4142135623730951, 2.7182818284590451, 3.1415926535897931]], [[(1.4142135623730951+1.4142135623730951j), (2.7182818284590451+2.7182818284590451j), (3.1415926535897931+3.1415926535897931j)]])]], ... 
ok test_mio.test_load('cellnest', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_7.4_GLNX86.mat'], {'testcellnest': array([[[[1]], [[[[2]] [[3]] [[[[4]] [[5]]]]]]]], dtype=object)}) ... ok test_mio.test_load('structnest', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_7.4_GLNX86.mat'], {'teststructnest': array([[([[1]], [[(array([u'number 3'], ... ok test_mio.test_load('structarr', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_7.4_GLNX86.mat'], {'teststructarr': array([[([[1]], [[2]]), ([u'number 1'], [u'number 2'])]], ... ok test_mio.test_load('object', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_7.4_GLNX86.mat'], {'testobject': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]])]], ... ok test_mio.test_load('unicode', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testunicode_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testunicode_7.4_GLNX86.mat'], {'testunicode': array([ u'Japanese: \n\u3059\u3079\u3066\u306e\u4eba\u9593\u306f\u3001\u751f\u307e\u308c\u306a\u304c\u3089\u306b\u3057\u3066\u81ea\u7531\u3067\u3042\u308a\u3001\n\u304b\u3064\u3001\u5c0a\u53b3\u3068\u6a29\u5229\u3068 \u306b\u3064\u3044\u3066\u5e73\u7b49\u3067\u3042\u308b\u3002\n\u4eba\u9593\u306f\u3001\u7406\u6027\u3068\u826f\u5fc3\u3068\u3092\u6388\u3051\u3089\u308c\u3066\u304a\u308a\u3001\n\u4e92\u3044\u306b\u540c\u80de\u306e\u7cbe\u795e\u3092\u3082\u3063\u3066\u884c\u52d5\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002'], ... 
ok test_mio.test_load('sparse', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.4_GLNX86.mat'], {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_load('sparsecomplex', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.4_GLNX86.mat'], {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('double_round_trip', {'testdouble': array([[ 0.? ? ? ? ,? 0.78539816,? 1.57079633,? 2.35619449,? 3.14159265, ... ok test_mio.test_round_trip('string_round_trip', {'teststring': array([u'"Do nine men interpret?" "Nine men," I nod.'], ... ok test_mio.test_round_trip('complex_round_trip', {'testcomplex': array([[? 1.00000000e+00 +0.00000000e+00j, ... ok test_mio.test_round_trip('matrix_round_trip', {'testmatrix': array([[ 1.,? 2.,? 3.,? 4.,? 5.], ... ok test_mio.test_round_trip('sparse_round_trip', {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('sparsecomplex_round_trip', {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('multi_round_trip', {'a': array([[ 1.,? 2.,? 3.,? 4.,? 5.], ... ok test_mio.test_round_trip('minus_round_trip', {'testminus': array([[-1]])}, '4') ... ok test_mio.test_round_trip('onechar_round_trip', {'testonechar': array([u'r'], ... ok test_mio.test_round_trip('cell_round_trip', {'testcell': array([[[u'This cell contains this string and 3 arrays of increasing length'], ... ok test_mio.test_round_trip('scalarcell_round_trip', {'testscalarcell': array([[[[1]]]], dtype=object)}, '5') ... ok test_mio.test_round_trip('emptycell_round_trip', {'testemptycell': array([[[[1]], [[2]], [], [], [[3]]]], dtype=object)}, '5') ... ok test_mio.test_round_trip('stringarray_round_trip', {'teststringarray': array([u'one? ', u'two? ', u'three'], ... ok test_mio.test_round_trip('3dmatrix_round_trip', {'test3dmatrix': array([[[ 1,? 7, 13, 19], ... ok test_mio.test_round_trip('struct_round_trip', {'teststruct': array([[ ([u'Rats live on no evil star.'], [[1.4142135623730951, 2.7182818284590451, 3.1415926535897931]], [[(1.4142135623730951+1.4142135623730951j), (2.7182818284590451+2.7182818284590451j), (3.1415926535897931+3.1415926535897931j)]])]], ... 
ok test_mio.test_round_trip('cellnest_round_trip', {'testcellnest': array([[[[1]], [[[[2]] [[3]] [[[[4]] [[5]]]]]]]], dtype=object)}, '5') ... ok test_mio.test_round_trip('structnest_round_trip', {'teststructnest': array([[([[1]], [[(array([u'number 3'], ... ok test_mio.test_round_trip('structarr_round_trip', {'teststructarr': array([[([[1]], [[2]]), ([u'number 1'], [u'number 2'])]], ... ok test_mio.test_round_trip('object_round_trip', {'testobject': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]])]], ... ok test_mio.test_round_trip('unicode_round_trip', {'testunicode': array([ u'Japanese: \n\u3059\u3079\u3066\u306e\u4eba\u9593\u306f\u3001\u751f\u307e\u308c\u306a\u304c\u3089\u306b\u3057\u3066\u81ea\u7531\u3067\u3042\u308a\u3001\n\u304b\u3064\u3001\u5c0a\u53b3\u3068\u6a29\u5229\u3068 \u306b\u3064\u3044\u3066\u5e73\u7b49\u3067\u3042\u308b\u3002\n\u4eba\u9593\u306f\u3001\u7406\u6027\u3068\u826f\u5fc3\u3068\u3092\u6388\u3051\u3089\u308c\u3066\u304a\u308a\u3001\n\u4e92\u3044\u306b\u540c\u80de\u306e\u7cbe\u795e\u3092\u3082\u3063\u3066\u884c\u52d5\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002'], ... ok test_mio.test_round_trip('sparse_round_trip', {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('sparsecomplex_round_trip', {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('objectarray_round_trip', {'testobjectarray': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]]), ... ok test_mio.test_gzip_simple ... ok test_mio.test_multiple_open ... ok test_mio.test_mat73 ... ok test_mio.test_warnings(, , '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat') ... ok Regression test for #653. ... ok test_mio.test_structname_len ... ok test_mio.test_4_and_long_field_names_incompatible ... ok test_mio.test_long_field_names ... ok test_mio.test_long_field_names_in_struct ... ok test_mio.test_cell_with_one_thing_in_it ... ok test_mio.test_writer_properties([], []) ... ok test_mio.test_writer_properties(['avar'], ['avar']) ... ok test_mio.test_writer_properties(False, False) ... ok test_mio.test_writer_properties(True, True) ... ok test_mio.test_writer_properties(False, False) ... ok test_mio.test_writer_properties(True, True) ... ok test_mio.test_use_small_element(True,) ... ok test_mio.test_use_small_element(True,) ... ok test_mio.test_save_dict ... ok test_mio.test_1d_shape ... ok test_mio.test_compression(array([[ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., ... ok test_mio.test_compression(array([[ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., ... ok test_mio.test_compression(True,) ... ok test_mio.test_compression(array([[ 1.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., ... ok test_mio.test_compression(array([[ 1.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., ... ok test_mio.test_single_object ... ok test_mio.test_skip_variable(True,) ... ok test_mio.test_skip_variable(True,) ... ok test_mio.test_skip_variable(True,) ... ok test_mio.test_empty_struct ... ok test_mio.test_recarray(array([[ 0.5]]), 0.5) ... ok test_mio.test_recarray(array([u'python'], ... ok test_mio.test_recarray(array([[ 0.5]]), 0.5) ... ok test_mio.test_recarray(array([u'python'], ... ok test_mio.test_recarray(dtype([('f1', '|O4'), ('f2', '|O4')]), dtype([('f1', '|O4'), ('f2', '|O4')])) ... ok test_mio.test_recarray(array([[ 99.]]), 99) ... 
ok test_mio.test_recarray(array([u'not perl'], ... ok test_mio.test_save_object ... ok test_mio.test_read_opts ... ok test_mio.test_empty_string ... ok test_mio.test_mat4_3d ... ok test_mio.test_func_read(True,) ... ok test_mio.test_func_read(, >, {'__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, Platform: GLNX86, Created on: Fri Feb 20 15:26:59 2009', 'testfunc': MatlabFunction([[ ([u'/opt/matlab-2007a'], [u'/'], [u'@'], [[(array([u'afunc'], ... ok test_mio.test_mat_dtype('u', 'u') ... ok test_mio.test_mat_dtype('f', 'f') ... ok test_mio.test_sparse_in_struct(matrix([[ 1.,? 0.,? 0.,? 0.], ... ok test_mio.test_mat_struct_squeeze ... ok test_mio.test_str_round ... ok test_mio.test_fieldnames ... ok test_mio.test_loadmat_varnames ... ok test_mio.test_round_types ... ok test_mio.test_varmats_from_mat ... ok test_mio5_utils.test_byteswap(16777216L, 16777216L) ... ok test_mio5_utils.test_byteswap(1L, 1L) ... ok test_mio5_utils.test_byteswap(65536L, 65536L) ... ok test_mio5_utils.test_byteswap(256L, 256L) ... ok test_mio5_utils.test_byteswap(256L, 256L) ... ok test_mio5_utils.test_byteswap(65536L, 65536L) ... ok test_mio5_utils.test_read_tag(, ) ... ok test_mio5_utils.test_read_tag(, ) ... ok test_mio5_utils.test_read_stream('\x05\x00\x04\x00\x01\x00\x00\x00', '\x05\x00\x04\x00\x01\x00\x00\x00') ... ok test_mio5_utils.test_read_numeric(1, True) ... ok test_mio5_utils.test_read_numeric(0, False) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(0, False) ... ok test_mio5_utils.test_read_numeric(1, True) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok test_mio5_utils.test_read_numeric(1, True) ... ok test_mio5_utils.test_read_numeric(0, False) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(0, False) ... ok test_mio5_utils.test_read_numeric(1, True) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(array([1]), 1) ... ok test_mio5_utils.test_read_numeric(1, True) ... ok test_mio5_utils.test_read_numeric(0, False) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... 
ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(0, False) ... ok test_mio5_utils.test_read_numeric(1, True) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok test_mio5_utils.test_read_numeric_writeable(True,) ... ok test_mio5_utils.test_zero_byte_string ... ok test_mio_funcs.test_jottings ... ok test_mio_utils.test_cproduct(1, 1) ... ok test_mio_utils.test_cproduct(1, 1) ... ok test_mio_utils.test_cproduct(3, 3) ... ok test_mio_utils.test_cproduct(3, 3) ... ok test_mio_utils.test_squeeze_element(array([ 0.,? 0.,? 0.]), array([ 0.,? 0.,? 0.])) ... ok test_mio_utils.test_squeeze_element(True,) ... ok test_mio_utils.test_squeeze_element(True,) ... ok test_mio_utils.test_chars_strings(array([u'learn ', u'python', u'fast? ', u'here? '], ... ok test_mio_utils.test_chars_strings(array([[u'learn ', u'python'], ... ok test_mio_utils.test_chars_strings(array([[[u'learn ', u'python'], ... ok test_mio_utils.test_chars_strings(array([u'learn ', u'python', u'fast? ', u'here? '], ... ok test_mio_utils.test_chars_strings(array([u''], ... ok test_pathological.test_multiple_fieldnames ... ok test_streams.test_make_stream ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(5, 5) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(7, 7) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(6, 6) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(5, 5) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(7, 7) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(6, 6) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(5, 5) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(7, 7) ... ok test_streams.test_tell_seek(0, 0) ... ok test_streams.test_tell_seek(6, 6) ... ok test_streams.test_read('a\x00string', 'a\x00string') ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('ring', 'ring') ... ok test_streams.test_read(, , , 2) ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('ring', 'ring') ... ok test_streams.test_read(, , , 2) ... ok test_streams.test_read('a\x00string', 'a\x00string') ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('ring', 'ring') ... ok test_streams.test_read(, , , 2) ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('ring', 'ring') ... ok test_streams.test_read(, , , 2) ... 
ok test_streams.test_read('a\x00string', 'a\x00string') ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('ring', 'ring') ... ok test_streams.test_read(, , , 2) ... ok test_streams.test_read('a\x00st', 'a\x00st') ... ok test_streams.test_read('ring', 'ring') ... ok test_streams.test_read(, , , 2) ... ok test_idl.TestArrayDimensions.test_1d ... ok test_idl.TestArrayDimensions.test_2d ... ok test_idl.TestArrayDimensions.test_3d ... ok test_idl.TestArrayDimensions.test_4d ... ok test_idl.TestArrayDimensions.test_5d ... ok test_idl.TestArrayDimensions.test_6d ... ok test_idl.TestArrayDimensions.test_7d ... ok test_idl.TestArrayDimensions.test_8d ... ok test_idl.TestCompressed.test_byte ... ok test_idl.TestCompressed.test_bytes ... ok test_idl.TestCompressed.test_complex32 ... ok test_idl.TestCompressed.test_complex64 ... ok test_idl.TestCompressed.test_compressed ... ok test_idl.TestCompressed.test_float32 ... ok test_idl.TestCompressed.test_float64 ... ok test_idl.TestCompressed.test_heap_pointer ... ok test_idl.TestCompressed.test_int16 ... ok test_idl.TestCompressed.test_int32 ... ok test_idl.TestCompressed.test_int64 ... ok test_idl.TestCompressed.test_object_reference ... ok test_idl.TestCompressed.test_structure ... ok test_idl.TestCompressed.test_uint16 ... ok test_idl.TestCompressed.test_uint32 ... ok test_idl.TestCompressed.test_uint64 ... ok test_idl.TestIdict.test_idict ... ok test_idl.TestPointers.test_pointers ... ok test_idl.TestScalars.test_byte ... ok test_idl.TestScalars.test_bytes ... ok test_idl.TestScalars.test_complex32 ... ok test_idl.TestScalars.test_complex64 ... ok test_idl.TestScalars.test_float32 ... ok test_idl.TestScalars.test_float64 ... ok test_idl.TestScalars.test_heap_pointer ... ok test_idl.TestScalars.test_int16 ... ok test_idl.TestScalars.test_int32 ... ok test_idl.TestScalars.test_int64 ... ok test_idl.TestScalars.test_object_reference ... ok test_idl.TestScalars.test_structure ... ok test_idl.TestScalars.test_uint16 ... ok test_idl.TestScalars.test_uint32 ... ok test_idl.TestScalars.test_uint64 ... ok test_idl.TestStructures.test_arrays ... ok test_idl.TestStructures.test_arrays_replicated ... ok test_idl.TestStructures.test_scalars ... ok test_idl.TestStructures.test_scalars_replicated ... ok test_random_rect_real (test_mmio.TestMMIOArray) ... ok test_random_symmetric_real (test_mmio.TestMMIOArray) ... ok test_simple (test_mmio.TestMMIOArray) ... ok test_simple_complex (test_mmio.TestMMIOArray) ... ok test_simple_hermitian (test_mmio.TestMMIOArray) ... ok test_simple_real (test_mmio.TestMMIOArray) ... ok test_simple_rectangular (test_mmio.TestMMIOArray) ... ok test_simple_rectangular_real (test_mmio.TestMMIOArray) ... ok test_simple_skew_symmetric (test_mmio.TestMMIOArray) ... ok test_simple_skew_symmetric_float (test_mmio.TestMMIOArray) ... ok test_simple_symmetric (test_mmio.TestMMIOArray) ... ok test_complex_write_read (test_mmio.TestMMIOCoordinate) ... ok test_empty_write_read (test_mmio.TestMMIOCoordinate) ... ok read a general matrix ... ok read a hermitian matrix ... ok read a skew-symmetric matrix ... ok read a symmetric matrix ... ok read a symmetric pattern matrix ... ok test_real_write_read (test_mmio.TestMMIOCoordinate) ... ok test_sparse_formats (test_mmio.TestMMIOCoordinate) ... ok test_netcdf.test_read_write_files(True,) ... ok test_netcdf.test_read_write_files('Created for a test', 'Created for a test') ... 
ok test_netcdf.test_read_write_files('days since 2008-01-01', 'days since 2008-01-01') ... ok test_netcdf.test_read_write_files((11,), (11,)) ... ok test_netcdf.test_read_write_files(10, 10) ... ok test_netcdf.test_read_write_files(False,) ... ok test_netcdf.test_read_write_files('Created for a test', 'Created for a test') ... ok test_netcdf.test_read_write_files('days since 2008-01-01', 'days since 2008-01-01') ... ok test_netcdf.test_read_write_files((11,), (11,)) ... ok test_netcdf.test_read_write_files(10, 10) ... ok test_netcdf.test_read_write_files(False,) ... ok test_netcdf.test_read_write_files('Created for a test', 'Created for a test') ... ok test_netcdf.test_read_write_files('days since 2008-01-01', 'days since 2008-01-01') ... ok test_netcdf.test_read_write_files((11,), (11,)) ... ok test_netcdf.test_read_write_files(10, 10) ... ok test_netcdf.test_read_write_sio('Created for a test', 'Created for a test') ... ok test_netcdf.test_read_write_sio('days since 2008-01-01', 'days since 2008-01-01') ... ok test_netcdf.test_read_write_sio((11,), (11,)) ... ok test_netcdf.test_read_write_sio(10, 10) ... ok test_netcdf.test_read_write_sio(, , , 'r', True) ... ok test_netcdf.test_read_write_sio('Created for a test', 'Created for a test') ... ok test_netcdf.test_read_write_sio('days since 2008-01-01', 'days since 2008-01-01') ... ok test_netcdf.test_read_write_sio((11,), (11,)) ... ok test_netcdf.test_read_write_sio(10, 10) ... ok test_netcdf.test_read_write_sio(2, 2) ... ok test_netcdf.test_read_write_sio('Created for a test', 'Created for a test') ... ok test_netcdf.test_read_write_sio('days since 2008-01-01', 'days since 2008-01-01') ... ok test_netcdf.test_read_write_sio((11,), (11,)) ... ok test_netcdf.test_read_write_sio(10, 10) ... ok test_netcdf.test_read_write_sio(2, 2) ... ok test_netcdf.test_read_example_data ... ok test_cast_to_fp (test_recaster.TestRecaster) ...? Warning (from warnings module): ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/recaster.py", line 328 ? ? test_arr = arr.astype(T) ComplexWarning: Casting complex values to real discards the imaginary part ok test_init (test_recaster.TestRecaster) ... ok test_recasts (test_recaster.TestRecaster) ...? Warning (from warnings module): ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/recaster.py", line 375 ? ? return arr.astype(idt) ComplexWarning: Casting complex values to real discards the imaginary part ok test_smallest_int_sctype (test_recaster.TestRecaster) ... ok test_wavfile.test_read_1 ... ok test_wavfile.test_read_2 ... ok test_wavfile.test_read_fail ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i2'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i2'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i2'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i2'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i2'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i2'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int16'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int16'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int16'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int16'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int16'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int16'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i4'), 1) ... 
ok test_wavfile.test_write_roundtrip(8000, dtype('>i4'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i4'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i4'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i4'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i4'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int32'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int32'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int32'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int32'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int32'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int32'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i8'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i8'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>i8'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i8'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i8'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>i8'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int64'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int64'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('int64'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int64'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int64'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('int64'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint8'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint8'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint8'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint8'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint8'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint8'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u2'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u2'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u2'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u2'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u2'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u2'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint16'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint16'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint16'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint16'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint16'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint16'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u4'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u4'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u4'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u4'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u4'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u4'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint32'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint32'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint32'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint32'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint32'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint32'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u8'), 1) ... 
ok test_wavfile.test_write_roundtrip(8000, dtype('>u8'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('>u8'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u8'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u8'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('>u8'), 5) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint64'), 1) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint64'), 2) ... ok test_wavfile.test_write_roundtrip(8000, dtype('uint64'), 5) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint64'), 1) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint64'), 2) ... ok test_wavfile.test_write_roundtrip(32000, dtype('uint64'), 5) ... ok test_blas (test_blas.TestBLAS) ... ok test_axpy (test_blas.TestCBLAS1Simple) ... ok test_amax (test_blas.TestFBLAS1Simple) ... ok test_asum (test_blas.TestFBLAS1Simple) ... ok test_axpy (test_blas.TestFBLAS1Simple) ... ok test_copy (test_blas.TestFBLAS1Simple) ... ok test_dot (test_blas.TestFBLAS1Simple) ... ok test_nrm2 (test_blas.TestFBLAS1Simple) ... ok test_scal (test_blas.TestFBLAS1Simple) ... ok test_swap (test_blas.TestFBLAS1Simple) ... ok test_gemv (test_blas.TestFBLAS2Simple) ... ok test_ger (test_blas.TestFBLAS2Simple) ... ok test_gemm (test_blas.TestFBLAS3Simple) ... ok test_gemm2 (test_blas.TestFBLAS3Simple) ... ok test_default_a (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCaxpy) ... ok test_x_and_y_stride (test_fblas.TestCaxpy) ... ok test_x_bad_size (test_fblas.TestCaxpy) ... ok test_x_stride (test_fblas.TestCaxpy) ... ok test_y_bad_size (test_fblas.TestCaxpy) ... ok test_y_stride (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCcopy) ... ok test_x_and_y_stride (test_fblas.TestCcopy) ... ok test_x_bad_size (test_fblas.TestCcopy) ... ok test_x_stride (test_fblas.TestCcopy) ... ok test_y_bad_size (test_fblas.TestCcopy) ... ok test_y_stride (test_fblas.TestCcopy) ... ok test_default_beta_y (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCgemv) ... ok test_simple_transpose (test_fblas.TestCgemv) ... ok test_simple_transpose_conj (test_fblas.TestCgemv) ... ok test_x_stride (test_fblas.TestCgemv) ... ok test_x_stride_assert (test_fblas.TestCgemv) ... ok test_x_stride_transpose (test_fblas.TestCgemv) ... ok test_y_stride (test_fblas.TestCgemv) ... ok test_y_stride_assert (test_fblas.TestCgemv) ... ok test_y_stride_transpose (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCscal) ... ok test_x_bad_size (test_fblas.TestCscal) ... ok test_x_stride (test_fblas.TestCscal) ... ok test_simple (test_fblas.TestCswap) ... ok test_x_and_y_stride (test_fblas.TestCswap) ... ok test_x_bad_size (test_fblas.TestCswap) ... ok test_x_stride (test_fblas.TestCswap) ... ok test_y_bad_size (test_fblas.TestCswap) ... ok test_y_stride (test_fblas.TestCswap) ... ok test_default_a (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDaxpy) ... ok test_x_and_y_stride (test_fblas.TestDaxpy) ... ok test_x_bad_size (test_fblas.TestDaxpy) ... ok test_x_stride (test_fblas.TestDaxpy) ... ok test_y_bad_size (test_fblas.TestDaxpy) ... ok test_y_stride (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDcopy) ... ok test_x_and_y_stride (test_fblas.TestDcopy) ... ok test_x_bad_size (test_fblas.TestDcopy) ... ok test_x_stride (test_fblas.TestDcopy) ... ok test_y_bad_size (test_fblas.TestDcopy) ... ok test_y_stride (test_fblas.TestDcopy) ... ok test_default_beta_y (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDgemv) ... 
ok test_simple_transpose (test_fblas.TestDgemv) ... ok test_simple_transpose_conj (test_fblas.TestDgemv) ... ok test_x_stride (test_fblas.TestDgemv) ... ok test_x_stride_assert (test_fblas.TestDgemv) ... ok test_x_stride_transpose (test_fblas.TestDgemv) ... ok test_y_stride (test_fblas.TestDgemv) ... ok test_y_stride_assert (test_fblas.TestDgemv) ... ok test_y_stride_transpose (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDscal) ... ok test_x_bad_size (test_fblas.TestDscal) ... ok test_x_stride (test_fblas.TestDscal) ... ok test_simple (test_fblas.TestDswap) ... ok test_x_and_y_stride (test_fblas.TestDswap) ... ok test_x_bad_size (test_fblas.TestDswap) ... ok test_x_stride (test_fblas.TestDswap) ... ok test_y_bad_size (test_fblas.TestDswap) ... ok test_y_stride (test_fblas.TestDswap) ... ok test_default_a (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestSaxpy) ... ok test_x_and_y_stride (test_fblas.TestSaxpy) ... ok test_x_bad_size (test_fblas.TestSaxpy) ... ok test_x_stride (test_fblas.TestSaxpy) ... ok test_y_bad_size (test_fblas.TestSaxpy) ... ok test_y_stride (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestScopy) ... ok test_x_and_y_stride (test_fblas.TestScopy) ... ok test_x_bad_size (test_fblas.TestScopy) ... ok test_x_stride (test_fblas.TestScopy) ... ok test_y_bad_size (test_fblas.TestScopy) ... ok test_y_stride (test_fblas.TestScopy) ... ok test_default_beta_y (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSgemv) ... ok test_simple_transpose (test_fblas.TestSgemv) ... ok test_simple_transpose_conj (test_fblas.TestSgemv) ... ok test_x_stride (test_fblas.TestSgemv) ... ok test_x_stride_assert (test_fblas.TestSgemv) ... ok test_x_stride_transpose (test_fblas.TestSgemv) ... ok test_y_stride (test_fblas.TestSgemv) ... ok test_y_stride_assert (test_fblas.TestSgemv) ... ok test_y_stride_transpose (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSscal) ... ok test_x_bad_size (test_fblas.TestSscal) ... ok test_x_stride (test_fblas.TestSscal) ... ok test_simple (test_fblas.TestSswap) ... ok test_x_and_y_stride (test_fblas.TestSswap) ... ok test_x_bad_size (test_fblas.TestSswap) ... ok test_x_stride (test_fblas.TestSswap) ... ok test_y_bad_size (test_fblas.TestSswap) ... ok test_y_stride (test_fblas.TestSswap) ... ok test_default_a (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZaxpy) ... ok test_x_and_y_stride (test_fblas.TestZaxpy) ... ok test_x_bad_size (test_fblas.TestZaxpy) ... ok test_x_stride (test_fblas.TestZaxpy) ... ok test_y_bad_size (test_fblas.TestZaxpy) ... ok test_y_stride (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZcopy) ... ok test_x_and_y_stride (test_fblas.TestZcopy) ... ok test_x_bad_size (test_fblas.TestZcopy) ... ok test_x_stride (test_fblas.TestZcopy) ... ok test_y_bad_size (test_fblas.TestZcopy) ... ok test_y_stride (test_fblas.TestZcopy) ... ok test_default_beta_y (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZgemv) ... ok test_simple_transpose (test_fblas.TestZgemv) ... ok test_simple_transpose_conj (test_fblas.TestZgemv) ... ok test_x_stride (test_fblas.TestZgemv) ... ok test_x_stride_assert (test_fblas.TestZgemv) ... ok test_x_stride_transpose (test_fblas.TestZgemv) ... ok test_y_stride (test_fblas.TestZgemv) ... ok test_y_stride_assert (test_fblas.TestZgemv) ... ok test_y_stride_transpose (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZscal) ... ok test_x_bad_size (test_fblas.TestZscal) ... ok test_x_stride (test_fblas.TestZscal) ... 
ok test_simple (test_fblas.TestZswap) ... ok test_x_and_y_stride (test_fblas.TestZswap) ... ok test_x_bad_size (test_fblas.TestZswap) ... ok test_x_stride (test_fblas.TestZswap) ... ok test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_clapack_dsyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyev Clapack empty, skip clapack test test_clapack_dsyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr Clapack empty, skip clapack test test_clapack_dsyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr_ranges Clapack empty, skip clapack test test_clapack_ssyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyev Clapack empty, skip clapack test test_clapack_ssyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr Clapack empty, skip clapack test test_clapack_ssyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr_ranges Clapack empty, skip clapack test test_dsyev (test_esv.TestEsv) ... ok test_dsyevr (test_esv.TestEsv) ... ok test_dsyevr_ranges (test_esv.TestEsv) ... ok test_ssyev (test_esv.TestEsv) ... ok test_ssyevr (test_esv.TestEsv) ... ok test_ssyevr_ranges (test_esv.TestEsv) ... ok test_clapack_dsygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_1 Clapack empty, skip flapack test test_clapack_dsygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_2 Clapack empty, skip flapack test test_clapack_dsygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_3 Clapack empty, skip flapack test test_clapack_ssygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_1 Clapack empty, skip flapack test test_clapack_ssygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_2 Clapack empty, skip flapack test test_clapack_ssygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_3 Clapack empty, skip flapack test test_dsygv_1 (test_gesv.TestSygv) ... ok test_dsygv_2 (test_gesv.TestSygv) ... ok test_dsygv_3 (test_gesv.TestSygv) ... ok test_ssygv_1 (test_gesv.TestSygv) ... ok test_ssygv_2 (test_gesv.TestSygv) ... ok test_ssygv_3 (test_gesv.TestSygv) ... ok test_clapack_dgebal (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_dgebal Clapack empty, skip flapack test test_clapack_dgehrd (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_dgehrd Clapack empty, skip flapack test test_clapack_sgebal (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_sgebal Clapack empty, skip flapack test test_clapack_sgehrd (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_sgehrd Clapack empty, skip flapack test test_dgebal (test_lapack.TestLapack) ... ok test_dgehrd (test_lapack.TestLapack) ... ok test_sgebal (test_lapack.TestLapack) ... ok test_sgehrd (test_lapack.TestLapack) ... ok test_random (test_basic.TestDet) ... ok test_random_complex (test_basic.TestDet) ... ok test_simple (test_basic.TestDet) ... ok test_simple_complex (test_basic.TestDet) ... ok test_random (test_basic.TestInv) ... ok test_random_complex (test_basic.TestInv) ... ok test_simple (test_basic.TestInv) ... ok test_simple_complex (test_basic.TestInv) ... ok test_random_complex_exact (test_basic.TestLstsq) ... ok test_random_complex_overdet (test_basic.TestLstsq) ... ok test_random_exact (test_basic.TestLstsq) ... ok test_random_overdet (test_basic.TestLstsq) ... ok test_random_overdet_large (test_basic.TestLstsq) ... ok test_simple_exact (test_basic.TestLstsq) ... 
ok test_simple_overdet (test_basic.TestLstsq) ... ok test_simple_overdet_complex (test_basic.TestLstsq) ... ok test_simple_underdet (test_basic.TestLstsq) ... ok test_basic.TestNorm.test_zero_norm ... ok test_simple (test_basic.TestPinv) ... ok test_simple_0det (test_basic.TestPinv) ... ok test_simple_cols (test_basic.TestPinv) ... ok test_simple_rows (test_basic.TestPinv) ... ok test_20Feb04_bug (test_basic.TestSolve) ... ok test_nils_20Feb04 (test_basic.TestSolve) ... ok test_random (test_basic.TestSolve) ... ok test_random_complex (test_basic.TestSolve) ... ok test_random_sym (test_basic.TestSolve) ... ok test_random_sym_complex (test_basic.TestSolve) ... ok test_simple (test_basic.TestSolve) ... ok test_simple_complex (test_basic.TestSolve) ... ok test_simple_sym (test_basic.TestSolve) ... ok test_simple_sym_complex (test_basic.TestSolve) ... ok test_bad_shape (test_basic.TestSolveBanded) ... ok test_complex (test_basic.TestSolveBanded) ... ok test_real (test_basic.TestSolveBanded) ... ok test_01_complex (test_basic.TestSolveHBanded) ... ok test_01_float32 (test_basic.TestSolveHBanded) ... ok test_01_lower (test_basic.TestSolveHBanded) ... ok test_01_upper (test_basic.TestSolveHBanded) ... ok test_02_complex (test_basic.TestSolveHBanded) ... ok test_02_float32 (test_basic.TestSolveHBanded) ... ok test_02_lower (test_basic.TestSolveHBanded) ... ok test_02_upper (test_basic.TestSolveHBanded) ... ok test_03_upper (test_basic.TestSolveHBanded) ... ok test_bad_shapes (test_basic.TestSolveHBanded) ... ok solve_triangular on a simple 2x2 matrix. ... ok solve_triangular on a simple 2x2 complex matrix ... ok test_axpy (test_blas.TestCBLAS1Simple) ... ok test_amax (test_blas.TestFBLAS1Simple) ... ok test_asum (test_blas.TestFBLAS1Simple) ... ok test_axpy (test_blas.TestFBLAS1Simple) ... ok test_complex_dotc (test_blas.TestFBLAS1Simple) ... ok test_complex_dotu (test_blas.TestFBLAS1Simple) ... ok test_copy (test_blas.TestFBLAS1Simple) ... ok test_dot (test_blas.TestFBLAS1Simple) ... ok test_nrm2 (test_blas.TestFBLAS1Simple) ... ok test_scal (test_blas.TestFBLAS1Simple) ... ok test_swap (test_blas.TestFBLAS1Simple) ... ok test_gemv (test_blas.TestFBLAS2Simple) ... ok test_ger (test_blas.TestFBLAS2Simple) ... ok test_gemm (test_blas.TestFBLAS3Simple) ... ok test_lapack (test_build.TestF77Mismatch) ... SKIP: Skipping test: test_lapack Skipping fortran compiler mismatch on non Linux platform test_datanotshared (test_decomp.TestDataNotShared) ... ok test_simple (test_decomp.TestDiagSVD) ... ok test_decomp.TestEig.test_bad_geneig ... ok Test matrices giving some Nan generalized eigen values. ... ok Check that passing a non-square array raises a ValueError. ... ok Check that passing arrays of with different shapes raises a ValueError. ... ok test_decomp.TestEig.test_simple ... ok test_decomp.TestEig.test_simple_complex ... ok test_decomp.TestEig.test_simple_complex_eig ... ok Test singular pair ... ok Compare dgbtrf LU factorisation with the LU factorisation result ... ok Compare dgbtrs solutions for linear equation system A*x = b ... ok Compare dsbev eigenvalues and eigenvectors with ... ok Compare dsbevd eigenvalues and eigenvectors with ... ok Compare dsbevx eigenvalues and eigenvectors ... ok Compare eigenvalues and eigenvectors of eig_banded ... ok Compare eigenvalues of eigvals_banded with those of linalg.eig. ... ok Compare zgbtrf LU factorisation with the LU factorisation result ... ok Compare zgbtrs solutions for linear equation system A*x = b ... 
ok Compare zhbevd eigenvalues and eigenvectors ... ok Compare zhbevx eigenvalues and eigenvectors ... ok test_simple (test_decomp.TestEigVals) ... ok test_simple_complex (test_decomp.TestEigVals) ... ok test_simple_tr (test_decomp.TestEigVals) ... ok test_random (test_decomp.TestHessenberg) ... ok test_random_complex (test_decomp.TestHessenberg) ... ok test_simple (test_decomp.TestHessenberg) ... ok test_simple2 (test_decomp.TestHessenberg) ... ok test_simple_complex (test_decomp.TestHessenberg) ... ok test_hrectangular (test_decomp.TestLU) ... ok test_hrectangular_complex (test_decomp.TestLU) ... ok Check lu decomposition on medium size, rectangular matrix. ... ok Check lu decomposition on medium size, rectangular matrix. ... ok test_simple (test_decomp.TestLU) ... ok test_simple2 (test_decomp.TestLU) ... ok test_simple2_complex (test_decomp.TestLU) ... ok test_simple_complex (test_decomp.TestLU) ... ok test_vrectangular (test_decomp.TestLU) ... ok test_vrectangular_complex (test_decomp.TestLU) ... ok test_hrectangular (test_decomp.TestLUSingle) ... ok test_hrectangular_complex (test_decomp.TestLUSingle) ... ok Check lu decomposition on medium size, rectangular matrix. ... ok Check lu decomposition on medium size, rectangular matrix. ... ok test_simple (test_decomp.TestLUSingle) ... ok test_simple2 (test_decomp.TestLUSingle) ... ok test_simple2_complex (test_decomp.TestLUSingle) ... ok test_simple_complex (test_decomp.TestLUSingle) ... ok test_vrectangular (test_decomp.TestLUSingle) ... ok test_vrectangular_complex (test_decomp.TestLUSingle) ... ok test_lu (test_decomp.TestLUSolve) ... ok test_random (test_decomp.TestQR) ... ok test_random_complex (test_decomp.TestQR) ... ok test_random_tall (test_decomp.TestQR) ... ok test_random_tall_e (test_decomp.TestQR) ... ok test_random_trap (test_decomp.TestQR) ... ok test_simple (test_decomp.TestQR) ... ok test_simple_complex (test_decomp.TestQR) ... ok test_simple_tall (test_decomp.TestQR) ... ok test_simple_tall_e (test_decomp.TestQR) ... ok test_simple_trap (test_decomp.TestQR) ... ok test_random (test_decomp.TestRQ) ... ok test_simple (test_decomp.TestRQ) ... ok test_random (test_decomp.TestSVD) ... ok test_random_complex (test_decomp.TestSVD) ... ok test_simple (test_decomp.TestSVD) ... ok test_simple_complex (test_decomp.TestSVD) ... ok test_simple_overdet (test_decomp.TestSVD) ... ok test_simple_singular (test_decomp.TestSVD) ... ok test_simple_underdet (test_decomp.TestSVD) ... ok test_simple (test_decomp.TestSVDVals) ... ok test_simple_complex (test_decomp.TestSVDVals) ... ok test_simple_overdet (test_decomp.TestSVDVals) ... ok test_simple_overdet_complex (test_decomp.TestSVDVals) ... ok test_simple_underdet (test_decomp.TestSVDVals) ... ok test_simple_underdet_complex (test_decomp.TestSVDVals) ... ok test_simple (test_decomp.TestSchur) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, False, None) ... 
ok test_decomp.test_eigh('general ', 6, 'f', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, True, (2, 4)) ... 
ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, True, (2, 4)) ... 
ok test_decomp.test_eigh('general ', 6, 'D', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, False, (2, 4)) ... ok test_decomp.test_eigh_integer ... ok Check linalg works with non-aligned memory ... ok Check linalg works with non-aligned memory ... ok Check that complex objects don't need to be completely aligned ... ok test_decomp.test_lapack_misaligned ... KNOWNFAIL: Ticket #1152, triggers a segfault in rare cases. test_random (test_decomp_cholesky.TestCholesky) ... ok test_random_complex (test_decomp_cholesky.TestCholesky) ... ok test_simple (test_decomp_cholesky.TestCholesky) ... ok test_simple_complex (test_decomp_cholesky.TestCholesky) ... ok test_lower_complex (test_decomp_cholesky.TestCholeskyBanded) ... ok test_lower_real (test_decomp_cholesky.TestCholeskyBanded) ... ok test_upper_complex (test_decomp_cholesky.TestCholeskyBanded) ... ok test_upper_real (test_decomp_cholesky.TestCholeskyBanded) ... ok test_default_a (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCaxpy) ... ok test_x_and_y_stride (test_fblas.TestCaxpy) ... ok test_x_bad_size (test_fblas.TestCaxpy) ... ok test_x_stride (test_fblas.TestCaxpy) ... ok test_y_bad_size (test_fblas.TestCaxpy) ... ok test_y_stride (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCcopy) ... ok test_x_and_y_stride (test_fblas.TestCcopy) ... ok test_x_bad_size (test_fblas.TestCcopy) ... ok test_x_stride (test_fblas.TestCcopy) ... ok test_y_bad_size (test_fblas.TestCcopy) ... ok test_y_stride (test_fblas.TestCcopy) ... ok test_default_beta_y (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCgemv) ... ok test_simple_transpose (test_fblas.TestCgemv) ... ok test_simple_transpose_conj (test_fblas.TestCgemv) ... ok test_x_stride (test_fblas.TestCgemv) ... 
ok test_x_stride_assert (test_fblas.TestCgemv) ... ok test_x_stride_transpose (test_fblas.TestCgemv) ... ok test_y_stride (test_fblas.TestCgemv) ... ok test_y_stride_assert (test_fblas.TestCgemv) ... ok test_y_stride_transpose (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCscal) ... ok test_x_bad_size (test_fblas.TestCscal) ... ok test_x_stride (test_fblas.TestCscal) ... ok test_simple (test_fblas.TestCswap) ... ok test_x_and_y_stride (test_fblas.TestCswap) ... ok test_x_bad_size (test_fblas.TestCswap) ... ok test_x_stride (test_fblas.TestCswap) ... ok test_y_bad_size (test_fblas.TestCswap) ... ok test_y_stride (test_fblas.TestCswap) ... ok test_default_a (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDaxpy) ... ok test_x_and_y_stride (test_fblas.TestDaxpy) ... ok test_x_bad_size (test_fblas.TestDaxpy) ... ok test_x_stride (test_fblas.TestDaxpy) ... ok test_y_bad_size (test_fblas.TestDaxpy) ... ok test_y_stride (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDcopy) ... ok test_x_and_y_stride (test_fblas.TestDcopy) ... ok test_x_bad_size (test_fblas.TestDcopy) ... ok test_x_stride (test_fblas.TestDcopy) ... ok test_y_bad_size (test_fblas.TestDcopy) ... ok test_y_stride (test_fblas.TestDcopy) ... ok test_default_beta_y (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDgemv) ... ok test_simple_transpose (test_fblas.TestDgemv) ... ok test_simple_transpose_conj (test_fblas.TestDgemv) ... ok test_x_stride (test_fblas.TestDgemv) ... ok test_x_stride_assert (test_fblas.TestDgemv) ... ok test_x_stride_transpose (test_fblas.TestDgemv) ... ok test_y_stride (test_fblas.TestDgemv) ... ok test_y_stride_assert (test_fblas.TestDgemv) ... ok test_y_stride_transpose (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDscal) ... ok test_x_bad_size (test_fblas.TestDscal) ... ok test_x_stride (test_fblas.TestDscal) ... ok test_simple (test_fblas.TestDswap) ... ok test_x_and_y_stride (test_fblas.TestDswap) ... ok test_x_bad_size (test_fblas.TestDswap) ... ok test_x_stride (test_fblas.TestDswap) ... ok test_y_bad_size (test_fblas.TestDswap) ... ok test_y_stride (test_fblas.TestDswap) ... ok test_default_a (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestSaxpy) ... ok test_x_and_y_stride (test_fblas.TestSaxpy) ... ok test_x_bad_size (test_fblas.TestSaxpy) ... ok test_x_stride (test_fblas.TestSaxpy) ... ok test_y_bad_size (test_fblas.TestSaxpy) ... ok test_y_stride (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestScopy) ... ok test_x_and_y_stride (test_fblas.TestScopy) ... ok test_x_bad_size (test_fblas.TestScopy) ... ok test_x_stride (test_fblas.TestScopy) ... ok test_y_bad_size (test_fblas.TestScopy) ... ok test_y_stride (test_fblas.TestScopy) ... ok test_default_beta_y (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSgemv) ... ok test_simple_transpose (test_fblas.TestSgemv) ... ok test_simple_transpose_conj (test_fblas.TestSgemv) ... ok test_x_stride (test_fblas.TestSgemv) ... ok test_x_stride_assert (test_fblas.TestSgemv) ... ok test_x_stride_transpose (test_fblas.TestSgemv) ... ok test_y_stride (test_fblas.TestSgemv) ... ok test_y_stride_assert (test_fblas.TestSgemv) ... ok test_y_stride_transpose (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSscal) ... ok test_x_bad_size (test_fblas.TestSscal) ... ok test_x_stride (test_fblas.TestSscal) ... ok test_simple (test_fblas.TestSswap) ... ok test_x_and_y_stride (test_fblas.TestSswap) ... ok test_x_bad_size (test_fblas.TestSswap) ... ok test_x_stride (test_fblas.TestSswap) ... 
ok test_y_bad_size (test_fblas.TestSswap) ... ok test_y_stride (test_fblas.TestSswap) ... ok test_default_a (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZaxpy) ... ok test_x_and_y_stride (test_fblas.TestZaxpy) ... ok test_x_bad_size (test_fblas.TestZaxpy) ... ok test_x_stride (test_fblas.TestZaxpy) ... ok test_y_bad_size (test_fblas.TestZaxpy) ... ok test_y_stride (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZcopy) ... ok test_x_and_y_stride (test_fblas.TestZcopy) ... ok test_x_bad_size (test_fblas.TestZcopy) ... ok test_x_stride (test_fblas.TestZcopy) ... ok test_y_bad_size (test_fblas.TestZcopy) ... ok test_y_stride (test_fblas.TestZcopy) ... ok test_default_beta_y (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZgemv) ... ok test_simple_transpose (test_fblas.TestZgemv) ... ok test_simple_transpose_conj (test_fblas.TestZgemv) ... ok test_x_stride (test_fblas.TestZgemv) ... ok test_x_stride_assert (test_fblas.TestZgemv) ... ok test_x_stride_transpose (test_fblas.TestZgemv) ... ok test_y_stride (test_fblas.TestZgemv) ... ok test_y_stride_assert (test_fblas.TestZgemv) ... ok test_y_stride_transpose (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZscal) ... ok test_x_bad_size (test_fblas.TestZscal) ... ok test_x_stride (test_fblas.TestZscal) ... ok test_simple (test_fblas.TestZswap) ... ok test_x_and_y_stride (test_fblas.TestZswap) ... ok test_x_bad_size (test_fblas.TestZswap) ... ok test_x_stride (test_fblas.TestZswap) ... ok test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_gebal (test_lapack.TestFlapackSimple) ... ok test_gehrd (test_lapack.TestFlapackSimple) ... ok test_clapack (test_lapack.TestLapack) ... ok test_flapack (test_lapack.TestLapack) ... ok test_consistency (test_matfuncs.TestExpM) ... ok test_zero (test_matfuncs.TestExpM) ... ok test_nils (test_matfuncs.TestLogM) ... ok test_defective1 (test_matfuncs.TestSignM) ... ok test_defective2 (test_matfuncs.TestSignM) ... ok test_defective3 (test_matfuncs.TestSignM) ... ok test_nils (test_matfuncs.TestSignM) ... ok test_bad (test_matfuncs.TestSqrtM) ... ok test_special_matrices.TestBlockDiag.test_bad_arg ... ok test_special_matrices.TestBlockDiag.test_basic ... ok test_special_matrices.TestBlockDiag.test_dtype ... ok test_special_matrices.TestBlockDiag.test_no_args ... ok test_special_matrices.TestBlockDiag.test_scalar_and_1d_args ... ok test_basic (test_special_matrices.TestCirculant) ... ok test_bad_shapes (test_special_matrices.TestCompanion) ... ok test_basic (test_special_matrices.TestCompanion) ... ok test_basic (test_special_matrices.TestHadamard) ... ok test_basic (test_special_matrices.TestHankel) ... ok test_special_matrices.TestKron.test_basic ... ok test_bad_shapes (test_special_matrices.TestLeslie) ... ok test_basic (test_special_matrices.TestLeslie) ... ok test_basic (test_special_matrices.TestToeplitz) ... ok test_complex_01 (test_special_matrices.TestToeplitz) ... ok Scalar arguments still produce a 2D array. ... ok test_scalar_01 (test_special_matrices.TestToeplitz) ... ok test_scalar_02 (test_special_matrices.TestToeplitz) ... ok test_scalar_03 (test_special_matrices.TestToeplitz) ... ok test_scalar_04 (test_special_matrices.TestToeplitz) ... ok test_2d (test_special_matrices.TestTri) ... ok test_basic (test_special_matrices.TestTri) ... ok test_diag (test_special_matrices.TestTri) ... ok test_diag2d (test_special_matrices.TestTri) ... ok test_basic (test_special_matrices.TestTril) ... 
ok test_diag (test_special_matrices.TestTril) ... ok test_basic (test_special_matrices.TestTriu) ... ok test_diag (test_special_matrices.TestTriu) ... ok test_logsumexp (test_maxentropy.TestMaxentropy) ... ok test_doccer.test_unindent('Another test\n   with some indent', 'Another test\n   with some indent') ... ok test_doccer.test_unindent('Another test, one line', 'Another test, one line') ... ok test_doccer.test_unindent('Another test\n   with some indent', 'Another test\n   with some indent') ... ok test_doccer.test_unindent_dict('Another test\n   with some indent', 'Another test\n   with some indent') ... ok test_doccer.test_unindent_dict('Another test, one line', 'Another test, one line') ... ok test_doccer.test_unindent_dict('Another test\n   with some indent', 'Another test\n   with some indent') ... ok test_doccer.test_docformat('Docstring\n    Another test\n       with some indent\n        Another test, one line\n     Another test\n       with some indent\n', 'Docstring\n    Another test\n       with some indent\n        Another test, one line\n     Another test\n       with some indent\n') ... ok test_doccer.test_docformat('Single line doc Another test\n   with some indent', 'Single line doc Another test\n   with some indent') ... ok test_doccer.test_decorator(' Docstring\n        Another test\n           with some indent\n        ', ' Docstring\n        Another test\n           with some indent\n        ') ... ok test_doccer.test_decorator(' Docstring\n            Another test\n               with some indent\n        ', ' Docstring\n            Another test\n               with some indent\n        ') ... ok test_bytescale (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_bytescale Need to import PIL for this test test_imresize (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_imresize Need to import PIL for this test test_imresize2 (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_imresize2 Need to import PIL for this test test_imresize3 (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_imresize3 Need to import PIL for this test Failure: SkipTest (Skipping test: test_fromimage Need to import PIL for this test) ... SKIP: Skipping test: test_fromimage Need to import PIL for this test test_filters.test_ticket_701 ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, -1) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, 4) ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, -1, -1) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, -1, 4) ... ok test_io.test_imread ... SKIP: Skipping test: test_imread The Python Image Library could not be found. test_basic (test_measurements.Test_measurements_select) ... ok test_a (test_measurements.Test_measurements_stats) ... ok test_a_centered (test_measurements.Test_measurements_stats) ... ok test_b (test_measurements.Test_measurements_stats) ... ok test_b_centered (test_measurements.Test_measurements_stats) ... ok test_nonint_labels (test_measurements.Test_measurements_stats) ... ok label 1 ... ok label 2 ... ok label 3 ... ok label 4 ... ok label 5 ... ok label 6 ... ok label 7 ... ok label 8 ... ok label 9 ... ok label 10 ... ok label 11 ... ok label 12 ... ok label 13 ... ok find_objects 1 ... ok find_objects 2 ... ok find_objects 3 ... ok find_objects 4 ... 
ok find_objects 5 ... ok find_objects 6 ... ok find_objects 7 ... ok find_objects 8 ... ok find_objects 9 ... ok sum 1 ... ok sum 2 ... ok sum 3 ... ok sum 4 ... ok sum 5 ... ok sum 6 ... ok sum 7 ... ok sum 8 ... ok sum 9 ... ok sum 10 ... ok sum 11 ... ok sum 12 ... ok mean 1 ... ok mean 2 ... ok mean 3 ... ok mean 4 ... ok minimum 1 ... ok minimum 2 ... ok minimum 3 ... ok minimum 4 ... ok maximum 1 ... ok maximum 2 ... ok maximum 3 ... ok maximum 4 ... ok Ticket #501 ... ok variance 1 ... ok variance 2 ... ok variance 3 ... ok variance 4 ... ok variance 5 ... ok variance 6 ... ok standard deviation 1 ... ok standard deviation 2 ... ok standard deviation 3 ... ok standard deviation 4 ... ok standard deviation 5 ... ok standard deviation 6 ... ok standard deviation 7 ... ok minimum position 1 ... ok minimum position 2 ... ok minimum position 3 ... ok minimum position 4 ... ok minimum position 5 ... ok minimum position 6 ... ok minimum position 7 ... ok maximum position 1 ... ok maximum position 2 ... ok maximum position 3 ... ok maximum position 4 ... ok maximum position 5 ... ok maximum position 6 ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok histogram 1 ... ok histogram 2 ... ok histogram 3 ... ok affine_transform 1 ... ok affine transform 2 ... ok affine transform 3 ... ok affine transform 4 ... ok affine transform 5 ... ok affine transform 6 ... ok affine transform 7 ... ok affine transform 8 ... ok affine transform 9 ... ok affine transform 10 ... ok affine transform 11 ... ok affine transform 12 ... ok affine transform 13 ... ok affine transform 14 ... ok affine transform 15 ... ok affine transform 16 ... ok affine transform 17 ... ok affine transform 18 ... ok affine transform 19 ... ok affine transform 20 ... ok affine transform 21 ... ok binary closing 1 ... ok binary closing 2 ... ok binary dilation 1 ... ok binary dilation 2 ... ok binary dilation 3 ... ok binary dilation 4 ... ok binary dilation 5 ... ok binary dilation 6 ... ok binary dilation 7 ... ok binary dilation 8 ... ok binary dilation 9 ... ok binary dilation 10 ... ok binary dilation 11 ... ok binary dilation 12 ... ok binary dilation 13 ... ok binary dilation 14 ... ok binary dilation 15 ... ok binary dilation 16 ... ok binary dilation 17 ... ok binary dilation 18 ... ok binary dilation 19 ... ok binary dilation 20 ... ok binary dilation 21 ... ok binary dilation 22 ... ok binary dilation 23 ... ok binary dilation 24 ... ok binary dilation 25 ... ok binary dilation 26 ... ok binary dilation 27 ... ok binary dilation 28 ... ok binary dilation 29 ... ok binary dilation 30 ... ok binary dilation 31 ... ok binary dilation 32 ... ok binary dilation 33 ... ok binary dilation 34 ... ok binary dilation 35 ... ok binary erosion 1 ... ok binary erosion 2 ... ok binary erosion 3 ... ok binary erosion 4 ... ok binary erosion 5 ... ok binary erosion 6 ... ok binary erosion 7 ... ok binary erosion 8 ... ok binary erosion 9 ... ok binary erosion 10 ... ok binary erosion 11 ... ok binary erosion 12 ... ok binary erosion 13 ... ok binary erosion 14 ... ok binary erosion 15 ... ok binary erosion 16 ... ok binary erosion 17 ... ok binary erosion 18 ... ok binary erosion 19 ... ok binary erosion 20 ... ok binary erosion 21 ... ok binary erosion 22 ... ok binary erosion 23 ... 
ok binary erosion 24 ... ok binary erosion 25 ... ok binary erosion 26 ... ok binary erosion 27 ... ok binary erosion 28 ... ok binary erosion 29 ... ok binary erosion 30 ... ok binary erosion 31 ... ok binary erosion 32 ... ok binary erosion 33 ... ok binary erosion 34 ... ok binary erosion 35 ... ok binary erosion 36 ... ok binary fill holes 1 ... ok binary fill holes 2 ... ok binary fill holes 3 ... ok binary opening 1 ... ok binary opening 2 ... ok binary propagation 1 ... ok binary propagation 2 ... ok black tophat 1 ... ok black tophat 2 ... ok boundary modes ... ok boundary modes 2 ... ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 - single precision data ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1 ... ok generic 1d filter 1 ... ok generic gradient magnitude 1 ... ok generic laplace filter 1 ... ok geometric transform 1 ... ok geometric transform 2 ... ok geometric transform 3 ... ok geometric transform 4 ... ok geometric transform 5 ... ok geometric transform 6 ... ok geometric transform 7 ... ok geometric transform 8 ... ok geometric transform 10 ... ok geometric transform 13 ... ok geometric transform 14 ... ok geometric transform 15 ... ok geometric transform 16 ... ok geometric transform 17 ... ok geometric transform 18 ... ok geometric transform 19 ... ok geometric transform 20 ... ok geometric transform 21 ... ok geometric transform 22 ... ok geometric transform 23 ... ok geometric transform 24 ... 
ok grey closing 1 ... ok grey closing 2 ... ok grey dilation 1 ... ok grey dilation 2 ... ok grey dilation 3 ... ok grey erosion 1 ... ok grey erosion 2 ... ok grey erosion 3 ... ok grey opening 1 ... ok grey opening 2 ... ok binary hit-or-miss transform 1 ... ok binary hit-or-miss transform 2 ... ok binary hit-or-miss transform 3 ... ok iterating a structure 1 ... ok iterating a structure 2 ... ok iterating a structure 3 ... ok laplace filter 1 ... ok laplace filter 2 ... ok map coordinates 1 ... ok map coordinates 2 ... ok maximum filter 1 ... ok maximum filter 2 ... ok maximum filter 3 ... ok maximum filter 4 ... ok maximum filter 5 ... ok maximum filter 6 ... ok maximum filter 7 ... ok maximum filter 8 ... ok maximum filter 9 ... ok minimum filter 1 ... ok minimum filter 2 ... ok minimum filter 3 ... ok minimum filter 4 ... ok minimum filter 5 ... ok minimum filter 6 ... ok minimum filter 7 ... ok minimum filter 8 ... ok minimum filter 9 ... ok morphological gradient 1 ... ok morphological gradient 2 ... ok morphological laplace 1 ... ok morphological laplace 2 ... ok prewitt filter 1 ... ok prewitt filter 2 ... ok prewitt filter 3 ... ok prewitt filter 4 ... ok rank filter 1 ... ok rank filter 2 ... ok rank filter 3 ... ok rank filter 4 ... ok rank filter 5 ... ok rank filter 6 ... ok rank filter 7 ... ok median filter 8 ... ok rank filter 9 ... ok rank filter 10 ... ok rank filter 11 ... ok rank filter 12 ... ok rank filter 13 ... ok rank filter 14 ... ok rotate 1 ... ok rotate 2 ... ok rotate 3 ... ok rotate 4 ... ok rotate 5 ... ok rotate 6 ... ok rotate 7 ... ok rotate 8 ... ok shift 1 ... ok shift 2 ... ok shift 3 ... ok shift 4 ... ok shift 5 ... ok shift 6 ... ok shift 7 ... ok shift 8 ... ok shift 9 ... ok sobel filter 1 ... ok sobel filter 2 ... ok sobel filter 3 ... ok sobel filter 4 ... ok spline filter 1 ... ok spline filter 2 ... ok spline filter 3 ... ok spline filter 4 ... ok spline filter 5 ... ok uniform filter 1 ... ok uniform filter 2 ... ok uniform filter 3 ... ok uniform filter 4 ... ok uniform filter 5 ... ok uniform filter 6 ... ok watershed_ift 1 ... ok watershed_ift 2 ... ok watershed_ift 3 ... ok watershed_ift 4 ... ok watershed_ift 5 ... ok watershed_ift 6 ... ok watershed_ift 7 ... ok white tophat 1 ... ok white tophat 2 ... ok zoom 1 ... ok zoom 2 ... ok zoom by affine transformation 1 ... ok Regression test for #413: median_filter does not handle bytes orders. ... ok Ticket #643 ... ok test_explicit (test_odr.TestODR) ... ok test_implicit (test_odr.TestODR) ... ok test_lorentz (test_odr.TestODR) ... ok test_multi (test_odr.TestODR) ... ok test_pearson (test_odr.TestODR) ... ok test_ticket_1253 (test_odr.TestODR) ... ok test_simple (test_cobyla.TestCobyla) ... ok test_linesearch.TestLineSearch.test_armijo_terminate_1 ... ok test_linesearch.TestLineSearch.test_line_search_armijo ... ok test_linesearch.TestLineSearch.test_line_search_wolfe1 ... ok test_linesearch.TestLineSearch.test_line_search_wolfe2 ... ok test_linesearch.TestLineSearch.test_scalar_search_armijo ... ok test_linesearch.TestLineSearch.test_scalar_search_wolfe1 ... ok test_linesearch.TestLineSearch.test_scalar_search_wolfe2 ... ok test_linesearch.TestLineSearch.test_wolfe_terminate ... ok test_one_argument (test_minpack.TestCurveFit) ... ok test_two_argument (test_minpack.TestCurveFit) ... ok fsolve without gradient, equal pipes -> equal flows ... ok fsolve with gradient, equal pipes -> equal flows ... ok The callables 'func' and 'deriv_func' have no 'func_name' attribute. ... 
ok test_minpack.TestFSolve.test_wrong_shape_fprime_function ... ok The callable 'func' has no 'func_name' attribute. ... ok test_minpack.TestFSolve.test_wrong_shape_func_function ... ok f(x) = c * x**2; fixed point should be x=1/c ... ok f(x) = c * x**0.5; fixed point should be x=c**2 ... ok test_array_trivial (test_minpack.TestFixedPoint) ... ok f(x) = x**2; x0=1.05; fixed point should be x=1 ... ok f(x) = x**0.5; x0=1.05; fixed point should be x=1 ... ok f(x) = 2x; fixed point should be x=0 ... ok test_basic (test_minpack.TestLeastSq) ... ok test_full_output (test_minpack.TestLeastSq) ... ok test_input_untouched (test_minpack.TestLeastSq) ... ok The callables 'func' and 'deriv_func' have no 'func_name' attribute. ... ok test_wrong_shape_Dfun_function (test_minpack.TestLeastSq) ... ok The callable 'func' has no 'func_name' attribute. ... ok test_wrong_shape_func_function (test_minpack.TestLeastSq) ... ok test_nnls (test_nnls.TestNNLS) ... ok fsolve without gradient, equal pipes -> equal flows ... ok fsolve with gradient, equal pipes -> equal flows ... ok The callables 'func' and 'deriv_func' have no 'func_name' attribute. ... ok test_nonlin.TestFSolve.test_wrong_shape_fprime_function ... ok The callable 'func' has no 'func_name' attribute. ... ok test_nonlin.TestFSolve.test_wrong_shape_func_function ... ok test_nonlin.TestJacobianDotSolve.test_anderson ... ok test_nonlin.TestJacobianDotSolve.test_broyden1 ... ok test_nonlin.TestJacobianDotSolve.test_broyden2 ... ok test_nonlin.TestJacobianDotSolve.test_diagbroyden ... ok test_nonlin.TestJacobianDotSolve.test_excitingmixing ... ok test_nonlin.TestJacobianDotSolve.test_krylov ... ok test_nonlin.TestJacobianDotSolve.test_linearmixing ... ok test_anderson (test_nonlin.TestLinear) ... ok test_broyden1 (test_nonlin.TestLinear) ... ok test_broyden2 (test_nonlin.TestLinear) ... ok test_krylov (test_nonlin.TestLinear) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_anderson (test_nonlin.TestNonlinOldTests) ... ok test_broyden1 (test_nonlin.TestNonlinOldTests) ... ok test_broyden2 (test_nonlin.TestNonlinOldTests) ... 
ok test_diagbroyden (test_nonlin.TestNonlinOldTests) ... ok test_exciting (test_nonlin.TestNonlinOldTests) ... ok test_linearmixing (test_nonlin.TestNonlinOldTests) ... ok test_anderson (test_nonlin.TestSecant) ... ok test_broyden1 (test_nonlin.TestSecant) ... ok test_broyden1_update (test_nonlin.TestSecant) ... ok test_broyden2 (test_nonlin.TestSecant) ... ok test_broyden2_update (test_nonlin.TestSecant) ... ok Broyden-Fletcher-Goldfarb-Shanno optimization routine ... ok brent algorithm ... ok conjugate gradient optimization routine ... ok Test fminbound ... ok test_fminbound_scalar (test_optimize.TestOptimize) ... ok limited-memory bound-constrained BFGS algorithm ... ok line-search Newton conjugate gradient optimization routine ... ok Nelder-Mead simplex algorithm ... ok Powell (direction set) optimization routine ... ok Compare rosen_hess(x) times p with rosen_hess_prod(x,p) (ticket #1248) ... ok test_tnc (test_optimize.TestTnc) ... ok Ticket #1214 ... ok Ticket #1074 ... ok test_bound_approximated (test_slsqp.TestSLSQP) ... ok test_bound_equality_given (test_slsqp.TestSLSQP) ... ok test_bound_equality_inequality_given (test_slsqp.TestSLSQP) ... ok test_unbounded_approximated (test_slsqp.TestSLSQP) ... ok test_unbounded_given (test_slsqp.TestSLSQP) ... ok test_bisect (test_zeros.TestBasic) ... ok test_brenth (test_zeros.TestBasic) ... ok test_brentq (test_zeros.TestBasic) ... ok test_deriv_zero_warning (test_zeros.TestBasic) ... ok test_ridder (test_zeros.TestBasic) ... ok Regression test for #651: better handling of badly conditioned ... ok test_simple (test_filter_design.TestTf2zpk) ... ok Test that invalid cutoff argument raises ValueError. ... ok test_bandpass (test_fir_filter_design.TestFirWinMore) ... ok Test that attempt to create a highpass filter with an even number ... ok test_highpass (test_fir_filter_design.TestFirWinMore) ... ok test_lowpass (test_fir_filter_design.TestFirWinMore) ... ok test_multi (test_fir_filter_design.TestFirWinMore) ... ok Test the nyq keyword. ... ok test_response (test_fir_filter_design.TestFirwin) ... ok For one lowpass, bandpass, and highpass example filter, this test ... ok test01 (test_fir_filter_design.TestFirwin2) ... ok test02 (test_fir_filter_design.TestFirwin2) ... ok test03 (test_fir_filter_design.TestFirwin2) ... ok test_invalid_args (test_fir_filter_design.TestFirwin2) ... ok test_nyq (test_fir_filter_design.TestFirwin2) ... ok test_hilbert (test_fir_filter_design.TestRemez) ... ok test_ltisys.TestSS2TF.test_basic(3, 3, 3) ... ok test_ltisys.TestSS2TF.test_basic(1, 3, 3) ... ok test_ltisys.TestSS2TF.test_basic(1, 1, 1) ... ok test_ltisys.Test_impulse2.test_01 ... ok Specify the desired time values for the output. ... ok Specify an initial condition as a scalar. ... ok Specify an initial condition as a list. ... ok test_ltisys.Test_impulse2.test_05 ... ok test_ltisys.Test_impulse2.test_06 ... ok test_ltisys.Test_lsim2.test_01 ... ok test_ltisys.Test_lsim2.test_02 ... ok test_ltisys.Test_lsim2.test_03 ... ok test_ltisys.Test_lsim2.test_04 ... ok test_ltisys.Test_lsim2.test_05 ... ok Test use of the default values of the arguments `T` and `U`. ... ok test_ltisys.Test_step2.test_01 ... ok Specify the desired time values for the output. ... ok Specify an initial condition as a scalar. ... ok Specify an initial condition as a list. ... ok test_ltisys.Test_step2.test_05 ... ok test_ltisys.Test_step2.test_06 ... ok test_basic (test_signaltools.TestCSpline1DEval) ... ok test_2d_arrays (test_signaltools.TestConvolve) ... 
ok test_basic (test_signaltools.TestConvolve) ... ok test_complex (test_signaltools.TestConvolve) ... ok test_same_mode (test_signaltools.TestConvolve) ... ok test_valid_mode (test_signaltools.TestConvolve) ... ok test_zero_order (test_signaltools.TestConvolve) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex128) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex128) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex128) ... ok test_rank3 (test_signaltools.TestCorrelateComplex128) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex256) ... ok test_rank3 (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex256) ... ok test_rank3 (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex64) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex64) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex64) ... ok test_rank3 (test_signaltools.TestCorrelateComplex64) ... ok test_rank1_full (test_signaltools.TestCorrelateDecimal) ... ok test_rank1_same (test_signaltools.TestCorrelateDecimal) ... ok test_rank1_valid (test_signaltools.TestCorrelateDecimal) ... ok test_rank3_all (test_signaltools.TestCorrelateDecimal) ... ok test_rank3_same (test_signaltools.TestCorrelateDecimal) ... ok test_rank3_valid (test_signaltools.TestCorrelateDecimal) ... ok test_rank1_full (test_signaltools.TestCorrelateFloat128) ... ok test_rank1_same (test_signaltools.TestCorrelateFloat128) ... ok test_rank1_valid (test_signaltools.TestCorrelateFloat128) ... ok test_rank3_all (test_signaltools.TestCorrelateFloat128) ... ok test_rank3_same (test_signaltools.TestCorrelateFloat128) ... ok test_rank3_valid (test_signaltools.TestCorrelateFloat128) ... ok test_rank1_full (test_signaltools.TestCorrelateFloat32) ... ok test_rank1_same (test_signaltools.TestCorrelateFloat32) ... ok test_rank1_valid (test_signaltools.TestCorrelateFloat32) ... ok test_rank3_all (test_signaltools.TestCorrelateFloat32) ... ok test_rank3_same (test_signaltools.TestCorrelateFloat32) ... ok test_rank3_valid (test_signaltools.TestCorrelateFloat32) ... ok test_rank1_full (test_signaltools.TestCorrelateFloat64) ... ok test_rank1_same (test_signaltools.TestCorrelateFloat64) ... ok test_rank1_valid (test_signaltools.TestCorrelateFloat64) ... ok test_rank3_all (test_signaltools.TestCorrelateFloat64) ... ok test_rank3_same (test_signaltools.TestCorrelateFloat64) ... ok test_rank3_valid (test_signaltools.TestCorrelateFloat64) ... ok test_rank1_full (test_signaltools.TestCorrelateInt) ... ok test_rank1_same (test_signaltools.TestCorrelateInt) ... ok test_rank1_valid (test_signaltools.TestCorrelateInt) ... ok test_rank3_all (test_signaltools.TestCorrelateInt) ... ok test_rank3_same (test_signaltools.TestCorrelateInt) ... ok test_rank3_valid (test_signaltools.TestCorrelateInt) ... ok test_rank1_full (test_signaltools.TestCorrelateInt16) ... ok test_rank1_same (test_signaltools.TestCorrelateInt16) ... ok test_rank1_valid (test_signaltools.TestCorrelateInt16) ... ok test_rank3_all (test_signaltools.TestCorrelateInt16) ... ok test_rank3_same (test_signaltools.TestCorrelateInt16) ... ok test_rank3_valid (test_signaltools.TestCorrelateInt16) ... 
ok test_rank1_full (test_signaltools.TestCorrelateInt8) ... ok test_rank1_same (test_signaltools.TestCorrelateInt8) ... ok test_rank1_valid (test_signaltools.TestCorrelateInt8) ... ok test_rank3_all (test_signaltools.TestCorrelateInt8) ... ok test_rank3_same (test_signaltools.TestCorrelateInt8) ... ok test_rank3_valid (test_signaltools.TestCorrelateInt8) ... ok test_rank1_full (test_signaltools.TestCorrelateUint16) ... ok test_rank1_same (test_signaltools.TestCorrelateUint16) ... ok test_rank1_valid (test_signaltools.TestCorrelateUint16) ... ok test_rank3_all (test_signaltools.TestCorrelateUint16) ... ok test_rank3_same (test_signaltools.TestCorrelateUint16) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint16) ... ok test_rank1_full (test_signaltools.TestCorrelateUint32) ... ok test_rank1_same (test_signaltools.TestCorrelateUint32) ... ok test_rank1_valid (test_signaltools.TestCorrelateUint32) ... ok test_rank3_all (test_signaltools.TestCorrelateUint32) ... ok test_rank3_same (test_signaltools.TestCorrelateUint32) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint32) ... ok test_rank1_full (test_signaltools.TestCorrelateUint64) ... ok test_rank1_same (test_signaltools.TestCorrelateUint64) ... ok test_rank1_valid (test_signaltools.TestCorrelateUint64) ... ok test_rank3_all (test_signaltools.TestCorrelateUint64) ... ok test_rank3_same (test_signaltools.TestCorrelateUint64) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint64) ... ok test_rank1_full (test_signaltools.TestCorrelateUint8) ... ok test_rank1_same (test_signaltools.TestCorrelateUint8) ... ok test_rank1_valid (test_signaltools.TestCorrelateUint8) ... ok test_rank3_all (test_signaltools.TestCorrelateUint8) ... ok test_rank3_same (test_signaltools.TestCorrelateUint8) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint8) ... ok test_signaltools.TestDecimate.test_basic ... ok test_2d_complex_same (test_signaltools.TestFFTConvolve) ... ok test_2d_real_same (test_signaltools.TestFFTConvolve) ... ok test_complex (test_signaltools.TestFFTConvolve) ... ok test_random_data (test_signaltools.TestFFTConvolve) ... ok test_real (test_signaltools.TestFFTConvolve) ... ok test_real_same_mode (test_signaltools.TestFFTConvolve) ... ok test_real_valid_mode (test_signaltools.TestFFTConvolve) ... ok test_zero_order (test_signaltools.TestFFTConvolve) ... ok test_signaltools.TestFiltFilt.test_basic ... ok test_signaltools.TestHilbert.test_hilbert_axisN(array([[? 0.+2.30940108j, ? 6.+2.30940108j,? 12.+2.30940108j], ... ok test_signaltools.TestHilbert.test_hilbert_axisN(array([ 0.+2.30940108j,? 1.-1.15470054j,? 2.-1.15470054j,? 3.-1.15470054j, ... ok test_signaltools.TestHilbert.test_hilbert_axisN((3, 20), [3, 20]) ... ok test_signaltools.TestHilbert.test_hilbert_axisN((20, 3), [20, 3]) ... ok test_signaltools.TestHilbert.test_hilbert_axisN(array([? 0.00000000e+00-1.7201583j , ? 1.00000000e+00-2.04779451j, ... ok test_signaltools.TestHilbert.test_hilbert_theoretical ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank2 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank3 (test_signaltools.TestLinearFilterComplex128) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterComplex64) ... 
ok test_rank2 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank3 (test_signaltools.TestLinearFilterComplex64) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank2 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank3 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank2 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank3 (test_signaltools.TestLinearFilterDecimal) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank2 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank3 (test_signaltools.TestLinearFilterFloat32) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank2 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank3 (test_signaltools.TestLinearFilterFloat64) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank2 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank3 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_basic (test_signaltools.TestMedFilt) ... ok Ticket #1124. Ensure this does not segfault. ... ok test_basic (test_signaltools.TestOrderFilt) ... ok test_basic (test_signaltools.TestWiener) ... ok test_hyperbolic_at_zero (test_waveforms.TestChirp) ... ok test_hyperbolic_freq_01 (test_waveforms.TestChirp) ... ok test_hyperbolic_freq_02 (test_waveforms.TestChirp) ... ok test_hyperbolic_freq_03 (test_waveforms.TestChirp) ... ok test_integer_all (test_waveforms.TestChirp) ... ok test_integer_f0 (test_waveforms.TestChirp) ... ok test_integer_f1 (test_waveforms.TestChirp) ... ok test_integer_t1 (test_waveforms.TestChirp) ... ok test_linear_at_zero (test_waveforms.TestChirp) ... ok test_linear_freq_01 (test_waveforms.TestChirp) ... ok test_linear_freq_02 (test_waveforms.TestChirp) ... ok test_logarithmic_at_zero (test_waveforms.TestChirp) ... ok test_logarithmic_freq_01 (test_waveforms.TestChirp) ... ok test_logarithmic_freq_02 (test_waveforms.TestChirp) ... ok test_logarithmic_freq_03 (test_waveforms.TestChirp) ... ok test_quadratic_at_zero (test_waveforms.TestChirp) ... ok test_quadratic_at_zero2 (test_waveforms.TestChirp) ... ok test_quadratic_freq_01 (test_waveforms.TestChirp) ... 
ok test_quadratic_freq_02 (test_waveforms.TestChirp) ... ok test_unknown_method (test_waveforms.TestChirp) ... ok test_integer_bw (test_waveforms.TestGaussPulse) ... ok test_integer_bwr (test_waveforms.TestGaussPulse) ... ok test_integer_fc (test_waveforms.TestGaussPulse) ... ok test_integer_tpr (test_waveforms.TestGaussPulse) ... ok test_sweep_poly_const (test_waveforms.TestSweepPoly) ... ok test_sweep_poly_cubic (test_waveforms.TestSweepPoly) ... ok Use an array of coefficients instead of a poly1d. ... ok Use a list of coefficients instead of a poly1d. ... ok test_sweep_poly_linear (test_waveforms.TestSweepPoly) ... ok test_sweep_poly_quad1 (test_waveforms.TestSweepPoly) ... ok test_sweep_poly_quad2 (test_waveforms.TestSweepPoly) ... ok test_cascade (test_wavelets.TestWavelets) ... ok test_daub (test_wavelets.TestWavelets) ... ok test_morlet (test_wavelets.TestWavelets) ... ok test_qmf (test_wavelets.TestWavelets) ... ok test_windows.TestChebWin.test_cheb_even ... ok test_windows.TestChebWin.test_cheb_odd ... ok test_windows.TestGetWindow.test_boxcar ... ok test_windows.TestGetWindow.test_cheb_even ... ok test_windows.TestGetWindow.test_cheb_odd ... ok Getting factors of complex matrix ... SKIP: Skipping test: test_complex_lu UMFPACK appears not to be compiled Getting factors of real matrix ... SKIP: Skipping test: test_real_lu UMFPACK appears not to be compiled Getting factors of complex matrix ... SKIP: Skipping test: test_complex_lu UMFPACK appears not to be compiled Getting factors of real matrix ... SKIP: Skipping test: test_real_lu UMFPACK appears not to be compiled Prefactorize (with UMFPACK) matrix for solving with multiple rhs ... SKIP: Skipping test: test_factorized_umfpack UMFPACK appears not to be compiled Prefactorize matrix for solving with multiple rhs ... SKIP: Skipping test: test_factorized_without_umfpack UMFPACK appears not to be compiled Solve with UMFPACK: double precision complex ... SKIP: Skipping test: test_solve_complex_umfpack UMFPACK appears not to be compiled Solve: single precision complex ... SKIP: Skipping test: test_solve_complex_without_umfpack UMFPACK appears not to be compiled Solve with UMFPACK: double precision, sparse rhs ... SKIP: Skipping test: test_solve_sparse_rhs UMFPACK appears not to be compiled Solve with UMFPACK: double precision ... SKIP: Skipping test: test_solve_umfpack UMFPACK appears not to be compiled Solve: single precision ... SKIP: Skipping test: test_solve_without_umfpack UMFPACK appears not to be compiled test_non_square (test_linsolve.TestLinsolve) ... ok test_singular (test_linsolve.TestLinsolve) ... ok test_smoketest (test_linsolve.TestLinsolve) ... ok test_twodiags (test_linsolve.TestLinsolve) ... ok test_linsolve.TestSplu.test_lu_refcount ... ok test_linsolve.TestSplu.test_spilu_nnz0 ... ok test_linsolve.TestSplu.test_spilu_smoketest ... ok test_linsolve.TestSplu.test_splu_basic ... ok test_linsolve.TestSplu.test_splu_nnz0 ... ok test_linsolve.TestSplu.test_splu_perm ... ok test_linsolve.TestSplu.test_splu_smoketest ... ok test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ... ok test_no_convergence (test_arpack.TestEigenComplexNonSymmetric) ... ok test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) ... ok test_no_convergence (test_arpack.TestEigenComplexSymmetric) ... ok test_no_convergence (test_arpack.TestEigenNonSymmetric) ... ok test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) ... ok test_starting_vector (test_arpack.TestEigenNonSymmetric) ... 
ok test_no_convergence (test_arpack.TestEigenSymmetric) ... ok test_starting_vector (test_arpack.TestEigenSymmetric) ... ok test_symmetric_modes (test_arpack.TestEigenSymmetric) ... ok test_simple_complex (test_arpack.TestSparseSvd) ... ok test_simple_real (test_arpack.TestSparseSvd) ... ok test_arpack.test_eigen_bad_shapes ... ok test_arpack.test_eigs_operator ... ok test (test_speigs.TestEigs) ... ok test_lobpcg.test_Small ... ok test_lobpcg.test_ElasticRod ... ok test_lobpcg.test_MikotaPair ... ok test_lobpcg.test_trivial ... ok test_callback (test_iterative.TestGMRES) ... ok test whether all methods converge ... ok test whether maxiter is respected ... ok test whether all methods accept a trivial preconditioner ... ok Check that QMR works with left and right preconditioners ... ok test_outer_v (test_lgmres.TestLGMRES) ... ok test_preconditioner (test_lgmres.TestLGMRES) ... ok test_lsqr.test_basic ... ok test_utils.test_make_system_bad_shape ... ok test_basic (test_interface.TestAsLinearOperator) ... ok test_matvec (test_interface.TestLinearOperator) ... ok test_iterative.test_gmres_basic ... ok test_abs (test_base.TestBSR) ... ok test_add (test_base.TestBSR) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestBSR) ... ok test_asfptype (test_base.TestBSR) ... ok test_astype (test_base.TestBSR) ... ok test_bsr_matvec (test_base.TestBSR) ... ok test_bsr_matvecs (test_base.TestBSR) ... ok check native BSR format constructor ... ok construct from dense ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestBSR) ... ok test_elementwise_multiply (test_base.TestBSR) ... ok test_eliminate_zeros (test_base.TestBSR) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestBSR) ... ok test_from_list (test_base.TestBSR) ... ok test_from_matrix (test_base.TestBSR) ... ok test_from_sparse (test_base.TestBSR) ... ok test_getcol (test_base.TestBSR) ... ok test_getrow (test_base.TestBSR) ... ok test_idiv_scalar (test_base.TestBSR) ... ok test_imag (test_base.TestBSR) ... ok test_imul_scalar (test_base.TestBSR) ... ok test_invalid_shapes (test_base.TestBSR) ... ok test_matmat_dense (test_base.TestBSR) ... ok test_matmat_sparse (test_base.TestBSR) ... ok test_matvec (test_base.TestBSR) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestBSR) ... ok test_mul_scalar (test_base.TestBSR) ... ok test_neg (test_base.TestBSR) ... ok test_nonzero (test_base.TestBSR) ... ok test_pow (test_base.TestBSR) ... ok test_radd (test_base.TestBSR) ... ok test_real (test_base.TestBSR) ... ok test_repr (test_base.TestBSR) ... ok test_rmatvec (test_base.TestBSR) ... ok test_rmul_scalar (test_base.TestBSR) ... ok test_rsub (test_base.TestBSR) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok test_sparse_format_conversions (test_base.TestBSR) ... ok test_str (test_base.TestBSR) ... ok test_sub (test_base.TestBSR) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestBSR) ... ok test_tobsr (test_base.TestBSR) ... ok test_todense (test_base.TestBSR) ... ok test_transpose (test_base.TestBSR) ... ok test_abs (test_base.TestCOO) ... ok test_add (test_base.TestCOO) ... ok adding a dense matrix to a sparse matrix ... ok test_asfptype (test_base.TestCOO) ... 
ok test_astype (test_base.TestCOO) ... ok unsorted triplet format ... ok unsorted triplet format with duplicates (which are summed) ... ok empty matrix ... ok from dense matrix ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestCOO) ... ok test_elementwise_multiply (test_base.TestCOO) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestCOO) ... ok test_from_list (test_base.TestCOO) ... ok test_from_matrix (test_base.TestCOO) ... ok test_from_sparse (test_base.TestCOO) ... ok test_getcol (test_base.TestCOO) ... ok test_getrow (test_base.TestCOO) ... ok test_imag (test_base.TestCOO) ... ok test_invalid_shapes (test_base.TestCOO) ... ok test_matmat_dense (test_base.TestCOO) ... ok test_matmat_sparse (test_base.TestCOO) ... ok test_matvec (test_base.TestCOO) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mul_scalar (test_base.TestCOO) ... ok test_neg (test_base.TestCOO) ... ok test_nonzero (test_base.TestCOO) ... ok test_pow (test_base.TestCOO) ... ok test_radd (test_base.TestCOO) ... ok test_real (test_base.TestCOO) ... ok test_repr (test_base.TestCOO) ... ok test_rmatvec (test_base.TestCOO) ... ok test_rmul_scalar (test_base.TestCOO) ... ok test_rsub (test_base.TestCOO) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok test_sparse_format_conversions (test_base.TestCOO) ... ok test_str (test_base.TestCOO) ... ok test_sub (test_base.TestCOO) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestCOO) ... ok test_tobsr (test_base.TestCOO) ... ok test_todense (test_base.TestCOO) ... ok test_transpose (test_base.TestCOO) ... ok test_abs (test_base.TestCSC) ... ok test_add (test_base.TestCSC) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestCSC) ... ok test_asfptype (test_base.TestCSC) ... ok test_astype (test_base.TestCSC) ... ok test_constructor1 (test_base.TestCSC) ... ok test_constructor2 (test_base.TestCSC) ... ok test_constructor3 (test_base.TestCSC) ... ok using (data, ij) format ... ok infer dimensions from arrays ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestCSC) ... ok test_elementwise_multiply (test_base.TestCSC) ... ok test_eliminate_zeros (test_base.TestCSC) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_fancy_indexing (test_base.TestCSC) ... ok test_fancy_indexing_randomized (test_base.TestCSC) ... ok test_fancy_indexing_set (test_base.TestCSC) ... KNOWNFAIL: Fancy indexing is known to be broken for CSC matrices test_from_array (test_base.TestCSC) ... ok test_from_list (test_base.TestCSC) ... ok test_from_matrix (test_base.TestCSC) ... ok test_from_sparse (test_base.TestCSC) ... ok Test for new slice functionality (EJS) ... ok test_get_slices (test_base.TestCSC) ... ok Test for new slice functionality (EJS) ... ok test_getcol (test_base.TestCSC) ... ok test_getelement (test_base.TestCSC) ... ok test_getrow (test_base.TestCSC) ... ok test_idiv_scalar (test_base.TestCSC) ... ok test_imag (test_base.TestCSC) ... ok test_imul_scalar (test_base.TestCSC) ... ok test_invalid_shapes (test_base.TestCSC) ... ok test_matmat_dense (test_base.TestCSC) ... 
ok test_matmat_sparse (test_base.TestCSC) ... ok test_matvec (test_base.TestCSC) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestCSC) ... ok test_mul_scalar (test_base.TestCSC) ... ok test_neg (test_base.TestCSC) ... ok test_nonzero (test_base.TestCSC) ... ok test_pow (test_base.TestCSC) ... ok test_radd (test_base.TestCSC) ... ok test_real (test_base.TestCSC) ... ok test_repr (test_base.TestCSC) ... ok test_rmatvec (test_base.TestCSC) ... ok test_rmul_scalar (test_base.TestCSC) ... ok test_rsub (test_base.TestCSC) ... ok test_setelement (test_base.TestCSC) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sort_indices (test_base.TestCSC) ... ok test_sparse_format_conversions (test_base.TestCSC) ... ok test_str (test_base.TestCSC) ... ok test_sub (test_base.TestCSC) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestCSC) ... ok test_tobsr (test_base.TestCSC) ... ok test_todense (test_base.TestCSC) ... ok test_transpose (test_base.TestCSC) ... ok test_unsorted_arithmetic (test_base.TestCSC) ... ok test_abs (test_base.TestCSR) ... ok test_add (test_base.TestCSR) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestCSR) ... ok test_asfptype (test_base.TestCSR) ... ok test_astype (test_base.TestCSR) ... ok test_constructor1 (test_base.TestCSR) ... ok test_constructor2 (test_base.TestCSR) ... ok test_constructor3 (test_base.TestCSR) ... ok using (data, ij) format ... ok infer dimensions from arrays ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestCSR) ... ok test_elementwise_multiply (test_base.TestCSR) ... ok test_eliminate_zeros (test_base.TestCSR) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_fancy_indexing (test_base.TestCSR) ... ok test_fancy_indexing_randomized (test_base.TestCSR) ... ok test_fancy_indexing_set (test_base.TestCSR) ... KNOWNFAIL: Fancy indexing is known to be broken for CSR matrices test_from_array (test_base.TestCSR) ... ok test_from_list (test_base.TestCSR) ... ok test_from_matrix (test_base.TestCSR) ... ok test_from_sparse (test_base.TestCSR) ... ok Test for new slice functionality (EJS) ... ok test_get_slices (test_base.TestCSR) ... ok Test for new slice functionality (EJS) ... ok test_getcol (test_base.TestCSR) ... ok test_getelement (test_base.TestCSR) ... ok test_getrow (test_base.TestCSR) ... ok test_idiv_scalar (test_base.TestCSR) ... ok test_imag (test_base.TestCSR) ... ok test_imul_scalar (test_base.TestCSR) ... ok test_invalid_shapes (test_base.TestCSR) ... ok test_matmat_dense (test_base.TestCSR) ... ok test_matmat_sparse (test_base.TestCSR) ... ok test_matvec (test_base.TestCSR) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestCSR) ... ok test_mul_scalar (test_base.TestCSR) ... ok test_neg (test_base.TestCSR) ... ok test_nonzero (test_base.TestCSR) ... ok test_pow (test_base.TestCSR) ... ok test_radd (test_base.TestCSR) ... ok test_real (test_base.TestCSR) ... ok test_repr (test_base.TestCSR) ... ok test_rmatvec (test_base.TestCSR) ... ok test_rmul_scalar (test_base.TestCSR) ... ok test_rsub (test_base.TestCSR) ... ok test_setelement (test_base.TestCSR) ... 
ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sort_indices (test_base.TestCSR) ... ok test_sparse_format_conversions (test_base.TestCSR) ... ok test_str (test_base.TestCSR) ... ok test_sub (test_base.TestCSR) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestCSR) ... ok test_tobsr (test_base.TestCSR) ... ok test_todense (test_base.TestCSR) ... ok test_transpose (test_base.TestCSR) ... ok test_unsorted_arithmetic (test_base.TestCSR) ... ok test_abs (test_base.TestDIA) ... ok test_add (test_base.TestDIA) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestDIA) ... ok test_asfptype (test_base.TestDIA) ... ok test_astype (test_base.TestDIA) ... ok test_constructor1 (test_base.TestDIA) ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestDIA) ... ok test_elementwise_multiply (test_base.TestDIA) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestDIA) ... ok test_from_list (test_base.TestDIA) ... ok test_from_matrix (test_base.TestDIA) ... ok test_from_sparse (test_base.TestDIA) ... ok test_getcol (test_base.TestDIA) ... ok test_getrow (test_base.TestDIA) ... ok test_imag (test_base.TestDIA) ... ok test_invalid_shapes (test_base.TestDIA) ... ok test_matmat_dense (test_base.TestDIA) ... ok test_matmat_sparse (test_base.TestDIA) ... ok test_matvec (test_base.TestDIA) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestDIA) ... ok test_mul_scalar (test_base.TestDIA) ... ok test_neg (test_base.TestDIA) ... ok test_nonzero (test_base.TestDIA) ... ok test_pow (test_base.TestDIA) ... ok test_radd (test_base.TestDIA) ... ok test_real (test_base.TestDIA) ... ok test_repr (test_base.TestDIA) ... ok test_rmatvec (test_base.TestDIA) ... ok test_rmul_scalar (test_base.TestDIA) ... ok test_rsub (test_base.TestDIA) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok test_sparse_format_conversions (test_base.TestDIA) ... ok test_str (test_base.TestDIA) ... ok test_sub (test_base.TestDIA) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestDIA) ... ok test_tobsr (test_base.TestDIA) ... ok test_todense (test_base.TestDIA) ... ok test_transpose (test_base.TestDIA) ... ok test_abs (test_base.TestDOK) ... ok test_add (test_base.TestDOK) ... ok adding a dense matrix to a sparse matrix ... ok test_asfptype (test_base.TestDOK) ... ok test_astype (test_base.TestDOK) ... ok Test provided by Andrew Straw.? Fails in SciPy <= r1477. ... ok Check whether the copy=True and copy=False keywords work ... ok test_ctor (test_base.TestDOK) ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestDOK) ... ok test_elementwise_multiply (test_base.TestDOK) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestDOK) ... ok test_from_list (test_base.TestDOK) ... ok test_from_matrix (test_base.TestDOK) ... ok test_from_sparse (test_base.TestDOK) ... ok test_getcol (test_base.TestDOK) ... ok test_getelement (test_base.TestDOK) ... ok test_getrow (test_base.TestDOK) ... 
ok test_imag (test_base.TestDOK) ... ok test_invalid_shapes (test_base.TestDOK) ... ok test_matmat_dense (test_base.TestDOK) ... ok test_matmat_sparse (test_base.TestDOK) ... ok test_matvec (test_base.TestDOK) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mul_scalar (test_base.TestDOK) ... ok test_mult (test_base.TestDOK) ... ok test_neg (test_base.TestDOK) ... ok test_nonzero (test_base.TestDOK) ... ok test_pow (test_base.TestDOK) ... ok test_radd (test_base.TestDOK) ... ok test_real (test_base.TestDOK) ... ok test_repr (test_base.TestDOK) ... ok A couple basic tests of the resize() method. ... ok test_rmatvec (test_base.TestDOK) ... ok test_rmul_scalar (test_base.TestDOK) ... ok test_rsub (test_base.TestDOK) ... ok Test for slice functionality (EJS) ... ok test_setelement (test_base.TestDOK) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sparse_format_conversions (test_base.TestDOK) ... ok test_str (test_base.TestDOK) ... ok test_sub (test_base.TestDOK) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok Regression test for ticket #1160. ... ok test_toarray (test_base.TestDOK) ... ok test_tobsr (test_base.TestDOK) ... ok test_todense (test_base.TestDOK) ... ok test_transpose (test_base.TestDOK) ... ok test_abs (test_base.TestLIL) ... ok test_add (test_base.TestLIL) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestLIL) ... ok test_asfptype (test_base.TestLIL) ... ok test_astype (test_base.TestLIL) ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_dot (test_base.TestLIL) ... ok test_elementwise_divide (test_base.TestLIL) ... ok test_elementwise_multiply (test_base.TestLIL) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_fancy_indexing (test_base.TestLIL) ... ok test_fancy_indexing_randomized (test_base.TestLIL) ... KNOWNFAIL: Fancy indexing is known to be broken for LIL matrices test_fancy_indexing_set (test_base.TestLIL) ... KNOWNFAIL: Fancy indexing is known to be broken for LIL matrices test_from_array (test_base.TestLIL) ... ok test_from_list (test_base.TestLIL) ... ok test_from_matrix (test_base.TestLIL) ... ok test_from_sparse (test_base.TestLIL) ... ok Test for new slice functionality (EJS) ... ok test_get_slices (test_base.TestLIL) ... ok Test for new slice functionality (EJS) ... ok test_getcol (test_base.TestLIL) ... ok test_getelement (test_base.TestLIL) ... ok test_getrow (test_base.TestLIL) ... ok test_idiv_scalar (test_base.TestLIL) ... ok test_imag (test_base.TestLIL) ... ok test_imul_scalar (test_base.TestLIL) ... ok test_inplace_ops (test_base.TestLIL) ... ok test_invalid_shapes (test_base.TestLIL) ... ok Tests whether a lil_matrix can be constructed from a ... ok test_lil_iteration (test_base.TestLIL) ... ok Tests whether a row of one lil_matrix can be assigned to ... ok test_lil_sequence_assignment (test_base.TestLIL) ... ok test_lil_slice_assignment (test_base.TestLIL) ... ok test_matmat_dense (test_base.TestLIL) ... ok test_matmat_sparse (test_base.TestLIL) ... ok test_matvec (test_base.TestLIL) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestLIL) ... ok test_mul_scalar (test_base.TestLIL) ... ok test_neg (test_base.TestLIL) ... ok test_nonzero (test_base.TestLIL) ... 
ok test_point_wise_multiply (test_base.TestLIL) ... ok test_pow (test_base.TestLIL) ... ok test_radd (test_base.TestLIL) ... ok test_real (test_base.TestLIL) ... ok test_repr (test_base.TestLIL) ... ok test_reshape (test_base.TestLIL) ... ok test_rmatvec (test_base.TestLIL) ... ok test_rmul_scalar (test_base.TestLIL) ... ok test_rsub (test_base.TestLIL) ... ok test_scalar_mul (test_base.TestLIL) ... ok test_setelement (test_base.TestLIL) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sparse_format_conversions (test_base.TestLIL) ... ok test_str (test_base.TestLIL) ... ok test_sub (test_base.TestLIL) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestLIL) ... ok test_tobsr (test_base.TestLIL) ... ok test_todense (test_base.TestLIL) ... ok test_transpose (test_base.TestLIL) ... ok test_bmat (test_construct.TestConstructUtils) ... ok test_eye (test_construct.TestConstructUtils) ... ok test_hstack (test_construct.TestConstructUtils) ... ok test_identity (test_construct.TestConstructUtils) ... ok test_kron (test_construct.TestConstructUtils) ... ok test_kronsum (test_construct.TestConstructUtils) ... ok test_rand (test_construct.TestConstructUtils) ... ok test_spdiags (test_construct.TestConstructUtils) ... ok test_vstack (test_construct.TestConstructUtils) ... ok test_tril (test_extract.TestExtract) ... ok test_triu (test_extract.TestExtract) ... ok test_count_blocks (test_spfuncs.TestSparseFunctions) ... ok test_cs_graph_components (test_spfuncs.TestSparseFunctions) ... ok test_estimate_blocksize (test_spfuncs.TestSparseFunctions) ... ok test_scale_rows_and_cols (test_spfuncs.TestSparseFunctions) ... ok test_getdtype (test_sputils.TestSparseUtils) ... ok test_isdense (test_sputils.TestSparseUtils) ... ok test_isintlike (test_sputils.TestSparseUtils) ... ok test_isscalarlike (test_sputils.TestSparseUtils) ... ok test_issequence (test_sputils.TestSparseUtils) ... ok test_isshape (test_sputils.TestSparseUtils) ... ok test_upcast (test_sputils.TestSparseUtils) ... ok Tests cdist(X, 'braycurtis') on random data. ... ok Tests cdist(X, 'canberra') on random data. ... ok Tests cdist(X, 'chebychev') on random data. ... ok Tests cdist(X, 'cityblock') on random data. ... ok Tests cdist(X, 'correlation') on random data. ... ok Tests cdist(X, 'cosine') on random data. ... ok Tests cdist(X, 'dice') on random data. ... ok Tests cdist(X, 'euclidean') on random data. ... ok Tests cdist(X, u'euclidean') using unicode metric string ... ok Tests cdist(X, 'hamming') on random boolean data. ... ok Tests cdist(X, 'hamming') on random data. ... ok Tests cdist(X, 'jaccard') on random boolean data. ... ok Tests cdist(X, 'jaccard') on random data. ... ok Tests cdist(X, 'kulsinski') on random data. ... ok Tests cdist(X, 'mahalanobis') on random data. ... ok Tests cdist(X, 'matching') on random data. ... ok Tests cdist(X, 'minkowski') on random data. (p=1.23) ... ok Tests cdist(X, 'minkowski') on random data. (p=3.8) ... ok Tests cdist(X, 'minkowski') on random data. (p=4.6) ... ok Tests cdist(X, 'rogerstanimoto') on random data. ... ok Tests cdist(X, 'russellrao') on random data. ... ok Tests cdist(X, 'seuclidean') on random data. ... ok Tests cdist(X, 'sokalmichener') on random data. ... ok Tests cdist(X, 'sokalsneath') on random data. ... ok Tests cdist(X, 'sqeuclidean') on random data. ... 
ok Tests cdist(X, 'wminkowski') on random data. (p=1.23) ... ok Tests cdist(X, 'wminkowski') on random data. (p=3.8) ... ok Tests cdist(X, 'wminkowski') on random data. (p=4.6) ... ok Tests cdist(X, 'yule') on random data. ... ok Tests is_valid_dm(*) on an assymetric distance matrix. Exception expected. ... ok Tests is_valid_dm(*) on an assymetric distance matrix. False expected. ... ok Tests is_valid_dm(*) on a correct 1x1. True expected. ... ok Tests is_valid_dm(*) on a correct 2x2. True expected. ... ok Tests is_valid_dm(*) on a correct 3x3. True expected. ... ok Tests is_valid_dm(*) on a correct 4x4. True expected. ... ok Tests is_valid_dm(*) on a correct 5x5. True expected. ... ok Tests is_valid_dm(*) on a 1D array. Exception expected. ... ok Tests is_valid_dm(*) on a 1D array. False expected. ... ok Tests is_valid_dm(*) on a 3D array. Exception expected. ... ok Tests is_valid_dm(*) on a 3D array. False expected. ... ok Tests is_valid_dm(*) on an int16 array. Exception expected. ... ok Tests is_valid_dm(*) on an int16 array. False expected. ... ok Tests is_valid_dm(*) on a distance matrix with a nonzero diagonal. Exception expected. ... ok Tests is_valid_dm(*) on a distance matrix with a nonzero diagonal. False expected. ... ok Tests is_valid_y(*) on 100 improper condensed distance matrices. Expecting exception. ... ok Tests is_valid_y(*) on a correct 2x2 condensed. True expected. ... ok Tests is_valid_y(*) on a correct 3x3 condensed. True expected. ... ok Tests is_valid_y(*) on a correct 4x4 condensed. True expected. ... ok Tests is_valid_y(*) on a correct 5x5 condensed. True expected. ... ok Tests is_valid_y(*) on a 2D array. Exception expected. ... ok Tests is_valid_y(*) on a 2D array. False expected. ... ok Tests is_valid_y(*) on a 3D array. Exception expected. ... ok Tests is_valid_y(*) on a 3D array. False expected. ... ok Tests is_valid_y(*) on an int16 array. Exception expected. ... ok Tests is_valid_y(*) on an int16 array. False expected. ... ok Tests num_obs_dm(D) on a 0x0 distance matrix. Expecting exception. ... ok Tests num_obs_dm(D) on a 1x1 distance matrix. ... ok Tests num_obs_dm(D) on a 2x2 distance matrix. ... ok Tests num_obs_dm(D) on a 3x3 distance matrix. ... ok Tests num_obs_dm(D) on a 4x4 distance matrix. ... ok Tests num_obs_dm with observation matrices of multiple sizes. ... ok Tests num_obs_y(y) on a condensed distance matrix over 1 observations. Expecting exception. ... ok Tests num_obs_y(y) on a condensed distance matrix over 2 observations. ... ok Tests num_obs_y(y) on 100 improper condensed distance matrices. Expecting exception. ... ok Tests num_obs_y(y) on a condensed distance matrix over 3 observations. ... ok Tests num_obs_y(y) on a condensed distance matrix over 4 observations. ... ok Tests num_obs_y(y) on a condensed distance matrix between 5 and 15 observations. ... ok Tests num_obs_y with observation matrices of multiple sizes. ... ok Tests pdist(X, 'canberra') to see if the two implementations match on the Iris data set. ... ok Tests pdist(X, 'canberra') to see if Canberra gives the right result as reported in Scipy bug report 711. ... ok Tests pdist(X, 'chebychev') on the Iris data set. ... ok Tests pdist(X, 'chebychev') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_chebychev') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'chebychev') on random data. ... ok Tests pdist(X, 'chebychev') on random data. (float32) ... ok Tests pdist(X, 'test_chebychev') [the non-C implementation] on random data. ... 
ok Tests pdist(X, 'cityblock') on the Iris data set. ... ok Tests pdist(X, 'cityblock') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_cityblock') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'cityblock') on random data. ... ok Tests pdist(X, 'cityblock') on random data. (float32) ... ok Tests pdist(X, 'test_cityblock') [the non-C implementation] on random data. ... ok Tests pdist(X, 'correlation') on the Iris data set. ... ok Tests pdist(X, 'correlation') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_correlation') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'correlation') on random data. ... ok Tests pdist(X, 'correlation') on random data. (float32) ... ok Tests pdist(X, 'test_correlation') [the non-C implementation] on random data. ... ok Tests pdist(X, 'cosine') on the Iris data set. ... ok Tests pdist(X, 'cosine') on the Iris data set. ... ok Tests pdist(X, 'test_cosine') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'cosine') on random data. ... ok Tests pdist(X, 'cosine') on random data. (float32) ... ok Tests pdist(X, 'test_cosine') [the non-C implementation] on random data. ... ok Tests pdist(X, 'hamming') on random data. ... ok Tests pdist(X, 'hamming') on random data. (float32) ... ok Tests pdist(X, 'test_hamming') [the non-C implementation] on random data. ... ok Tests pdist(X, 'dice') to see if the two implementations match on random double input data. ... ok Tests dice(*,*) with mtica example #1. ... ok Tests dice(*,*) with mtica example #2. ... ok Tests pdist(X, 'jaccard') on random data. ... ok Tests pdist(X, 'jaccard') on random data. (float32) ... ok Tests pdist(X, 'test_jaccard') [the non-C implementation] on random data. ... ok Tests pdist(X, 'euclidean') on the Iris data set. ... ok Tests pdist(X, 'euclidean') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_euclidean') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'euclidean') on random data. ... ok Tests pdist(X, 'euclidean') on random data (float32). ... ok Tests pdist(X, 'test_euclidean') [the non-C implementation] on random data. ... ok Tests pdist(X, 'euclidean') with unicode metric string ... ok Tests pdist(X, 'hamming') on random data. ... ok Tests pdist(X, 'hamming') on random data. ... ok Tests pdist(X, 'test_hamming') [the non-C implementation] on random data. ... ok Tests pdist(X, 'jaccard') to see if the two implementations match on random double input data. ... ok Tests jaccard(*,*) with mtica example #1. ... ok Tests jaccard(*,*) with mtica example #2. ... ok Tests pdist(X, 'jaccard') on random data. ... ok Tests pdist(X, 'jaccard') on random data. (float32) ... ok Tests pdist(X, 'test_jaccard') [the non-C implementation] on random data. ... ok Tests pdist(X, 'kulsinski') to see if the two implementations match on random double input data. ... ok Tests pdist(X, 'matching') to see if the two implementations match on random boolean input data. ... ok Tests matching(*,*) with mtica example #1 (nums). ... ok Tests matching(*,*) with mtica example #2. ... ok Tests pdist(X, 'minkowski') on iris data. ... ok Tests pdist(X, 'minkowski') on iris data. (float32) ... ok Tests pdist(X, 'test_minkowski') [the non-C implementation] on iris data. ... ok Tests pdist(X, 'minkowski') on iris data. ... ok Tests pdist(X, 'minkowski') on iris data. (float32) ... ok Tests pdist(X, 'test_minkowski') [the non-C implementation] on iris data. ... ok Tests pdist(X, 'minkowski') on random data. 
... ok Tests pdist(X, 'minkowski') on random data. (float32) ... ok Tests pdist(X, 'test_minkowski') [the non-C implementation] on random data. ... ok Tests pdist(X, 'rogerstanimoto') to see if the two implementations match on random double input data. ... ok Tests rogerstanimoto(*,*) with mtica example #1. ... ok Tests rogerstanimoto(*,*) with mtica example #2. ... ok Tests pdist(X, 'russellrao') to see if the two implementations match on random double input data. ... ok Tests russellrao(*,*) with mtica example #1. ... ok Tests russellrao(*,*) with mtica example #2. ... ok Tests pdist(X, 'seuclidean') on the Iris data set. ... ok Tests pdist(X, 'seuclidean') on the Iris data set (float32). ... ok Tests pdist(X, 'test_seuclidean') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'seuclidean') on random data. ... ok Tests pdist(X, 'seuclidean') on random data (float32). ... ok Tests pdist(X, 'test_sqeuclidean') [the non-C implementation] on random data. ... ok Tests pdist(X, 'sokalmichener') to see if the two implementations match on random double input data. ... ok Tests pdist(X, 'sokalsneath') to see if the two implementations match on random double input data. ... ok Tests sokalsneath(*,*) with mtica example #1. ... ok Tests sokalsneath(*,*) with mtica example #2. ... ok test_pdist_wminkowski (test_distance.TestPdist) ... ok Tests pdist(X, 'yule') to see if the two implementations match on random double input data. ... ok Tests yule(*,*) with mtica example #1. ... ok Tests yule(*,*) with mtica example #2. ... ok Tests squareform on a 1x1 matrix. ... ok Tests squareform on a 2x2 matrix. ... ok Tests squareform on an empty matrix. ... ok Tests squareform on an empty vector. ... ok Tests squareform on a square matrices of multiple sizes. ... ok Tests squareform on a 1-D array, length=1. ... ok Loading test data files for the scipy.spatial.distance tests. ... ok Regression test for ticket #876 ... ok test_kdtree.test_count_neighbors.test_large_radius ... ok test_kdtree.test_count_neighbors.test_multiple_radius ... ok test_kdtree.test_count_neighbors.test_one_radius ... ok test_kdtree.test_random.test_approx ... ok test_kdtree.test_random.test_m_nearest ... ok test_kdtree.test_random.test_nearest ... ok test_kdtree.test_random.test_points_near ... ok test_kdtree.test_random.test_points_near_l1 ... ok test_kdtree.test_random.test_points_near_linf ... ok test_kdtree.test_random_ball.test_found_all ... ok test_kdtree.test_random_ball.test_in_ball ... ok test_kdtree.test_random_ball_approx.test_found_all ... ok test_kdtree.test_random_ball_approx.test_in_ball ... ok test_kdtree.test_random_ball_far.test_found_all ... ok test_kdtree.test_random_ball_far.test_in_ball ... ok test_kdtree.test_random_ball_l1.test_found_all ... ok test_kdtree.test_random_ball_l1.test_in_ball ... ok test_kdtree.test_random_ball_linf.test_found_all ... ok test_kdtree.test_random_ball_linf.test_in_ball ... ok test_kdtree.test_random_compiled.test_approx ... ok test_kdtree.test_random_compiled.test_m_nearest ... ok test_kdtree.test_random_compiled.test_nearest ... ok test_kdtree.test_random_compiled.test_points_near ... ok test_kdtree.test_random_compiled.test_points_near_l1 ... ok test_kdtree.test_random_compiled.test_points_near_linf ... ok test_kdtree.test_random_far.test_approx ... ok test_kdtree.test_random_far.test_m_nearest ... ok test_kdtree.test_random_far.test_nearest ... ok test_kdtree.test_random_far.test_points_near ... ok test_kdtree.test_random_far.test_points_near_l1 ... 
ok test_kdtree.test_random_far.test_points_near_linf ... ok test_kdtree.test_random_far_compiled.test_approx ... ok test_kdtree.test_random_far_compiled.test_m_nearest ... ok test_kdtree.test_random_far_compiled.test_nearest ... ok test_kdtree.test_random_far_compiled.test_points_near ... ok test_kdtree.test_random_far_compiled.test_points_near_l1 ... ok test_kdtree.test_random_far_compiled.test_points_near_linf ... ok test_kdtree.test_rectangle.test_max_inside ... ok test_kdtree.test_rectangle.test_max_one_side ... ok test_kdtree.test_rectangle.test_max_two_sides ... ok test_kdtree.test_rectangle.test_min_inside ... ok test_kdtree.test_rectangle.test_min_one_side ... ok test_kdtree.test_rectangle.test_min_two_sides ... ok test_kdtree.test_rectangle.test_split ... ok test_kdtree.test_small.test_approx ... ok test_kdtree.test_small.test_m_nearest ... ok test_kdtree.test_small.test_nearest ... ok test_kdtree.test_small.test_nearest_two ... ok test_kdtree.test_small.test_points_near ... ok test_kdtree.test_small.test_points_near_l1 ... ok test_kdtree.test_small.test_points_near_linf ... ok test_kdtree.test_small_compiled.test_approx ... ok test_kdtree.test_small_compiled.test_m_nearest ... ok test_kdtree.test_small_compiled.test_nearest ... ok test_kdtree.test_small_compiled.test_nearest_two ... ok test_kdtree.test_small_compiled.test_points_near ... ok test_kdtree.test_small_compiled.test_points_near_l1 ... ok test_kdtree.test_small_compiled.test_points_near_linf ... ok test_kdtree.test_small_nonleaf.test_approx ... ok test_kdtree.test_small_nonleaf.test_m_nearest ... ok test_kdtree.test_small_nonleaf.test_nearest ... ok test_kdtree.test_small_nonleaf.test_nearest_two ... ok test_kdtree.test_small_nonleaf.test_points_near ... ok test_kdtree.test_small_nonleaf.test_points_near_l1 ... ok test_kdtree.test_small_nonleaf.test_points_near_linf ... ok test_kdtree.test_small_nonleaf_compiled.test_approx ... ok test_kdtree.test_small_nonleaf_compiled.test_m_nearest ... ok test_kdtree.test_small_nonleaf_compiled.test_nearest ... ok test_kdtree.test_small_nonleaf_compiled.test_nearest_two ... ok test_kdtree.test_small_nonleaf_compiled.test_points_near ... ok test_kdtree.test_small_nonleaf_compiled.test_points_near_l1 ... ok test_kdtree.test_small_nonleaf_compiled.test_points_near_linf ... ok test_kdtree.test_sparse_distance_matrix.test_consistency_with_neighbors ... ok test_kdtree.test_sparse_distance_matrix.test_zero_distance ... ok test_kdtree.test_two_random_trees.test_all_in_ball ... ok test_kdtree.test_two_random_trees.test_found_all ... ok test_kdtree.test_two_random_trees_far.test_all_in_ball ... ok test_kdtree.test_two_random_trees_far.test_found_all ... ok test_kdtree.test_two_random_trees_linf.test_all_in_ball ... ok test_kdtree.test_two_random_trees_linf.test_found_all ... ok test_kdtree.test_vectorization.test_single_query ... ok test_kdtree.test_vectorization.test_single_query_all_neighbors ... ok test_kdtree.test_vectorization.test_single_query_multiple_neighbors ... ok test_kdtree.test_vectorization.test_vectorized_query ... ok test_kdtree.test_vectorization.test_vectorized_query_all_neighbors ... ok test_kdtree.test_vectorization.test_vectorized_query_multiple_neighbors ... ok test_kdtree.test_vectorization_compiled.test_single_query ... ok test_kdtree.test_vectorization_compiled.test_single_query_multiple_neighbors ... ok test_kdtree.test_vectorization_compiled.test_vectorized_query ... ok test_kdtree.test_vectorization_compiled.test_vectorized_query_multiple_neighbors ... 
ok test_kdtree.test_vectorization_compiled.test_vectorized_query_noncontiguous_values ... ok test_kdtree.test_random_ball_vectorized ... ok test_kdtree.test_distance_l2 ... ok test_kdtree.test_distance_l1 ... ok test_kdtree.test_distance_linf ... ok test_kdtree.test_distance_vectorization ... ok test_kdtree.test_distance_matrix ... ok test_kdtree.test_distance_matrix_looping ... ok test_kdtree.test_onetree_query(, 0.10000000000000001) ... ok test_kdtree.test_onetree_query(, 0.10000000000000001) ... ok test_kdtree.test_onetree_query(, 0.001) ... ok test_kdtree.test_onetree_query(, 1.0000000000000001e-05) ... ok test_kdtree.test_onetree_query(, 9.9999999999999995e-07) ... ok test_kdtree.test_query_pairs_single_node ... ok test_qhull.TestRidgeIter2D.test_complicated ... ok test_qhull.TestRidgeIter2D.test_rectangle ... ok test_qhull.TestRidgeIter2D.test_triangle ... ok test_qhull.TestTriangulation.test_2d_square ... ok test_qhull.TestTriangulation.test_duplicate_points ... ok test_qhull.TestTriangulation.test_nd_simplex ... ok test_qhull.TestTriangulation.test_pathological ... ok test_qhull.TestUtilities.test_convex_hull ... ok test_qhull.TestUtilities.test_find_simplex ... ok test_qhull.TestUtilities.test_plane_distance ... ok test_ai_zeros (test_basic.TestAiry) ... ok test_airy (test_basic.TestAiry) ... ok test_airye (test_basic.TestAiry) ... ok test_bi_zeros (test_basic.TestAiry) ... ok test_assoc_laguerre (test_basic.TestAssocLaguerre) ... ok test_bernoulli (test_basic.TestBernoulli) ... ok test_i0 (test_basic.TestBessel) ... ok test_i0_series (test_basic.TestBessel) ... ok test_i0e (test_basic.TestBessel) ... ok test_i1 (test_basic.TestBessel) ... ok test_i1_series (test_basic.TestBessel) ... ok test_i1e (test_basic.TestBessel) ... ok test_it2i0k0 (test_basic.TestBessel) ... ok test_it2j0y0 (test_basic.TestBessel) ... ok test_iti0k0 (test_basic.TestBessel) ... ok test_itj0y0 (test_basic.TestBessel) ... ok test_iv (test_basic.TestBessel) ... ok test_iv_cephes_vs_amos (test_basic.TestBessel) ... ok test_iv_hyperg_poles (test_basic.TestBessel) ... ok test_iv_series (test_basic.TestBessel) ... ok test_ive (test_basic.TestBessel) ... ok test_ivp (test_basic.TestBessel) ... ok test_ivp0 (test_basic.TestBessel) ... ok test_j0 (test_basic.TestBessel) ... ok test_j1 (test_basic.TestBessel) ... ok test_jacobi (test_basic.TestBessel) ... ok test_jn (test_basic.TestBessel) ... ok test_jn_zeros (test_basic.TestBessel) ... ok test_jn_zeros_slow (test_basic.TestBessel) ... ok test_jnjnp_zeros (test_basic.TestBessel) ... ok test_jnp_zeros (test_basic.TestBessel) ... ok test_jnyn_zeros (test_basic.TestBessel) ... ok test_jv (test_basic.TestBessel) ... ok test_jv_cephes_vs_amos (test_basic.TestBessel) ... ok test_jve (test_basic.TestBessel) ... ok test_jvp (test_basic.TestBessel) ... ok test_k0 (test_basic.TestBessel) ... ok test_k0e (test_basic.TestBessel) ... ok test_k1 (test_basic.TestBessel) ... ok test_k1e (test_basic.TestBessel) ... ok test_kn (test_basic.TestBessel) ... ok test_kv0 (test_basic.TestBessel) ... ok test_kv1 (test_basic.TestBessel) ... ok test_kv2 (test_basic.TestBessel) ... ok test_kv_cephes_vs_amos (test_basic.TestBessel) ... ok test_kve (test_basic.TestBessel) ... ok test_kvp_n1 (test_basic.TestBessel) ... ok test_kvp_n2 (test_basic.TestBessel) ... ok test_kvp_v0n1 (test_basic.TestBessel) ... ok test_negv_iv (test_basic.TestBessel) ... ok test_negv_ive (test_basic.TestBessel) ... ok test_negv_jv (test_basic.TestBessel) ... ok test_negv_jve (test_basic.TestBessel) ... 
ok test_negv_kv (test_basic.TestBessel) ... ok test_negv_kve (test_basic.TestBessel) ... ok test_negv_yv (test_basic.TestBessel) ... ok test_negv_yve (test_basic.TestBessel) ... ok Real-valued Bessel I overflow ... ok test_ticket_623 (test_basic.TestBessel) ... ok Negative-order Bessels ... ok Real-valued Bessel domains ... ok test_y0 (test_basic.TestBessel) ... ok test_y0_zeros (test_basic.TestBessel) ... ok test_y1 (test_basic.TestBessel) ... ok test_y1_zeros (test_basic.TestBessel) ... ok test_y1p_zeros (test_basic.TestBessel) ... ok test_yn (test_basic.TestBessel) ... ok test_yn_zeros (test_basic.TestBessel) ... ok test_ynp_zeros (test_basic.TestBessel) ... ok test_ynp_zeros_large_order (test_basic.TestBessel) ... KNOWNFAIL: cephes/yv is not eps accurate for large orders on all platforms, and has nan/inf issues test_yv (test_basic.TestBessel) ... ok test_yv_cephes_vs_amos (test_basic.TestBessel) ... KNOWNFAIL: cephes/yv is not eps accurate for large orders on all platforms, and has nan/inf issues test_yv_cephes_vs_amos_only_small_orders (test_basic.TestBessel) ... ok test_yve (test_basic.TestBessel) ... ok test_yvp (test_basic.TestBessel) ... ok test_besselpoly (test_basic.TestBesselpoly) ... ok test_beta (test_basic.TestBeta) ... ok test_betainc (test_basic.TestBeta) ... ok test_betaincinv (test_basic.TestBeta) ... ok test_betaln (test_basic.TestBeta) ... ok test_airy (test_basic.TestCephes) ... ok test_airye (test_basic.TestCephes) ... ok test_bdtr (test_basic.TestCephes) ... ok test_bdtrc (test_basic.TestCephes) ... ok test_bdtri (test_basic.TestCephes) ... ok test_bdtrik (test_basic.TestCephes) ... ok test_bdtrin (test_basic.TestCephes) ... ok test_bei (test_basic.TestCephes) ... ok test_beip (test_basic.TestCephes) ... ok test_ber (test_basic.TestCephes) ... ok test_berp (test_basic.TestCephes) ... ok test_besselpoly (test_basic.TestCephes) ... ok test_beta (test_basic.TestCephes) ... ok test_betainc (test_basic.TestCephes) ... ok test_betaincinv (test_basic.TestCephes) ... ok test_betaln (test_basic.TestCephes) ... ok test_btdtr (test_basic.TestCephes) ... ok test_btdtri (test_basic.TestCephes) ... ok test_btdtria (test_basic.TestCephes) ... ok test_btdtrib (test_basic.TestCephes) ... ok test_cbrt (test_basic.TestCephes) ... ok test_chdtr (test_basic.TestCephes) ... ok test_chdtrc (test_basic.TestCephes) ... ok test_chdtri (test_basic.TestCephes) ... ok test_chdtriv (test_basic.TestCephes) ... ok test_chndtr (test_basic.TestCephes) ... ok test_chndtridf (test_basic.TestCephes) ... ok test_chndtrinc (test_basic.TestCephes) ... ok test_chndtrix (test_basic.TestCephes) ... ok test_cosdg (test_basic.TestCephes) ... ok test_cosm1 (test_basic.TestCephes) ... ok test_cotdg (test_basic.TestCephes) ... ok test_dawsn (test_basic.TestCephes) ... ok test_ellipe (test_basic.TestCephes) ... ok test_ellipeinc (test_basic.TestCephes) ... ok test_ellipj (test_basic.TestCephes) ... ok test_ellipk (test_basic.TestCephes) ... ok test_ellipkinc (test_basic.TestCephes) ... ok test_erf (test_basic.TestCephes) ... ok test_erfc (test_basic.TestCephes) ... ok test_exp1 (test_basic.TestCephes) ... ok test_exp10 (test_basic.TestCephes) ... ok test_exp1_reg (test_basic.TestCephes) ... ok test_exp2 (test_basic.TestCephes) ... ok test_expi (test_basic.TestCephes) ... ok test_expm1 (test_basic.TestCephes) ... ok test_expn (test_basic.TestCephes) ... ok test_fdtr (test_basic.TestCephes) ... ok test_fdtrc (test_basic.TestCephes) ... ok test_fdtri (test_basic.TestCephes) ... 
test_basic.TestCephes ... ok for the remaining cephes wrappers, test_fdtridfd through test_zeta and test_zetac.
test_basic.TestEllip (test_ellipe, test_ellipeinc, test_ellipj, a regression test for #912, test_ellipk, test_ellipkinc) ... ok
test_basic.TestErf (test_erf, test_erf_zeros, test_erfcinv, test_erfinv, test_errprint) ... ok
test_basic.TestEuler, TestExp, TestFresnel, TestFresnelIntegral, TestGamma (including test_975), TestHankel and TestHyper ... all ok
test_basic.TestKelvin, TestLaguerre, TestLambda, TestLegendre, TestLegendreFunctions, TestLog1p, TestMathieu, TestOblCvSeq, TestParabolicCylinder, TestPolygamma, TestProCvSeq, TestPsi, TestRadian, TestRiccati, TestRound, TestSpherical, the regression test for #679, TestStruve (test_some_values and the power-series check), TestTandg and TestTrigonometric ... all ok
test_basic.test_sph_harm (parametrized spot checks) ... ok
test_basic.test_chi2_smalldf, test_chi2c_smalldf, test_chi2_inv_smalldf ... ok
test_data.test_boost (comparisons against the Boost reference data) ... ok for every data set except two known failures:
test_data.test_boost(...) ... KNOWNFAIL: Known bug in Cephes kn implementation
test_data.test_boost(...) ... KNOWNFAIL: Known bug in Cephes yv implementation
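For the "Known bug in Cephes kn implementation" entry, one cheap cross-check (again my own ad-hoc probe, not the Boost-data comparison the test actually performs) is to compare the integer-order kn against kv at the same orders; kv takes a real order and, as far as I know, goes through a different (AMOS-based) code path, so large disagreements between the two would be suspicious:

import numpy as np
from scipy import special

# Cross-check cephes kn against kv at integer orders. Arbitrary sample point,
# not the Boost reference data used by the failing test.
n = np.arange(0, 30)
x = 7.5
a = special.kn(n, x)
b = special.kv(n.astype(float), x)
rel = np.abs(a - b) / np.abs(b)
print("worst relative difference: %g" % rel.max())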
test_lambertw.test_values, test_lambertw.test_ufunc ... ok
test_mpmath ... all seven checks skipped because the mpmath library is not present (see the note after this summary):
test_mpmath.test_expi_complex ... SKIP
test_mpmath.test_hyp2f1_strange_points ... SKIP
test_mpmath.test_hyp2f1_real_some_points ... SKIP
test_mpmath.test_hyp2f1_some_points_2 ... SKIP
test_mpmath.test_hyp2f1_real_some ... SKIP
test_mpmath.test_erf_complex ... SKIP
test_mpmath.test_lpmv ... SKIP
test_orthogonal.TestCall.test_call ... ok
test_orthogonal.TestCheby (chebyc, chebys, chebyt, chebyu), TestGegenbauer and TestHermite (hermite, hermitenorm) ... ok
test_orthogonal_eval.TestPolys ... ok for all families (chebyc, chebys, chebyt, chebyu, gegenbauer, genlaguerre, hermite, hermitenorm, jacobi, laguerre, legendre, sh_chebyt, sh_chebyu, sh_jacobi, sh_legendre)
test_orthogonal_eval.test_eval_chebyt, test_orthogonal_eval.test_warnings ... ok
test_spfun_stats.TestMultiGammaLn (test1, test_ararg, test_bararg) ... ok
test_continuous_basic.test_cont_basic ... ok for 'alpha' and 'anglit' (the sample mean test on 1000 variates plus the other generic per-distribution checks)
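Regarding the SKIPped test_mpmath entries: those tests just use mpmath as an arbitrary-precision reference, so installing mpmath (easy_install mpmath, or pip install mpmath) and re-running the suite should make them execute. A minimal sketch of that kind of comparison, with points I chose arbitrarily rather than the tests' own point sets:

import mpmath
from scipy import special

# Spot-check scipy's hyp2f1 against mpmath's arbitrary-precision hyp2f1.
for a, b, c, x in [(0.5, 1.5, 2.5, 0.3), (2.0, -0.5, 3.0, 0.7)]:
    approx = special.hyp2f1(a, b, c, x)
    exact = float(mpmath.hyp2f1(a, b, c, x))
    print("%s  scipy=%.15g  mpmath=%.15g" % ((a, b, c, x), approx, exact))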
test_continuous_basic.test_cont_basic ... ok (sample mean test on 1000 variates plus the other generic checks) for every remaining distribution in the run: arcsine, beta, betaprime, bradford, burr, cauchy, chi, chi2, dgamma, dweibull, erlang, expon, exponpow, exponweib, f, fatiguelife, fisk, foldcauchy, foldnorm, frechet_l, frechet_r, gamma, genextreme, gengamma, genhalflogistic, genlogistic, genpareto, gilbrat, gompertz, gumbel_l, gumbel_r, halfcauchy, halflogistic, halfnorm, hypsecant, invgamma, invnorm, invgauss, johnsonsb, laplace, levy, levy_l, loggamma, logistic, loglaplace, lognorm, lomax, maxwell, nakagami, ncf, nct, ncx2, norm, pareto, powerlaw, powernorm, rayleigh, reciprocal, t, triang, truncexpon, truncnorm, tukeylambda, uniform, wald and weibull_max.
ok test_continuous_basic.test_cont_basic(, (2.8687961709100187,), 'weibull_max') ... ok test_continuous_basic.test_cont_basic(, (2.8687961709100187,), 'weibull_max') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), array(0.88961629797475072), array(0.26510662289002929), 0.9178211327965029, 0.27042300220730853, 1000, 'weibull_minsample mean test') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), array(0.88961629797475072), array(0.26510662289002929), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (1.7866166930421596,), 'weibull_min') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), array(3.1415926535897931), array(3.4151322438845), 3.2377370605003604, 3.4056398320138892, 1000, 'wrapcauchysample mean test') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), array(3.1415926535897931), array(3.4151322438845), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic(, (0.031071279018614728,), 'wrapcauchy') ... ok test_continuous_basic.test_cont_basic('wrapcauchy', (0.031071279018614728,), 0.01, array([ 0.322,? 1.211,? 0.949,? 5.915,? 1.501,? 4.04 ,? 5.113,? 4.253,? 5.215,? 1.139, ... ok test_continuous_extra.test_540_567 ... ok test_discrete_basic.test_discrete_basic(0.29999999999999999, array(0.29999999999999999), 'bernoulli sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.20999999999999627, array(0.20999999999999999), 'bernoulli sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), 'bernoulli cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), array([0, 1]), 'bernoulli cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), 'bernoulli pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), 'bernoulli oth') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), -1.2380952380951449, 0.87287156094400487, 'bernoulli skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), array([0, 0, 0, ..., 1, 0, 0]), 0.01, 'bernoulli chisquare') ... ok test_discrete_basic.test_discrete_basic(2.0015000000000001, array(2.0), 'binom sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.1854977500000026, array(1.2), 'binom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), 'binom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), array([0, 1, 2, 3, 4, 5]), 'binom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), 'binom pmf_cdf') ... 
ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), 'binom oth') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), -0.26248929225026352, 0.28057933666556623, 'binom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), array([2, 2, 2, ..., 4, 1, 3]), 0.01, 'binom chisquare') ... ok test_discrete_basic.test_discrete_basic(0.32900000000000001, array(0.32731081784804011), 'boltzmann sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.43975900000001117, array(0.4344431884043245), 'boltzmann sample var test') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 'boltzmann cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), array([0, 1, 2, 3, 4]), 'boltzmann cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 'boltzmann pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 'boltzmann oth') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 6.7133652484343216, 2.418691392797208, 'boltzmann skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), array([0, 0, 0, ..., 2, 0, 0]), 0.01, 'boltzmann chisquare') ... ok test_discrete_basic.test_discrete_basic(0.0070000000000000001, array(7.9181711188056743e-17), 'dlaplace sample mean test') ... ok test_discrete_basic.test_discrete_basic(2.9319510000000588, array(2.9635341891843714), 'dlaplace sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 'dlaplace cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), array([-8, -7, -6, -5, -4, -3, -2, -1,? 0,? 1,? 2,? 3,? 4,? 5,? 6,? 7]), 'dlaplace cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 'dlaplace pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 'dlaplace oth') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 3.0660776822072453, 0.021996158609059947, 'dlaplace skew_kurt') ... ok test_discrete_basic.test_discrete_basic(1.9870000000000001, array(2.0), 'geom sample mean test') ... ok test_discrete_basic.test_discrete_basic(2.0098310000000303, array(2.0), 'geom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 'geom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), array([ 1,? 2,? 3,? 4,? 5,? 6,? 7,? 8,? 9, 10]), 'geom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 'geom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 'geom oth') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 5.1935883716655766, 2.0476504362662378, 'geom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), array([1, 1, 2, ..., 6, 1, 2]), 0.01, 'geom chisquare') ... ok test_discrete_basic.test_discrete_basic(2.3860000000000001, array(2.4000000000000004), 'hypergeom sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.1500039999999776, array(1.1917241379310344), 'hypergeom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), array([0, 1, 2, 3, 4, 5, 6]), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), 'hypergeom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), 'hypergeom oth') ... 
ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), -0.29686916362552029, 0.020906577365969316, 'hypergeom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), array([1, 1, 4, ..., 3, 2, 2]), 0.01, 'hypergeom chisquare') ... ok test_discrete_basic.test_discrete_basic(1.724, array(1.7142857142857142), 'hypergeom sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.65282400000000207, array(0.66122448979591841), 'hypergeom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (21, 3, 12), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (21, 3, 12), array([0, 1, 2, 3]), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (21, 3, 12), 'hypergeom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (21, 3, 12), 'hypergeom oth') ... ok test_discrete_basic.test_discrete_basic(, (21, 3, 12), -0.46243472564588117, -0.18093529905213196, 'hypergeom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (21, 3, 12), array([2, 3, 2, ..., 2, 2, 1]), 0.01, 'hypergeom chisquare') ... ok test_discrete_basic.test_discrete_basic(9.4184999999999999, array(9.4285714285714288), 'hypergeom sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.68435774999998678, array(0.67346938775510201), 'hypergeom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (21, 18, 11), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (21, 18, 11), array([ 8,? 9, 10, 11]), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (21, 18, 11), 'hypergeom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (21, 18, 11), 'hypergeom oth') ... ok test_discrete_basic.test_discrete_basic(, (21, 18, 11), -0.53396352457617935, 0.093601755841816861, 'hypergeom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (21, 18, 11), array([ 9,? 8,? 9, ..., 10, 10, 10]), 0.01, 'hypergeom chisquare') ... ok test_discrete_basic.test_discrete_basic(1.635, array(1.637035001905937), 'logser sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.325775000000023, array(1.4127039072996714), 'logser sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'logser cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), array([1, 2, 3, 4, 5, 6, 7, 8, 9]), 'logser cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'logser pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'logser oth') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 7.559198377977479, 2.4947797038220592, 'logser skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), array([1, 1, 1, ..., 1, 1, 4]), 0.01, 'logser chisquare') ... ok test_discrete_basic.test_discrete_basic(4.9210000000000003, array(5.0), 'nbinom sample mean test') ... ok test_discrete_basic.test_discrete_basic(9.4787590000000037, array(10.0), 'nbinom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), array([ 0,? 1,? 2,? 3,? 4,? 5,? 6,? 7,? 8,? 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21]), 'nbinom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom oth') ... 
ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 1.5000586959708402, 0.97358518373019021, 'nbinom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), array([0, 2, 6, ..., 3, 3, 3]), 0.01, 'nbinom chisquare') ... ok test_discrete_basic.test_discrete_basic(0.58399999999999996, array(0.60000000000000009), 'nbinom sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.4729440000000598, array(1.5000000000000002), 'nbinom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 'nbinom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), array([ 0,? 1,? 2,? 3,? 4,? 5,? 6,? 7,? 8,? 9, 10, 12]), 'nbinom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 'nbinom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 'nbinom oth') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 13.929082276070467, 3.2071528858780165, 'nbinom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), array([0, 0, 0, ..., 0, 0, 0]), 0.01, 'nbinom chisquare') ... ok test_discrete_basic.test_discrete_basic(1.496, array(1.5031012098113492), 'planck sample mean test') ... ok test_discrete_basic.test_discrete_basic(3.8119840000000167, array(3.7624144567476914), 'planck sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 'planck cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), array([ 0,? 1,? 2,? 3,? 4,? 5,? 6,? 7,? 8,? 9, 10, 11, 12]), 'planck cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 'planck pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 'planck oth') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 5.0921201134828475, 1.9924056300476671, 'planck skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), array([1, 1, 1, ..., 7, 0, 2]), 0.01, 'planck chisquare') ... ok test_discrete_basic.test_discrete_basic(0.58550000000000002, array(0.59999999999999998), 'poisson sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.59768974999998681, array(0.59999999999999998), 'poisson sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'poisson cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), array([0, 1, 2, 3, 4, 5]), 'poisson cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'poisson pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'poisson oth') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 1.9406814436782422, 1.3589585241917534, 'poisson skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), array([0, 0, 0, ..., 1, 0, 0]), 0.01, 'poisson chisquare') ... ok test_discrete_basic.test_discrete_basic(18.4725, array(18.5), 'randint sample mean test') ... ok test_discrete_basic.test_discrete_basic(48.800243749999929, array(47.916666666666664), 'randint sample var test') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), 'randint cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), array([ 7,? 8,? 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, ... 
ok test_discrete_basic.test_discrete_basic(, (7, 31), 'randint pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), 'randint oth') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), -1.2115060412211844, -0.025412774105826177, 'randint skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), array([27, 10, 15, ..., 16,? 9, 17]), 0.01, 'randint chisquare') ... ok test_discrete_basic.test_discrete_basic(7.0019999999999998, array(7.0), 'skellam sample mean test') ... ok test_discrete_basic.test_discrete_basic(22.550995999999991, array(23.0), 'skellam sample var test') ... ok test_discrete_basic.test_discrete_basic(, (15, 8), 'skellam cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (15, 8), array([-10,? -7,? -6,? -5,? -4,? -3,? -2,? -1, ? 0, ? 1, ? 2, ? 3, ? 4, ? 5, ? 6, ? 7, ... ok test_discrete_basic.test_discrete_basic(, (15, 8), 'skellam pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (15, 8), 'skellam oth') ... ok test_discrete_basic.test_discrete_basic(, (15, 8), 0.11554402415317133, 0.10806520422790773, 'skellam skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (15, 8), array([ 4,? 6, 10, ...,? 5, 14, 15]), 0.01, 'skellam chisquare') ... ok Failure: SkipTest (Skipping test: test_discrete_privateTest skipped due to test condition) ... SKIP: Skipping test: test_discrete_privateTest skipped due to test condition test_noexception (test_distributions.TestArrayArgument) ... ok test_rvs (test_distributions.TestBernoulli) ... ok test_rvs (test_distributions.TestBinom) ... ok test_precision (test_distributions.TestChi2) ... ok test_rvs (test_distributions.TestDLaplace) ... ok See ticket #761 ... ok See ticket #497 ... ok test_beta (test_distributions.TestExpect) ... ok test_hypergeom (test_distributions.TestExpect) ... ok test_norm (test_distributions.TestExpect) ... ok test_poisson (test_distributions.TestExpect) ... ok test_tail (test_distributions.TestExpon) ... ok test_zero (test_distributions.TestExpon) ... ok test_tail (test_distributions.TestExponpow) ... ok test_gamma (test_distributions.TestFrozen) ... ok test_norm (test_distributions.TestFrozen) ... ok Regression test for ticket #1293. ... ok test_cdf_bounds (test_distributions.TestGenExpon) ... ok test_pdf_unity_area (test_distributions.TestGenExpon) ... ok test_cdf_sf (test_distributions.TestGeom) ... ok test_pmf (test_distributions.TestGeom) ... ok test_rvs (test_distributions.TestGeom) ... ok test_precision (test_distributions.TestHypergeom) ... ok test_rvs (test_distributions.TestLogser) ... ok test_rvs (test_distributions.TestNBinom) ... ok test_rvs (test_distributions.TestPoisson) ... ok test_cdf (test_distributions.TestRandInt) ... ok test_pdf (test_distributions.TestRandInt) ... ok test_rvs (test_distributions.TestRandInt) ... ok test_rvs (test_distributions.TestRvDiscrete) ... ok test_cdf (test_distributions.TestSkellam) ... ok test_pmf (test_distributions.TestSkellam) ... ok test_rvs (test_distributions.TestZipf) ... ok test_distributions.test_all_distributions('uniform', (), 0.01) ... ok test_distributions.test_all_distributions('norm', (), 0.01) ... ok test_distributions.test_all_distributions('lognorm', (1.5876170641754364,), 0.01) ... ok test_distributions.test_all_distributions('expon', (), 0.01) ... ok test_distributions.test_all_distributions('beta', (1.4449890262755161, 1.5962868615831063), 0.01) ... ok test_distributions.test_all_distributions('powerlaw', (1.3849011459726603,), 0.01) ... 
ok test_distributions.test_all_distributions('bradford', (1.5756510141648885,), 0.01) ... ok test_distributions.test_all_distributions('burr', (1.2903295024027579, 1.1893913285543563), 0.01) ... ok test_distributions.test_all_distributions('fisk', (1.186729528255555,), 0.01) ... ok test_distributions.test_all_distributions('cauchy', (), 0.01) ... ok test_distributions.test_all_distributions('halfcauchy', (), 0.01) ... ok test_distributions.test_all_distributions('foldcauchy', (1.6127731798686067,), 0.01) ... ok test_distributions.test_all_distributions('gamma', (1.6566593889896288,), 0.01) ... ok test_distributions.test_all_distributions('gengamma', (1.4765309920093808, 1.0898243611955936), 0.01) ... ok test_distributions.test_all_distributions('loggamma', (1.7576039219664368,), 0.01) ... ok test_distributions.test_all_distributions('alpha', (1.8767703708227748,), 0.01) ... ok test_distributions.test_all_distributions('anglit', (), 0.01) ... ok test_distributions.test_all_distributions('arcsine', (), 0.01) ... ok test_distributions.test_all_distributions('betaprime', (1.9233810159462807, 1.8424602231401823), 0.01) ... ok test_distributions.test_all_distributions('erlang', (4, 0.89817312135787897, 0.92308243982017679), 0.01) ... ok test_distributions.test_all_distributions('dgamma', (1.5405999249480544,), 0.01) ... ok test_distributions.test_all_distributions('exponweib', (1.391296050234625, 1.7052833998544061), 0.01) ... ok test_distributions.test_all_distributions('exponpow', (1.2756341213121272,), 0.01) ... ok test_distributions.test_all_distributions('frechet_l', (1.8116287085078784,), 0.01) ... ok test_distributions.test_all_distributions('frechet_r', (1.8494859651863671,), 0.01) ... ok test_distributions.test_all_distributions('gilbrat', (), 0.01) ... ok test_distributions.test_all_distributions('f', (1.8950389674266752, 1.5898011835311598), 0.01) ... ok test_distributions.test_all_distributions('ncf', (1.9497648732321204, 1.5796950107456058, 1.4505631066311553), 0.01) ... ok test_distributions.test_all_distributions('chi2', (1.660245378622389,), 0.01) ... ok test_distributions.test_all_distributions('chi', (1.9962578393535728,), 0.01) ... ok test_distributions.test_all_distributions('nakagami', (1.9169412179474561,), 0.01) ... ok test_distributions.test_all_distributions('genpareto', (1.7933250841302242,), 0.01) ... ok test_distributions.test_all_distributions('genextreme', (1.0823729881966475,), 0.01) ... ok test_distributions.test_all_distributions('genhalflogistic', (1.6127831050407122,), 0.01) ... ok test_distributions.test_all_distributions('pareto', (1.4864442019691668,), 0.01) ... ok test_distributions.test_all_distributions('lomax', (1.6301473404114728,), 0.01) ... ok test_distributions.test_all_distributions('halfnorm', (), 0.01) ... ok test_distributions.test_all_distributions('halflogistic', (), 0.01) ... ok test_distributions.test_all_distributions('fatiguelife', (1.8450775756715152,), 0.001) ... ok test_distributions.test_all_distributions('foldnorm', (1.2430356220618561,), 0.01) ... ok test_distributions.test_all_distributions('ncx2', (1.7314892207908477, 1.117134293208518), 0.01) ... ok test_distributions.test_all_distributions('t', (1.2204605368678285,), 0.01) ... ok test_distributions.test_all_distributions('nct', (1.7945829717105759, 1.3325361492196555), 0.01) ... ok test_distributions.test_all_distributions('weibull_min', (1.8159130965336594,), 0.01) ... ok test_distributions.test_all_distributions('weibull_max', (1.1006075202160961,), 0.01) ... 
ok test_distributions.test_all_distributions('dweibull', (1.1463584889123037,), 0.01) ... ok test_distributions.test_all_distributions('maxwell', (), 0.01) ... ok test_distributions.test_all_distributions('rayleigh', (), 0.01) ... ok test_distributions.test_all_distributions('genlogistic', (1.6976706401912387,), 0.01) ... ok test_distributions.test_all_distributions('logistic', (), 0.01) ... ok test_distributions.test_all_distributions('gumbel_l', (), 0.01) ... ok test_distributions.test_all_distributions('gumbel_r', (), 0.01) ... ok test_distributions.test_all_distributions('gompertz', (1.0452340678656125,), 0.01) ... ok test_distributions.test_all_distributions('hypsecant', (), 0.01) ... ok test_distributions.test_all_distributions('laplace', (), 0.01) ... ok test_distributions.test_all_distributions('reciprocal', (0.57386603678916692, 1.573866036789167), 0.01) ... ok test_distributions.test_all_distributions('triang', (0.53419796826072397,), 0.01) ... ok test_distributions.test_all_distributions('tukeylambda', (1.6805891325622566,), 0.01) ... ok test_distributions.test_all_distributions('vonmises', (10,), 0.01) ... ok test_distributions.test_all_distributions('vonmises', (101,), 0.01) ... ok test_distributions.test_all_distributions('vonmises', (1.0266967946622052,), 0.01) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 0) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 0) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 1) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 1) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 10) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 10) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 100) ... 
ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 100) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 1, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(0.10000000000000001, 0, 10, 100) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 0) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 0) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 1) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 1) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 10) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 10) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 100) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(1, 1, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(1, 0, 10, 100) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 0) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 0) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 0) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 1) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 1) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 1) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 3.1415926535897931) ... 
ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 3.1415926535897931) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 10) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 10) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 10) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 100) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(101, 1, 1, 100) ... ok test_distributions.test_vonmises_pdf_periodic(101, 0, 10, 100) ... ok Regression test for #1191 ... ok test_distributions.TestArgsreduce ... ok Regression test for ticket #1316. ... ok Regression test for ticket #1326. ... ok test_kdeoth.test_kde_1d ... ok test_bad_arg (test_morestats.TestAnderson) ... ok test_expon (test_morestats.TestAnderson) ... ok test_normal (test_morestats.TestAnderson) ... ok test_approx (test_morestats.TestAnsari) ... ok test_bad_arg (test_morestats.TestAnsari) ... ok test_exact (test_morestats.TestAnsari) ... ok test_small (test_morestats.TestAnsari) ... ok Too few args raises ValueError. ... ok test_data (test_morestats.TestBartlett) ... ok Length of x must be 1 or 2. ... ok len(x) is 1, but n is invalid. ... ok test_bad_p (test_morestats.TestBinomP) ... ok test_data (test_morestats.TestBinomP) ... ok test_basic (test_morestats.TestFindRepeats) ... ok test_bad_center_value (test_morestats.TestFligner) ... ok test_bad_keyword (test_morestats.TestFligner) ... ok Too few args raises ValueError. ... ok test_data (test_morestats.TestFligner) ... ok Test that center='trimmed' gives the same result as center='mean' when proportiontocut=0. ... ok test_trimmed2 (test_morestats.TestFligner) ... ok test_bad_center_value (test_morestats.TestLevene) ... ok test_bad_keyword (test_morestats.TestLevene) ... ok test_data (test_morestats.TestLevene) ... ok test_equal_mean_median (test_morestats.TestLevene) ... ok test_too_few_args (test_morestats.TestLevene) ... ok Test that center='trimmed' gives the same result as center='mean' when proportiontocut=0. ... ok test_trimmed2 (test_morestats.TestLevene) ... ok test_bad_arg (test_morestats.TestShapiro) ... ok test_basic (test_morestats.TestShapiro) ... ok test_morestats.test_mood ... ok Raise ValueError when the sum of the lengths of the args is less than 3. ... ok Raise ValueError is fewer than two args are given. ... ok Raise ValueError when two args of different lengths are given. ... ok Raise ValueError if fewer than two data points are given. ... ok Raise ValueError if n > 4 or n > 1. ... ok Raise ValueError is n is not 1 or 2. ... ok Raise ValueError when given an invalid distribution. ... ok Raise ValueError when given an invalid distribution. ... ok Raise ValueError if any data value is negative. ... ok Tests some computations of Kendall's tau ... ok Tests the seasonal Kendall tau. ... ok Tests some computations of Pearson's r ... 
ok Tests point biserial ... ok Tests some computations of Spearman's rho ... ok test_1D (test_mstats_basic.TestGMean) ... ok test_2D (test_mstats_basic.TestGMean) ... ok test_1D (test_mstats_basic.TestHMean) ... ok test_2D (test_mstats_basic.TestHMean) ... ok Tests the Friedman Chi-square test ... ok Tests the Kolmogorov-Smirnov 2 samples test ... ok Tests Obrien transform ... ok sum((testcase-mean(testcase,axis=0))**4,axis=0)/((sqrt(var(testcase)*3/4))**4)/4 ... ok Tests the mode ... ok mean((testcase-mean(testcase))**power,axis=0),axis=0))**power)) ... ok sum((testmathworks-mean(testmathworks,axis=0))**3,axis=0)/((sqrt(var(testmathworks)*4/5))**3)/5 ... ok variation = samplestd/mean ... ok Ticket #867 ... ok test_2D (test_mstats_basic.TestPercentile) ... ok test_percentile (test_mstats_basic.TestPercentile) ... ok test_ranking (test_mstats_basic.TestRanking) ... ok Tests trimming ... ok Tests trimming. ... ok Tests the trimmed mean standard error. ... ok Tests the trimmed mean. ... ok Tests the Winsorization of the data. ... ok this is not in R, so used ... ok this is not in R, so used ... ok not in R, so tested by using ... ok not in R, so tested by using ... ok Regress a line with sinusoidal noise. Test for #1273. ... ok Regression test for #1256 ... ok Tests ideal-fourths ... ok Tests the Marits-Jarrett estimator ... ok Tests the confidence intervals of the trimmed mean. ... ok test_hdquantiles (test_mstats_extras.TestQuantiles) ... ok test_tmeanX (test_stats.TestBasicStats) ... ok test_tstdX (test_stats.TestBasicStats) ... ok test_tvarX (test_stats.TestBasicStats) ... ok test_basic (test_stats.TestCMedian) ... ok test_pBIGBIG (test_stats.TestCorrPearsonr) ... ok test_pBIGHUGE (test_stats.TestCorrPearsonr) ... ok test_pBIGLITTLE (test_stats.TestCorrPearsonr) ... ok test_pBIGROUND (test_stats.TestCorrPearsonr) ... ok test_pBIGTINY (test_stats.TestCorrPearsonr) ... ok test_pHUGEHUGE (test_stats.TestCorrPearsonr) ... ok test_pHUGEROUND (test_stats.TestCorrPearsonr) ... ok test_pHUGETINY (test_stats.TestCorrPearsonr) ... ok test_pLITTLEHUGE (test_stats.TestCorrPearsonr) ... ok test_pLITTLELITTLE (test_stats.TestCorrPearsonr) ... ok test_pLITTLEROUND (test_stats.TestCorrPearsonr) ... ok test_pLITTLETINY (test_stats.TestCorrPearsonr) ... ok test_pROUNDROUND (test_stats.TestCorrPearsonr) ... ok test_pTINYROUND (test_stats.TestCorrPearsonr) ... ok test_pTINYTINY (test_stats.TestCorrPearsonr) ... ok test_pXBIG (test_stats.TestCorrPearsonr) ... ok test_pXHUGE (test_stats.TestCorrPearsonr) ... ok test_pXLITTLE (test_stats.TestCorrPearsonr) ... ok test_pXROUND (test_stats.TestCorrPearsonr) ... ok test_pXTINY (test_stats.TestCorrPearsonr) ... ok test_pXX (test_stats.TestCorrPearsonr) ... ok test_r_exactly_neg1 (test_stats.TestCorrPearsonr) ... ok test_r_exactly_pos1 (test_stats.TestCorrPearsonr) ... ok test_sBIGBIG (test_stats.TestCorrSpearmanr) ... ok test_sBIGHUGE (test_stats.TestCorrSpearmanr) ... ok test_sBIGLITTLE (test_stats.TestCorrSpearmanr) ... ok test_sBIGROUND (test_stats.TestCorrSpearmanr) ... ok test_sBIGTINY (test_stats.TestCorrSpearmanr) ... ok test_sHUGEHUGE (test_stats.TestCorrSpearmanr) ... ok test_sHUGEROUND (test_stats.TestCorrSpearmanr) ... ok test_sHUGETINY (test_stats.TestCorrSpearmanr) ... ok test_sLITTLEHUGE (test_stats.TestCorrSpearmanr) ... ok test_sLITTLELITTLE (test_stats.TestCorrSpearmanr) ... ok test_sLITTLEROUND (test_stats.TestCorrSpearmanr) ... ok test_sLITTLETINY (test_stats.TestCorrSpearmanr) ... ok test_sROUNDROUND (test_stats.TestCorrSpearmanr) ... 
ok test_sTINYROUND (test_stats.TestCorrSpearmanr) ... ok test_sTINYTINY (test_stats.TestCorrSpearmanr) ... ok test_sXBIG (test_stats.TestCorrSpearmanr) ... ok test_sXHUGE (test_stats.TestCorrSpearmanr) ... ok test_sXLITTLE (test_stats.TestCorrSpearmanr) ... ok test_sXROUND (test_stats.TestCorrSpearmanr) ... ok test_sXTINY (test_stats.TestCorrSpearmanr) ... ok test_sXX (test_stats.TestCorrSpearmanr) ... ok test_tie1 (test_stats.TestCorrSpearmanrTies) ... ok A test of stats.f_oneway, with F=2. ... ok A trivial test of stats.f_oneway, with F=0. ... ok test_1D_array (test_stats.TestGMean) ... ok test_1D_list (test_stats.TestGMean) ... ok test_2D_array_default (test_stats.TestGMean) ... ok test_2D_array_dim1 (test_stats.TestGMean) ... ok test_large_values (test_stats.TestGMean) ... ok Test a 1d array ... ok Test a 1d array with zero element ... ok Test a 1d list ... ok Test a 1d list with zero element ... ok Test a 1d masked array ... ok Test a 1d masked array with zero element ... ok Test a 1d masked array with negative element ... ok Test a 1d masked array with a masked value ... ok Test a 2d array ... ok Test a 2d list with axis=0 ... ok Test a 2d list with axis=1 ... ok Test a 2d list ... ok Test a 2d masked array ... ok Test a 2d list with axis=1 ... ok Test a 2d list with axis=0 ... ok test_1D_array (test_stats.TestHMean) ... ok test_1D_list (test_stats.TestHMean) ... ok test_2D_array_default (test_stats.TestHMean) ... ok test_2D_array_dim1 (test_stats.TestHMean) ... ok Test a 1d array ... ok Test a 1d list ... ok Test a 1d masked array ... ok Test a 1d masked array with a masked value ... ok Test a 2d array ... ok Test a 2d list with axis=0 ... ok Test a 2d list with axis=1 ... ok Test a 2d list ... ok Test a 2d masked array ... ok Test a 2d list with axis=1 ... ok Test a 2d list with axis=0 ... ok Tests that increasing the number of bins produces expected results ... ok Tests that reducing the number of bins produces expected results ... ok Tests that each of the tests works as expected with default params ... ok Tests that weights give expected histograms ... ok test_basic (test_stats.TestMode) ... ok sum((testcase-mean(testcase,axis=0))**4,axis=0)/((sqrt(var(testcase)*3/4))**4)/4 ... ok test_kurtosis_array_scalar (test_stats.TestMoments) ... ok mean((testcase-mean(testcase))**power,axis=0),axis=0))**power)) ... ok sum((testmathworks-mean(testmathworks,axis=0))**3,axis=0)/ ... ok `skew` must return a scalar for 1-dim input ... ok variation = samplestd/mean ... ok Check nanmean when all values are nan. ... ok Check nanmean when no values are nan. ... ok Check nanmean when some values only are nan. ... ok Check nanmedian when all values are nan. ... ok Check nanmedian when no values are nan. ... ok Check nanmedian for scalar inputs. See ticket #1098. ... ok Check nanmedian when some values only are nan. ... ok Check nanstd when all values are nan. ... ok test_nanstd_negative_axis (test_stats.TestNanFunc) ... ok Check nanstd when no values are nan. ... ok Check nanstd when some values only are nan. ... ok test_2D (test_stats.TestPercentile) ... ok test_percentile (test_stats.TestPercentile) ... ok compared with multivariate ols with pinv ... ok W.II.F.? Regress BIG on X. ... ok W.IV.B.? Regress X on X. ... ok W.IV.D. Regress ZERO on X. ... ok Check that a single input argument to linregress with wrong shape ... ok Regress a line with sinusoidal noise. ... ok Regress a line with sinusoidal noise, with a single input of shape ... 
ok Regress a line with sinusoidal noise, with a single input of shape ... ok W.II.A.0. Print ROUND with only one digit. ... ok W.II.A.1. Y = INT(2.6*7 -0.2) (Y should be 18) ... ok W.II.A.2. Y = 2-INT(EXP(LOG(SQR(2)*SQR(2)))) ? (Y should be 0) ... ok W.II.A.3. Y = INT(3-EXP(LOG(SQR(2)*SQR(2))))? ? (Y should be 1) ... ok test_stats.TestSigamClip.test_sigmaclip1 ... ok test_stats.TestSigamClip.test_sigmaclip2 ... ok test_stats.TestSigamClip.test_sigmaclip3 ... ok test_onesample (test_stats.TestStudentTest) ... ok test_basic (test_stats.TestThreshold) ... ok this is not in R, so used ... ok this is not in R, so used ... ok not in R, so tested by using ... ok not in R, so tested by using ... ok test_stats.Test_Trim.test_trim1 ... ok test_stats.Test_Trim.test_trim_mean ... ok test_stats.Test_Trim.test_trimboth ... ok Some tests to show that fisher_exact() works correctly. ... ok Some tests for kendalltau. ... ok test_stats.test_cumfreq ... ok test_stats.test_relfreq ... ok test_stats.test_scoreatpercentile ... ok test_stats.test_percentileofscore(35.0, 35.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(40.0, 40.0) ... ok test_stats.test_percentileofscore(45.0, 45.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(50.0, 50.0) ... ok test_stats.test_percentileofscore(40.0, 40.0) ... ok test_stats.test_percentileofscore(50.0, 50.0) ... ok test_stats.test_percentileofscore(45.0, 45.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(60.0, 60.0) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(35.0, 35.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(40.0, 40.0) ... ok test_stats.test_percentileofscore(45.0, 45.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(60.0, 60.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(10.0, 10.0) ... ok test_stats.test_percentileofscore(5.0, 5.0) ... ok test_stats.test_percentileofscore(0.0, 0.0) ... ok test_stats.test_percentileofscore(10.0, 10.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(95.0, 95.0) ... ok test_stats.test_percentileofscore(90.0, 90.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(0.0, 0.0) ... ok test_stats.test_friedmanchisquare ... ok test_stats.test_kstest ... ok test_stats.test_ks_2samp ... ok test_stats.test_ttest_rel ... ok test_stats.test_ttest_ind ... ok test_stats.test_ttest_1samp_new ... ok test_stats.test_describe ... ok test_stats.test_normalitytests((3.9237191815818493, 0.14059672529747549), (3.92371918, 0.14059673)) ... ok test_stats.test_normalitytests((1.9807882609087573, 0.047615023828432253), (1.98078826, 0.047615020000000001)) ... ok test_stats.test_normalitytests((-0.014037344047597383, 0.98880018772590561), (-0.014037340000000001, 0.98880018999999997)) ... ok test_stats.test_pointbiserial ... 
ok test_stats.test_obrientransform ... ok test_stats.test_binomtest ... ok convert simple expr to blitz ... ok convert fdtd equation to blitz. ... ok convert simple expr to blitz ... ok bad path should return same as default (and warn) ... ok make sure it handles relative values. ... ok default behavior is to return current directory ... ok make sure it handles relative values ... ok test_simple (test_build_tools.TestConfigureSysArgv) ... ok bad path should return same as default (and warn) ... ok make sure it handles relative values. ... ok default behavior returns tempdir ... ok make sure it handles relative values ... ok There should always be a writable file -- even if it is in temp ... ok test_add_function_ordered (test_catalog.TestCatalog) ... ok Test persisting a function in the default catalog ... ok MODULE in search path should be replaced by module_dir. ... ok MODULE in search path should be removed if module_dir==None. ... ok If MODULE is absent, module_dir shouldn't be in search path. ... ok Make sure environment variable is getting used. ... ok Be sure we get at least one file even without specifying the path. ... ok Ignore bad paths in the path. ... ok test_clear_module_directory (test_catalog.TestCatalog) ... ok test_get_environ_path (test_catalog.TestCatalog) ... ok Shouldn't get any files when temp doesn't exist and no path set. ... ok Shouldn't get a single file from the temp dir. ... ok test_set_module_directory (test_catalog.TestCatalog) ... ok Check that we can create a file in the writable directory ... ok Check that we can create a file in the writable directory ... ok There should always be a writable file -- even if search paths contain ... ok test_bad_path (test_catalog.TestCatalogPath) ... ok test_current (test_catalog.TestCatalogPath) ... ok test_default (test_catalog.TestCatalogPath) ... ok test_module (test_catalog.TestCatalogPath) ... ok test_path (test_catalog.TestCatalogPath) ... ok test_user (test_catalog.TestCatalogPath) ... ok test_is_writable (test_catalog.TestDefaultDir) ... ok get_test_dir (test_catalog.TestGetCatalog) ... ok test_create_catalog (test_catalog.TestGetCatalog) ... ok test_nonexistent_catalog_is_none (test_catalog.TestGetCatalog) ... ok test_assign_variable_types (test_ext_tools.TestAssignVariableTypes) ... ok test_numpy_scalar_spec.setup_test_location ... ok test_numpy_scalar_spec.teardown_test_location ... ok test_error1 (test_size_check.TestBinaryOpSize) ... ok test_error2 (test_size_check.TestBinaryOpSize) ... ok test_scalar (test_size_check.TestBinaryOpSize) ... ok test_x1 (test_size_check.TestBinaryOpSize) ... ok test_x_y (test_size_check.TestBinaryOpSize) ... ok test_x_y2 (test_size_check.TestBinaryOpSize) ... ok test_x_y3 (test_size_check.TestBinaryOpSize) ... ok test_x_y4 (test_size_check.TestBinaryOpSize) ... ok test_x_y5 (test_size_check.TestBinaryOpSize) ... ok test_x_y6 (test_size_check.TestBinaryOpSize) ... ok test_x_y7 (test_size_check.TestBinaryOpSize) ... ok test_y1 (test_size_check.TestBinaryOpSize) ... ok test_error1 (test_size_check.TestDummyArray) ... ok test_error2 (test_size_check.TestDummyArray) ... ok test_scalar (test_size_check.TestDummyArray) ... ok test_x1 (test_size_check.TestDummyArray) ... ok test_x_y (test_size_check.TestDummyArray) ... ok test_x_y2 (test_size_check.TestDummyArray) ... ok test_x_y3 (test_size_check.TestDummyArray) ... ok test_x_y4 (test_size_check.TestDummyArray) ... ok test_x_y5 (test_size_check.TestDummyArray) ... ok test_x_y6 (test_size_check.TestDummyArray) ... 
ok test_x_y7 (test_size_check.TestDummyArray) ... ok test_y1 (test_size_check.TestDummyArray) ... ok test_1d_0 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_1 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_10 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_2 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_3 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_4 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_5 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_6 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_7 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_8 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_9 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_index_0 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_index_1 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_index_2 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_index_3 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_index_calculated (test_size_check.TestDummyArrayIndexing) ... ok through a bunch of different indexes at it for good measure. ... ok test_1d_stride_0 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_1 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_10 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_11 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_12 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_2 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_3 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_4 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_5 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_6 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_7 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_8 (test_size_check.TestDummyArrayIndexing) ... ok test_1d_stride_9 (test_size_check.TestDummyArrayIndexing) ... ok test_2d_0 (test_size_check.TestDummyArrayIndexing) ... ok test_2d_1 (test_size_check.TestDummyArrayIndexing) ... ok test_2d_2 (test_size_check.TestDummyArrayIndexing) ... ok through a bunch of different indexes at it for good measure. ... ok through a bunch of different indexes at it for good measure. ... ok test_calculated_index (test_size_check.TestExpressions) ... ok test_calculated_index2 (test_size_check.TestExpressions) ... ok test_generic_1d (test_size_check.TestExpressions) ... ok test_single_index (test_size_check.TestExpressions) ... ok test_scalar (test_size_check.TestMakeSameLength) ... ok test_x_scalar (test_size_check.TestMakeSameLength) ... ok test_x_short (test_size_check.TestMakeSameLength) ... ok test_y_scalar (test_size_check.TestMakeSameLength) ... ok test_y_short (test_size_check.TestMakeSameLength) ... ok test_1d_0 (test_size_check.TestReduction) ... ok test_2d_0 (test_size_check.TestReduction) ... ok test_2d_1 (test_size_check.TestReduction) ... ok test_3d_0 (test_size_check.TestReduction) ... ok test_error0 (test_size_check.TestReduction) ... ok test_error1 (test_size_check.TestReduction) ... ok test_exclusive_end (test_slice_handler.TestBuildSliceAtom) ... ok match slice from a[1:] ... ok match slice from a[1::] ... ok match slice from a[1:2] ... ok match slice from a[1:2:] ... ok match slice from a[1:2:3] ... ok match slice from a[1::3] ... ok match slice from a[:] ... ok match slice from a[::] ... ok match slice from a[:2] ... ok match slice from a[:2:] ... ok match slice from a[:2:3] ... 
----------------------------------------------------------------------
Ran 4733 tests in 322.886s

OK (KNOWNFAIL=12, SKIP=42)
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pbajk at yahoo.co.uk  Mon May  9 17:06:50 2011
From: pbajk at yahoo.co.uk (P B)
Date: Mon, 9 May 2011 22:06:50 +0100 (BST)
Subject: [SciPy-User] problem with installing on osx 10.6.6
In-Reply-To:
Message-ID: <556890.68160.qm@web132310.mail.ird.yahoo.com>

Hi,
this is odd. If I put in the commands

>>> import numpy
>>> import scipy
>>> scipy.test(verbose=2)

I get the output

Running unit tests for scipy
NumPy version 1.5.1
NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy
SciPy version 0.9.0
SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy
Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple Inc. build 5493)]
nose version 1.0.0

******* many test results which then finish with:

OK (KNOWNFAIL=12, SKIP=42)

Could it be there is a problem with the "scipy.test('1','10')" command? I tried to send the whole printout, but it may have been too big for my email system.

Thanks,
Peter

--- On Mon, 9/5/11, Ralf Gommers wrote:

From: Ralf Gommers
Subject: Re: [SciPy-User] problem with installing on osx 10.6.6
To: "SciPy Users List"
Date: Monday, 9 May, 2011, 21:15

On Mon, May 9, 2011 at 9:09 PM, P B wrote:

Hi,
I'm having trouble with scipy. I have followed the instructions at the scipy website and have installed the following on my mac osx 10.6.6 (taken from the sourceforge binaries):

NumPy version 1.5.1
NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy
SciPy version 0.8.0
SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy
Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple Inc. build 5493)]
nose version 1.0.0

When I run the test scipy.test('1','10') some items seem to pass:

test_streams.test_make_stream(True,) ... ok

Some tests seem to be skipped:

nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so is executable; skipped

some seem to fail:

/Users/user/.python26_compiled/m7/module_multi_function.cpp:13:19: error: complex: No such file or directory

and

======================================================================
ERROR: test_string_and_int (test_ext_tools.TestExtModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/tests/test_ext_tools.py", line 72, in test_string_and_int
    mod.compile(location = build_dir)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/ext_tools.py", line 367, in compile
    verbose = verbose, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/build_tools.py", line 273, in build_extension
    setup(name = module_name, ext_modules = [ext], verbose=verb)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/distutils/core.py", line 186, in setup
    return old_setup(**new_attr)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", line 169, in setup
    raise SystemExit, "error: " + str(msg)
CompileError: error: Command "c++ -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp -o /var/folders/4b/4bhByeH9HSuDIezfnSZ6G++++TI/-Tmp-/user/python26_intermediate/compiler_7ca1591dfd3261e140e707030a00840e/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.o" failed with exit status 1

with the final result being:

FAILED (KNOWNFAIL=15, SKIP=40, errors=242, failures=2)

I'm assuming I have the wrong version of something, so I upgraded to scipy 0.9. Sadly that led to essentially the same results. Can anyone advise me what to do next?

Hmm, this is a little unusual, if you used the binary installers from SF for numpy 1.5.1 and scipy 0.8 / 0.9, and the Python binary from python.org, that should just work. Can you provide the complete output of "scipy.test(verbose=2)" (if it's too large put it on a pastebin site)?

Ralf

-----Inline Attachment Follows-----

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
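(A side note, not from the thread: one way to hand over the complete verbose printout when it is too big for email is to run the suite in a child interpreter and redirect everything it prints into a file, which can then be attached or put on a pastebin. This is only a sketch; the log file name is arbitrary.)

import subprocess

# run scipy.test(verbose=2) in a separate Python process and keep its
# combined stdout/stderr (nose writes the per-test lines to stderr) in a file
with open('scipy_test_verbose.log', 'w') as log:
    subprocess.call(['python', '-c', 'import scipy; scipy.test(verbose=2)'],
                    stdout=log, stderr=subprocess.STDOUT)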
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/build_tools.py", line 273, in build_extension ??? setup(name = module_name, ext_modules = [ext],verbose=verb) ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/distutils/core.py", line 186, in setup ??? return old_setup(**new_attr) ? File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", line 169, in setup ??? raise SystemExit, "error: " + str(msg) CompileError: error: Command "c++ -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp -o /var/folders/4b/4bhByeH9HSuDIezfnSZ6G++++TI/-Tmp-/user/python26_intermediate/compiler_7ca1591dfd3261e140e707030a00840e/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.o" failed with exit status 1 with the final result being: FAILED (KNOWNFAIL=15, SKIP=40, errors=242, failures=2) I'm assuming I have the wrong version of something, so I upgraded to scipy 0.9. ? Sadly that led to essentially the same results.? Can anyone advise me what to do next? Hmm, this is a little unusual, if you used the binary installers from SF for numpy 1.5.1 and scipy 0.8 / 0.9, and the Python binary from python.org, that should just work. Can you provide the complete output of "scipy.test(verbose=2)" (if it's too large put it on a pastebin site)?? Ralf -----Inline Attachment Follows----- _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej at certik.cz Mon May 9 19:24:04 2011 From: ondrej at certik.cz (Ondrej Certik) Date: Mon, 9 May 2011 16:24:04 -0700 Subject: [SciPy-User] should one put "." into PYTHONPATH In-Reply-To: References: Message-ID: On Mon, May 9, 2011 at 2:49 AM, Ondrej Certik wrote: > Hi Robert, Jason and Gael, > > On Thu, May 5, 2011 at 7:22 PM, Robert Kern wrote: >> On Thu, May 5, 2011 at 18:23, Ondrej Certik wrote: >>> Hi, >>> >>> is it a good practice to have the following in .bashrc: >>> >>> export PYTHONPATH=$PYTHONPATH:. >> >> I don't recommend it. You will get unexpected behavior and waste time >> chasing down problems that don't actually exist. > > The answer is 100% clear: don't fiddle with PYTHONPATH. > > Thanks for that, I was undecided. Now I can see, that I need to use > other solutions to the problem. > >> >>> I know that Ubuntu long time ago had the "." in PYTHONPATH by default, >>> and then dropped it. The reason why I want it is so that I can develop >>> in the current directory, by doing things like: >>> >>> python examples/a.py >>> >>> where 'a.py' imports something from the current directory. I googled a >>> bit, and found that some people recommend to use "setup.py develop" >>> instead. I don't use setup.py in my project (I use cmake to mix >>> Fortran and Python together). 
So one option for me is to always >>> install it, and then import it like any other package from >>> examples/a.py. >> >> You don't need to do "python setup.py develop" every time. Nothing >> actually depends on there being a setup.py. Just add a .pth file into >> your site-packages listing the directory you want added. E.g. if you >> have your sympy checkout in /home/ondrej/git/sympy/ you would have a >> file named sympy.pth (or any other name ending in .pth) in your >> site-packages directory with just the following contents (without the >> triple quotes: >> >> """ >> /home/ondrej/git/sympy >> """ >> >> Then you can run /home/ondrej/git/sympy/examples/a.py however you >> like, from whichever directory you like. > > Wow, thanks for this tip! This works like a charm. That's exactly what > I was looking for. So for the record, I have implemented the following simple patch to Qsnake (http://qsnake.com/): https://github.com/qsnake/qsnake/commit/2f95f88af7c532a1b81605c6e9bb5dfdb8b219c3 i.e.: def command_develop(): print "Adding the current directory into qsnake.pth file:" cmd("echo $CUR >> $SPKG_LOCAL/lib/python/site-packages/qsnake.pth", echo=True) and I use it like: certik1 at pike:~/repos/dftatom(master)$ qsnake develop Adding the current directory into qsnake.pth file: echo $CUR >> $SPKG_LOCAL/lib/python/site-packages/qsnake.pth and then I just do: Qsnake: certik1 at pike:~/repos/dftatom(master)$ python examples/optimize.py a = 1000000000.0 N0 = 3000 a = 100000000.0 N0 = 2600 a = 10000000.0 N0 = 2300 .... and everything works as expected. So the paths will get stacked in the qsnake.pth file. I guess that's ok. The user can then inspect the file manually and prune it if needed. Ondrej From otrov at hush.ai Tue May 10 08:58:35 2011 From: otrov at hush.ai (otrov at hush.ai) Date: Tue, 10 May 2011 14:58:35 +0200 Subject: [SciPy-User] Making sweep wave with numpy/scipy Message-ID: <20110510125835.EB0A26F437@smtp.hushmail.com> Hello community :) I hope my question is not too trivial, as I joined just to ask it :D but I guess many have answer to it, nonetheless I can tell Python how to make sine wave file (from examples provided elsewhere): ---------------------------------- import numpy as np from scikits.audiolab import Sndfile s = np.sin(2 * np.pi * 1000/44100 * np.arange(0, 44100 * 2)) f = Sndfile('foo.wav', 'w', 'wav', 2, 44100) f.write_frames(s) f.close() ---------------------------------- but I have no idea how to make sweep wave, let's say from 10 Hz to 22 KHz in 10s Can someone provide solution, please? 
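One way to do this is with scipy.signal.chirp (see also the reply below, which points at the frequency-swept-signals cookbook page). A minimal sketch only: the Sndfile call simply mirrors the snippet above (depending on the audiolab version you may need to pass Format('wav') rather than the plain 'wav' string), and the file name and sample rate are just placeholders.

----------------------------------
import numpy as np
from scipy.signal import chirp
from scikits.audiolab import Sndfile

fs = 44100                        # sample rate in Hz
t = np.arange(0, 10.0, 1.0 / fs)  # 10 seconds worth of sample times

# linear sweep: 10 Hz at t = 0, 22 kHz at t = 10 s
w = chirp(t, f0=10.0, t1=10.0, f1=22000.0, method='linear')

# mono file, same call style as the snippet above
f = Sndfile('sweep.wav', 'w', 'wav', 1, fs)
f.write_frames(w)
f.close()
----------------------------------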
Thanks in advance From warren.weckesser at enthought.com Tue May 10 09:02:31 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 10 May 2011 08:02:31 -0500 Subject: [SciPy-User] Making sweep wave with numpy/scipy In-Reply-To: <20110510125835.EB0A26F437@smtp.hushmail.com> References: <20110510125835.EB0A26F437@smtp.hushmail.com> Message-ID: On Tue, May 10, 2011 at 7:58 AM, wrote: > Hello community :) > > I hope my question is not too trivial, as I joined just to ask it > :D but I guess many have answer to it, nonetheless > > I can tell Python how to make sine wave file (from examples > provided elsewhere): > > ---------------------------------- > import numpy as np > from scikits.audiolab import Sndfile > > s = np.sin(2 * np.pi * 1000/44100 * np.arange(0, 44100 * 2)) > > f = Sndfile('foo.wav', 'w', 'wav', 2, 44100) > > f.write_frames(s) > f.close() > ---------------------------------- > > but I have no idea how to make sweep wave, let's say from 10 Hz to > 22 KHz in 10s > > Can someone provide solution, please? > Take a look at scipy.signal.chirp: http://www.scipy.org/Cookbook/FrequencySweptDemo Warren > > Thanks in advance > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwerneck at gmail.com Tue May 10 09:05:32 2011 From: nwerneck at gmail.com (Nicolau Werneck) Date: Tue, 10 May 2011 10:05:32 -0300 Subject: [SciPy-User] Making sweep wave with numpy/scipy In-Reply-To: <20110510125835.EB0A26F437@smtp.hushmail.com> References: <20110510125835.EB0A26F437@smtp.hushmail.com> Message-ID: <20110510130531.GA3267@spirit> call your arange "t", then try using a quadratic expression instead of linear. What are you using this sweep for?... Many people choose it when trying to measure transfer functions, but there are much better alternatives, like Aoshima's Time Stretched Pulse. If that is your case, try looking for it! ++nic On Tue, May 10, 2011 at 02:58:35PM +0200, otrov at hush.ai wrote: > Hello community :) > > I hope my question is not too trivial, as I joined just to ask it > :D but I guess many have answer to it, nonetheless > > I can tell Python how to make sine wave file (from examples > provided elsewhere): > > ---------------------------------- > import numpy as np > from scikits.audiolab import Sndfile > > s = np.sin(2 * np.pi * 1000/44100 * np.arange(0, 44100 * 2)) > > f = Sndfile('foo.wav', 'w', 'wav', 2, 44100) > > f.write_frames(s) > f.close() > ---------------------------------- > > but I have no idea how to make sweep wave, let's say from 10 Hz to > 22 KHz in 10s > > Can someone provide solution, please? > > > Thanks in advance > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Nicolau Werneck C3CF E29F 5350 5DAA 3705 http://www.lti.pcs.usp.br/~nwerneck 7B9E D6C4 37BB DA64 6F15 Linux user #460716 "A huge gap exists between what we know is possible with today's machines and what we have so far been able to finish." 
-- Donald Knuth From otrov at hush.ai Tue May 10 09:14:00 2011 From: otrov at hush.ai (otrov at hush.ai) Date: Tue, 10 May 2011 15:14:00 +0200 Subject: [SciPy-User] Making sweep wave with numpy/scipy Message-ID: <20110510131400.DF7F66F437@smtp.hushmail.com> Excellent Thank you for your quick reply :) I wonder why that cookbook didn't show in my googling, but some results for making general wave files with Python wave module. Though it's maybe in semantics - I searched for sweep, having no idea what chirp is Cheers On Tue, 10 May 2011 15:02:31 +0200 Warren Weckesser wrote: > >Take a look at scipy.signal.chirp: > http://www.scipy.org/Cookbook/FrequencySweptDemo > >Warren > From otrov at hush.ai Tue May 10 09:18:43 2011 From: otrov at hush.ai (otrov at hush.ai) Date: Tue, 10 May 2011 15:18:43 +0200 Subject: [SciPy-User] Making sweep wave with numpy/scipy Message-ID: <20110510131843.550996F437@smtp.hushmail.com> Thanks for your reply ++nic Yeah, I obviously don't know scipy basics as I just downloaded this huge scientific package, for possible future use I want to make couple of custom sweeps to test resamplers On Tue, 10 May 2011 15:05:32 +0200 Nicolau Werneck wrote: >call your arange "t", then try using a quadratic expression >instead of >linear. > >What are you using this sweep for?... Many people choose it when >trying to measure transfer functions, but there are much better >alternatives, like Aoshima's Time Stretched Pulse. If that is your >case, try looking for it! > >++nic > From otrov at hush.ai Mon May 9 22:12:14 2011 From: otrov at hush.ai (otrov at hush.ai) Date: Tue, 10 May 2011 04:12:14 +0200 Subject: [SciPy-User] Sweep wave file with numpy? Message-ID: <20110510021214.604E66F437@smtp.hushmail.com> Hello community :) I hope my question is not too trivial, as I joined just to ask it :D but I guess many have answer to it, nonetheless I can tell Python how to make sine wave file (from examples provided elsewhere): ---------------------------------- import numpy as np from scikits.audiolab import Sndfile s = np.sin(2 * np.pi * 1000/44100 * np.arange(0, 44100 * 2)) f = Sndfile('foo.wav', 'w', 'wav', 2, 44100) f.write_frames(s) f.close() ---------------------------------- but I have no idea how to make sweep wave, let's say from 10 Hz to 22 KHz in 10s Can someone provide solution, please? Thanks in advance From otrov at hush.ai Tue May 10 11:43:54 2011 From: otrov at hush.ai (otrov at hush.ai) Date: Tue, 10 May 2011 17:43:54 +0200 Subject: [SciPy-User] Making sweep wave with numpy/scipy Message-ID: <20110510154354.B65C56F437@smtp.hushmail.com> I've settled a bit at chirps, and results are as expected. I wanted to go little further while I'm in console and maybe generate that beautiful 'sweep poly' (shown as last example). However while chirps looks in spectrogram like shown in F/t graph, this last example in spectrogram looks like straight line. (In all examples I just dumped 'w' array in WAV file) There is probably a obvious reason for that, but I don't know it. Can that last sin F/t graph be translated in spectrogram of created WAV file somehow? 
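On the sweep_poly point: the frequency polynomial in the cookbook example only moves over a few Hz, so at audio sample rates its spectrogram looks essentially flat. A rough sketch, assuming scipy.signal.sweep_poly and simply scaling a cookbook-style cubic up into the audible band (the factor of 1000 is arbitrary), with a spectrogram of the generated samples as a check:

----------------------------------
import numpy as np
from scipy.signal import sweep_poly
import matplotlib.pyplot as plt

fs = 44100
t = np.arange(0, 10.0, 1.0 / fs)

# cubic like the cookbook's sweep_poly example, scaled so the
# instantaneous frequency covers a few kHz instead of a few Hz
p = 1000.0 * np.poly1d([0.025, -0.36, 1.25, 2.0])
w = sweep_poly(t, p)

plt.specgram(w, NFFT=4096, Fs=fs, noverlap=2048)
plt.xlabel('time [s]')
plt.ylabel('frequency [Hz]')
plt.show()
----------------------------------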
Thanks :) On Tue, 10 May 2011 15:02:31 +0200 Warren Weckesser wrote: > >Take a look at scipy.signal.chirp: > http://www.scipy.org/Cookbook/FrequencySweptDemo > >Warren > From otrov at hush.ai Tue May 10 11:53:35 2011 From: otrov at hush.ai (otrov at hush.ai) Date: Tue, 10 May 2011 17:53:35 +0200 Subject: [SciPy-User] Making sweep wave with numpy/scipy Message-ID: <20110510155335.A0BD66F43F@smtp.hushmail.com> Nevermind, sorry coefficients were too low for effect to be visible in audio file (20Hz-20KHz) everything is fine cheers From ralf.gommers at googlemail.com Tue May 10 14:07:54 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 11 May 2011 02:07:54 +0800 Subject: [SciPy-User] problem with installing on osx 10.6.6 In-Reply-To: <556890.68160.qm@web132310.mail.ird.yahoo.com> References: <556890.68160.qm@web132310.mail.ird.yahoo.com> Message-ID: On Tue, May 10, 2011 at 5:06 AM, P B wrote: > > Hi, > this is odd. IF I put in the command > >>> import numpy > >>> import scipy > >>> scipy.test(verbose=2) > > I get the output > > Running unit tests for scipy > > NumPy version 1.5.1 > > NumPy is installed in > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy > > SciPy version 0.9.0 > > SciPy is installed in > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy > > Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple > Inc. build 5493)] > > nose version 1.0.0 > > ******* many test results which then finish with: > > OK (KNOWNFAIL=12, SKIP=42) > > > Could it be there is a problem with the "scipy.test('1','10')" command? > > Don't think so, it works for me with the same versions of python/numpy/scipy/nose. The '1' means run the full tests, which includes a lot of weave tests. I think your problem is that Python and Scipy are compiled with c++-4.0 while your default compiler is c++-4.2. If you change to 4.0 the tests should pass, but if you don't use weave you're fine like this. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From gus.is.here at gmail.com Tue May 10 14:57:59 2011 From: gus.is.here at gmail.com (Gus Ishere) Date: Tue, 10 May 2011 14:57:59 -0400 Subject: [SciPy-User] Power Spectral Density in SciPy, not pylab Message-ID: I see the psd() function within matplotlib pylab, and I realize that I can use it to get an array of Pxx,freqs. I don't need to make a figure though and I need to take the PSD of many time series (so it would be nice to do it in a vectorized way). Is there a PSD function within SciPy that allows me to select an axis in a n-dimensional array on which to do the PSD estimation, and then return an n-1 dimensional array? Thanks, Gustavo From daniel.jacob.jacobsen at gmail.com Wed May 11 08:28:05 2011 From: daniel.jacob.jacobsen at gmail.com (Daniel Jacobsen) Date: Wed, 11 May 2011 13:28:05 +0100 Subject: [SciPy-User] Job posting - off topic? Message-ID: Hi, I don't know what the scope of this mailing list is exactly, but in any case, my company is looking for experienced Python/Numpy/Scipy programmers in Copenhagen, Denmark. We work with very large datasets and numerical/statistical analysis of such. If this is off topic for this list, I appologize - in this case, can anyone point me to an appropriate forum for this particular subject? Daniel -------------- next part -------------- An HTML attachment was scrubbed... 
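On the PSD question: as far as I know scipy 0.9 has no psd estimator of its own, and matplotlib.mlab.psd works on one 1-d signal at a time, but a plain (un-averaged, Hann-windowed) periodogram along an arbitrary axis can be written directly with numpy.fft. A sketch only -- the function name and normalisation are illustrative, and there is no Welch-style segment averaging here:

----------------------------------
import numpy as np

def periodogram(data, fs=1.0, axis=-1):
    # plain periodogram estimate along `axis`; the output has the same
    # shape as `data` except that `axis` shrinks to n//2 + 1 frequency bins
    data = np.asarray(data)
    n = data.shape[axis]
    window = np.hanning(n)
    shape = [1] * data.ndim
    shape[axis] = n                       # broadcast the window along `axis`
    spec = np.fft.rfft(data * window.reshape(shape), axis=axis)
    Pxx = np.abs(spec) ** 2 / (fs * (window ** 2).sum())
    freqs = np.abs(np.fft.fftfreq(n, d=1.0 / fs)[:n // 2 + 1])
    return freqs, Pxx

# 50 series of length 1024, estimated along the last axis in one call
x = np.random.randn(50, 1024)
freqs, Pxx = periodogram(x, fs=1000.0, axis=-1)
print Pxx.shape    # (50, 513)
----------------------------------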
URL: From yoelor at gmail.com Wed May 11 09:55:54 2011 From: yoelor at gmail.com (Joel Oren) Date: Wed, 11 May 2011 09:55:54 -0400 Subject: [SciPy-User] using fmin_tnc Message-ID: Hi, I wish to use scipy.optimize.fmin_tnc for performing a constrained optimization of a function of 2 numpy matrices. However, according to the documentation of the method, it only accepts lists of variables. Is there a way to still use this function without having to convert the matrices (and the gradient of the function) to lists on every iteration? Thanks, Joel. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed May 11 10:48:09 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 May 2011 09:48:09 -0500 Subject: [SciPy-User] Job posting - off topic? In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 07:28, Daniel Jacobsen wrote: > Hi, > I don't know what the scope of this mailing list is exactly, but in any > case, my company is looking for experienced Python/Numpy/Scipy programmers > in Copenhagen, Denmark. We work with very large datasets and > numerical/statistical analysis of such. > If this is off topic for this list, I appologize - in this case, can anyone > point me to an appropriate forum for this particular subject? You can post relevant job opportunities here, within reason. Just add "JOB:" to your subject, or something similar. You may also have some luck posting to the Python Job Board: http://www.python.org/community/jobs/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From joonpyro at gmail.com Wed May 11 11:49:32 2011 From: joonpyro at gmail.com (Joon Ro) Date: Wed, 11 May 2011 10:49:32 -0500 Subject: [SciPy-User] using fmin_tnc In-Reply-To: References: Message-ID: On Wed, 11 May 2011 08:55:54 -0500, Joel Oren wrote: > Hi, > > I wish to use scipy.optimize.fmin_tnc for performing a constrained > optimization of a function of 2 numpy matrices. > > However, according to the documentation of the method, it only accepts > lists of variables. > > Is there a way to still use this function without having to convert the > matrices (and the gradient of the function) to lists on every iteration? > > Thanks, > Joel. Hi, I did not understand what you meant by "it only accepts lists of variables" but passing arrays to the objective functions is what you do when you use those optimization routines. For example, when you have a matrix of variables, X (n-by-k) and want to find parameters betas (k-by-1) which minimizes the objective function, scipy.optimize.fmin_tnc(func, x0 = betas0, fprime=None, args=(X) ... ) (where betas0 is the initial guess of betas) In your case, if you have two matrices X and Y, then scipy.optimize.fmin_tnc(func, x0 = betas0, fprime=None, args=(X, Y) ... ) So I did not understand converting matrix to lists part. Please correct me if I'm wrong. -Joon -------------- next part -------------- An HTML attachment was scrubbed... URL: From joonpyro at gmail.com Wed May 11 12:24:23 2011 From: joonpyro at gmail.com (Joon Ro) Date: Wed, 11 May 2011 11:24:23 -0500 Subject: [SciPy-User] using fmin_tnc In-Reply-To: References: Message-ID: Oh now I understand. So you actually have two matrices as unknown parameters. I'm sorry I was confused. Then you are right. 
It seems you have to modify your objective function so it only gets one parameter argument which is one dimensional. As you said, it seems you have to convert the matrices to a vector and reshape them inside the objective function. So the documentation does say x0 is a list of floats. I haven't tried it but it should work with a one dimensional vector as well. I'm not sure if the routine passes x as a list or not to the objective function though. -Joon On Wed, 11 May 2011 10:59:01 -0500, Joel Oren wrote: > What I meant was whether or not fmin_tnc can handle numpy matrices > (numpy.array objects) objects as function arguments. > > From your example, I gather that the function can handle them. The > question is, as I am interested in a constrained optmization, in which > the matrices entries are non-negative, can >this still work? > > If that is the case, how do I supply the bounds for each of the two > matrices I'm optimizing on? > > It is not clear wether or not this should work, according to the > documentation. I can always flatten my matrices using .flatten() and > then reshape them whenever I'm computing the >objective function. > > Thanks, > Joel. > > On Wed, May 11, 2011 at 11:49 AM, Joon Ro wrote: >> On Wed, 11 May 2011 08:55:54 -0500, Joel Oren wrote: >> >>> Hi, >>> >>> I wish to use scipy.optimize.fmin_tnc for performing a constrained >>> optimization of a function of 2 numpy matrices. >>> >>> However, according to the documentation of the method, it only accepts >>> lists of variables. >>> >>> Is there a way to still use this function without having to convert >>> the matrices (and the gradient of the function) to lists on every >>> iteration? >>> >>> Thanks, >>> Joel. >> >> Hi, >> >> I did not understand what you meant by "it only accepts lists of >> variables" but passing arrays to the objective functions is what you do >> when you use those optimization routines. >> For example, when you have a matrix of variables, X (n-by-k) and want >> to find parameters betas (k-by-1) which minimizes the objective >> function, >> scipy.optimize.fmin_tnc(func, x0 = betas0, fprime=None, args=(X) ... ) >> >> (where betas0 is the initial guess of betas) >> >> In your case, if you have two matrices X and Y, then >> >> scipy.optimize.fmin_tnc(func, x0 = betas0, fprime=None, args=(X, Y) ... >> ) >> >> So I did not understand converting matrix to lists part. Please correct >> me if I'm wrong. >> >>>> -Joon >> >> > -- Using Opera's revolutionary email client: http://www.opera.com/mail/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jradinger at gmx.at Wed May 11 15:30:16 2011 From: jradinger at gmx.at (Johannes Radinger) Date: Wed, 11 May 2011 21:30:16 +0200 Subject: [SciPy-User] matplotlib: print to postscript file and some other questions In-Reply-To: References: <20110506094637.180850@gmx.net> Message-ID: <880C6CB7-C688-4B3C-BA24-A58A3B43A220@gmx.at> Hello , sofar I know how to safe a plot into a *.eps file and it works good, but there is one issue with filled areas between two functions. When I try to use: plt.fill_between(x, pdf_min, pdf_max, color='0.85') and I try to open it on my mac I fail. So far as I know is the mac converting the eps internally to pdf to be displayed, but it seems it can't be converted. If I try to set the output to *.pdf it works perfectly and I can open the file. Something in the combination of fill_between and eps is causing the error. I tried also color="red" but with the same problems. 
Is there anything I've to set because I need the output as a working eps. /johannes Am 07.05.2011 um 12:19 schrieb Nils Wagner: > On Sat, 7 May 2011 15:37:16 +0530 > Rajeev Singh wrote: >> On Fri, May 6, 2011 at 3:16 PM, Johannes Radinger >> wrote: >> >>> Hello >>> >>> I don't know if there is a special matplotlib-user list >>> but anyway: >>> >>> I managed to draw a very simple plot of a probability >>> density function >>> (actually two superimposed pdfs). Now I'd like to draw >>> vertical lines fat >>> the points of the scale-parameters (only under the curve >>> of the function) >>> and label them. How can I do that? >>> >>> And what is the command to print the result into a >>> postscript-file? >>> >>> Here is what I've got so far: >>> >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from scipy import stats >>> >>> p=0.3 >>> m=0 >>> >>> x = np.arange(-100, 100, 0.2) >>> >>> def pdf(x,s1,s2): >>> return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) >>> * >>> stats.norm.pdf(x, loc=m, scale=s2) >>> >>> #plt.axis([-100, 100, 0, 0.03])#probably not necessary >>> plt.plot(x, pdf(x,12,100), color='red') >>> plt.title('Pdf of dispersal kernel') >>> plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') >>> plt.ylabel('probabilty of occurance') >>> plt.xlabel('distance from starting point') >>> plt.grid(True) >>> >>> plt.show() >>> >>> thanks >>> /johannes >>> -- >>> NEU: FreePhone - kostenlos mobil telefonieren und >>> surfen! >>> Jetzt informieren: http://www.gmx.net/de/go/freephone >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> Hi, >> >> You can try the following hack - >> >> import matplotlib.pyplot as plt >> import numpy as np >> from scipy import stats >> >> p=0.3 >> m=0 >> >> x = np.arange(-100, 100, 0.2) >> >> def pdf(x,s1,s2): >> return p * stats.norm.pdf(x, loc=m, scale=s1) + (1-p) >> * stats.norm.pdf(x, >> loc=m, scale=s2) >> >> #plt.axis([-100, 100, 0, 0.03])#probably not necessary >> s1, s2 = 12, 100 >> plt.plot(x, pdf(x,s1, s2), color='red') >> plt.plot(np.array([s1,s1]), >> np.array([plt.ylim()[0],pdf(s1,s1,s2)]), '--k') >> plt.plot(np.array([s2,s2]), >> np.array([plt.ylim()[0],pdf(s2,s1,s2)]), '--k') >> plt.title('Pdf of dispersal kernel') >> plt.text(60, 0.0025, r'$\mu=100,\ \sigma=15$') >> plt.ylabel('probabilty of occurance') >> plt.xlabel('distance from starting point') >> #plt.grid(True) >> >> #plt.show() >> plt.savefig('filename.eps') # this will save the figure >> in an eps file in >> current directory >> >> I am not sure if this is what you want! >> >> Rajeev > > See > > http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.savefig > http://matplotlib.sourceforge.net/users/customizing.html > > for details. > > HTH > > Nils > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From cweisiger at msg.ucsf.edu Wed May 11 16:30:11 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 11 May 2011 13:30:11 -0700 Subject: [SciPy-User] Python 2.7 / 64-bit Message-ID: I'm in the unfortunate situation of needing to make a OSX 64-bit version of a program that uses both Scipy/Numpy and wxWidgets. The former only has unofficial 64-bit builds and only for Python 2.6; the latter only has 64-bit builds for 2.7 -- before that, they were using Carbon for UI calls, which is 32-bit only. So there's a problem there. 
Hand-building my own version of either seems to be a pretty gnarly problem that I'd rather avoid if possible. Any ideas how far we are from having a 64-bit, Python2.7 build of scipy and numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. Thanks. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesmckinn at gmail.com Wed May 11 16:35:15 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 11 May 2011 16:35:15 -0400 Subject: [SciPy-User] Python 2.7 / 64-bit In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 4:30 PM, Chris Weisiger wrote: > I'm in the unfortunate situation of needing to make a OSX 64-bit version of > a program that uses both Scipy/Numpy and wxWidgets. The former only has > unofficial 64-bit builds and only for Python 2.6; the latter only has 64-bit > builds for 2.7 -- before that, they were using Carbon for UI calls, which is > 32-bit only. So there's a problem there. Hand-building my own version of > either seems to be a pretty gnarly problem that I'd rather avoid if > possible. > > Any ideas how far we are from having a 64-bit, Python2.7 build of scipy and > numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. Thanks. > > -Chris > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > I would use the Enthought Python Distribution-- consistent Python 2.7 builds across all platforms (including all the libraries you list above-- assuming you mean wxPython). From ralf.gommers at googlemail.com Wed May 11 16:39:34 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 11 May 2011 22:39:34 +0200 Subject: [SciPy-User] Python 2.7 / 64-bit In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 10:30 PM, Chris Weisiger wrote: > I'm in the unfortunate situation of needing to make a OSX 64-bit version of > a program that uses both Scipy/Numpy and wxWidgets. The former only has > unofficial 64-bit builds and only for Python 2.6; the latter only has 64-bit > builds for 2.7 -- before that, they were using Carbon for UI calls, which is > 32-bit only. So there's a problem there. Hand-building my own version of > either seems to be a pretty gnarly problem that I'd rather avoid if > possible. > > Any ideas how far we are from having a 64-bit, Python2.7 build of scipy and > numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. Thanks. > > These are available (at least for Snow Leopard): http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc3/numpy-1.6.0rc3-py2.7-python.org-macosx10.6.dmg/download http://sourceforge.net/projects/scipy/files/scipy/0.9.0/scipy-0.9.0-py2.7-python.org-macosx10.6.dmg/download Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From cwebster at enthought.com Wed May 11 16:56:14 2011 From: cwebster at enthought.com (Corran Webster) Date: Wed, 11 May 2011 15:56:14 -0500 Subject: [SciPy-User] Python 2.7 / 64-bit In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 3:35 PM, Wes McKinney wrote: > On Wed, May 11, 2011 at 4:30 PM, Chris Weisiger > wrote: > > I'm in the unfortunate situation of needing to make a OSX 64-bit version > of > > a program that uses both Scipy/Numpy and wxWidgets. The former only has > > unofficial 64-bit builds and only for Python 2.6; the latter only has > 64-bit > > builds for 2.7 -- before that, they were using Carbon for UI calls, which > is > > 32-bit only. 
So there's a problem there. Hand-building my own version of > > either seems to be a pretty gnarly problem that I'd rather avoid if > > possible. > > > > Any ideas how far we are from having a 64-bit, Python2.7 build of scipy > and > > numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. > Thanks. > > > > -Chris > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > I would use the Enthought Python Distribution-- consistent Python 2.7 > builds across all platforms (including all the libraries you list > above-- assuming you mean wxPython). > This is generally a good suggestion, except that unfortunately our 64-bit OS X EPD doesn't currently have wxWidgets/wxPython for the above reasons, so it won't solve Chris' problem. EPD for OS X does have 64-bit numpy and scipy builds for 2.7, so if you are comfortable with building wxWidgets yourself, then EPD may be part of the solution. -- Corran -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Wed May 11 17:25:30 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 11 May 2011 14:25:30 -0700 Subject: [SciPy-User] Python 2.7 / 64-bit In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 1:39 PM, Ralf Gommers wrote: > > > On Wed, May 11, 2011 at 10:30 PM, Chris Weisiger wrote: > >> >> Any ideas how far we are from having a 64-bit, Python2.7 build of scipy >> and numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. >> Thanks. >> >> These are available (at least for Snow Leopard): > > http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc3/numpy-1.6.0rc3-py2.7-python.org-macosx10.6.dmg/download > > http://sourceforge.net/projects/scipy/files/scipy/0.9.0/scipy-0.9.0-py2.7-python.org-macosx10.6.dmg/download > > Unfortunately several of my users are on OSX 10.5. I should have mentioned that pertinent detail earlier. And yes, I did meant wxPython when I said wxWidgets. I guess I get to start looking into build processes for one or the other of these two. Bugger. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrennie at gmail.com Wed May 11 23:03:58 2011 From: jrennie at gmail.com (Jason Rennie) Date: Wed, 11 May 2011 23:03:58 -0400 Subject: [SciPy-User] using fmin_tnc In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 12:24 PM, Joon Ro wrote: > So the documentation does say x0 is a list of floats. I haven't tried it > but it should work with a one dimensional vector as well. I'm not sure if > the routine passes x as a list or not to the objective function though. > I have tried it, and, yes, it accepts a numpy array. As Joon noted, you need to flatten your parameters into an array. Consider creating functions for converting between the two formats (two matrices, single vector). I typically use a class instance to keep track of the specifications for the conversion, such as the sizes of the two matrices. l_bfgs_b has a similar interface and has very smart convergence criterion AFAICT, so if you have trouble with tnc, try l_bfgs_b (even if your problem is unconstrained). Cheers, Jason -- Jason Rennie Research Scientist Google/ITA Software 617-714-2645 -------------- next part -------------- An HTML attachment was scrubbed... 
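To make the flatten/reshape bookkeeping concrete for the two-matrix, non-negative case: a minimal sketch using fmin_l_bfgs_b with simple box bounds. The sizes and names here are made up for illustration, and this is just bound-constrained least squares on || A - W H ||, not a complete NMF algorithm.

----------------------------------
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

np.random.seed(0)
n, m, k = 20, 15, 3
A = np.abs(np.random.randn(n, m))   # toy data to factor as W (n x k) times H (k x m)

def unpack(x):
    # reshape the flat parameter vector back into the two matrices
    W = x[:n * k].reshape(n, k)
    H = x[n * k:].reshape(k, m)
    return W, H

def objective(x, A):
    # 0.5 * Frobenius misfit and its gradient, flattened the same way as x
    W, H = unpack(x)
    R = np.dot(W, H) - A
    f = 0.5 * np.sum(R ** 2)
    grad = np.concatenate([np.dot(R, H.T).ravel(),    # d f / d W
                           np.dot(W.T, R).ravel()])   # d f / d H
    return f, grad

x0 = np.abs(np.random.randn(n * k + k * m))   # flat, non-negative initial guess
bounds = [(0.0, None)] * x0.size              # non-negativity on every entry

x, fval, info = fmin_l_bfgs_b(objective, x0, args=(A,), bounds=bounds)
W, H = unpack(x)
print 'misfit:', fval, 'warnflag:', info['warnflag']
----------------------------------

fmin_tnc takes the same flattened x0 and the same list of (lower, upper) bounds, so the two routines are easy to swap in this setup.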
URL: From yoelor at gmail.com Wed May 11 23:17:03 2011 From: yoelor at gmail.com (Joel Oren) Date: Wed, 11 May 2011 23:17:03 -0400 Subject: [SciPy-User] using fmin_tnc In-Reply-To: References: Message-ID: <55842805-FAC9-4749-9184-76FDB1445408@gmail.com> Thanks! One more question -- as I'm not familiar with method behind the tnc -- do you have any references for this method? Basically, I am performing a modified non-negative matrix factorization, in which in addition to the usual frobenius distance of the resulting matrices' product and the original matrix, which has an extra term that depends on one of the matrices. The only real bound I have is the non-negativity of the entries in the two matrices. Perhaps there is a way to simplify the process, or conversely, speed it up? Further down the road I might be dealing with sparse matrices, which could be a problem. Thanks, Joel. On 2011-05-11, at 11:03 PM, Jason Rennie wrote: > On Wed, May 11, 2011 at 12:24 PM, Joon Ro wrote: > So the documentation does say x0 is a list of floats. I haven't tried it but it should work with a one dimensional vector as well. I'm not sure if the routine passes x as a list or not to the objective function though. > > I have tried it, and, yes, it accepts a numpy array. As Joon noted, you need to flatten your parameters into an array. Consider creating functions for converting between the two formats (two matrices, single vector). I typically use a class instance to keep track of the specifications for the conversion, such as the sizes of the two matrices. l_bfgs_b has a similar interface and has very smart convergence criterion AFAICT, so if you have trouble with tnc, try l_bfgs_b (even if your problem is unconstrained). > > Cheers, > > Jason > > -- > Jason Rennie > Research Scientist > Google/ITA Software > 617-714-2645 > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Wed May 11 23:25:19 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 11 May 2011 23:25:19 -0400 Subject: [SciPy-User] using fmin_tnc In-Reply-To: <55842805-FAC9-4749-9184-76FDB1445408@gmail.com> References: <55842805-FAC9-4749-9184-76FDB1445408@gmail.com> Message-ID: On Wed, May 11, 2011 at 11:17 PM, Joel Oren wrote: > Thanks! > One more question -- as I'm not familiar with method behind the tnc -- do > you have any references for this method? I think the main reference, or a good starting point, for the optimize code is Numerical Optimization by Wright and Nocedal. Skipper From g.statkute at gmail.com Wed May 11 23:43:40 2011 From: g.statkute at gmail.com (gintare statkute) Date: Thu, 12 May 2011 06:43:40 +0300 Subject: [SciPy-User] Maybe some of you would like to participate Message-ID: Dear members, I subscribed recently to these groups and want to offer a brief introduction, hoping some of you will share my interest. *Academic Common Extensions (ACE)* In my contacts with academic developers I have come across many different initiatives from researchers and developers at academic institutions who implement an OpenSocial container for a range of academic projects. Many of these share common ground among each other or across the OpenSocial spec, and this aspect of shared functionality in a certain industry is itself again common for different domains. 
My objective with ACE is to explore what common solutions I can create that facilitate these common functionalities as Common Extensions. E.g. Typically academic initiatives will facilitate academic profiles, publications, research funding activities, academic applications, application citations (citation standard as an extension of gadget/widget xml), etc. *My background* I have 10 years Java server side and web application development experience (Yale University, hedge fund, in-game advertising and Microsoft) and recently got involved with OpenSocial. SciVerse is an extension of Shindig for Elsevier products (roughly 25% of the worlds Science, Technology and Medical (STM) publications, the world?s largest citation and abstract database with a user base of 15 million global researchers). So with this perspective I hope to get involved in some of the discussion and I would be very excited to hear from people with interest in this angle. * * *Cheers,* *Remko Caprio* * * *Apps for Science Challenge* - Elsevier offers $35,000 in prizes and challenges software developers to collaborate with librarians and researchers to develop the best apps and enhance customer experience. See for more details and eligibility http://AppsForScience.com . [image: leaderboard.gif] -- You received this message because you are subscribed to the Google Groups "OpenSocial and Gadgets Specification Discussion" group. To post to this group, send email to opensocial-and-gadgets-spec at googlegroups.com. To unsubscribe from this group, send email to opensocial-and-gadgets-spec+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/opensocial-and-gadgets-spec?hl=en. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 33674 bytes Desc: not available URL: From seb.haase at gmail.com Thu May 12 03:45:10 2011 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 12 May 2011 09:45:10 +0200 Subject: [SciPy-User] Python 2.7 / 64-bit In-Reply-To: References: Message-ID: On Wed, May 11, 2011 at 10:56 PM, Corran Webster wrote: > On Wed, May 11, 2011 at 3:35 PM, Wes McKinney wrote: >> >> On Wed, May 11, 2011 at 4:30 PM, Chris Weisiger >> wrote: >> > I'm in the unfortunate situation of needing to make a OSX 64-bit version >> > of >> > a program that uses both Scipy/Numpy and wxWidgets. The former only has >> > unofficial 64-bit builds and only for Python 2.6; the latter only has >> > 64-bit >> > builds for 2.7 -- before that, they were using Carbon for UI calls, >> > which is >> > 32-bit only. So there's a problem there. Hand-building my own version of >> > either seems to be a pretty gnarly problem that I'd rather avoid if >> > possible. >> > >> > Any ideas how far we are from having a 64-bit, Python2.7 build of scipy >> > and >> > numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. >> > Thanks. >> > >> > -Chris >> > >> I would use the Enthought Python Distribution-- consistent Python 2.7 >> builds across all platforms (including all the libraries you list >> above-- assuming you mean wxPython). > > > This is generally a good suggestion, except that unfortunately our 64-bit OS > X EPD doesn't currently have wxWidgets/wxPython for the above reasons, so it > won't solve Chris' problem. 
> > EPD for OS X does have 64-bit numpy and scipy builds for 2.7, so if you are > comfortable with building wxWidgets yourself, then EPD may be part of the > solution. > > -- Corran What is the problem with switching form Carbon to the Cocoa version of wx ? Of course it's less tested, and probably has still some glitches, but it's seems, that is the only viable direction of OS-X on 64-bit in the future. -Sebastian Haase From ckkart at hoc.net Thu May 12 07:30:32 2011 From: ckkart at hoc.net (Christian K.) Date: Thu, 12 May 2011 11:30:32 +0000 (UTC) Subject: [SciPy-User] autocrop 2d array Message-ID: Hi, I need to strip 0-values from a ndarray, similar to the autocrop function in image manipulation, i.e. I need to find the 0<->any steps in the ndarray. I cannot think of a solution without looping but I am sure there must be one based on clever indexing. Any ideas? Thanks in advnace, Christian From zachary.pincus at yale.edu Thu May 12 07:56:46 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 12 May 2011 07:56:46 -0400 Subject: [SciPy-User] autocrop 2d array In-Reply-To: References: Message-ID: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> If you can specify the problem a little more clearly (I'm not familiar with autocrop) maybe I could help. Are you looking for the smallest sub-array containing all of the non-zero values in the array? (That is, to trim off zero-valued borders?) Zach On May 12, 2011, at 7:30 AM, Christian K. wrote: > Hi, > > I need to strip 0-values from a ndarray, similar to the autocrop function in > image manipulation, i.e. I need to find the 0<->any steps in the ndarray. I > cannot think of a solution without looping but I am sure there must be one > based on clever indexing. Any ideas? > > Thanks in advnace, Christian > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From cwebster at enthought.com Thu May 12 10:37:48 2011 From: cwebster at enthought.com (Corran Webster) Date: Thu, 12 May 2011 09:37:48 -0500 Subject: [SciPy-User] Python 2.7 / 64-bit In-Reply-To: References: Message-ID: On Thu, May 12, 2011 at 2:45 AM, Sebastian Haase wrote: > On Wed, May 11, 2011 at 10:56 PM, Corran Webster > wrote: > > On Wed, May 11, 2011 at 3:35 PM, Wes McKinney > wrote: > >> > >> On Wed, May 11, 2011 at 4:30 PM, Chris Weisiger > > >> wrote: > >> > I'm in the unfortunate situation of needing to make a OSX 64-bit > version > >> > of > >> > a program that uses both Scipy/Numpy and wxWidgets. The former only > has > >> > unofficial 64-bit builds and only for Python 2.6; the latter only has > >> > 64-bit > >> > builds for 2.7 -- before that, they were using Carbon for UI calls, > >> > which is > >> > 32-bit only. So there's a problem there. Hand-building my own version > of > >> > either seems to be a pretty gnarly problem that I'd rather avoid if > >> > possible. > >> > > >> > Any ideas how far we are from having a 64-bit, Python2.7 build of > scipy > >> > and > >> > numpy for OSX? All else being equal, I'd rather be on 2.7 than 2.6. > >> > Thanks. > >> > > >> > -Chris > >> > > >> I would use the Enthought Python Distribution-- consistent Python 2.7 > >> builds across all platforms (including all the libraries you list > >> above-- assuming you mean wxPython). > > > > > > This is generally a good suggestion, except that unfortunately our 64-bit > OS > > X EPD doesn't currently have wxWidgets/wxPython for the above reasons, so > it > > won't solve Chris' problem. 
> > > > EPD for OS X does have 64-bit numpy and scipy builds for 2.7, so if you > are > > comfortable with building wxWidgets yourself, then EPD may be part of the > > solution. > > > > -- Corran > > What is the problem with switching form Carbon to the Cocoa version of wx ? > Of course it's less tested, and probably has still some glitches, but > it's seems, that is the only viable direction of OS-X on 64-bit in the > future. > I'm not sure what the issue is, as I don't do the builds, but I know that wxWidgets/wxPython on OS X 64 isn't yet available. Ilan Schnell who does do the EPD builds may be able to give you more information about what the precise problems are. -- Corran -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Thu May 12 11:56:19 2011 From: ckkart at hoc.net (Christian K.) Date: Thu, 12 May 2011 15:56:19 +0000 (UTC) Subject: [SciPy-User] autocrop 2d array References: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> Message-ID: Hi Zach, Zachary Pincus yale.edu> writes: > > If you can specify the problem a little more clearly (I'm not familiar with autocrop) maybe I could help. Are > you looking for the smallest sub-array containing all of the non-zero values in the array? (That is, to > trim off zero-valued borders?) > That is exactly what I want. In case the subarray has the same layout as the containing array - square shape - it is equal to data[data != 0] but I also need a subarray for the general case of arbitrarily shaped objects within the array. Regards, Christian From jsseabold at gmail.com Thu May 12 12:30:59 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 12 May 2011 12:30:59 -0400 Subject: [SciPy-User] vectorized cumulative integration? Message-ID: I have a pdf that I want to integrate from -np.inf to each point in the support to get the cdf. Right now, I can use list comprehension to do something like cdf = [integrate.quad(pdf, -np.inf, end, args=(some_data_array,) for end in support] But this takes a few seconds. Alternatively, I can do cdf = integrate.cumtrapz(pdf_estimate, support) lower_tail = integrate.quad(pdf, -np.inf, support[0], args=(some_data_array,)) cdf = np.r_[lower_tail, cdf+lower_tail] The latter seems like it might be a crude approximation, though I'm not sure. The former is too slow. Any other ideas? I tried to vectorize the former approach like the generic cdf in stats.distributions, but since my pdf takes an array argument I had some trouble and it would take some ugly workarounds I think. Cheers, Skipper From cweisiger at msg.ucsf.edu Thu May 12 12:38:56 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Thu, 12 May 2011 09:38:56 -0700 Subject: [SciPy-User] autocrop 2d array In-Reply-To: References: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> Message-ID: On Thu, May 12, 2011 at 8:56 AM, Christian K. wrote: > Hi Zach, > > Zachary Pincus yale.edu> writes: > > > > > If you can specify the problem a little more clearly (I'm not familiar > with > autocrop) maybe I could help. Are > > you looking for the smallest sub-array containing all of the non-zero > values > in the array? (That is, to > > trim off zero-valued borders?) > > > > That is exactly what I want. > You ought to be able to rig something up with numpy.where, which returns the coordinates in the array that match some condition. 
Something like this: foo = np.zeros((5, 5)) foo[1:4,1:4] = np.arange(9).reshape(3, 3) coords = np.array(np.where(foo != 0)).T minCorner = np.min(coords, axis = 0) maxCorner = np.max(coords, axis = 0) + 1 foo[minCorner[0]:maxCorner[0], minCorner[1]:maxCorner[1]] There's almost certainly even faster ways to do this; this is just the first solution that came to mind. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Thu May 12 12:42:13 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 12 May 2011 12:42:13 -0400 Subject: [SciPy-User] autocrop 2d array In-Reply-To: References: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> Message-ID: Here's what came to mind: grab the indices at which the array is nonzero, then calculate the bounding box from that. Not the most efficient (which you could do easily enough in a cython loop), but compact and pure-python. Perhaps there's a better way, but this may suffice. import numpy a = numpy.zeros((10,10),int) a[5,3:7] = 1; a[3:7,5] = 1 print a array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) xs, ys = numpy.indices(a.shape) xi, yi = xs[a!=0], ys[a!=0] print a[xi.min():xi.max()+1, yi.min():yi.max()+1] array([[0, 0, 1, 0], [0, 0, 1, 0], [1, 1, 1, 1], [0, 0, 1, 0]]) On May 12, 2011, at 11:56 AM, Christian K. wrote: > Hi Zach, > > Zachary Pincus yale.edu> writes: > >> >> If you can specify the problem a little more clearly (I'm not familiar with > autocrop) maybe I could help. Are >> you looking for the smallest sub-array containing all of the non-zero values > in the array? (That is, to >> trim off zero-valued borders?) >> > > That is exactly what I want. In case the subarray has the same layout as the > containing array - square shape - it is equal to > > data[data != 0] > > but I also need a subarray for the general case of arbitrarily shaped objects > within the array. > > Regards, Christian > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From zachary.pincus at yale.edu Thu May 12 12:44:09 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 12 May 2011 12:44:09 -0400 Subject: [SciPy-User] autocrop 2d array In-Reply-To: References: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> Message-ID: <96EA3990-E875-49D2-8809-6BA8E45DFA28@yale.edu> > > You ought to be able to rig something up with numpy.where, which returns the coordinates in the array that match some condition. Something like this: > > foo = np.zeros((5, 5)) > foo[1:4,1:4] = np.arange(9).reshape(3, 3) > coords = np.array(np.where(foo != 0)).T > minCorner = np.min(coords, axis = 0) > maxCorner = np.max(coords, axis = 0) + 1 > foo[minCorner[0]:maxCorner[0], minCorner[1]:maxCorner[1]] > > There's almost certainly even faster ways to do this; this is just the first solution that came to mind. > The above solution is basically identical to mine, and it might be a little faster or more memory-efficient if np.where() is implemented in C... Thanks, Chris! 
From bastian.weber at gmx-topmail.de Thu May 12 13:24:24 2011 From: bastian.weber at gmx-topmail.de (Bastian Weber) Date: Thu, 12 May 2011 19:24:24 +0200 Subject: [SciPy-User] kernel of multivariate polynomial function Message-ID: <4DCC17C8.5010803@gmx-topmail.de> Hello, is there an easy way to find numerically the set where a polynomial function of n unknowns is zero? If the function were f(x1,..xn) = x1**2+...+xn**2 -1, then this set would be the (n-1)-dimensional unit sphere. However, in my case the function has arbitrary coefficients. (Still I am not sure what a proper representation of such a set would be) Best regards, Bastian. From gael.varoquaux at normalesup.org Thu May 12 13:34:19 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 12 May 2011 19:34:19 +0200 Subject: [SciPy-User] autocrop 2d array In-Reply-To: References: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> Message-ID: <20110512173419.GA23200@phare.normalesup.org> On Thu, May 12, 2011 at 03:56:19PM +0000, Christian K. wrote: > > If you can specify the problem a little more clearly (I'm not familiar with > autocrop) maybe I could help. Are > > you looking for the smallest sub-array containing all of the non-zero values > in the array? (That is, to > > trim off zero-valued borders?) > That is exactly what I want. In case the subarray has the same layout as the > containing array - square shape - it is equal to > data[data != 0] > but I also need a subarray for the general case of arbitrarily shaped objects > within the array. Use scipy.ndimage.find_objects: http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.find_objects.html#scipy.ndimage.measurements.find_objects Gael From josef.pktd at gmail.com Thu May 12 13:47:44 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 12 May 2011 13:47:44 -0400 Subject: [SciPy-User] vectorized cumulative integration? In-Reply-To: References: Message-ID: On Thu, May 12, 2011 at 12:30 PM, Skipper Seabold wrote: > I have a pdf that I want to integrate from -np.inf to each point in > the support to get the cdf. Right now, I can use list comprehension to > do something like > > cdf = [integrate.quad(pdf, -np.inf, end, args=(some_data_array,) for > end in support] > > But this takes a few seconds. Alternatively, I can do > > cdf = integrate.cumtrapz(pdf_estimate, support) > lower_tail = integrate.quad(pdf, -np.inf, support[0], args=(some_data_array,)) > cdf = np.r_[lower_tail, cdf+lower_tail] > > The latter seems like it might be a crude approximation, though I'm > not sure. The former is too slow. Any other ideas? I tried to > vectorize the former approach like the generic cdf in > stats.distributions, but since my pdf takes an array argument I had > some trouble and it would take some ugly workarounds I think. 
I don't have a solution, but you could try piecewise quad, then at least you don't have to integrate the full range each time probs = [integrate.quad(pdf, end-delta, end, args=(some_data_array,) for end in support[1:]] cdf = np.cumsum(probs) or better quad the intervals end[i-1] to end[i] for i in range(len(support)) (or use a proper pair iterator) cumtrapz should be ok if you can calculate a lot of points, len(support) is large Cheerio, Josef (one more day) > > Cheers, > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From danielstefanmader at googlemail.com Thu May 12 15:23:09 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 12 May 2011 21:23:09 +0200 Subject: [SciPy-User] Spyderlib: how to end a script without getting a Traceback? Message-ID: Hi, sometimes I need to just end a script at a certain position (for debug purposes, mostly). Before starting to use Spyder, I have use sys.exit() for that purpose: import sys print 11111 sys.exit() print 22222 However, in the IPython shell integrated in Spyder, this results in a traceback, which is useless in this context. How can I get a quiet system exit here? Thanks in advance, Daniel From jsseabold at gmail.com Thu May 12 16:31:49 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 12 May 2011 16:31:49 -0400 Subject: [SciPy-User] vectorized cumulative integration? In-Reply-To: References: Message-ID: On Thu, May 12, 2011 at 1:47 PM, wrote: > On Thu, May 12, 2011 at 12:30 PM, Skipper Seabold wrote: >> I have a pdf that I want to integrate from -np.inf to each point in >> the support to get the cdf. Right now, I can use list comprehension to >> do something like >> >> cdf = [integrate.quad(pdf, -np.inf, end, args=(some_data_array,) for >> end in support] >> >> But this takes a few seconds. Alternatively, I can do >> >> cdf = integrate.cumtrapz(pdf_estimate, support) >> lower_tail = integrate.quad(pdf, -np.inf, support[0], args=(some_data_array,)) >> cdf = np.r_[lower_tail, cdf+lower_tail] >> >> The latter seems like it might be a crude approximation, though I'm >> not sure. The former is too slow. Any other ideas? I tried to >> vectorize the former approach like the generic cdf in >> stats.distributions, but since my pdf takes an array argument I had >> some trouble and it would take some ugly workarounds I think. > > I don't have a solution, but you could try piecewise quad, then at > least you don't have to integrate the full range each time > > probs = [integrate.quad(pdf, end-delta, end, args=(some_data_array,) > for end in support[1:]] > cdf = np.cumsum(probs) > > or better quad the intervals end[i-1] to end[i] ?for i in > range(len(support)) (or use a proper pair iterator) > Ah right, this one is much faster for a first shot. Thanks, Skipper From k-assem84 at hotmail.com Fri May 13 08:33:45 2011 From: k-assem84 at hotmail.com (suzana8447) Date: Fri, 13 May 2011 05:33:45 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] How to get access to Array elements In-Reply-To: References: <31551042.post@talk.nabble.com> Message-ID: <31610990.post@talk.nabble.com> Thanks very much Keith Goodman wrote: > > On Thu, May 5, 2011 at 7:50 AM, suzana8447 wrote: > >> array([1,2,8,10,50)] >> note that the third element in the array is 8. ?I want to form an if >> statement as this: >> if(x==8): print x # just as an example. >> But how to declare x as the third elemnt of the above array. 
> > Here's an example: > > >> import numpy as np > >> a = np.array([1,2,8,10,50]) > >> a[0] > 1 > >> a[1] > 2 > >> a[2] > 8 > >> a[3] > 10 > >> a[4] > 50 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/How-to-get-access-to-Array-elements-tp31551042p31610990.html Sent from the Scipy-User mailing list archive at Nabble.com. From k-assem84 at hotmail.com Fri May 13 08:34:11 2011 From: k-assem84 at hotmail.com (suzana8447) Date: Fri, 13 May 2011 05:34:11 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] How to get access to Array elements In-Reply-To: References: <31551042.post@talk.nabble.com> Message-ID: <31610991.post@talk.nabble.com> thanks very much peteT wrote: > > Hi, > > At=array([2,3,8,10]) > x=At[2] #remember that indexing begins with 0. > if x==8: > print x > > hth > > Peter > On May 6, 2011 6:42 AM, "suzana8447" wrote: >> >> Hello for all, >> >> I am still a beginner in the Python language. >> I have a problem and hope that some can help me. >> I my program I have an array called for example densities= >> array([1,2,8,10,50)] >> note that the third element in the array is 8. I want to form an if >> statement as this: >> if(x==8): print x # just as an example. >> But how to declare x as the third elemnt of the above array. >> Thanks in advance. >> >> -- >> View this message in context: > http://old.nabble.com/How-to-get-access-to-Array-elements-tp31551042p31551042.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/How-to-get-access-to-Array-elements-tp31551042p31610991.html Sent from the Scipy-User mailing list archive at Nabble.com. From ckkart at hoc.net Fri May 13 12:36:00 2011 From: ckkart at hoc.net (Christian K.) Date: Fri, 13 May 2011 18:36:00 +0200 Subject: [SciPy-User] autocrop 2d array In-Reply-To: References: <0E5F2E06-9264-4BE2-9454-B395B7024C95@yale.edu> Message-ID: Am 12.05.11 18:38, schrieb Chris Weisiger: > On Thu, May 12, 2011 at 8:56 AM, Christian K. > wrote: > > Hi Zach, > > Zachary Pincus yale.edu > writes: > > > > > If you can specify the problem a little more clearly (I'm not > familiar with > autocrop) maybe I could help. Are > > you looking for the smallest sub-array containing all of the > non-zero values > in the array? (That is, to > > trim off zero-valued borders?) > > > > That is exactly what I want. > > > You ought to be able to rig something up with numpy.where, which returns > the coordinates in the array that match some condition. Something like this: > > foo = np.zeros((5, 5)) > foo[1:4,1:4] = np.arange(9).reshape(3, 3) > coords = np.array(np.where(foo != 0)).T > minCorner = np.min(coords, axis = 0) > maxCorner = np.max(coords, axis = 0) + 1 > foo[minCorner[0]:maxCorner[0], minCorner[1]:maxCorner[1]] > Thanks all, that works very well. Christian From srean.list at gmail.com Fri May 13 16:14:46 2011 From: srean.list at gmail.com (srean) Date: Fri, 13 May 2011 15:14:46 -0500 Subject: [SciPy-User] calling weave.blitz expressions from weave.inline code Message-ID: Hi everyone, I have two questions. 
(i) is there a way to call a weave.blitz(expr) from inside a block of weave.inline code ? and (ii) if an expression involves broadcasting can it be handled by weave.blitz() and if not are there any good alternatives, for instance does Numexpr handle broadcasts ? Sincere thanks in advance srean -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat May 14 05:54:22 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 14 May 2011 11:54:22 +0200 Subject: [SciPy-User] ANN: NumPy 1.6.0 Message-ID: Hi, I am pleased to announce the release of NumPy 1.6.0. This release is the result of 9 months of work, and includes many new features, performance improvements and bug fixes. Some highlights are: - Re-introduction of datetime dtype support to deal with dates in arrays. - A new 16-bit floating point type. - A new iterator, which improves performance of many functions. Sources and binaries can be found at http://sourceforge.net/projects/numpy/files/NumPy/1.6.0/ For release notes see below. Thank you to everyone who contributed to this release. Enjoy, The NumPy developers ========================= NumPy 1.6.0 Release Notes ========================= This release includes several new features as well as numerous bug fixes and improved documentation. It is backward compatible with the 1.5.0 release, and supports Python 2.4 - 2.7 and 3.1 - 3.2. Highlights ========== * Re-introduction of datetime dtype support to deal with dates in arrays. * A new 16-bit floating point type. * A new iterator, which improves performance of many functions. New features ============ New 16-bit floating point type ------------------------------ This release adds support for the IEEE 754-2008 binary16 format, available as the data type ``numpy.half``. Within Python, the type behaves similarly to `float` or `double`, and C extensions can add support for it with the exposed half-float API. New iterator ------------ A new iterator has been added, replacing the functionality of the existing iterator and multi-iterator with a single object and API. This iterator works well with general memory layouts different from C or Fortran contiguous, and handles both standard NumPy and customized broadcasting. The buffering, automatic data type conversion, and optional output parameters, offered by ufuncs but difficult to replicate elsewhere, are now exposed by this iterator. Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial`` ------------------------------------------------------------------------- Extend the number of polynomials available in the polynomial package. In addition, a new ``window`` attribute has been added to the classes in order to specify the range the ``domain`` maps to. This is mostly useful for the Laguerre, Hermite, and HermiteE polynomials whose natural domains are infinite and provides a more intuitive way to get the correct mapping of values without playing unnatural tricks with the domain. Fortran assumed shape array and size function support in ``numpy.f2py`` ----------------------------------------------------------------------- F2py now supports wrapping Fortran 90 routines that use assumed shape arrays. Before such routines could be called from Python but the corresponding Fortran routines received assumed shape arrays as zero length arrays which caused unpredicted results. Thanks to Lorenz H?depohl for pointing out the correct way to interface routines with assumed shape arrays. 
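As a rough sketch (not part of the announcement itself) of how a few of the additions above look from Python, namely the ``half`` type, the polynomial convenience classes with an explicit domain, and ``einsum``; this assumes NumPy 1.6 is installed and the coefficients and values below are arbitrary:

import numpy as np
from numpy.polynomial import Legendre

a = np.arange(4, dtype=np.half)            # new 16-bit float type
print a.dtype, a

p = Legendre([0, 0, 1], domain=[0, 1])     # P_2 mapped from [-1, 1] onto [0, 1]
print p(0.5)

b = np.arange(6.).reshape(2, 3)
print np.einsum('ij,jk->ik', b, b.T)       # matrix product via Einstein summation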
In addition, f2py supports now automatic wrapping of Fortran routines that use two argument ``size`` function in dimension specifications. Other new functions ------------------- ``numpy.ravel_multi_index`` : Converts a multi-index tuple into an array of flat indices, applying boundary modes to the indices. ``numpy.einsum`` : Evaluate the Einstein summation convention. Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way compute such summations. ``numpy.count_nonzero`` : Counts the number of non-zero elements in an array. ``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose the underlying type promotion used by the ufuncs and other operations to determine the types of outputs. These improve upon the ``numpy.common_type`` and ``numpy.mintypecode`` which provide similar functionality but do not match the ufunc implementation. Changes ======= ``default error handling`` -------------------------- The default error handling has been change from ``print`` to ``warn`` for all except for ``underflow``, which remains as ``ignore``. ``numpy.distutils`` ------------------- Several new compilers are supported for building Numpy: the Portland Group Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C compiler on Linux. ``numpy.testing`` ----------------- The testing framework gained ``numpy.testing.assert_allclose``, which provides a more convenient way to compare floating point arrays than `assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`. ``C API`` --------- In addition to the APIs for the new iterator and half data type, a number of other additions have been made to the C API. The type promotion mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``, ``PyArray_ResultType``, and ``PyArray_MinScalarType``. A new enumeration ``NPY_CASTING`` has been added which controls what types of casts are permitted. This is used by the new functions ``PyArray_CanCastArrayTo`` and ``PyArray_CanCastTypeTo``. A more flexible way to handle conversion of arbitrary python objects into arrays is exposed by ``PyArray_GetArrayParamsFromObject``. Deprecated features =================== The "normed" keyword in ``numpy.histogram`` is deprecated. Its functionality will be replaced by the new "density" keyword. Removed features ================ ``numpy.fft`` ------------- The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`, which were aliases for the same functions without the 'e' in the name, were removed. ``numpy.memmap`` ---------------- The `sync()` and `close()` methods of memmap were removed. Use `flush()` and "del memmap" instead. ``numpy.lib`` ------------- The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``, ``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed. ``numpy.ma`` ------------ Several deprecated items were removed from the ``numpy.ma`` module:: * ``numpy.ma.MaskedArray`` "raw_data" method * ``numpy.ma.MaskedArray`` constructor "flag" keyword * ``numpy.ma.make_mask`` "flag" keyword * ``numpy.ma.allclose`` "fill_value" keyword ``numpy.distutils`` ------------------- The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include`` instead. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Sat May 14 06:44:06 2011 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Sat, 14 May 2011 12:44:06 +0200 Subject: [SciPy-User] ANN: NumPy 1.6.0 - FAIL: test_expon (test_morestats.TestAnderson) In-Reply-To: References: Message-ID: <1305369846.2348.8.camel@newpride> Hi! On Sat, 2011-05-14 at 11:54 +0200, Ralf Gommers wrote: > > I am pleased to announce the release of NumPy 1.6.0. This release is > the result of 9 months of work, and includes many new features, > performance improvements and bug fixes. Some highlights are: Congratulations! Absolutely great news! I have started to update my private builds on Ubuntu Maverick 64-bit: $ uname -a Linux newpride 2.6.35-28-generic #50-Ubuntu SMP Fri Mar 18 18:42:20 UTC 2011 x86_64 GNU/Linux I am building against ATLAS optimized for Core i7: sudo apt-get install libatlas3gf-corei7sse3 libatlas3gf-corei7sse3-dev export BLAS=/usr/lib/libblas.so export LAPACK=/usr/lib/liblapack.so export ATLAS=/usr/lib/libatlas.so pip install numpy pip install scipy The NumPy builds fine and passes the test suite, however, when I rebuild latest SciPy 0.9.0 against latest NumPy 1.6.0 I am getting one test failure: ====================================================================== FAIL: test_expon (test_morestats.TestAnderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/srv/virtualenv/mle/lib/python2.7/site-packages/scipy/stats/tests/test_morestats.py", line 72, in test_expon assert_array_less(crit[:-1], A) File "/srv/virtualenv/mle/lib/python2.7/site-packages/numpy/testing/utils.py", line 869, in assert_array_less header='Arrays are not less-ordered') File "/srv/virtualenv/mle/lib/python2.7/site-packages/numpy/testing/utils.py", line 613, in assert_array_compare chk_same_position(x_id, y_id, hasval='inf') File "/srv/virtualenv/mle/lib/python2.7/site-packages/numpy/testing/utils.py", line 588, in chk_same_position raise AssertionError(msg) AssertionError: Arrays are not less-ordered x and y inf location mismatch: x: array([ 0.911, 1.065, 1.325, 1.587]) y: array(inf) ---------------------------------------------------------------------- This doesn't happen if I build SciPy 0.9.0 against NumPy 1.5.0, all the tests pass. Hope that helps to figure out the problem, -- Sincerely yours, Yury V. Zaytsev From giorgos.tzampanakis at gmail.com Fri May 13 18:40:00 2011 From: giorgos.tzampanakis at gmail.com (Giorgos Tzampanakis) Date: Fri, 13 May 2011 22:40:00 +0000 (UTC) Subject: [SciPy-User] Alternatives to genfromtxt and loadtxt? Message-ID: I have numeric data in ascii files, each file about 800 MB. Loading such a file to Octave takes about 30 seconds. On numpy it is so slow that I've never had the patience to see it through to the end. A faster way that I have found is to convert the data to hdf5 and then load them into numpy, however this is an extra step that I would like to avoid, if possible. Any suggestions welcome. From yury at shurup.com Sat May 14 06:53:57 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Sat, 14 May 2011 12:53:57 +0200 Subject: [SciPy-User] Alternatives to genfromtxt and loadtxt? In-Reply-To: References: Message-ID: <1305370437.2348.16.camel@newpride> On Fri, 2011-05-13 at 22:40 +0000, Giorgos Tzampanakis wrote: > I have numeric data in ascii files, each file about 800 MB. Loading such a > file to Octave takes about 30 seconds. On numpy it is so slow that I've > never had the patience to see it through to the end. 
If the layout is more or less simple, you may have better luck with reading files with Python's built-in CSV reader and only then converting the lists to NumPy arrays. I know it must definitively be not the best solution out there, but it takes zero effort and I have able to load 500 Mb large files in a matter of dozens of seconds without any problems: import csv import numpy as np # Auto-detect the CSV dialect that is being used # dialect = csv.Sniffer().sniff(fp.read(1024)) fp.seek(0) reader = csv.reader(fp, dialect) data = [] for row in reader: # Filter out empty fields # row = [x for x in row if x != ""] ... data.append(row) ... matrix = np.asarray(data, dtype = np.float) -- Sincerely yours, Yury V. Zaytsev From josef.pktd at gmail.com Sat May 14 08:20:00 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 08:20:00 -0400 Subject: [SciPy-User] ANN: NumPy 1.6.0 - FAIL: test_expon (test_morestats.TestAnderson) In-Reply-To: <1305369846.2348.8.camel@newpride> References: <1305369846.2348.8.camel@newpride> Message-ID: On Sat, May 14, 2011 at 6:44 AM, Yury V. Zaytsev wrote: > Hi! > > On Sat, 2011-05-14 at 11:54 +0200, Ralf Gommers wrote: >> >> I am pleased to announce the release of NumPy 1.6.0. This release is >> the result of 9 months of work, and includes many new features, >> performance improvements and bug fixes. Some highlights are: > > Congratulations! Absolutely great news! > > I have started to update my private builds on Ubuntu Maverick 64-bit: > > $ uname -a > Linux newpride 2.6.35-28-generic #50-Ubuntu SMP Fri Mar 18 18:42:20 UTC 2011 x86_64 GNU/Linux > > I am building against ATLAS optimized for Core i7: > > sudo apt-get install libatlas3gf-corei7sse3 libatlas3gf-corei7sse3-dev > > export BLAS=/usr/lib/libblas.so > export LAPACK=/usr/lib/liblapack.so > export ATLAS=/usr/lib/libatlas.so > > pip install numpy > pip install scipy > > The NumPy builds fine and passes the test suite, however, when I rebuild > latest SciPy 0.9.0 against latest NumPy 1.6.0 I am getting one test > failure: > > ====================================================================== > FAIL: test_expon (test_morestats.TestAnderson) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ?File "/srv/virtualenv/mle/lib/python2.7/site-packages/scipy/stats/tests/test_morestats.py", line 72, in test_expon > ? ?assert_array_less(crit[:-1], A) > ?File "/srv/virtualenv/mle/lib/python2.7/site-packages/numpy/testing/utils.py", line 869, in assert_array_less > ? ?header='Arrays are not less-ordered') > ?File "/srv/virtualenv/mle/lib/python2.7/site-packages/numpy/testing/utils.py", line 613, in assert_array_compare > ? ?chk_same_position(x_id, y_id, hasval='inf') > ?File "/srv/virtualenv/mle/lib/python2.7/site-packages/numpy/testing/utils.py", line 588, in chk_same_position > ? ?raise AssertionError(msg) > AssertionError: > Arrays are not less-ordered > > x and y inf location mismatch: > ?x: array([ 0.911, ?1.065, ?1.325, ?1.587]) > ?y: array(inf) That's just an old, not well written test that needs to be updated. assert_array_less got a bit stricter and now also verifies that the shapes of the arrays match. Nothing to worry about, but it needs to be cleaned up in scipy. Josef > > ---------------------------------------------------------------------- > > This doesn't happen if I build SciPy 0.9.0 against NumPy 1.5.0, all the > tests pass. > > Hope that helps to figure out the problem, > > -- > Sincerely yours, > Yury V. 
Zaytsev > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From giorgos.tzampanakis at gmail.com Sat May 14 08:25:27 2011 From: giorgos.tzampanakis at gmail.com (Giorgos Tzampanakis) Date: Sat, 14 May 2011 12:25:27 +0000 (UTC) Subject: [SciPy-User] Alternatives to genfromtxt and loadtxt? References: <1305370437.2348.16.camel@newpride> Message-ID: On 2011-05-14, Yury V. Zaytsev wrote: > On Fri, 2011-05-13 at 22:40 +0000, Giorgos Tzampanakis wrote: >> I have numeric data in ascii files, each file about 800 MB. Loading such a >> file to Octave takes about 30 seconds. On numpy it is so slow that I've >> never had the patience to see it through to the end. > > If the layout is more or less simple, you may have better luck with > reading files with Python's built-in CSV reader and only then converting > the lists to NumPy arrays. > > I know it must definitively be not the best solution out there, but it > takes zero effort and I have able to load 500 Mb large files in a matter > of dozens of seconds without any problems: Thanks for the suggestion! It wasn't quite as fast as Octave, in fact it was about 6 times slower, but I think it'll do for an initial load. Then I can save to numpy's native binary format. The question now is, why aren't genfromtxt and loadtxt using this approach if it is faster than what they're doing? From yury at shurup.com Sat May 14 08:45:21 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Sat, 14 May 2011 14:45:21 +0200 Subject: [SciPy-User] Alternatives to genfromtxt and loadtxt? In-Reply-To: References: <1305370437.2348.16.camel@newpride> Message-ID: <1305377121.2348.22.camel@newpride> Hi! On Sat, 2011-05-14 at 12:25 +0000, Giorgos Tzampanakis wrote: > Thanks for the suggestion! It wasn't quite as fast as Octave, in fact it > was about 6 times slower, but I think it'll do for an initial load. Then I > can save to numpy's native binary format. That's also what I do for >1 Gb matrices: just save them in the native NumPy format for later use and then the load times become negligible, especially from /dev/shm mounts ;-) > The question now is, why aren't genfromtxt and loadtxt using this approach > if it is faster than what they're doing? I think it all comes down to post-processing and heuristics. It seems that these functions do quite a lot of extra work to make sure that the data is loaded correctly, the precision isn't lost etc. I even suppose that there is a way to speed them up by specifying formats in function calls, but I've never really got time to figure it out and went for my extra simple reader instead, since I had to perform some weird pre-processing on the data row by row anyway. -- Sincerely yours, Yury V. Zaytsev From yury at shurup.com Sat May 14 08:46:59 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Sat, 14 May 2011 14:46:59 +0200 Subject: [SciPy-User] ANN: NumPy 1.6.0 - FAIL: test_expon (test_morestats.TestAnderson) In-Reply-To: References: <1305369846.2348.8.camel@newpride> Message-ID: <1305377219.2348.23.camel@newpride> Hi Josef! On Sat, 2011-05-14 at 08:20 -0400, josef.pktd at gmail.com wrote: > That's just an old, not well written test that needs to be updated. > assert_array_less got a bit stricter and now also verifies that the > shapes of the arrays match. > > Nothing to worry about, but it needs to be cleaned up in scipy. Many thanks for the explanation! I can now safely proceed to update the builds that I use on the cluster. 
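To make the caching idea from the loadtxt thread above concrete (parse the text file once, then reuse NumPy's native binary format on later runs), here is a minimal sketch; the file names and the whitespace-separated layout are assumptions, not taken from the thread:

import os
import numpy as np

txtfile, npyfile = 'data.txt', 'data.npy'   # hypothetical names

if os.path.exists(npyfile):
    data = np.load(npyfile)                 # fast native load on later runs
else:
    rows = []
    with open(txtfile) as fp:
        for line in fp:
            rows.append([float(x) for x in line.split()])
    data = np.array(rows)
    np.save(npyfile, data)                  # cache the parsed array

print data.shape

If the text files can change, deleting the .npy cache (or comparing modification times) keeps the cached copy in sync.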
-- Sincerely yours, Yury V. Zaytsev From josef.pktd at gmail.com Sat May 14 16:02:04 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 16:02:04 -0400 Subject: [SciPy-User] orthogonal polynomials ? Message-ID: Suppose I have an polynomial basis on a bounded domain [0,1] , the polynomials in scipy are orthogonal with respect to a weighting function, for example Chebychev. What I would like: First component is constant second component is linear trend all other components are orthogonal to all previous ones with respect to uniform weights. Is there a ready way how to do this? (Or it's easy and I can figure it out myself?) Or does what I would like not make any sense? Josef From vanforeest at gmail.com Sat May 14 16:06:49 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Sat, 14 May 2011 22:06:49 +0200 Subject: [SciPy-User] scipy.stats pmf evaluation Message-ID: Hi, I wanted to compute a probability mass function on a range and a grid at the same time, but this fails. Here is an example. In [1]: from scipy.stats import poisson In [2]: import numpy as np In [3]: print poisson.pmf(1, 1) 0.367879441171 In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) In [5]: print poisson.pmf(1, grid) [ 2.22044605e-16 9.04837418e-02 1.63746151e-01 2.22245466e-01 2.68128018e-01 3.03265330e-01 3.29286982e-01 3.47609713e-01 3.59463171e-01 3.65912694e-01 3.67879441e-01] In [6]: print poisson.pmf(range(2), 1) [ 0.36787944 0.36787944] +++ Up to now everything works as expected. But this fails: +++ In [7]: print poisson.pmf(range(2), grid) ValueError: shape mismatch: objects cannot be broadcast to a single shape +++ Why is the call to poisson.pmf(range(2), grid) wrong, while it works on either a range or a grid? Does anybody perhaps know the right way to compute poisson.pmf(range(2), grid)" without using a for loop? thanks Nicky From josef.pktd at gmail.com Sat May 14 16:10:29 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 16:10:29 -0400 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: > Hi, > > I wanted to compute a probability mass function on a range and a grid > at the same time, but this fails. Here is an example. > > In [1]: from scipy.stats import poisson > > In [2]: import numpy as np > > In [3]: print poisson.pmf(1, 1) > 0.367879441171 > > In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) > > In [5]: print poisson.pmf(1, grid) > [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 > ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 > ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] > > In [6]: print poisson.pmf(range(2), 1) > [ 0.36787944 ?0.36787944] > > > +++ > > Up to now everything works as expected. But this fails: > > +++ > > In [7]: print poisson.pmf(range(2), grid) > > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > +++ > > Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works > on either a range or a grid? > > Does anybody perhaps know the right way to compute > poisson.pmf(range(2), grid)" without using a for loop? You are not broadcasting, (range(2), grid) need to broadcast against each other. If it doesn't work then, then it's a bug. 
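As a quick way to see the shape problem (an aside, not from the thread; the worked answer with a new axis follows later in this thread), numpy.broadcast can be used to check whether the two argument arrays line up before calling pmf:

import numpy as np

grid = np.arange(np.finfo(float).eps, 1.1, 0.1)   # 11 points, as above
k = np.arange(2)

try:
    np.broadcast(k, grid)                   # shapes (2,) and (11,) do not broadcast
except ValueError as err:
    print err

print np.broadcast(k[:, None], grid).shape  # (2, 11): every k against every grid point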
Josef > > thanks > > Nicky > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From vanforeest at gmail.com Sat May 14 16:11:13 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Sat, 14 May 2011 22:11:13 +0200 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: Hi, Might this be what you want: The first eleven probabilists' Hermite polynomials are: ... My chromium browser does not seem to paste pngs. Anyway, check http://en.wikipedia.org/wiki/Hermite_polynomials and you'll see that the first polynomial is 1, the second x, and so forth. From my courses on quantum mechanics I recall that these polynomials are, with respect to some weight function, orthogonal. Nicky On 14 May 2011 22:02, wrote: > Suppose I have an polynomial basis on a bounded domain [0,1] , the > polynomials in scipy are orthogonal with respect to a weighting > function, for example Chebychev. > > What I would like: > First component is constant > second component is linear trend > all other components are orthogonal to all previous ones with respect > to uniform weights. > > Is there a ready way how to do this? (Or it's easy and I can figure it > out myself?) > Or does what I would like not make any sense? > > Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cjordan1 at uw.edu Sat May 14 16:25:08 2011 From: cjordan1 at uw.edu (Christopher Jordan-Squire) Date: Sat, 14 May 2011 13:25:08 -0700 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: I think what you're looking for are the Legendre polynomials. They're orthogonal on [-1,1] with respect to the uniform weights, while Hermite polynomials are orthogonal with respect to a gaussian weight. Be careful, though. The legendre polynomials in scipy.special are orthogonal but they aren't normalized. -Chris On Sat, May 14, 2011 at 1:11 PM, nicky van foreest wrote: > Hi, > > Might this be what you want: > > The first eleven probabilists' Hermite polynomials are: > > ... > > My chromium browser does not seem to paste pngs. Anyway, check > > > http://en.wikipedia.org/wiki/Hermite_polynomials > > and you'll see that the first polynomial is 1, the second x, and so > forth. From my courses on quantum mechanics I recall that these > polynomials are, with respect to some weight function, orthogonal. > > Nicky > > > > On 14 May 2011 22:02, wrote: > > Suppose I have an polynomial basis on a bounded domain [0,1] , the > > polynomials in scipy are orthogonal with respect to a weighting > > function, for example Chebychev. > > > > What I would like: > > First component is constant > > second component is linear trend > > all other components are orthogonal to all previous ones with respect > > to uniform weights. > > > > Is there a ready way how to do this? (Or it's easy and I can figure it > > out myself?) > > Or does what I would like not make any sense? > > > > Josef > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Sat May 14 16:26:02 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 16:26:02 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 4:11 PM, nicky van foreest wrote: > Hi, > > Might this be what you want: > > The first eleven probabilists' Hermite polynomials are: > > ... > > My chromium browser does not seem to paste pngs. Anyway, check > > > http://en.wikipedia.org/wiki/Hermite_polynomials > > and you'll see that the first polynomial is 1, the second x, and so > forth. From my courses on quantum mechanics I recall that these > polynomials are, with respect to some weight function, orthogonal. Thanks, I haven't looked at that yet, we should add wikipedia to the scipy.special docs. However, I would like to change the last part "with respect to some weight function" http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality Instead of Gaussian weights I would like uniform weights on bounded support. And I have never seen anything about changing the weight function for the orthogonal basis of these kind of polynomials. Josef > > Nicky > > > > On 14 May 2011 22:02, ? wrote: >> Suppose I have an polynomial basis on a bounded domain [0,1] , the >> polynomials in scipy are orthogonal with respect to a weighting >> function, for example Chebychev. >> >> What I would like: >> First component is constant >> second component is linear trend >> all other components are orthogonal to all previous ones with respect >> to uniform weights. >> >> Is there a ready way how to do this? (Or it's easy and I can figure it >> out myself?) >> Or does what I would like not make any sense? >> >> Josef >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From vanforeest at gmail.com Sat May 14 16:27:07 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Sat, 14 May 2011 22:27:07 +0200 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: Ah, I missed your extra requirement that the weight function should be uniform.... and on the interval [0,1]. I suspect you searched on google, The Legendre have uniform weight, but live on [-1,1]. On 14 May 2011 22:11, nicky van foreest wrote: > Hi, > > Might this be what you want: > > The first eleven probabilists' Hermite polynomials are: > > ... > > My chromium browser does not seem to paste pngs. Anyway, check > > > http://en.wikipedia.org/wiki/Hermite_polynomials > > and you'll see that the first polynomial is 1, the second x, and so > forth. From my courses on quantum mechanics I recall that these > polynomials are, with respect to some weight function, orthogonal. > > Nicky > > > > On 14 May 2011 22:02, ? wrote: >> Suppose I have an polynomial basis on a bounded domain [0,1] , the >> polynomials in scipy are orthogonal with respect to a weighting >> function, for example Chebychev. >> >> What I would like: >> First component is constant >> second component is linear trend >> all other components are orthogonal to all previous ones with respect >> to uniform weights. >> >> Is there a ready way how to do this? (Or it's easy and I can figure it >> out myself?) >> Or does what I would like not make any sense? 
>> >> Josef >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From josef.pktd at gmail.com Sat May 14 16:32:43 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 16:32:43 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 4:25 PM, Christopher Jordan-Squire wrote: > I think what you're looking for are the Legendre polynomials. They're > orthogonal on [-1,1] with respect to the uniform weights, while Hermite > polynomials are orthogonal with respect to a gaussian weight. > Be careful, though. The legendre polynomials in scipy.special are orthogonal > but they aren't normalized. > -Chris Thanks, I missed that. To continue with Nicky's link: http://en.wikipedia.org/wiki/Legendre_polynomials including graphs http://en.wikipedia.org/wiki/Legendre_polynomials#The_orthogonality_property normalization might not matter for what I'm planning to do, but I will check. Josef > > > On Sat, May 14, 2011 at 1:11 PM, nicky van foreest > wrote: >> >> Hi, >> >> Might this be what you want: >> >> The first eleven probabilists' Hermite polynomials are: >> >> ... >> >> My chromium browser does not seem to paste pngs. Anyway, check >> >> >> http://en.wikipedia.org/wiki/Hermite_polynomials >> >> and you'll see that the first polynomial is 1, the second x, and so >> forth. From my courses on quantum mechanics I recall that these >> polynomials are, with respect to some weight function, orthogonal. >> >> Nicky >> >> >> >> On 14 May 2011 22:02, ? wrote: >> > Suppose I have an polynomial basis on a bounded domain [0,1] , the >> > polynomials in scipy are orthogonal with respect to a weighting >> > function, for example Chebychev. >> > >> > What I would like: >> > First component is constant >> > second component is linear trend >> > all other components are orthogonal to all previous ones with respect >> > to uniform weights. >> > >> > Is there a ready way how to do this? (Or it's easy and I can figure it >> > out myself?) >> > Or does what I would like not make any sense? >> > >> > Josef >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From lou_boog2000 at yahoo.com Sat May 14 17:29:20 2011 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sat, 14 May 2011 14:29:20 -0700 (PDT) Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: <591828.56094.qm@web34406.mail.mud.yahoo.com> The [0,1] requirement still allows use of the Legendre polynomials. Just use L_n(2y-1) for y in [0,1]. The weight stays uniform but changes by a factor of 2 (that can be absorbed in the normalization coefficient if you want. -- Lou Pecora, my views are my own. ----- Original Message ---- From: nicky van foreest To: SciPy Users List Sent: Sat, May 14, 2011 3:27:07 PM Subject: Re: [SciPy-User] orthogonal polynomials ? Ah, I missed your extra requirement that the weight function should be uniform.... and on the interval [0,1]. 
I suspect you searched on google, The Legendre have uniform weight, but live on [-1,1]. On 14 May 2011 22:11, nicky van foreest wrote: > Hi, > > Might this be what you want: > > The first eleven probabilists' Hermite polynomials are: > > ... > > My chromium browser does not seem to paste pngs. Anyway, check > > > http://en.wikipedia.org/wiki/Hermite_polynomials > > and you'll see that the first polynomial is 1, the second x, and so > forth. From my courses on quantum mechanics I recall that these > polynomials are, with respect to some weight function, orthogonal. > > Nicky > > > > On 14 May 2011 22:02, wrote: >> Suppose I have an polynomial basis on a bounded domain [0,1] , the >> polynomials in scipy are orthogonal with respect to a weighting >> function, for example Chebychev. >> >> What I would like: >> First component is constant >> second component is linear trend >> all other components are orthogonal to all previous ones with respect >> to uniform weights. >> >> Is there a ready way how to do this? (Or it's easy and I can figure it >> out myself?) >> Or does what I would like not make any sense? >> >> Josef >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From vanforeest at gmail.com Sat May 14 17:35:13 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Sat, 14 May 2011 23:35:13 +0200 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: On 14 May 2011 22:10, wrote: > On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: >> Hi, >> >> I wanted to compute a probability mass function on a range and a grid >> at the same time, but this fails. Here is an example. >> >> In [1]: from scipy.stats import poisson >> >> In [2]: import numpy as np >> >> In [3]: print poisson.pmf(1, 1) >> 0.367879441171 >> >> In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) >> >> In [5]: print poisson.pmf(1, grid) >> [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 >> ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >> ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] >> >> In [6]: print poisson.pmf(range(2), 1) >> [ 0.36787944 ?0.36787944] >> >> >> +++ >> >> Up to now everything works as expected. But this fails: >> >> +++ >> >> In [7]: print poisson.pmf(range(2), grid) >> >> ValueError: shape mismatch: objects cannot be broadcast to a single shape >> >> +++ >> >> Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works >> on either a range or a grid? >> >> Does anybody perhaps know the right way to compute >> poisson.pmf(range(2), grid)" without using a for loop? > > You are not broadcasting, (range(2), grid) need to broadcast against > each other. If it doesn't work then, then it's a bug. Thanks Josef. But how do I do this? The range will, usually, not contain the same number of elements as the grid. What I would like to compute is something like this: for j in range(3): for x in grid: poisson.pmf(j, x) By the above example I can use two types of shortcuts:: for j in range(3): poisson.pmf(j, grid) or for x in grid: poisson.pmf(range(3), x) but the pmf function does not support broadcasting on both directions at the same time, or (more probable) it can be done, but I make a mistake somewhere. 
Nicky > > Josef > >> >> thanks >> >> Nicky >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Sat May 14 18:03:01 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 18:03:01 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: <591828.56094.qm@web34406.mail.mud.yahoo.com> References: <591828.56094.qm@web34406.mail.mud.yahoo.com> Message-ID: On Sat, May 14, 2011 at 5:29 PM, Lou Pecora wrote: > The [0,1] requirement still allows use of the Legendre polynomials. ?Just use > L_n(2y-1) for y in [0,1]. ?The weight stays uniform but changes by a factor of 2 > (that can be absorbed in the normalization coefficient if you want. I was cheating and looked at the numpy.polynomial code, which has some nice helper functions. Before that, I thought polynomials are defined on the real line. my domain will actually be more something like [0, 100] or anything (an argument of the function), transformed to whatever the polynomials use. Thanks, Josef > ?-- Lou Pecora, ? my views are my own. > > > > ----- Original Message ---- > From: nicky van foreest > To: SciPy Users List > Sent: Sat, May 14, 2011 3:27:07 PM > Subject: Re: [SciPy-User] orthogonal polynomials ? > > Ah, I missed your extra requirement that the weight function should be > uniform.... and on the interval [0,1]. I suspect you searched on > google, The Legendre have uniform weight, but live on [-1,1]. > > > On 14 May 2011 22:11, nicky van foreest wrote: >> Hi, >> >> Might this be what you want: >> >> The first eleven probabilists' Hermite polynomials are: >> >> ... >> >> My chromium browser does not seem to paste pngs. Anyway, check >> >> >> http://en.wikipedia.org/wiki/Hermite_polynomials >> >> and you'll see that the first polynomial is 1, the second x, and so >> forth. From my courses on quantum mechanics I recall that these >> polynomials are, with respect to some weight function, orthogonal. >> >> Nicky >> >> >> >> On 14 May 2011 22:02, ? wrote: >>> Suppose I have an polynomial basis on a bounded domain [0,1] , the >>> polynomials in scipy are orthogonal with respect to a weighting >>> function, for example Chebychev. >>> >>> What I would like: >>> First component is constant >>> second component is linear trend >>> all other components are orthogonal to all previous ones with respect >>> to uniform weights. >>> >>> Is there a ready way how to do this? (Or it's easy and I can figure it >>> out myself?) >>> Or does what I would like not make any sense? >>> >>> Josef >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From charlesr.harris at gmail.com Sat May 14 18:04:51 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 14 May 2011 16:04:51 -0600 Subject: [SciPy-User] orthogonal polynomials ? 
In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 2:26 PM, wrote: > On Sat, May 14, 2011 at 4:11 PM, nicky van foreest > wrote: > > Hi, > > > > Might this be what you want: > > > > The first eleven probabilists' Hermite polynomials are: > > > > ... > > > > My chromium browser does not seem to paste pngs. Anyway, check > > > > > > http://en.wikipedia.org/wiki/Hermite_polynomials > > > > and you'll see that the first polynomial is 1, the second x, and so > > forth. From my courses on quantum mechanics I recall that these > > polynomials are, with respect to some weight function, orthogonal. > > Thanks, I haven't looked at that yet, we should add wikipedia to the > scipy.special docs. > > However, I would like to change the last part "with respect to some > weight function" > http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality > > Instead of Gaussian weights I would like uniform weights on bounded > support. And I have never seen anything about changing the weight > function for the orthogonal basis of these kind of polynomials. > > In numpy 1.6, you can use the Legendre polynomials. They are orthogonal on [-1,1] as has been mentioned, but can be mapped to other domains. For example In [1]: from numpy.polynomial import Legendre as L In [2]: for i in range(5): plot(*L([0]*i + [1], domain=[0,1]).linspace()) ...: produces the attached plots. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: legendre.png Type: image/png Size: 59195 bytes Desc: not available URL: From josef.pktd at gmail.com Sat May 14 18:10:42 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 May 2011 18:10:42 -0400 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 5:35 PM, nicky van foreest wrote: > On 14 May 2011 22:10, ? wrote: >> On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: >>> Hi, >>> >>> I wanted to compute a probability mass function on a range and a grid >>> at the same time, but this fails. Here is an example. >>> >>> In [1]: from scipy.stats import poisson >>> >>> In [2]: import numpy as np >>> >>> In [3]: print poisson.pmf(1, 1) >>> 0.367879441171 >>> >>> In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) >>> >>> In [5]: print poisson.pmf(1, grid) >>> [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 >>> ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>> ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] >>> >>> In [6]: print poisson.pmf(range(2), 1) >>> [ 0.36787944 ?0.36787944] >>> >>> >>> +++ >>> >>> Up to now everything works as expected. But this fails: >>> >>> +++ >>> >>> In [7]: print poisson.pmf(range(2), grid) >>> >>> ValueError: shape mismatch: objects cannot be broadcast to a single shape >>> >>> +++ >>> >>> Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works >>> on either a range or a grid? >>> >>> Does anybody perhaps know the right way to compute >>> poisson.pmf(range(2), grid)" without using a for loop? >> >> You are not broadcasting, (range(2), grid) need to broadcast against >> each other. If it doesn't work then, then it's a bug. > > Thanks Josef. But how do I do this? The range will, usually, not > contain the same number of elements as the grid. What I would like to > compute is something like this: > > for j in range(3): > ? for x in grid: > ? ? ? 
poisson.pmf(j, x) > > By the above example I can use two types of shortcuts:: > > for j in range(3): > ? poisson.pmf(j, grid) > > or > > for x in grid: > ? poisson.pmf(range(3), x) > > > but the pmf function does not support broadcasting on both directions > at the same time, or (more probable) it can be done, but I make a > mistake somewhere. add a newaxis to one of the two >>> from scipy import stats >>> grid = np.arange(np.finfo(float).eps,1.1,0.1) >>> print stats.poisson.pmf(np.arange(2)[:,None], grid) [[ 1.00000000e+00 9.04837418e-01 8.18730753e-01 7.40818221e-01 6.70320046e-01 6.06530660e-01 5.48811636e-01 4.96585304e-01 4.49328964e-01 4.06569660e-01 3.67879441e-01] [ 2.22044605e-16 9.04837418e-02 1.63746151e-01 2.22245466e-01 2.68128018e-01 3.03265330e-01 3.29286982e-01 3.47609713e-01 3.59463171e-01 3.65912694e-01 3.67879441e-01]] >>> print stats.poisson.pmf(np.arange(2), grid[:,None]) [[ 1.00000000e+00 2.22044605e-16] [ 9.04837418e-01 9.04837418e-02] [ 8.18730753e-01 1.63746151e-01] [ 7.40818221e-01 2.22245466e-01] [ 6.70320046e-01 2.68128018e-01] [ 6.06530660e-01 3.03265330e-01] [ 5.48811636e-01 3.29286982e-01] [ 4.96585304e-01 3.47609713e-01] [ 4.49328964e-01 3.59463171e-01] [ 4.06569660e-01 3.65912694e-01] [ 3.67879441e-01 3.67879441e-01]] 3-dim >>> print stats.poisson.pmf(np.arange(6).reshape((1,2,3)), grid[:,None,None]) There is a known bug, when the support depends on one of the parameters of the distribution, but it should work for most cases. Josef > > Nicky >> >> Josef >> >>> >>> thanks >>> >>> Nicky >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From doug.redden at gmail.com Sat May 14 13:07:05 2011 From: doug.redden at gmail.com (dred) Date: Sat, 14 May 2011 10:07:05 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Spyderlib: how to end a script without getting a Traceback? In-Reply-To: References: Message-ID: <31618802.post@talk.nabble.com> Daniel, I can't answer your question but the best place to ask is - http://groups.google.com/group/spyderlib. Doug http://groups.google.com/group/spyderlib Daniel Mader-2 wrote: > > Hi, > > sometimes I need to just end a script at a certain position (for debug > purposes, mostly). Before starting to use Spyder, I have use > sys.exit() for that purpose: > > import sys > print 11111 > sys.exit() > print 22222 > > However, in the IPython shell integrated in Spyder, this results in a > traceback, which is useless in this context. > > How can I get a quiet system exit here? > > Thanks in advance, > Daniel > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Spyderlib%3A-how-to-end-a-script-without-getting-a-Traceback--tp31605594p31618802.html Sent from the Scipy-User mailing list archive at Nabble.com. 
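On the Spyder question above about ending a script quietly: one possible workaround, offered here only as a sketch and not taken from the list (whether Spyder's embedded IPython console stays silent with it is untested here), is to raise a dedicated exception and catch it at the top level, so no traceback is ever printed:

class StopScript(Exception):
    """Raised only to cut a debugging run short."""
    pass

try:
    print 11111
    raise StopScript()      # stop quietly here instead of sys.exit()
    print 22222             # never reached
except StopScript:
    pass

The same effect can be had by putting the debugging section in a function and simply returning early.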
From gideon.simpson at gmail.com Sat May 14 20:17:03 2011 From: gideon.simpson at gmail.com (Gideon) Date: Sat, 14 May 2011 17:17:03 -0700 (PDT) Subject: [SciPy-User] bus error in Message-ID: <32872186.4838.1305418623757.JavaMail.geo-discussion-forums@yqbw30> I recently installed scipy 0.90 using numpy-1.6.0-py2.7-python.org-macosx10.6.dmg When I try to run the test suite, the following happens: Python 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13) [GCC 4.0.1 (Apple Inc. build 5494)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 1.6.0 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy SciPy version 0.9.0 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy Python version 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13) [GCC 4.0.1 (Apple Inc. build 5494)] nose version 1.0.0 Bus error -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun May 15 04:30:32 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 15 May 2011 10:30:32 +0200 Subject: [SciPy-User] bus error in In-Reply-To: <32872186.4838.1305418623757.JavaMail.geo-discussion-forums@yqbw30> References: <32872186.4838.1305418623757.JavaMail.geo-discussion-forums@yqbw30> Message-ID: On Sun, May 15, 2011 at 2:17 AM, Gideon wrote: > I recently installed scipy 0.90 > using numpy-1.6.0-py2.7-python.org-macosx10.6.dmg > > When I try to run the test suite, the following happens: > > Python 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13) > [GCC 4.0.1 (Apple Inc. build 5494)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> scipy.test() > Running unit tests for scipy > NumPy version 1.6.0 > NumPy is installed in > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy > SciPy version 0.9.0 > SciPy is installed in > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy > Python version 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13) [GCC 4.0.1 (Apple > Inc. build 5494)] > nose version 1.0.0 > Bus error > > Exactly what Python are you using? The numpy/scipy installers are built against the corresponding python from python.org with GCC 4.2, your python was built with GCC 4.0. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun May 15 12:39:56 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 15 May 2011 18:39:56 +0200 Subject: [SciPy-User] [Numpy-discussion] ANN: NumPy 1.6.0 In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 3:09 PM, Charles R Harris wrote: > > > On Sat, May 14, 2011 at 3:54 AM, Ralf Gommers > wrote: > >> Hi, >> >> I am pleased to announce the release of NumPy 1.6.0. This release is the >> result of 9 months of work, and includes many new features, performance >> improvements and bug fixes. Some highlights are: >> >> - Re-introduction of datetime dtype support to deal with dates in >> arrays. >> - A new 16-bit floating point type. >> - A new iterator, which improves performance of many functions. >> >> > The link is http://sourceforge.net/projects/numpy/files/NumPy/1.6.0/ > > The OS X binaries are also up now. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vanforeest at gmail.com Sun May 15 14:00:36 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Sun, 15 May 2011 20:00:36 +0200 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: Hi Josef, Thanks. On 15 May 2011 00:10, wrote: > On Sat, May 14, 2011 at 5:35 PM, nicky van foreest wrote: >> On 14 May 2011 22:10, ? wrote: >>> On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: >>>> Hi, >>>> >>>> I wanted to compute a probability mass function on a range and a grid >>>> at the same time, but this fails. Here is an example. >>>> >>>> In [1]: from scipy.stats import poisson >>>> >>>> In [2]: import numpy as np >>>> >>>> In [3]: print poisson.pmf(1, 1) >>>> 0.367879441171 >>>> >>>> In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) >>>> >>>> In [5]: print poisson.pmf(1, grid) >>>> [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 >>>> ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>>> ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] >>>> >>>> In [6]: print poisson.pmf(range(2), 1) >>>> [ 0.36787944 ?0.36787944] >>>> >>>> >>>> +++ >>>> >>>> Up to now everything works as expected. But this fails: >>>> >>>> +++ >>>> >>>> In [7]: print poisson.pmf(range(2), grid) >>>> >>>> ValueError: shape mismatch: objects cannot be broadcast to a single shape >>>> >>>> +++ >>>> >>>> Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works >>>> on either a range or a grid? >>>> >>>> Does anybody perhaps know the right way to compute >>>> poisson.pmf(range(2), grid)" without using a for loop? >>> >>> You are not broadcasting, (range(2), grid) need to broadcast against >>> each other. If it doesn't work then, then it's a bug. >> >> Thanks Josef. But how do I do this? The range will, usually, not >> contain the same number of elements as the grid. What I would like to >> compute is something like this: >> >> for j in range(3): >> ? for x in grid: >> ? ? ? poisson.pmf(j, x) >> >> By the above example I can use two types of shortcuts:: >> >> for j in range(3): >> ? poisson.pmf(j, grid) >> >> or >> >> for x in grid: >> ? poisson.pmf(range(3), x) >> >> >> but the pmf function does not support broadcasting on both directions >> at the same time, or (more probable) it can be done, but I make a >> mistake somewhere. > > add a newaxis to one of the two > >>>> from scipy import stats >>>> grid = np.arange(np.finfo(float).eps,1.1,0.1) > >>>> print stats.poisson.pmf(np.arange(2)[:,None], grid) > [[ ?1.00000000e+00 ? 9.04837418e-01 ? 8.18730753e-01 ? 7.40818221e-01 > ? ?6.70320046e-01 ? 6.06530660e-01 ? 5.48811636e-01 ? 4.96585304e-01 > ? ?4.49328964e-01 ? 4.06569660e-01 ? 3.67879441e-01] > ?[ ?2.22044605e-16 ? 9.04837418e-02 ? 1.63746151e-01 ? 2.22245466e-01 > ? ?2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 > ? ?3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01]] > >>>> print stats.poisson.pmf(np.arange(2), grid[:,None]) > [[ ?1.00000000e+00 ? 2.22044605e-16] > ?[ ?9.04837418e-01 ? 9.04837418e-02] > ?[ ?8.18730753e-01 ? 1.63746151e-01] > ?[ ?7.40818221e-01 ? 2.22245466e-01] > ?[ ?6.70320046e-01 ? 2.68128018e-01] > ?[ ?6.06530660e-01 ? 3.03265330e-01] > ?[ ?5.48811636e-01 ? 3.29286982e-01] > ?[ ?4.96585304e-01 ? 3.47609713e-01] > ?[ ?4.49328964e-01 ? 3.59463171e-01] > ?[ ?4.06569660e-01 ? 3.65912694e-01] > ?[ ?3.67879441e-01 ? 
3.67879441e-01]] > > 3-dim > >>>> print stats.poisson.pmf(np.arange(6).reshape((1,2,3)), grid[:,None,None]) > > > There is a known bug, when the support depends on one of the > parameters of the distribution, but it should work for most cases. > > Josef > > >> >> Nicky >>> >>> Josef >>> >>>> >>>> thanks >>>> >>>> Nicky >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From klonuo at gmail.com Sun May 15 15:25:37 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Sun, 15 May 2011 21:25:37 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status Message-ID: <20110515212533.0AA2.B1C76292@gmail.com> I installed Cygwin on XP with compilers, make etc and then build LAPACK, after which I build ATLAS. Got numpy: ======================================================================= svn co http://svn.scipy.org/svn/numpy/trunk numpy ----------------------------------------------------------------------- Edited site.cfg: ======================================================================= [DEFAULT] # this where my atlas and lapack libs are: library_dirs = /usr/local/lib include_dirs = /usr/local/include [blas_opt] libraries = f77blas, cblas, atlas # [lapack_opt] libraries = lapack, f77blas, cblas, atlas ----------------------------------------------------------------------- Run: ======================================================================= python setup.py build ----------------------------------------------------------------------- Got this error: ======================================================================= collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.cygwin-1.7.9-i686-2.6/numpy/linalg/lapack_litemodule.o build/temp.cygwin-1.7.9-i686-2.6/numpy/linalg/python_xerbla.o -L/usr/local/lib -L/usr/lib/gcc/i686-pc-cygwin/3.4.4 -L/usr/lib/python2.6/config -Lbuild/temp.cygwin-1.7.9-i686-2.6 -llapack -lf77blas -lcblas -latlas -lpython2.6 -lg2c -o build/lib.cygwin-1.7.9-i686-2.6/numpy/linalg/lapack_lite.dll" failed with exit status 1 ----------------------------------------------------------------------- Am I missing some package? From gideon.simpson at gmail.com Sun May 15 19:09:30 2011 From: gideon.simpson at gmail.com (Gideon) Date: Sun, 15 May 2011 16:09:30 -0700 (PDT) Subject: [SciPy-User] bus error in In-Reply-To: Message-ID: <26865685.2673.1305500970744.JavaMail.geo-discussion-forums@yqhc1> I thought I was using python 2.7.1 from python.org. If I do which python I get: /Library/Frameworks/Python.framework/Versions/2.7/bin/python -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Mon May 16 01:41:50 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 16 May 2011 07:41:50 +0200 Subject: [SciPy-User] bus error in In-Reply-To: <26865685.2673.1305500970744.JavaMail.geo-discussion-forums@yqhc1> References: <26865685.2673.1305500970744.JavaMail.geo-discussion-forums@yqhc1> Message-ID: On Mon, May 16, 2011 at 1:09 AM, Gideon wrote: > I thought I was using python 2.7.1 from python.org. If I do > which python > > I get: > /Library/Frameworks/Python.framework/Versions/2.7/bin/python > > The wrong one then; there are python.org and numpy installers ending with -macosx10.6.dmg and -macosx10.3.dmg, you have to use the corresponding ones. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanforeest at gmail.com Mon May 16 07:02:56 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Mon, 16 May 2011 13:02:56 +0200 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: Hi Josef, This works indeed. But I must admit that I don't understand why. Can you give a hint where in the docs I might find an explanation? thanks Nicky On 15 May 2011 20:00, nicky van foreest wrote: > Hi Josef, > > Thanks. > > On 15 May 2011 00:10, ? wrote: >> On Sat, May 14, 2011 at 5:35 PM, nicky van foreest wrote: >>> On 14 May 2011 22:10, ? wrote: >>>> On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: >>>>> Hi, >>>>> >>>>> I wanted to compute a probability mass function on a range and a grid >>>>> at the same time, but this fails. Here is an example. >>>>> >>>>> In [1]: from scipy.stats import poisson >>>>> >>>>> In [2]: import numpy as np >>>>> >>>>> In [3]: print poisson.pmf(1, 1) >>>>> 0.367879441171 >>>>> >>>>> In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) >>>>> >>>>> In [5]: print poisson.pmf(1, grid) >>>>> [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 >>>>> ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>>>> ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] >>>>> >>>>> In [6]: print poisson.pmf(range(2), 1) >>>>> [ 0.36787944 ?0.36787944] >>>>> >>>>> >>>>> +++ >>>>> >>>>> Up to now everything works as expected. But this fails: >>>>> >>>>> +++ >>>>> >>>>> In [7]: print poisson.pmf(range(2), grid) >>>>> >>>>> ValueError: shape mismatch: objects cannot be broadcast to a single shape >>>>> >>>>> +++ >>>>> >>>>> Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works >>>>> on either a range or a grid? >>>>> >>>>> Does anybody perhaps know the right way to compute >>>>> poisson.pmf(range(2), grid)" without using a for loop? >>>> >>>> You are not broadcasting, (range(2), grid) need to broadcast against >>>> each other. If it doesn't work then, then it's a bug. >>> >>> Thanks Josef. But how do I do this? The range will, usually, not >>> contain the same number of elements as the grid. What I would like to >>> compute is something like this: >>> >>> for j in range(3): >>> ? for x in grid: >>> ? ? ? poisson.pmf(j, x) >>> >>> By the above example I can use two types of shortcuts:: >>> >>> for j in range(3): >>> ? poisson.pmf(j, grid) >>> >>> or >>> >>> for x in grid: >>> ? poisson.pmf(range(3), x) >>> >>> >>> but the pmf function does not support broadcasting on both directions >>> at the same time, or (more probable) it can be done, but I make a >>> mistake somewhere. 
>> >> add a newaxis to one of the two >> >>>>> from scipy import stats >>>>> grid = np.arange(np.finfo(float).eps,1.1,0.1) >> >>>>> print stats.poisson.pmf(np.arange(2)[:,None], grid) >> [[ ?1.00000000e+00 ? 9.04837418e-01 ? 8.18730753e-01 ? 7.40818221e-01 >> ? ?6.70320046e-01 ? 6.06530660e-01 ? 5.48811636e-01 ? 4.96585304e-01 >> ? ?4.49328964e-01 ? 4.06569660e-01 ? 3.67879441e-01] >> ?[ ?2.22044605e-16 ? 9.04837418e-02 ? 1.63746151e-01 ? 2.22245466e-01 >> ? ?2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >> ? ?3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01]] >> >>>>> print stats.poisson.pmf(np.arange(2), grid[:,None]) >> [[ ?1.00000000e+00 ? 2.22044605e-16] >> ?[ ?9.04837418e-01 ? 9.04837418e-02] >> ?[ ?8.18730753e-01 ? 1.63746151e-01] >> ?[ ?7.40818221e-01 ? 2.22245466e-01] >> ?[ ?6.70320046e-01 ? 2.68128018e-01] >> ?[ ?6.06530660e-01 ? 3.03265330e-01] >> ?[ ?5.48811636e-01 ? 3.29286982e-01] >> ?[ ?4.96585304e-01 ? 3.47609713e-01] >> ?[ ?4.49328964e-01 ? 3.59463171e-01] >> ?[ ?4.06569660e-01 ? 3.65912694e-01] >> ?[ ?3.67879441e-01 ? 3.67879441e-01]] >> >> 3-dim >> >>>>> print stats.poisson.pmf(np.arange(6).reshape((1,2,3)), grid[:,None,None]) >> >> >> There is a known bug, when the support depends on one of the >> parameters of the distribution, but it should work for most cases. >> >> Josef >> >> >>> >>> Nicky >>>> >>>> Josef >>>> >>>>> >>>>> thanks >>>>> >>>>> Nicky >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From josef.pktd at gmail.com Mon May 16 07:22:24 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 07:22:24 -0400 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 7:02 AM, nicky van foreest wrote: > Hi Josef, > > This works indeed. But I must admit that I don't understand why. Can > you give a hint where in the docs I might find an explanation? I don't think it's in the docs anywhere, just the docs on broadcasting (almost) all the _pdf, _cdf, ... methods are elementwise operations that are fully vectorized. Some generic methods, for example integration in cdf, are vectorized through an explicit call to numpy.vectorize. This means that standard numpy broadcasting works for all arguments for the distribution methods (with a few exceptions) >>> 10 * np.arange(2)[:,None] + np.arange(3)[None, :] array([[ 0, 1, 2], [10, 11, 12]]) >>> np.add(10 * np.arange(2)[:,None], np.arange(3)[None, :]) array([[ 0, 1, 2], [10, 11, 12]]) >>> np.add(10 * np.arange(2), np.arange(3)) Traceback (most recent call last): File "", line 1, in np.add(10 * np.arange(2), np.arange(3)) ValueError: shape mismatch: objects cannot be broadcast to a single shape >>> hope that helps, Josef > > thanks > > Nicky > > On 15 May 2011 20:00, nicky van foreest wrote: >> Hi Josef, >> >> Thanks. >> >> On 15 May 2011 00:10, ? 
wrote: >>> On Sat, May 14, 2011 at 5:35 PM, nicky van foreest wrote: >>>> On 14 May 2011 22:10, ? wrote: >>>>> On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: >>>>>> Hi, >>>>>> >>>>>> I wanted to compute a probability mass function on a range and a grid >>>>>> at the same time, but this fails. Here is an example. >>>>>> >>>>>> In [1]: from scipy.stats import poisson >>>>>> >>>>>> In [2]: import numpy as np >>>>>> >>>>>> In [3]: print poisson.pmf(1, 1) >>>>>> 0.367879441171 >>>>>> >>>>>> In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) >>>>>> >>>>>> In [5]: print poisson.pmf(1, grid) >>>>>> [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 >>>>>> ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>>>>> ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] >>>>>> >>>>>> In [6]: print poisson.pmf(range(2), 1) >>>>>> [ 0.36787944 ?0.36787944] >>>>>> >>>>>> >>>>>> +++ >>>>>> >>>>>> Up to now everything works as expected. But this fails: >>>>>> >>>>>> +++ >>>>>> >>>>>> In [7]: print poisson.pmf(range(2), grid) >>>>>> >>>>>> ValueError: shape mismatch: objects cannot be broadcast to a single shape >>>>>> >>>>>> +++ >>>>>> >>>>>> Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works >>>>>> on either a range or a grid? >>>>>> >>>>>> Does anybody perhaps know the right way to compute >>>>>> poisson.pmf(range(2), grid)" without using a for loop? >>>>> >>>>> You are not broadcasting, (range(2), grid) need to broadcast against >>>>> each other. If it doesn't work then, then it's a bug. >>>> >>>> Thanks Josef. But how do I do this? The range will, usually, not >>>> contain the same number of elements as the grid. What I would like to >>>> compute is something like this: >>>> >>>> for j in range(3): >>>> ? for x in grid: >>>> ? ? ? poisson.pmf(j, x) >>>> >>>> By the above example I can use two types of shortcuts:: >>>> >>>> for j in range(3): >>>> ? poisson.pmf(j, grid) >>>> >>>> or >>>> >>>> for x in grid: >>>> ? poisson.pmf(range(3), x) >>>> >>>> >>>> but the pmf function does not support broadcasting on both directions >>>> at the same time, or (more probable) it can be done, but I make a >>>> mistake somewhere. >>> >>> add a newaxis to one of the two >>> >>>>>> from scipy import stats >>>>>> grid = np.arange(np.finfo(float).eps,1.1,0.1) >>> >>>>>> print stats.poisson.pmf(np.arange(2)[:,None], grid) >>> [[ ?1.00000000e+00 ? 9.04837418e-01 ? 8.18730753e-01 ? 7.40818221e-01 >>> ? ?6.70320046e-01 ? 6.06530660e-01 ? 5.48811636e-01 ? 4.96585304e-01 >>> ? ?4.49328964e-01 ? 4.06569660e-01 ? 3.67879441e-01] >>> ?[ ?2.22044605e-16 ? 9.04837418e-02 ? 1.63746151e-01 ? 2.22245466e-01 >>> ? ?2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>> ? ?3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01]] >>> >>>>>> print stats.poisson.pmf(np.arange(2), grid[:,None]) >>> [[ ?1.00000000e+00 ? 2.22044605e-16] >>> ?[ ?9.04837418e-01 ? 9.04837418e-02] >>> ?[ ?8.18730753e-01 ? 1.63746151e-01] >>> ?[ ?7.40818221e-01 ? 2.22245466e-01] >>> ?[ ?6.70320046e-01 ? 2.68128018e-01] >>> ?[ ?6.06530660e-01 ? 3.03265330e-01] >>> ?[ ?5.48811636e-01 ? 3.29286982e-01] >>> ?[ ?4.96585304e-01 ? 3.47609713e-01] >>> ?[ ?4.49328964e-01 ? 3.59463171e-01] >>> ?[ ?4.06569660e-01 ? 3.65912694e-01] >>> ?[ ?3.67879441e-01 ? 
3.67879441e-01]] >>> >>> 3-dim >>> >>>>>> print stats.poisson.pmf(np.arange(6).reshape((1,2,3)), grid[:,None,None]) >>> >>> >>> There is a known bug, when the support depends on one of the >>> parameters of the distribution, but it should work for most cases. >>> >>> Josef >>> >>> >>>> >>>> Nicky >>>>> >>>>> Josef >>>>> >>>>>> >>>>>> thanks >>>>>> >>>>>> Nicky >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From lev at columbia.edu Mon May 16 09:17:35 2011 From: lev at columbia.edu (Lev Givon) Date: Mon, 16 May 2011 09:17:35 -0400 Subject: [SciPy-User] [ANN] scikits.cuda 0.4 Message-ID: <20110516131735.GA18512@avicenna.ee.columbia.edu> I've released scikits.cuda 0.04. Changes since the last release include the following: * Add integrate module. * Automatically determine device used by current context. * Support batched and multidimensional FFT operations. * Extended dot() function to support implicit transpose/Hermitian. * Support for in-place computation of singular vectors in svd() function. * Various useful utility functions added to misc module. * Use pycuda-complex.hpp to improve kernel readability. * Add unit tests for high-level functions. * Simplify kernel launch setup. * More CULA routine wrappers (including CULA R11 auxiliary routines). * Bug fixes. The code can be downloaded from https://github.com/downloads/lebedov/scikits.cuda/scikits.cuda-0.04.tar.gz Documentation is available at http://lebedov.github.com/scikits.cuda/ Suggestions, criticisms, and bug reports (preferably submitted via github) are all welcome. L.G. From JRadinger at gmx.at Mon May 16 09:21:15 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Mon, 16 May 2011 15:21:15 +0200 Subject: [SciPy-User] use matplotlib to produce mathathematical expression only Message-ID: <20110516132115.25140@gmx.net> Hello, I want to produce a eps file of following mathematical expression: r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p)*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$' is it possible to somehow missuse matplotlib for that to produce only the function without any other plot things? Or is there a better python library within scipy? I don't want to install the complete latex libraries just for producing this single eps file. thank you /johannes -- NEU: FreePhone - kostenlos mobil telefonieren und surfen! 
Jetzt informieren: http://www.gmx.net/de/go/freephone From robert.kern at gmail.com Mon May 16 09:28:49 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 May 2011 08:28:49 -0500 Subject: [SciPy-User] use matplotlib to produce mathathematical expression only In-Reply-To: <20110516132115.25140@gmx.net> References: <20110516132115.25140@gmx.net> Message-ID: On Mon, May 16, 2011 at 08:21, Johannes Radinger wrote: > Hello, > > I want to produce a eps file of following mathematical expression: > r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p)*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$' > > is it possible to somehow missuse matplotlib for that to produce only the function without any other plot things? Or is there a better python library within scipy? I don't want to install the complete latex libraries just for producing this single eps file. Check out mathtex. It is matplotlib's TeX parsing engine and renderer broken out into a separate library: http://code.google.com/p/mathtex/ Also, please send matplotlib questions just to the matplotlib list. Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From JRadinger at gmx.at Mon May 16 10:23:24 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Mon, 16 May 2011 16:23:24 +0200 Subject: [SciPy-User] [Matplotlib-users] use matplotlib to produce mathathematical expression only In-Reply-To: References: <20110516132115.25140@gmx.net> Message-ID: <20110516142324.42990@gmx.net> -------- Original-Nachricht -------- > Datum: Mon, 16 May 2011 08:28:49 -0500 > Von: Robert Kern > An: SciPy Users List > CC: matplotlib-users at lists.sourceforge.net > Betreff: Re: [Matplotlib-users] [SciPy-User] use matplotlib to produce mathathematical expression only > On Mon, May 16, 2011 at 08:21, Johannes Radinger wrote: > > Hello, > > > > I want to produce a eps file of following mathematical expression: > > > r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p)*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$' > > > > is it possible to somehow missuse matplotlib for that to produce only > the function without any other plot things? Or is there a better python > library within scipy? I don't want to install the complete latex libraries just > for producing this single eps file. > > Check out mathtex. It is matplotlib's TeX parsing engine and renderer > broken out into a separate library: > > http://code.google.com/p/mathtex/ I also thought about mathtex but don't know how to use my mathematical expression without a plot of axis etc. any suggestions? I just want to have the formated math expression as eps and I don't know how to do it, still after reading in the matplotlib-manual. /johannes > > Also, please send matplotlib questions just to the matplotlib list. > Thanks. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > > ------------------------------------------------------------------------------ > Achieve unprecedented app performance and reliability > What every C/C++ and Fortran developer should know. > Learn how Intel has extended the reach of its next-generation tools > to help boost performance applications - inlcuding clusters. 
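(For the question above: matplotlib's built-in mathtext renderer can do this on an otherwise empty figure, with no TeX installation and no axes. A minimal sketch; the figure size, font size and output file name are arbitrary choices, and the expression string is the one from the original post.)

import matplotlib
matplotlib.use('Agg')                 # no GUI needed, we only write a file
import matplotlib.pyplot as plt

expr = r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p)*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$'

fig = plt.figure(figsize=(6, 1))      # empty figure: no axes, no ticks
fig.text(0.5, 0.5, expr, ha='center', va='center', fontsize=14)
fig.savefig('formula.eps')

This is not the standalone mathtex package Robert points to; it simply uses the same mathtext parser from within matplotlib itself.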
> http://p.sf.net/sfu/intel-dev2devmay > _______________________________________________ > Matplotlib-users mailing list > Matplotlib-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- NEU: FreePhone - kostenlos mobil telefonieren und surfen! Jetzt informieren: http://www.gmx.net/de/go/freephone From robert.kern at gmail.com Mon May 16 11:06:20 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 May 2011 10:06:20 -0500 Subject: [SciPy-User] [Matplotlib-users] use matplotlib to produce mathathematical expression only In-Reply-To: <20110516142324.42990@gmx.net> References: <20110516132115.25140@gmx.net> <20110516142324.42990@gmx.net> Message-ID: On Mon, May 16, 2011 at 09:23, Johannes Radinger wrote: > > -------- Original-Nachricht -------- >> Datum: Mon, 16 May 2011 08:28:49 -0500 >> Von: Robert Kern >> An: SciPy Users List >> CC: matplotlib-users at lists.sourceforge.net >> Betreff: Re: [Matplotlib-users] [SciPy-User] use matplotlib to produce ? ? ? ?mathathematical expression only > >> On Mon, May 16, 2011 at 08:21, Johannes Radinger wrote: >> > Hello, >> > >> > I want to produce a eps file of following mathematical expression: >> > >> r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p)*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$' >> > >> > is it possible to somehow missuse matplotlib for that to produce only >> the function without any other plot things? Or is there a better python >> library within scipy? I don't want to install the complete latex libraries just >> for producing this single eps file. >> >> Check out mathtex. It is matplotlib's TeX parsing engine and renderer >> broken out into a separate library: >> >> http://code.google.com/p/mathtex/ > > I also thought about mathtex but don't know how to use my mathematical expression without a plot of axis etc. any suggestions? I just want to have the formated math expression as eps and I don't know how to do it, still after reading in the matplotlib-manual. The mathtex that I link to above is a separate library, not a part of matplotlib. Please follow the link. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josef.pktd at gmail.com Mon May 16 11:11:22 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 11:11:22 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Sat, May 14, 2011 at 6:04 PM, Charles R Harris wrote: > > > On Sat, May 14, 2011 at 2:26 PM, wrote: >> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >> wrote: >> > Hi, >> > >> > Might this be what you want: >> > >> > The first eleven probabilists' Hermite polynomials are: >> > >> > ... >> > >> > My chromium browser does not seem to paste pngs. Anyway, check >> > >> > >> > http://en.wikipedia.org/wiki/Hermite_polynomials >> > >> > and you'll see that the first polynomial is 1, the second x, and so >> > forth. From my courses on quantum mechanics I recall that these >> > polynomials are, with respect to some weight function, orthogonal. >> >> Thanks, I haven't looked at that yet, we should add wikipedia to the >> scipy.special docs. 
>> >> However, I would like to change the last part "with respect to some >> weight function" >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >> >> Instead of Gaussian weights I would like uniform weights on bounded >> support. And I have never seen anything about changing the weight >> function for the orthogonal basis of these kind of polynomials. >> > > In numpy 1.6, you can use the Legendre polynomials. They are orthogonal on > [-1,1] as has been mentioned, but can be mapped to other domains. For > example > > In [1]: from numpy.polynomial import Legendre as L > > In [2]: for i in range(5): plot(*L([0]*i + [1], domain=[0,1]).linspace()) > ?? ...: > > produces the attached plots. I'm still on numpy 1.5 so this will have to wait a bit. > > > Chuck > as a first application for orthogonal polynomials I was trying to get an estimate for a density, but I haven't figured out the weighting yet. Fourier polynomials work better for this. plot for fourier approximation to a mixture of normal and dirty try-out script is attached. It looks like fourier for bounded support, hermite for support -inf, +inf and maybe laguerre for one sided should be interesting as orthogonal basis for density approximation. (Much is still strange to me.) Josef -------------- next part -------------- A non-text attachment was scrubbed... Name: four_dens.png Type: image/png Size: 26478 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: try_kdensity.py Type: text/x-python Size: 1994 bytes Desc: not available URL: From charlesr.harris at gmail.com Mon May 16 11:17:46 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 09:17:46 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 9:11 AM, wrote: > On Sat, May 14, 2011 at 6:04 PM, Charles R Harris > wrote: > > > > > > On Sat, May 14, 2011 at 2:26 PM, wrote: > >> > >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest < > vanforeest at gmail.com> > >> wrote: > >> > Hi, > >> > > >> > Might this be what you want: > >> > > >> > The first eleven probabilists' Hermite polynomials are: > >> > > >> > ... > >> > > >> > My chromium browser does not seem to paste pngs. Anyway, check > >> > > >> > > >> > http://en.wikipedia.org/wiki/Hermite_polynomials > >> > > >> > and you'll see that the first polynomial is 1, the second x, and so > >> > forth. From my courses on quantum mechanics I recall that these > >> > polynomials are, with respect to some weight function, orthogonal. > >> > >> Thanks, I haven't looked at that yet, we should add wikipedia to the > >> scipy.special docs. > >> > >> However, I would like to change the last part "with respect to some > >> weight function" > >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality > >> > >> Instead of Gaussian weights I would like uniform weights on bounded > >> support. And I have never seen anything about changing the weight > >> function for the orthogonal basis of these kind of polynomials. > >> > > > > In numpy 1.6, you can use the Legendre polynomials. They are orthogonal > on > > [-1,1] as has been mentioned, but can be mapped to other domains. For > > example > > > > In [1]: from numpy.polynomial import Legendre as L > > > > In [2]: for i in range(5): plot(*L([0]*i + [1], domain=[0,1]).linspace()) > > ...: > > > > produces the attached plots. > > I'm still on numpy 1.5 so this will have to wait a bit. 
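(A small check of the suggestion quoted above, assuming numpy >= 1.6: the Legendre polynomials mapped to the domain [0, 1] are an orthogonal basis with respect to a uniform weight, which is what was asked for. The grid size is arbitrary.)

import numpy as np
from numpy.polynomial import Legendre as L

x = np.linspace(0, 1, 2001)
# the first few Legendre polynomials, mapped from [-1, 1] to [0, 1]
ps = [L([0] * i + [1], domain=[0, 1]) for i in range(4)]

# with a constant weight on [0, 1] they are mutually orthogonal:
# integral_0^1 P_i(x) P_j(x) dx = 0 for i != j, and 1/(2*i + 1) for i == j
gram = np.array([[np.trapz(p(x) * q(x), x) for q in ps] for p in ps])
print np.round(gram, 6)

The first basis function is the constant and the second is the linear trend, so this also matches the original request in this thread.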
> > > > > > > Chuck > > > > > as a first application for orthogonal polynomials I was trying to get > an estimate for a density, but I haven't figured out the weighting > yet. > > Fourier polynomials work better for this. > > You might want to try Chebyshev then, the Cheybyshev polynomialas are essentially cosines and will handle the ends better. Weighting might also help, as I expect the distribution of the errors are somewhat Poisson. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From afraser at lanl.gov Mon May 16 11:32:16 2011 From: afraser at lanl.gov (Andy Fraser) Date: Mon, 16 May 2011 09:32:16 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: (josef pktd's message of "Sat\, 14 May 2011 16\:02\:04 -0400") References: Message-ID: <87hb8uzmnz.fsf@localhost6.localdomain6> It does make sense. I read http://en.wikipedia.org/wiki/Orthogonal_polynomials expecting one of the classical sequences of polynomials to solve the example problem you posed. However, I found that most of them have inner porducts defined on [-1,1] instead of [0,1]. The article describes using the Gram-Schmidt process to solve problems of the kind you pose. I suggest first looking around for a ready made solution to your problem. (The first reference in the wikipedia article: Abramowitz and Stegun is worth a look). Then if that doesn't work, do the Gram-Schmidt thing on your own. >>>>> "J" == josef pktd writes: J> Suppose I have an polynomial basis on a bounded domain [0,1] , J> the polynomials in scipy are orthogonal with respect to a J> weighting function, for example Chebychev. J> What I would like: First component is constant second component J> is linear trend all other components are orthogonal to all J> previous ones with respect to uniform weights. J> Is there a ready way how to do this? (Or it's easy and I can J> figure it out myself?) Or does what I would like not make any J> sense? -- Andy Fraser From charlesr.harris at gmail.com Mon May 16 11:27:23 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 09:27:23 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 9:17 AM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 9:11 AM, wrote: > >> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >> wrote: >> > >> > >> > On Sat, May 14, 2011 at 2:26 PM, wrote: >> >> >> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest < >> vanforeest at gmail.com> >> >> wrote: >> >> > Hi, >> >> > >> >> > Might this be what you want: >> >> > >> >> > The first eleven probabilists' Hermite polynomials are: >> >> > >> >> > ... >> >> > >> >> > My chromium browser does not seem to paste pngs. Anyway, check >> >> > >> >> > >> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >> >> > >> >> > and you'll see that the first polynomial is 1, the second x, and so >> >> > forth. From my courses on quantum mechanics I recall that these >> >> > polynomials are, with respect to some weight function, orthogonal. >> >> >> >> Thanks, I haven't looked at that yet, we should add wikipedia to the >> >> scipy.special docs. >> >> >> >> However, I would like to change the last part "with respect to some >> >> weight function" >> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >> >> >> >> Instead of Gaussian weights I would like uniform weights on bounded >> >> support. 
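(The Gram-Schmidt route suggested above, done numerically on a grid rather than symbolically; uniform weight on [0, 1], grid resolution arbitrary. Up to normalization this reproduces the shifted Legendre polynomials.)

import numpy as np

x = np.linspace(0, 1, 2001)

def inner(f, g):
    # inner product with uniform weight on [0, 1]
    return np.trapz(f * g, x)

# orthonormalize the monomials 1, x, x**2, ... against each other
basis = []
for k in range(4):
    v = x ** k
    for q in basis:
        v = v - inner(v, q) * q
    basis.append(v / np.sqrt(inner(v, v)))

# basis[0] is the constant, basis[1] the linear trend, and
# inner(basis[i], basis[j]) is ~1 for i == j and ~0 otherwise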
And I have never seen anything about changing the weight >> >> function for the orthogonal basis of these kind of polynomials. >> >> >> > >> > In numpy 1.6, you can use the Legendre polynomials. They are orthogonal >> on >> > [-1,1] as has been mentioned, but can be mapped to other domains. For >> > example >> > >> > In [1]: from numpy.polynomial import Legendre as L >> > >> > In [2]: for i in range(5): plot(*L([0]*i + [1], >> domain=[0,1]).linspace()) >> > ...: >> > >> > produces the attached plots. >> >> I'm still on numpy 1.5 so this will have to wait a bit. >> >> > >> > >> > Chuck >> > >> >> >> as a first application for orthogonal polynomials I was trying to get >> an estimate for a density, but I haven't figured out the weighting >> yet. >> >> Fourier polynomials work better for this. >> >> > You might want to try Chebyshev then, the Cheybyshev polynomialas are > essentially cosines and will handle the ends better. Weighting might also > help, as I expect the distribution of the errors are somewhat Poisson. > > I should mention that all the polynomial fits will give you the same results, but the Chebyshev fits are more numerically stable. The general approach is to overfit, i.e., use more polynomials than needed and then truncate the series resulting in a faux min/max approximation. Unlike power series, the coefficients of the Cheybshev series will tend to decrease rapidly at some point. Chuck > Chuck > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon May 16 12:22:51 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 12:22:51 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 11:27 AM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris > wrote: >> >> >> On Mon, May 16, 2011 at 9:11 AM, wrote: >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >>> wrote: >>> > >>> > >>> > On Sat, May 14, 2011 at 2:26 PM, wrote: >>> >> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >>> >> >>> >> wrote: >>> >> > Hi, >>> >> > >>> >> > Might this be what you want: >>> >> > >>> >> > The first eleven probabilists' Hermite polynomials are: >>> >> > >>> >> > ... >>> >> > >>> >> > My chromium browser does not seem to paste pngs. Anyway, check >>> >> > >>> >> > >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >>> >> > >>> >> > and you'll see that the first polynomial is 1, the second x, and so >>> >> > forth. From my courses on quantum mechanics I recall that these >>> >> > polynomials are, with respect to some weight function, orthogonal. >>> >> >>> >> Thanks, I haven't looked at that yet, we should add wikipedia to the >>> >> scipy.special docs. >>> >> >>> >> However, I would like to change the last part "with respect to some >>> >> weight function" >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >>> >> >>> >> Instead of Gaussian weights I would like uniform weights on bounded >>> >> support. And I have never seen anything about changing the weight >>> >> function for the orthogonal basis of these kind of polynomials. >>> >> >>> > >>> > In numpy 1.6, you can use the Legendre polynomials. They are orthogonal >>> > on >>> > [-1,1] as has been mentioned, but can be mapped to other domains. For >>> > example >>> > >>> > In [1]: from numpy.polynomial import Legendre as L >>> > >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >>> > domain=[0,1]).linspace()) >>> > ?? 
...: >>> > >>> > produces the attached plots. >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. >>> >>> > >>> > >>> > Chuck >>> > >>> >>> >>> as a first application for orthogonal polynomials I was trying to get >>> an estimate for a density, but I haven't figured out the weighting >>> yet. >>> >>> Fourier polynomials work better for this. >>> >> >> You might want to try Chebyshev then, the Cheybyshev polynomialas are >> essentially cosines and will handle the ends better. Weighting might also >> help, as I expect the distribution of the errors are somewhat Poisson. >> > > I should mention that all the polynomial fits will give you the same > results, but the Chebyshev fits are more numerically stable. The general > approach is to overfit, i.e., use more polynomials than needed and then > truncate the series resulting in a faux min/max approximation. Unlike power > series, the coefficients of the Cheybshev series will tend to decrease > rapidly at some point. I think I might have still something wrong with the way I use the scipy.special polynomials for a large sample size with 10000 observations, the nice graph is fourier with 20 elements, the second (not so nice) is with scipy.special.chebyt with 500 polynomials. The graph for 20 Chebychev polynomials looks very similar Chebychev doesn't want to get low enough to adjust to the low part. (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) Josef > > Chuck >> >> Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- A non-text attachment was scrubbed... Name: four_dens_20.png Type: image/png Size: 22145 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cheby_dens_20_10000.png Type: image/png Size: 23842 bytes Desc: not available URL: From charlesr.harris at gmail.com Mon May 16 12:29:27 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 10:29:27 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 10:22 AM, wrote: > On Mon, May 16, 2011 at 11:27 AM, Charles R Harris > wrote: > > > > > > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris > > wrote: > >> > >> > >> On Mon, May 16, 2011 at 9:11 AM, wrote: > >>> > >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris > >>> wrote: > >>> > > >>> > > >>> > On Sat, May 14, 2011 at 2:26 PM, wrote: > >>> >> > >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest > >>> >> > >>> >> wrote: > >>> >> > Hi, > >>> >> > > >>> >> > Might this be what you want: > >>> >> > > >>> >> > The first eleven probabilists' Hermite polynomials are: > >>> >> > > >>> >> > ... > >>> >> > > >>> >> > My chromium browser does not seem to paste pngs. Anyway, check > >>> >> > > >>> >> > > >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials > >>> >> > > >>> >> > and you'll see that the first polynomial is 1, the second x, and > so > >>> >> > forth. From my courses on quantum mechanics I recall that these > >>> >> > polynomials are, with respect to some weight function, orthogonal. > >>> >> > >>> >> Thanks, I haven't looked at that yet, we should add wikipedia to the > >>> >> scipy.special docs. 
> >>> >> > >>> >> However, I would like to change the last part "with respect to some > >>> >> weight function" > >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality > >>> >> > >>> >> Instead of Gaussian weights I would like uniform weights on bounded > >>> >> support. And I have never seen anything about changing the weight > >>> >> function for the orthogonal basis of these kind of polynomials. > >>> >> > >>> > > >>> > In numpy 1.6, you can use the Legendre polynomials. They are > orthogonal > >>> > on > >>> > [-1,1] as has been mentioned, but can be mapped to other domains. For > >>> > example > >>> > > >>> > In [1]: from numpy.polynomial import Legendre as L > >>> > > >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], > >>> > domain=[0,1]).linspace()) > >>> > ...: > >>> > > >>> > produces the attached plots. > >>> > >>> I'm still on numpy 1.5 so this will have to wait a bit. > >>> > >>> > > >>> > > >>> > Chuck > >>> > > >>> > >>> > >>> as a first application for orthogonal polynomials I was trying to get > >>> an estimate for a density, but I haven't figured out the weighting > >>> yet. > >>> > >>> Fourier polynomials work better for this. > >>> > >> > >> You might want to try Chebyshev then, the Cheybyshev polynomialas are > >> essentially cosines and will handle the ends better. Weighting might > also > >> help, as I expect the distribution of the errors are somewhat Poisson. > >> > > > > I should mention that all the polynomial fits will give you the same > > results, but the Chebyshev fits are more numerically stable. The general > > approach is to overfit, i.e., use more polynomials than needed and then > > truncate the series resulting in a faux min/max approximation. Unlike > power > > series, the coefficients of the Cheybshev series will tend to decrease > > rapidly at some point. > > I think I might have still something wrong with the way I use the > scipy.special polynomials > > for a large sample size with 10000 observations, the nice graph is > fourier with 20 elements, the second (not so nice) is with > scipy.special.chebyt with 500 polynomials. The graph for 20 Chebychev > polynomials looks very similar > > Chebychev doesn't want to get low enough to adjust to the low part. > > (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) > > That certainly doesn't look right. Could you mail me the data offline? Also, what are you fitting, the histogram, the cdf, or...? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon May 16 12:47:21 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 12:47:21 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 12:29 PM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 10:22 AM, wrote: >> >> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris >> wrote: >> > >> > >> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris >> > wrote: >> >> >> >> >> >> On Mon, May 16, 2011 at 9:11 AM, wrote: >> >>> >> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >> >>> wrote: >> >>> > >> >>> > >> >>> > On Sat, May 14, 2011 at 2:26 PM, wrote: >> >>> >> >> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >> >>> >> >> >>> >> wrote: >> >>> >> > Hi, >> >>> >> > >> >>> >> > Might this be what you want: >> >>> >> > >> >>> >> > The first eleven probabilists' Hermite polynomials are: >> >>> >> > >> >>> >> > ... 
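(The "overfit, then truncate" recipe quoted above in concrete form, using the Chebyshev class from numpy.polynomial; the test function, noise level, degree and cut-off are all just illustrative.)

import numpy as np
from numpy.polynomial import Chebyshev

x = np.linspace(0, 1, 200)
y = np.exp(-3 * x) + 0.01 * np.random.randn(x.size)   # noisy smooth data

# deliberately fit with far more terms than needed
c_full = Chebyshev.fit(x, y, deg=25)
print np.round(np.abs(c_full.coef), 4)   # the magnitudes drop off quickly

# once the coefficients are down at noise level, keep only the leading terms
c_trunc = Chebyshev(c_full.coef[:8], domain=c_full.domain)

Fitting the same data with a degree-25 power-series polyfit would be much less well behaved numerically, which is the point of the advice quoted above.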
>> >>> >> > >> >>> >> > My chromium browser does not seem to paste pngs. Anyway, check >> >>> >> > >> >>> >> > >> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >> >>> >> > >> >>> >> > and you'll see that the first polynomial is 1, the second x, and >> >>> >> > so >> >>> >> > forth. From my courses on quantum mechanics I recall that these >> >>> >> > polynomials are, with respect to some weight function, >> >>> >> > orthogonal. >> >>> >> >> >>> >> Thanks, I haven't looked at that yet, we should add wikipedia to >> >>> >> the >> >>> >> scipy.special docs. >> >>> >> >> >>> >> However, I would like to change the last part "with respect to some >> >>> >> weight function" >> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >> >>> >> >> >>> >> Instead of Gaussian weights I would like uniform weights on bounded >> >>> >> support. And I have never seen anything about changing the weight >> >>> >> function for the orthogonal basis of these kind of polynomials. >> >>> >> >> >>> > >> >>> > In numpy 1.6, you can use the Legendre polynomials. They are >> >>> > orthogonal >> >>> > on >> >>> > [-1,1] as has been mentioned, but can be mapped to other domains. >> >>> > For >> >>> > example >> >>> > >> >>> > In [1]: from numpy.polynomial import Legendre as L >> >>> > >> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >> >>> > domain=[0,1]).linspace()) >> >>> > ?? ...: >> >>> > >> >>> > produces the attached plots. >> >>> >> >>> I'm still on numpy 1.5 so this will have to wait a bit. >> >>> >> >>> > >> >>> > >> >>> > Chuck >> >>> > >> >>> >> >>> >> >>> as a first application for orthogonal polynomials I was trying to get >> >>> an estimate for a density, but I haven't figured out the weighting >> >>> yet. >> >>> >> >>> Fourier polynomials work better for this. >> >>> >> >> >> >> You might want to try Chebyshev then, the Cheybyshev polynomialas are >> >> essentially cosines and will handle the ends better. Weighting might >> >> also >> >> help, as I expect the distribution of the errors are somewhat Poisson. >> >> >> > >> > I should mention that all the polynomial fits will give you the same >> > results, but the Chebyshev fits are more numerically stable. The general >> > approach is to overfit, i.e., use more polynomials than needed and then >> > truncate the series resulting in a faux min/max approximation. Unlike >> > power >> > series, the coefficients of the Cheybshev series will tend to decrease >> > rapidly at some point. >> >> I think I might have still something wrong with the way I use the >> scipy.special polynomials >> >> for a large sample size with 10000 observations, the nice graph is >> fourier with 20 elements, the second (not so nice) is with >> scipy.special.chebyt with 500 polynomials. The graph for 20 Chebychev >> polynomials looks very similar >> >> Chebychev doesn't want to get low enough to adjust to the low part. >> >> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) >> > > That certainly doesn't look right. Could you mail me the data offline? Also, > what are you fitting, the histogram, the cdf, or...? The numbers are generated (by a function in scikits.statsmodels). It's all in the script that I posted, except I keep changing things. I'm not fitting anything directly. There is supposed to be a closed form expression, the estimated coefficient of each polynomial is just the mean of that polynomial evaluated at the data. The theory and the descriptions sounded easy, but unfortunately it didn't work out. 
I was just hoping to get lucky and that I'm able to skip the small print. http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract got me started and it has the fourier case that works. There are lots of older papers that I only skimmed, but I should be able to work out the Hermite case before going back to the general case again with arbitrary orthogonal polynomial bases. (Initially I wanted to ignore the Hermite bases because gaussian_kde works well in that case.) Thanks, Josef > > Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ralf.gommers at googlemail.com Mon May 16 13:50:27 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 16 May 2011 19:50:27 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: <20110515212533.0AA2.B1C76292@gmail.com> References: <20110515212533.0AA2.B1C76292@gmail.com> Message-ID: On Sun, May 15, 2011 at 9:25 PM, Klonuo Umom wrote: > I installed Cygwin on XP with compilers, make etc and then build LAPACK, > after which I build ATLAS. > > Got numpy: > ======================================================================= > svn co http://svn.scipy.org/svn/numpy/trunk numpy > ----------------------------------------------------------------------- > > Edited site.cfg: > ======================================================================= > [DEFAULT] > # this where my atlas and lapack libs are: > library_dirs = /usr/local/lib > include_dirs = /usr/local/include > > [blas_opt] > libraries = f77blas, cblas, atlas > # > [lapack_opt] > libraries = lapack, f77blas, cblas, atlas > ----------------------------------------------------------------------- > > Run: > ======================================================================= > python setup.py build > ----------------------------------------------------------------------- > > Got this error: > ======================================================================= > collect2: ld returned 1 exit status > error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared > build/temp.cygwin-1.7.9-i686-2.6/numpy/linalg/lapack_litemodule.o > build/temp.cygwin-1.7.9-i686-2.6/numpy/linalg/python_xerbla.o > -L/usr/local/lib > -L/usr/lib/gcc/i686-pc-cygwin/3.4.4 > -L/usr/lib/python2.6/config > -Lbuild/temp.cygwin-1.7.9-i686-2.6 > -llapack -lf77blas -lcblas -latlas -lpython2.6 -lg2c -o > build/lib.cygwin-1.7.9-i686-2.6/numpy/linalg/lapack_lite.dll" failed with > exit status 1 > ----------------------------------------------------------------------- > > Am I missing some package? > > That doesn't look familiar, but you shouldn't need g77 for building numpy. Can you post the complete build log? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Mon May 16 14:23:22 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 16 May 2011 20:23:22 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: References: <20110515212533.0AA2.B1C76292@gmail.com> Message-ID: <20110516202320.3804.B1C76292@gmail.com> > Can you post the complete build log? 
Sure, I added it in attachment The fact is that I don't know much about *nix and building code, unless it's not demanding, and googling around it seemed like that error is suggesting something is missing in my cygwin, but now looking at log it's like something is wrong with liblapack.a Or I better stop assuming things :) Cheers -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_cygwin.7z Type: application/octet-stream Size: 2517 bytes Desc: not available URL: From josef.pktd at gmail.com Mon May 16 14:40:11 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 14:40:11 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 12:47 PM, wrote: > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris > wrote: >> >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: >>> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris >>> wrote: >>> > >>> > >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris >>> > wrote: >>> >> >>> >> >>> >> On Mon, May 16, 2011 at 9:11 AM, wrote: >>> >>> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >>> >>> wrote: >>> >>> > >>> >>> > >>> >>> > On Sat, May 14, 2011 at 2:26 PM, wrote: >>> >>> >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >>> >>> >> >>> >>> >> wrote: >>> >>> >> > Hi, >>> >>> >> > >>> >>> >> > Might this be what you want: >>> >>> >> > >>> >>> >> > The first eleven probabilists' Hermite polynomials are: >>> >>> >> > >>> >>> >> > ... >>> >>> >> > >>> >>> >> > My chromium browser does not seem to paste pngs. Anyway, check >>> >>> >> > >>> >>> >> > >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >>> >>> >> > >>> >>> >> > and you'll see that the first polynomial is 1, the second x, and >>> >>> >> > so >>> >>> >> > forth. From my courses on quantum mechanics I recall that these >>> >>> >> > polynomials are, with respect to some weight function, >>> >>> >> > orthogonal. >>> >>> >> >>> >>> >> Thanks, I haven't looked at that yet, we should add wikipedia to >>> >>> >> the >>> >>> >> scipy.special docs. >>> >>> >> >>> >>> >> However, I would like to change the last part "with respect to some >>> >>> >> weight function" >>> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >>> >>> >> >>> >>> >> Instead of Gaussian weights I would like uniform weights on bounded >>> >>> >> support. And I have never seen anything about changing the weight >>> >>> >> function for the orthogonal basis of these kind of polynomials. >>> >>> >> >>> >>> > >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They are >>> >>> > orthogonal >>> >>> > on >>> >>> > [-1,1] as has been mentioned, but can be mapped to other domains. >>> >>> > For >>> >>> > example >>> >>> > >>> >>> > In [1]: from numpy.polynomial import Legendre as L >>> >>> > >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >>> >>> > domain=[0,1]).linspace()) >>> >>> > ?? ...: >>> >>> > >>> >>> > produces the attached plots. >>> >>> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. >>> >>> >>> >>> > >>> >>> > >>> >>> > Chuck >>> >>> > >>> >>> >>> >>> >>> >>> as a first application for orthogonal polynomials I was trying to get >>> >>> an estimate for a density, but I haven't figured out the weighting >>> >>> yet. >>> >>> >>> >>> Fourier polynomials work better for this. >>> >>> >>> >> >>> >> You might want to try Chebyshev then, the Cheybyshev polynomialas are >>> >> essentially cosines and will handle the ends better. 
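(On the "essentially cosines" remark quoted just above: the scipy.special.chebyt polynomials are orthogonal with respect to the weight 1/sqrt(1 - x**2), not with respect to a uniform weight, which is easiest to see through the substitution x = cos(theta). A quick numerical check, grid size arbitrary:)

import numpy as np
from scipy import special

theta = np.linspace(0, np.pi, 20001)
x = np.cos(theta)
T = [special.chebyt(n)(x) for n in range(4)]

# under x = cos(theta), the weight 1/sqrt(1 - x**2) dx turns into d(theta)
# and T_n(x) turns into cos(n*theta), so the Gram matrix is diagonal
gram = np.array([[np.trapz(T[m] * T[n], theta) for n in range(4)]
                 for m in range(4)])
print np.round(gram, 3)   # ~pi for m = n = 0, ~pi/2 for m = n > 0, ~0 otherwise

So plugging chebyt into a series estimator that assumes uniform weights leaves out exactly this weight factor.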
Weighting might >>> >> also >>> >> help, as I expect the distribution of the errors are somewhat Poisson. >>> >> >>> > >>> > I should mention that all the polynomial fits will give you the same >>> > results, but the Chebyshev fits are more numerically stable. The general >>> > approach is to overfit, i.e., use more polynomials than needed and then >>> > truncate the series resulting in a faux min/max approximation. Unlike >>> > power >>> > series, the coefficients of the Cheybshev series will tend to decrease >>> > rapidly at some point. >>> >>> I think I might have still something wrong with the way I use the >>> scipy.special polynomials >>> >>> for a large sample size with 10000 observations, the nice graph is >>> fourier with 20 elements, the second (not so nice) is with >>> scipy.special.chebyt with 500 polynomials. The graph for 20 Chebychev >>> polynomials looks very similar >>> >>> Chebychev doesn't want to get low enough to adjust to the low part. >>> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) >>> >> >> That certainly doesn't look right. Could you mail me the data offline? Also, >> what are you fitting, the histogram, the cdf, or...? > > The numbers are generated (by a function in scikits.statsmodels). It's > all in the script that I posted, except I keep changing things. > > I'm not fitting anything directly. > There is supposed to be a closed form expression, the estimated > coefficient of each polynomial is just the mean of that polynomial > evaluated at the data. The theory and the descriptions sounded easy, > but unfortunately it didn't work out. > > I was just hoping to get lucky and that I'm able to skip the small print. > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract > got me started and it has the fourier case that works. > > There are lots of older papers that I only skimmed, but I should be > able to work out the Hermite case before going back to the general > case again with arbitrary orthogonal polynomial bases. (Initially I > wanted to ignore the Hermite bases because gaussian_kde works well in > that case.) Just another graph before stopping with this chebyt work if I cheat (rescale at the end f_hat = (f_hat - f_hat.min()) fint2 = integrate.trapz(f_hat, grid) f_hat /= fint2 graph is with chebyt with 30 polynomials after shifting and scaling, 20 polynomials looks also good. (hunting for the missing scaling term is left for some other day) In any case, there's a recipe for non-parametric density estimation with compact support. Josef > > Thanks, > > Josef > >> >> Chuck >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: cheby_dens_30_10000_cheating.png Type: image/png Size: 23639 bytes Desc: not available URL: From ralf.gommers at googlemail.com Mon May 16 14:51:19 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 16 May 2011 20:51:19 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: <20110516202320.3804.B1C76292@gmail.com> References: <20110515212533.0AA2.B1C76292@gmail.com> <20110516202320.3804.B1C76292@gmail.com> Message-ID: On Mon, May 16, 2011 at 8:23 PM, Klonuo Umom wrote: > > Can you post the complete build log? 
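(The recipe described above, "the estimated coefficient of each polynomial is just the mean of that polynomial evaluated at the data", written out for an orthonormal uniform-weight basis on [0, 1], assuming numpy >= 1.6 for the Legendre class. The Beta sample, the number of terms and the grid are placeholders, not the mixture used in the thread.)

import numpy as np
from numpy.polynomial import Legendre

data = np.random.beta(2, 5, size=10000)        # toy data supported on [0, 1]
grid = np.linspace(0, 1, 201)

def phi(k, pts):
    # orthonormal shifted Legendre: sqrt(2k + 1) * P_k mapped to [0, 1]
    return np.sqrt(2 * k + 1) * Legendre([0] * k + [1], domain=[0, 1])(pts)

# series coefficient = sample mean of the basis function evaluated at the data
f_hat = sum(phi(k, data).mean() * phi(k, grid) for k in range(12))

# crude clean-up along the lines discussed in the thread:
# clip negative values and renormalize so the estimate integrates to one
f_hat[f_hat < 0] = 0.0
f_hat /= np.trapz(f_hat, grid)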
> > Sure, I added it in attachment > > The fact is that I don't know much about *nix and building code, unless > it's not demanding, and googling around it seemed like that error is > suggesting something is missing in my cygwin, but now looking at log it's > like something is wrong with liblapack.a > > Your site.cfg may need the g2c and gcc libs added, as described at http://www.scipy.org/Installing_SciPy/Windows. Also, your ATLAS was built with gfortran and you have g77 installed, which doesn't work. You can remove g77, install gfortran and try again, or try to find ATLAS binaries built with g77, or build ATLAS yourself (not easy), or use pre-built Numpy binaries. I haven't tried them, but some experimental Sage binaries for Cygwin are linked to at http://trac.sagemath.org/sage_trac/wiki/CygwinPort. I am not sure though why g77 is invoked though, can anyone explain that? Is that because g2c is not found? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon May 16 14:51:48 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 14:51:48 -0400 Subject: [SciPy-User] empirical distributions with Pareto Tails Message-ID: (and another one) something like this (not available in python land from what I have seen (?) ) http://www.mathworks.com/help/toolbox/stats/paretotailsclass.html http://www.mathworks.com/products/statistics/demos.html?file=/products/demos/shipping/stats/gparetodemo.html and a quote http://www.wilmott.com/messageview.cfm?catid=34&threadid=81362 """Fri Dec 31, 10 03:29 PM Interesting. I have used kernel estimation for univariate fitting, but I had be careful at the tails of the distribution. Because I was using a Gaussian kernel, the fitted tails were too thin. I ended up using the kernel estimation for the body of the distribution, and separately fitting the tails with a generalized Pareto distribution. """ Josef (where's the question) From klonuo at gmail.com Mon May 16 14:59:00 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 16 May 2011 20:59:00 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: References: <20110516202320.3804.B1C76292@gmail.com> Message-ID: <20110516205858.3807.B1C76292@gmail.com> > Your site.cfg may need the g2c and gcc libs added, as described at > http://www.scipy.org/Installing_SciPy/Windows. Thanks, I'll try that I was using this as reference: http://new.scipy.org/building/windows.html > Also, your ATLAS was built with gfortran and you have g77 installed, which > doesn't work. You can remove g77, install gfortran and try again, or try to > find ATLAS binaries built with g77, or build ATLAS yourself (not easy), or > use pre-built Numpy binaries This seems strange, as I build ATLAS mayself as I initially wrote. 
I build lapack, then included it in building ATLAS: =============================================================================== ../configure --with-netlib-lapack=/cygdrive/c/cygwin/usr/local/lib/liblapack.a ------------------------------------------------------------------------------- and everything seemed fine at this point From klonuo at gmail.com Mon May 16 15:22:11 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 16 May 2011 21:22:11 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: References: <20110516202320.3804.B1C76292@gmail.com> Message-ID: <20110516212210.380D.B1C76292@gmail.com> > Your site.cfg may need the g2c and gcc libs added, as described at > http://www.scipy.org/Installing_SciPy/Windows Quoting that site: > you need to add the g2c and gcc libraries to the ATLAS and LAPACK libraries > you have already. With Cygwin, you can find these in > /lib/gcc/i686-pc-mingw32/3.4.4. Copy them to g2c.lib and gcc.lib, > respectively, and modify site.cfg accordingly. I found 'libg2c.a' and 'libgcc.a' then copied as 'g2c.lib' and 'gcc.lib' in '\usr\local\lib'which is referenced in site.cfg as: ======================================================================= [DEFAULT] library_dirs = /usr/local/lib ----------------------------------------------------------------------- I tried also this line: ======================================================================= python setup.py config --compiler=mingw32 build --compiler=mingw32 install ----------------------------------------------------------------------- which is referenced both in your linked page and here: http://new.scipy.org/building/windows.html That command is not acceptable by numpy and guide probably should be corrected Then I run: ======================================================================= python setup.py build ----------------------------------------------------------------------- as previous and got same error From ralf.gommers at googlemail.com Mon May 16 15:32:16 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 16 May 2011 21:32:16 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: <20110516212210.380D.B1C76292@gmail.com> References: <20110516202320.3804.B1C76292@gmail.com> <20110516212210.380D.B1C76292@gmail.com> Message-ID: On Mon, May 16, 2011 at 9:22 PM, Klonuo Umom wrote: > > Your site.cfg may need the g2c and gcc libs added, as described at > > http://www.scipy.org/Installing_SciPy/Windows > > Quoting that site: > > you need to add the g2c and gcc libraries to the ATLAS and LAPACK > libraries > > you have already. With Cygwin, you can find these in > > /lib/gcc/i686-pc-mingw32/3.4.4. Copy them to g2c.lib and gcc.lib, > > respectively, and modify site.cfg accordingly. 
> > I found 'libg2c.a' and 'libgcc.a' then copied as 'g2c.lib' and 'gcc.lib' in > '\usr\local\lib'which is referenced in site.cfg as: > > ======================================================================= > [DEFAULT] > library_dirs = /usr/local/lib > ----------------------------------------------------------------------- > > I tried also this line: > ======================================================================= > python setup.py config --compiler=mingw32 build --compiler=mingw32 install > ----------------------------------------------------------------------- > which is referenced both in your linked page and here: > http://new.scipy.org/building/windows.html > > This page is a copy (perhaps with some updates) of the one I linked, I'm not sure if one is more up-to-date than the other. > That command is not acceptable by numpy and guide probably should be > corrected > > Then I run: > ======================================================================= > python setup.py build > ----------------------------------------------------------------------- > as previous and got same error > > > Maybe someone else can help you further, I'm not a Cygwin user and out of ideas, sorry. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Mon May 16 15:37:45 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 16 May 2011 21:37:45 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: <20110516205858.3807.B1C76292@gmail.com> References: <20110516205858.3807.B1C76292@gmail.com> Message-ID: <20110516213743.3812.B1C76292@gmail.com> One more note > > Also, your ATLAS was built with gfortran and you have g77 installed, which > > doesn't work. You can remove g77, install gfortran and try again, or try to > > find ATLAS binaries built with g77, or build ATLAS yourself (not easy), or > > use pre-built Numpy binaries > > This seems strange, as I build ATLAS mayself as I initially wrote. > I build lapack, then included it in building ATLAS: > > =============================================================================== > ../configure --with-netlib-lapack=/cygdrive/c/cygwin/usr/local/lib/liblapack.a > ------------------------------------------------------------------------------- > > and everything seemed fine at this point Answer to this may be that I linked here liblapack that comes with cygwin, and not the one I build earlier. Could it be possible that everyhing went fine with ATLAS in this case? I mean, I don't have gfortran and liblapack seems to be build with it ATLAS finished without errors, here is summary.log: ******************************************************************************* ******************************************************************************* ******************************************************************************* * BEGAN ATLAS3.8.4 INSTALL OF SECTION 0-0-0 ON 05/15/2011 AT 17:57 * ******************************************************************************* ******************************************************************************* ******************************************************************************* IN STAGE 1 INSTALL: SYSTEM PROBE/AUX COMPILE Level 1 cache size calculated as 128KB. dFPU: Combined muladd instruction with 10 cycle pipeline. Apparent number of registers : 10 Register-register performance=1489.24MFLOPS sFPU: Combined muladd instruction with 10 cycle pipeline. 
Apparent number of registers : 10 Register-register performance=1462.86MFLOPS IN STAGE 2 INSTALL: TYPE-DEPENDENT TUNING STAGE 2-1: TUNING PREC='d' (precision 1 of 4) STAGE 2-1-1 : BUILDING BLOCK MATMUL TUNE The best matmul kernel was ATL_dmm6x1x72_sse2.c, NB=72, written by R. Clint Whaley Performance: 2858.21MFLOPS (191.92 percent of of apparent peak) (Gen case got 1593.69MFLOPS) mmNN : ma=1, lat=6, nb=48, mu=6, nu=1 ku=4, ff=0, if=7, nf=1 Performance = 1418.95 (49.64 of copy matmul, 95.28 of peak) mmNT : ma=1, lat=3, nb=48, mu=6, nu=1 ku=48, ff=0, if=7, nf=1 Performance = 1087.58 (38.05 of copy matmul, 73.03 of peak) mmTN : ma=1, lat=3, nb=48, mu=6, nu=1 ku=48, ff=0, if=7, nf=1 Performance = 1532.25 (53.61 of copy matmul, 102.89 of peak) mmTT : ma=1, lat=3, nb=48, mu=6, nu=1 ku=48, ff=0, if=7, nf=1 Performance = 1371.61 (47.99 of copy matmul, 92.10 of peak) STAGE 2-1-2: CacheEdge DETECTION CacheEdge set to 196608 bytes STAGE 2-1-3: LARGE/SMALL CASE CROSSOVER DETECTION STAGE 2-1-3: COPY/NO-COPY CROSSOVER DETECTION done. STAGE 2-1-4: LEVEL 3 BLAS TUNE done. STAGE 2-1-5: GEMV TUNE gemvN : chose routine 9:ATL_gemvN_32x4_1.c written by R. Clint Whaley Yunroll=32, Xunroll=4, using 97 percent of L1 Performance = 327.58 (11.46 of copy matmul, 22.00 of peak) gemvT : chose routine 105:ATL_gemvT_2x16_1.c written by R. Clint Whaley Yunroll=2, Xunroll=16, using 97 percent of L1 Performance = 358.69 (12.55 of copy matmul, 24.09 of peak) STAGE 2-1-6: GER TUNE ger : chose routine 1:ATL_ger1_axpy.c written by R. Clint Whaley mu=16, nu=1, using 0.56 percent of L1 Cache Performance = 139.05 ( 4.86 of copy matmul, 9.34 of peak) STAGE 2-2: TUNING PREC='s' (precision 2 of 4) STAGE 2-2-1 : BUILDING BLOCK MATMUL TUNE The best matmul kernel was ATL_smm6x1x120_sse.c, NB=120, written by R. Clint Whaley Performance: 5823.35MFLOPS (398.08 percent of of apparent peak) (Gen case got 1840.30MFLOPS) mmNN : ma=1, lat=4, nb=60, mu=5, nu=1 ku=4, ff=0, if=6, nf=1 Performance = 1689.93 (29.02 of copy matmul, 115.52 of peak) mmNT : ma=1, lat=3, nb=60, mu=5, nu=1 ku=60, ff=0, if=6, nf=1 Performance = 1258.16 (21.61 of copy matmul, 86.01 of peak) mmTN : ma=1, lat=4, nb=60, mu=5, nu=1 ku=60, ff=0, if=6, nf=1 Performance = 1766.60 (30.34 of copy matmul, 120.76 of peak) mmTT : ma=1, lat=2, nb=60, mu=5, nu=1 ku=60, ff=0, if=6, nf=1 Performance = 1677.33 (28.80 of copy matmul, 114.66 of peak) STAGE 2-2-2: CacheEdge DETECTION CacheEdge set to 196608 bytes STAGE 2-2-3: LARGE/SMALL CASE CROSSOVER DETECTION STAGE 2-2-3: COPY/NO-COPY CROSSOVER DETECTION done. STAGE 2-2-4: LEVEL 3 BLAS TUNE done. STAGE 2-2-5: GEMV TUNE gemvN : chose routine 9:ATL_gemvN_32x4_1.c written by R. Clint Whaley Yunroll=32, Xunroll=4, using 100 percent of L1 Performance = 578.17 ( 9.93 of copy matmul, 39.52 of peak) gemvT : chose routine 105:ATL_gemvT_2x16_1.c written by R. Clint Whaley Yunroll=2, Xunroll=16, using 100 percent of L1 Performance = 622.58 (10.69 of copy matmul, 42.56 of peak) STAGE 2-2-6: GER TUNE ger : chose routine 1:ATL_ger1_axpy.c written by R. Clint Whaley mu=16, nu=1, using 1.00 percent of L1 Cache Performance = 278.81 ( 4.79 of copy matmul, 19.06 of peak) STAGE 2-3: TUNING PREC='z' (precision 3 of 4) STAGE 2-3-1 : BUILDING BLOCK MATMUL TUNE The best matmul kernel was ATL_dmm6x1x72_sse2.c, NB=60, written by R. 
Clint Whaley Performance: 2941.21MFLOPS (197.50 percent of of apparent peak) (Gen case got 1616.05MFLOPS) mmNN : ma=1, lat=8, nb=20, mu=6, nu=1 ku=20, ff=1, if=3, nf=4 Performance = 761.40 (25.89 of copy matmul, 51.13 of peak) mmNT : ma=1, lat=3, nb=20, mu=6, nu=1 ku=20, ff=1, if=3, nf=4 Performance = 1276.90 (43.41 of copy matmul, 85.74 of peak) mmTN : ma=1, lat=2, nb=20, mu=6, nu=1 ku=20, ff=1, if=3, nf=4 Performance = 1469.32 (49.96 of copy matmul, 98.66 of peak) mmTT : ma=1, lat=2, nb=20, mu=6, nu=1 ku=20, ff=1, if=3, nf=4 Performance = 1269.39 (43.16 of copy matmul, 85.24 of peak) STAGE 2-3-2: CacheEdge DETECTION CacheEdge set to 196608 bytes zdNKB set to 0 bytes STAGE 2-3-3: LARGE/SMALL CASE CROSSOVER DETECTION STAGE 2-3-3: COPY/NO-COPY CROSSOVER DETECTION done. STAGE 2-3-4: LEVEL 3 BLAS TUNE done. STAGE 2-3-5: GEMV TUNE gemvN : chose routine 3:ATL_cgemvN_1x1_1a.c written by R. Clint Whaley Yunroll=32, Xunroll=1, using 59 percent of L1 Performance = 641.40 (21.81 of copy matmul, 43.07 of peak) gemvT : chose routine 104:ATL_cgemvT_1x1_1.c written by R. Clint Whaley Yunroll=1, Xunroll=1, using 59 percent of L1 Performance = 619.81 (21.07 of copy matmul, 41.62 of peak) STAGE 2-3-6: GER TUNE ger : chose routine 1:ATL_cger1_axpy.c written by R. Clint Whaley mu=16, nu=1, using 0.50 percent of L1 Cache Performance = 296.77 (10.09 of copy matmul, 19.93 of peak) STAGE 2-4: TUNING PREC='c' (precision 4 of 4) STAGE 2-4-1 : BUILDING BLOCK MATMUL TUNE The best matmul kernel was ATL_smm6x1x120_sse.c, NB=120, written by R. Clint Whaley Performance: 6033.27MFLOPS (412.43 percent of of apparent peak) (Gen case got 1809.27MFLOPS) mmNN : ma=1, lat=4, nb=24, mu=5, nu=1 ku=4, ff=1, if=5, nf=1 Performance = 1532.11 (25.39 of copy matmul, 104.73 of peak) mmNT : ma=1, lat=1, nb=24, mu=5, nu=1 ku=24, ff=1, if=5, nf=1 Performance = 1524.77 (25.27 of copy matmul, 104.23 of peak) mmTN : ma=1, lat=5, nb=24, mu=5, nu=1 ku=24, ff=1, if=5, nf=1 Performance = 1692.87 (28.06 of copy matmul, 115.72 of peak) mmTT : ma=1, lat=3, nb=24, mu=5, nu=1 ku=24, ff=1, if=5, nf=1 Performance = 1660.81 (27.53 of copy matmul, 113.53 of peak) STAGE 2-4-2: CacheEdge DETECTION CacheEdge set to 196608 bytes csNKB set to 0 bytes STAGE 2-4-3: LARGE/SMALL CASE CROSSOVER DETECTION STAGE 2-4-3: COPY/NO-COPY CROSSOVER DETECTION done. STAGE 2-4-4: LEVEL 3 BLAS TUNE done. STAGE 2-4-5: GEMV TUNE gemvN : chose routine 3:ATL_cgemvN_1x1_1a.c written by R. Clint Whaley Yunroll=32, Xunroll=1, using 100 percent of L1 Performance = 1072.14 (17.77 of copy matmul, 73.29 of peak) gemvT : chose routine 104:ATL_cgemvT_1x1_1.c written by R. Clint Whaley Yunroll=1, Xunroll=1, using 100 percent of L1 Performance = 1045.59 (17.33 of copy matmul, 71.48 of peak) STAGE 2-4-6: GER TUNE ger : chose routine 1:ATL_cger1_axpy.c written by R. Clint Whaley mu=16, nu=1, using 0.75 percent of L1 Cache Performance = 588.39 ( 9.75 of copy matmul, 40.22 of peak) STAGE 3: GENERAL LIBRARY BUILD STAGE 4: POST-BUILD TUNING done. 
******************************************************************************* ******************************************************************************* ******************************************************************************* * FINISHED ATLAS3.8.4 INSTALL OF SECTION 0-0-0 ON 05/15/2011 AT 19:50 * ******************************************************************************* ******************************************************************************* ******************************************************************************* From klonuo at gmail.com Mon May 16 15:42:51 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 16 May 2011 21:42:51 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: <20110516213743.3812.B1C76292@gmail.com> References: <20110516205858.3807.B1C76292@gmail.com> <20110516213743.3812.B1C76292@gmail.com> Message-ID: <20110516214249.3815.B1C76292@gmail.com> Talking to myself again > Answer to this may be that I linked here liblapack that comes with cygwin, > and not the one I build earlier. Could it be possible that everyhing went > fine with ATLAS in this case? > I mean, I don't have gfortran and liblapack seems to be build with it I do have gfortran! From ralf.gommers at googlemail.com Mon May 16 15:48:02 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 16 May 2011 21:48:02 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: <20110516214249.3815.B1C76292@gmail.com> References: <20110516205858.3807.B1C76292@gmail.com> <20110516213743.3812.B1C76292@gmail.com> <20110516214249.3815.B1C76292@gmail.com> Message-ID: On Mon, May 16, 2011 at 9:42 PM, Klonuo Umom wrote: > Talking to myself again > > > Answer to this may be that I linked here liblapack that comes with > cygwin, > > and not the one I build earlier. Could it be possible that everyhing went > > fine with ATLAS in this case? > > I mean, I don't have gfortran and liblapack seems to be build with it > > I do have gfortran! > > If you have both installed numpy.distutils picks up g77 first, you should either modify your path so g77 is not on it, or remove it completely. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Mon May 16 15:57:12 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Mon, 16 May 2011 21:57:12 +0200 Subject: [SciPy-User] building numpy 1.6.0 on cygwin: collect2: ld returned 1 exit status In-Reply-To: References: <20110516214249.3815.B1C76292@gmail.com> Message-ID: <20110516215710.3818.B1C76292@gmail.com> > If you have both installed numpy.distutils picks up g77 first, you should > either modify your path so g77 is not on it, or remove it completely. Excellent! That was it Everything fine now :) From charlesr.harris at gmail.com Mon May 16 17:58:51 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 15:58:51 -0600 Subject: [SciPy-User] orthogonal polynomials ? 
In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 12:40 PM, wrote: > On Mon, May 16, 2011 at 12:47 PM, wrote: > > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris > > wrote: > >> > >> > >> On Mon, May 16, 2011 at 10:22 AM, wrote: > >>> > >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris > >>> wrote: > >>> > > >>> > > >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris > >>> > wrote: > >>> >> > >>> >> > >>> >> On Mon, May 16, 2011 at 9:11 AM, wrote: > >>> >>> > >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris > >>> >>> wrote: > >>> >>> > > >>> >>> > > >>> >>> > On Sat, May 14, 2011 at 2:26 PM, wrote: > >>> >>> >> > >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest > >>> >>> >> > >>> >>> >> wrote: > >>> >>> >> > Hi, > >>> >>> >> > > >>> >>> >> > Might this be what you want: > >>> >>> >> > > >>> >>> >> > The first eleven probabilists' Hermite polynomials are: > >>> >>> >> > > >>> >>> >> > ... > >>> >>> >> > > >>> >>> >> > My chromium browser does not seem to paste pngs. Anyway, check > >>> >>> >> > > >>> >>> >> > > >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials > >>> >>> >> > > >>> >>> >> > and you'll see that the first polynomial is 1, the second x, > and > >>> >>> >> > so > >>> >>> >> > forth. From my courses on quantum mechanics I recall that > these > >>> >>> >> > polynomials are, with respect to some weight function, > >>> >>> >> > orthogonal. > >>> >>> >> > >>> >>> >> Thanks, I haven't looked at that yet, we should add wikipedia to > >>> >>> >> the > >>> >>> >> scipy.special docs. > >>> >>> >> > >>> >>> >> However, I would like to change the last part "with respect to > some > >>> >>> >> weight function" > >>> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality > >>> >>> >> > >>> >>> >> Instead of Gaussian weights I would like uniform weights on > bounded > >>> >>> >> support. And I have never seen anything about changing the > weight > >>> >>> >> function for the orthogonal basis of these kind of polynomials. > >>> >>> >> > >>> >>> > > >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They are > >>> >>> > orthogonal > >>> >>> > on > >>> >>> > [-1,1] as has been mentioned, but can be mapped to other domains. > >>> >>> > For > >>> >>> > example > >>> >>> > > >>> >>> > In [1]: from numpy.polynomial import Legendre as L > >>> >>> > > >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], > >>> >>> > domain=[0,1]).linspace()) > >>> >>> > ...: > >>> >>> > > >>> >>> > produces the attached plots. > >>> >>> > >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. > >>> >>> > >>> >>> > > >>> >>> > > >>> >>> > Chuck > >>> >>> > > >>> >>> > >>> >>> > >>> >>> as a first application for orthogonal polynomials I was trying to > get > >>> >>> an estimate for a density, but I haven't figured out the weighting > >>> >>> yet. > >>> >>> > >>> >>> Fourier polynomials work better for this. > >>> >>> > >>> >> > >>> >> You might want to try Chebyshev then, the Cheybyshev polynomialas > are > >>> >> essentially cosines and will handle the ends better. Weighting might > >>> >> also > >>> >> help, as I expect the distribution of the errors are somewhat > Poisson. > >>> >> > >>> > > >>> > I should mention that all the polynomial fits will give you the same > >>> > results, but the Chebyshev fits are more numerically stable. The > general > >>> > approach is to overfit, i.e., use more polynomials than needed and > then > >>> > truncate the series resulting in a faux min/max approximation. 
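As a sketch of that overfit-and-truncate recipe (the data here is invented and the degrees are arbitrary; Chebyshev.fit and truncate from numpy.polynomial are one way to spell it, under the assumption that a numpy with the polynomial classes is available):

import numpy as np
from numpy.polynomial import Chebyshev

# invented data: noisy samples of a smooth function on [0, 1]
x = np.linspace(0., 1., 200)
y = np.exp(-3 * x) * np.sin(6 * x) + 0.01 * np.random.randn(x.size)

# deliberately over-fit with a generous degree ...
fit = Chebyshev.fit(x, y, deg=30)

# ... inspect fit.coef to see where the coefficients die off, then keep
# only the leading terms (truncate(10) keeps coefficients 0..9)
trimmed = fit.truncate(10)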
Unlike > >>> > power > >>> > series, the coefficients of the Cheybshev series will tend to > decrease > >>> > rapidly at some point. > >>> > >>> I think I might have still something wrong with the way I use the > >>> scipy.special polynomials > >>> > >>> for a large sample size with 10000 observations, the nice graph is > >>> fourier with 20 elements, the second (not so nice) is with > >>> scipy.special.chebyt with 500 polynomials. The graph for 20 Chebychev > >>> polynomials looks very similar > >>> > >>> Chebychev doesn't want to get low enough to adjust to the low part. > >>> > >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) > >>> > >> > >> That certainly doesn't look right. Could you mail me the data offline? > Also, > >> what are you fitting, the histogram, the cdf, or...? > > > > The numbers are generated (by a function in scikits.statsmodels). It's > > all in the script that I posted, except I keep changing things. > > > > I'm not fitting anything directly. > > There is supposed to be a closed form expression, the estimated > > coefficient of each polynomial is just the mean of that polynomial > > evaluated at the data. The theory and the descriptions sounded easy, > > but unfortunately it didn't work out. > > > > I was just hoping to get lucky and that I'm able to skip the small print. > > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract > > got me started and it has the fourier case that works. > > > > There are lots of older papers that I only skimmed, but I should be > > able to work out the Hermite case before going back to the general > > case again with arbitrary orthogonal polynomial bases. (Initially I > > wanted to ignore the Hermite bases because gaussian_kde works well in > > that case.) > > Just another graph before stopping with this > > chebyt work if I cheat (rescale at the end > > f_hat = (f_hat - f_hat.min()) > fint2 = integrate.trapz(f_hat, grid) > f_hat /= fint2 > > graph is with chebyt with 30 polynomials after shifting and scaling, > 20 polynomials looks also good. > > (hunting for the missing scaling term is left for some other day) > > In any case, there's a recipe for non-parametric density estimation > with compact support. > > Ah, now I see what is going on -- monte carlo integration to get the expansion of the pdf in terms of orthogonal polynomials. So yes, I think Lagrange polynomials are probably the ones to use unless you use the weight in the integral. Note that 1.6 also has the Hermite and Laguerre polynomials. But it seems that for these things it would also be desirable to have the normalization constants. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon May 16 20:12:14 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 20:12:14 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 5:58 PM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 12:40 PM, wrote: >> >> On Mon, May 16, 2011 at 12:47 PM, ? 
wrote: >> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris >> > wrote: >> >> >> >> >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: >> >>> >> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris >> >>> wrote: >> >>> > >> >>> > >> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris >> >>> > wrote: >> >>> >> >> >>> >> >> >>> >> On Mon, May 16, 2011 at 9:11 AM, wrote: >> >>> >>> >> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >> >>> >>> wrote: >> >>> >>> > >> >>> >>> > >> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, wrote: >> >>> >>> >> >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >> >>> >>> >> >> >>> >>> >> wrote: >> >>> >>> >> > Hi, >> >>> >>> >> > >> >>> >>> >> > Might this be what you want: >> >>> >>> >> > >> >>> >>> >> > The first eleven probabilists' Hermite polynomials are: >> >>> >>> >> > >> >>> >>> >> > ... >> >>> >>> >> > >> >>> >>> >> > My chromium browser does not seem to paste pngs. Anyway, >> >>> >>> >> > check >> >>> >>> >> > >> >>> >>> >> > >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >> >>> >>> >> > >> >>> >>> >> > and you'll see that the first polynomial is 1, the second x, >> >>> >>> >> > and >> >>> >>> >> > so >> >>> >>> >> > forth. From my courses on quantum mechanics I recall that >> >>> >>> >> > these >> >>> >>> >> > polynomials are, with respect to some weight function, >> >>> >>> >> > orthogonal. >> >>> >>> >> >> >>> >>> >> Thanks, I haven't looked at that yet, we should add wikipedia >> >>> >>> >> to >> >>> >>> >> the >> >>> >>> >> scipy.special docs. >> >>> >>> >> >> >>> >>> >> However, I would like to change the last part "with respect to >> >>> >>> >> some >> >>> >>> >> weight function" >> >>> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >> >>> >>> >> >> >>> >>> >> Instead of Gaussian weights I would like uniform weights on >> >>> >>> >> bounded >> >>> >>> >> support. And I have never seen anything about changing the >> >>> >>> >> weight >> >>> >>> >> function for the orthogonal basis of these kind of polynomials. >> >>> >>> >> >> >>> >>> > >> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They are >> >>> >>> > orthogonal >> >>> >>> > on >> >>> >>> > [-1,1] as has been mentioned, but can be mapped to other >> >>> >>> > domains. >> >>> >>> > For >> >>> >>> > example >> >>> >>> > >> >>> >>> > In [1]: from numpy.polynomial import Legendre as L >> >>> >>> > >> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >> >>> >>> > domain=[0,1]).linspace()) >> >>> >>> > ?? ...: >> >>> >>> > >> >>> >>> > produces the attached plots. >> >>> >>> >> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. >> >>> >>> >> >>> >>> > >> >>> >>> > >> >>> >>> > Chuck >> >>> >>> > >> >>> >>> >> >>> >>> >> >>> >>> as a first application for orthogonal polynomials I was trying to >> >>> >>> get >> >>> >>> an estimate for a density, but I haven't figured out the weighting >> >>> >>> yet. >> >>> >>> >> >>> >>> Fourier polynomials work better for this. >> >>> >>> >> >>> >> >> >>> >> You might want to try Chebyshev then, the Cheybyshev polynomialas >> >>> >> are >> >>> >> essentially cosines and will handle the ends better. Weighting >> >>> >> might >> >>> >> also >> >>> >> help, as I expect the distribution of the errors are somewhat >> >>> >> Poisson. >> >>> >> >> >>> > >> >>> > I should mention that all the polynomial fits will give you the same >> >>> > results, but the Chebyshev fits are more numerically stable. 
The >> >>> > general >> >>> > approach is to overfit, i.e., use more polynomials than needed and >> >>> > then >> >>> > truncate the series resulting in a faux min/max approximation. >> >>> > Unlike >> >>> > power >> >>> > series, the coefficients of the Cheybshev series will tend to >> >>> > decrease >> >>> > rapidly at some point. >> >>> >> >>> I think I might have still something wrong with the way I use the >> >>> scipy.special polynomials >> >>> >> >>> for a large sample size with 10000 observations, the nice graph is >> >>> fourier with 20 elements, the second (not so nice) is with >> >>> scipy.special.chebyt with 500 polynomials. The graph for 20 Chebychev >> >>> polynomials looks very similar >> >>> >> >>> Chebychev doesn't want to get low enough to adjust to the low part. >> >>> >> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) >> >>> >> >> >> >> That certainly doesn't look right. Could you mail me the data offline? >> >> Also, >> >> what are you fitting, the histogram, the cdf, or...? >> > >> > The numbers are generated (by a function in scikits.statsmodels). It's >> > all in the script that I posted, except I keep changing things. >> > >> > I'm not fitting anything directly. >> > There is supposed to be a closed form expression, the estimated >> > coefficient of each polynomial is just the mean of that polynomial >> > evaluated at the data. The theory and the descriptions sounded easy, >> > but unfortunately it didn't work out. >> > >> > I was just hoping to get lucky and that I'm able to skip the small >> > print. >> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract >> > got me started and it has the fourier case that works. >> > >> > There are lots of older papers that I only skimmed, but I should be >> > able to work out the Hermite case before going back to the general >> > case again with arbitrary orthogonal polynomial bases. (Initially I >> > wanted to ignore the Hermite bases because gaussian_kde works well in >> > that case.) >> >> Just another graph before stopping with this >> >> chebyt work if I cheat (rescale at the end >> >> f_hat = (f_hat - f_hat.min()) >> fint2 = integrate.trapz(f_hat, grid) >> f_hat /= fint2 >> >> graph is with chebyt with 30 polynomials after shifting and scaling, >> 20 polynomials looks also good. >> >> (hunting for the missing scaling term is left for some other day) >> >> In any case, there's a recipe for non-parametric density estimation >> with compact support. >> > > Ah, now I see what is going on -- monte carlo integration to get the > expansion of the pdf in terms of orthogonal polynomials. So yes, I think > Lagrange polynomials are probably the ones to use unless you use the weight > in the integral. Note that 1.6 also has the Hermite and Laguerre > polynomials. But it seems that for these things it would also be desirable > to have the normalization constants. It's also intended to be used as a density estimator for real data, but the idea is the same. My main problem seems to be that I haven't figured out the normalization (constants) that I'm supposed to use. Given the Wikipedia page that Andy pointed out, I added an additional weighting term. I still need to shift and rescale, but the graphs look nice. (the last one, promised) chebyt with 30 polynomials on sample with 10000 observations. (I don't know yet if I want to use the new polynomials in numpy even thought they are much nicer. 
I just gave up trying to get statsmodels compatible with numpy 1.3 because I'm using the polynomials introduced in 1.4) (I will also look at Andy's pointer to Gram?Schmidt, because I realized that for nonlinear trend estimation I want orthogonal for discretely evaluated points instead of for the integral.) Thanks, Josef > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- A non-text attachment was scrubbed... Name: cheby_dens_30_10000_cheating_weight.png Type: image/png Size: 21736 bytes Desc: not available URL: From klonuo at gmail.com Mon May 16 21:07:06 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Tue, 17 May 2011 03:07:06 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit Message-ID: <20110517030704.381B.B1C76292@gmail.com> I followed partial instructions and scipy page, which are for different MKL and VS version. I have MKL 10.2.2.025 and MSVS9 .numpy-site.cfg file example: =============================================================================== mkl_libs = mkl_ia32, mkl_c_dll, libguide40 lapack_libs = mkl_lapack ------------------------------------------------------------------------------- None of this libraries are in MKL ia32\libs folder, so I assumed this: =============================================================================== mkl_libs = mkl_blas95, mkl_intel_c_dll, libguide40 lapack_libs = mkl_lapack95 ------------------------------------------------------------------------------- As expected, this was wrong. Can someone assist, please? TIA additionally this is ia32/libs folder listing: =============================================================================== C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\libguide40.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\libguide.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\libiomp5md.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\libiomp5mt.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_blacs_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_blacs_intelmpi.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_blacs_mpich2.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_blas95.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_cdft_core.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_cdft_core_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_core.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_core_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_intel_c.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_intel_c_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_intel_s.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_intel_s_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_intel_thread.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_intel_thread_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_lapack95.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_pgi_thread.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_pgi_thread_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_scalapack_core.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_scalapack_core_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_sequential.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_sequential_dll.lib C:\Program Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_solver.lib C:\Program 
Files\Intel\MKL\10.2.2.025\ia32\lib\mkl_solver_sequential.lib ------------------------------------------------------------------------------- From charlesr.harris at gmail.com Mon May 16 21:29:27 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 19:29:27 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 6:12 PM, wrote: > On Mon, May 16, 2011 at 5:58 PM, Charles R Harris > wrote: > > > > > > On Mon, May 16, 2011 at 12:40 PM, wrote: > >> > >> On Mon, May 16, 2011 at 12:47 PM, wrote: > >> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris > >> > wrote: > >> >> > >> >> > >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: > >> >>> > >> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris > >> >>> wrote: > >> >>> > > >> >>> > > >> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris > >> >>> > wrote: > >> >>> >> > >> >>> >> > >> >>> >> On Mon, May 16, 2011 at 9:11 AM, wrote: > >> >>> >>> > >> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris > >> >>> >>> wrote: > >> >>> >>> > > >> >>> >>> > > >> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, > wrote: > >> >>> >>> >> > >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest > >> >>> >>> >> > >> >>> >>> >> wrote: > >> >>> >>> >> > Hi, > >> >>> >>> >> > > >> >>> >>> >> > Might this be what you want: > >> >>> >>> >> > > >> >>> >>> >> > The first eleven probabilists' Hermite polynomials are: > >> >>> >>> >> > > >> >>> >>> >> > ... > >> >>> >>> >> > > >> >>> >>> >> > My chromium browser does not seem to paste pngs. Anyway, > >> >>> >>> >> > check > >> >>> >>> >> > > >> >>> >>> >> > > >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials > >> >>> >>> >> > > >> >>> >>> >> > and you'll see that the first polynomial is 1, the second > x, > >> >>> >>> >> > and > >> >>> >>> >> > so > >> >>> >>> >> > forth. From my courses on quantum mechanics I recall that > >> >>> >>> >> > these > >> >>> >>> >> > polynomials are, with respect to some weight function, > >> >>> >>> >> > orthogonal. > >> >>> >>> >> > >> >>> >>> >> Thanks, I haven't looked at that yet, we should add wikipedia > >> >>> >>> >> to > >> >>> >>> >> the > >> >>> >>> >> scipy.special docs. > >> >>> >>> >> > >> >>> >>> >> However, I would like to change the last part "with respect > to > >> >>> >>> >> some > >> >>> >>> >> weight function" > >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality > >> >>> >>> >> > >> >>> >>> >> Instead of Gaussian weights I would like uniform weights on > >> >>> >>> >> bounded > >> >>> >>> >> support. And I have never seen anything about changing the > >> >>> >>> >> weight > >> >>> >>> >> function for the orthogonal basis of these kind of > polynomials. > >> >>> >>> >> > >> >>> >>> > > >> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They are > >> >>> >>> > orthogonal > >> >>> >>> > on > >> >>> >>> > [-1,1] as has been mentioned, but can be mapped to other > >> >>> >>> > domains. > >> >>> >>> > For > >> >>> >>> > example > >> >>> >>> > > >> >>> >>> > In [1]: from numpy.polynomial import Legendre as L > >> >>> >>> > > >> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], > >> >>> >>> > domain=[0,1]).linspace()) > >> >>> >>> > ...: > >> >>> >>> > > >> >>> >>> > produces the attached plots. > >> >>> >>> > >> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. 
> >> >>> >>> > >> >>> >>> > > >> >>> >>> > > >> >>> >>> > Chuck > >> >>> >>> > > >> >>> >>> > >> >>> >>> > >> >>> >>> as a first application for orthogonal polynomials I was trying > to > >> >>> >>> get > >> >>> >>> an estimate for a density, but I haven't figured out the > weighting > >> >>> >>> yet. > >> >>> >>> > >> >>> >>> Fourier polynomials work better for this. > >> >>> >>> > >> >>> >> > >> >>> >> You might want to try Chebyshev then, the Cheybyshev polynomialas > >> >>> >> are > >> >>> >> essentially cosines and will handle the ends better. Weighting > >> >>> >> might > >> >>> >> also > >> >>> >> help, as I expect the distribution of the errors are somewhat > >> >>> >> Poisson. > >> >>> >> > >> >>> > > >> >>> > I should mention that all the polynomial fits will give you the > same > >> >>> > results, but the Chebyshev fits are more numerically stable. The > >> >>> > general > >> >>> > approach is to overfit, i.e., use more polynomials than needed and > >> >>> > then > >> >>> > truncate the series resulting in a faux min/max approximation. > >> >>> > Unlike > >> >>> > power > >> >>> > series, the coefficients of the Cheybshev series will tend to > >> >>> > decrease > >> >>> > rapidly at some point. > >> >>> > >> >>> I think I might have still something wrong with the way I use the > >> >>> scipy.special polynomials > >> >>> > >> >>> for a large sample size with 10000 observations, the nice graph is > >> >>> fourier with 20 elements, the second (not so nice) is with > >> >>> scipy.special.chebyt with 500 polynomials. The graph for 20 > Chebychev > >> >>> polynomials looks very similar > >> >>> > >> >>> Chebychev doesn't want to get low enough to adjust to the low part. > >> >>> > >> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) > >> >>> > >> >> > >> >> That certainly doesn't look right. Could you mail me the data > offline? > >> >> Also, > >> >> what are you fitting, the histogram, the cdf, or...? > >> > > >> > The numbers are generated (by a function in scikits.statsmodels). It's > >> > all in the script that I posted, except I keep changing things. > >> > > >> > I'm not fitting anything directly. > >> > There is supposed to be a closed form expression, the estimated > >> > coefficient of each polynomial is just the mean of that polynomial > >> > evaluated at the data. The theory and the descriptions sounded easy, > >> > but unfortunately it didn't work out. > >> > > >> > I was just hoping to get lucky and that I'm able to skip the small > >> > print. > >> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract > >> > got me started and it has the fourier case that works. > >> > > >> > There are lots of older papers that I only skimmed, but I should be > >> > able to work out the Hermite case before going back to the general > >> > case again with arbitrary orthogonal polynomial bases. (Initially I > >> > wanted to ignore the Hermite bases because gaussian_kde works well in > >> > that case.) > >> > >> Just another graph before stopping with this > >> > >> chebyt work if I cheat (rescale at the end > >> > >> f_hat = (f_hat - f_hat.min()) > >> fint2 = integrate.trapz(f_hat, grid) > >> f_hat /= fint2 > >> > >> graph is with chebyt with 30 polynomials after shifting and scaling, > >> 20 polynomials looks also good. > >> > >> (hunting for the missing scaling term is left for some other day) > >> > >> In any case, there's a recipe for non-parametric density estimation > >> with compact support. 
> >> > > > > Ah, now I see what is going on -- monte carlo integration to get the > > expansion of the pdf in terms of orthogonal polynomials. So yes, I think > > Lagrange polynomials are probably the ones to use unless you use the > weight > > in the integral. Note that 1.6 also has the Hermite and Laguerre > > polynomials. But it seems that for these things it would also be > desirable > > to have the normalization constants. > > Heh, I meant Legendre. > It's also intended to be used as a density estimator for real data, > but the idea is the same. > > My main problem seems to be that I haven't figured out the > normalization (constants) that I'm supposed to use. > The normalization for Legendre functions over an interval of length L would be (2/L)*2/(2*i + 1)*(1/n), where i is the degree of the polynomial, and n is the number of samples. > Given the Wikipedia page that Andy pointed out, I added an additional > weighting term. I still need to shift and rescale, but the graphs look > nice. (the last one, promised) chebyt with 30 polynomials on sample > with 10000 observations. > > (I don't know yet if I want to use the new polynomials in numpy even > thought they are much nicer. I just gave up trying to get statsmodels > compatible with numpy 1.3 because I'm using the polynomials introduced > in 1.4) > > (I will also look at Andy's pointer to Gram?Schmidt, because I > realized that for nonlinear trend estimation I want orthogonal for > discretely evaluated points instead of for the integral.) > > QR is Gram-Schmidt. You can use any polynomial basis for the columns. There is also a method due to Moler to compute the polynomials using the fact that they satisfy a three term recursion, but QR is simpler. Anne suggested some time ago that I should include the Gauss points and weights in the polynomial classes and I've come to the conclusion she was right. Looks like I should include the normalization factors also. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon May 16 21:47:47 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 May 2011 21:47:47 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 9:29 PM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 6:12 PM, wrote: >> >> On Mon, May 16, 2011 at 5:58 PM, Charles R Harris >> wrote: >> > >> > >> > On Mon, May 16, 2011 at 12:40 PM, wrote: >> >> >> >> On Mon, May 16, 2011 at 12:47 PM, ? wrote: >> >> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris >> >> > wrote: >> >> >> >> >> >> >> >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: >> >> >>> >> >> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris >> >> >>> wrote: >> >> >>> > >> >> >>> > >> >> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris >> >> >>> > wrote: >> >> >>> >> >> >> >>> >> >> >> >>> >> On Mon, May 16, 2011 at 9:11 AM, wrote: >> >> >>> >>> >> >> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >> >> >>> >>> wrote: >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, >> >> >>> >>> > wrote: >> >> >>> >>> >> >> >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >> >> >>> >>> >> >> >> >>> >>> >> wrote: >> >> >>> >>> >> > Hi, >> >> >>> >>> >> > >> >> >>> >>> >> > Might this be what you want: >> >> >>> >>> >> > >> >> >>> >>> >> > The first eleven probabilists' Hermite polynomials are: >> >> >>> >>> >> > >> >> >>> >>> >> > ... 
>> >> >>> >>> >> > >> >> >>> >>> >> > My chromium browser does not seem to paste pngs. Anyway, >> >> >>> >>> >> > check >> >> >>> >>> >> > >> >> >>> >>> >> > >> >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >> >> >>> >>> >> > >> >> >>> >>> >> > and you'll see that the first polynomial is 1, the second >> >> >>> >>> >> > x, >> >> >>> >>> >> > and >> >> >>> >>> >> > so >> >> >>> >>> >> > forth. From my courses on quantum mechanics I recall that >> >> >>> >>> >> > these >> >> >>> >>> >> > polynomials are, with respect to some weight function, >> >> >>> >>> >> > orthogonal. >> >> >>> >>> >> >> >> >>> >>> >> Thanks, I haven't looked at that yet, we should add >> >> >>> >>> >> wikipedia >> >> >>> >>> >> to >> >> >>> >>> >> the >> >> >>> >>> >> scipy.special docs. >> >> >>> >>> >> >> >> >>> >>> >> However, I would like to change the last part "with respect >> >> >>> >>> >> to >> >> >>> >>> >> some >> >> >>> >>> >> weight function" >> >> >>> >>> >> >> >> >>> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >> >> >>> >>> >> >> >> >>> >>> >> Instead of Gaussian weights I would like uniform weights on >> >> >>> >>> >> bounded >> >> >>> >>> >> support. And I have never seen anything about changing the >> >> >>> >>> >> weight >> >> >>> >>> >> function for the orthogonal basis of these kind of >> >> >>> >>> >> polynomials. >> >> >>> >>> >> >> >> >>> >>> > >> >> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They are >> >> >>> >>> > orthogonal >> >> >>> >>> > on >> >> >>> >>> > [-1,1] as has been mentioned, but can be mapped to other >> >> >>> >>> > domains. >> >> >>> >>> > For >> >> >>> >>> > example >> >> >>> >>> > >> >> >>> >>> > In [1]: from numpy.polynomial import Legendre as L >> >> >>> >>> > >> >> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >> >> >>> >>> > domain=[0,1]).linspace()) >> >> >>> >>> > ?? ...: >> >> >>> >>> > >> >> >>> >>> > produces the attached plots. >> >> >>> >>> >> >> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. >> >> >>> >>> >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > Chuck >> >> >>> >>> > >> >> >>> >>> >> >> >>> >>> >> >> >>> >>> as a first application for orthogonal polynomials I was trying >> >> >>> >>> to >> >> >>> >>> get >> >> >>> >>> an estimate for a density, but I haven't figured out the >> >> >>> >>> weighting >> >> >>> >>> yet. >> >> >>> >>> >> >> >>> >>> Fourier polynomials work better for this. >> >> >>> >>> >> >> >>> >> >> >> >>> >> You might want to try Chebyshev then, the Cheybyshev >> >> >>> >> polynomialas >> >> >>> >> are >> >> >>> >> essentially cosines and will handle the ends better. Weighting >> >> >>> >> might >> >> >>> >> also >> >> >>> >> help, as I expect the distribution of the errors are somewhat >> >> >>> >> Poisson. >> >> >>> >> >> >> >>> > >> >> >>> > I should mention that all the polynomial fits will give you the >> >> >>> > same >> >> >>> > results, but the Chebyshev fits are more numerically stable. The >> >> >>> > general >> >> >>> > approach is to overfit, i.e., use more polynomials than needed >> >> >>> > and >> >> >>> > then >> >> >>> > truncate the series resulting in a faux min/max approximation. >> >> >>> > Unlike >> >> >>> > power >> >> >>> > series, the coefficients of the Cheybshev series will tend to >> >> >>> > decrease >> >> >>> > rapidly at some point. 
>> >> >>> >> >> >>> I think I might have still something wrong with the way I use the >> >> >>> scipy.special polynomials >> >> >>> >> >> >>> for a large sample size with 10000 observations, the nice graph is >> >> >>> fourier with 20 elements, the second (not so nice) is with >> >> >>> scipy.special.chebyt with 500 polynomials. The graph for 20 >> >> >>> Chebychev >> >> >>> polynomials looks very similar >> >> >>> >> >> >>> Chebychev doesn't want to get low enough to adjust to the low part. >> >> >>> >> >> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) >> >> >>> >> >> >> >> >> >> That certainly doesn't look right. Could you mail me the data >> >> >> offline? >> >> >> Also, >> >> >> what are you fitting, the histogram, the cdf, or...? >> >> > >> >> > The numbers are generated (by a function in scikits.statsmodels). >> >> > It's >> >> > all in the script that I posted, except I keep changing things. >> >> > >> >> > I'm not fitting anything directly. >> >> > There is supposed to be a closed form expression, the estimated >> >> > coefficient of each polynomial is just the mean of that polynomial >> >> > evaluated at the data. The theory and the descriptions sounded easy, >> >> > but unfortunately it didn't work out. >> >> > >> >> > I was just hoping to get lucky and that I'm able to skip the small >> >> > print. >> >> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract >> >> > got me started and it has the fourier case that works. >> >> > >> >> > There are lots of older papers that I only skimmed, but I should be >> >> > able to work out the Hermite case before going back to the general >> >> > case again with arbitrary orthogonal polynomial bases. (Initially I >> >> > wanted to ignore the Hermite bases because gaussian_kde works well in >> >> > that case.) >> >> >> >> Just another graph before stopping with this >> >> >> >> chebyt work if I cheat (rescale at the end >> >> >> >> f_hat = (f_hat - f_hat.min()) >> >> fint2 = integrate.trapz(f_hat, grid) >> >> f_hat /= fint2 >> >> >> >> graph is with chebyt with 30 polynomials after shifting and scaling, >> >> 20 polynomials looks also good. >> >> >> >> (hunting for the missing scaling term is left for some other day) >> >> >> >> In any case, there's a recipe for non-parametric density estimation >> >> with compact support. >> >> >> > >> > Ah, now I see what is going on -- monte carlo integration to get the >> > expansion of the pdf in terms of orthogonal polynomials. So yes, I think >> > Lagrange polynomials are probably the ones to use unless you use the >> > weight >> > in the integral. Note that 1.6 also has the Hermite and Laguerre >> > polynomials. But it seems that for these things it would also be >> > desirable >> > to have the normalization constants. >> > > Heh, I meant Legendre. > >> >> It's also intended to be used as a density estimator for real data, >> but the idea is the same. >> >> My main problem seems to be that I haven't figured out the >> normalization (constants) that I'm supposed to use. > > The normalization for Legendre functions over an interval of length L would > be (2/L)*2/(2*i + 1)*(1/n), where i is the degree of the polynomial, and n > is the number of samples. > >> >> Given the Wikipedia page that Andy pointed out, I added an additional >> weighting term. I still need to shift and rescale, but the graphs look >> nice. (the last one, promised) chebyt with 30 polynomials on sample >> with 10000 observations. 
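To make that recipe concrete, here is a sketch of the Legendre version (it needs numpy 1.6 for numpy.polynomial.legendre; the beta-distributed sample and the interval [a, b] are made up for illustration, and the (2*i + 1)/2 factor is simply the reciprocal of the 2/(2*i + 1) normalization integral discussed above):

import numpy as np
from numpy.polynomial import legendre

# made-up sample on a known interval [a, b]; the thread's data come from a
# function in scikits.statsmodels, which is not reproduced here
x = np.random.beta(2., 5., size=10000)
a, b = 0., 1.
L = b - a

# map the sample to [-1, 1], where the Legendre polynomials are orthogonal
u = 2. * (x - a) / L - 1.

deg = 20
# V[j, i] = P_i(u_j), so the column means are Monte Carlo estimates of E[P_i(U)]
V = legendre.legvander(u, deg)
moments = V.mean(axis=0)

# divide by the normalization integral 2/(2*i + 1) of each P_i, and by L/2
# for the change of variables back to [a, b]
i = np.arange(deg + 1)
coef = moments * (2. * i + 1.) / 2. * (2. / L)

# evaluate the density estimate on a grid in the original coordinates
grid = np.linspace(a, b, 201)
f_hat = legendre.legval(2. * (grid - a) / L - 1., coef)

As with the Fourier and Chebyshev variants above, a truncated series can dip below zero near the boundaries, so the same shift-and-rescale step may still be needed.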
>> >> (I don't know yet if I want to use the new polynomials in numpy even >> thought they are much nicer. I just gave up trying to get statsmodels >> compatible with numpy 1.3 because I'm using the polynomials introduced >> in 1.4) >> >> (I will also look at Andy's pointer to Gram?Schmidt, because I >> realized that for nonlinear trend estimation I want orthogonal for >> discretely evaluated points instead of for the integral.) >> > > QR is Gram-Schmidt. You can use any polynomial basis for the columns. There > is also a method due to Moler to compute the polynomials using the fact that > they satisfy a three term recursion, but QR is simpler. > > Anne suggested some time ago that I should include the Gauss points and > weights in the polynomial classes and I've come to the conclusion she was > right. Looks like I should include the normalization factors also. Since I'm going through this backwards, program first, figure out what I'm doing, I always appreciate any of these premade helper functions. A mixture between trial and error and reading Wikipedia. This is the reweighted chebychev T polynomial, orthonormal with respect to uniform integration, for i,p in enumerate(polys[:5]): for j,p2 in enumerate(polys[:5]): print i,j,integrate.quad(lambda x: p(x)*p2(x), -1,1)[0] And now the rescaling essentially doesn't have an effect anymore f_hat.min() 0.00393264959543 fint2 1.00082015654 integral of estimated pdf class ChtPoly(object): def __init__(self, order): self.order = order from scipy.special import chebyt self.poly = chebyt(order) def __call__(self, x): if self.order == 0: return np.ones_like(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) else: return self.poly(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) * np.sqrt(2) If I can get the same for some of the other polynomials, I would be happy. Josef > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From cgohlke at uci.edu Mon May 16 22:04:46 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 16 May 2011 19:04:46 -0700 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110517030704.381B.B1C76292@gmail.com> References: <20110517030704.381B.B1C76292@gmail.com> Message-ID: <4DD1D7BE.7020409@uci.edu> On 5/16/2011 6:07 PM, Klonuo Umom wrote: > > I followed partial instructions and scipy page, which are for different MKL and VS version. > > I have MKL 10.2.2.025 and MSVS9 > > .numpy-site.cfg file example: > =============================================================================== > mkl_libs = mkl_ia32, mkl_c_dll, libguide40 > lapack_libs = mkl_lapack > ------------------------------------------------------------------------------- > > None of this libraries are in MKL ia32\libs folder, so I assumed this: > =============================================================================== > mkl_libs = mkl_blas95, mkl_intel_c_dll, libguide40 > lapack_libs = mkl_lapack95 > ------------------------------------------------------------------------------- > > As expected, this was wrong. > > Can someone assist, please? 
> > TIA > Try the Intel MKL Link Line Advisor I use the following multi-thread static libraries with MKL 11.1: mkl_lapack95,mkl_blas95,mkl_intel_c,mkl_intel_thread,mkl_core,libiomp5md,libifportmd Christoph From charlesr.harris at gmail.com Mon May 16 22:16:20 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 20:16:20 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 7:47 PM, wrote: > On Mon, May 16, 2011 at 9:29 PM, Charles R Harris > wrote: > > > > > > On Mon, May 16, 2011 at 6:12 PM, wrote: > >> > >> On Mon, May 16, 2011 at 5:58 PM, Charles R Harris > >> wrote: > >> > > >> > > >> > On Mon, May 16, 2011 at 12:40 PM, wrote: > >> >> > >> >> On Mon, May 16, 2011 at 12:47 PM, wrote: > >> >> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris > >> >> > wrote: > >> >> >> > >> >> >> > >> >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: > >> >> >>> > >> >> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris > >> >> >>> wrote: > >> >> >>> > > >> >> >>> > > >> >> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris > >> >> >>> > wrote: > >> >> >>> >> > >> >> >>> >> > >> >> >>> >> On Mon, May 16, 2011 at 9:11 AM, > wrote: > >> >> >>> >>> > >> >> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris > >> >> >>> >>> wrote: > >> >> >>> >>> > > >> >> >>> >>> > > >> >> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, > >> >> >>> >>> > wrote: > >> >> >>> >>> >> > >> >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest > >> >> >>> >>> >> > >> >> >>> >>> >> wrote: > >> >> >>> >>> >> > Hi, > >> >> >>> >>> >> > > >> >> >>> >>> >> > Might this be what you want: > >> >> >>> >>> >> > > >> >> >>> >>> >> > The first eleven probabilists' Hermite polynomials are: > >> >> >>> >>> >> > > >> >> >>> >>> >> > ... > >> >> >>> >>> >> > > >> >> >>> >>> >> > My chromium browser does not seem to paste pngs. Anyway, > >> >> >>> >>> >> > check > >> >> >>> >>> >> > > >> >> >>> >>> >> > > >> >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials > >> >> >>> >>> >> > > >> >> >>> >>> >> > and you'll see that the first polynomial is 1, the > second > >> >> >>> >>> >> > x, > >> >> >>> >>> >> > and > >> >> >>> >>> >> > so > >> >> >>> >>> >> > forth. From my courses on quantum mechanics I recall > that > >> >> >>> >>> >> > these > >> >> >>> >>> >> > polynomials are, with respect to some weight function, > >> >> >>> >>> >> > orthogonal. > >> >> >>> >>> >> > >> >> >>> >>> >> Thanks, I haven't looked at that yet, we should add > >> >> >>> >>> >> wikipedia > >> >> >>> >>> >> to > >> >> >>> >>> >> the > >> >> >>> >>> >> scipy.special docs. > >> >> >>> >>> >> > >> >> >>> >>> >> However, I would like to change the last part "with > respect > >> >> >>> >>> >> to > >> >> >>> >>> >> some > >> >> >>> >>> >> weight function" > >> >> >>> >>> >> > >> >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality > >> >> >>> >>> >> > >> >> >>> >>> >> Instead of Gaussian weights I would like uniform weights > on > >> >> >>> >>> >> bounded > >> >> >>> >>> >> support. And I have never seen anything about changing the > >> >> >>> >>> >> weight > >> >> >>> >>> >> function for the orthogonal basis of these kind of > >> >> >>> >>> >> polynomials. > >> >> >>> >>> >> > >> >> >>> >>> > > >> >> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They > are > >> >> >>> >>> > orthogonal > >> >> >>> >>> > on > >> >> >>> >>> > [-1,1] as has been mentioned, but can be mapped to other > >> >> >>> >>> > domains. 
> >> >> >>> >>> > For > >> >> >>> >>> > example > >> >> >>> >>> > > >> >> >>> >>> > In [1]: from numpy.polynomial import Legendre as L > >> >> >>> >>> > > >> >> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], > >> >> >>> >>> > domain=[0,1]).linspace()) > >> >> >>> >>> > ...: > >> >> >>> >>> > > >> >> >>> >>> > produces the attached plots. > >> >> >>> >>> > >> >> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. > >> >> >>> >>> > >> >> >>> >>> > > >> >> >>> >>> > > >> >> >>> >>> > Chuck > >> >> >>> >>> > > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> as a first application for orthogonal polynomials I was > trying > >> >> >>> >>> to > >> >> >>> >>> get > >> >> >>> >>> an estimate for a density, but I haven't figured out the > >> >> >>> >>> weighting > >> >> >>> >>> yet. > >> >> >>> >>> > >> >> >>> >>> Fourier polynomials work better for this. > >> >> >>> >>> > >> >> >>> >> > >> >> >>> >> You might want to try Chebyshev then, the Cheybyshev > >> >> >>> >> polynomialas > >> >> >>> >> are > >> >> >>> >> essentially cosines and will handle the ends better. Weighting > >> >> >>> >> might > >> >> >>> >> also > >> >> >>> >> help, as I expect the distribution of the errors are somewhat > >> >> >>> >> Poisson. > >> >> >>> >> > >> >> >>> > > >> >> >>> > I should mention that all the polynomial fits will give you the > >> >> >>> > same > >> >> >>> > results, but the Chebyshev fits are more numerically stable. > The > >> >> >>> > general > >> >> >>> > approach is to overfit, i.e., use more polynomials than needed > >> >> >>> > and > >> >> >>> > then > >> >> >>> > truncate the series resulting in a faux min/max approximation. > >> >> >>> > Unlike > >> >> >>> > power > >> >> >>> > series, the coefficients of the Cheybshev series will tend to > >> >> >>> > decrease > >> >> >>> > rapidly at some point. > >> >> >>> > >> >> >>> I think I might have still something wrong with the way I use the > >> >> >>> scipy.special polynomials > >> >> >>> > >> >> >>> for a large sample size with 10000 observations, the nice graph > is > >> >> >>> fourier with 20 elements, the second (not so nice) is with > >> >> >>> scipy.special.chebyt with 500 polynomials. The graph for 20 > >> >> >>> Chebychev > >> >> >>> polynomials looks very similar > >> >> >>> > >> >> >>> Chebychev doesn't want to get low enough to adjust to the low > part. > >> >> >>> > >> >> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) > >> >> >>> > >> >> >> > >> >> >> That certainly doesn't look right. Could you mail me the data > >> >> >> offline? > >> >> >> Also, > >> >> >> what are you fitting, the histogram, the cdf, or...? > >> >> > > >> >> > The numbers are generated (by a function in scikits.statsmodels). > >> >> > It's > >> >> > all in the script that I posted, except I keep changing things. > >> >> > > >> >> > I'm not fitting anything directly. > >> >> > There is supposed to be a closed form expression, the estimated > >> >> > coefficient of each polynomial is just the mean of that polynomial > >> >> > evaluated at the data. The theory and the descriptions sounded > easy, > >> >> > but unfortunately it didn't work out. > >> >> > > >> >> > I was just hoping to get lucky and that I'm able to skip the small > >> >> > print. > >> >> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract > >> >> > got me started and it has the fourier case that works. 
> >> >> > > >> >> > There are lots of older papers that I only skimmed, but I should be > >> >> > able to work out the Hermite case before going back to the general > >> >> > case again with arbitrary orthogonal polynomial bases. (Initially I > >> >> > wanted to ignore the Hermite bases because gaussian_kde works well > in > >> >> > that case.) > >> >> > >> >> Just another graph before stopping with this > >> >> > >> >> chebyt work if I cheat (rescale at the end > >> >> > >> >> f_hat = (f_hat - f_hat.min()) > >> >> fint2 = integrate.trapz(f_hat, grid) > >> >> f_hat /= fint2 > >> >> > >> >> graph is with chebyt with 30 polynomials after shifting and scaling, > >> >> 20 polynomials looks also good. > >> >> > >> >> (hunting for the missing scaling term is left for some other day) > >> >> > >> >> In any case, there's a recipe for non-parametric density estimation > >> >> with compact support. > >> >> > >> > > >> > Ah, now I see what is going on -- monte carlo integration to get the > >> > expansion of the pdf in terms of orthogonal polynomials. So yes, I > think > >> > Lagrange polynomials are probably the ones to use unless you use the > >> > weight > >> > in the integral. Note that 1.6 also has the Hermite and Laguerre > >> > polynomials. But it seems that for these things it would also be > >> > desirable > >> > to have the normalization constants. > >> > > > > Heh, I meant Legendre. > > > >> > >> It's also intended to be used as a density estimator for real data, > >> but the idea is the same. > >> > >> My main problem seems to be that I haven't figured out the > >> normalization (constants) that I'm supposed to use. > > > > The normalization for Legendre functions over an interval of length L > would > > be (2/L)*2/(2*i + 1)*(1/n), where i is the degree of the polynomial, and > n > > is the number of samples. > > > >> > >> Given the Wikipedia page that Andy pointed out, I added an additional > >> weighting term. I still need to shift and rescale, but the graphs look > >> nice. (the last one, promised) chebyt with 30 polynomials on sample > >> with 10000 observations. > >> > >> (I don't know yet if I want to use the new polynomials in numpy even > >> thought they are much nicer. I just gave up trying to get statsmodels > >> compatible with numpy 1.3 because I'm using the polynomials introduced > >> in 1.4) > >> > >> (I will also look at Andy's pointer to Gram?Schmidt, because I > >> realized that for nonlinear trend estimation I want orthogonal for > >> discretely evaluated points instead of for the integral.) > >> > > > > QR is Gram-Schmidt. You can use any polynomial basis for the columns. > There > > is also a method due to Moler to compute the polynomials using the fact > that > > they satisfy a three term recursion, but QR is simpler. > > > > Anne suggested some time ago that I should include the Gauss points and > > weights in the polynomial classes and I've come to the conclusion she was > > right. Looks like I should include the normalization factors also. > > Since I'm going through this backwards, program first, figure out what > I'm doing, I always appreciate any of these premade helper functions. > > A mixture between trial and error and reading Wikipedia. 
This is the > reweighted chebychev T polynomial, orthonormal with respect to uniform > integration, > > for i,p in enumerate(polys[:5]): > for j,p2 in enumerate(polys[:5]): > print i,j,integrate.quad(lambda x: p(x)*p2(x), -1,1)[0] > > And now the rescaling essentially doesn't have an effect anymore > f_hat.min() 0.00393264959543 > fint2 1.00082015654 integral of estimated pdf > > > class ChtPoly(object): > > def __init__(self, order): > self.order = order > from scipy.special import chebyt > self.poly = chebyt(order) > > def __call__(self, x): > if self.order == 0: > return np.ones_like(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) > else: > return self.poly(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) * > np.sqrt(2) > > If I can get the same for some of the other polynomials, I would be happy. > > The virtue of the Legendre polynomials here is that you don't need the weight to make them orthogonal. For the Chebyshev I'd be tempted to have a separate weight function w(x) = 1/(1 - x**2)**.5 and do (1/n)\sum_i T_n(x_i)*w(x_i), giving a result in a normal polynomial series. The additional normalization factor due to the interval would then be 2/L in addition to the terms you already have. The singularity in the weight could be a problem so Legendre polynomials might be a safer choice. To simplify this a bit, in the Legendre case you can use legvander(x, deg=n).sum(0) and scale the results with the factors in the preceding post to get the coefficients. The legvander function is a rather small one and you could pull it out and modify it to do the sum on the fly if space is a problem. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon May 16 22:24:25 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 May 2011 20:24:25 -0600 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 8:16 PM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 7:47 PM, wrote: > >> On Mon, May 16, 2011 at 9:29 PM, Charles R Harris >> wrote: >> > >> > >> > On Mon, May 16, 2011 at 6:12 PM, wrote: >> >> >> >> On Mon, May 16, 2011 at 5:58 PM, Charles R Harris >> >> wrote: >> >> > >> >> > >> >> > On Mon, May 16, 2011 at 12:40 PM, wrote: >> >> >> >> >> >> On Mon, May 16, 2011 at 12:47 PM, wrote: >> >> >> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris >> >> >> > wrote: >> >> >> >> >> >> >> >> >> >> >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: >> >> >> >>> >> >> >> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris >> >> >> >>> wrote: >> >> >> >>> > >> >> >> >>> > >> >> >> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris >> >> >> >>> > wrote: >> >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> On Mon, May 16, 2011 at 9:11 AM, >> wrote: >> >> >> >>> >>> >> >> >> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >> >> >> >>> >>> wrote: >> >> >> >>> >>> > >> >> >> >>> >>> > >> >> >> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, >> >> >> >>> >>> > wrote: >> >> >> >>> >>> >> >> >> >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >> >> >> >>> >>> >> >> >> >> >>> >>> >> wrote: >> >> >> >>> >>> >> > Hi, >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > Might this be what you want: >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > The first eleven probabilists' Hermite polynomials are: >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > ... >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > My chromium browser does not seem to paste pngs. 
>> Anyway, >> >> >> >>> >>> >> > check >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >> >> >> >>> >>> >> > >> >> >> >>> >>> >> > and you'll see that the first polynomial is 1, the >> second >> >> >> >>> >>> >> > x, >> >> >> >>> >>> >> > and >> >> >> >>> >>> >> > so >> >> >> >>> >>> >> > forth. From my courses on quantum mechanics I recall >> that >> >> >> >>> >>> >> > these >> >> >> >>> >>> >> > polynomials are, with respect to some weight function, >> >> >> >>> >>> >> > orthogonal. >> >> >> >>> >>> >> >> >> >> >>> >>> >> Thanks, I haven't looked at that yet, we should add >> >> >> >>> >>> >> wikipedia >> >> >> >>> >>> >> to >> >> >> >>> >>> >> the >> >> >> >>> >>> >> scipy.special docs. >> >> >> >>> >>> >> >> >> >> >>> >>> >> However, I would like to change the last part "with >> respect >> >> >> >>> >>> >> to >> >> >> >>> >>> >> some >> >> >> >>> >>> >> weight function" >> >> >> >>> >>> >> >> >> >> >>> >>> >> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >> >> >> >>> >>> >> >> >> >> >>> >>> >> Instead of Gaussian weights I would like uniform weights >> on >> >> >> >>> >>> >> bounded >> >> >> >>> >>> >> support. And I have never seen anything about changing >> the >> >> >> >>> >>> >> weight >> >> >> >>> >>> >> function for the orthogonal basis of these kind of >> >> >> >>> >>> >> polynomials. >> >> >> >>> >>> >> >> >> >> >>> >>> > >> >> >> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They >> are >> >> >> >>> >>> > orthogonal >> >> >> >>> >>> > on >> >> >> >>> >>> > [-1,1] as has been mentioned, but can be mapped to other >> >> >> >>> >>> > domains. >> >> >> >>> >>> > For >> >> >> >>> >>> > example >> >> >> >>> >>> > >> >> >> >>> >>> > In [1]: from numpy.polynomial import Legendre as L >> >> >> >>> >>> > >> >> >> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >> >> >> >>> >>> > domain=[0,1]).linspace()) >> >> >> >>> >>> > ...: >> >> >> >>> >>> > >> >> >> >>> >>> > produces the attached plots. >> >> >> >>> >>> >> >> >> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. >> >> >> >>> >>> >> >> >> >>> >>> > >> >> >> >>> >>> > >> >> >> >>> >>> > Chuck >> >> >> >>> >>> > >> >> >> >>> >>> >> >> >> >>> >>> >> >> >> >>> >>> as a first application for orthogonal polynomials I was >> trying >> >> >> >>> >>> to >> >> >> >>> >>> get >> >> >> >>> >>> an estimate for a density, but I haven't figured out the >> >> >> >>> >>> weighting >> >> >> >>> >>> yet. >> >> >> >>> >>> >> >> >> >>> >>> Fourier polynomials work better for this. >> >> >> >>> >>> >> >> >> >>> >> >> >> >> >>> >> You might want to try Chebyshev then, the Cheybyshev >> >> >> >>> >> polynomialas >> >> >> >>> >> are >> >> >> >>> >> essentially cosines and will handle the ends better. >> Weighting >> >> >> >>> >> might >> >> >> >>> >> also >> >> >> >>> >> help, as I expect the distribution of the errors are somewhat >> >> >> >>> >> Poisson. >> >> >> >>> >> >> >> >> >>> > >> >> >> >>> > I should mention that all the polynomial fits will give you >> the >> >> >> >>> > same >> >> >> >>> > results, but the Chebyshev fits are more numerically stable. >> The >> >> >> >>> > general >> >> >> >>> > approach is to overfit, i.e., use more polynomials than needed >> >> >> >>> > and >> >> >> >>> > then >> >> >> >>> > truncate the series resulting in a faux min/max approximation. 
>> >> >> >>> > Unlike >> >> >> >>> > power >> >> >> >>> > series, the coefficients of the Cheybshev series will tend to >> >> >> >>> > decrease >> >> >> >>> > rapidly at some point. >> >> >> >>> >> >> >> >>> I think I might have still something wrong with the way I use >> the >> >> >> >>> scipy.special polynomials >> >> >> >>> >> >> >> >>> for a large sample size with 10000 observations, the nice graph >> is >> >> >> >>> fourier with 20 elements, the second (not so nice) is with >> >> >> >>> scipy.special.chebyt with 500 polynomials. The graph for 20 >> >> >> >>> Chebychev >> >> >> >>> polynomials looks very similar >> >> >> >>> >> >> >> >>> Chebychev doesn't want to get low enough to adjust to the low >> part. >> >> >> >>> >> >> >> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) >> >> >> >>> >> >> >> >> >> >> >> >> That certainly doesn't look right. Could you mail me the data >> >> >> >> offline? >> >> >> >> Also, >> >> >> >> what are you fitting, the histogram, the cdf, or...? >> >> >> > >> >> >> > The numbers are generated (by a function in scikits.statsmodels). >> >> >> > It's >> >> >> > all in the script that I posted, except I keep changing things. >> >> >> > >> >> >> > I'm not fitting anything directly. >> >> >> > There is supposed to be a closed form expression, the estimated >> >> >> > coefficient of each polynomial is just the mean of that polynomial >> >> >> > evaluated at the data. The theory and the descriptions sounded >> easy, >> >> >> > but unfortunately it didn't work out. >> >> >> > >> >> >> > I was just hoping to get lucky and that I'm able to skip the small >> >> >> > print. >> >> >> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract >> >> >> > got me started and it has the fourier case that works. >> >> >> > >> >> >> > There are lots of older papers that I only skimmed, but I should >> be >> >> >> > able to work out the Hermite case before going back to the general >> >> >> > case again with arbitrary orthogonal polynomial bases. (Initially >> I >> >> >> > wanted to ignore the Hermite bases because gaussian_kde works well >> in >> >> >> > that case.) >> >> >> >> >> >> Just another graph before stopping with this >> >> >> >> >> >> chebyt work if I cheat (rescale at the end >> >> >> >> >> >> f_hat = (f_hat - f_hat.min()) >> >> >> fint2 = integrate.trapz(f_hat, grid) >> >> >> f_hat /= fint2 >> >> >> >> >> >> graph is with chebyt with 30 polynomials after shifting and scaling, >> >> >> 20 polynomials looks also good. >> >> >> >> >> >> (hunting for the missing scaling term is left for some other day) >> >> >> >> >> >> In any case, there's a recipe for non-parametric density estimation >> >> >> with compact support. >> >> >> >> >> > >> >> > Ah, now I see what is going on -- monte carlo integration to get the >> >> > expansion of the pdf in terms of orthogonal polynomials. So yes, I >> think >> >> > Lagrange polynomials are probably the ones to use unless you use the >> >> > weight >> >> > in the integral. Note that 1.6 also has the Hermite and Laguerre >> >> > polynomials. But it seems that for these things it would also be >> >> > desirable >> >> > to have the normalization constants. >> >> >> > >> > Heh, I meant Legendre. >> > >> >> >> >> It's also intended to be used as a density estimator for real data, >> >> but the idea is the same. >> >> >> >> My main problem seems to be that I haven't figured out the >> >> normalization (constants) that I'm supposed to use. 
>> > >> > The normalization for Legendre functions over an interval of length L >> would >> > be (2/L)*2/(2*i + 1)*(1/n), where i is the degree of the polynomial, and >> n >> > is the number of samples. >> > >> >> >> >> Given the Wikipedia page that Andy pointed out, I added an additional >> >> weighting term. I still need to shift and rescale, but the graphs look >> >> nice. (the last one, promised) chebyt with 30 polynomials on sample >> >> with 10000 observations. >> >> >> >> (I don't know yet if I want to use the new polynomials in numpy even >> >> thought they are much nicer. I just gave up trying to get statsmodels >> >> compatible with numpy 1.3 because I'm using the polynomials introduced >> >> in 1.4) >> >> >> >> (I will also look at Andy's pointer to Gram?Schmidt, because I >> >> realized that for nonlinear trend estimation I want orthogonal for >> >> discretely evaluated points instead of for the integral.) >> >> >> > >> > QR is Gram-Schmidt. You can use any polynomial basis for the columns. >> There >> > is also a method due to Moler to compute the polynomials using the fact >> that >> > they satisfy a three term recursion, but QR is simpler. >> > >> > Anne suggested some time ago that I should include the Gauss points and >> > weights in the polynomial classes and I've come to the conclusion she >> was >> > right. Looks like I should include the normalization factors also. >> >> Since I'm going through this backwards, program first, figure out what >> I'm doing, I always appreciate any of these premade helper functions. >> >> A mixture between trial and error and reading Wikipedia. This is the >> reweighted chebychev T polynomial, orthonormal with respect to uniform >> integration, >> >> for i,p in enumerate(polys[:5]): >> for j,p2 in enumerate(polys[:5]): >> print i,j,integrate.quad(lambda x: p(x)*p2(x), -1,1)[0] >> > >> And now the rescaling essentially doesn't have an effect anymore >> f_hat.min() 0.00393264959543 >> fint2 1.00082015654 integral of estimated pdf >> >> >> class ChtPoly(object): >> >> def __init__(self, order): >> self.order = order >> from scipy.special import chebyt >> self.poly = chebyt(order) >> >> def __call__(self, x): >> if self.order == 0: >> return np.ones_like(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) >> else: >> return self.poly(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) * >> np.sqrt(2) >> >> If I can get the same for some of the other polynomials, I would be happy. >> >> > The virtue of the Legendre polynomials here is that you don't need the > weight to make them orthogonal. For the Chebyshev I'd be tempted to have a > separate weight function w(x) = 1/(1 - x**2)**.5 and do (1/n)\sum_i > T_n(x_i)*w(x_i), giving a result in a normal polynomial series. The > additional normalization factor due to the interval would then be 2/L in > addition to the terms you already have. The singularity in the weight could > be a problem so Legendre polynomials might be a safer choice. > > To simplify this a bit, in the Legendre case you can use legvander(x, > deg=n).sum(0) and scale the results with the factors in the preceding post > to get the coefficients. The legvander function is a rather small one and > you could pull it out and modify it to do the sum on the fly if space is a > problem. > > Blush,brown paper bag in order here. Since you are already mapping the data points into the relevant interval you can forget the interval length. Just use 2/(2*j + 1) as the scaling factor for the Legendre functions. 
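To make that concrete, a minimal sketch of the recipe (only a sketch: it assumes numpy >= 1.6 for numpy.polynomial.legendre, a sample already mapped into [-1, 1], and an arbitrary degree and Beta test sample for illustration):

import numpy as np
from numpy.polynomial import Legendre
from numpy.polynomial.legendre import legvander

def legendre_density(x, deg=20):
    # column j of legvander(x, deg) holds P_j at the sample points,
    # so summing over axis 0 gives sum_i P_j(x_i) for j = 0..deg
    psums = legvander(x, deg).sum(axis=0)
    # divide by the sample size and by the norm int P_j**2 dx = 2/(2*j + 1)
    j = np.arange(deg + 1)
    coef = psums * (2. * j + 1.) / 2. / len(x)
    return Legendre(coef)

# purely illustrative sample with compact support, mapped to [-1, 1]
sample = 2. * np.random.beta(2., 3., size=10000) - 1.
f_hat = legendre_density(sample, deg=15)
grid, vals = f_hat.linspace()
print np.trapz(vals, grid)   # should be close to 1

As with the Chebyshev fits discussed above, the degree sets the usual smoothing versus overfitting trade-off.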
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Tue May 17 01:11:58 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Tue, 17 May 2011 07:11:58 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <4DD1D7BE.7020409@uci.edu> References: <20110517030704.381B.B1C76292@gmail.com> <4DD1D7BE.7020409@uci.edu> Message-ID: <20110517071157.40C8.B1C76292@gmail.com> > Try the Intel MKL Link Line Advisor > > > I use the following multi-thread static libraries with MKL 11.1: > mkl_lapack95,mkl_blas95,mkl_intel_c,mkl_intel_thread,mkl_core,libiomp5md,libifportmd > Thanks for the tip, Christoph After that install was 'successful' but not the results. I installed this on P4 with 768 MB RAM (ifort 11.1, MSVC9). This PC is just one of many nodes I plan to use and others are similarly single CPU P4 with 1GB RAM Running: import numpy as np A=np.ones((1000,1000)) B=np.ones((1000,1000)) %timeit np.dot(A, B) In IPython, gives this results: Cygwin/ATLAS : 1 loops, best of 3: 843 ms per loop Windows/ATLAS* : 1 loops, best of 3: 1.29 s per loop Windows/MKL : 1 loops, best of 3: 2.37 s per loop *from offical binary installer I then run np.test() (with MKL build numpy) and there were indeed errors :( Now what? :) Running unit tests for numpy NumPy version 1.6.0 NumPy is installed in C:\Python26\lib\site-packages\numpy Python version 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit (Intel)] nose version 1.0.0 ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ ..............................................................................................................C:\Python2 6\lib\site-packages\numpy\core\numeric.py:1920: RuntimeWarning: invalid value encountered in absolute return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) C:\Python26\lib\site-packages\numpy\core\numeric.py:1920: RuntimeWarning: invalid value encountered in less_equal return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) ...........................................................................K............................................ ........................................................................................................................ ........................................................................................................................ ........C:\Python26\lib\site-packages\numpy\core\tests\test_regression.py:1017: RuntimeWarning: invalid value encountere d in sign have = np.sign(C) ....................................................................................................................K... 
..........C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:539: RuntimeWarning: invalid value encountered in fmax assert_equal(np.fmax(arg1, arg2), out) .C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:531: RuntimeWarning: invalid value encountered in fmax assert_equal(np.fmax(arg1, arg2), out) ...C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:581: RuntimeWarning: invalid value encountered in fmin assert_equal(np.fmin(arg1, arg2), out) .C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:573: RuntimeWarning: invalid value encountered in fmin assert_equal(np.fmin(arg1, arg2), out) .............C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:240: RuntimeWarning: invalid value encountered in logaddexp assert np.isnan(np.logaddexp(np.nan, np.inf)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:241: RuntimeWarning: invalid value encountered in logaddexp assert np.isnan(np.logaddexp(np.inf, np.nan)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:242: RuntimeWarning: invalid value encountered in logaddexp assert np.isnan(np.logaddexp(np.nan, 0)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:243: RuntimeWarning: invalid value encountered in logaddexp assert np.isnan(np.logaddexp(0, np.nan)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:244: RuntimeWarning: invalid value encountered in logaddexp assert np.isnan(np.logaddexp(np.nan, np.nan)) ....C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:174: RuntimeWarning: invalid value encountered in logad dexp2 assert np.isnan(np.logaddexp2(np.nan, np.inf)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:175: RuntimeWarning: invalid value encountered in logaddexp 2 assert np.isnan(np.logaddexp2(np.inf, np.nan)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:176: RuntimeWarning: invalid value encountered in logaddexp 2 assert np.isnan(np.logaddexp2(np.nan, 0)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:177: RuntimeWarning: invalid value encountered in logaddexp 2 assert np.isnan(np.logaddexp2(0, np.nan)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:178: RuntimeWarning: invalid value encountered in logaddexp 2 assert np.isnan(np.logaddexp2(np.nan, np.nan)) .C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:455: RuntimeWarning: invalid value encountered in maximum assert_equal(np.maximum(arg1, arg2), out) .C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:447: RuntimeWarning: invalid value encountered in maximum assert_equal(np.maximum(arg1, arg2), out) ...C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:497: RuntimeWarning: invalid value encountered in minimu m assert_equal(np.minimum(arg1, arg2), out) .C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:489: RuntimeWarning: invalid value encountered in minimum assert_equal(np.minimum(arg1, arg2), out) ......................K..K....C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:1203: RuntimeWarning: invalid value encountered in less assert_equal(x < y, False, err_msg="%r < %r" % (x, y)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:1204: RuntimeWarning: invalid value encountered in greater assert_equal(x > y, False, err_msg="%r > %r" % (x, y)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:1205: RuntimeWarning: invalid value encountered in less_equ al assert_equal(x <= y, False, err_msg="%r <= %r" % (x, y)) 
C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:1206: RuntimeWarning: invalid value encountered in greater_ equal assert_equal(x >= y, False, err_msg="%r >= %r" % (x, y)) C:\Python26\lib\site-packages\numpy\core\tests\test_umath.py:1207: RuntimeWarning: invalid value encountered in equal assert_equal(x == y, False, err_msg="%r == %r" % (x, y)) ..........................K...SK.S.......S.............................................................................. ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ .....................................................................C:\Python26\lib\site-packages\numpy\core\fromnumeri c.py:2278: RuntimeWarning: invalid value encountered in rint return round(decimals, out) ........................................................................................................................ ...C:\Python26\lib\site-packages\numpy\lib\npyio.py:1682: RuntimeWarning: invalid value encountered in equal outputmask[name] |= (output[name] == mval) .............................K.........K................................................................................ ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ .........S.............................................................................................................. ........................................................................................................................ .....................................................................C:\Python26\lib\site-packages\numpy\ma\core.py:796: RuntimeWarning: invalid value encountered in less return umath.less(x, self.critical_value) ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ .................. ---------------------------------------------------------------------- Ran 3345 tests in 89.437s OK (KNOWNFAIL=8, SKIP=4) Out[7]: From yury at shurup.com Tue May 17 03:02:34 2011 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Tue, 17 May 2011 09:02:34 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110517071157.40C8.B1C76292@gmail.com> References: <20110517030704.381B.B1C76292@gmail.com> <4DD1D7BE.7020409@uci.edu> <20110517071157.40C8.B1C76292@gmail.com> Message-ID: <1305615754.2609.1.camel@newpride> On Tue, 2011-05-17 at 07:11 +0200, Klonuo Umom wrote: > I then run np.test() (with MKL build numpy) and there were indeed errors :( > Now what? :) There are no errors, read again: > Running unit tests for numpy > NumPy version 1.6.0 > NumPy is installed in C:\Python26\lib\site-packages\numpy > Python version 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit (Intel)] > nose version 1.0.0 > Ran 3345 tests in 89.437s > > OK (KNOWNFAIL=8, SKIP=4) > Out[7]: Tests ran OK, there were 8 tests which were known to fail and 4 tests were skipped, in total 3345 tests were successfully run with zero errors or failures. -- Sincerely yours, Yury V. Zaytsev From cgohlke at uci.edu Tue May 17 03:04:29 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 17 May 2011 00:04:29 -0700 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110517071157.40C8.B1C76292@gmail.com> References: <20110517030704.381B.B1C76292@gmail.com> <4DD1D7BE.7020409@uci.edu> <20110517071157.40C8.B1C76292@gmail.com> Message-ID: <4DD21DFD.5090607@uci.edu> On 5/16/2011 10:11 PM, Klonuo Umom wrote: >> Try the Intel MKL Link Line Advisor >> >> >> I use the following multi-thread static libraries with MKL 11.1: >> mkl_lapack95,mkl_blas95,mkl_intel_c,mkl_intel_thread,mkl_core,libiomp5md,libifportmd >> > > Thanks for the tip, Christoph > After that install was 'successful' but not the results. > > I installed this on P4 with 768 MB RAM (ifort 11.1, MSVC9). This PC is just one > of many nodes I plan to use and others are similarly single CPU P4 with 1GB RAM > > Running: > import numpy as np > A=np.ones((1000,1000)) > B=np.ones((1000,1000)) > %timeit np.dot(A, B) > > In IPython, gives this results: > > Cygwin/ATLAS : 1 loops, best of 3: 843 ms per loop > Windows/ATLAS* : 1 loops, best of 3: 1.29 s per loop > Windows/MKL : 1 loops, best of 3: 2.37 s per loop > > *from offical binary installer > > > I then run np.test() (with MKL build numpy) and there were indeed errors :( > > Now what? :) > > > Running unit tests for numpy > NumPy version 1.6.0 > NumPy is installed in C:\Python26\lib\site-packages\numpy > Python version 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit (Intel)] > nose version 1.0.0 > ---------------------------------------------------------------------- > Ran 3345 tests in 89.437s > > OK (KNOWNFAIL=8, SKIP=4) > Out[7]: > Which errors are you referring to? np.test() passes without errors and failures. FWIW, on our systems numpy with MKL 11.1 is 2-3x faster than the official numpy distribution in the simple np.dot test. 
2.6 GHz P4: Windows/MKL: 485 ms Windows/ATLAS: 1.06 s Core i7: Windows/MKL: 46 ms Windows/ATLAS: 157 ms Christoph From ralf.gommers at googlemail.com Tue May 17 03:07:46 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 17 May 2011 09:07:46 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <4DD21DFD.5090607@uci.edu> References: <20110517030704.381B.B1C76292@gmail.com> <4DD1D7BE.7020409@uci.edu> <20110517071157.40C8.B1C76292@gmail.com> <4DD21DFD.5090607@uci.edu> Message-ID: On Tue, May 17, 2011 at 9:04 AM, Christoph Gohlke wrote: > > > On 5/16/2011 10:11 PM, Klonuo Umom wrote: > >> Try the Intel MKL Link Line Advisor > >> > >> > >> I use the following multi-thread static libraries with MKL 11.1: > >> > mkl_lapack95,mkl_blas95,mkl_intel_c,mkl_intel_thread,mkl_core,libiomp5md,libifportmd > >> > > > > Thanks for the tip, Christoph > > After that install was 'successful' but not the results. > > > > I installed this on P4 with 768 MB RAM (ifort 11.1, MSVC9). This PC is > just one > > of many nodes I plan to use and others are similarly single CPU P4 with > 1GB RAM > > > > Running: > > import numpy as np > > A=np.ones((1000,1000)) > > B=np.ones((1000,1000)) > > %timeit np.dot(A, B) > > > > In IPython, gives this results: > > > > Cygwin/ATLAS : 1 loops, best of 3: 843 ms per loop > > Windows/ATLAS* : 1 loops, best of 3: 1.29 s per loop > > Windows/MKL : 1 loops, best of 3: 2.37 s per loop > > > > *from offical binary installer > > > > > > I then run np.test() (with MKL build numpy) and there were indeed errors > :( > > > > Now what? :) > > > > > > Running unit tests for numpy > > NumPy version 1.6.0 > > NumPy is installed in C:\Python26\lib\site-packages\numpy > > Python version 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 > bit (Intel)] > > nose version 1.0.0 > > > > > ---------------------------------------------------------------------- > > Ran 3345 tests in 89.437s > > > > OK (KNOWNFAIL=8, SKIP=4) > > Out[7]: > > > > Which errors are you referring to? np.test() passes without errors and > failures. > > I'm guessing the warnings spit out by the test suite. Do you see that too? The output from gcc builds is much cleaner. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Tue May 17 03:16:42 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Tue, 17 May 2011 09:16:42 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <4DD21DFD.5090607@uci.edu> References: <20110517071157.40C8.B1C76292@gmail.com> <4DD21DFD.5090607@uci.edu> Message-ID: <20110517091640.4527.B1C76292@gmail.com> > Which errors are you referring to? np.test() passes without errors and > failures. > > FWIW, on our systems numpy with MKL 11.1 is 2-3x faster than the > official numpy distribution in the simple np.dot test. > > 2.6 GHz P4: > Windows/MKL: 485 ms > Windows/ATLAS: 1.06 s > > Core i7: > Windows/MKL: 46 ms > Windows/ATLAS: 157 ms Sorry my fault, I misinterpreted warnings perhaps because I was surprised by the results If I test some linalg function then I see MKL is faster just like you posted, but what is wrong with my above test? I tried other simple tests found around, and results are as expected. MKL is couple times faster. 
For example 'numpy.linalg.eig(some_data)', 'numpy.linalg.svd(some_data)', etc From cgohlke at uci.edu Tue May 17 03:23:06 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 17 May 2011 00:23:06 -0700 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: References: <20110517030704.381B.B1C76292@gmail.com> <4DD1D7BE.7020409@uci.edu> <20110517071157.40C8.B1C76292@gmail.com> <4DD21DFD.5090607@uci.edu> Message-ID: <4DD2225A.4090102@uci.edu> On 5/17/2011 12:07 AM, Ralf Gommers wrote: > > > On Tue, May 17, 2011 at 9:04 AM, Christoph Gohlke > wrote: > > > > On 5/16/2011 10:11 PM, Klonuo Umom wrote: > > > Try the Intel MKL Link Line Advisor > > > > > > > > > > I use the following multi-thread static libraries with MKL 11.1: > > > > mkl_lapack95,mkl_blas95,mkl_intel_c,mkl_intel_thread,mkl_core,libiomp5md,libifportmd > > > > > > > Thanks for the tip, Christoph > > After that install was 'successful' but not the results. > > > > I installed this on P4 with 768 MB RAM (ifort 11.1, MSVC9). This > PC is just one > > of many nodes I plan to use and others are similarly single CPU P4 > with 1GB RAM > > > > Running: > > import numpy as np > > A=np.ones((1000,1000)) > > B=np.ones((1000,1000)) > > %timeit np.dot(A, B) > > > > In IPython, gives this results: > > > > Cygwin/ATLAS : 1 loops, best of 3: 843 ms per loop > > Windows/ATLAS* : 1 loops, best of 3: 1.29 s per loop > > Windows/MKL : 1 loops, best of 3: 2.37 s per loop > > > > *from offical binary installer > > > > > > I then run np.test() (with MKL build numpy) and there were indeed > errors :( > > > > Now what? :) > > > > > > Running unit tests for numpy > > NumPy version 1.6.0 > > NumPy is installed in C:\Python26\lib\site-packages\numpy > > Python version 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC > v.1500 32 bit (Intel)] > > nose version 1.0.0 > > > > > ---------------------------------------------------------------------- > > Ran 3345 tests in 89.437s > > > > OK (KNOWNFAIL=8, SKIP=4) > > Out[7]: > > > > Which errors are you referring to? np.test() passes without errors and > failures. > > I'm guessing the warnings spit out by the test suite. Do you see that > too? The output from gcc builds is much cleaner. > > Ralf > > I get the same warnings. The numpy 1.5.1 tests were free of warnings. Christoph From dave.hirschfeld at gmail.com Tue May 17 04:16:58 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Tue, 17 May 2011 08:16:58 +0000 (UTC) Subject: [SciPy-User] [Matplotlib-users] use matplotlib to produce mathathematical expression only References: <20110516132115.25140@gmx.net> <20110516142324.42990@gmx.net> Message-ID: > On Mon, May 16, 2011 at 08:21, Johannes Radinger gmx.at> wrote: > Hello, > > I want to produce a eps file of following mathematical expression: > r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p) *\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$' > > is it possible to somehow missuse matplotlib for that to produce only > the function without any other plot things? Or is there a better python > library within scipy? I don't want to install the complete latex libraries > just for producing this single eps file. 
> > /johannes > Before IPython's new display system (http://thread.gmane.org/gmane.comp.python.ipython.devel/5899) I used to use the following function to quickly visualise latex formulae: def eqview(latex_expr,fontsize=28,dpi=80): from matplotlib.figure import Figure from matplotlib.backends.backend_agg import RendererAgg from pylab import figtext, gcf, close, show, figure latex_expr = '$'+latex_expr+'$' fig = Figure() h = fig.text(0.5, 0.5, latex_expr, fontsize = fontsize, horizontalalignment = 'center', verticalalignment= 'center') bbox = h.get_window_extent(RendererAgg(15,15,dpi)) del fig figure(figsize=(1.1*bbox.width/dpi,1.25*bbox.height/dpi), dpi=dpi) h = figtext(0.5, 0.5, latex_expr, fontsize = fontsize, horizontalalignment = 'center', verticalalignment = 'center') fig = gcf() fig.set_facecolor('w') show() The call to show could easily be replaced by a call to fig.savefig. NB: I'm not a matplotlib guru so this may not be the most efficient implementaion, seemed to do the job though. HTH, Dave From JRadinger at gmx.at Tue May 17 06:15:07 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Tue, 17 May 2011 12:15:07 +0200 Subject: [SciPy-User] [Matplotlib-users] use matplotlib to produce mathathematical expression only In-Reply-To: References: <20110516132115.25140@gmx.net> <20110516142324.42990@gmx.net> Message-ID: <20110517101507.34820@gmx.net> -------- Original-Nachricht -------- > Datum: Tue, 17 May 2011 08:16:58 +0000 (UTC) > Von: Dave Hirschfeld > An: scipy-user at scipy.org > Betreff: Re: [SciPy-User] [Matplotlib-users] use matplotlib to produce mathathematical expression only > > On Mon, May 16, 2011 at 08:21, Johannes Radinger gmx.at> > wrote: > > Hello, > > > > I want to produce a eps file of following mathematical expression: > > > r'$F(x)=p*\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}+(1-p) > *\frac{1}{s1\sqrt{2\pi}}*e^{-\frac{1}{2}*(\frac{x-m}{s1})}$' > > > > is it possible to somehow missuse matplotlib for that to produce only > > the function without any other plot things? Or is there a better python > > library within scipy? I don't want to install the complete latex > libraries > > just for producing this single eps file. > > > > /johannes > > > > Before IPython's new display system > (http://thread.gmane.org/gmane.comp.python.ipython.devel/5899) I used to > use > the following function to quickly visualise latex formulae: > > def eqview(latex_expr,fontsize=28,dpi=80): > from matplotlib.figure import Figure > from matplotlib.backends.backend_agg import RendererAgg > from pylab import figtext, gcf, close, show, figure > latex_expr = '$'+latex_expr+'$' > fig = Figure() > h = fig.text(0.5, 0.5, latex_expr, > fontsize = fontsize, > horizontalalignment = 'center', > verticalalignment= 'center') > bbox = h.get_window_extent(RendererAgg(15,15,dpi)) > del fig > figure(figsize=(1.1*bbox.width/dpi,1.25*bbox.height/dpi), dpi=dpi) > h = figtext(0.5, 0.5, latex_expr, > fontsize = fontsize, > horizontalalignment = 'center', > verticalalignment = 'center') > fig = gcf() > fig.set_facecolor('w') > show() > > The call to show could easily be replaced by a call to fig.savefig. > > NB: I'm not a matplotlib guru so this may not be the most efficient > implementaion, seemed to do the job though. > > HTH, > Dave Thank you Dave that works perfectly!! 
Just need to use double \\ in the latex_expr

/johannes

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
NEU: FreePhone - kostenlos mobil telefonieren und surfen! Jetzt informieren: http://www.gmx.net/de/go/freephone

From josef.pktd at gmail.com Tue May 17 06:43:36 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 17 May 2011 06:43:36 -0400
Subject: [SciPy-User] wavelets for function approximation ?
Message-ID: 

Can the wavelets in scipy.signal be used for function approximation?
Does anyone have a recipe?
Josef From wkerzendorf at googlemail.com Tue May 17 10:13:06 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Wed, 18 May 2011 00:13:06 +1000 Subject: [SciPy-User] array algorithm Message-ID: <4DD28272.5090806@gmail.com> Hello, I'm wondering what is the most efficient way to implement an algorithm on a 2D-array: If the array item is larger than the average of its four neighbours plus 3 times the sqrt of that average, set it to the average. Thanks Wolfgang From cweisiger at msg.ucsf.edu Tue May 17 11:24:07 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Tue, 17 May 2011 08:24:07 -0700 Subject: [SciPy-User] array algorithm In-Reply-To: <4DD28272.5090806@gmail.com> References: <4DD28272.5090806@gmail.com> Message-ID: On Tue, May 17, 2011 at 7:13 AM, Wolfgang Kerzendorf < wkerzendorf at googlemail.com> wrote: > Hello, > > I'm wondering what is the most efficient way to implement an algorithm > on a 2D-array: > > If the array item is larger than the average of its four neighbours plus > 3 times the sqrt of that average, set it to the average. > > There's always generic_filter: http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.generic_filter.html#scipy.ndimage.filters.generic_filter I'm not aware of anything more specific that would solve your problem, though given my experience with the scipy library that means little. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Tue May 17 11:47:22 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 17 May 2011 11:47:22 -0400 Subject: [SciPy-User] array algorithm In-Reply-To: References: <4DD28272.5090806@gmail.com> Message-ID: <045930E4-77E1-4F61-9057-E574A3F9B030@yale.edu> > On Tue, May 17, 2011 at 7:13 AM, Wolfgang Kerzendorf wrote: > > I'm wondering what is the most efficient way to implement an algorithm > on a 2D-array: > > If the array item is larger than the average of its four neighbours plus > 3 times the sqrt of that average, set it to the average. > weights = numpy.array([[0,0.25,0],[0.25,0,0.25],[0,0.25,0]]) average = scipy.ndimage.convolve(input.astype(float), weights) threshold = average + 3*numpy.sqrt(average) mask = input > threshold input[mask] = average[mask] Note that convolve has various boundary modes (reflect, constant, nearest, etc). Read the docstring. 
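For completeness, a self-contained version of the snippet above (the function name, the Poisson test data and the 'reflect' boundary mode are just example choices):

import numpy as np
import scipy.ndimage

def clip_hot_pixels(img):
    # average of the four direct neighbours of each element
    weights = np.array([[0., 0.25, 0.], [0.25, 0., 0.25], [0., 0.25, 0.]])
    average = scipy.ndimage.convolve(img.astype(float), weights, mode='reflect')
    # replace values exceeding the neighbour average plus 3*sqrt(average)
    out = img.astype(float)
    mask = out > average + 3. * np.sqrt(average)
    out[mask] = average[mask]
    return out

img = np.random.poisson(100., size=(64, 64)).astype(float)
img[10, 10] = 1e4   # artificial outlier
print clip_hot_pixels(img)[10, 10]   # pulled back to roughly 100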
Zach From klonuo at gmail.com Tue May 17 16:08:47 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Tue, 17 May 2011 22:08:47 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110517071157.40C8.B1C76292@gmail.com> References: <4DD1D7BE.7020409@uci.edu> <20110517071157.40C8.B1C76292@gmail.com> Message-ID: <20110517220844.079F.B1C76292@gmail.com> > In IPython, gives this results: > > Cygwin/ATLAS : 1 loops, best of 3: 843 ms per loop > Windows/ATLAS* : 1 loops, best of 3: 1.29 s per loop > Windows/MKL : 1 loops, best of 3: 2.37 s per loop FYI, in Octave 3.2.4 same test took 31 ms - 27 times faster then above best Here are also some other simple linalg tests on same matrix (in seconds): | fun | ATLAS | MKL | OCTAVE | |-----|----------|----------|-----------| | svd | 30.9 | 14.3 | 6.844 | | inv | 7.41 | 2.74 | 1.906 | | eig | 116 | 25.2 | 45.05 | | det | 2.55 | 0.855 | 0.5469 | From ralf.gommers at googlemail.com Tue May 17 17:22:14 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 17 May 2011 23:22:14 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110517220844.079F.B1C76292@gmail.com> References: <4DD1D7BE.7020409@uci.edu> <20110517071157.40C8.B1C76292@gmail.com> <20110517220844.079F.B1C76292@gmail.com> Message-ID: On Tue, May 17, 2011 at 10:08 PM, Klonuo Umom wrote: > > In IPython, gives this results: > > > > Cygwin/ATLAS : 1 loops, best of 3: 843 ms per loop > > Windows/ATLAS* : 1 loops, best of 3: 1.29 s per loop > > Windows/MKL : 1 loops, best of 3: 2.37 s per loop > > FYI, in Octave 3.2.4 same test took 31 ms - 27 times faster then above best > > It would help if you gave the actual code you're running there. It is extremely unlikely that Octave is really 27 times faster than Numpy + ATLAS/MKL. Ralf Here are also some other simple linalg tests on same matrix (in seconds): > > | fun | ATLAS | MKL | OCTAVE | > |-----|----------|----------|-----------| > | svd | 30.9 | 14.3 | 6.844 | > | inv | 7.41 | 2.74 | 1.906 | > | eig | 116 | 25.2 | 45.05 | > | det | 2.55 | 0.855 | 0.5469 | > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Tue May 17 17:47:30 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Tue, 17 May 2011 23:47:30 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: References: <20110517220844.079F.B1C76292@gmail.com> Message-ID: <20110517234729.07A5.B1C76292@gmail.com> > It would help if you gave the actual code you're running there. It is > extremely unlikely that Octave is really 27 times faster than Numpy + > ATLAS/MKL. I posted it in my previous mail. Here is basically same procedure: ======================================= import numpy as np from numpy.random import random a = random((1000, 1000)) b = random((1000, 1000)) %timeit np.dot(a, b) --------------------------------------- Octave code: ======================================= a = rand(1000, 1000); b = rand(1000, 1000); tic; dot(a,b); toc --------------------------------------- And it just here, where MKL build numpy, behaves unexpectedly slow. 
Or I'm making some mistake I'm not aware of From charlesr.harris at gmail.com Tue May 17 18:06:16 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 17 May 2011 16:06:16 -0600 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110517234729.07A5.B1C76292@gmail.com> References: <20110517220844.079F.B1C76292@gmail.com> <20110517234729.07A5.B1C76292@gmail.com> Message-ID: On Tue, May 17, 2011 at 3:47 PM, Klonuo Umom wrote: > > It would help if you gave the actual code you're running there. It is > > extremely unlikely that Octave is really 27 times faster than Numpy + > > ATLAS/MKL. > > I posted it in my previous mail. Here is basically same procedure: > > ======================================= > import numpy as np > from numpy.random import random > > a = random((1000, 1000)) > b = random((1000, 1000)) > %timeit np.dot(a, b) > --------------------------------------- > > Octave code: > ======================================= > a = rand(1000, 1000); > b = rand(1000, 1000); > tic; dot(a,b); toc > --------------------------------------- > > And it just here, where MKL build numpy, behaves unexpectedly slow. > Or I'm making some mistake I'm not aware of > > I get 67ms/loop with plain ATLAS. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Tue May 17 18:08:54 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 18 May 2011 00:08:54 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: References: <20110517234729.07A5.B1C76292@gmail.com> Message-ID: <20110518000852.07A8.B1C76292@gmail.com> > I get 67ms/loop with plain ATLAS. What about Octave? As I wrote, PC I run the test is very low-end From cgohlke at uci.edu Tue May 17 18:16:54 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 17 May 2011 15:16:54 -0700 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <20110518000852.07A8.B1C76292@gmail.com> References: <20110517234729.07A5.B1C76292@gmail.com> <20110518000852.07A8.B1C76292@gmail.com> Message-ID: <4DD2F3D6.7050601@uci.edu> On 5/17/2011 3:08 PM, Klonuo Umom wrote: >> I get 67ms/loop with plain ATLAS. > > What about Octave? > > As I wrote, PC I run the test is very low-end You should compare to Octave's a*b, not dot(a,b). Christoph From klonuo at gmail.com Tue May 17 18:22:19 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 18 May 2011 00:22:19 +0200 Subject: [SciPy-User] Building NumPy 1.6.0 with MKL and MSVS9 on XP 32bit In-Reply-To: <4DD2F3D6.7050601@uci.edu> References: <20110518000852.07A8.B1C76292@gmail.com> <4DD2F3D6.7050601@uci.edu> Message-ID: <20110518002218.07AB.B1C76292@gmail.com> > You should compare to Octave's a*b, not dot(a,b). Thanks dot() in Octave produces vector, and I was just looking for right operator when you replied. Updated table: ====================== Octave : 0.766 Cygwin/ATLAS : 0.843 Windows/ATLAS* : 1.29 Windows/MKL : 2.37 ---------------------- From vanforeest at gmail.com Wed May 18 06:34:36 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Wed, 18 May 2011 12:34:36 +0200 Subject: [SciPy-User] scipy.stats pmf evaluation In-Reply-To: References: Message-ID: Hi Josef, Thanks for the explanations. Nicky On 16 May 2011 13:22, wrote: > On Mon, May 16, 2011 at 7:02 AM, nicky van foreest wrote: >> Hi Josef, >> >> This works indeed. But I must admit that I don't understand why. Can >> you give a hint where in the docs I might find an explanation? 
> > I don't think it's in the docs anywhere, just the docs on broadcasting > > (almost) all the _pdf, _cdf, ... methods are elementwise operations > that are fully vectorized. Some generic methods, for example > integration in cdf, are vectorized through an explicit call to > numpy.vectorize. > This means that standard numpy broadcasting works for all ?arguments > for the distribution methods (with a few exceptions) > > >>>> 10 * np.arange(2)[:,None] + np.arange(3)[None, :] > array([[ 0, ?1, ?2], > ? ? ? [10, 11, 12]]) > >>>> np.add(10 * np.arange(2)[:,None], np.arange(3)[None, :]) > array([[ 0, ?1, ?2], > ? ? ? [10, 11, 12]]) > >>>> np.add(10 * np.arange(2), np.arange(3)) > Traceback (most recent call last): > ?File "", line 1, in > ? ?np.add(10 * np.arange(2), np.arange(3)) > ValueError: shape mismatch: objects cannot be broadcast to a single shape >>>> > > hope that helps, > > Josef > >> >> thanks >> >> Nicky >> >> On 15 May 2011 20:00, nicky van foreest wrote: >>> Hi Josef, >>> >>> Thanks. >>> >>> On 15 May 2011 00:10, ? wrote: >>>> On Sat, May 14, 2011 at 5:35 PM, nicky van foreest wrote: >>>>> On 14 May 2011 22:10, ? wrote: >>>>>> On Sat, May 14, 2011 at 4:06 PM, nicky van foreest wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I wanted to compute a probability mass function on a range and a grid >>>>>>> at the same time, but this fails. Here is an example. >>>>>>> >>>>>>> In [1]: from scipy.stats import poisson >>>>>>> >>>>>>> In [2]: import numpy as np >>>>>>> >>>>>>> In [3]: print poisson.pmf(1, 1) >>>>>>> 0.367879441171 >>>>>>> >>>>>>> In [4]: grid = np.arange(np.finfo(float).eps,1.1,0.1) >>>>>>> >>>>>>> In [5]: print poisson.pmf(1, grid) >>>>>>> [ ?2.22044605e-16 ? 9.04837418e-02 1.63746151e-01 ? 2.22245466e-01 >>>>>>> ? 2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>>>>>> ? 3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01] >>>>>>> >>>>>>> In [6]: print poisson.pmf(range(2), 1) >>>>>>> [ 0.36787944 ?0.36787944] >>>>>>> >>>>>>> >>>>>>> +++ >>>>>>> >>>>>>> Up to now everything works as expected. But this fails: >>>>>>> >>>>>>> +++ >>>>>>> >>>>>>> In [7]: print poisson.pmf(range(2), grid) >>>>>>> >>>>>>> ValueError: shape mismatch: objects cannot be broadcast to a single shape >>>>>>> >>>>>>> +++ >>>>>>> >>>>>>> Why is the call to ?poisson.pmf(range(2), grid) ?wrong, while it works >>>>>>> on either a range or a grid? >>>>>>> >>>>>>> Does anybody perhaps know the right way to compute >>>>>>> poisson.pmf(range(2), grid)" without using a for loop? >>>>>> >>>>>> You are not broadcasting, (range(2), grid) need to broadcast against >>>>>> each other. If it doesn't work then, then it's a bug. >>>>> >>>>> Thanks Josef. But how do I do this? The range will, usually, not >>>>> contain the same number of elements as the grid. What I would like to >>>>> compute is something like this: >>>>> >>>>> for j in range(3): >>>>> ? for x in grid: >>>>> ? ? ? poisson.pmf(j, x) >>>>> >>>>> By the above example I can use two types of shortcuts:: >>>>> >>>>> for j in range(3): >>>>> ? poisson.pmf(j, grid) >>>>> >>>>> or >>>>> >>>>> for x in grid: >>>>> ? poisson.pmf(range(3), x) >>>>> >>>>> >>>>> but the pmf function does not support broadcasting on both directions >>>>> at the same time, or (more probable) it can be done, but I make a >>>>> mistake somewhere. >>>> >>>> add a newaxis to one of the two >>>> >>>>>>> from scipy import stats >>>>>>> grid = np.arange(np.finfo(float).eps,1.1,0.1) >>>> >>>>>>> print stats.poisson.pmf(np.arange(2)[:,None], grid) >>>> [[ ?1.00000000e+00 ? 9.04837418e-01 ? 
8.18730753e-01 ? 7.40818221e-01 >>>> ? ?6.70320046e-01 ? 6.06530660e-01 ? 5.48811636e-01 ? 4.96585304e-01 >>>> ? ?4.49328964e-01 ? 4.06569660e-01 ? 3.67879441e-01] >>>> ?[ ?2.22044605e-16 ? 9.04837418e-02 ? 1.63746151e-01 ? 2.22245466e-01 >>>> ? ?2.68128018e-01 ? 3.03265330e-01 ? 3.29286982e-01 ? 3.47609713e-01 >>>> ? ?3.59463171e-01 ? 3.65912694e-01 ? 3.67879441e-01]] >>>> >>>>>>> print stats.poisson.pmf(np.arange(2), grid[:,None]) >>>> [[ ?1.00000000e+00 ? 2.22044605e-16] >>>> ?[ ?9.04837418e-01 ? 9.04837418e-02] >>>> ?[ ?8.18730753e-01 ? 1.63746151e-01] >>>> ?[ ?7.40818221e-01 ? 2.22245466e-01] >>>> ?[ ?6.70320046e-01 ? 2.68128018e-01] >>>> ?[ ?6.06530660e-01 ? 3.03265330e-01] >>>> ?[ ?5.48811636e-01 ? 3.29286982e-01] >>>> ?[ ?4.96585304e-01 ? 3.47609713e-01] >>>> ?[ ?4.49328964e-01 ? 3.59463171e-01] >>>> ?[ ?4.06569660e-01 ? 3.65912694e-01] >>>> ?[ ?3.67879441e-01 ? 3.67879441e-01]] >>>> >>>> 3-dim >>>> >>>>>>> print stats.poisson.pmf(np.arange(6).reshape((1,2,3)), grid[:,None,None]) >>>> >>>> >>>> There is a known bug, when the support depends on one of the >>>> parameters of the distribution, but it should work for most cases. >>>> >>>> Josef >>>> >>>> >>>>> >>>>> Nicky >>>>>> >>>>>> Josef >>>>>> >>>>>>> >>>>>>> thanks >>>>>>> >>>>>>> Nicky >>>>>>> _______________________________________________ >>>>>>> SciPy-User mailing list >>>>>>> SciPy-User at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed May 18 08:20:57 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 18 May 2011 08:20:57 -0400 Subject: [SciPy-User] weight in scipy.integrate.quad Message-ID: scipy.integrate.quad has some additional arguments that are now briefly documented, but I don't see how to use them http://docs.scipy.org/scipy/docs/scipy.integrate.quadpack.quad/#scipy-integrate-quad I was trying to use weights but don't see what are valid arguments for it. Are there examples or is there more documentation on the use of the additional arguments, especially weights? Josef From warren.weckesser at enthought.com Wed May 18 09:02:23 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 18 May 2011 08:02:23 -0500 Subject: [SciPy-User] weight in scipy.integrate.quad In-Reply-To: References: Message-ID: On Wed, May 18, 2011 at 7:20 AM, wrote: > scipy.integrate.quad has some additional arguments that are now > briefly documented, but I don't see how to use them > > http://docs.scipy.org/scipy/docs/scipy.integrate.quadpack.quad/#scipy-integrate-quad > > I was trying to use weights but don't see what are valid arguments for it. 
> > Are there examples or is there more documentation on the use of the > additional arguments, especially weights? > Hi Josef, The function scipy.integrate.quad_explain() prints more information about the arguments to quad(). Warren > > Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed May 18 09:16:53 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 18 May 2011 09:16:53 -0400 Subject: [SciPy-User] orthogonal polynomials ? In-Reply-To: References: Message-ID: On Mon, May 16, 2011 at 10:24 PM, Charles R Harris wrote: > > > On Mon, May 16, 2011 at 8:16 PM, Charles R Harris > wrote: >> >> >> On Mon, May 16, 2011 at 7:47 PM, wrote: >>> >>> On Mon, May 16, 2011 at 9:29 PM, Charles R Harris >>> wrote: >>> > >>> > >>> > On Mon, May 16, 2011 at 6:12 PM, wrote: >>> >> >>> >> On Mon, May 16, 2011 at 5:58 PM, Charles R Harris >>> >> wrote: >>> >> > >>> >> > >>> >> > On Mon, May 16, 2011 at 12:40 PM, wrote: >>> >> >> >>> >> >> On Mon, May 16, 2011 at 12:47 PM, ? wrote: >>> >> >> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris >>> >> >> > wrote: >>> >> >> >> >>> >> >> >> >>> >> >> >> On Mon, May 16, 2011 at 10:22 AM, wrote: >>> >> >> >>> >>> >> >> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris >>> >> >> >>> wrote: >>> >> >> >>> > >>> >> >> >>> > >>> >> >> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris >>> >> >> >>> > wrote: >>> >> >> >>> >> >>> >> >> >>> >> >>> >> >> >>> >> On Mon, May 16, 2011 at 9:11 AM, >>> >> >> >>> >> wrote: >>> >> >> >>> >>> >>> >> >> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris >>> >> >> >>> >>> wrote: >>> >> >> >>> >>> > >>> >> >> >>> >>> > >>> >> >> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, >>> >> >> >>> >>> > wrote: >>> >> >> >>> >>> >> >>> >> >> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest >>> >> >> >>> >>> >> >>> >> >> >>> >>> >> wrote: >>> >> >> >>> >>> >> > Hi, >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > Might this be what you want: >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > The first eleven probabilists' Hermite polynomials >>> >> >> >>> >>> >> > are: >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > ... >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > My chromium browser does not seem to paste pngs. >>> >> >> >>> >>> >> > Anyway, >>> >> >> >>> >>> >> > check >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials >>> >> >> >>> >>> >> > >>> >> >> >>> >>> >> > and you'll see that the first polynomial is 1, the >>> >> >> >>> >>> >> > second >>> >> >> >>> >>> >> > x, >>> >> >> >>> >>> >> > and >>> >> >> >>> >>> >> > so >>> >> >> >>> >>> >> > forth. From my courses on quantum mechanics I recall >>> >> >> >>> >>> >> > that >>> >> >> >>> >>> >> > these >>> >> >> >>> >>> >> > polynomials are, with respect to some weight function, >>> >> >> >>> >>> >> > orthogonal. >>> >> >> >>> >>> >> >>> >> >> >>> >>> >> Thanks, I haven't looked at that yet, we should add >>> >> >> >>> >>> >> wikipedia >>> >> >> >>> >>> >> to >>> >> >> >>> >>> >> the >>> >> >> >>> >>> >> scipy.special docs. 
>>> >> >> >>> >>> >> >>> >> >> >>> >>> >> However, I would like to change the last part "with >>> >> >> >>> >>> >> respect >>> >> >> >>> >>> >> to >>> >> >> >>> >>> >> some >>> >> >> >>> >>> >> weight function" >>> >> >> >>> >>> >> >>> >> >> >>> >>> >> >>> >> >> >>> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality >>> >> >> >>> >>> >> >>> >> >> >>> >>> >> Instead of Gaussian weights I would like uniform weights >>> >> >> >>> >>> >> on >>> >> >> >>> >>> >> bounded >>> >> >> >>> >>> >> support. And I have never seen anything about changing >>> >> >> >>> >>> >> the >>> >> >> >>> >>> >> weight >>> >> >> >>> >>> >> function for the orthogonal basis of these kind of >>> >> >> >>> >>> >> polynomials. >>> >> >> >>> >>> >> >>> >> >> >>> >>> > >>> >> >> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They >>> >> >> >>> >>> > are >>> >> >> >>> >>> > orthogonal >>> >> >> >>> >>> > on >>> >> >> >>> >>> > [-1,1] as has been mentioned, but can be mapped to other >>> >> >> >>> >>> > domains. >>> >> >> >>> >>> > For >>> >> >> >>> >>> > example >>> >> >> >>> >>> > >>> >> >> >>> >>> > In [1]: from numpy.polynomial import Legendre as L >>> >> >> >>> >>> > >>> >> >> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1], >>> >> >> >>> >>> > domain=[0,1]).linspace()) >>> >> >> >>> >>> > ?? ...: >>> >> >> >>> >>> > >>> >> >> >>> >>> > produces the attached plots. >>> >> >> >>> >>> >>> >> >> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit. >>> >> >> >>> >>> >>> >> >> >>> >>> > >>> >> >> >>> >>> > >>> >> >> >>> >>> > Chuck >>> >> >> >>> >>> > >>> >> >> >>> >>> >>> >> >> >>> >>> >>> >> >> >>> >>> as a first application for orthogonal polynomials I was >>> >> >> >>> >>> trying >>> >> >> >>> >>> to >>> >> >> >>> >>> get >>> >> >> >>> >>> an estimate for a density, but I haven't figured out the >>> >> >> >>> >>> weighting >>> >> >> >>> >>> yet. >>> >> >> >>> >>> >>> >> >> >>> >>> Fourier polynomials work better for this. >>> >> >> >>> >>> >>> >> >> >>> >> >>> >> >> >>> >> You might want to try Chebyshev then, the Cheybyshev >>> >> >> >>> >> polynomialas >>> >> >> >>> >> are >>> >> >> >>> >> essentially cosines and will handle the ends better. >>> >> >> >>> >> Weighting >>> >> >> >>> >> might >>> >> >> >>> >> also >>> >> >> >>> >> help, as I expect the distribution of the errors are >>> >> >> >>> >> somewhat >>> >> >> >>> >> Poisson. >>> >> >> >>> >> >>> >> >> >>> > >>> >> >> >>> > I should mention that all the polynomial fits will give you >>> >> >> >>> > the >>> >> >> >>> > same >>> >> >> >>> > results, but the Chebyshev fits are more numerically stable. >>> >> >> >>> > The >>> >> >> >>> > general >>> >> >> >>> > approach is to overfit, i.e., use more polynomials than >>> >> >> >>> > needed >>> >> >> >>> > and >>> >> >> >>> > then >>> >> >> >>> > truncate the series resulting in a faux min/max >>> >> >> >>> > approximation. >>> >> >> >>> > Unlike >>> >> >> >>> > power >>> >> >> >>> > series, the coefficients of the Cheybshev series will tend to >>> >> >> >>> > decrease >>> >> >> >>> > rapidly at some point. >>> >> >> >>> >>> >> >> >>> I think I might have still something wrong with the way I use >>> >> >> >>> the >>> >> >> >>> scipy.special polynomials >>> >> >> >>> >>> >> >> >>> for a large sample size with 10000 observations, the nice graph >>> >> >> >>> is >>> >> >> >>> fourier with 20 elements, the second (not so nice) is with >>> >> >> >>> scipy.special.chebyt with 500 polynomials. 
The graph for 20 >>> >> >> >>> Chebychev >>> >> >> >>> polynomials looks very similar >>> >> >> >>> >>> >> >> >>> Chebychev doesn't want to get low enough to adjust to the low >>> >> >> >>> part. >>> >> >> >>> >>> >> >> >>> (Note: I rescale to [0,1] for fourier, and [-1,1] for chebyt) >>> >> >> >>> >>> >> >> >> >>> >> >> >> That certainly doesn't look right. Could you mail me the data >>> >> >> >> offline? >>> >> >> >> Also, >>> >> >> >> what are you fitting, the histogram, the cdf, or...? >>> >> >> > >>> >> >> > The numbers are generated (by a function in scikits.statsmodels). >>> >> >> > It's >>> >> >> > all in the script that I posted, except I keep changing things. >>> >> >> > >>> >> >> > I'm not fitting anything directly. >>> >> >> > There is supposed to be a closed form expression, the estimated >>> >> >> > coefficient of each polynomial is just the mean of that >>> >> >> > polynomial >>> >> >> > evaluated at the data. The theory and the descriptions sounded >>> >> >> > easy, >>> >> >> > but unfortunately it didn't work out. >>> >> >> > >>> >> >> > I was just hoping to get lucky and that I'm able to skip the >>> >> >> > small >>> >> >> > print. >>> >> >> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract >>> >> >> > got me started and it has the fourier case that works. >>> >> >> > >>> >> >> > There are lots of older papers that I only skimmed, but I should >>> >> >> > be >>> >> >> > able to work out the Hermite case before going back to the >>> >> >> > general >>> >> >> > case again with arbitrary orthogonal polynomial bases. (Initially >>> >> >> > I >>> >> >> > wanted to ignore the Hermite bases because gaussian_kde works >>> >> >> > well in >>> >> >> > that case.) >>> >> >> >>> >> >> Just another graph before stopping with this >>> >> >> >>> >> >> chebyt work if I cheat (rescale at the end >>> >> >> >>> >> >> f_hat = (f_hat - f_hat.min()) >>> >> >> fint2 = integrate.trapz(f_hat, grid) >>> >> >> f_hat /= fint2 >>> >> >> >>> >> >> graph is with chebyt with 30 polynomials after shifting and >>> >> >> scaling, >>> >> >> 20 polynomials looks also good. >>> >> >> >>> >> >> (hunting for the missing scaling term is left for some other day) >>> >> >> >>> >> >> In any case, there's a recipe for non-parametric density estimation >>> >> >> with compact support. >>> >> >> >>> >> > >>> >> > Ah, now I see what is going on -- monte carlo integration to get the >>> >> > expansion of the pdf in terms of orthogonal polynomials. So yes, I >>> >> > think >>> >> > Lagrange polynomials are probably the ones to use unless you use the >>> >> > weight >>> >> > in the integral. Note that 1.6 also has the Hermite and Laguerre >>> >> > polynomials. But it seems that for these things it would also be >>> >> > desirable >>> >> > to have the normalization constants. >>> >> >>> > >>> > Heh, I meant Legendre. >>> > >>> >> >>> >> It's also intended to be used as a density estimator for real data, >>> >> but the idea is the same. >>> >> >>> >> My main problem seems to be that I haven't figured out the >>> >> normalization (constants) that I'm supposed to use. >>> > >>> > The normalization for Legendre functions over an interval of length L >>> > would >>> > be (2/L)*2/(2*i + 1)*(1/n), where i is the degree of the polynomial, >>> > and n >>> > is the number of samples. >>> > >>> >> >>> >> Given the Wikipedia page that Andy pointed out, I added an additional >>> >> weighting term. I still need to shift and rescale, but the graphs look >>> >> nice. 
(the last one, promised) chebyt with 30 polynomials on sample >>> >> with 10000 observations. >>> >> >>> >> (I don't know yet if I want to use the new polynomials in numpy even >>> >> thought they are much nicer. I just gave up trying to get statsmodels >>> >> compatible with numpy 1.3 because I'm using the polynomials introduced >>> >> in 1.4) >>> >> >>> >> (I will also look at Andy's pointer to Gram?Schmidt, because I >>> >> realized that for nonlinear trend estimation I want orthogonal for >>> >> discretely evaluated points instead of for the integral.) >>> >> >>> > >>> > QR is Gram-Schmidt. You can use any polynomial basis for the columns. >>> > There >>> > is also a method due to Moler to compute the polynomials using the fact >>> > that >>> > they satisfy a three term recursion, but QR is simpler. >>> > >>> > Anne suggested some time ago that I should include the Gauss points and >>> > weights in the polynomial classes and I've come to the conclusion she >>> > was >>> > right. Looks like I should include the normalization factors also. >>> >>> Since I'm going through this backwards, program first, figure out what >>> I'm doing, I always appreciate any of these premade helper functions. >>> >>> A mixture between trial and error and reading Wikipedia. This is the >>> reweighted chebychev T polynomial, orthonormal with respect to uniform >>> integration, >>> >>> for i,p in enumerate(polys[:5]): >>> ? ?for j,p2 in enumerate(polys[:5]): >>> ? ? ? ?print i,j,integrate.quad(lambda x: p(x)*p2(x), -1,1)[0] >>> >>> And now the rescaling essentially doesn't have an effect anymore >>> f_hat.min() 0.00393264959543 >>> fint2 1.00082015654 ? integral of estimated pdf >>> >>> >>> class ChtPoly(object): >>> >>> ? ?def __init__(self, order): >>> ? ? ? ?self.order = order >>> ? ? ? ?from scipy.special import chebyt >>> ? ? ? ?self.poly = chebyt(order) >>> >>> ? ?def __call__(self, x): >>> ? ? ? ?if self.order == 0: >>> ? ? ? ? ? ?return np.ones_like(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) >>> ? ? ? ?else: >>> ? ? ? ? ? ?return self.poly(x) / (1-x**2)**(1/4.) / np.sqrt(np.pi) * >>> np.sqrt(2) >>> >>> If I can get the same for some of the other polynomials, I would be >>> happy. >>> >> >> The virtue of the Legendre polynomials here is that you don't need the >> weight to make them orthogonal. For the Chebyshev I'd be tempted to have a >> separate weight function w(x) = 1/(1 - x**2)**.5 and do (1/n)\sum_i >> T_n(x_i)*w(x_i), giving a result in a normal polynomial series. The >> additional normalization factor due to the interval would then be 2/L in >> addition to the terms you already have. The singularity in the weight could >> be a problem so Legendre polynomials might be a safer choice. >> >> To simplify this a bit, in the Legendre case you can use legvander(x, >> deg=n).sum(0) and scale the results with the factors in the preceding post >> to get the coefficients. The legvander function is a rather small one and >> you could pull it out and modify it to do the sum on the fly if space is a >> problem. >> > > Blush,brown paper bag in order here. Since you are already mapping the data > points into the relevant interval you can forget the interval length. Just > use 2/(2*j + 1) as the scaling factor for the Legendre functions. 
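A minimal sketch of the Legendre version described in the quoted exchange, assuming numpy >= 1.6 so that numpy.polynomial.legendre is available (the function name, degree and evaluation grid are illustrative, and the data are assumed to be already mapped into [-1, 1]):

import numpy as np
from numpy.polynomial import legendre

def legendre_density(x, deg=20, npoints=201):
    # column j of the pseudo-Vandermonde matrix is P_j evaluated at the data,
    # so the column mean gives the sample mean of each P_j
    raw = legendre.legvander(x, deg).mean(axis=0)
    # divide by the norm on [-1, 1]: the integral of P_j**2 is 2/(2*j + 1)
    coeffs = raw * (2. * np.arange(deg + 1) + 1.) / 2.
    grid = np.linspace(-1., 1., npoints)
    return grid, legendre.legval(grid, coeffs)

As with the Chebyshev fits discussed above, the estimate can dip slightly below zero near the boundaries, so shifting or clipping at zero and renormalising the integral may still be needed.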
just an update (no graphs but they look nice) I haven't tried Legendre again, but after some fight I figured out what the orthonormal (wrt weight=1) version of scipy.special.hermite is hermite and chebyt density estimation works well, for fourier I still need to rescale by sqrt(pi), for the orthonormal chebyt polynomial base it looks like that I have to stay away from the boundary. latest version http://bazaar.launchpad.net/~josef-pktd/statsmodels/statsmodels-josef-experimental-030/view/head:/scikits/statsmodels/sandbox/nonparametric/densityorthopoly.py I also wrote some simple helper functions to check orthogonality and orthonormality. >>> from scipy.special import chebyt >>> polys = [chebyt(i) for i in range(4)] >>> r, e = inner_cont(polys, -1, 1) >>> r array([[ 2. , 0. , -0.66666667, 0. ], [ 0. , 0.66666667, 0. , -0.4 ], [-0.66666667, 0. , 0.93333333, 0. ], [ 0. , -0.4 , 0. , 0.97142857]]) >>> is_orthonormal_cont(polys, -1, 1, atol=1e-6) False >>> polys = [ChebyTPoly(i) for i in range(4)] >>> r, e = inner_cont(polys, -1, 1) >>> r array([[ 1.00000000e+00, 0.00000000e+00, -9.31270888e-14, 0.00000000e+00], [ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00, -9.47850712e-15], [ -9.31270888e-14, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, -9.47850712e-15, 0.00000000e+00, 1.00000000e+00]]) >>> is_orthonormal_cont(polys, -1, 1, atol=1e-6) True check orthogonal wrt weight function, but it's not orthonormal >>> polysc = [chebyt(i) for i in range(4)] >>> r, e = inner_cont(polysc, -1, 1, weight=lambda x: (1-x*x)**(-1/2.)) >>> r array([[ 3.14159265e+00, 0.00000000e+00, -2.00130508e-13, 0.00000000e+00], [ 0.00000000e+00, 1.57079633e+00, 0.00000000e+00, 6.31095803e-12], [ -2.00130508e-13, 0.00000000e+00, 1.57079633e+00, 0.00000000e+00], [ 0.00000000e+00, 6.31095803e-12, 0.00000000e+00, 1.57079633e+00]]) >>> np.max(np.abs(r - np.diag(np.diag(r)))) 6.3109580284604367e-12 Josef > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Wed May 18 09:24:46 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 18 May 2011 09:24:46 -0400 Subject: [SciPy-User] weight in scipy.integrate.quad In-Reply-To: References: Message-ID: On Wed, May 18, 2011 at 9:02 AM, Warren Weckesser wrote: > > > On Wed, May 18, 2011 at 7:20 AM, wrote: >> >> scipy.integrate.quad has some additional arguments that are now >> briefly documented, but I don't see how to use them >> >> http://docs.scipy.org/scipy/docs/scipy.integrate.quadpack.quad/#scipy-integrate-quad >> >> I was trying to use weights but don't see what are valid arguments for it. >> >> Are there examples or is there more documentation on the use of the >> additional arguments, especially weights? > > > Hi Josef, > > The function scipy.integrate.quad_explain() prints more information about > the arguments to quad(). Thanks, that's helpful (although the weight I wanted to check is not among them) I didn't see this pointer in the docstring, since I ignored this part. Maybe mentioning it in a Notes section would be more where I would expect to find details or hints to them. 
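For reference, a small sketch of how the weight/wvar pair is typically used with quad (hedged: the authoritative list of accepted weight strings, for example 'cos', 'sin' and 'cauchy', is in the quad_explain() output mentioned above, and the integrands here are only illustrative):

import numpy as np
from scipy import integrate

# integral of exp(-x)*cos(10*x) on [0, 2*pi], written two ways:
# a plain call, and the special-purpose oscillatory rule via weight='cos'
plain, plain_err = integrate.quad(lambda x: np.exp(-x) * np.cos(10. * x),
                                  0, 2 * np.pi)
osc, osc_err = integrate.quad(lambda x: np.exp(-x), 0, 2 * np.pi,
                              weight='cos', wvar=10.)

# Cauchy weight: principal value of the integral of f(x)/(x - wvar)
pv, pv_err = integrate.quad(lambda x: 1., -1, 2, weight='cauchy', wvar=0.5)

Both oscillatory calls should agree to within the reported error estimates; the weighted form just lets QUADPACK handle the oscillation instead of the integrand.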
Josef > > Warren > > >> >> Josef >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From yyc at solvcon.net Wed May 18 11:50:06 2011 From: yyc at solvcon.net (Yung-Yu Chen) Date: Wed, 18 May 2011 11:50:06 -0400 Subject: [SciPy-User] ANN: SOLVCON 0.0.6 Message-ID: Hello, I am pleased to announce version 0.0.6 of SOLVCON. SOLVCON is a Python-based, multi-physics software framework for solving first-order hyperbolic PDEs. The source tarball can be downloaded at http://bitbucket.org/yungyuc/solvcon/downloads . More information can be found at http://solvcon.net/ . SOLVCON now partially supports GPU clusters. Solvers for linear equations and the velocity-stress equations are updated. The CESE base solver is enhanced. This release also contains enhancements planned for 0.0.5, which would not be released. New features: - Support GPU clusters. SOLVCON can spread decomposed sub-domains to multiple GPU devices distributed over network. Currently only one GPU device per compute node is supported. - A generic solver for linear equations: ``solvcon.kerpak.lincuse``. The new version of generic linear solver support both CPU and CPU. - A velocity-stress equaltions solver is ported to be based on ``solvcon.kerpak.lincuse``. The new solver is packaged in ``solvcon.kerpak.vslin``. - Add W-3 weighting scheme to ``solvcon.kerpak.cuse``. W-3 scheme is more stable than W-1 and W-2. Bug-fixes: - Consolidate reading quadrilateral mesh from CUBIT/Genesis/ExodusII; CUBIT uses 'SHELL4' for 2D quad. - Update SCons scripts for the upgrade of METIS to 4.0.3. with regards, Yung-Yu Chen -- Yung-Yu Chen http://solvcon.net/yyc/ +1 (614) 859 2436 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmueller at python-academy.de Wed May 18 16:07:56 2011 From: mmueller at python-academy.de (=?ISO-8859-15?Q?Mike_M=FCller?=) Date: Wed, 18 May 2011 22:07:56 +0200 Subject: [SciPy-User] [ANN] Courses in Colorado: "Introduction to Python and Python for Scientists and Engineers" - one day left for early bird rate Message-ID: <4DD4271C.3000102@python-academy.de> Python Course in Golden, CO, USA ================================ **There is only one day left to take advantage of the early bird rate.** Introduction to Python and Python for Scientists and Engineers -------------------------------------------------------------- June 3 - 4, 2011 Introduction to Python June 5, 2011 Python for Scientists and Engineers Both courses can be booked individually or together. Venue: Colorado School of Mines, Golden, CO (20 minutes west of Denver) Trainer: Mike M?ller Target Audience --------------- The introductory course is designed for people with basic programming background. Since it is a general introduction to Python it is suitable for everybody interested in Python. The scientist's course assumes a working knowledge of Python. You will be fine if you take the two-day introduction before hand. The topics are of general interest for scientists and engineers. Even though some examples come from the groundwater modeling domain, they are easy to understand for people without prior knowledge in this field. About the Trainer ----------------- Mike M?ller, has been teaching Python since 2004. 
He is the founder of Python Academy and regularly gives open and in-house
Python courses as well as tutorials at PyCon US, OSCON, EuroSciPy and
PyCon Asia-Pacific.

More Information and Course Registration
----------------------------------------
http://igwmc.mines.edu/short-course/intro_python.html

-- Mike
mmueller at python-academy.de

From ckkart at hoc.net  Wed May 18 16:36:16 2011
From: ckkart at hoc.net (Christian K.)
Date: Wed, 18 May 2011 22:36:16 +0200
Subject: [SciPy-User] legenden
Message-ID:

http://www.youtube.com/watch?v=0RCLYgYrkC8

From ckkart at hoc.net  Wed May 18 17:15:02 2011
From: ckkart at hoc.net (Christian K.)
Date: Wed, 18 May 2011 23:15:02 +0200
Subject: [SciPy-User] legenden
In-Reply-To:
References:
Message-ID:

oops. wrong recipient.... sorry

From gwenln44 at gmail.com  Thu May 19 06:07:54 2011
From: gwenln44 at gmail.com (guillaume gwenael)
Date: Thu, 19 May 2011 12:07:54 +0200
Subject: [SciPy-User] Fitting problem using optimize.fmin_slsqp
Message-ID:

Hello,

I need to fit analytic data using a constrained least square fit. Here is
the script:

import numpy as np
import scipy as scp
from scipy.optimize import fmin_slsqp

def OriginalFunc(xaxis,a,b,c,d,e,f):
    p1=d*1e3
    p2=(a*e**2*c)/(p1*f)
    p3=(a*b*e)/f
    M=p3*np.sqrt( (1-1j*xaxis*p2)/(-1j*xaxis*p2) )
    N = ((e*xaxis)/b)*np.sqrt( (1-1j*xaxis*p2)/(-1j*xaxis*p2) )
    return M,p3,N

def FirstOrderLinSyst(num,den,xaxis):
    j=np.complex(0,1)
    fos=num/(den-j*xaxis)
    return fos

def PartialFractionExpansion(coefs,ref,xaxis):
    order=len(coefs)/2
    num=coefs[:order]
    den=coefs[order:]
    SumK=0.
    for k in range(order):
        fos=FirstOrderLinSyst(num[k],den[k],xaxis)
        SumK=SumK+fos
    approx=ref*SumK
    return approx

def Residuals(fitcoefs,curve,ref,xaxis):
    diff=np.real(curve)-np.real(PartialFractionExpansion(fitcoefs,ref,xaxis))
    err=np.sqrt( np.sum(np.power(diff,2),axis=0)/np.sum(np.power(np.real(curve),2),axis=0) )
    return np.array([err],dtype=float)

def GetParams4Fitting(init_fitcoefs,curve,ref,xaxis):
    pbounds=[(-np.inf,np.inf),(-np.inf,np.inf),(-np.inf,np.inf),(-np.inf,np.inf),
             (-np.inf,np.inf),(-np.inf,np.inf),(0,np.inf),(0,np.inf),(0,np.inf),
             (0,np.inf),(0,np.inf),(0,np.inf)]
    fitcoefs,fittedfunc,iters,imode,smode=fmin_slsqp(Residuals,
                                                     init_fitcoefs,
                                                     args=(curve,ref,xaxis),
                                                     bounds=pbounds)
    return fitcoefs,fittedfunc,iters,imode,smode

# Parameters
a=1.204
b=344.4256
c=1.41
d=10
e=np.sqrt(3.5)
f=0.5
x_min=100
x_max=5000
step=100
xaxis=2*np.pi*np.array([np.arange(x_min,x_max+step,step)])

# Original function
M,p3,N=OriginalFunc(xaxis,a,b,c,d,e,f)

# Initialization of fitting parameters
order=6
params_k=np.zeros((2*order))

# Fitting parameters computing
fitcoefs,fittedfunc,iters,imode,smode=GetParams4Fitting(params_k,M,p3,xaxis)

It returns the following error:

  File "C:\Python26\lib\site-packages\scipy\optimize\slsqp.py", line 318, in fmin_slsqp
    g = append(fprime(x),0.0)
  File "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 176, in function_wrapper
    return function(x, *args)
  File "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 377, in approx_fprime
    grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon
ValueError: setting an array element with a sequence.

Can anyone help me?

Regards,
Gwenaël
-------------- next part --------------
An HTML attachment was scrubbed...
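As the replies below explain, fmin_slsqp minimises a scalar objective, so one hedged sketch of a fix is to have Residuals collapse the error vector into a single float and to request the extra return values with full_output; everything else in the posted script is assumed unchanged, and the original normalisation term can be kept as long as the result stays a single number:

def Residuals(fitcoefs, curve, ref, xaxis):
    diff = np.real(curve) - np.real(PartialFractionExpansion(fitcoefs, ref, xaxis))
    # return one plain float (sum of squared errors), not a length-n array
    return float(np.sum(diff**2))

fitcoefs, fittedfunc, iters, imode, smode = fmin_slsqp(
    Residuals, init_fitcoefs, args=(curve, ref, xaxis),
    bounds=pbounds, full_output=True)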
URL: From jsseabold at gmail.com Thu May 19 09:34:56 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 19 May 2011 09:34:56 -0400 Subject: [SciPy-User] Fitting problem using optimize.fmin_slsqp In-Reply-To: References: Message-ID: On Thu, May 19, 2011 at 6:07 AM, guillaume gwenael wrote: > Hello, > I need to fit analytic data?using a constrained least square fit. Here is > the script : > import numpy as npimport scipy as scp > > from scipy.optimize import fmin_slsqp > > def OriginalFunc(xaxis,a,b,c,d,e,f): > > p1=d*1e3 > > p2=(a*e**2*c)/(p1*f) > > p3=(a*b*e)/f > > M=p3*np.sqrt( (1-1j*xaxis*p2)/(-1j*xaxis*p2) ) > > N = ((e*xaxis)/b)*np.sqrt( (1-1j*xaxis*p2)/(-1j*xaxis*p2) ) > > return M,p3,N > > def FirstOrderLinSyst(num,den,xaxis): > > j=np.complex(0,1) > > fos=num/(den-j*xaxis) > > return fos > > def PartialFractionExpansion(coefs,ref,xaxis): > > order=len(coefs)/2 > > num=coefs[:order] > > den=coefs[order:] > > SumK=0. > > for k in range(order): > > fos=FirstOrderLinSyst(num[k],den[k],xaxis) > > SumK=SumK+fos > > approx=ref*SumK > > return approx > > def Residuals(fitcoefs,curve,ref,xaxis): > > diff=np.real(curve)-np.real(PartialFractionExpansion(fitcoefs,ref,xaxis)) > > err=np.sqrt( > np.sum(np.power(diff,2),axis=0)/np.sum(np.power(np.real(curve),2),axis=0) ) > > return np.array([err],dtype=float) > > def GetParams4Fitting(init_fitcoefs,curve,ref,xaxis): > > pbounds=[(-np.inf,np.inf),(-np.inf,np.inf),(-np.inf,np.inf),(-np.inf,np.inf), > > (-np.inf,np.inf),(-np.inf,np.inf),(0,np.inf),(0,np.inf),(0,np.inf), > > (0,np.inf),(0,np.inf),(0,np.inf)] > > fitcoefs,fittedfunc,iters,imode,smode=fmin_slsqp(Residuals, > > init_fitcoefs, > > args=(curve,ref,xaxis), > > bounds=pbounds) > > return fitcoefs,fittedfunc,iters,imode,smode > > # Parameters > > a=1.204 > > b=344.4256 > > c=1.41 > > d=10 > > e=np.sqrt(3.5) > > f=0.5 > > x_min=100 > > x_max=5000 > > step=100 > > xaxis=2*np.pi*np.array([np.arange(x_min,x_max+step,step)]) > > # Oiginal function > > M,p3,N=OriginalFunc(xaxis,a,b,c,d,e,f) > > # Initialization of fitting parameters > > order=6 > > params_k=np.zeros((2*order)) > > # Fitting parameters computing > > fitcoefs,fittedfunc,iters,imode,smode=GetParams4Fitting(params_k,M,p3,xaxis) > > It returnes the foolwing error: > File "C:\Python26\lib\site-packages\scipy\optimize\slsqp.py", line 318, in > fmin_slsqp g = append(fprime(x),0.0) File > "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 176, in > function_wrapper return function(x, *args) File > "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 377, in > approx_fprime grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon ValueError: > setting an array element with a sequence. > > Can anyone help me? Two things I noticed. The error message is because fmin_slsqp expects func to return a scalar and then you want fmin_slsqp(... full_output=1) to return all of those values. Skipper From guillaume.gwenael at neuf.fr Thu May 19 11:50:39 2011 From: guillaume.gwenael at neuf.fr (guillaume gwenael) Date: Thu, 19 May 2011 17:50:39 +0200 Subject: [SciPy-User] Fitting problem using optimize.fmin_slsqp In-Reply-To: References: Message-ID: Do you mean that it is not possible de fit a data array using fmin_slsqp? Gwena?l 2011/5/19 Skipper Seabold > > On Thu, May 19, 2011 at 6:07 AM, guillaume gwenael > wrote: > > Hello, > > I need to fit analytic data using a constrained least square fit. 
Here is > > the script : > > import numpy as npimport scipy as scp > > > > from scipy.optimize import fmin_slsqp > > > > def OriginalFunc(xaxis,a,b,c,d,e,f): > > > > p1=d*1e3 > > > > p2=(a*e**2*c)/(p1*f) > > > > p3=(a*b*e)/f > > > > M=p3*np.sqrt( (1-1j*xaxis*p2)/(-1j*xaxis*p2) ) > > > > N = ((e*xaxis)/b)*np.sqrt( (1-1j*xaxis*p2)/(-1j*xaxis*p2) ) > > > > return M,p3,N > > > > def FirstOrderLinSyst(num,den,xaxis): > > > > j=np.complex(0,1) > > > > fos=num/(den-j*xaxis) > > > > return fos > > > > def PartialFractionExpansion(coefs,ref,xaxis): > > > > order=len(coefs)/2 > > > > num=coefs[:order] > > > > den=coefs[order:] > > > > SumK=0. > > > > for k in range(order): > > > > fos=FirstOrderLinSyst(num[k],den[k],xaxis) > > > > SumK=SumK+fos > > > > approx=ref*SumK > > > > return approx > > > > def Residuals(fitcoefs,curve,ref,xaxis): > > > > diff=np.real(curve)-np.real(PartialFractionExpansion(fitcoefs,ref,xaxis)) > > > > err=np.sqrt( > > np.sum(np.power(diff,2),axis=0)/np.sum(np.power(np.real(curve),2),axis=0) > ) > > > > return np.array([err],dtype=float) > > > > def GetParams4Fitting(init_fitcoefs,curve,ref,xaxis): > > > > > pbounds=[(-np.inf,np.inf),(-np.inf,np.inf),(-np.inf,np.inf),(-np.inf,np.inf), > > > > (-np.inf,np.inf),(-np.inf,np.inf),(0,np.inf),(0,np.inf),(0,np.inf), > > > > (0,np.inf),(0,np.inf),(0,np.inf)] > > > > fitcoefs,fittedfunc,iters,imode,smode=fmin_slsqp(Residuals, > > > > init_fitcoefs, > > > > args=(curve,ref,xaxis), > > > > bounds=pbounds) > > > > return fitcoefs,fittedfunc,iters,imode,smode > > > > # Parameters > > > > a=1.204 > > > > b=344.4256 > > > > c=1.41 > > > > d=10 > > > > e=np.sqrt(3.5) > > > > f=0.5 > > > > x_min=100 > > > > x_max=5000 > > > > step=100 > > > > xaxis=2*np.pi*np.array([np.arange(x_min,x_max+step,step)]) > > > > # Oiginal function > > > > M,p3,N=OriginalFunc(xaxis,a,b,c,d,e,f) > > > > # Initialization of fitting parameters > > > > order=6 > > > > params_k=np.zeros((2*order)) > > > > # Fitting parameters computing > > > > > fitcoefs,fittedfunc,iters,imode,smode=GetParams4Fitting(params_k,M,p3,xaxis) > > > > It returnes the foolwing error: > > File "C:\Python26\lib\site-packages\scipy\optimize\slsqp.py", line 318, > in > > fmin_slsqp g = append(fprime(x),0.0) File > > "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 176, in > > function_wrapper return function(x, *args) File > > "C:\Python26\lib\site-packages\scipy\optimize\optimize.py", line 377, in > > approx_fprime grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon ValueError: > > setting an array element with a sequence. > > > > Can anyone help me? > > Two things I noticed. The error message is because fmin_slsqp expects > func to return a scalar and then you want fmin_slsqp(... > full_output=1) to return all of those values. > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Thu May 19 12:00:29 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 19 May 2011 12:00:29 -0400 Subject: [SciPy-User] Fitting problem using optimize.fmin_slsqp In-Reply-To: References: Message-ID: On Thu, May 19, 2011 at 11:50 AM, guillaume gwenael wrote: > Do you mean that it is not possible de fit a data array using?fmin_slsqp? > Gwena?l > I haven't looked at your code in great detail, but what does your objective function look like? 
It looks like you just return an array of errors. You need some distance measure that returns a scalar. Ie., if your objective is to minimize the sum of squared errors, then it needs to return the sum of squared errors. This is different than say optimize.leastsq which does the sum and squaring internally. Skipper From gideon.simpson at gmail.com Thu May 19 16:07:19 2011 From: gideon.simpson at gmail.com (Gideon) Date: Thu, 19 May 2011 13:07:19 -0700 (PDT) Subject: [SciPy-User] bus error in In-Reply-To: Message-ID: <26822184.1533.1305835639482.JavaMail.geo-discussion-forums@yqmc16> Hmm, I was convinced I had the right one installed, but after reinstalling, it all works. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat May 21 13:57:44 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 21 May 2011 13:57:44 -0400 Subject: [SciPy-User] ANN: scikits.statsmodels 0.3.0rc1 Message-ID: We are proud to announce: After 15 months of work we are finally ready for another release. Our first release candidate is available on pypi. some background information and the documentation: http://statsmodels.sourceforge.net/devel/introduction.html http://statsmodels.sourceforge.net/devel source distributions: http://pypi.python.org/pypi/scikits.statsmodels Enjoy, and don't forget to report any problems, Josef Perktold and Skipper Seabold, Wes McKinney, Mike Crow, Vincent Davis ---------------------------------------------- Statsmodels is a python package that provides a complement to scipy for statistical computations including descriptive statistics and estimation of statistical models. scikits.statsmodels provides classes and functions for the estimation of several categories of statistical models. These currently include linear regression models, OLS, GLS, WLS and GLS with AR(p) errors, generalized linear models for six distribution families, M-estimators for robust linear models, and regression with discrete dependent variables, Logit, Probit, MNLogit, Poisson, based on maximum likelihood estimators, timeseries models, ARMA, AR and VAR. An extensive list of result statistics are available for each estimation problem. Statsmodels also contains descriptive statistics, a wide range of statistical tests and more. We welcome feedback: mailing list at ``_ or our bug tracker at ``_ For updated versions between releases, we recommend our repository at ``_ We will move to github in the near future ``_ Main changes for 0.3.0 ---------------------- *Changes that break backwards compatibility* Added api.py for importing. So the new convention for importing is :: import scikits.statsmodels.api as sm Importing from modules directly now avoids unnecessary imports and increases the import speed if a library or user only needs specific functions. * sandbox/output.py -> iolib/table.py * lib/io.py -> iolib/foreign.py (Now contains Stata .dta format reader) * family -> families * families.links.inverse -> families.links.inverse_power * Datasets' Load class is now load function. * regression.py -> regression/linear_model.py * discretemod.py -> discrete/discrete_model.py * rlm.py -> robust/robust_linear_model.py * glm.py -> genmod/generalized_linear_model.py * model.py -> base/model.py * t() method -> tvalues attribute (t() still exists but raises a warning) *main changes and additions* * Numerous bugfixes. 
* Time Series Analysis model (tsa) - Vector Autoregression Models VAR (tsa.VAR) - Autogressive Models AR (tsa.AR) - Autoregressive Moving Average Models ARMA (tsa.ARMA) : optionally uses Cython for Kalman Filtering use setup.py install with option --with-cython - Baxter-King band-pass filter (tsa.filters.baxter_king) - Hodrick-Prescott filter (tsa.filters.hpfilter) - Christiano-Fitzgerald filter (tsa.filters.cffilter) * Improved maximum likelihood framework uses all available scipy.optimize solvers * Refactor of the datasets sub-package. * Added more datasets for examples. * Removed RPy dependency for running the test suite. * Refactored the test suite. * Refactored codebase/directory structure. * Support for offset and exposure in GLM. * Removed data_weights argument to GLM.fit for Binomial models. * New statistical tests, especially diagnostic and specification tests * Multiple test correction * General Method of Moment framework in sandbox * Improved documentation * and other additions Main Changes in 0.2.0 --------------------- * Improved documentation and expanded and more examples * Added four discrete choice models: Poisson, Probit, Logit, and Multinomial Logit. * Added PyDTA. Tools for reading Stata binary datasets (\*.dta) and putting them into numpy arrays. * Added four new datasets for examples and tests. * Results classes have been refactored to use lazy evaluation. * Improved support for maximum likelihood estimation. * bugfixes * renames for more consistency - RLM.fitted_values -> RLM.fittedvalues - GLMResults.resid_dev -> GLMResults.resid_deviance Python 3 -------- scikits.statsmodels has been ported and tested for Python 3.2. Python 3 version of the code can be obtained by running 2to3.py over the entire statsmodels source. The numerical core of statsmodels worked almost without changes, however there can be problems with data input and plotting. The STATA file reader and writer in iolib.foreign has not been ported yet. And there are still some problems with the matplotlib version for Python 3 that was used in testing. Running the test suite with Python 3.2 shows some errors related to foreign and matplotlib. Sandbox ------- We are continuing to work on support for systems of equations models, panel data models, time series analysis, and information and entropy econometrics in the sandbox. This code is often merged into trunk as it becomes more robust. Windows Help ------------ The source distribution for Windows includes a htmlhelp file (statsmodels.chm). This can be opened from the python interpreter :: >>> import scikits.statsmodels.api as sm >>> sm.open_help() ---------------------------------------------------------------------- From josef.pktd at gmail.com Sun May 22 15:52:30 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 15:52:30 -0400 Subject: [SciPy-User] ks_2samp and searchsorted on concatenated array Message-ID: I was looking again at Kolmogorov-Smirnov and other gof tests from ks_2samp: (data1, data2 are 1d) data1 = np.sort(data1) data2 = np.sort(data2) data_all = np.concatenate([data1,data2]) cdf1 = np.searchsorted(data1,data_all,side='right')/(1.0*n1) cdf2 = (np.searchsorted(data2,data_all,side='right'))/(1.0*n2) d = np.max(np.absolute(cdf1-cdf2)) What does searchsorted do with an array that is the concatenation of two sorted arrays? I don't understand why data_all doesn't need to be sorted (after the concatenation). (I wrote this in 2008 just after learning about searchsorted, but the MonteCarlos, that I did, looked good. 
And I didn't find a reference why I did it this way.) Bug or not? (maybe I'm just slow in thinking today) Josef From josef.pktd at gmail.com Sun May 22 16:25:50 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 16:25:50 -0400 Subject: [SciPy-User] ks_2samp and searchsorted on concatenated array In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 3:52 PM, wrote: > I was looking again at Kolmogorov-Smirnov and other gof tests > > from ks_2samp: (data1, data2 are 1d) > > ? ?data1 = np.sort(data1) > ? ?data2 = np.sort(data2) > ? ?data_all = np.concatenate([data1,data2]) > ? ?cdf1 = np.searchsorted(data1,data_all,side='right')/(1.0*n1) > ? ?cdf2 = (np.searchsorted(data2,data_all,side='right'))/(1.0*n2) > ? ?d = np.max(np.absolute(cdf1-cdf2)) > > What does searchsorted do with an array that is the concatenation of > two sorted arrays? > > I don't understand why data_all doesn't need to be sorted (after the > concatenation). > > (I wrote this in 2008 just after learning about searchsorted, but the > MonteCarlos, that I did, looked good. And I didn't find a reference > why I did it this way.) > > Bug or not? (maybe I'm just slow in thinking today) Ok, I'm slow in thinking today. searchsorted inserts the *second* array into the *first*, not the other way around. Sorry for the noise. no bug Josef > > Josef > From mickael.paris at gmail.com Sun May 22 16:28:36 2011 From: mickael.paris at gmail.com (Mickael) Date: Sun, 22 May 2011 22:28:36 +0200 Subject: [SciPy-User] ks_2samp and searchsorted on concatenated array In-Reply-To: References: Message-ID: it's sunday don't be quiet!!!!! :) Have a good day :) Mickael. 2011/5/22 : > On Sun, May 22, 2011 at 3:52 PM, ? wrote: >> I was looking again at Kolmogorov-Smirnov and other gof tests >> >> from ks_2samp: (data1, data2 are 1d) >> >> ? ?data1 = np.sort(data1) >> ? ?data2 = np.sort(data2) >> ? ?data_all = np.concatenate([data1,data2]) >> ? ?cdf1 = np.searchsorted(data1,data_all,side='right')/(1.0*n1) >> ? ?cdf2 = (np.searchsorted(data2,data_all,side='right'))/(1.0*n2) >> ? ?d = np.max(np.absolute(cdf1-cdf2)) >> >> What does searchsorted do with an array that is the concatenation of >> two sorted arrays? >> >> I don't understand why data_all doesn't need to be sorted (after the >> concatenation). >> >> (I wrote this in 2008 just after learning about searchsorted, but the >> MonteCarlos, that I did, looked good. And I didn't find a reference >> why I did it this way.) >> >> Bug or not? (maybe I'm just slow in thinking today) > > Ok, I'm slow in thinking today. > > searchsorted inserts the *second* array into the *first*, not the > other way around. > > Sorry for the noise. no bug > > Josef > >> >> Josef >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From chris.rodgers at berkeley.edu Mon May 23 03:37:28 2011 From: chris.rodgers at berkeley.edu (Chris Rodgers) Date: Mon, 23 May 2011 00:37:28 -0700 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax Message-ID: A common task for my work is slicing a record array to find records matching some set of criteria. For any criteria beyond the most simple, I find the syntax grows complex and implementation efficiency starts to matter a lot. 
I wrote a class to encapsulate this filtering, and I wanted to share it with the list and also get feedback on: 1) am I approaching this problem correctly?, and 2) is the implementation efficient for very large arrays? Here's a short script showing the desired functionality. Generally 'col3' contains data that I want to send to some other process, and 'col1' and 'col2' are data parameters that I want to filter by. x = np.recarray(shape=(100000,), dtype=[('col1', int), ('col2', int), ('col3', float)]) # Fill x with actual data here # Find all records where 'col2' is 1, 2, or 4 print x[(x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)] # Find all records where 'col1' is 1, 2, or 4; and 'col1' is 1 print x[(x['col1'] == 1) & \ ((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4))] This is an "idiomatic" usage of record arrays (http://mail.scipy.org/pipermail/numpy-discussion/2009-February/040684.html). I certainly write this kind of code a lot. Problem #1 is that the syntax is hard to read for long chains of conditionals. Problem 2 is that it's hard to generalize the code when the list of acceptable values ([1, 2, 4] in this example) has arbitrary length. For that, you need an equivalent to `ismember` in Matlab. # Here's one way to do it but it's very slow for large datasets print x[np.array([t in [1,2,4] for t in x['col2']])] `in1d` will add this functionality but it's not available my version of numpy, from Synaptic in Ubuntu 10.04. `intersect1d` and `setmember1d` don't work if the lists contain non-unique values. (See http://stackoverflow.com/questions/1273041/how-can-i-implement-matlabs-ismember-command-in-python) Anyway, I wrote a simple object `Picker` to encapsulate the desired functionality. You can specify an arbitrary set of columns to filter by, and acceptable values from each column.. So the above code would be re-written as: p = Picker(data=x) # Mask of x that matches the desired values print p.pick_mask(col1=[1], col2=[1,2,4]) # Or if you just want 'col3' from the filtered records print p.pick_data('col3', col1=[1], col2=[1,2,4]) I think the syntax is much cleaner. Another benefit is that, if there were hundreds of acceptable values for 'col2' instead of three, the code would not be any longer. Here's the class definition: import numpy as np class Picker: def __init__(self, data): self._data = data self._calculate_pick_mask = self._calculate_pick_mask_meth1 def pick_data(self, colname, **kwargs): return self._data[colname][self._calculate_pick_mask(kwargs)] def pick_mask(self, **kwargs): return self._calculate_pick_mask(kwargs) def _calculate_pick_mask_meth1(self, kwargs): # Begin with all true mask = np.ones(self._data.shape, dtype=bool) for colname, ok_value_list in kwargs.items(): # OR together all records with _data['colname'] in ok_value_list one_col_mask = np.zeros_like(mask) for ok_value in ok_value_list: one_col_mask = one_col_mask | (self._data[colname] == ok_value) # AND together the full mask with the results from this column mask = mask & one_col_mask return mask def _calculate_pick_mask_meth2(self, kwargs): mask = reduce(np.logical_and, [reduce(np.logical_or, [self._data[colname] == ok_value for ok_value in ok_value_list]) \ for colname, ok_value_list in kwargs.items()]) I tried several different implementations of _calculate_pick_mask. Method 1 is the easiest to read. Method 2 is more clever but it didn't actually run any faster for me. Both approaches are much faster than the pure python [val in v for val in a] approach. 
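One hedged alternative for the membership test itself, sketched for numpy versions that predate in1d (the helper name is illustrative, not an existing numpy function): sort the allowed values once and look each element up with searchsorted, which keeps the whole test vectorised:

import numpy as np

def ismember(a, values):
    # boolean mask, True where a[i] is one of `values`
    values = np.unique(values)          # sorted, duplicates dropped
    idx = np.searchsorted(values, a)    # candidate position of each element
    idx[idx == len(values)] = 0         # clamp indices that fell off the end
    return values[idx] == a

# e.g. records where col1 is 1 and col2 is one of 1, 2, 4
mask = (x['col1'] == 1) & ismember(x['col2'], [1, 2, 4])
print x[mask]

A helper along these lines could also replace the inner loop over ok_value_list in the _calculate_pick_mask methods above.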
Is this the right data model for this kind of problem? Is this the best way to implement the filtering? For my datasets, this kind of filtering operation actually ends up taking most of the calculation time, so I'd like to do it quickly while keeping the code readable. Thanks for any comments! Chris -- Chris Rodgers Helen Wills Neuroscience Institute University of California - Berkeley From denis-bz-gg at t-online.de Mon May 23 06:54:41 2011 From: denis-bz-gg at t-online.de (denis) Date: Mon, 23 May 2011 12:54:41 +0200 Subject: [SciPy-User] wavelets for function approximation ? In-Reply-To: References: Message-ID: On 17/05/2011 12:43, josef.pktd at gmail.com wrote: > Can the wavelets in scipy.signal be used for function approximation? > Does anyone have a recipe? Here's a short recipe for an appetizer (slow food), using only simple "Daub4" wavelets. cheers -- denis -------------- next part -------------- A non-text attachment was scrubbed... Name: daub4.py Type: text/x-python-script Size: 3811 bytes Desc: not available URL: From midel at sfr.fr Mon May 23 08:05:39 2011 From: midel at sfr.fr (midel) Date: Mon, 23 May 2011 14:05:39 +0200 (CEST) Subject: [SciPy-User] a new enigma for a matlab user Message-ID: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Hi everybody, Today I face a new mystery (for me) which seems to be linked to a fundamental difference between matlab and python langage... The principle of my code (its beginning) is quite simple. I create an "x" vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are gaussian functions of x. Then I want to store these vectors in two matrices Uo and Ve (dimension nx, 20). I do this because I will modifie the vectors 19 times and i want to store every step. Here is my code : import numpy as np import scipy as sp import matplotlib.pylab as plt from numpy.fft import fft,ifft #Fonction pour creer une gaussienne def supergaussnorm(x,n): ?? ?sga=np.exp(-x**(2*n)); ?? ?return sga # fenetre transverse RES=6; mx=10; #taille de la fenetre nx=2**RES; # nombre de points x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse # two different profiles Efini_ord=1*supergaussnorm(x,1); Ehini=0*(supergaussnorm(x,1)); # kind of "saving data" matrices Uo and Ve nET=20; Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; Ve=Uo; # first iteration of data saving Uo[:,0]=Efini_ord; Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely no relation with Uo # plot plot plt.plot(Uo[:,0],'r.') plt.plot(Efini_ord) plt.show() If I run this, Uo[:,0] IS NOT Efini_ord, but a vector of zeros... If I #comment the line : #Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely no relation with Uo Now Uo[:,0] IS Efini_ord ! Everything works as if what I do with Ve has an influence on Uo. It is confirmed by the fact that if I create Ve by : Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; instead of Ve=Uo; the problem also disappears ! I was used with matlab to "clone" variables as in Ve=Uo; but it looks like such writing is totally wrong in python... Can somebody explain ? Thanks ! Midel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthieu.brucher at gmail.com Mon May 23 08:09:52 2011 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 23 May 2011 14:09:52 +0200 Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <9838300.42081306152339242.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Message-ID: When you do this: Ve=Uo; Ve and Uo point to the _same_ object. So if you want a new vector of zeros, you have to create a new one! If you want a compelx matrix, you can also directly say it: Uo = np.zeros((nx,nET), dtype=np.complex128) Matthieu 2011/5/23 midel > Hi everybody, > > Today I face a new mystery (for me) which seems to be linked to a > fundamental difference between matlab and python langage... > > The principle of my code (its beginning) is quite simple. I create an "x" > vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are > gaussian functions of x. > > Then I want to store these vectors in two matrices Uo and Ve (dimension nx, > 20). I do this because I will modifie the vectors 19 times and i want to > store every step. > > Here is my code : > > import numpy as np > import scipy as sp > > import matplotlib.pylab as plt > from numpy.fft import fft,ifft > > > #Fonction pour creer une gaussienne > def supergaussnorm(x,n): > sga=np.exp(-x**(2*n)); > return sga > > # fenetre transverse > RES=6; > mx=10; #taille de la fenetre > nx=2**RES; # nombre de points > x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse > > > # two different profiles > Efini_ord=1*supergaussnorm(x,1); > Ehini=0*(supergaussnorm(x,1)); > > # kind of "saving data" matrices Uo and Ve > nET=20; > Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > Ve=Uo; > > > # first iteration of data saving > Uo[:,0]=Efini_ord; > Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > # plot plot > plt.plot(Uo[:,0],'r.') > plt.plot(Efini_ord) > plt.show() > > > If I run this, Uo[:,0] *IS NOT* Efini_ord, but a vector of zeros... > > If I #comment the line : > > #Ve[:,0]=Ehini; #line to be commented or not, which seems to have > absolutely no relation with Uo > > > Now Uo[:,0] *IS* Efini_ord ! Everything works as if what I do with Ve has > an influence on Uo. It is confirmed by the fact that if I create Ve by : > > Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > > > instead of > > Ve=Uo; > > > the problem also disappears ! > > I was used with matlab to "clone" variables as in Ve=Uo; > > but it looks like such writing is totally wrong in python... > > Can somebody explain ? > > Thanks ! > > Midel > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Mon May 23 08:13:06 2011 From: robince at gmail.com (Robin) Date: Mon, 23 May 2011 14:13:06 +0200 Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <9838300.42081306152339242.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Message-ID: In Python you can think of everything as an object and the 'variables' are text labels that point to an object. > Ve=Uo; Here you are setting the "name" Ve to point to the same object the Uo points to. This means both of these reference the same variable. 
So when you make a change to one it is changing the same array object and you see the changes in the other. If you don't want that, you can either explicitly copy: Ve = Uo.copy() or create a seperate matrix. However, you might be confused when the same thing doesn't happen with integers: a = 1 b = a a = 2 # b doesn't change This is explained by keeping in mind the name idea... there is a '1' object and a '2' object... after b=a both names are pointing to 1 object, when you do a=2 you are rebinding a to point to the 2 object. This is different from changing the underlying object, which you do when you manipulate a slice of a numpy array. Hope this helps a bit, Cheers Robin On Mon, May 23, 2011 at 2:05 PM, midel wrote: > Hi everybody, > > Today I face a new mystery (for me) which seems to be linked to a > fundamental difference between matlab and python langage... > > The principle of my code (its beginning) is quite simple. I create an "x" > vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are > gaussian functions of x. > > Then I want to store these vectors in two matrices Uo and Ve (dimension nx, > 20). I do this because I will modifie the vectors 19 times and i want to > store every step. > > Here is my code : > > import numpy as np > import scipy as sp > > import matplotlib.pylab as plt > from numpy.fft import fft,ifft > > > #Fonction pour creer une gaussienne > def supergaussnorm(x,n): > ?? ?sga=np.exp(-x**(2*n)); > ?? ?return sga > > # fenetre transverse > RES=6; > mx=10; #taille de la fenetre > nx=2**RES; # nombre de points > x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse > > > # two different profiles > Efini_ord=1*supergaussnorm(x,1); > Ehini=0*(supergaussnorm(x,1)); > > # kind of "saving data" matrices Uo and Ve > nET=20; > Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > Ve=Uo; > > > # first iteration of data saving > Uo[:,0]=Efini_ord; > Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > # plot plot > plt.plot(Uo[:,0],'r.') > plt.plot(Efini_ord) > plt.show() > > If I run this, Uo[:,0] IS NOT Efini_ord, but a vector of zeros... > > If I #comment the line : > > #Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > Now Uo[:,0] IS Efini_ord ! Everything works as if what I do with Ve has an > influence on Uo. It is confirmed by the fact that if I create Ve by : > > Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > > instead of > > Ve=Uo; > > the problem also disappears ! > > I was used with matlab to "clone" variables as in Ve=Uo; > > but it looks like such writing is totally wrong in python... > > Can somebody explain ? > > Thanks ! > > Midel > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From midel at sfr.fr Mon May 23 08:14:48 2011 From: midel at sfr.fr (midel) Date: Mon, 23 May 2011 14:14:48 +0200 (CEST) Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <9838300.42081306152339242.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Message-ID: <22221803.44431306152888653.JavaMail.www@wsfrf1210> Thanks ! ok, so : variable1=variable2; creates a pointer, not an independant and new variable ? midel ======================================== Message du : 23/05/2011 De : "Robin " A : "midel" , "SciPy Users List" Copie ? 
: Sujet : Re: [SciPy-User] a new enigma for a matlab user In Python you can think of everything as an object and the 'variables' are text labels that point to an object. > Ve=Uo; Here you are setting the "name" Ve to point to the same object the Uo points to. This means both of these reference the same variable. So when you make a change to one it is changing the same array object and you see the changes in the other. If you don't want that, you can either explicitly copy: Ve = Uo.copy() or create a seperate matrix. However, you might be confused when the same thing doesn't happen with integers: a = 1 b = a a = 2 # b doesn't change This is explained by keeping in mind the name idea... there is a '1' object and a '2' object... after b=a both names are pointing to 1 object, when you do a=2 you are rebinding a to point to the 2 object. This is different from changing the underlying object, which you do when you manipulate a slice of a numpy array. Hope this helps a bit, Cheers Robin On Mon, May 23, 2011 at 2:05 PM, midel wrote: > Hi everybody, > > Today I face a new mystery (for me) which seems to be linked to a > fundamental difference between matlab and python langage... > > The principle of my code (its beginning) is quite simple. I create an "x" > vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are > gaussian functions of x. > > Then I want to store these vectors in two matrices Uo and Ve (dimension nx, > 20). I do this because I will modifie the vectors 19 times and i want to > store every step. > > Here is my code : > > import numpy as np > import scipy as sp > > import matplotlib.pylab as plt > from numpy.fft import fft,ifft > > > #Fonction pour creer une gaussienne > def supergaussnorm(x,n): > ?? ?sga=np.exp(-x**(2*n)); > ?? ?return sga > > # fenetre transverse > RES=6; > mx=10; #taille de la fenetre > nx=2**RES; # nombre de points > x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse > > > # two different profiles > Efini_ord=1*supergaussnorm(x,1); > Ehini=0*(supergaussnorm(x,1)); > > # kind of "saving data" matrices Uo and Ve > nET=20; > Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > Ve=Uo; > > > # first iteration of data saving > Uo[:,0]=Efini_ord; > Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > # plot plot > plt.plot(Uo[:,0],'r.') > plt.plot(Efini_ord) > plt.show() > > If I run this, Uo[:,0] IS NOT Efini_ord, but a vector of zeros... > > If I #comment the line : > > #Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > Now Uo[:,0] IS Efini_ord ! Everything works as if what I do with Ve has an > influence on Uo. It is confirmed by the fact that if I create Ve by : > > Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > > instead of > > Ve=Uo; > > the problem also disappears ! > > I was used with matlab to "clone" variables as in Ve=Uo; > > but it looks like such writing is totally wrong in python... > > Can somebody explain ? > > Thanks ! > > Midel > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amcmorl at gmail.com Mon May 23 08:14:41 2011 From: amcmorl at gmail.com (Angus McMorland) Date: Mon, 23 May 2011 08:14:41 -0400 Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <9838300.42081306152339242.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Message-ID: On 23 May 2011 08:05, midel wrote: > Hi everybody, > > Today I face a new mystery (for me) which seems to be linked to a > fundamental difference between matlab and python langage... > > The principle of my code (its beginning) is quite simple. I create an "x" > vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are > gaussian functions of x. > > Then I want to store these vectors in two matrices Uo and Ve (dimension nx, > 20). I do this because I will modifie the vectors 19 times and i want to > store every step. > > Here is my code : > > import numpy as np > import scipy as sp > > import matplotlib.pylab as plt > from numpy.fft import fft,ifft > > > #Fonction pour creer une gaussienne > def supergaussnorm(x,n): > ?? ?sga=np.exp(-x**(2*n)); > ?? ?return sga > > # fenetre transverse > RES=6; > mx=10; #taille de la fenetre > nx=2**RES; # nombre de points > x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse > > > # two different profiles > Efini_ord=1*supergaussnorm(x,1); > Ehini=0*(supergaussnorm(x,1)); > > # kind of "saving data" matrices Uo and Ve > nET=20; > Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > Ve=Uo; > > > # first iteration of data saving > Uo[:,0]=Efini_ord; > Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > # plot plot > plt.plot(Uo[:,0],'r.') > plt.plot(Efini_ord) > plt.show() > > If I run this, Uo[:,0] IS NOT Efini_ord, but a vector of zeros... > > If I #comment the line : > > #Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > Now Uo[:,0] IS Efini_ord ! Everything works as if what I do with Ve has an > influence on Uo. It is confirmed by the fact that if I create Ve by : > > Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > > instead of > > Ve=Uo; > > the problem also disappears ! > > I was used with matlab to "clone" variables as in Ve=Uo; > > but it looks like such writing is totally wrong in python... > > Can somebody explain ? > > Thanks ! > > Midel The line Ve=Uo makes Ve a view of the same memory space as Uo. You can try this more simply as: In [3]: a = np.zeros(3) In [4]: b = a In [5]: b[1] = 2 In [6]: a Out[6]: array([ 0., 2., 0.]) If you want to make a copy of Uo, you can specify it explicitly as Ve=Uo.copy(). If you just want another emptied array like Uo, you can also do this using the zeros_like function: Ve = np.zeros_like(Uo), or as Matthieu said, you can explicitly define the data type you want when you create the zeroed array. Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From midel at sfr.fr Mon May 23 08:21:45 2011 From: midel at sfr.fr (midel) Date: Mon, 23 May 2011 14:21:45 +0200 (CEST) Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <22221803.44431306152888653.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Message-ID: <33280059.47071306153305438.JavaMail.www@wsfrf1210> Wow, thanks everybody for all your answers, it is perfectly clear for me now ! 
The road from matlab to python is full of funny obstacles :) midel ======================================== Message du : 23/05/2011 De : "midel " A : "scipy-user" Copie ? : Sujet : Re: [SciPy-User] a new enigma for a matlab user Thanks ! ok, so : variable1=variable2; creates a pointer, not an independant and new variable ? midel ======================================== Message du : 23/05/2011 De : "Robin " A : "midel" , "SciPy Users List" Copie ? : Sujet : Re: [SciPy-User] a new enigma for a matlab user In Python you can think of everything as an object and the 'variables' are text labels that point to an object. > Ve=Uo; Here you are setting the "name" Ve to point to the same object the Uo points to. This means both of these reference the same variable. So when you make a change to one it is changing the same array object and you see the changes in the other. If you don't want that, you can either explicitly copy: Ve = Uo.copy() or create a seperate matrix. However, you might be confused when the same thing doesn't happen with integers: a = 1 b = a a = 2 # b doesn't change This is explained by keeping in mind the name idea... there is a '1' object and a '2' object... after b=a both names are pointing to 1 object, when you do a=2 you are rebinding a to point to the 2 object. This is different from changing the underlying object, which you do when you manipulate a slice of a numpy array. Hope this helps a bit, Cheers Robin On Mon, May 23, 2011 at 2:05 PM, midel wrote: > Hi everybody, > > Today I face a new mystery (for me) which seems to be linked to a > fundamental difference between matlab and python langage... > > The principle of my code (its beginning) is quite simple. I create an "x" > vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are > gaussian functions of x. > > Then I want to store these vectors in two matrices Uo and Ve (dimension nx, > 20). I do this because I will modifie the vectors 19 times and i want to > store every step. > > Here is my code : > > import numpy as np > import scipy as sp > > import matplotlib.pylab as plt > from numpy.fft import fft,ifft > > > #Fonction pour creer une gaussienne > def supergaussnorm(x,n): > ?? ?sga=np.exp(-x**(2*n)); > ?? ?return sga > > # fenetre transverse > RES=6; > mx=10; #taille de la fenetre > nx=2**RES; # nombre de points > x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse > > > # two different profiles > Efini_ord=1*supergaussnorm(x,1); > Ehini=0*(supergaussnorm(x,1)); > > # kind of "saving data" matrices Uo and Ve > nET=20; > Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > Ve=Uo; > > > # first iteration of data saving > Uo[:,0]=Efini_ord; > Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > # plot plot > plt.plot(Uo[:,0],'r.') > plt.plot(Efini_ord) > plt.show() > > If I run this, Uo[:,0] IS NOT Efini_ord, but a vector of zeros... > > If I #comment the line : > > #Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > Now Uo[:,0] IS Efini_ord ! Everything works as if what I do with Ve has an > influence on Uo. It is confirmed by the fact that if I create Ve by : > > Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > > instead of > > Ve=Uo; > > the problem also disappears ! > > I was used with matlab to "clone" variables as in Ve=Uo; > > but it looks like such writing is totally wrong in python... > > Can somebody explain ? > > Thanks ! 
> > Midel > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ocefpaf at gmail.com Mon May 23 08:25:14 2011 From: ocefpaf at gmail.com (Filipe Pires Alvarenga Fernandes) Date: Mon, 23 May 2011 08:25:14 -0400 Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <33280059.47071306153305438.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> <22221803.44431306152888653.JavaMail.www@wsfrf1210> <33280059.47071306153305438.JavaMail.www@wsfrf1210> Message-ID: On Mon, May 23, 2011 at 08:21, midel wrote: > Wow, thanks everybody for all your answers, it is perfectly clear for me now > ! > > The road from matlab to python is full of funny obstacles :) > > midel Hi Midel, Like you I'm moving from matlab to python, the following tables helped me a lot: http://mathesaurus.sourceforge.net/matlab-numpy.html http://www.scipy.org/NumPy_for_Matlab_Users Hope that helps you too. Filipe. From jsseabold at gmail.com Mon May 23 11:07:11 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 23 May 2011 11:07:11 -0400 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: References: Message-ID: On Mon, May 23, 2011 at 3:37 AM, Chris Rodgers wrote: > A common task for my work is slicing a record array to find records > matching some set of criteria. For any criteria beyond the most > simple, I find the syntax grows complex and implementation efficiency > starts to matter a lot. I wrote a class to encapsulate this filtering, > and I wanted to share it with the list and also get feedback on: 1) am > I approaching this problem correctly?, and 2) is the implementation > efficient for very large arrays? > I also have this problem quite frequently. I would be interested to see something like this as a method of arrays and sub-classes, or at least as a convenience function. > Here's a short script showing the desired functionality. Generally > 'col3' contains data that I want to send to some other process, and > 'col1' and 'col2' are data parameters that I want to filter by. > > x = np.recarray(shape=(100000,), > ? ?dtype=[('col1', int), ('col2', int), ('col3', float)]) > > # Fill x with actual data here > > # Find all records where 'col2' is 1, 2, or 4 > print x[(x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)] > > # Find all records where 'col1' is 1, 2, or 4; and 'col1' is 1 > print x[(x['col1'] == 1) & \ > ? ?((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4))] > > This is an "idiomatic" usage of record arrays > (http://mail.scipy.org/pipermail/numpy-discussion/2009-February/040684.html). > I certainly write this kind of code a lot. Problem #1 is that the > syntax is hard to read for long chains of conditionals. Problem 2 is > that it's hard to generalize the code when the list of acceptable > values ([1, 2, 4] in this example) has arbitrary length. For that, you > need an equivalent to `ismember` in Matlab. > > # Here's one way to do it but it's very slow for large datasets > print x[np.array([t in [1,2,4] for t in x['col2']])] > > `in1d` will add this functionality but it's not available my version > of numpy, from Synaptic in Ubuntu 10.04. `intersect1d` and > `setmember1d` don't work if the lists contain non-unique values. 
(See > http://stackoverflow.com/questions/1273041/how-can-i-implement-matlabs-ismember-command-in-python) > Your version appears to be much faster than in1d for this and comparable to doing it explicitly [~] [1]: nobs = 1000000 [~] [2]: x = np.zeros(shape=(nobs,), ...: dtype=[('col1', int), ('col2', int), ('col3', float)]) [~] [3]: np.random.seed(12345) [~] [4]: x['col1'] = np.random.randint(1,6,size=nobs) [~] [5]: x['col2'] = np.random.randint(1,6,size=nobs) [~] [6]: ok_value_list = [1,2,4] [~] [7]: colname = 'col2' [~] [8]: timeit reduce(np.logical_or, [x[colname] == i for i in ok_value_list]) 100 loops, best of 3: 14.9 ms per loop [~] [9]: timeit np.in1d(x[colname],ok_value_list) 1 loops, best of 3: 260 ms per loop [~] [10]: paste # # Picker class using pick method 2, which I found to be slightly faster ## -- End pasted text -- [~] [11]: p = Picker(x) [~] [12]: p.pick_mask(col2=[1,2,4]) [12]: array([ True, True, True, ..., True, False, True], dtype=bool) [~] [13]: timeit p=Picker(x);p.pick_mask(col2=[1,2,4]) 100 loops, best of 3: 14.9 ms per loop [~] [14]: ((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)) [14]: array([ True, True, True, ..., True, False, True], dtype=bool) [~] [15]: timeit ((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)) 100 loops, best of 3: 14.8 ms per loop > Anyway, I wrote a simple object `Picker` to encapsulate the desired > functionality. You can specify an arbitrary set of columns to filter > by, and acceptable values from each column.. So the above code would > be re-written as: > > p = Picker(data=x) > # Mask of x that matches the desired values > print p.pick_mask(col1=[1], col2=[1,2,4]) It might be nice to have a logical keyword so that these conditions could be 'and' or 'or'. > # Or if you just want 'col3' from the filtered records > print p.pick_data('col3', col1=[1], col2=[1,2,4]) > > I think the syntax is much cleaner. Another benefit is that, if there > were hundreds of acceptable values for 'col2' instead of three, the > code would not be any longer. > > Here's the class definition: > > import numpy as np > class Picker: > ? ?def __init__(self, data): > ? ? ? ?self._data = data > ? ? ? ?self._calculate_pick_mask = self._calculate_pick_mask_meth1 > > ? ?def pick_data(self, colname, **kwargs): > ? ? ? ?return self._data[colname][self._calculate_pick_mask(kwargs)] > > ? ?def pick_mask(self, **kwargs): > ? ? ? ?return self._calculate_pick_mask(kwargs) > > ? ?def _calculate_pick_mask_meth1(self, kwargs): > ? ? ? ?# Begin with all true > ? ? ? ?mask = np.ones(self._data.shape, dtype=bool) > > ? ? ? ?for colname, ok_value_list in kwargs.items(): > ? ? ? ? ? ?# OR together all records with _data['colname'] in ok_value_list > ? ? ? ? ? ?one_col_mask = np.zeros_like(mask) > ? ? ? ? ? ?for ok_value in ok_value_list: > ? ? ? ? ? ? ? ?one_col_mask = one_col_mask | (self._data[colname] == ok_value) > > ? ? ? ? ? ?# AND together the full mask with the results from this column > ? ? ? ? ? ?mask = mask & one_col_mask > > ? ? ? ?return mask > > ? ?def _calculate_pick_mask_meth2(self, kwargs): > ? ? ? ?mask = reduce(np.logical_and, > ? ? ? ? ? ? ? ? ? ? ? ?[reduce(np.logical_or, > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?[self._data[colname] == ok_value > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?for ok_value in ok_value_list]) \ > ? ? ? ? ? ? ? ? ? ? ? ? ? ?for colname, ok_value_list in kwargs.items()]) > > > I tried several different implementations of _calculate_pick_mask. > Method 1 is the easiest to read. 
Method 2 is more clever but it didn't > actually run any faster for me. Both approaches are much faster than > the pure python [val in v for val in a] approach. > > Is this the right data model for this kind of problem? Is this the > best way to implement the filtering? For my datasets, this kind of > filtering operation actually ends up taking most of the calculation > time, so I'd like to do it quickly while keeping the code readable. > > Thanks for any comments! > Chris > I'm also interested in what others think. At the very least, I think this should be added to the cookbook. It would my code much cleaner in places. Skipper From jeanpatrick.pommier at gmail.com Mon May 23 07:59:57 2011 From: jeanpatrick.pommier at gmail.com (jp) Date: Mon, 23 May 2011 04:59:57 -0700 (PDT) Subject: [SciPy-User] How to display a RGB image (MFISH)? Message-ID: <26976945.262.1306151997552.JavaMail.geo-discussion-forums@yqil2> Hi, I have written a script to try to combine five images into a three channels RGB images . The resulting RGB image looks like a grey scale image instead of a color one. I have three np.array Rnorm, Gnorm, Bnorm which are copied into a rgb array: *rgb = np.zeros((shape[0],shape[1],3),dtype=float) mxr=np.max(R) mxg=np.max(G) mxb=np.max(B) Rnorm=np.uint8((255*(R/mxr)))Gnorm=np.uint8((255*(R/mxg)))Bnorm=np.uint8((255*(R/mxb)))#copy each RGB component in an RGB array rgb[:,:,0]=Rnorm rgb[:,:,1]=Gnorm rgb[:,:,2]=Bnorm* *pylab.subplot(224, aspect='equal',frameon=False, xticks=[], yticks=[]) pylab.imshow(rgb) pylab.show()* Any advice? Thank you Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.delque at sfr.fr Mon May 23 08:19:49 2011 From: michael.delque at sfr.fr (michael delquee) Date: Mon, 23 May 2011 14:19:49 +0200 (CEST) Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <9838300.42081306152339242.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> Message-ID: <9061414.45881306153189913.JavaMail.www@wsfrf1210> Wow, thanks everybody for all your answers it is clear for me now. The road from matlab to python is full of funny obstacles :) midel ======================================== Message du : 23/05/2011 De : "midel " A : "scipy-user" Copie ? : Sujet : Re: [SciPy-User] a new enigma for a matlab user Thanks ! ok, so : variable1=variable2; creates a pointer, not an independant and new variable ? midel ======================================== Message du : 23/05/2011 De : "Robin " A : "midel" , "SciPy Users List" Copie ? : Sujet : Re: [SciPy-User] a new enigma for a matlab user In Python you can think of everything as an object and the 'variables' are text labels that point to an object. > Ve=Uo; Here you are setting the "name" Ve to point to the same object the Uo points to. This means both of these reference the same variable. So when you make a change to one it is changing the same array object and you see the changes in the other. If you don't want that, you can either explicitly copy: Ve = Uo.copy() or create a seperate matrix. However, you might be confused when the same thing doesn't happen with integers: a = 1 b = a a = 2 # b doesn't change This is explained by keeping in mind the name idea... there is a '1' object and a '2' object... after b=a both names are pointing to 1 object, when you do a=2 you are rebinding a to point to the 2 object. This is different from changing the underlying object, which you do when you manipulate a slice of a numpy array. 
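To make this concrete, here is a minimal sketch (the array size and values are arbitrary, chosen only to illustrate the two cases above):

import numpy as np

Uo = np.zeros(3, dtype=complex)
Ve = Uo            # Ve is just another name for the same array object
Ve[0] = 1.0        # mutates the shared object, so Uo sees the change too
print(Uo)          # [ 1.+0.j  0.+0.j  0.+0.j]

Ve = Uo.copy()     # an independent array; changes no longer propagate
Ve[1] = 2.0
print(Uo)          # still [ 1.+0.j  0.+0.j  0.+0.j]

a = 1
b = a
a = 2              # rebinds the name 'a' to a new object; b still points to 1
print(b)           # 1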
Hope this helps a bit, Cheers Robin On Mon, May 23, 2011 at 2:05 PM, midel wrote: > Hi everybody, > > Today I face a new mystery (for me) which seems to be linked to a > fundamental difference between matlab and python langage... > > The principle of my code (its beginning) is quite simple. I create an "x" > vector (dimension nx) and then 2 other vectors Efini_ord and Ehini which are > gaussian functions of x. > > Then I want to store these vectors in two matrices Uo and Ve (dimension nx, > 20). I do this because I will modifie the vectors 19 times and i want to > store every step. > > Here is my code : > > import numpy as np > import scipy as sp > > import matplotlib.pylab as plt > from numpy.fft import fft,ifft > > > #Fonction pour creer une gaussienne > def supergaussnorm(x,n): > ?? ?sga=np.exp(-x**(2*n)); > ?? ?return sga > > # fenetre transverse > RES=6; > mx=10; #taille de la fenetre > nx=2**RES; # nombre de points > x=np.linspace(-mx/2,mx/2,nx); #vecteur transverse > > > # two different profiles > Efini_ord=1*supergaussnorm(x,1); > Ehini=0*(supergaussnorm(x,1)); > > # kind of "saving data" matrices Uo and Ve > nET=20; > Uo=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > Ve=Uo; > > > # first iteration of data saving > Uo[:,0]=Efini_ord; > Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > # plot plot > plt.plot(Uo[:,0],'r.') > plt.plot(Efini_ord) > plt.show() > > If I run this, Uo[:,0] IS NOT Efini_ord, but a vector of zeros... > > If I #comment the line : > > #Ve[:,0]=Ehini; #line to be commented or not, which seems to have absolutely > no relation with Uo > > Now Uo[:,0] IS Efini_ord ! Everything works as if what I do with Ve has an > influence on Uo. It is confirmed by the fact that if I create Ve by : > > Ve=np.zeros((nx,nET))+np.zeros((nx,nET))*1j; > > instead of > > Ve=Uo; > > the problem also disappears ! > > I was used with matlab to "clone" variables as in Ve=Uo; > > but it looks like such writing is totally wrong in python... > > Can somebody explain ? > > Thanks ! > > Midel > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon May 23 12:19:17 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 23 May 2011 12:19:17 -0400 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: References: Message-ID: On Mon, May 23, 2011 at 11:07 AM, Skipper Seabold wrote: > On Mon, May 23, 2011 at 3:37 AM, Chris Rodgers > wrote: >> A common task for my work is slicing a record array to find records >> matching some set of criteria. For any criteria beyond the most >> simple, I find the syntax grows complex and implementation efficiency >> starts to matter a lot. I wrote a class to encapsulate this filtering, >> and I wanted to share it with the list and also get feedback on: 1) am >> I approaching this problem correctly?, and 2) is the implementation >> efficient for very large arrays? >> > > I also have this problem quite frequently. I would be interested to > see something like this as a method of arrays and sub-classes, or at > least as a convenience function. > >> Here's a short script showing the desired functionality. 
Generally >> 'col3' contains data that I want to send to some other process, and >> 'col1' and 'col2' are data parameters that I want to filter by. >> >> x = np.recarray(shape=(100000,), >> ? ?dtype=[('col1', int), ('col2', int), ('col3', float)]) >> >> # Fill x with actual data here >> >> # Find all records where 'col2' is 1, 2, or 4 >> print x[(x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)] >> >> # Find all records where 'col1' is 1, 2, or 4; and 'col1' is 1 >> print x[(x['col1'] == 1) & \ >> ? ?((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4))] >> >> This is an "idiomatic" usage of record arrays >> (http://mail.scipy.org/pipermail/numpy-discussion/2009-February/040684.html). >> I certainly write this kind of code a lot. Problem #1 is that the >> syntax is hard to read for long chains of conditionals. Problem 2 is >> that it's hard to generalize the code when the list of acceptable >> values ([1, 2, 4] in this example) has arbitrary length. For that, you >> need an equivalent to `ismember` in Matlab. >> >> # Here's one way to do it but it's very slow for large datasets >> print x[np.array([t in [1,2,4] for t in x['col2']])] >> >> `in1d` will add this functionality but it's not available my version >> of numpy, from Synaptic in Ubuntu 10.04. `intersect1d` and >> `setmember1d` don't work if the lists contain non-unique values. (See >> http://stackoverflow.com/questions/1273041/how-can-i-implement-matlabs-ismember-command-in-python) >> > > Your version appears to be much faster than in1d for this and > comparable to doing it explicitly > > [~] > [1]: nobs = 1000000 > > [~] > [2]: x = np.zeros(shape=(nobs,), > ? ...: ? ?dtype=[('col1', int), ('col2', int), ('col3', float)]) > > [~] > [3]: np.random.seed(12345) > > [~] > [4]: x['col1'] = np.random.randint(1,6,size=nobs) > > [~] > [5]: x['col2'] = np.random.randint(1,6,size=nobs) > > [~] > [6]: ok_value_list = [1,2,4] > > [~] > [7]: colname = 'col2' > > [~] > [8]: timeit reduce(np.logical_or, [x[colname] == i for i in ok_value_list]) > 100 loops, best of 3: 14.9 ms per loop > > [~] > [9]: timeit np.in1d(x[colname],ok_value_list) > 1 loops, best of 3: 260 ms per loop > > [~] > [10]: paste > # > # Picker class using pick method 2, which I found to be slightly faster > ## -- End pasted text -- > > [~] > [11]: p = Picker(x) > > [~] > [12]: p.pick_mask(col2=[1,2,4]) > [12]: array([ True, ?True, ?True, ..., ?True, False, ?True], dtype=bool) > > [~] > [13]: timeit p=Picker(x);p.pick_mask(col2=[1,2,4]) > 100 loops, best of 3: 14.9 ms per loop > > [~] > [14]: ((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)) > [14]: array([ True, ?True, ?True, ..., ?True, False, ?True], dtype=bool) > > [~] > [15]: timeit ((x['col2'] == 1) | (x['col2'] == 2) | (x['col2'] == 4)) > 100 loops, best of 3: 14.8 ms per loop > > > >> Anyway, I wrote a simple object `Picker` to encapsulate the desired >> functionality. You can specify an arbitrary set of columns to filter >> by, and acceptable values from each column.. So the above code would >> be re-written as: >> >> p = Picker(data=x) >> # Mask of x that matches the desired values >> print p.pick_mask(col1=[1], col2=[1,2,4]) > > It might be nice to have a logical keyword so that these conditions > could be 'and' or 'or'. > >> # Or if you just want 'col3' from the filtered records >> print p.pick_data('col3', col1=[1], col2=[1,2,4]) >> >> I think the syntax is much cleaner. 
Another benefit is that, if there >> were hundreds of acceptable values for 'col2' instead of three, the >> code would not be any longer. >> >> Here's the class definition: >> >> import numpy as np >> class Picker: >> ? ?def __init__(self, data): >> ? ? ? ?self._data = data >> ? ? ? ?self._calculate_pick_mask = self._calculate_pick_mask_meth1 >> >> ? ?def pick_data(self, colname, **kwargs): >> ? ? ? ?return self._data[colname][self._calculate_pick_mask(kwargs)] >> >> ? ?def pick_mask(self, **kwargs): >> ? ? ? ?return self._calculate_pick_mask(kwargs) >> >> ? ?def _calculate_pick_mask_meth1(self, kwargs): >> ? ? ? ?# Begin with all true >> ? ? ? ?mask = np.ones(self._data.shape, dtype=bool) >> >> ? ? ? ?for colname, ok_value_list in kwargs.items(): >> ? ? ? ? ? ?# OR together all records with _data['colname'] in ok_value_list >> ? ? ? ? ? ?one_col_mask = np.zeros_like(mask) >> ? ? ? ? ? ?for ok_value in ok_value_list: >> ? ? ? ? ? ? ? ?one_col_mask = one_col_mask | (self._data[colname] == ok_value) you could make this and the next inline to save an intermediate array one_col_mask |= (self._data[colname] == ok_value) >> >> ? ? ? ? ? ?# AND together the full mask with the results from this column >> ? ? ? ? ? ?mask = mask & one_col_mask mask &= one_col_mask If I remember correctly from the numpy ticket, then in1d was optimized for the case when arr2 is also large, but I thought this had already changed. Josef >> >> ? ? ? ?return mask >> >> ? ?def _calculate_pick_mask_meth2(self, kwargs): >> ? ? ? ?mask = reduce(np.logical_and, >> ? ? ? ? ? ? ? ? ? ? ? ?[reduce(np.logical_or, >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?[self._data[colname] == ok_value >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?for ok_value in ok_value_list]) \ >> ? ? ? ? ? ? ? ? ? ? ? ? ? ?for colname, ok_value_list in kwargs.items()]) >> >> >> I tried several different implementations of _calculate_pick_mask. >> Method 1 is the easiest to read. Method 2 is more clever but it didn't >> actually run any faster for me. Both approaches are much faster than >> the pure python [val in v for val in a] approach. >> >> Is this the right data model for this kind of problem? Is this the >> best way to implement the filtering? For my datasets, this kind of >> filtering operation actually ends up taking most of the calculation >> time, so I'd like to do it quickly while keeping the code readable. >> >> Thanks for any comments! >> Chris >> > > I'm also interested in what others think. At the very least, I think > this should be added to the cookbook. It would my code much cleaner in > places. > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sturla at molden.no Mon May 23 13:17:35 2011 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 May 2011 19:17:35 +0200 Subject: [SciPy-User] a new enigma for a matlab user In-Reply-To: <22221803.44431306152888653.JavaMail.www@wsfrf1210> References: <9838300.42081306152339242.JavaMail.www@wsfrf1210> <22221803.44431306152888653.JavaMail.www@wsfrf1210> Message-ID: <4DDA96AF.7010904@molden.no> Den 23.05.2011 14:14, skrev midel: > Thanks ! ok, so : > > variable1=variable2; > > creates a pointer, not an independant and new variable ? It creates a new variable, but not a new object. Observe that this also happens for sub-arrays of NumPy array: a = b[::2] means that the name 'a' points to every second element in b. So changing a[n] will also change b[2*n]. 
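A tiny check of this, with arbitrary values:

import numpy as np

b = np.arange(10)
a = b[::2]       # a view onto every second element of b; no data is copied
a[1] = 99        # writes through the view
print(b[2])      # 99, because a[1] and b[2] share the same memory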
On the other hand, Python lists do not create views. This is one of the strengths of Python and NumPy, particularly when working with large data sets. But if you are used to MATLAB, it takes some getting used to. Python and NumPy are not a free clone of MATLAB. They are very different, albeit some parts might look similar at first sight. Another thing that might surprise you is appending to a list. In Matlab, appending n elements to an array is O(N**2); in Python, appending to a list is O(N). So in Matlab you will always preallocate an array to store data, whereas in Python you can just let a list grow while you read new data. Another difference is that a NumPy array can reference any "memory" (a memory mapped file, shared memory, an image from PIL, whatever). In MATLAB you always get a copy. Sturla From bsouthey at gmail.com Mon May 23 13:23:22 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 23 May 2011 12:23:22 -0500 Subject: Re: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: References: Message-ID: <4DDA980A.4070306@gmail.com> On 05/23/2011 10:07 AM, Skipper Seabold wrote: > On Mon, May 23, 2011 at 3:37 AM, Chris Rodgers > wrote: >> A common task for my work is slicing a record array to find records >> matching some set of criteria. For any criteria beyond the most >> simple, I find the syntax grows complex and implementation efficiency >> starts to matter a lot. I wrote a class to encapsulate this filtering, >> and I wanted to share it with the list and also get feedback on: 1) am >> I approaching this problem correctly?, and 2) is the implementation >> efficient for very large arrays? >> > I also have this problem quite frequently. I would be interested to > see something like this as a method of arrays and sub-classes, or at > least as a convenience function. My thought would be that PyTables would be a better and more flexible solution, especially since fast queries are a main selling point of PyTables. Bruce From jsseabold at gmail.com Mon May 23 14:45:57 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 23 May 2011 14:45:57 -0400 Subject: Re: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: <4DDA980A.4070306@gmail.com> References: <4DDA980A.4070306@gmail.com> Message-ID: On Mon, May 23, 2011 at 1:23 PM, Bruce Southey wrote: > On 05/23/2011 10:07 AM, Skipper Seabold wrote: >> On Mon, May 23, 2011 at 3:37 AM, Chris Rodgers >> wrote: >>> A common task for my work is slicing a record array to find records >>> matching some set of criteria. For any criteria beyond the most >>> simple, I find the syntax grows complex and implementation efficiency >>> starts to matter a lot. I wrote a class to encapsulate this filtering, >>> and I wanted to share it with the list and also get feedback on: 1) am >>> I approaching this problem correctly?, and 2) is the implementation >>> efficient for very large arrays? >>> >> I also have this problem quite frequently. I would be interested to >> see something like this as a method of arrays and sub-classes, or at >> least as a convenience function. > My thought would be that PyTables would be a better and more flexible > solution, especially since fast queries are a main selling point of PyTables. > I agree that if speed (and presumably size) is the main issue, PyTables might be worth it, but I see this as lightweight, dependency-free syntactic sugar for a common use of structured arrays.
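For instance, on a numpy recent enough to have in1d, the mask from the earlier example can be built without any extra class at all; a rough sketch, reusing the column names and values from that post:

import numpy as np

x = np.zeros(10, dtype=[('col1', int), ('col2', int), ('col3', float)])
# ... fill x with real data ...
mask = np.in1d(x['col1'], [1]) & np.in1d(x['col2'], [1, 2, 4])
print(x['col3'][mask])

Whether that is fast enough for long value lists is a separate question (see the timings earlier in the thread), but it keeps the call site short.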
Skipper From bsouthey at gmail.com Mon May 23 15:42:15 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 23 May 2011 14:42:15 -0500 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: References: <4DDA980A.4070306@gmail.com> Message-ID: <4DDAB897.3090302@gmail.com> On 05/23/2011 01:45 PM, Skipper Seabold wrote: > On Mon, May 23, 2011 at 1:23 PM, Bruce Southey wrote: >> On 05/23/2011 10:07 AM, Skipper Seabold wrote: >>> On Mon, May 23, 2011 at 3:37 AM, Chris Rodgers >>> wrote: >>>> A common task for my work is slicing a record array to find records >>>> matching some set of criteria. For any criteria beyond the most >>>> simple, I find the syntax grows complex and implementation efficiency >>>> starts to matter a lot. I wrote a class to encapsulate this filtering, >>>> and I wanted to share it with the list and also get feedback on: 1) am >>>> I approaching this problem correctly?, and 2) is the implementation >>>> efficient for very large arrays? >>>> >>> I also have this problem quite frequently. I would be interested to >>> see something like this as a method of arrays and sub-classes, or at >>> least as a convenience function. >> My thought would be that PyTables would be a better and more flexible >> solution especially since fast queries are a main seeling point of PyTables. >> > I agree that if speed (and presumably size) is the main issue, > pytables might be worth it, but I see this as lightweight, > dependency-free syntactic sugar for a common use of structured arrays. > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Just create an appropriate ticket with code, docstring, examples and test cases. :-) At least then it would not get lost in the email archives. Also does it work when you have multi-dimensional arrays as record arrays? Bruce From jkhilmer at chemistry.montana.edu Mon May 23 18:31:37 2011 From: jkhilmer at chemistry.montana.edu (jkhilmer at chemistry.montana.edu) Date: Mon, 23 May 2011 16:31:37 -0600 Subject: [SciPy-User] How to display a RGB image (MFISH)? In-Reply-To: <26976945.262.1306151997552.JavaMail.geo-discussion-forums@yqil2> References: <26976945.262.1306151997552.JavaMail.geo-discussion-forums@yqil2> Message-ID: Jean-Patrick, Your general approach worked for me. I used a different function to normalize the data, and I created 'rgb' as a uint8 from the start rather than a float, but changing the dtype to float doesn't cause any problems on my system. Jonathan On Mon, May 23, 2011 at 5:59 AM, jp wrote: > Hi, > > I have written a script to try to combine five images into a three channels > RGB images. > The resulting RGB image looks like a grey scale image instead of a color > one. > > I have three np.array Rnorm, Gnorm, Bnorm which are copied into a rgb array: > > rgb = np.zeros((shape[0],shape[1],3),dtype=float) > mxr=np.max(R) > mxg=np.max(G) > mxb=np.max(B) > Rnorm=np.uint8((255*(R/mxr))) > Gnorm=np.uint8((255*(R/mxg))) > Bnorm=np.uint8((255*(R/mxb))) > #copy each RGB component in an RGB array > rgb[:,:,0]=Rnorm > rgb[:,:,1]=Gnorm > rgb[:,:,2]=Bnorm > > pylab.subplot(224, aspect='equal',frameon=False, xticks=[], yticks=[]) > pylab.imshow(rgb) > pylab.show() > > Any advice? 
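One thing worth checking in the quoted snippet: Gnorm and Bnorm are both computed from R, so all three channels come out identical, which by itself would give a grey-looking composite. A small sketch of the uint8-from-the-start approach, with stand-in random data in place of the real channel images:

import numpy as np
import pylab

R = np.random.rand(64, 64)   # stand-ins for the measured channel images
G = np.random.rand(64, 64)
B = np.random.rand(64, 64)

rgb = np.zeros(R.shape + (3,), dtype=np.uint8)
rgb[:, :, 0] = np.uint8(255 * R / R.max())
rgb[:, :, 1] = np.uint8(255 * G / G.max())   # G, not R
rgb[:, :, 2] = np.uint8(255 * B / B.max())   # B, not R
pylab.imshow(rgb)
pylab.show()

If you keep rgb as float instead, imshow expects the values scaled to [0, 1].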
> > Thank you > > Jean-Patrick > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From tmp50 at ukr.net Tue May 24 06:22:47 2011 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 24 May 2011 13:22:47 +0300 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) Message-ID: Hi all, I have made my free solver interalg (http://openopt.org/interalg) be capable of solving nonlinear equations and systems of them. Unlike scipy optimize fsolve it doesn't matter which functions are involved - convex, nonconvex, multiextremum etc. Even some discontinuous funcs can be handled. If no solution exists, interalg determines it rather quickly. For more info see http://forum.openopt.org/viewtopic.php?id=423 Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From deil.christoph at googlemail.com Tue May 24 14:45:24 2011 From: deil.christoph at googlemail.com (Christoph Deil) Date: Tue, 24 May 2011 20:45:24 +0200 Subject: [SciPy-User] Question on scipy.special.gammaincc Message-ID: I want to use scipy to compute the incomplete gamma function for negative first arguments. Using www.wolframalpha.com I find 1 - Gamma[ 1,1] ---> 0.63 1 - Gamma[-1,1] ---> 0.85 Using scipy 0.9 I find scipy.special.gammainc( 1, 1) ---> 0.63 scipy.special.gammainc(-1, 1) ---> 0 The same incorrect value 0 is returned for any negative first argument. Is this a bug? Is there another way to compute the incomplete gamma function for negative first arguments? Thanks for your help! Christoph From jsseabold at gmail.com Tue May 24 15:44:55 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 24 May 2011 15:44:55 -0400 Subject: [SciPy-User] Question on scipy.special.gammaincc In-Reply-To: References: Message-ID: On Tue, May 24, 2011 at 2:45 PM, Christoph Deil wrote: > I want to use scipy to compute the incomplete gamma function for negative first arguments. > > Using www.wolframalpha.com I find > 1 - Gamma[ 1,1] ---> 0.63 > 1 - Gamma[-1,1] ---> 0.85 > > Using scipy 0.9 I find > scipy.special.gammainc( 1, 1) ---> 0.63 > scipy.special.gammainc(-1, 1) ---> 0 > > The same incorrect value 0 is returned for any negative first argument. > > Is this a bug? This is because it returns the regularized incomplete gamma function (see the doc string). It returns zero because Gamma(n) = inf for n < 0. > Is there another way to compute the incomplete gamma function for negative first arguments? > I don't think scipy has a generalized incomplete gamma function without the regularization. You might try mpmath [~] [1]: import mpmath [~] [2]: 1 - mpmath.gammainc(-1,1) [2]: mpf('0.85150449322407795') [~] [3]: 1 - mpmath.gammainc(-1,1,regularized=True) [3]: mpf('1.0') Skipper From deil.christoph at googlemail.com Tue May 24 16:24:43 2011 From: deil.christoph at googlemail.com (Christoph Deil) Date: Tue, 24 May 2011 22:24:43 +0200 Subject: [SciPy-User] Question on scipy.special.gammaincc In-Reply-To: References: Message-ID: On May 24, 2011, at 9:44 PM, Skipper Seabold wrote: >> Is there another way to compute the incomplete gamma function for negative first arguments? >> > > I don't think scipy has a generalized incomplete gamma function > without the regularization. You might try mpmath Thanks, Skipper! mpmath.gammainc is exactly what I need. 
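For positive first arguments the unregularized value can already be recovered from scipy by multiplying the gamma factor back in; a quick cross-check, purely as an illustration:

import mpmath
from scipy import special

a, x = 1.5, 1.0
print(special.gammaincc(a, x) * special.gamma(a))   # unregularized Gamma(a, x) for a > 0
print(mpmath.gammainc(a, x))                        # same value from mpmath

It is only the negative first argument that really needs mpmath.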
For the future it would be nice if scipy.special also contained the generalized incomplete gamma function without the regularization, so that my code doesn't have to depend on mpmath in addition to scipy. Christoph From chris.rodgers at berkeley.edu Tue May 24 16:39:42 2011 From: chris.rodgers at berkeley.edu (Chris Rodgers) Date: Tue, 24 May 2011 13:39:42 -0700 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: <4DDAB897.3090302@gmail.com> References: <4DDA980A.4070306@gmail.com> <4DDAB897.3090302@gmail.com> Message-ID: Thanks to everyone for their comments! Concerning the speed of numpy.in1d: my guess is that in1d works best for arrays of comparable size, and this method works best for the special case when one array contains just a few values. I suppose it might make sense for me to break this into two objects. The first would replicate in1d for this use case. The second would supply the syntactic simplification for filtering. Concerning the use of PyTables: I definitely agree that is the answer for complex queries. I see this object as solving a narrow slice of problems between the complex (PyTables) and the trivially simple (explicit mask). For whatever reason a lot of my actual day-to-day problems fall into that category. Probably because I'm porting this code from Matlab and that's just the Matlab way of thinking about things. > Just create an appropriate ticket with code, docstring, examples and > test cases. :-) > At least then it would not get lost in the email archives. I'm happy to do that, though having never done this, I'm not sure where is "appropriate" (scipy trac, numpy trac, Cookbook, etc...) From jsseabold at gmail.com Tue May 24 16:47:40 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 24 May 2011 16:47:40 -0400 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: References: <4DDA980A.4070306@gmail.com> <4DDAB897.3090302@gmail.com> Message-ID: On Tue, May 24, 2011 at 4:39 PM, Chris Rodgers wrote: > Thanks to everyone for their comments! > > Concerning the speed of numpy.in1d: my guess is that in1d works best > for arrays of comparable size, and this method works best for the > special case when one array contains just a few values. I suppose it > might make sense for me to break this into two objects. The first > would replicate in1d for this use case. The second would supply the > syntactic simplification for filtering. > Might it make sense to just patch in1d to handle this case? I'm not so sure though. > Concerning the use of PyTables: I definitely agree that is the answer > for complex queries. I see this object as solving a narrow slice of > problems between the complex (PyTables) and the trivially simple > (explicit mask). For whatever reason a lot of my actual day-to-day > problems fall into that category. Probably because I'm porting this > code from Matlab and that's just the Matlab way of thinking about > things. > > >> Just create an appropriate ticket with code, docstring, examples and >> test cases. :-) >> At least then it would not get lost in the email archives. > > I'm happy to do that, though having never done this, I'm not sure > where is "appropriate" (scipy trac, numpy trac, Cookbook, etc...) Yeah, I was thinking about doing this myself. You might want to create a fork of numpy, implement a function or method, and then request a review. This is sure to need plenty of testing. If it's a function, where should it reside? 
I was thinking of numpy.lib.recfunctions, but it's not strictly for structured/record arrays. Any other ideas? You might find this helpful for getting started: http://docs.scipy.org/doc/numpy/dev/gitwash/index.html Skipper From josef.pktd at gmail.com Tue May 24 16:53:14 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 24 May 2011 16:53:14 -0400 Subject: [SciPy-User] Filtering record arrays by contents of columns using `ismember`-like syntax In-Reply-To: References: <4DDA980A.4070306@gmail.com> <4DDAB897.3090302@gmail.com> Message-ID: On Tue, May 24, 2011 at 4:47 PM, Skipper Seabold wrote: > On Tue, May 24, 2011 at 4:39 PM, Chris Rodgers > wrote: >> Thanks to everyone for their comments! >> >> Concerning the speed of numpy.in1d: my guess is that in1d works best >> for arrays of comparable size, and this method works best for the >> special case when one array contains just a few values. I suppose it >> might make sense for me to break this into two objects. The first >> would replicate in1d for this use case. The second would supply the >> syntactic simplification for filtering. >> > > Might it make sense to just patch in1d to handle this case? I'm not so > sure though. http://projects.scipy.org/numpy/ticket/1603 Josef > >> Concerning the use of PyTables: I definitely agree that is the answer >> for complex queries. I see this object as solving a narrow slice of >> problems between the complex (PyTables) and the trivially simple >> (explicit mask). For whatever reason a lot of my actual day-to-day >> problems fall into that category. Probably because I'm porting this >> code from Matlab and that's just the Matlab way of thinking about >> things. >> >> >>> Just create an appropriate ticket with code, docstring, examples and >>> test cases. :-) >>> At least then it would not get lost in the email archives. >> >> I'm happy to do that, though having never done this, I'm not sure >> where is "appropriate" (scipy trac, numpy trac, Cookbook, etc...) > > Yeah, I was thinking about doing this myself. You might want to create > a fork of numpy, implement a function or method, and then request a > review. This is sure to need plenty of testing. If it's a function, > where should it reside? I was thinking of numpy.lib.recfunctions, but > it's not strictly for structured/record arrays. Any other ideas? > > You might find this helpful for getting started: > http://docs.scipy.org/doc/numpy/dev/gitwash/index.html > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Tue May 24 19:09:00 2011 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 May 2011 23:09:00 +0000 (UTC) Subject: [SciPy-User] Question on scipy.special.gammaincc References: Message-ID: On Tue, 24 May 2011 15:44:55 -0400, Skipper Seabold wrote: [clip] > This is because it returns the regularized incomplete gamma function > (see the doc string). It returns zero because Gamma(n) = inf for n < 0. However, 1/Gamma(z) = 0 only at negative integers on the real line, so the return value of zero is bogus. The function should really rather return `nan` (to indicate that the evaluation is not implemented), or even better, compute the correct value. 
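For a <= 0 the correct value can in principle be obtained from the positive-a case through the downward recurrence Gamma(a, x) = (Gamma(a+1, x) - x**a * exp(-x)) / a. A small Python sketch, only to show that the right numbers come out, not a proposed implementation:

import numpy as np
from scipy import special

def upper_gamma(a, x):
    # unregularized upper incomplete gamma Gamma(a, x) for x > 0,
    # extended to a <= 0 by the recurrence above
    if a > 0:
        return special.gammaincc(a, x) * special.gamma(a)
    if a == 0:
        return special.exp1(x)     # Gamma(0, x) = E1(x)
    return (upper_gamma(a + 1, x) - x**a * np.exp(-x)) / a

print(1 - upper_gamma(-1.0, 1.0))  # ~0.8515, the value quoted earlier in this thread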
Pauli From jsseabold at gmail.com Tue May 24 21:59:29 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 24 May 2011 21:59:29 -0400 Subject: [SciPy-User] Question on scipy.special.gammaincc In-Reply-To: References: Message-ID: On Tue, May 24, 2011 at 7:09 PM, Pauli Virtanen wrote: > On Tue, 24 May 2011 15:44:55 -0400, Skipper Seabold wrote: > [clip] >> This is because it returns the regularized incomplete gamma function >> (see the doc string). It returns zero because Gamma(n) = inf for n < 0. > > However, 1/Gamma(z) = 0 only at negative integers on the real line, so the > return value of zero is bogus. The function should really rather return > `nan` (to indicate that the evaluation is not implemented), or even better, > compute the correct value. > Indeed. For the time being, would a patch to return nan be useful? I don't see how to correctly raise exceptions from C, unless someone has a pointer to an example. Skipper From jeanpatrick.pommier at gmail.com Mon May 23 15:02:45 2011 From: jeanpatrick.pommier at gmail.com (jp) Date: Mon, 23 May 2011 12:02:45 -0700 (PDT) Subject: [SciPy-User] =?utf-8?q?Re=C2=A0=3A_How_to_display_a_RGB_image_=28?= =?utf-8?q?MFISH=29=3F?= In-Reply-To: <26976945.262.1306151997552.JavaMail.geo-discussion-forums@yqil2> Message-ID: <1310796.5248.1306177365563.JavaMail.geo-discussion-forums@yqmi17> Hi, I wrote something simplerwithout using directly linear algebra -------------- next part -------------- An HTML attachment was scrubbed... URL: From yosefmel at post.tau.ac.il Wed May 25 01:54:16 2011 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Wed, 25 May 2011 08:54:16 +0300 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: References: Message-ID: <201105250854.16814.yosefmel@post.tau.ac.il> On ??? ????? 24 ??? 2011 13:22:47 Dmitrey wrote: > Hi all, > I have made my free solver interalg (http://openopt.org/interalg) be > capable of solving nonlinear equations and systems of them. Unlike > scipy optimize fsolve it doesn't matter which functions are involved - > convex, nonconvex, multiextremum etc. Even some discontinuous funcs > can be handled. If no solution exists, interalg determines it rather > quickly. > > For more info see http://forum.openopt.org/viewtopic.php?id=423 Interesting. Is there any description of the actual algorithm? I tried looking for it in the link and the openopt site, but couldn't find it. From yury at shurup.com Wed May 25 03:25:08 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Wed, 25 May 2011 09:25:08 +0200 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: References: Message-ID: <1306308308.2575.7.camel@newpride> Hi Dmitrey, On Tue, 2011-05-24 at 13:22 +0300, Dmitrey wrote: > For more info see http://forum.openopt.org/viewtopic.php?id=423 1) Are there any scientific publications detailing the algorithm that can be referenced from a paper? 2) Sorry, I am not sure if I've got it right from the examples. Is it true that your implementation of interalg is impossible to use outside of OpenOpt the way one would, for example, use fmin from SciPy? I am currently using downhill simplex because that's almost the only algorithm that does not diverge or get stuck for my non-linear problem and my objective function can't be expressed as a simple algebraic expression of elementary functions. 
Computing it actually involves summing over series of exponential integrals and other nasty things, which require tight loops so I wrote a Python extension is C to speed it up. I can pass fmin any function, is there a way to do the same for interalg? If not, are there plans to implement it or how much effort would this involve? Thanks! -- Sincerely yours, Yury V. Zaytsev From tmp50 at ukr.net Wed May 25 03:28:56 2011 From: tmp50 at ukr.net (Dmitrey) Date: Wed, 25 May 2011 10:28:56 +0300 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: <201105250854.16814.yosefmel@post.tau.ac.il> References: <201105250854.16814.yosefmel@post.tau.ac.il> Message-ID: --- ???????? ????????? --- ?? ????: "Yosef Meller" ????: scipy-user at scipy.org ????: 25 ??? 2011, 08:54:16 ????: Re: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) On ??? ????? 24 ??? 2011 13:22:47 Dmitrey wrote: > Hi all, > I have made my free solver interalg ( http://openopt.org/interalg ) be > capable of solving nonlinear equations and systems of them. Unlike > scipy optimize fsolve it doesn't matter which functions are involved - > convex, nonconvex, multiextremum etc. Even some discontinuous funcs > can be handled. If no solution exists, interalg determines it rather > quickly. > > For more info see http://forum.openopt.org/viewtopic.php?id=423 Interesting. Is there any description of the actual algorithm? I tried looking for it in the link and the openopt site, but couldn't find it. Algorithm belongs to family of interval methods. Those ones, along with Lipschitz methods, are capable of doing the task (searching extremum with guaranteed precision), but require extremely much computational time and memory, thus are very rarely used. Some ideas created and programmed by me increased speed in many orders (see that benchmark vs Direct, intsolver and commercial BARON), memory consumption also isn't very huge. As for solving equations system, currently |f1| + |f2| + |f3| + ... + |f_k| is minimized (| . | means abs( . )). I know better way of handling nonlinear systems by interalg, but it would take much time to write those enhancements (maybe some weeks or even more), my current work doesn't allow me to spend so much time for interalg and other openopt items development. Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From meesters at aesku.com Wed May 25 03:36:14 2011 From: meesters at aesku.com (Meesters, Christian) Date: Wed, 25 May 2011 07:36:14 +0000 Subject: [SciPy-User] curve_fit - fitting a sum of 'functions' Message-ID: <8E882955B5BEA54BA86AB84407D7BBE301EEED@AESKU-EXCH01.AESKU.local> Hi, I'm dealing with a sigmoid curve which is pretty noisy at its maximum and the maximum is about three orders of magnitude bigger than the minimum. Just by eye I see that the data contain more than one 'mode' or, in other words, these curves have more than one step. In order to describe the data I would like to fit a sum normal cdf's like: def normald(xdata, *args): if len(args) == 1: args = args[0] if len(args) < 4 or (len(args) - 1) % 3 != 0: raise AssertionError("Argument list too short or incomplete") rec = np.zeros(len(xdata)) base = args[0] for amp, e50, var in zip(args[1::3], args[2::3], args[3::3]): rec += norm.cdf(xdata, loc = e50, scale = var) * amp return rec + base (If somebody knows a more elegant way to describe this, I would be glad to read it. 
Fitting common exponential works as good or bad and the data is readily interpreted with cdfs of the normal distribution.) Anyway, this than is passed to scipy's curve_fit: from scipy.optimize import curve_fit popt, pcov = curve_fit(normald, xdata, ydata, p0 = guess) My problem now is with the 'guess' parameter: If handing over parameters as estimated by eye, all works fine, but more general approaches failed so far. I tried several approaches where the 'guesses' for the additional modes were appended within a loop and subsequently fitted, e.g. 'substract the fit from the original data and guess maximum, minimum, inflection point and append the new parameters to 'guess''. This, of course, is naive, because the upper part is so noisy and because the maximum is so much higher than the other modes, an least squares fit first tries to fit the noise. Truncating the data above the highest inflection point has no effect, so: Does anybody have an idea how to approach this problem in a more elegant / robust way? Any pointer is appreciated. (Do I need to describe the problem better?) TIA, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmp50 at ukr.net Wed May 25 03:36:44 2011 From: tmp50 at ukr.net (Dmitrey) Date: Wed, 25 May 2011 10:36:44 +0300 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: <1306308308.2575.7.camel@newpride> References: <1306308308.2575.7.camel@newpride> Message-ID: --- ???????? ????????? --- ?? ????: "Yury V. Zaytsev" ????: "SciPy Users List" ????: 25 ??? 2011, 10:25:08 ????: Re: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) Hi Dmitrey, On Tue, 2011-05-24 at 13:22 +0300, Dmitrey wrote: > For more info see http://forum.openopt.org/viewtopic.php?id=423 1) Are there any scientific publications detailing the algorithm that can be referenced from a paper? not yet. 2) Sorry, I am not sure if I've got it right from the examples. Is it true that your implementation of interalg is impossible to use outside of OpenOpt the way one would, for example, use fmin from SciPy? As it is mentioned in http://openopt.org/interalg , only FuncDesigner models can be handled (because interval analysis is required). However, you could easily compare scipy optimize fmin and other openopt-connected solvers with interalg. I am currently using downhill simplex because that's almost the only algorithm that does not diverge or get stuck for my non-linear problem and my objective function can't be expressed as a simple algebraic expression of elementary functions. Computing it actually involves summing over series of exponential integrals and other nasty things, which require tight loops so I wrote a Python extension is C to speed it up. I can pass fmin any function, is there a way to do the same for interalg? If not, are there plans to implement it or how much effort would this involve? From all your questions, it seems you haven't read interalg webpage. Currently only these funcs are supported: +, -, *, /, pow (**), sin, cos, arcsin, arccos, arctan, sinh, cosh, exp, sqrt, abs, log, log2, log10, floor, ceil Future plans: 1-D splines , min, max Also, any monotone func R->R (or with known "critical points", where order of monotonity changes) can be easily connected. Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Wed May 25 03:43:44 2011 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Wed, 25 May 2011 09:43:44 +0200 Subject: [SciPy-User] curve_fit - fitting a sum of 'functions' In-Reply-To: <8E882955B5BEA54BA86AB84407D7BBE301EEED@AESKU-EXCH01.AESKU.local> References: <8E882955B5BEA54BA86AB84407D7BBE301EEED@AESKU-EXCH01.AESKU.local> Message-ID: <1306309424.2575.11.camel@newpride> On Wed, 2011-05-25 at 07:36 +0000, Meesters, Christian wrote: > My problem now is with the 'guess' parameter: If handing over > parameters as estimated by eye, all works fine, but more general > approaches failed so far. I don't know if it would seem to make any sense to you, but you could heavily filter your function with something like Savitzky?Golay smoothing filter (it's basically just a weighted sum, so no numerical heavy-lifting is involved) and use the filtered version to derive your guesses, if the noise is really the problem... -- Sincerely yours, Yury V. Zaytsev From meesters at aesku.com Wed May 25 05:05:23 2011 From: meesters at aesku.com (Meesters, Christian) Date: Wed, 25 May 2011 09:05:23 +0000 Subject: [SciPy-User] curve_fit - fitting a sum of 'functions' In-Reply-To: <1306309424.2575.11.camel@newpride> References: <8E882955B5BEA54BA86AB84407D7BBE301EEED@AESKU-EXCH01.AESKU.local>, <1306309424.2575.11.camel@newpride> Message-ID: <8E882955B5BEA54BA86AB84407D7BBE301EF1E@AESKU-EXCH01.AESKU.local> Oh, thanks. I forgot to mention that I tried a) a Savitzky-Golay filter, b) Wiener-Filtering and c) B-Spline smoothing. No effect, then I stopped. Point is that the noise is just a problem on the plateau and both, filtering and smoothing, either level this noise and part of the actual data or - depending on the parameters - follow and enhance the noise. Yet, the noise can be truncated and still the fits aren't stable. ________________________________________ From: scipy-user-bounces at scipy.org [scipy-user-bounces at scipy.org] on behalf of Yury V. Zaytsev [yury at shurup.com] Sent: Wednesday, May 25, 2011 9:43 AM To: SciPy Users List Subject: Re: [SciPy-User] curve_fit - fitting a sum of 'functions' On Wed, 2011-05-25 at 07:36 +0000, Meesters, Christian wrote: > My problem now is with the 'guess' parameter: If handing over > parameters as estimated by eye, all works fine, but more general > approaches failed so far. I don't know if it would seem to make any sense to you, but you could heavily filter your function with something like Savitzky?Golay smoothing filter (it's basically just a weighted sum, so no numerical heavy-lifting is involved) and use the filtered version to derive your guesses, if the noise is really the problem... -- Sincerely yours, Yury V. Zaytsev _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Wed May 25 05:28:44 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 25 May 2011 09:28:44 +0000 (UTC) Subject: [SciPy-User] Question on scipy.special.gammaincc References: Message-ID: Tue, 24 May 2011 21:59:29 -0400, Skipper Seabold wrote: [clip] > Indeed. For the time being, would a patch to return nan be useful? I > don't see how to correctly raise exceptions from C, unless someone has a > pointer to an example. A patch would be useful. In addition to returning `nan`, call mtherr("gammaincc", DOMAIN) from "cephes/mconf.h", which will raise a warning (and maybe an error) depending on what the user has set up. Pauli From yury at shurup.com Wed May 25 08:55:41 2011 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Wed, 25 May 2011 14:55:41 +0200 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: References: <1306308308.2575.7.camel@newpride> Message-ID: <1306328141.2575.88.camel@newpride> Hi Dmitrey, On Wed, 2011-05-25 at 10:36 +0300, Dmitrey wrote: > 1) Are there any scientific publications detailing the algorithm that > can be referenced from a paper? > > not yet. Please keep us posted when it happens! > 2) Sorry, I am not sure if I've got it right from the examples. > > Is it true that your implementation of interalg is impossible to use > outside of OpenOpt the way one would, for example, use fmin from SciPy? > > From all your questions, it seems you haven't read interalg webpage. Yes, sorry, it's a shame, I was distracted by the benchmarks page :-/ It would be helpful as well if it would mention the size of the problems that interalg is suitable for. I have > 1000 variables and a scalar non-linear objective function (which is proven to be concave, though). Also, are the box bound constraints necessary for the algorithm to work or the support for infinite constraints could be added later? > As it is mentioned in http://openopt.org/interalg , only FuncDesigner > models can be handled (because interval analysis is required). Ok, now it's clear. I have read the FuncDesigner documentation page again and I think that now I understand it a little bit better. Will it be correct if I summarize that in order to use interalg: the objective function has to be constructed using FuncDesigner as an oofunc, where every instance that depends on the optimization variables (oovars) has to be an oofunc constructed from other oofuncs or oovars using only numbers, FuncDesigner mathematical functions (you named below) and Python for loops / ifThenElse expressions etc.? Can you estimate the memory overhead of using an oovar / oofunc as opposed to a NumPy array? I.e., I think I would need like 3 x 5 000 000 oovars in size and few hundreds individual oovars for my function. When I use a NumPy float arrays for storage and Cython to implement my for loops in C, it only needs few hundred megabytes of RAM and works quite fast (x100 times faster than pure Python + NumPy vectorized operations). Therefore, I imagine that without JIT this is hopeless? Does OpenOpt work with PyPy as of yet? > However, you could easily compare scipy optimize fmin and other > openopt-connected solvers with interalg. ... which is, of course, very nice, except that for this you have first to implement your model in FuncDesigner :-( > Currently only these funcs are supported: > +, -, *, /, pow (**), sin, cos, arcsin, arccos, arctan, sinh, cosh, > exp, sqrt, abs, log, log2, log10, floor, ceil > Future plans: 1-D splines , min, max > Also, any monotone func R->R (or with known "critical points", where > order of monotonity changes) can be easily connected. Ok, I see, right now I need expi (as in SciPy) in addition to that. Is there some documentation on how to connect my own functions it to OpenOpt, so that they work like the supplied ones with regards to interval analysis and automatic differentiation that is needed of interalg? Thanks! -- Sincerely yours, Yury V. 
Zaytsev From yosefm at gmail.com Wed May 25 02:04:30 2011 From: yosefm at gmail.com (Yosef Meller) Date: Wed, 25 May 2011 09:04:30 +0300 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: References: Message-ID: 2011/5/24 Dmitrey > Hi all, > I have made my free solver interalg (http://openopt.org/interalg) be > capable of solving nonlinear equations and systems of them. Unlike scipy > optimize fsolve it doesn't matter which functions are involved - convex, > nonconvex, multiextremum etc. Even some discontinuous funcs can be handled. > If no solution exists, interalg determines it rather quickly. > > For more info see http://forum.openopt.org/viewtopic.php?id=423 > Interesting. Is there any description of the actual algorithm? I tried looking for it in the link and the openopt site, but couldn't find it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmp50 at ukr.net Wed May 25 09:49:38 2011 From: tmp50 at ukr.net (Dmitrey) Date: Wed, 25 May 2011 16:49:38 +0300 Subject: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) In-Reply-To: <1306328141.2575.88.camel@newpride> References: <1306328141.2575.88.camel@newpride> <1306308308.2575.7.camel@newpride> Message-ID: --- Original message --- From: "Yury V. Zaytsev" To: "Dmitrey" Date: 25 May 2011, 15:55:41 Subject: Re: [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s) Hi Dmitrey, On Wed, 2011-05-25 at 10:36 +0300, Dmitrey wrote: > 1) Are there any scientific publications detailing the algorithm that > can be referenced from a paper? > > not yet. Please keep us posted when it happens! > 2) Sorry, I am not sure if I've got it right from the examples. > > Is it true that your implementation of interalg is impossible to use > outside of OpenOpt the way one would, for example, use fmin from SciPy? > > From all your questions, it seems you haven't read the interalg webpage. Yes, sorry, it's a shame, I was distracted by the benchmarks page :-/ It would also be helpful if it mentioned the size of the problems that interalg is suitable for. I have > 1000 variables and a scalar non-linear objective function (which is proven to be concave, though). It depends on the hardware involved, so I cannot formulate it precisely. Currently interalg will hardly handle a problem with such a big nVars if any variable is present at least twice. I tested it on problems with 1...100 variables. For the problem you've mentioned I would consider using the local solvers ralg or gsubg (http://openopt.org/ralg, http://openopt.org/gsubg) Also, are the box bound constraints necessary for the algorithm to work, or could support for infinite constraints be added later? Only finite values are required (but you could use rather big ones) > As it is mentioned in http://openopt.org/interalg , only FuncDesigner > models can be handled (because interval analysis is required). Ok, now it's clear. I have read the FuncDesigner documentation page again and I think that now I understand it a little bit better.
Will it be correct if I summarize that in order to use interalg: the objective function has to be constructed using FuncDesigner as an oofunc, where every instance that depends on the optimization variables (oovars) has to be an oofunc constructed from other oofuncs or oovars using only numbers, FuncDesigner mathematical functions (you named below) and Python for loops / ifThenElse expressions ifThenElse expressions can't be handled (interval arithmetics is unimplemented for them yet), but I intend to implement FuncDesigner theta-function theta(x) = 0 if x <= 0 else 1, and you can use the func instead of ifThenElse, e.g. instead of ifThenElse(a**2+b**2 However, you could easily compare scipy optimize fmin and other > openopt-connected solvers with interalg. ... which is, of course, very nice, except that for this you have first to implement your model in FuncDesigner :-( > Currently only these funcs are supported: > +, -, *, /, pow (**), sin, cos, arcsin, arccos, arctan, sinh, cosh, > exp, sqrt, abs, log, log2, log10, floor, ceil > Future plans: 1-D splines , min, max > Also, any monotone func R->R (or with known "critical points", where > order of monotonity changes) can be easily connected. Ok, I see, right now I need expi (as in SciPy) in addition to that. Is there some documentation on how to connect my own functions it to OpenOpt, did you meant to FuncDesigner? You can use myoofun = oofun(scipy.special.expi, input, d=lambda x: exp(x)/x) See http://openopt.org/FuncDesignerDoc#Creating_special_oofuns But connecting interval analysis feature is a little bit harder. so that they work like the supplied ones with regards to interval analysis and automatic differentiation that is needed of interalg? The function seems to decrease in (-inf, 0) and grow up in (0, inf), am I right? If yes, it could be quite easily connected, however, the func is not defined in zero, that brings some inconveniences. Thanks! -- Sincerely yours, Yury V. Zaytsev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Wed May 25 12:08:48 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 25 May 2011 12:08:48 -0400 Subject: [SciPy-User] Question on scipy.special.gammaincc In-Reply-To: References: Message-ID: On Wed, May 25, 2011 at 5:28 AM, Pauli Virtanen wrote: > Tue, 24 May 2011 21:59:29 -0400, Skipper Seabold wrote: > [clip] >> Indeed. For the time being, would a patch to return nan be useful? I >> don't see how to correctly raise exceptions from C, unless someone has a >> pointer to an example. > > A patch would be useful. In addition to returning `nan`, call > > ? ? ? ?mtherr("gammaincc", DOMAIN) > > from "cephes/mconf.h", which will raise a warning (and maybe an error) > depending on what the user has set up. > Right, thanks. I was also wondering if it's possible to raise something like a NotImplementedError for negative real numbers since only the negative integers and zero are poles. I did as you suggested though. I also think the cephes docs are slightly misleading for incomplete gamma. It says it only works for positive arguments for both, but I think the integration limit should be allowed to be zero (the chi-square and gamma distribution functions depend on this) no matter what the other arguments are. I also updated the docs. Can someone check this small patch? 
https://github.com/jseabold/scipy/compare/master...gammainc-nanfix Versus mpmath: [20]: lower_int = 0 [cephes] [21]: special.gammainc(0,1) [21]: nan [cephes] [22]: mpmath.gammainc(0,lower_int,1, regularized=True) [22]: mpf('+inf') [cephes] [23]: special.gammainc(1,0) [23]: 0.0 [cephes] [24]: mpmath.gammainc(1,lower_int,0, regularized=True) [24]: mpf('0.0') [cephes] [25]: special.gammainc(0,0) [25]: 0.0 [cephes] [26]: mpmath.gammainc(0,lower_int,0, regularized=True) [26]: mpf('0.0') [cephes] [27]: special.gammainc(1,1) [27]: 0.63212055882855778 [cephes] [28]: mpmath.gammainc(1,lower_int,1, regularized=True) [28]: mpf('0.63212055882855767') Skipper From robfalck at gmail.com Wed May 25 15:14:14 2011 From: robfalck at gmail.com (Rob Falck) Date: Wed, 25 May 2011 12:14:14 -0700 (PDT) Subject: [SciPy-User] Quadrature of a vector-valued function Message-ID: <63676172-9755-4a02-ac9e-22ef21ad9f89@s2g2000yql.googlegroups.com> Hello, I'm trying to use a quadrature to integrate several equations, and all have similar up-front calculations. However, quad as implemented in scipy.integrate only seems capable of handling scalar functions, meaning I have to break my vector-valued function up into several scalar functions. Also, since quad is adaptive, I can't necessarily store the common calculations because theres no guarantee that quad will use the same grid when integrating each function. As an example, in the following code I would like the integral of vector_func to return N.array([14,10]), but instead it errors out : import numpy as N from scipy.integrate import quad def scalar_func(x): return 3*x**2+2*x+1 def vector_func(x): a = 3*x**2+2*x+1 b = 2*x+3 return N.array([a,b]) print quad(scalar_func,0,2) print quad(vector_func,0,2) The output generated is: (14.0, 1.5543122344752192e-13) Traceback (most recent call last): File "quadv.py", line 15, in print quad(vector_func,0,2) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/site-packages/scipy/integrate/quadpack.py", line 245, in quad retval = _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/site-packages/scipy/integrate/quadpack.py", line 309, in _quad return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) quadpack.error: Supplied function does not return a valid float. Any ideas how to best approach this? The other integration routines (quadrature, romberg, etc) don't seem to be able to do this either. From warren.weckesser at enthought.com Wed May 25 15:29:31 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 25 May 2011 14:29:31 -0500 Subject: [SciPy-User] Quadrature of a vector-valued function In-Reply-To: <63676172-9755-4a02-ac9e-22ef21ad9f89@s2g2000yql.googlegroups.com> References: <63676172-9755-4a02-ac9e-22ef21ad9f89@s2g2000yql.googlegroups.com> Message-ID: Hi Rob, On Wed, May 25, 2011 at 2:14 PM, Rob Falck wrote: > Hello, > > I'm trying to use a quadrature to integrate several equations, and all > have similar up-front calculations. However, quad as implemented in > scipy.integrate only seems capable of handling scalar functions, > meaning I have to break my vector-valued function up into several > scalar functions. Also, since quad is adaptive, I can't necessarily > store the common calculations because theres no guarantee that quad > will use the same grid when integrating each function. 
> For some use cases, splitting up the function is preferable, especially if the evaluating the functions is computationally expense. If one component is changing rapidly while another isn't, extra time would be spent making unnecessary calls to the nice function. > > As an example, in the following code I would like the integral of > vector_func to return N.array([14,10]), but instead it errors out : > > import numpy as N > from scipy.integrate import quad > > def scalar_func(x): > return 3*x**2+2*x+1 > > def vector_func(x): > a = 3*x**2+2*x+1 > b = 2*x+3 > return N.array([a,b]) > > print quad(scalar_func,0,2) > print quad(vector_func,0,2) > > The output generated is: > > (14.0, 1.5543122344752192e-13) > Traceback (most recent call last): > File "quadv.py", line 15, in > print quad(vector_func,0,2) > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ > python2.7/site-packages/scipy/integrate/quadpack.py", line 245, in > quad > retval = > _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points) > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ > python2.7/site-packages/scipy/integrate/quadpack.py", line 309, in > _quad > return > _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) > quadpack.error: Supplied function does not return a valid float. > > > Any ideas how to best approach this? The other integration routines > (quadrature, romberg, etc) don't seem to be able to do this either. > Not sure if this is how to "best" approach this, but if you really want to do it, you could use odeint: In [369]: from scipy.integrate import odeint In [370]: result = odeint(lambda v, t: vector_func(t), [0,0], [0,2]) In [371]: result[1] Out[371]: array([ 14.00000002, 10. ]) Warren > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robfalck at gmail.com Wed May 25 16:21:42 2011 From: robfalck at gmail.com (Rob Falck) Date: Wed, 25 May 2011 13:21:42 -0700 (PDT) Subject: [SciPy-User] Quadrature of a vector-valued function In-Reply-To: References: <63676172-9755-4a02-ac9e-22ef21ad9f89@s2g2000yql.googlegroups.com> Message-ID: <066fe1c4-f9b5-4c95-bdb1-fa75a6f05ec4@v10g2000yqn.googlegroups.com> Thanks for the quick reply. The odeint solution is worth considering, but I suspect it uses more intervals than a quadrature typically would. Also, the point about the adaptive one possibly being faster when not vectorized is a good point, although in my case I don't think any of the equations is substantially "faster" than the others. I was able to kludge fixed_quad to get the results I want. This should be extensible to the variable-step quadrature method, since it just calls fixed-quad repeatedly. Right now this doesn't utilize the "vectorize" utility in scipy.integrate, but it gets the job done. import numpy as N from scipy.special.orthogonal import p_roots def fixed_quadv(func,a,b,args=(),n=5,vec_func=False): """ Compute a definite integral using fixed-order Gaussian quadrature. Integrate `func` from a to b using Gaussian quadrature of order n. Parameters ---------- func : callable A Python function or method to integrate. Must have a form compatible with func(x,*args). If x may be a vector, then vec_func option should be True. If func is vector-valued, then it should return an array of shape (size(x),2). a : float Lower limit of integration. b : float Upper limit of integration. 
args : tuple, optional Extra arguments to pass to function, if any. n : int, optional Order of quadrature integration. Default is 5. vec_func : boolean, optional True if func accepts vector input for x. Returns ------- val : float Gaussian quadrature approximation to the integral See Also -------- fixed_quad: fixed-order Gaussian quadrature quad : adaptive quadrature using QUADPACK dblquad, tplquad : double and triple integrals romberg : adaptive Romberg quadrature quadrature : adaptive Gaussian quadrature romb, simps, trapz : integrators for sampled data cumtrapz : cumulative integration for sampled data ode, odeint - ODE integrators """ [x,w] = p_roots(n) x = N.real(x) ainf, binf = map(N.isinf,(a,b)) if ainf or binf: raise ValueError("Gaussian quadrature is only available for " "finite limits.") if vec_func: fx = (((b-a)*x)+(a+b))/2.0 accum = N.dot(w,func(fx,*args)) y = accum*(b-a)/2.0 else: accum = 0 for i in range(n): accum += w[i] * func((((b-a)*x[i])+(a+b))/2.0, *args ) y = accum*(b-a)/2.0 return y, None def vector_func(x): result = N.zeros([N.size(x),2]) a = 3*x**2+2*x+1 result[:,0] = a b = 2*x+3 result[:,1] = b return result print fixed_quadv(vector_func,0,2,vec_func=True) On May 25, 3:29?pm, Warren Weckesser wrote: > Hi Rob, > > On Wed, May 25, 2011 at 2:14 PM, Rob Falck wrote: > > Hello, > > > I'm trying to use a quadrature to integrate several equations, and all > > have similar up-front calculations. ?However, quad as implemented in > > scipy.integrate only seems capable of handling scalar functions, > > meaning I have to break my vector-valued function up into several > > scalar functions. ?Also, since quad is adaptive, I can't necessarily > > store the common calculations because theres no guarantee that quad > > will use the same grid when integrating each function. > > For some use cases, splitting up the function is preferable, especially if > the evaluating the functions is computationally expense. ?If one component > is changing rapidly while another isn't, extra time would be spent making > unnecessary calls to the nice function. > > > > > > > As an example, in the following code I would like the integral of > > vector_func to return N.array([14,10]), but instead it errors out : > > > import numpy as N > > from scipy.integrate import quad > > > def scalar_func(x): > > ? ?return 3*x**2+2*x+1 > > > def vector_func(x): > > ? ?a = 3*x**2+2*x+1 > > ? ?b = 2*x+3 > > ? ?return N.array([a,b]) > > > print quad(scalar_func,0,2) > > print quad(vector_func,0,2) > > > The output generated is: > > > (14.0, 1.5543122344752192e-13) > > Traceback (most recent call last): > > ?File "quadv.py", line 15, in > > ? ?print quad(vector_func,0,2) > > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ > > python2.7/site-packages/scipy/integrate/quadpack.py", line 245, in > > quad > > ? ?retval = > > _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points) > > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ > > python2.7/site-packages/scipy/integrate/quadpack.py", line 309, in > > _quad > > ? ?return > > _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) > > quadpack.error: Supplied function does not return a valid float. > > > Any ideas how to best approach this? ?The other integration routines > > (quadrature, romberg, etc) don't seem to be able to do this either. 
> > Not sure if this is how to "best" approach this, but if you really want to > do it, you could use odeint: > > In [369]: from scipy.integrate import odeint > > In [370]: result = odeint(lambda v, t: vector_func(t), [0,0], [0,2]) > > In [371]: result[1] > Out[371]: array([ 14.00000002, ?10. ? ? ? ?]) > > Warren > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-U... at scipy.org > >http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From david_baddeley at yahoo.com.au Wed May 25 17:28:09 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 25 May 2011 14:28:09 -0700 (PDT) Subject: [SciPy-User] curve_fit - fitting a sum of 'functions' In-Reply-To: <8E882955B5BEA54BA86AB84407D7BBE301EF1E@AESKU-EXCH01.AESKU.local> References: <8E882955B5BEA54BA86AB84407D7BBE301EEED@AESKU-EXCH01.AESKU.local>, <1306309424.2575.11.camel@newpride> <8E882955B5BEA54BA86AB84407D7BBE301EF1E@AESKU-EXCH01.AESKU.local> Message-ID: <457215.19239.qm@web113411.mail.gq1.yahoo.com> Have you got any way of determining the magnitude of the noise? You could try a weighted fit (although you might have to take a step backwards and use lsqnonlin - I'm not sure if curve_fit deals with weights. I think the spline fitting functions can also take weights, which might make the spline smoothing approach worth another shot. ----- Original Message ---- From: "Meesters, Christian" To: SciPy Users List Sent: Wed, 25 May, 2011 9:05:23 PM Subject: Re: [SciPy-User] curve_fit - fitting a sum of 'functions' Oh, thanks. I forgot to mention that I tried a) a Savitzky-Golay filter, b) Wiener-Filtering and c) B-Spline smoothing. No effect, then I stopped. Point is that the noise is just a problem on the plateau and both, filtering and smoothing, either level this noise and part of the actual data or - depending on the parameters - follow and enhance the noise. Yet, the noise can be truncated and still the fits aren't stable. ________________________________________ From: scipy-user-bounces at scipy.org [scipy-user-bounces at scipy.org] on behalf of Yury V. Zaytsev [yury at shurup.com] Sent: Wednesday, May 25, 2011 9:43 AM To: SciPy Users List Subject: Re: [SciPy-User] curve_fit - fitting a sum of 'functions' On Wed, 2011-05-25 at 07:36 +0000, Meesters, Christian wrote: > My problem now is with the 'guess' parameter: If handing over > parameters as estimated by eye, all works fine, but more general > approaches failed so far. I don't know if it would seem to make any sense to you, but you could heavily filter your function with something like Savitzky?Golay smoothing filter (it's basically just a weighted sum, so no numerical heavy-lifting is involved) and use the filtered version to derive your guesses, if the noise is really the problem... -- Sincerely yours, Yury V. 
Zaytsev _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From gael.varoquaux at normalesup.org Wed May 25 17:40:03 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 25 May 2011 23:40:03 +0200 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110525213533.GC9388@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> Message-ID: <20110525214003.GD9388@phare.normalesup.org> Sorry for the noise, I sent this to the dev list, while it belongs to the user list. Hi list, I am looking at estimating entropy and conditional entropy from data for which I have only access to observations, and not the underlying probabilistic laws. With low dimensional data, I would simply use an empirical estimate of the probabilities by converting each observation to its quantile, and then apply the standard formula for entropy (for instance using scipy.stats.entropy). However, I have high-dimensional data (~100 features, and 30000 observations). Not only is it harder to convert observations to probabilities in the empirical law, but I am also worried of curse of dimensionality effects: density estimation in high-dimension is a difficult problem. Does anybody has advices, or code in Python to point to, for this task? Cheers, Ga?l From robert.kern at gmail.com Wed May 25 18:27:26 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 May 2011 17:27:26 -0500 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110525214003.GD9388@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: On Wed, May 25, 2011 at 16:40, Gael Varoquaux wrote: > Hi list, > > I am looking at estimating entropy and conditional entropy from data for > which I have only access to observations, and not the underlying > probabilistic laws. > > With low dimensional data, I would simply use an empirical estimate of > the probabilities by converting each observation to its quantile, and > then apply the standard formula for entropy (for instance using > scipy.stats.entropy). > > However, I have high-dimensional data (~100 features, and 30000 > observations). Not only is it harder to convert observations to > probabilities in the empirical law, but I am also worried of curse of > dimensionality effects: density estimation in high-dimension is a > difficult problem. > > Does anybody has advices, or code in Python to point to, for this task? This is just from a quick Googling, but it looks like the main approach is to partition the space into equal-density chunks using an appropriate partitioning scheme. This one uses k-d trees: Fast multidimensional entropy estimation by k-d partitioning http://www.elec.qmul.ac.uk/digitalmusic/papers/2009/StowellPlumbley09entropy.pdf This one uses a Voronoi tessellation: A new class of entropy estimators for multi-dimensional densities http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.9.5037&rep=rep1&type=pdf They are both unsuitable for ndim=100, however. You may be able to do something similar with a ball tree and maybe even get a paper out of it. 
:-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From njs at pobox.com Wed May 25 18:43:30 2011 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 25 May 2011 15:43:30 -0700 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110525214003.GD9388@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: On Wed, May 25, 2011 at 2:40 PM, Gael Varoquaux wrote: > I am looking at estimating entropy and conditional entropy from data for > which I have only access to observations, and not the underlying > probabilistic laws. > > With low dimensional data, I would simply use an empirical estimate of > the probabilities by converting each observation to its quantile, and > then apply the standard formula for entropy (for instance using > scipy.stats.entropy). Depending on the situation even this technique can generate extremely biased estimates (you basically end up measuring your sample size instead of the real entropy, because you don't observe low probability events, so you assume that they have probability 0, but in fact low probability events can have a large influence on the entropy). See http://jmlr.csail.mit.edu/papers/v10/hausser09a.html for a good starting point on those issues... What I've ended up doing for estimating entropy of word probability distributions (which have long Zipfian tails) is to just fit a reasonable parametric distribution (e.g., zeta) and then calculate the theoretical entropy of that distribution. Might be another approach worth considering, if you know enough about your data to do it. -- Nathaniel From josef.pktd at gmail.com Wed May 25 18:45:02 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 25 May 2011 18:45:02 -0400 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110525214003.GD9388@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: On Wed, May 25, 2011 at 5:40 PM, Gael Varoquaux wrote: > Sorry for the noise, I sent this to the dev list, while it belongs to the > user list. > > Hi list, > > I am looking at estimating entropy and conditional entropy from data for > which I have only access to observations, and not the underlying > probabilistic laws. > > With low dimensional data, I would simply use an empirical estimate of > the probabilities by converting each observation to its quantile, and > then apply the standard formula for entropy (for instance using > scipy.stats.entropy). > > However, I have high-dimensional data (~100 features, and 30000 > observations). Not only is it harder to convert observations to > probabilities in the empirical law, but I am also worried of curse of > dimensionality effects: density estimation in high-dimension is a > difficult problem. > > Does anybody has advices, or code in Python to point to, for this task? 30000 doesn't sound like a lot of observations for 100 dimensions, 2**100 bins is pretty large, so binning sounds pretty impossible. Are you willing to impose some structure, (a gaussian copula might be able to handle it, or blockwise independence (?)). But even then integration in 100 dimension sounds tough. gaussian_kde with Monte Carlo Integration ? 
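A minimal sketch of what that gaussian_kde-plus-Monte-Carlo idea could look like (purely illustrative; the helper name mc_entropy_kde, the draw count and the toy 2-d check are made up for the example, and in very high dimension the bandwidth choice will dominate the answer):

import numpy as np
from scipy import stats

def mc_entropy_kde(sample, n_mc=10000):
    """Rough Monte Carlo estimate of the differential entropy of a fitted
    gaussian_kde: H = -E[log p(x)], averaging over draws from the KDE itself.
    `sample` has shape (n_dims, n_observations), as gaussian_kde expects."""
    kde = stats.gaussian_kde(sample)
    draws = kde.resample(n_mc)          # (n_dims, n_mc) draws from the KDE
    return -np.log(kde.evaluate(draws)).mean()

# toy check: a 2-d standard normal has entropy log(2*pi*e) ~ 2.84
x = np.random.randn(2, 5000)
print mc_entropy_kde(x), np.log(2 * np.pi * np.e)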
Maybe a PCA or some other dimension reduction helps, if the data is cluster in some dimensions. Josef > > Cheers, > > Ga?l > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Wed May 25 19:48:36 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 25 May 2011 19:48:36 -0400 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: On Wed, May 25, 2011 at 6:45 PM, wrote: > On Wed, May 25, 2011 at 5:40 PM, Gael Varoquaux > wrote: >> Sorry for the noise, I sent this to the dev list, while it belongs to the >> user list. >> >> Hi list, >> >> I am looking at estimating entropy and conditional entropy from data for >> which I have only access to observations, and not the underlying >> probabilistic laws. >> >> With low dimensional data, I would simply use an empirical estimate of >> the probabilities by converting each observation to its quantile, and >> then apply the standard formula for entropy (for instance using >> scipy.stats.entropy). >> >> However, I have high-dimensional data (~100 features, and 30000 >> observations). Not only is it harder to convert observations to >> probabilities in the empirical law, but I am also worried of curse of >> dimensionality effects: density estimation in high-dimension is a >> difficult problem. >> >> Does anybody has advices, or code in Python to point to, for this task? > > 30000 doesn't sound like a lot of observations for 100 dimensions, > 2**100 bins is pretty large, so binning sounds pretty impossible. > > Are you willing to impose some structure, (a gaussian copula might be > able to handle it, or blockwise independence (?)). But even then > integration in 100 dimension sounds tough. > > gaussian_kde with Monte Carlo Integration ? > > Maybe a PCA or some other dimension reduction helps, if the data is > cluster in some dimensions. 
maybe what this one might be talking about http://www.cs.utah.edu/~suyash/Dissertation_html/node13.html (It's not quite clear whether you have a discrete sample space like in the reference of Nathaniel, or a continuous space in R^100) Josef > > Josef > >> >> Cheers, >> >> Ga?l >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > From wkerzendorf at googlemail.com Wed May 25 22:27:09 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Thu, 26 May 2011 12:27:09 +1000 Subject: [SciPy-User] weird TypeError: only length-1 arrays can be converted to Python scalars Message-ID: <4DDDBA7D.9020303@gmail.com> Dear all, I have come across a weird error, in which the traceback doesn't really reflect the problem (ah osx 10.6.7, python2.6, np 1.6, sp 0.9, ipython 0.10.1, mpl 1.0.1): ---- File "weird_error.py", line 8, in fmin.simplex() File "/Users/wkerzend/scripts/python/pyspecgrid/specgrid.py", line 134, in __call__ gridSpec = self.specGrid.getSpec(*args) File "/Users/wkerzend/scripts/python/pyspecgrid/specgrid.py", line 235, in getSpec return oned.onedspec(self.wave, self.interpGrid(args), mode='waveflux') File "interpnd.pyx", line 120, in interpnd.NDInterpolatorBase.__call__ (scipy/interpolate/interpnd.c:1896) File "interpnd.pyx", line 142, in interpnd._ndim_coords_from_arrays (scipy/interpolate/interpnd.c:2145) File "/Library/Python/2.6/site-packages/numpy/lib/stride_tricks.py", line 69, in broadcast_arrays args = map(np.asarray, args) File "/Library/Python/2.6/site-packages/numpy/core/numeric.py", line 235, in asarray return array(a, dtype, copy=False, order=order) TypeError: only length-1 arrays can be converted to Python scalars ------ I have debugged it here: --> 235 return array(a, dtype, copy=False, order=order) 236 ipdb> a a = 0.1 dtype = None order = None ipdb> type(a) ---------------- It just doesn't make sense to me, please help, Wolfgang From gael.varoquaux at normalesup.org Thu May 26 01:15:00 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 26 May 2011 07:15:00 +0200 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: <20110526051500.GA11314@phare.normalesup.org> On Wed, May 25, 2011 at 05:27:26PM -0500, Robert Kern wrote: > This is just from a quick Googling, but it looks like the main > approach is to partition the space into equal-density chunks using an > appropriate partitioning scheme. Thanks Robert, I was aware of these approaches, and wasn't too sure that I had the courage to implement them. However, your 2-line summary makes them sound so sensible that I also feel I could start coding right away. > They are both unsuitable for ndim=100, however. You may be able to do > something similar with a ball tree Good suggestion! > and maybe even get a paper out of it. :-) Yeah. I just happen to be trying to solve other problems, for which entropy estimation is just a small step. If someone wants to have a good at that, there is a ball tree in the scikit-learn [1], so it's just a matter of running simulations and writing the paper :). 
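For reference, a rough sketch of a nearest-neighbour entropy estimator in that spirit (a Kozachenko-Leonenko style formula), written here with scipy.spatial.cKDTree rather than a ball tree; the function name and the sanity check are made up, and duplicate points are assumed away:

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def knn_entropy(x):
    """Nearest-neighbour (Kozachenko-Leonenko) differential entropy estimate
    for a sample x of shape (n_observations, n_dims). Assumes no duplicate
    points (a zero nearest-neighbour distance would break the log)."""
    x = np.atleast_2d(x)
    n, d = x.shape
    tree = cKDTree(x)
    # k=2 because the closest point in the tree to x_i is x_i itself
    r = tree.query(x, k=2)[0][:, 1]
    log_vd = d / 2. * np.log(np.pi) - gammaln(d / 2. + 1.)  # log volume of the unit d-ball
    euler_gamma = 0.5772156649015329
    return d * np.log(r).mean() + log_vd + euler_gamma + np.log(n - 1.)

# sanity check: 1-d standard normal, true entropy 0.5*log(2*pi*e) ~ 1.42
print knn_entropy(np.random.randn(10000, 1))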
Gael [1] http://scikit-learn.sourceforge.net/dev/modules/generated/scikits.learn.ball_tree.BallTree.html From gael.varoquaux at normalesup.org Thu May 26 01:17:31 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 26 May 2011 07:17:31 +0200 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: <20110526051731.GB11314@phare.normalesup.org> On Wed, May 25, 2011 at 03:43:30PM -0700, Nathaniel Smith wrote: > Depending on the situation even this technique can generate extremely > biased estimates (you basically end up measuring your sample size > instead of the real entropy, That's exactly what I have observed on small simulations and naive estimators. > What I've ended up doing for estimating entropy of word probability > distributions (which have long Zipfian tails) is to just fit a > reasonable parametric distribution (e.g., zeta) and then calculate the > theoretical entropy of that distribution. Might be another approach > worth considering, if you know enough about your data to do it. Unfortunately, I want to use entropy as a model selection tool, thus using a parameteric approximation is hard to justify. That said, it seems that I might be able to get away with estimating the entropy of the marginal distributions only, i.e. entropy in one dimension, which is heaps easier. Gael From gael.varoquaux at normalesup.org Thu May 26 01:26:48 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 26 May 2011 07:26:48 +0200 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> Message-ID: <20110526052647.GC11314@phare.normalesup.org> On Wed, May 25, 2011 at 06:45:02PM -0400, josef.pktd at gmail.com wrote: > 30000 doesn't sound like a lot of observations for 100 dimensions, > 2**100 bins is pretty large, so binning sounds pretty impossible. Yes, it's ridiculously bad. I have only started realising how bad it was, even though I initialy had such an intuition. > Are you willing to impose some structure, (a gaussian copula might be > able to handle it, or blockwise independence (?)). But even then > integration in 100 dimension sounds tough. > gaussian_kde with Monte Carlo Integration ? That's definitely something that would be worth a look. It would be very slow, though, and this step is already inside a cross-validation loop. > Maybe a PCA or some other dimension reduction helps, if the data is > cluster in some dimensions. Unfortunately, the goal here is to do model selection on the number of components of dimension reduction (iow latent factor analysis). > (It's not quite clear whether you have a discrete sample space like in > the reference of Nathaniel, or a continuous space in R^100) Continuous :(. As I mentionned in another mail, as I slept over this problem, I realized that I should be able to work only from the entropy of the marginal distributions, which makes the problem tractable. Thanks for all the answers, they really helped me developing an understanding of the problem. 
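A minimal sketch of such a one-dimensional (marginal) entropy estimate, histogram-based; the function name, bin count and toy data are made up:

import numpy as np

def marginal_entropies(x, bins=50):
    """Crude histogram-based estimates of the differential entropy of each
    marginal (column) of a sample x with shape (n_observations, n_features)."""
    ents = []
    for col in np.atleast_2d(x).T:
        counts, edges = np.histogram(col, bins=bins)
        p = counts / float(counts.sum())        # bin probabilities
        w = np.diff(edges)                      # bin widths
        nz = p > 0                              # skip empty bins in the log
        # discrete entropy of the binned data plus the log-width correction
        ents.append(-np.sum(p[nz] * np.log(p[nz] / w[nz])))
    return np.array(ents)

# toy check: each column is standard normal, entropy 0.5*log(2*pi*e) ~ 1.42
print marginal_entropies(np.random.randn(30000, 5))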
Ga?l From pav at iki.fi Thu May 26 04:28:19 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 26 May 2011 08:28:19 +0000 (UTC) Subject: [SciPy-User] weird TypeError: only length-1 arrays can be converted to Python scalars References: <4DDDBA7D.9020303@gmail.com> Message-ID: Thu, 26 May 2011 12:27:09 +1000, Wolfgang Kerzendorf wrote: [clip] > File "/Users/wkerzend/scripts/python/pyspecgrid/specgrid.py", line > 235, in getSpec > return oned.onedspec(self.wave, self.interpGrid(args), > mode='waveflux') You are apparently passing to interpGrid `args` that contains some strange stuff that cannot be converted to an array. -- Pauli Virtanen From matthieu.brucher at gmail.com Thu May 26 04:29:26 2011 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 26 May 2011 10:29:26 +0200 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110526052647.GC11314@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> <20110526052647.GC11314@phare.normalesup.org> Message-ID: 2011/5/26 Gael Varoquaux > On Wed, May 25, 2011 at 06:45:02PM -0400, josef.pktd at gmail.com wrote: > > 30000 doesn't sound like a lot of observations for 100 dimensions, > > 2**100 bins is pretty large, so binning sounds pretty impossible. > > Yes, it's ridiculously bad. I have only started realising how bad it was, > even though I initialy had such an intuition. > > > Are you willing to impose some structure, (a gaussian copula might be > > able to handle it, or blockwise independence (?)). But even then > > integration in 100 dimension sounds tough. > > > gaussian_kde with Monte Carlo Integration ? > > That's definitely something that would be worth a look. It would be very > slow, though, and this step is already inside a cross-validation loop. > > > Maybe a PCA or some other dimension reduction helps, if the data is > > cluster in some dimensions. > > Unfortunately, the goal here is to do model selection on the number of > components of dimension reduction (iow latent factor analysis). The guy doing his PhD before my dimensionality reduction thesis worked a little bit on this. As you don't have much data (30000 is not big for a 100-dimension space), you can check the eigenvalues of the covariance matrix. When the data cannot be explained by adding a new eigenvector, the eigenvalue will drop. These eigenvalues should also have a link to the entropy of your data, although a linear reduction is not a good estimator IMHO. This eigenvalue analysis also works with Isomap, LLE, ... Matthieu -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkerzendorf at googlemail.com Thu May 26 07:13:28 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Thu, 26 May 2011 21:13:28 +1000 Subject: [SciPy-User] weird TypeError: only length-1 arrays can be converted to Python scalars In-Reply-To: References: <4DDDBA7D.9020303@gmail.com> Message-ID: <4DDE35D8.2040608@gmail.com> That's what I thought initially. I tested all of this and the function ran in debug mode without raising errors. The solution was that there was an error at a completely different end of the program and it somehow affected it. It works now, but I don't understand how the error came to be. 
Thanks Wolfgang On 26/05/11 6:28 PM, Pauli Virtanen wrote: > Thu, 26 May 2011 12:27:09 +1000, Wolfgang Kerzendorf wrote: > [clip] >> File "/Users/wkerzend/scripts/python/pyspecgrid/specgrid.py", line >> 235, in getSpec >> return oned.onedspec(self.wave, self.interpGrid(args), >> mode='waveflux') > You are apparently passing to interpGrid `args` that contains some > strange stuff that cannot be converted to an array. > From yann.ledu.fr at gmail.com Wed May 25 20:03:05 2011 From: yann.ledu.fr at gmail.com (yledu) Date: Wed, 25 May 2011 17:03:05 -0700 (PDT) Subject: [SciPy-User] [JOB] An undergraduate internship in Paris on the development of CPU/GPU algorithms for electron paramagnetic resonance imaging Message-ID: Hi, It's about an internship in Paris, France, on algorithmic development for resonance imaging applications that makes heavy use of Python/SciPy; because this list is read by many French users of SciPy, I am sending a copy of the announcement here. Yann Le Du ------ Internship (undergraduate L3 to Master's M2 level) in the HPU4Science CPU/GPU scientific computing team of the LCMCP at Chimie ParisTech: development of a set of parallelized algorithms for electron paramagnetic resonance imaging. The HPU4Science project aims to develop scientific computing applications on HPUs (Hybrid Processing Units, CPU + GPU) for the fast solution of problems arising in research carried out at the LCMCP laboratory of Chimie-ParisTech. The first objective is to have a massively parallelized machine-learning tool for solving, as well as possible, an inverse problem encountered in Electron Paramagnetic Resonance imaging. This tool will then be used to solve similar problems in other research areas of the laboratory. We have designed and built an HPU computing cluster of more than 30 TFLOPS, assembled entirely in-house in order to reduce costs and to know the hardware well. We are looking for an intern ready to get involved in a project that calls on various skills in mathematics and computer science, above all with the desire to learn and to contribute to a scientific computing project for research. The mathematics and programming techniques used will be explained according to the candidate's level. We work a lot at the whiteboard and in Sage/IPython before moving on to programming, using Knuth's "Literate Programming" model. Languages: Python, C Literate Programming: noweb Computation/Visualization: IPython/SciPy, Sage, Octave, Mayavi, Asymptote, Inkscape. OS: Linux (Ubuntu) Editor: Vim preferred Writing: LaTeX Versioning: git Two articles on the project are available on Ars Technica, and a third will be published soon: http://arstechnica.com/science/news/2011/03/high-performance-computing-on-gamer-pcs-part-1-hardware.ars http://arstechnica.com/science/news/2011/04/high-performance-computing-on-gamer-pcs-part-2-the-software-choices.ars The internship period would be June and/or July, with a possible extension from September through December. The stipend is about 450 euros/month if the internship lasts more than 1.5 months.
Contact: Yann Le Du CNRS/Chimie ParisTech/LCMCP LCMCP/Chimie ParisTech 11, rue Pierre et Marie Curie 75005 Paris From robert.kern at gmail.com Thu May 26 15:28:17 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 May 2011 14:28:17 -0500 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110526052647.GC11314@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> <20110526052647.GC11314@phare.normalesup.org> Message-ID: On Thu, May 26, 2011 at 00:26, Gael Varoquaux wrote: > On Wed, May 25, 2011 at 06:45:02PM -0400, josef.pktd at gmail.com wrote: >> Maybe a PCA or some other dimension reduction helps, if the data is >> cluster in some dimensions. > > Unfortunately, the goal here is to do model selection on the number of > components of dimension reduction (iow latent factor analysis). > >> (It's not quite clear whether you have a discrete sample space like in >> the reference of Nathaniel, or a continuous space in R^100) > > Continuous :(. > > As I mentionned in another mail, as I slept over this problem, I realized > that I should be able to work only from the entropy of the marginal > distributions, which makes the problem tractable. That would put an upper bound on the entropy certainly, but it's blind to any remaining correlations between variables. It seems like that's exactly what you want to find out if you are doing model selection for a dimension reduction. I'm probably wrong, but I'd love to know why. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josef.pktd at gmail.com Thu May 26 16:17:26 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 26 May 2011 16:17:26 -0400 Subject: [SciPy-User] visual inspection of kernel density estimators in 2d Message-ID: I'm trying to find a visual way to see whether a kernel density estimator "looks" good in 2d. For univariate it is easy to compare histogram and the estimated density. Does anyone know what graphs would give a good visual inspection? My attempt with contours, which should show some oversmoothing if the default gaussian_kde is used on mixture distributions: http://picasaweb.google.com/josef.pktd/Joepy#5611119163809333922 (To avoid spamming the list with graphs, I activated a picasa account.) Josef From robert.kern at gmail.com Thu May 26 16:31:23 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 May 2011 15:31:23 -0500 Subject: [SciPy-User] visual inspection of kernel density estimators in 2d In-Reply-To: References: Message-ID: On Thu, May 26, 2011 at 15:17, wrote: > I'm trying to find a visual way to see whether a kernel density > estimator "looks" good in 2d. For univariate it is easy to compare > histogram and the estimated density. > > Does anyone know what graphs would give a good visual inspection? > > My attempt with contours, which should show some oversmoothing if the > default gaussian_kde is used on mixture distributions: > > http://picasaweb.google.com/josef.pktd/Joepy#5611119163809333922 With a bit of tweaking, that would work fine. A few suggestions: 1. Use smaller dots for the data, and maybe less color. 2. Use fewer contour lines, without labels. 
Maybe just make the contours that would have labels be a bit thicker so you can make a good apples-to-apples comparison between the two sets of contours. 3. Make the true contour lines gray and in the background. 4. Make the estimated contour lines black and in the foreground. I.e. draw the dots, then the true contours, then the estimated contours. You might also try a colormapped image plot of the difference between the two densities. Plot the data as small points (i.e. with 'k,') over the residual image. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josef.pktd at gmail.com Thu May 26 20:22:01 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 26 May 2011 20:22:01 -0400 Subject: [SciPy-User] visual inspection of kernel density estimators in 2d In-Reply-To: References: Message-ID: On Thu, May 26, 2011 at 4:31 PM, Robert Kern wrote: > On Thu, May 26, 2011 at 15:17, ? wrote: >> I'm trying to find a visual way to see whether a kernel density >> estimator "looks" good in 2d. For univariate it is easy to compare >> histogram and the estimated density. >> >> Does anyone know what graphs would give a good visual inspection? >> >> My attempt with contours, which should show some oversmoothing if the >> default gaussian_kde is used on mixture distributions: >> >> http://picasaweb.google.com/josef.pktd/Joepy#5611119163809333922 > > With a bit of tweaking, that would work fine. A few suggestions: > > 1. Use smaller dots for the data, and maybe less color. > 2. Use fewer contour lines, without labels. Maybe just make the > contours that would have labels be a bit thicker so you can make a > good apples-to-apples comparison between the two sets of contours. > 3. Make the true contour lines gray and in the background. > 4. Make the estimated contour lines black and in the foreground. I.e. > draw the dots, then the true contours, then the estimated contours. Thanks Robert, I still have to figure out how to do many of these things with matplotlib, for example I have to find the manual to change the line width. here is some improvement in this direction http://picasaweb.google.com/josef.pktd/Joepy#5611180514445456594 here is a first attempt at 3d http://picasaweb.google.com/josef.pktd/Joepy#5611133359567574274 This actually shows the oversmoothing quite clearly. > > You might also try a colormapped image plot of the difference between > the two densities. Plot the data as small points (i.e. with 'k,') over > the residual image. just to see how it looks I tried the color overlay of the difference on the same graph . It's a bit weird but quite informative. http://picasaweb.google.com/josef.pktd/Joepy#5611180522655961714 (Given the nice colors in the last graph, I will try for a Miro next. :) Since with real data we don't have the true pdf, I might try for a histogram contour (?) as comparison. Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? 
-- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Thu May 26 21:59:20 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 May 2011 20:59:20 -0500 Subject: [SciPy-User] visual inspection of kernel density estimators in 2d In-Reply-To: References: Message-ID: On Thu, May 26, 2011 at 19:22, wrote: > On Thu, May 26, 2011 at 4:31 PM, Robert Kern wrote: >> On Thu, May 26, 2011 at 15:17, ? wrote: >>> I'm trying to find a visual way to see whether a kernel density >>> estimator "looks" good in 2d. For univariate it is easy to compare >>> histogram and the estimated density. >>> >>> Does anyone know what graphs would give a good visual inspection? >>> >>> My attempt with contours, which should show some oversmoothing if the >>> default gaussian_kde is used on mixture distributions: >>> >>> http://picasaweb.google.com/josef.pktd/Joepy#5611119163809333922 >> >> With a bit of tweaking, that would work fine. A few suggestions: >> >> 1. Use smaller dots for the data, and maybe less color. >> 2. Use fewer contour lines, without labels. Maybe just make the >> contours that would have labels be a bit thicker so you can make a >> good apples-to-apples comparison between the two sets of contours. >> 3. Make the true contour lines gray and in the background. >> 4. Make the estimated contour lines black and in the foreground. I.e. >> draw the dots, then the true contours, then the estimated contours. > > Thanks Robert, > > I still have to figure out how to do many of these things with > matplotlib, for example I have to find the manual to change the line > width. There may not actually be one. :-) You may have to overplot with a second set of contours that only pick out one or two contours with a thicker line. > here is some improvement in this direction > http://picasaweb.google.com/josef.pktd/Joepy#5611180514445456594 > > here is a first attempt at 3d > http://picasaweb.google.com/josef.pktd/Joepy#5611133359567574274 > > This actually shows the oversmoothing quite clearly. > >> >> You might also try a colormapped image plot of the difference between >> the two densities. Plot the data as small points (i.e. with 'k,') over >> the residual image. > > just to see how it looks I tried the color overlay of the difference > on the same graph . It's a bit weird but quite informative. > > http://picasaweb.google.com/josef.pktd/Joepy#5611180522655961714 > > (Given the nice colors in the last graph, I will try for a Miro next. :) I have a long rant about such colormaps (I am red-green colorblind), but I'll save it. It suffices to say that you would be better served with a diverging colormap, one that goes from a deep color of one hue at the most negative to white at 0 and the to a deep color of another hue, e.g. 'RdBu' is a nice one. Just be sure to center it correctly (using vmin and vmax arguments, IIRC). I recommend a full, continuous colormapped image instead of a small number of colored contours. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From josef.pktd at gmail.com Thu May 26 23:52:23 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 26 May 2011 23:52:23 -0400 Subject: [SciPy-User] visual inspection of kernel density estimators in 2d In-Reply-To: References: Message-ID: On Thu, May 26, 2011 at 9:59 PM, Robert Kern wrote: > On Thu, May 26, 2011 at 19:22, ? wrote: >> On Thu, May 26, 2011 at 4:31 PM, Robert Kern wrote: >>> On Thu, May 26, 2011 at 15:17, ? wrote: >>>> I'm trying to find a visual way to see whether a kernel density >>>> estimator "looks" good in 2d. For univariate it is easy to compare >>>> histogram and the estimated density. >>>> >>>> Does anyone know what graphs would give a good visual inspection? >>>> >>>> My attempt with contours, which should show some oversmoothing if the >>>> default gaussian_kde is used on mixture distributions: >>>> >>>> http://picasaweb.google.com/josef.pktd/Joepy#5611119163809333922 >>> >>> With a bit of tweaking, that would work fine. A few suggestions: >>> >>> 1. Use smaller dots for the data, and maybe less color. >>> 2. Use fewer contour lines, without labels. Maybe just make the >>> contours that would have labels be a bit thicker so you can make a >>> good apples-to-apples comparison between the two sets of contours. >>> 3. Make the true contour lines gray and in the background. >>> 4. Make the estimated contour lines black and in the foreground. I.e. >>> draw the dots, then the true contours, then the estimated contours. >> >> Thanks Robert, >> >> I still have to figure out how to do many of these things with >> matplotlib, for example I have to find the manual to change the line >> width. > > There may not actually be one. ?:-) > > You may have to overplot with a second set of contours that only pick > out one or two contours with a thicker line. > >> here is some improvement in this direction >> http://picasaweb.google.com/josef.pktd/Joepy#5611180514445456594 >> >> here is a first attempt at 3d >> http://picasaweb.google.com/josef.pktd/Joepy#5611133359567574274 >> >> This actually shows the oversmoothing quite clearly. >> >>> >>> You might also try a colormapped image plot of the difference between >>> the two densities. Plot the data as small points (i.e. with 'k,') over >>> the residual image. >> >> just to see how it looks I tried the color overlay of the difference >> on the same graph . It's a bit weird but quite informative. Especially it shows that there is a bug, matplotlib bivariate_normal uses standard deviation, numpy.random.multivariate_normal uses variance (covariance matrix), >> >> http://picasaweb.google.com/josef.pktd/Joepy#5611180522655961714 >> >> (Given the nice colors in the last graph, I will try for a Miro next. :) > > I have a long rant about such colormaps (I am red-green colorblind), > but I'll save it. It suffices to say that you would be better served > with a diverging colormap, one that goes from a deep color of one hue > at the most negative to white at 0 and the to a deep color of another > hue, e.g. 'RdBu' is a nice one. Just be sure to center it correctly > (using vmin and vmax arguments, IIRC). I recommend a full, continuous > colormapped image instead of a small number of colored contours. I like that kind of colors, but mostly I just copy from matplotlib examples, and I have no idea what any of these colormaps looks like. It's starting to look good, fixed bug, linewidth extended to 2, 'RdBu' (needed to flip sign kde-true), I haven't figured continuous color schemes yet. 
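A self-contained sketch of the continuous diverging-colormap difference image suggested above; a small synthetic two-component mixture stands in for the real example, and all names and grid choices are made up:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# synthetic stand-in for the mixture example: 50/50 mix of N(0, I) and N((3, 3), I)
np.random.seed(0)
data = np.r_[np.random.randn(500, 2), np.random.randn(500, 2) + 3]
kde = stats.gaussian_kde(data.T)

xg, yg = np.mgrid[-4:7:100j, -4:7:100j]
true_pdf = 0.5 * (stats.norm.pdf(xg) * stats.norm.pdf(yg) +
                  stats.norm.pdf(xg - 3) * stats.norm.pdf(yg - 3))
kde_pdf = kde.evaluate(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)

diff = kde_pdf - true_pdf
vmax = np.abs(diff).max()
# symmetric limits so that zero maps to the white center of the diverging map
plt.imshow(diff.T, origin='lower', cmap=plt.cm.RdBu, vmin=-vmax, vmax=vmax,
           extent=[-4, 7, -4, 7])
plt.colorbar()
plt.plot(data[:, 0], data[:, 1], 'k,')            # data as tiny black points
plt.contour(xg, yg, true_pdf, 5, colors='0.5')    # a few true contours in gray
plt.title('kde - true density')
plt.show()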
http://picasaweb.google.com/josef.pktd/Joepy#5611235435224089634 now, kde overall looks good, kde is too low in the peaks and too high in some area between the peaks. Thanks, Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stefan.otte at gmail.com Fri May 27 05:43:56 2011 From: stefan.otte at gmail.com (Stefan Otte) Date: Fri, 27 May 2011 11:43:56 +0200 Subject: [SciPy-User] Evaluate normal distribution at certain point given the covariance matrix and mean Message-ID: Hello, I'm wondering if there is a function that evaluates a normal distribution at a certain point given the covariance matrix and mean. I already took a look at the stats module and, to my surprise, didn't find a function like that. NN(mean_of_distribution, covariance_matrix, point_to_evaluate) I ended up implementing it myself but I'm still wondering if this isn't a common task and why it's not included in the stats module. Thanks in advance, ?Stefan From david.huard at gmail.com Fri May 27 17:21:33 2011 From: david.huard at gmail.com (David Huard) Date: Fri, 27 May 2011 17:21:33 -0400 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> <20110526052647.GC11314@phare.normalesup.org> Message-ID: There is a paper by Fernando P?rez-Cruz on the estimation of the Kullback-Leibler divergence directly from samples using KD trees and nearest neighbours. Maybe you could draw inspiration from it to compute the conditional entropy. The author seems to imply that the method works for a sample with over 700 dimensions. I'm including an implementation I wrote just a few days ago that hasn't seen a lot of testing yet (BSD, caveat emptor and all that). I checked I could reproduce the example in the paper, but that's it. http://eprints.pascal-network.org/archive/00004910/01/bare_conf3.pdf HTH, David import numpy as np # def KLdivergence(x, y): """Compute the Kullback-Leibler divergence between two multivariate samples. Parameters ---------- x : 2D array (n,d) Samples from distribution P, which typically represents the true distribution. y : 2D array (m,d) Samples from distribution Q, which typically represents the approximate distribution. Returns ------- out : float The estimated Kullback-Leibler divergence D(P||Q). References ---------- P?rez-Cruz, F. Kullback-Leibler divergence estimation of continuous distributions IEEE International Symposium on Information Theory, 2008. """ from scipy.spatial import cKDTree as KDTree # Check the dimensions are consistent x = np.atleast_2d(x) y = np.atleast_2d(y) n,d = x.shape m,dy = y.shape assert(d == dy) # Build a KD tree representation of the samples and find the nearest neighbour # of each point in x. xtree = KDTree(x) ytree = KDTree(y) # Get the first two nearest neighbours for x, since the closest one is the # sample itself. r = xtree.query(x, k=2, eps=.01, p=2)[0][:,1] s = ytree.query(x, k=1, eps=.01, p=2)[0] # There is a mistake in the paper. In Eq. 14, the right side misses a negative sign # on the first term of the right hand side. 
return -np.log(r/s).sum() * d / n + np.log(m / (n - 1.)) On Thu, May 26, 2011 at 3:28 PM, Robert Kern wrote: > On Thu, May 26, 2011 at 00:26, Gael Varoquaux > wrote: >> On Wed, May 25, 2011 at 06:45:02PM -0400, josef.pktd at gmail.com wrote: > >>> Maybe a PCA or some other dimension reduction helps, if the data is >>> cluster in some dimensions. >> >> Unfortunately, the goal here is to do model selection on the number of >> components of dimension reduction (iow latent factor analysis). >> >>> (It's not quite clear whether you have a discrete sample space like in >>> the reference of Nathaniel, or a continuous space in R^100) >> >> Continuous :(. >> >> As I mentionned in another mail, as I slept over this problem, I realized >> that I should be able to work only from the entropy of the marginal >> distributions, which makes the problem tractable. > > That would put an upper bound on the entropy certainly, but it's blind > to any remaining correlations between variables. It seems like that's > exactly what you want to find out if you are doing model selection for > a dimension reduction. I'm probably wrong, but I'd love to know why. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cjauvin at gmail.com Fri May 27 19:13:19 2011 From: cjauvin at gmail.com (Christian Jauvin) Date: Fri, 27 May 2011 19:13:19 -0400 Subject: [SciPy-User] efficient computation of point cloud nearest neighbors Message-ID: Hi, I need to compute the k nearest neighbors of every point in a point cloud of at least a million points. I've been looking at the documentation for the scipy.spatial.KDTree and cKDTree classes, but it's not clear to me how I should use their query() method (and possibly their distance_upper_bound parameter) to optimize the computation. I've also been looking at the ANN and FLANN C++ libraries, but in both cases I've had trouble compiling/installing their Python bindings on my Ubuntu system. I'd appreciate some advice as to what would be the ideal strategy to solve this problem (either with Scipy or some other package that I wouldn't know about). Thanks in advance, Christian From gael.varoquaux at normalesup.org Sat May 28 05:11:25 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 28 May 2011 11:11:25 +0200 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> <20110526052647.GC11314@phare.normalesup.org> Message-ID: <20110528091125.GB2799@phare.normalesup.org> On Fri, May 27, 2011 at 05:21:33PM -0400, David Huard wrote: > There is a paper by Fernando P?rez-Cruz on the estimation of the > Kullback-Leibler divergence directly from samples using KD trees and > nearest neighbours. Maybe you could draw inspiration from it to > compute the conditional entropy. The author seems to imply that the > method works for a sample with over 700 dimensions. > I'm including an implementation I wrote just a few days ago that > hasn't seen a lot of testing yet (BSD, caveat emptor and all that). I > checked I could reproduce the example in the paper, but that's it. This is awesome. 
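A quick way to sanity-check the estimator is to compare it against the closed-form KL divergence between two univariate normals; a minimal sketch (the module name kl is made up and just stands for wherever the function above lives):

import numpy as np
from kl import KLdivergence  # hypothetical module holding the function above

# samples from P = N(0, 1) and Q = N(0, 2**2), one-dimensional
np.random.seed(0)
x = np.random.normal(0.0, 1.0, size=(2000, 1))
y = np.random.normal(0.0, 2.0, size=(2000, 1))

# closed form for two univariate normals:
# KL = log(s2/s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5
analytic = np.log(2.0) + 1.0 / 8.0 - 0.5

print KLdivergence(x, y)  # should be in the neighbourhood of the analytic value
print analytic            # about 0.318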
It perfectly makes sens to implement the problem this way, and it is great to see a paper explaining it clearly. And the value of code is huge. Thanks heap for sharing. I was noting yesterday that when trying out ideas on data, each new idea requires a full day of work because I need to implement a lot of basic blocks, and each time I do this quite poorly. I think that your code, once it has tests, should go in a standard package. It is of general interest. scipy.stats seems like a good candidate. Alternatively, if it is deemed not suitable, we would integrate it in the metrics sub-package of the scikits.learn. I have moved away from this problem for two reasons. The first being that I realized that I could work only from the entropy of the marginals, which makes the problem heaps easier. The second being that anyhow, I am measuring the entropy of the noise as much as of the signal, and I end up being dominated by the entropy of the noise. I could get rid of it if I had a way to measure it separately, but that requires more thinking and maybe more modelling. Anyhow, thanks heaps for sharing. Ga?l From emmanuelle.gouillart at normalesup.org Sat May 28 05:22:13 2011 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Sat, 28 May 2011 11:22:13 +0200 Subject: [SciPy-User] [ANN] PyPhy - Python in Physics - Satellite of Euroscipy 2011 Message-ID: <20110528092213.GA24443@phare.normalesup.org> On behalf of the organizers, it is my pleasure to announce PyPhy, a satellite meeting of Euroscipy 2011 on **Python in Physics**. Date and venue -------------- August 29th, 2011 (full day) Department of Physics, Ecole normale sup?rieure, Paris Main topics ------------ * Python for teaching physics * Python for research in physics * developing Python modules for applications in physics Organizers ---------- * Alberto Rosso (LPTMS, Universit? Paris-Sud - alberto.rosso at u-psud.fr) * Werner Krauth (Physics Department, ENS Paris - werner.krauth at ens.fr) In physics, Python is widely used as a scripting and programming language for data processing, but also for numerical computing where it is a viable alternative to compiled languages, with the advantage of being easy to learn, to write, to read and to modify. For this reason, Python, which is entirely free, is an ideal tool for teaching algorithmic contents in physics. A number of initiatives integrating Python into curricula have been developed in the last few years, largely independently in Physics departments all over the world. At the same time, Python libraries for numerical analysis, combinatorics, graphics, interfacing other languages, etc. have either reached maturity, or are developing very rapidly. This makes Python an exciting option for research applications. A growing number of research programs now write code directly in Python. This informal workshop on teaching and research using Python in Physics will be a forum for coordinating these initiatives, for sharing different experiences, and for exchanging with the key developers of the scientific Python modules present at the EuroSciPy 2011 conference. Beginners and Students are welcome to attend. People interested in presenting a contribution should contact the organizers and send a short abstract before June 30, 2011. We welcome contributions on your experience in teaching Python to physicists and engineers, developing a Python module for applications in physics, or using Python for your everyday research work. 
Participation in this workshop will be free of charge but participants should also make themselves known by email (alberto.rosso at u-psud.fr) before August 28, 2011. Important dates --------------- * June 30: deadline for contributions * August 28: deadline for (free) registration Invited Speakers ---------------- * Samuel Bottani (MSC, Paris Diderot) * Gianfranco Durin (ISI, Turin) * Emmanuelle Gouillart (Saint-Gobain Research) * Konrad Hinsen (CBM Universit? d'Orleans) * Vivien Lecomte (LPMA Paris Diderot) * Chris Myers (Department of Physics, Cornell University) * Michael Schindler (LPCT, EPSCI, Paris) * Georg von Hippel (INP Universit?t Mainz) How to get there ---------------- See the instructions on http://www.phys.ens.fr/spip.php?rubrique42&lang=en Website ------- http://www.euroscipy.org/card/pyphy2011 --------- Apologies if you receive this announcement more than once! Emmanuelle From jsseabold at gmail.com Sat May 28 13:18:58 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Sat, 28 May 2011 13:18:58 -0400 Subject: [SciPy-User] Evaluate normal distribution at certain point given the covariance matrix and mean In-Reply-To: References: Message-ID: On Fri, May 27, 2011 at 5:43 AM, Stefan Otte wrote: > Hello, > > I'm wondering if there is a function that evaluates a normal > distribution at a certain point given the covariance matrix and mean. > I already took a look at the stats module and, to my surprise, didn't > find a function like that. > > NN(mean_of_distribution, covariance_matrix, point_to_evaluate) > > I ended up implementing it myself but I'm still wondering if this > isn't a common task and why it's not included in the stats module. > For the univariate case, you can use scipy.stats.norm. I take it you want the multivariate case? Others will know more, as I haven't looked at the details, but I think you could use stats.mvn.* for the cdf. See here [1,2] and the attachment to the ticket [3] for convenience wrappers. [1] http://thread.gmane.org/gmane.comp.python.scientific.user/18921 [2] http://projects.scipy.org/scipy/ticket/846 [3] http://projects.scipy.org/scipy/attachment/ticket/846/mvncdf.py Skipper From josef.pktd at gmail.com Sat May 28 13:58:38 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 28 May 2011 13:58:38 -0400 Subject: [SciPy-User] [SciPy-Dev] Entropy from empirical high-dimensional data In-Reply-To: <20110528091125.GB2799@phare.normalesup.org> References: <20110525213533.GC9388@phare.normalesup.org> <20110525214003.GD9388@phare.normalesup.org> <20110526052647.GC11314@phare.normalesup.org> <20110528091125.GB2799@phare.normalesup.org> Message-ID: On Sat, May 28, 2011 at 5:11 AM, Gael Varoquaux wrote: > On Fri, May 27, 2011 at 05:21:33PM -0400, David Huard wrote: >> There is a paper by Fernando P?rez-Cruz on the estimation of the >> Kullback-Leibler divergence directly from samples using KD trees and >> nearest neighbours. Maybe you could draw inspiration from it to >> compute the conditional entropy. The author seems to imply that the >> method works for a sample with over 700 dimensions. > >> I'm including an implementation I wrote just a few days ago that >> hasn't seen a lot of testing yet (BSD, caveat emptor and all that). I >> checked I could reproduce the example in the paper, but that's it. > > This is awesome. It perfectly makes sens to implement the problem this > way, and it is great to see a paper explaining it clearly. And the value > of code is huge. Thanks heap for sharing. 
I was noting yesterday that > when trying out ideas on data, each new idea requires a full day of work > because I need to implement a lot of basic blocks, and each time I do > this quite poorly. > > I think that your code, once it has tests, should go in a standard > package. It is of general interest. scipy.stats seems like a good > candidate. Alternatively, if it is deemed not suitable, we would > integrate it in the metrics sub-package of the scikits.learn. Thanks David, I didn't realize that nearest neighbor can be these easy. It will for sure find a home also in scikits.statsmodels. I started with mutual information for independence tests with continuous distributions a few weeks ago, based on kernel density estimation. Skipper has written entropy measures for the discrete case. I have been trying to figure out how well this one neighbor version works, my guess was that it should have slower convergence. But I have a hard time coming up with a benchmark for testing this. I didn't find much for this in R, and the kl.norm for multivariate normal gives different results than the kdtree version and the version based on gaussian_kde. For the univariate case I can calculate the KL divergence with integrate.quad, testcase normal distribution with different variances. The kdtree version has a very good mean (sample size 100), better than the gaussian_kde version that seems to be upward biased. The kdtree version has about at least twice the standard deviation of the gaussian version. The kdtree version is much faster. I was reading several Monte Carlo comparisons for mutual information (joint versus product of marginals) and KNN was usually one of the best methods. I assume that we can extend this version to other divergence measures. Does anyone know how to get a "certified" test case for the KL divergence? Josef > > I have moved away from this problem for two reasons. The first being that > I realized that I could work only from the entropy of the marginals, > which makes the problem heaps easier. The second being that anyhow, I am > measuring the entropy of the noise as much as of the signal, and I end up > being dominated by the entropy of the noise. I could get rid of it if I > had a way to measure it separately, but that requires more thinking and > maybe more modelling. > > Anyhow, thanks heaps for sharing. > > Ga?l > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stefan.otte at gmail.com Sun May 29 09:55:16 2011 From: stefan.otte at gmail.com (Stefan Otte) Date: Sun, 29 May 2011 15:55:16 +0200 Subject: [SciPy-User] Evaluate normal distribution at certain point given the covariance matrix and mean In-Reply-To: References: Message-ID: On Sat, May 28, 2011 at 7:18 PM, Skipper Seabold wrote: > On Fri, May 27, 2011 at 5:43 AM, Stefan Otte wrote: >> Hello, >> >> I'm wondering if there is a function that evaluates a normal >> distribution at a certain point given the covariance matrix and mean. >> I already took a look at the stats module and, to my surprise, didn't >> find a function like that. >> >> NN(mean_of_distribution, covariance_matrix, point_to_evaluate) >> >> I ended up implementing it myself but I'm still wondering if this >> isn't a common task and why it's not included in the stats module. >> > > For the univariate case, you can use scipy.stats.norm. I take it you > want the multivariate case? Others will know more, as I haven't looked Right. 
I should have made that clearer. > at the details, but I think you could use stats.mvn.* for the cdf. See > here [1,2] and the attachment to the ticket [3] for convenience > wrappers. Thanks for the info! > > [1] http://thread.gmane.org/gmane.comp.python.scientific.user/18921 > [2] http://projects.scipy.org/scipy/ticket/846 > [3] http://projects.scipy.org/scipy/attachment/ticket/846/mvncdf.py > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From zw4131 at gmail.com Sun May 29 12:25:39 2011 From: zw4131 at gmail.com (=?GB2312?B?va20886w?=) Date: Mon, 30 May 2011 00:25:39 +0800 Subject: [SciPy-User] It is quite confusing to use scipy.spatial.distance Message-ID: I want to computes euclidean distance between a vector and 2 vector. For example: A=numpy.array([0,0]) B= numpy.array([[1,0],[0,1]]) I want to computes euclidean distance between vector A and each vector in matrix B. My expected result is the vector [1,1] So I use scipy.spatial.distance.cdist(A, B ,'euclidean') But the error said A must be a 2-dimensional array. So I turned to use scipy.spatial.distance.euclidean(A,B), it worked, but the result was a value 1.4142. It was quite confusing!! So I suggest adopting an uniform function to Computes the distance between any-dimensional array. Scipy.spatial.distance.cdist() is a very good function, but it can be extended to Computes the distance between a vector and a vector as well as between a vector and n vectors. That would be perfect !!. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun May 29 12:58:20 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 29 May 2011 16:58:20 +0000 (UTC) Subject: [SciPy-User] It is quite confusing to use scipy.spatial.distance References: Message-ID: On Mon, 30 May 2011 00:25:39 +0800, ??? wrote: > I want to computes euclidean distance between a vector and 2 vector. For > example: > > A=numpy.array([0,0]) > > B= numpy.array([[1,0],[0,1]]) > > I want to computes euclidean distance between vector A and each vector > in matrix B. > > My expected result is the vector [1,1] In [9]: scipy.spatial.distance.cdist(A[numpy.newaxis,:], B, 'euclidean') Out[9]: array([[ 1., 1.]]) It works similarly as all other functions that support broadcasting. From ralf.gommers at googlemail.com Sun May 29 13:59:37 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 29 May 2011 19:59:37 +0200 Subject: [SciPy-User] efficient computation of point cloud nearest neighbors In-Reply-To: References: Message-ID: On Sat, May 28, 2011 at 1:13 AM, Christian Jauvin wrote: > Hi, > > I need to compute the k nearest neighbors of every point in a point > cloud of at least a million points. > > I've been looking at the documentation for the scipy.spatial.KDTree > and cKDTree classes, but it's not clear to me how I should use their > query() method (and possibly their distance_upper_bound parameter) to > optimize the computation. > > I've also been looking at the ANN and FLANN C++ libraries, but in both > cases I've had trouble compiling/installing their Python bindings on > my Ubuntu system. > This is the second issue with ANN bindings reported in a week, so I had a look at scikits.ann. Then I found http://blog.physionconsulting.com/?p=17. So it looks like there should be a big "this is deprecated" warning on scikits.ann. 
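A rough sketch of the scipy side of such a timing comparison (cloud size, dimension and k are picked arbitrarily; an equivalent snippet would be needed for the scikits.ann side):

import time
import numpy as np
from scipy.spatial import cKDTree

np.random.seed(1)
data = np.random.rand(100000, 3)   # arbitrary point cloud
query = np.random.rand(1000, 3)

t0 = time.time()
tree = cKDTree(data)
print 'build: %.3f s' % (time.time() - t0)

t0 = time.time()
dist, idx = tree.query(query, k=5)           # exact
print 'exact query: %.3f s' % (time.time() - t0)

t0 = time.time()
dist, idx = tree.query(query, k=5, eps=0.1)  # approximate, ANN-style
print 'approximate query: %.3f s' % (time.time() - t0)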
It would be helpful if someone can confirm that KdTree/cKdTree in scikits.spatial is about as fast as ANN/FLANN. Thanks, Ralf > I'd appreciate some advice as to what would be the ideal strategy to > solve this problem (either with Scipy or some other package that I > wouldn't know about). > > Thanks in advance, > > Christian > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sun May 29 14:15:38 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 29 May 2011 20:15:38 +0200 Subject: [SciPy-User] efficient computation of point cloud nearest neighbors In-Reply-To: References: Message-ID: <20110529181538.GA13056@phare.normalesup.org> On Sun, May 29, 2011 at 07:59:37PM +0200, Ralf Gommers wrote: > This is the second issue with ANN bindings reported in a week, so I had a > look at scikits.ann. Then I found > [2]http://blog.physionconsulting.com/?p=17. So it looks like there should > be a big "this is deprecated" warning on scikits.ann. It would be helpful > if someone can confirm that KdTree/cKdTree in scikits.spatial is about as > fast as ANN/FLANN. Regarding speed of nearest neighbors, a big caveat is that it depends a lot on the dimensionality of the search space. For low dimensionality, KDTree is super fast. It breaks down at around 10d, because the space starts becoming too 'empty': splitting it with plane to separate in half-spaces with equal partitions of points ends up quickly creating as many planes as they are points. The next thing is the BallTree, that creates nested balls. In low dimensions it is as fast as the KDTree, and it scales a bit higher, up to 20d is a good guess. A BallTree implementation, contributed by Jake VanderPlas, can be found in the scikits.learn. For references, see http://scikit-learn.sourceforge.net/modules/neighbors.html#efficient-implementation-the-ball-tree Above d ~ 20, a brute force search is quicker if you want exact nearest neighbor. The scikits.learn's nearest neighbor search implements by default an automatic switch. I have never tried ANN (approximate nearest neighbor) but I wouldn't be surprised if it were faster than a brute force in this regime. All that to say the ANN probably has strong usecases and cannot always be replaced by a KDTree. Gael From ralf.gommers at googlemail.com Sun May 29 14:32:58 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 29 May 2011 20:32:58 +0200 Subject: [SciPy-User] efficient computation of point cloud nearest neighbors In-Reply-To: <20110529181538.GA13056@phare.normalesup.org> References: <20110529181538.GA13056@phare.normalesup.org> Message-ID: On Sun, May 29, 2011 at 8:15 PM, Gael Varoquaux < gael.varoquaux at normalesup.org> wrote: > On Sun, May 29, 2011 at 07:59:37PM +0200, Ralf Gommers wrote: > > This is the second issue with ANN bindings reported in a week, so I > had a > > look at scikits.ann. Then I found > > [2]http://blog.physionconsulting.com/?p=17. So it looks like there > should > > be a big "this is deprecated" warning on scikits.ann. It would be > helpful > > if someone can confirm that KdTree/cKdTree in scikits.spatial is about > as > > fast as ANN/FLANN. > > Regarding speed of nearest neighbors, a big caveat is that it depends a > lot on the dimensionality of the search space. > > For low dimensionality, KDTree is super fast. 
It breaks down at around > 10d, because the space starts becoming too 'empty': splitting it with > plane to separate in half-spaces with equal partitions of points ends up > quickly creating as many planes as they are points. The next thing is the > BallTree, that creates nested balls. In low dimensions it is as fast as > the KDTree, and it scales a bit higher, up to 20d is a good guess. A > BallTree implementation, contributed by Jake VanderPlas, can be found in > the scikits.learn. For references, see > > http://scikit-learn.sourceforge.net/modules/neighbors.html#efficient-implementation-the-ball-tree > > Above d ~ 20, a brute force search is quicker if you want exact nearest > neighbor. The scikits.learn's nearest neighbor search implements by > default an automatic switch. > > I have never tried ANN (approximate nearest neighbor) but I wouldn't be > surprised if it were faster than a brute force in this regime. > > All that to say the ANN probably has strong usecases and cannot always be > replaced by a KDTree. > scikits.ann exposes a single class, kdtree. scipy.spatial.KDTree.query seems to be able to do approximate nearest neighbors: KDTree.query(self, x, k=1, eps=0, p=2, distance_upper_bound=inf) ..... eps : nonnegative float Return approximate nearest neighbors; the kth returned value is guaranteed to be no further than (1+eps) times the distance to the real kth nearest neighbor. So it looks to me like scipy.spatial has things covered. Which is also what Barry Wark (scikits.ann author) seems to say in the blog post I linked to. Barry, could you confirm this and if appropriate put up some deprecation warnings? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun May 29 17:49:02 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 29 May 2011 17:49:02 -0400 Subject: [SciPy-User] multivariate goodness of fit tests Message-ID: (since it's Sunday) Does anyone have any goodness of fit tests for multivariate distributions, kolmogorov-smirnov, anderson-darling or similar? I would like to test Multivariate Normal and Multivariate T distribution classes. Right now I'm looking for a one-sample version, but the two-sample version will be not far behind. Josef From x.piter at gmail.com Mon May 30 06:58:35 2011 From: x.piter at gmail.com (Piter_) Date: Mon, 30 May 2011 12:58:35 +0200 Subject: [SciPy-User] robust fit Message-ID: Hi all. Can anybody point me a direction how to make a robust fit of nonlinear function using leastsq. Maybe someone have seen ready function doing this. Thanks. Petro. From x.piter at gmail.com Mon May 30 07:11:53 2011 From: x.piter at gmail.com (Piter_) Date: Mon, 30 May 2011 13:11:53 +0200 Subject: [SciPy-User] fit program Message-ID: Hi. There is an example of simultaneous fit of two functions in Scipy cookbook. http://www.scipy.org/Cookbook/FittingData#head-a44b49d57cf0165300f765e8f1b011876776502f Best. Petro [SciPy-User] [SciPy-user] fit program. suzana8447 k-assem84 at hotmail.... 
Sun May 1 04:02:01 CDT 2011 Dear all, This is the program i use to fit a function called EoverA to a certain data: From matthieu.brucher at gmail.com Mon May 30 07:19:51 2011 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 30 May 2011 13:19:51 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: Message-ID: Hi, It seems to me that least squares cannot lead to a robust fit in the usual sense. You may want to implement your own robust cost function instead of using a L2 norm. Matthieu 2011/5/30 Piter_ > Hi all. > Can anybody point me a direction how to make a robust fit of nonlinear > function using leastsq. > Maybe someone have seen ready function doing this. > Thanks. > Petro. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From zw4131 at gmail.com Mon May 30 08:26:13 2011 From: zw4131 at gmail.com (=?GB2312?B?va20886w?=) Date: Mon, 30 May 2011 20:26:13 +0800 Subject: [SciPy-User] It is quite confusing to use scipy.spatial.distance In-Reply-To: References: Message-ID: Thanks. This is an googd solusion. But it is not the best solusion. Using an uniform function to Computes the distance between any-dimensional array. Scipy.spatial.distance.cdist() is a very good function, and it can be extended to Computes the distance between a vector and a vector as well as between a vector and n vectors. That would be perfect !!. 2011/5/30 Pauli Virtanen > On Mon, 30 May 2011 00:25:39 +0800, ??? wrote: > > I want to computes euclidean distance between a vector and 2 vector. For > > example: > > > > A=numpy.array([0,0]) > > > > B= numpy.array([[1,0],[0,1]]) > > > > I want to computes euclidean distance between vector A and each vector > > in matrix B. > > > > My expected result is the vector [1,1] > > In [9]: scipy.spatial.distance.cdist(A[numpy.newaxis,:], B, 'euclidean') > > Out[9]: array([[ 1., 1.]]) > > It works similarly as all other functions that support broadcasting. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon May 30 10:19:42 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 30 May 2011 14:19:42 +0000 (UTC) Subject: [SciPy-User] It is quite confusing to use scipy.spatial.distance References: Message-ID: Mon, 30 May 2011 20:26:13 +0800, ??? wrote: > Thanks. This is an googd solusion. > But it is not the best solusion. > > Using an uniform function to Computes the distance between > any-dimensional array. Scipy.spatial.distance.cdist() is a very good > function, and it can be extended to Computes the distance between a > vector and a vector as well as between a vector and n vectors. That > would be perfect !!. I do not understand what you exactly mean. The example I gave does exactly what you describe.
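To spell it out, a small sketch covering the vector-vs-vector, vector-vs-n-vectors and n-vs-n cases with the same call (np.atleast_2d turns a single vector into a 1 x n array):

import numpy as np
from scipy.spatial.distance import cdist

A = np.array([0, 0])
B = np.array([[1, 0], [0, 1]])

# cdist always works on 2-d arrays; np.atleast_2d makes a vector 2-d,
# so one call form covers every combination
print cdist(np.atleast_2d(A), np.atleast_2d([1, 0]))  # vector vs vector: [[ 1.]]
print cdist(np.atleast_2d(A), B)                      # vector vs n vectors: [[ 1.  1.]]
print cdist(np.array([[0, 0], [0, 1]]), B)            # n vectors vs n vectors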
-- Pauli Virtanen From robert.kern at gmail.com Mon May 30 11:12:27 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 May 2011 10:12:27 -0500 Subject: [SciPy-User] robust fit In-Reply-To: References: Message-ID: On Mon, May 30, 2011 at 05:58, Piter_ wrote: > Hi all. > Can anybody point me a direction how to make a robust fit of nonlinear > function using leastsq. > Maybe someone have seen ready function doing this. A variety of robust fits are implemented in statsmodels: http://statsmodels.sourceforge.net/trunk/rlm.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josephsmidt at gmail.com Mon May 30 12:59:16 2011 From: josephsmidt at gmail.com (Joseph Smidt) Date: Mon, 30 May 2011 09:59:16 -0700 Subject: [SciPy-User] How To Remove A Gradient From 2D Data? Message-ID: I have 1024x1024 2D data stored in an array where there is a gradient (dipole) I would like to remove from the data. Is there any scipy/numpy tool that would make it easy to fit a plane to this gradient so that I can remove the gradient by subtracting off that best fit plane? Thank. -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 2165 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-9025 From denis-bz-gg at t-online.de Mon May 30 13:01:27 2011 From: denis-bz-gg at t-online.de (denis) Date: Mon, 30 May 2011 10:01:27 -0700 (PDT) Subject: [SciPy-User] efficient computation of point cloud nearest neighbors In-Reply-To: References: Message-ID: Christian, folks, a couple of comments: FLANN gets a lot of speed by quitting early, after looking at e.g. .1N or .01N or (FLANN default) 32*leafsize points. Accuracy *may* decrease -- guarantees are gone -- but I've found big speedup for ~ same accuracy, especially in dimensions say > 20 where "distance whiteout" sets in. I've added cutoff= to Anne Archibald's nice cython ckdtree.pyx, also verbose= to help use it; Would like a friendly proofreader or else post some data, let me try it here. What's your metric ? (Choice of metric is *really* important for clustering -- Hastie p. 506). FLANN does Euclidean, L2, only (and returns dist^2); ANN can be compiled 3 ways for L1 L2 Lmax; cKDTree does any Lp metric. cheers -- denis On May 28, 1:13?am, Christian Jauvin wrote: > Hi, > > I need to compute the k nearest neighbors of every point in a point > cloud of at least a million points. From zw4131 at gmail.com Mon May 30 13:12:16 2011 From: zw4131 at gmail.com (=?GB2312?B?va20886w?=) Date: Tue, 31 May 2011 01:12:16 +0800 Subject: [SciPy-User] It is quite confusing to use scipy.spatial.distance In-Reply-To: References: Message-ID: I am sorry that I did not say clearly.I mean: Extending Scipy.spatial.distance.cdist() to be an uniform function to Computes the distance between any-dimensional array. For example: A=numpy.array([0,0]) B=numpy.array([1,0]) scipy.spatial.distance.cdist(A, B ,'euclidean') can return a value 1 A=numpy.array([0,0]) B=numpy.array([[1,0],[0,1]]) scipy.spatial.distance.cdist(A, B ,'euclidean') can return a vector [1,1] A=numpy.array([[0,0],[0,1]]) B=numpy.array([[1,0],[0,1]]) scipy.spatial.distance.cdist(A, B ,'euclidean') can return a matrix[[1,1],[1.414,0]] 2011/5/30 Pauli Virtanen > Mon, 30 May 2011 20:26:13 +0800, ??? wrote: > > Thanks. This is an googd solusion. > > But it is not the best solusion. 
> > > > Using an uniform function to Computes the distance between > > any-dimensional array. Scipy.spatial.distance.cdist() is a very good > > function, and it can be extended to Computes the distance between a > > vector and a vector as well as between a vector and n vectors. That > > would be perfect !!. > > I do not understand what you exactly mean. The example I gave does > exactly what you describe. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zw4131 at gmail.com Mon May 30 13:32:14 2011 From: zw4131 at gmail.com (=?GB2312?B?va20886w?=) Date: Tue, 31 May 2011 01:32:14 +0800 Subject: [SciPy-User] It is quite confusing to use scipy.spatial.distance In-Reply-To: References: Message-ID: For a veteran, it may be very easy. But if the user is a beginner and he does not know much detail of SciPy, he just wants to computes the distance between two arrays quickly. An uniform function and an uniform expression would be a good choice. The SciPy is powerful now. So how to let the user use its powerful functions more easily and quickly becomes our concern. 2011/5/30 Pauli Virtanen > Mon, 30 May 2011 20:26:13 +0800, ??? wrote: > > Thanks. This is an googd solusion. > > But it is not the best solusion. > > > > Using an uniform function to Computes the distance between > > any-dimensional array. Scipy.spatial.distance.cdist() is a very good > > function, and it can be extended to Computes the distance between a > > vector and a vector as well as between a vector and n vectors. That > > would be perfect !!. > > I do not understand what you exactly mean. The example I gave does > exactly what you describe. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aarchiba at physics.mcgill.ca Mon May 30 14:20:09 2011 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 30 May 2011 14:20:09 -0400 Subject: [SciPy-User] efficient computation of point cloud nearest neighbors In-Reply-To: References: <20110529181538.GA13056@phare.normalesup.org> Message-ID: Hi, In fact, I wrote KDTree and cKDTree based on a description of the algorithm used in ANN, so it should have the same O behaviour. cKDTree has certainly not seen the amount of tuning and polishing ANN has, but it should be pretty clean. For the OP's problem, the easiest way to get an answer is to make a cKDTree of the point cloud and then just call query with an array of all the points. This runs a separate query for each point, but each query uses the same tree and the looping is done in C, so it should be fairly fast. If this is not fast enough, it might be worth trying a two-tree query - that is, putting both the query points and the potential neighbours in kd-trees. Then there's an algorithm that saves a lot of tree traversal by using the spatial structure of the query points. (In this case the two trees are even the same.) Such an algorithm is even implemented, but unfortunately only in the pure python KDTree. If the OP really needs this to be fast, then the best thing to do would probably be to port KDTree.query_tree to cython. The algorithm is a little messy but not too complicated. 
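For the simple one-tree approach, a minimal sketch (leafsize and k are arbitrary; k+1 neighbours are requested because the closest neighbour of each point is the point itself):

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1000000, 3)  # stand-in for the real point cloud

tree = cKDTree(points, leafsize=10)

# query the tree with all of its own points; drop the first column,
# which is each point at distance zero from itself
k = 5
dist, idx = tree.query(points, k=k + 1)
neighbour_dist = dist[:, 1:]
neighbour_idx = idx[:, 1:]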
I don't know how the FLANN optimizations fit into all this. Anne On 29 May 2011 14:32, Ralf Gommers wrote: > > > On Sun, May 29, 2011 at 8:15 PM, Gael Varoquaux > wrote: >> >> On Sun, May 29, 2011 at 07:59:37PM +0200, Ralf Gommers wrote: >> > ? ?This is the second issue with ANN bindings reported in a week, so I >> > had a >> > ? ?look at scikits.ann. Then I found >> > ? ?[2]http://blog.physionconsulting.com/?p=17. So it looks like there >> > should >> > ? ?be a big "this is deprecated" warning on scikits.ann. It would be >> > helpful >> > ? ?if someone can confirm that KdTree/cKdTree in scikits.spatial is >> > about as >> > ? ?fast as ANN/FLANN. >> >> Regarding speed of nearest neighbors, a big caveat is that it depends a >> lot on the dimensionality of the search space. >> >> For low dimensionality, KDTree is super fast. It breaks down at around >> 10d, because the space starts becoming too 'empty': splitting it with >> plane to separate in half-spaces with equal partitions of points ends up >> quickly creating as many planes as they are points. The next thing is the >> BallTree, that creates nested balls. In low dimensions it is as fast as >> the KDTree, and it scales a bit higher, up to 20d is a good guess. A >> BallTree implementation, contributed by Jake VanderPlas, can be found in >> the scikits.learn. For references, see >> >> http://scikit-learn.sourceforge.net/modules/neighbors.html#efficient-implementation-the-ball-tree >> >> Above d ~ 20, a brute force search is quicker if you want exact nearest >> neighbor. The scikits.learn's nearest neighbor search implements by >> default an automatic switch. >> >> I have never tried ANN (approximate nearest neighbor) but I wouldn't be >> surprised if it were faster than a brute force in this regime. >> >> All that to say the ANN probably has strong usecases and cannot always be >> replaced by a KDTree. > > scikits.ann exposes a single class, kdtree. scipy.spatial.KDTree.query seems > to be able to do approximate nearest neighbors: > > ??? KDTree.query(self, x, k=1, eps=0, p=2, distance_upper_bound=inf) > ??? ..... > ??? eps : nonnegative float > ??????? Return approximate nearest neighbors; the kth returned value > ??????? is guaranteed to be no further than (1+eps) times the > ??????? distance to the real kth nearest neighbor. > > So it looks to me like scipy.spatial has things covered. Which is also what > Barry Wark (scikits.ann author) seems to say in the blog post I linked to. > > Barry, could you confirm this and if appropriate put up some deprecation > warnings? > > Thanks, > Ralf > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Mon May 30 15:54:57 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 30 May 2011 15:54:57 -0400 Subject: [SciPy-User] robust fit In-Reply-To: References: Message-ID: On Mon, May 30, 2011 at 11:12 AM, Robert Kern wrote: > On Mon, May 30, 2011 at 05:58, Piter_ wrote: >> Hi all. >> Can anybody point me a direction how to make a robust fit of nonlinear >> function using leastsq. >> Maybe someone have seen ready function doing this. > > A variety of robust fits are implemented in statsmodels: > > http://statsmodels.sourceforge.net/trunk/rlm.html Unfortunately, rlm includes only linear models, so unless the problem can be converted to a linear in parameters problem RLM will not help directly. 
I don't know of anything that would be immediately available for this. long answer: >From a quick Google search it seems the same iteratively reweighted regression method can be applied to non-linear models, but I didn't find an internet accessible paper, and I don't know the details about how easy it would be to add non-linear least squares instead of linear weighted least squares to statsmodels.robust. scipy.optimize.curvefit allows for weights, so it might be possible to down-weight observations with large residuals in non-linear least squares. Some ways that shouldn't be too difficult to implement: If there are clear outliers, it might be possible to identify them and remove or trim them, trimmed least squares. (Using optimize fmin with a robust loss function, might be possible, but I have no idea how well it works or how to get estimates for the covariance of the error estimates.) Since I'm not so familiar with these robust methods but know Maximum Likelihood estimation, what I would do is to assume that the errors come from a non-normal distribution, either a mixture model, if some observations might be generated by a different model, or assume that the errors are t-distributed. In the linear examples that I looked at, t-distributed maximum likelihood was very robust to outliers, (error distribution with heavy tails). It should also work quite easily (using GenericLikelihoodModel in statsmodels), if the non-linear model is well behaved and/or good starting values are available. I don't remember whether I have read this specific paper but it's top in my google search http://www.jstor.org/stable/2290063 (Cited by 490 in google) Robust Statistical Modeling Using the t Distribution Kenneth L. Lange, Roderick J. A. Little and Jeremy M. G. Taylor Abstract: "The t distribution provides a useful extension of the normal for statistical modeling of data sets involving errors with longer- than-normal tails. An analytical strategy based on maximum likelihood for a general model with multivariate t errors is suggested and applied to a variety of problems, including linear and nonlinear regression, robust estimation of the mean and covariance matrix with missing data, unbalanced multivariate repeated-measures data, multivariate modeling of pedigree data, and multi- variate nonlinear regression. The degrees of freedom parameter of the t distribution provides a convenient dimension for achieving robust statistical inference, with moderate increases in computational complexity for many models. Estimation of precision from asymptotic theory and the bootstrap is discussed, and graphical methods for checking the appropriateness of the t distribution are presented." Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From akshar.bhosale at gmail.com Sun May 29 00:33:53 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sun, 29 May 2011 10:03:53 +0530 Subject: [SciPy-User] numpy,scipy installation using mkl Message-ID: Hi, Is this right forum for doubts abt numpy/scipy installation? Please find our issue below .. we have machine having intel xeon x7350 processors(8 nos) and RHEL 5.2 x86_64 with kernel 2.6.18-92.el5. 
We have following configuration : /opt/intel/Compiler/11.0/069/ mkl/lib/em64t Now we want to install numpy and scipy as an user in my home directory. Following are the libraries build inside MKL. libfftw2x_cdft_DOUBLE.a libmkl_blacs_sgimpt_ilp64.a libmkl_intel_ilp64.a libmkl_pgi_thread.so libmkl_vml_mc2.so libfftw2xc_intel.a libmkl_blacs_sgimpt_lp64.a libmkl_intel_ilp64.so libmkl_scalapack.a libmkl_vml_mc3.so libfftw2xf_intel.a libmkl_blas95.a libmkl_intel_lp64.a libmkl_scalapack_ilp64.a libmkl_vml_mc.so libfftw3xc_intel.a libmkl_cdft.a libmkl_intel_lp64.so libmkl_scalapack_ilp64.so libmkl_vml_p4n.so libfftw3xf_intel.a libmkl_cdft_core.a libmkl_intel_sp2dp.a libmkl_scalapack_lp64.a locale libmkl_blacs_ilp64.a libmkl_core.a libmkl_intel_sp2dp.so libmkl_scalapack_lp64.so mkl77_blas.mod libmkl_blacs_intelmpi20_ilp64.a libmkl_core.so libmkl_intel_thread.a libmkl_sequential.a mkl77_lapack1.mod libmkl_blacs_intelmpi20_lp64.a libmkl_def.so libmkl_intel_thread.so libmkl_sequential.so mkl77_lapack.mod libmkl_blacs_intelmpi_ilp64.a libmkl_em64t.a libmkl_lapack95.a libmkl.so mkl95_blas.mod libmkl_blacs_intelmpi_ilp64.so libmkl_gf_ilp64.a libmkl_lapack.a libmkl_solver.a mkl95_lapack.mod libmkl_blacs_intelmpi_lp64.a libmkl_gf_ilp64.so libmkl_lapack.so libmkl_solver_ilp64.a mkl95_precision.mod libmkl_blacs_intelmpi_lp64.so libmkl_gf_lp64.a libmkl_mc3.so libmkl_solver_ilp64_sequential.a libmkl_blacs_lp64.a libmkl_gf_lp64.so libmkl_mc.so libmkl_solver_lp64.a libmkl_blacs_openmpi_ilp64.a libmkl_gnu_thread.a libmkl_p4n.so libmkl_solver_lp64_sequential.a libmkl_blacs_openmpi_lp64.a libmkl_gnu_thread.so libmkl_pgi_thread.a libmkl_vml_def.so Version we are trying to build of numpy and scipy are : numpy : 1.6.0b2 and scipy : 0.9.0 We have configured python as a user in my home directory with version : 2.6.6 our machine has python : 2.4.3 We want to know the exact procedure for installing it using MKL as we are facing lot of issues while installing the same. Please help us. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cournape at gmail.com Mon May 30 21:55:33 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 31 May 2011 10:55:33 +0900 Subject: [SciPy-User] scipy instsll issue In-Reply-To: References: Message-ID: Akshar, This is the fourth time you are sending this email, and it already got answered: http://mail.scipy.org/pipermail/numpy-discussion/2011-May/056472.html If you don't see the answer, maybe you are having issues with your email client. thanks, David From andrew.collette at gmail.com Mon May 30 22:58:51 2011 From: andrew.collette at gmail.com (Andrew Collette) Date: Mon, 30 May 2011 20:58:51 -0600 Subject: [SciPy-User] ANN: HDF5 for Python (h5py) 1.4 BETA Message-ID: Announcing HDF5 for Python (h5py) 1.4 BETA ========================================== We are proud to announce the availability of HDF5 for Python (h5py) 1.4 beta. HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a mature scientific software library originally developed at NCSA, designed for the fast, flexible storage of enormous amounts of data. >From a Python programmer's perspective, HDF5 provides a robust way to store data, organized by name in a tree-like fashion. You can create datasets (arrays on disk) hundreds of gigabytes in size, and perform random-access I/O on desired sections. Datasets are organized in a filesystem-like hierarchy using containers called "groups", and accessed using the traditional POSIX /path/to/resource syntax. The beta will be available for 1-2 weeks. Because of the substantial number of changes in this release, we encourage all current and prospective h5py users to try out the beta and provide feedback, either to the mailing list (h5py at googlegroups) or on the bug tracker as appropriate. * What's new: http://h5py.alfven.org/docs-1.4/intro/whatsnew.html * Google code site: http://h5py.googlecode.com Most exciting changes --------------------- * Significant improvements in stability, from a refactoring of the low-level component which talks to HDF5. * HDF5 1.8.3 through 1.8.7 now work correctly and are officially supported. * Python 3.2 is officially supported by h5py! Thanks especially to Darren Dale for getting this working. * HDF5 1.6.X is no longer supported on any platform; following the release of 1.6.10 some time ago, this branch is no longer maintained by The HDF Group. * Python 2.6 or later is now required to run h5py. This is a consequence of the numerous changes made to h5py for Python 3 compatibility. From cimrman3 at ntc.zcu.cz Tue May 31 04:04:55 2011 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 31 May 2011 10:04:55 +0200 Subject: [SciPy-User] ANN: SfePy 2011.2 Message-ID: <4DE4A127.9070006@ntc.zcu.cz> I am pleased to announce release 2011.2 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages. It is distributed under the new BSD license. 
Home page: http://sfepy.org Mailing lists, issue tracking: http://code.google.com/p/sfepy/ Git (source) repository: http://github.com/sfepy Documentation: http://docs.sfepy.org/doc Highlights of this release -------------------------- - experimental implementation of terms aiming at easier usage and definition of new terms - Mooney-Rivlin membrane term - update build system to use exclusively setup.py - allow switching boundary conditions on/off depending on time - support for variable time step solvers For more information on this release, see http://sfepy.googlecode.com/svn/web/releases/2011.2_RELEASE_NOTES.txt (full release notes, rather long and technical). Best regards, Robert Cimrman and Vladim?r Luke? From sturla at molden.no Tue May 31 10:10:07 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 16:10:07 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: Message-ID: <4DE4F6BF.5060405@molden.no> I've done this for M-estimates using bisquare weights. If the errors are error(x) = y(x) - fit(x) you want the minimize the sum of squares of weighted errors: werror(x) = weight(error(x)) * error(x) Provide one function that returns werror for each x[:,n],y[n] and another that returns the partial derivatives of werror[:] with respect to x[i,:]. The latter is the Jacobian that you set as Dfun keyword argument. I.e. if you have m parameters and n samples in the array x, the Jacobian is an m x n array. The last thing you need is an initial guess. This will depend on the problem, so its hard to give an advide. Just be beware that a least squares fit will not always work. If the you think deriving the Jacobian is tedious, consider using SymPy or just let leastsq approximate it (i.e. set Dfun to None). Sturla Den 30.05.2011 12:58, skrev Piter_: > Hi all. > Can anybody point me a direction how to make a robust fit of nonlinear > function using leastsq. > Maybe someone have seen ready function doing this. > Thanks. > Petro. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From sturla at molden.no Tue May 31 10:17:02 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 16:17:02 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: Message-ID: <4DE4F85E.30800@molden.no> Den 30.05.2011 13:19, skrev Matthieu Brucher: > > It seems to me that least squares cannot lead to a ribust fit in the > usual sense. Leastsq is actually Levenberg-Marquardt (lmder and lmdif from MINPACK). It can be used to minimize any cost function if you provide the Jacobian. Sturla From x.piter at gmail.com Tue May 31 10:20:43 2011 From: x.piter at gmail.com (Piter_) Date: Tue, 31 May 2011 16:20:43 +0200 Subject: [SciPy-User] robust fit In-Reply-To: <4DE4F85E.30800@molden.no> References: <4DE4F85E.30800@molden.no> Message-ID: Thanks for all answers. I will look it through. Best. 
Petro From sturla at molden.no Tue May 31 10:21:42 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 16:21:42 +0200 Subject: [SciPy-User] robust fit In-Reply-To: <4DE4F6BF.5060405@molden.no> References: <4DE4F6BF.5060405@molden.no> Message-ID: <4DE4F976.20106@molden.no> Den 31.05.2011 16:10, skrev Sturla Molden: > you want the minimize the sum of squares of weighted > errors: > > werror(x) = weight(error(x)) * error(x) One more thing: weight(error) is actually the square root of the robust weighting function, as we want to minimize the sum of robust_weight(error) * (error**2) Sturla From sturla at molden.no Tue May 31 11:27:29 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 17:27:29 +0200 Subject: [SciPy-User] efficient computation of point cloud nearest neighbors In-Reply-To: References: <20110529181538.GA13056@phare.normalesup.org> Message-ID: <4DE508E1.1000904@molden.no> Den 30.05.2011 20:20, skrev Anne Archibald: > If this is not fast enough, it might be worth trying a two-tree query > - that is, putting both the query points and the potential neighbours > in kd-trees. Then there's an algorithm that saves a lot of tree > traversal by using the spatial structure of the query points. (In this > case the two trees are even the same.) Such an algorithm is even > implemented, but unfortunately only in the pure python KDTree. If the > OP really needs this to be fast, then the best thing to do would > probably be to port KDTree.query_tree to cython. The algorithm is a > little messy but not too complicated. In this case we just need one kd-tree. Instead of starting from the root, we begin with the leaf containing the query point and work our way downwards. We then find a better branching point from which to start than the root. That is not messy at all :-) Another thing to note is that making the kd-tree is very fast whereas searching it is slow. So using multiprocessing is an option. Sturla From josef.pktd at gmail.com Tue May 31 11:30:22 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 31 May 2011 11:30:22 -0400 Subject: [SciPy-User] robust fit In-Reply-To: <4DE4F85E.30800@molden.no> References: <4DE4F85E.30800@molden.no> Message-ID: On Tue, May 31, 2011 at 10:17 AM, Sturla Molden wrote: > Den 30.05.2011 13:19, skrev Matthieu Brucher: >> >> It seems to me that least squares cannot lead to a ribust fit in the >> usual sense. > > Leastsq is actually Levenberg-Marquardt (lmder and lmdif from MINPACK). > It can > be used to minimize any cost function if you provide the Jacobian. Do you have an example for this? >From the docstring of lmdif and lmder it can only minimize sum of squares, e.g. c the purpose of lmdif is to minimize the sum of the squares of c m nonlinear functions in n variables by a modification of c the levenberg-marquardt algorithm. Josef > > Sturla > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sturla at molden.no Tue May 31 11:37:48 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 17:37:48 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: <4DE4F85E.30800@molden.no> Message-ID: <4DE50B4C.5080509@molden.no> Den 31.05.2011 17:30, skrev josef.pktd at gmail.com: > > Do you have an example for this? > > From the docstring of lmdif and lmder it can only minimize sum of squares, e.g. 
> > c the purpose of lmdif is to minimize the sum of the squares of > c m nonlinear functions in n variables by a modification of > c the levenberg-marquardt algorithm. > Huh? If it can minimize sum(x*x), then it can also minimize sum(w*x*x) by the transformation z = sqrt(w)*x. Sturla From josef.pktd at gmail.com Tue May 31 11:39:19 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 31 May 2011 11:39:19 -0400 Subject: [SciPy-User] robust fit In-Reply-To: <4DE4F976.20106@molden.no> References: <4DE4F6BF.5060405@molden.no> <4DE4F976.20106@molden.no> Message-ID: On Tue, May 31, 2011 at 10:21 AM, Sturla Molden wrote: > Den 31.05.2011 16:10, skrev Sturla Molden: >> you want the minimize the sum of squares of weighted >> errors: >> >> ? ? ?werror(x) = weight(error(x)) * error(x) > > One more thing: weight(error) is actually the square root of the > robust weighting function, as we want to minimize the sum of > > ? ?robust_weight(error) ?* (error**2) Do you have a reference or a full example for this? Your description sounds relatively simple, and I guess now that we (statsmodels) can get the non-linear version with only small(ish) changes and a call to a WNLLS (curve_fit) instead of WLS (linear). Josef > > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue May 31 11:43:19 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 31 May 2011 11:43:19 -0400 Subject: [SciPy-User] robust fit In-Reply-To: <4DE50B4C.5080509@molden.no> References: <4DE4F85E.30800@molden.no> <4DE50B4C.5080509@molden.no> Message-ID: On Tue, May 31, 2011 at 11:37 AM, Sturla Molden wrote: > Den 31.05.2011 17:30, skrev josef.pktd at gmail.com: >> >> Do you have an example for this? >> > From the docstring of lmdif and lmder it can only minimize sum of squares, e.g. >> >> c ? ? the purpose of lmdif is to minimize the sum of the squares of >> c ? ? m nonlinear functions in n variables by a modification of >> c ? ? the levenberg-marquardt algorithm. >> > > Huh? > > If it can minimize sum(x*x), then it can also minimize sum(w*x*x) by the > transformation z = sqrt(w)*x. Yes, that's what scipy.optimize.curve_fit is doing, but it is still a sum of squares, your statement was *any* cost function (not just quadratic, sum of squares): > It can > be used to minimize any cost function if you provide the Jacobian. Just getting mislead by the definition of *any*. Josef > > Sturla > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sturla at molden.no Tue May 31 11:56:07 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 17:56:07 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: <4DE4F6BF.5060405@molden.no> <4DE4F976.20106@molden.no> Message-ID: <4DE50F97.5050108@molden.no> Den 31.05.2011 17:39, skrev josef.pktd at gmail.com: > Do you have a reference or a full example for this? Yes, I'll look one up for you. > Your description sounds relatively simple, and I guess now that we > (statsmodels) can get the non-linear version with only small(ish) > changes and a call to a WNLLS (curve_fit) instead of WLS (linear). > Yes and no. You have to provide an initial fit and derive the Jacobian (unless you are happy with an approximation). 
Also the "robust fit" (asked for here) is not just WNLLS, because the weights are a non-linear function of the residuals. You can solve this by iterative WNLLS, or provide the full Jacbobian directly to Levenberg-Marquardt. In the former case, it is just sqrt(w) times the Jacobian for the residuals. But you can also get the M-estimator from a single pass of Levenberg-Marquardt by using the chain rule on sqrt(w(e(x)))*e(x). So there are actually two ways of doing this with leastsq. Sturla From charlesr.harris at gmail.com Tue May 31 11:57:53 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 31 May 2011 09:57:53 -0600 Subject: [SciPy-User] robust fit In-Reply-To: References: <4DE4F6BF.5060405@molden.no> <4DE4F976.20106@molden.no> Message-ID: On Tue, May 31, 2011 at 9:39 AM, wrote: > On Tue, May 31, 2011 at 10:21 AM, Sturla Molden wrote: > > Den 31.05.2011 16:10, skrev Sturla Molden: > >> you want the minimize the sum of squares of weighted > >> errors: > >> > >> werror(x) = weight(error(x)) * error(x) > > > > One more thing: weight(error) is actually the square root of the > > robust weighting function, as we want to minimize the sum of > > > > robust_weight(error) * (error**2) > > Do you have a reference or a full example for this? > > Your description sounds relatively simple, and I guess now that we > (statsmodels) can get the non-linear version with only small(ish) > changes and a call to a WNLLS (curve_fit) instead of WLS (linear). > > I've also had good results with Tukey's biweight. As Sturla says, it can be implemented as iterated weighted least squares. There is a whole class of robust methods along that line. The L_1 cost function can also be done that way, and the usual algorithm for the geometric median is one useful application. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Tue May 31 12:01:37 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 18:01:37 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: <4DE4F85E.30800@molden.no> <4DE50B4C.5080509@molden.no> Message-ID: <4DE510E1.1030407@molden.no> Den 31.05.2011 17:43, skrev josef.pktd at gmail.com: > > Yes, that's what scipy.optimize.curve_fit is doing, but it is still a > sum of squares, your statement was *any* cost function (not just > quadratic, sum of squares): > I was just answering Matthieu's statement that least sqaures cannot lead to a robust fit in the usual sence. But in this case it can. Sorry for sloppy choise of wording. It cannot optimize any function. But it can optimize any function that can be expressed as a non-linear least-squares problem, and that includes robust M-estimators. Sturla From sturla at molden.no Tue May 31 12:09:29 2011 From: sturla at molden.no (Sturla Molden) Date: Tue, 31 May 2011 18:09:29 +0200 Subject: [SciPy-User] robust fit In-Reply-To: References: <4DE4F6BF.5060405@molden.no> <4DE4F976.20106@molden.no> Message-ID: <4DE512B9.6000308@molden.no> Den 31.05.2011 17:57, skrev Charles R Harris: > > > I've also had good results with Tukey's biweight That is the one I prefer too, particularly with a robust estimate of the standard error (MAD/0.6745). If the residuals are normally distributed, there is hardly any difference from the least squares fit. 
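A minimal sketch of the iteratively reweighted least squares (M-estimation) recipe discussed in this thread, using scipy.optimize.leastsq, Tukey's biweight and the MAD/0.6745 scale estimate. The model function, tuning constant, starting values and synthetic data below are illustrative placeholders rather than anything posted in the thread, and the fixed-iteration reweighting loop is only one way to organize it (not the statsmodels implementation):

import numpy as np
from scipy.optimize import leastsq

def model(params, x):
    # placeholder model, just for illustration
    a, b = params
    return a * np.exp(-b * x)

def tukey_biweight(u, c=4.685):
    # Tukey's biweight: weights fall to zero for |u| > c
    w = (1.0 - (u / c)**2)**2
    w[np.abs(u) > c] = 0.0
    return w

def robust_fit(x, y, p0, n_iter=20):
    # iteratively reweighted least squares driven by leastsq
    params = np.asarray(p0, dtype=float)
    weights = np.ones_like(y)

    def weighted_residuals(p):
        # sqrt of the robust weight times the residual, as noted earlier in the thread
        return np.sqrt(weights) * (y - model(p, x))

    for i in range(n_iter):
        params, ier = leastsq(weighted_residuals, params)
        resid = y - model(params, x)
        scale = np.median(np.abs(resid - np.median(resid))) / 0.6745  # robust scale (MAD)
        if scale == 0.0:
            break
        weights = tukey_biweight(resid / scale)
    return params

# usage with synthetic data containing one gross outlier
x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * np.random.randn(50)
y[10] += 5.0
print robust_fit(x, y, p0=[1.0, 1.0])

With normally distributed residuals this stays close to the plain least squares fit; the gross outlier is effectively ignored once its biweight drops to zero.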
Sturla From josef.pktd at gmail.com Tue May 31 12:24:17 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 31 May 2011 12:24:17 -0400 Subject: [SciPy-User] robust fit In-Reply-To: <4DE512B9.6000308@molden.no> References: <4DE4F6BF.5060405@molden.no> <4DE4F976.20106@molden.no> <4DE512B9.6000308@molden.no> Message-ID: On Tue, May 31, 2011 at 12:09 PM, Sturla Molden wrote: > Den 31.05.2011 17:57, skrev Charles R Harris: >> >> >> I've also had good results with Tukey's biweight > > That is the one I prefer too, particularly with a robust estimate of the > standard error (MAD/0.6745). If the residuals are normally distributed, > there is hardly any difference from the least squares fit. just to add some advertising for statsmodels and Skipper's work http://statsmodels.sourceforge.net/devel/rlm.html#norms http://statsmodels.sourceforge.net/devel/rlm_techn1.html http://statsmodels.sourceforge.net/devel/generated/scikits.statsmodels.robust.scale.stand_mad.html#scikits.statsmodels.robust.scale.stand_mad hopefully we get the non-linear extension (if someone finds the time this summer). Josef > > Sturla > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From SGONG at mdacorporation.com Tue May 31 13:58:46 2011 From: SGONG at mdacorporation.com (Shawn Gong) Date: Tue, 31 May 2011 10:58:46 -0700 Subject: [SciPy-User] 2D interpolation - filling voids Message-ID: <33584A1DEF4341428D15C1273466D3235AFA7CE2@EVSYVR1.ds.mda.ca> Hi list, I have an IDL program that does boxcar filtering in order to fill (irregular shape) a 2D hole (NaN). The boxcar filtering is run multiple times until all the NaN are filled. It takes very long. Does scipy have ready-to-use signal processing functions? Examples are greatly appreciated. thanks, Shawn From davide.lasagna at polito.it Tue May 31 16:56:18 2011 From: davide.lasagna at polito.it (Davide Lasagn) Date: Tue, 31 May 2011 22:56:18 +0200 Subject: [SciPy-User] 2D interpolation - filling voids In-Reply-To: <33584A1DEF4341428D15C1273466D3235AFA7CE2@EVSYVR1.ds.mda.ca> References: <33584A1DEF4341428D15C1273466D3235AFA7CE2@EVSYVR1.ds.mda.ca> Message-ID: <201105312256.18485.davide.lasagna@polito.it> HI, I implemented an image inpainting algorithm in cython for replacing nans in a 2d array, for a project of mine. You can grab the source code at: https://github.com/gasagna/openpiv-python/blob/master/openpiv/src/lib.pyx The code shoul be robust enough fo your purpose , but please send some feedback if something goes wrong. Cheers Davide Lasagna On Tuesday, May 31, 2011 07:58:46 PM Shawn Gong wrote: > Hi list, > > I have an IDL program that does boxcar filtering in order to fill > (irregular shape) a 2D hole (NaN). The boxcar filtering is run multiple > times until all the NaN are filled. It takes very long. > > Does scipy have ready-to-use signal processing functions? Examples are > greatly appreciated. 
> > thanks, > Shawn > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From SGONG at mdacorporation.com Tue May 31 17:03:08 2011 From: SGONG at mdacorporation.com (Shawn Gong) Date: Tue, 31 May 2011 14:03:08 -0700 Subject: [SciPy-User] 2D interpolation - filling voids In-Reply-To: <201105312256.18485.davide.lasagna@polito.it> References: <33584A1DEF4341428D15C1273466D3235AFA7CE2@EVSYVR1.ds.mda.ca> <201105312256.18485.davide.lasagna@polito.it> Message-ID: <33584A1DEF4341428D15C1273466D3235AFA7CE9@EVSYVR1.ds.mda.ca> Hi Davide, Thank you for your help. Shawn -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Davide Lasagn Sent: Tuesday, May 31, 2011 4:56 PM To: SciPy Users List Subject: Re: [SciPy-User] 2D interpolation - filling voids HI, I implemented an image inpainting algorithm in cython for replacing nans in a 2d array, for a project of mine. You can grab the source code at: https://github.com/gasagna/openpiv-python/blob/master/openpiv/src/lib.pyx The code shoul be robust enough fo your purpose , but please send some feedback if something goes wrong. Cheers Davide Lasagna On Tuesday, May 31, 2011 07:58:46 PM Shawn Gong wrote: > Hi list, > > I have an IDL program that does boxcar filtering in order to fill > (irregular shape) a 2D hole (NaN). The boxcar filtering is run multiple > times until all the NaN are filled. It takes very long. > > Does scipy have ready-to-use signal processing functions? Examples are > greatly appreciated. > > thanks, > Shawn > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From klonuo at gmail.com Tue May 31 19:02:30 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 01 Jun 2011 01:02:30 +0200 Subject: [SciPy-User] f2py on Windows with MSVC9 Message-ID: <20110601010227.3590.B1C76292@gmail.com> I tried to make use of fortran code, and made trivial fortran code example. It runs fine with ifort compiler, FYI Then I declare CF2PY in and outs and hoping to see it work ===================================================================== f2py -c -m dummy dummy.f --------------------------------------------------------------------- which unfortunatelly throws error: ===================================================================== dummymodule.c(77) : error C2010: '.' : unexpected in macro formal parameter list --------------------------------------------------------------------- dummymodule.c(77) is this line: ===================================================================== #define size(var, dim...) 
f2py_size((PyArrayObject *)(capi_ ## var ## _tmp), ##dim, -1) --------------------------------------------------------------------- Any ideas what's wrong and if it's possible to do this under: Windows XP SP3 32 bit MSVC 2008 Intel Fortran Compiler 11.1 Python 2.6.6 NumPy 1.6 Thanks in advance Here is full log: running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "dummy" sources f2py options: [] f2py:> c:\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.c creating c:\temp\tmp\tmpgkb57o creating c:\temp\tmp\tmpgkb57o\src.win32-2.6 Reading fortran codes... Reading file 'dummy.f' (format:fix,strict) crackline: groupcounter=3 groupname={0: '', 1: 'module', 2: 'interface', 3: 'subroutine'} crackline: Mismatch of blocks encountered. Trying to fix it by assuming "end" statement. crackline: groupcounter=2 groupname={0: '', 1: 'module', 2: 'interface', 3: 'subroutine'} crackline: Mismatch of blocks encountered. Trying to fix it by assuming "end" statement. crackline: groupcounter=1 groupname={0: '', 1: 'module', 2: 'interface', 3: 'subroutine'} crackline: Mismatch of blocks encountered. Trying to fix it by assuming "end" statement. Post-processing... Block: dummy Block: fhist Post-processing (stage 2)... Building modules... Building module "dummy"... Constructing wrapper function "fhist"... fhist(wavdata,[l]) Wrote C/API module "dummy" to file "c:\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.c" adding 'c:\temp\tmp\tmpgkb57o\src.win32-2.6\fortranobject.c' to sources. adding 'c:\temp\tmp\tmpgkb57o\src.win32-2.6' to include_dirs. copying C:\Python26\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\temp\tmp\tmpgkb57o\src.win32-2.6 copying C:\Python26\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\temp\tmp\tmpgkb57o\src.win32-2.6 build_src: building npy-pkg config files running build_ext No module named msvccompiler in numpy.distutils; trying from distutils customize MSVCCompiler customize MSVCCompiler using build_ext customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Found executable C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe Found executable C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe customize IntelVisualFCompiler customize IntelVisualFCompiler using build_ext building 'dummy' extension compiling C sources creating c:\temp\tmp\tmpgkb57o\Release creating c:\temp\tmp\tmpgkb57o\Release\temp creating c:\temp\tmp\tmpgkb57o\Release\temp\tmp creating c:\temp\tmp\tmpgkb57o\Release\temp\tmp\tmpgkb57o creating c:\temp\tmp\tmpgkb57o\Release\temp\tmp\tmpgkb57o\src.win32-2.6 c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\temp\tmp\tmpgkb57o\src.win32-2.6-IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC /Tcc:\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.c /Foc:\temp\tmp\tmpgkb57o\Release\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.obj Found executable c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe dummymodule.c c:\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.c(77) : error C2010: '.' 
: unexpected in macro formal parameter list error: Command "c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\temp\tmp\tmpgkb57o\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC /Tcc:\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.c /Foc:\temp\tmp\tmpgkb57o\Release\temp\tmp\tmpgkb57o\src.win32-2.6\dummymodule.obj" failed with exit status 2 From klonuo at gmail.com Tue May 31 19:23:44 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 01 Jun 2011 01:23:44 +0200 Subject: [SciPy-User] f2py on Windows with MSVC9 In-Reply-To: <20110601010227.3590.B1C76292@gmail.com> References: <20110601010227.3590.B1C76292@gmail.com> Message-ID: <20110601012342.3593.B1C76292@gmail.com> I don't know C, so don't know if I break something or encountered a typo, but changing: C:\Python26\Lib\site-packages\numpy\f2py\cfuncs.py(259): from: #define size(var, dim...) f2py_size((PyArrayObject *)(capi_ ## var ## _tmp), ##dim, -1) to: #define size(var, dim) f2py_size((PyArrayObject *)(capi_ ## var ## _tmp), ##dim, -1) solved this case From klonuo at gmail.com Tue May 31 20:17:01 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 01 Jun 2011 02:17:01 +0200 Subject: [SciPy-User] f2py on Windows with MSVC9 In-Reply-To: <20110601012342.3593.B1C76292@gmail.com> References: <20110601010227.3590.B1C76292@gmail.com> <20110601012342.3593.B1C76292@gmail.com> Message-ID: <20110601021659.3596.B1C76292@gmail.com> > > solved this case Well, it's not solved really I had to make some changes and still can't produce working example I took even more dummy fortran code from here: http://www.scipy.org/F2PY_Windows dummy.f ===================================================================== SUBROUTINE HELLO() WRITE(*,*)'HELLO FROM FORTRAN90!!!' END --------------------------------------------------------------------- f2py -c -m dummy dummy.f ===================================================================== dummymodule.obj : error LNK2001: unresolved external symbol _hello_ --------------------------------------------------------------------- full log: ===================================================================== running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "dummy" sources f2py options: [] f2py:> c:\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.c creating c:\temp\tmp\tmphyln9j creating c:\temp\tmp\tmphyln9j\src.win32-2.6 Reading fortran codes... Reading file 'dummy.f' (format:fix,strict) Post-processing... Block: dummy Block: hello Post-processing (stage 2)... Building modules... Building module "dummy"... Constructing wrapper function "hello"... hello() Wrote C/API module "dummy" to file "c:\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.c" adding 'c:\temp\tmp\tmphyln9j\src.win32-2.6\fortranobject.c' to sources. adding 'c:\temp\tmp\tmphyln9j\src.win32-2.6' to include_dirs. 
copying C:\Python26\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\temp\tmp\tmphyln9j\src.win32-2.6 copying C:\Python26\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\temp\tmp\tmphyln9j\src.win32-2.6 build_src: building npy-pkg config files running build_ext No module named msvccompiler in numpy.distutils; trying from distutils customize MSVCCompiler customize MSVCCompiler using build_ext customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Found executable C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe Found executable C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe customize IntelVisualFCompiler customize IntelVisualFCompiler using build_ext building 'dummy' extension compiling C sources creating c:\temp\tmp\tmphyln9j\Release creating c:\temp\tmp\tmphyln9j\Release\temp creating c:\temp\tmp\tmphyln9j\Release\temp\tmp creating c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j creating c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6 c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\temp\tmp\tmphyl n9j\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC /Tcc:\te mp\tmp\tmphyln9j\src.win32-2.6\dummymodule.c /Foc:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy module.obj Found executable c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\temp\tmp\tmphyl n9j\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC /Tcc:\te mp\tmp\tmphyln9j\src.win32-2.6\fortranobject.c /Foc:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\fortranobject.obj compiling Fortran sources Fortran f77 compiler: C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe -FI -w90 -w95 /F6000000 /fpe:3 /Qo penmp /w /I:C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /O1 /G7 /QaxW /QaxM Fortran f90 compiler: C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe -FR /F6000000 /fpe:3 /Qopenmp /w / I:C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /F6000000 /fpe:3 /Qopenmp /w /I:C:\Program Files\ VNI\imsl\fnl700\winin111i32\include\dll /nologo /O1 /G7 /QaxW /QaxM Fortran fix compiler: C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe -FI -4L72 -w /F6000000 /fpe:3 /Qop enmp /w /I:C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /F6000000 /fpe:3 /Qopenmp /w /I:C:\Progr am Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /O1 /G7 /QaxW /QaxM compile options: '-Ic:\temp\tmp\tmphyln9j\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC -c' ifort.exe:f77: dummy.f c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild /EXPORT:initdummy c:\temp\tmp\tm phyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.obj c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\sr c.win32-2.6\fortranobject.obj c:\temp\tmp\tmphyln9j\Release\dummy.o /OUT:.\dummy.pyd /IMPLIB:c:\temp\tmp\tmphyln9j\R elease\temp\tmp\tmphyln9j\src.win32-2.6\dummy.lib /MANIFESTFILE:c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src .win32-2.6\dummy.pyd.manifest Found executable c:\Program Files\Microsoft 
Visual Studio 9.0\VC\BIN\link.exe LIBCMT.lib(_file.obj) : error LNK2005: ___iob_func already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(crt0dat.obj) : error LNK2005: __amsg_exit already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(crt0dat.obj) : error LNK2005: __initterm_e already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(crtheap.obj) : error LNK2005: __malloc_crt already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_a already defined in MSVCRT.lib(cinitexe.obj) LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_z already defined in MSVCRT.lib(cinitexe.obj) LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_a already defined in MSVCRT.lib(cinitexe.obj) LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_z already defined in MSVCRT.lib(cinitexe.obj) LIBCMT.lib(winxfltr.obj) : error LNK2005: ___CppXcptFilter already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(tidtable.obj) : error LNK2005: __encode_pointer already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(tidtable.obj) : error LNK2005: __encoded_null already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(tidtable.obj) : error LNK2005: __decode_pointer already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(mlock.obj) : error LNK2005: __unlock already defined in MSVCRT.lib(MSVCR90.dll) LIBCMT.lib(mlock.obj) : error LNK2005: __lock already defined in MSVCRT.lib(MSVCR90.dll) Creating library c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy.lib and object c:\temp\tmp\ tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy.exp LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library dummymodule.obj : error LNK2001: unresolved external symbol _hello_ libifcoremt.lib(for_main.obj) : error LNK2019: unresolved external symbol _MAIN__ referenced in function _main .\dummy.pyd : fatal error LNK1120: 2 unresolved externals error: Command "c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C :\Python26\libs /LIBPATH:C:\Python26\PCbuild /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild /EXPORT:initdumm y c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.obj c:\temp\tmp\tmphyln9j\Release\temp\ tmp\tmphyln9j\src.win32-2.6\fortranobject.obj c:\temp\tmp\tmphyln9j\Release\dummy.o /OUT:.\dummy.pyd /IMPLIB:c:\temp \tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy.lib /MANIFESTFILE:c:\temp\tmp\tmphyln9j\Release\temp\t mp\tmphyln9j\src.win32-2.6\dummy.pyd.manifest" failed with exit status 1120 From cgohlke at uci.edu Tue May 31 20:39:16 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 31 May 2011 17:39:16 -0700 Subject: [SciPy-User] f2py on Windows with MSVC9 In-Reply-To: <20110601021659.3596.B1C76292@gmail.com> References: <20110601010227.3590.B1C76292@gmail.com> <20110601012342.3593.B1C76292@gmail.com> <20110601021659.3596.B1C76292@gmail.com> Message-ID: <4DE58A34.2000600@uci.edu> On 5/31/2011 5:17 PM, Klonuo Umom wrote: >> >> solved this case > > Well, it's not solved really > > I had to make some changes and still can't produce working example > > I took even more dummy fortran code from here: > http://www.scipy.org/F2PY_Windows > > dummy.f > ===================================================================== > SUBROUTINE HELLO() > WRITE(*,*)'HELLO FROM FORTRAN90!!!' 
> END > --------------------------------------------------------------------- > > > f2py -c -m dummy dummy.f > ===================================================================== > dummymodule.obj : error LNK2001: unresolved external symbol _hello_ > --------------------------------------------------------------------- > > > > > full log: > ===================================================================== > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options > running build_src > build_src > building extension "dummy" sources > f2py options: [] > f2py:> c:\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.c > creating c:\temp\tmp\tmphyln9j > creating c:\temp\tmp\tmphyln9j\src.win32-2.6 > Reading fortran codes... > Reading file 'dummy.f' (format:fix,strict) > Post-processing... > Block: dummy > Block: hello > Post-processing (stage 2)... > Building modules... > Building module "dummy"... > Constructing wrapper function "hello"... > hello() > Wrote C/API module "dummy" to file "c:\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.c" > adding 'c:\temp\tmp\tmphyln9j\src.win32-2.6\fortranobject.c' to sources. > adding 'c:\temp\tmp\tmphyln9j\src.win32-2.6' to include_dirs. > copying C:\Python26\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\temp\tmp\tmphyln9j\src.win32-2.6 > copying C:\Python26\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\temp\tmp\tmphyln9j\src.win32-2.6 > build_src: building npy-pkg config files > running build_ext > No module named msvccompiler in numpy.distutils; trying from distutils > customize MSVCCompiler > customize MSVCCompiler using build_ext > customize GnuFCompiler > Could not locate executable g77 > Could not locate executable f77 > customize IntelVisualFCompiler > Found executable C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe > Found executable C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe > customize IntelVisualFCompiler > customize IntelVisualFCompiler using build_ext > building 'dummy' extension > compiling C sources > creating c:\temp\tmp\tmphyln9j\Release > creating c:\temp\tmp\tmphyln9j\Release\temp > creating c:\temp\tmp\tmphyln9j\Release\temp\tmp > creating c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j > creating c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6 > c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\temp\tmp\tmphyl n9j\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC /Tcc:\te mp\tmp\tmphyln9j\src.win32-2.6\dummymodule.c /Foc:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy module.obj > Found executable c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe > c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ic:\temp\tmp\tmphyl n9j\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC /Tcc:\te mp\tmp\tmphyln9j\src.win32-2.6\fortranobject.c /Foc:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\fortranobject.obj > compiling Fortran sources > Fortran f77 compiler: C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe -FI -w90 -w95 /F6000000 /fpe:3 /Qo penmp /w /I:C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /O1 /G7 /QaxW /QaxM > Fortran f90 compiler: C:\Program 
Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe -FR /F6000000 /fpe:3 /Qopenmp /w / I:C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /F6000000 /fpe:3 /Qopenmp /w /I:C:\Program Files\ VNI\imsl\fnl700\winin111i32\include\dll /nologo /O1 /G7 /QaxW /QaxM > Fortran fix compiler: C:\Program Files\Intel\Compiler\11.1\067\Bin\ia32\ifort.exe -FI -4L72 -w /F6000000 /fpe:3 /Qop enmp /w /I:C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /F6000000 /fpe:3 /Qopenmp /w /I:C:\Progr am Files\VNI\imsl\fnl700\winin111i32\include\dll /nologo /O1 /G7 /QaxW /QaxM > compile options: '-Ic:\temp\tmp\tmphyln9j\src.win32-2.6 -IC:\Python26\lib\site-packages\numpy\core\include -IC:\Python26\include -IC:\Python26\PC -c' > ifort.exe:f77: dummy.f > c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild /EXPORT:initdummy c:\temp\tmp\tm phyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.obj c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\sr c.win32-2.6\fortranobject.obj c:\temp\tmp\tmphyln9j\Release\dummy.o /OUT:.\dummy.pyd /IMPLIB:c:\temp\tmp\tmphyln9j\R elease\temp\tmp\tmphyln9j\src.win32-2.6\dummy.lib /MANIFESTFILE:c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src .win32-2.6\dummy.pyd.manifest > Found executable c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe > LIBCMT.lib(_file.obj) : error LNK2005: ___iob_func already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(crt0dat.obj) : error LNK2005: __amsg_exit already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(crt0dat.obj) : error LNK2005: __initterm_e already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(crtheap.obj) : error LNK2005: __malloc_crt already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_a already defined in MSVCRT.lib(cinitexe.obj) > LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_z already defined in MSVCRT.lib(cinitexe.obj) > LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_a already defined in MSVCRT.lib(cinitexe.obj) > LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_z already defined in MSVCRT.lib(cinitexe.obj) > LIBCMT.lib(winxfltr.obj) : error LNK2005: ___CppXcptFilter already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(tidtable.obj) : error LNK2005: __encode_pointer already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(tidtable.obj) : error LNK2005: __encoded_null already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(tidtable.obj) : error LNK2005: __decode_pointer already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(mlock.obj) : error LNK2005: __unlock already defined in MSVCRT.lib(MSVCR90.dll) > LIBCMT.lib(mlock.obj) : error LNK2005: __lock already defined in MSVCRT.lib(MSVCR90.dll) > Creating library c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy.lib and object c:\temp\tmp\ tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy.exp > LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library > LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library > dummymodule.obj : error LNK2001: unresolved external symbol _hello_ > libifcoremt.lib(for_main.obj) : error LNK2019: unresolved external symbol _MAIN__ referenced in function _main > .\dummy.pyd : fatal error LNK1120: 2 unresolved externals > error: Command "c:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo 
/INCREMENTAL:NO /LIBPATH:C :\Python26\libs /LIBPATH:C:\Python26\PCbuild /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild /EXPORT:initdumm y c:\temp\tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummymodule.obj c:\temp\tmp\tmphyln9j\Release\temp\ tmp\tmphyln9j\src.win32-2.6\fortranobject.obj c:\temp\tmp\tmphyln9j\Release\dummy.o /OUT:.\dummy.pyd /IMPLIB:c:\temp \tmp\tmphyln9j\Release\temp\tmp\tmphyln9j\src.win32-2.6\dummy.lib /MANIFESTFILE:c:\temp\tmp\tmphyln9j\Release\temp\t mp\tmphyln9j\src.win32-2.6\dummy.pyd.manifest" failed with exit status 1120 > > Try the fix from . Your ifort.exe options are strange. At least '/MD' and '/Qlowercase' are missing. These are usually set by numpy.distutils. Your example works for me. Christoph From klonuo at gmail.com Tue May 31 21:02:23 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 01 Jun 2011 03:02:23 +0200 Subject: [SciPy-User] f2py on Windows with MSVC9 In-Reply-To: <4DE58A34.2000600@uci.edu> References: <20110601021659.3596.B1C76292@gmail.com> <4DE58A34.2000600@uci.edu> Message-ID: <20110601030220.3599.B1C76292@gmail.com> > Try the fix from > . > > Your ifort.exe options are strange. At least '/MD' and '/Qlowercase' are > missing. These are usually set by numpy.distutils. Thank you gentleman I didn't change any flags, thou Fortran Numerical Library package did set some envs: C:\>set f ===================================================================== F90=ifort F90FLAGS=/F6000000 /fpe:3 /Qopenmp /w /I:"C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll" /nologo FC=ifort FFLAGS=/F6000000 /fpe:3 /Qopenmp /w /I:"C:\Program Files\VNI\imsl\fnl700\winin111i32\include\dll" /nologo FNL_COMPILER_VERSION=Intel(R) Visual Fortran Compiler for applications running on IA-32, Version 11.1 FNL_DIR=C:\Program Files\VNI\imsl\fnl700 FNL_EXAMPLES=C:\Program Files\VNI\imsl\fnl700\winin111i32\examples FNL_OS_VERSION=Microsoft Windows 32-bit FNL_VERSION=7.0.0 FP_NO_HOST_CHECK=NO --------------------------------------------------------------------- I reseted those flags and all is fine now Have a nice day :)
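For the archives, a short sketch of the workaround that ended this thread: dropping the Fortran flags exported by the IMSL/FNL environment (the FFLAGS/F90FLAGS/FC/F90 variables shown in the "set f" listing above) so that numpy.distutils can pass its own options, including /MD, to ifort. Driving f2py through subprocess is just one way to script this; the dummy module and hello() subroutine are the ones from the example above:

import os
import subprocess

# flags inherited from the IMSL/FNL setup override what numpy.distutils
# would normally hand to ifort, so clear them before building
for var in ('FFLAGS', 'F90FLAGS', 'FC', 'F90'):
    os.environ.pop(var, None)

# rebuild the wrapper and try it out
subprocess.check_call('f2py -c -m dummy dummy.f', shell=True)

import dummy
dummy.hello()

If the build still picks up odd compiler options, running "f2py -c --help-fcompiler" shows which Fortran compiler and flags numpy.distutils has detected.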