From denis at laxalde.org Sat Sep 1 06:11:15 2012
From: denis at laxalde.org (Denis Laxalde)
Date: Sat, 01 Sep 2012 12:11:15 +0200
Subject: [SciPy-User] possible bug in scipy.optimize.newton_krylov
In-Reply-To: References: Message-ID: <5041DF43.4090803@laxalde.org>

Matt Chan wrote:
> I've pasted the stack trace below. Is there something I am doing
> incorrectly?
>
> Traceback (most recent call last):
>   File "", line 758, in
>     x_star = op.newton_krylov(my_fn, x0, method='lgmres', inner_tol=1e-8)
>   File "", line 8, in newton_krylov
>   File "/usr/lib64/python2.7/site-packages/scipy/optimize/nonlin.py", line 294, in nonlin_solve
>     dx = -jacobian.solve(Fx, tol=tol)
>   File "/usr/lib64/python2.7/site-packages/scipy/optimize/nonlin.py", line 1394, in solve
>     sol, info = self.method(self.op, rhs, tol=tol, **self.method_kw)
> TypeError: lgmres() got multiple values for keyword argument 'tol'

It is a bug, to be fixed by https://github.com/scipy/scipy/pull/301. Thanks for reporting,

Denis
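(For anyone hitting the same TypeError before the fix is released: the traceback shows that inner_tol is forwarded to the inner solver via method_kw on top of the tol that nonlin_solve already passes. A minimal sketch of a workaround, with a hypothetical residual function standing in for the real one:)

import numpy as np
import scipy.optimize as op

def my_fn(x):
    # hypothetical stand-in for the real residual function
    return np.cos(x) - x

x0 = np.zeros(3)

# raises "TypeError: lgmres() got multiple values for keyword argument 'tol'"
# on affected versions:
#     x_star = op.newton_krylov(my_fn, x0, method='lgmres', inner_tol=1e-8)

# until the fix lands, simply drop inner_tol so that only the tolerance
# computed by nonlin_solve reaches the inner lgmres solver:
x_star = op.newton_krylov(my_fn, x0, method='lgmres')

From alan at ajackson.org Sat Sep 1 22:53:01 2012
From: alan at ajackson.org (Alan Jackson)
Date: Sat, 1 Sep 2012 21:53:01 -0500
Subject: [SciPy-User] [ANN] guiqwt v2.2.0
In-Reply-To: References: Message-ID: <20120901215301.553411b6@ajackson.org>

I looked through the license and I couldn't really understand it very well. Is this license a variant of BSD or GPL?

On Fri, 31 Aug 2012 17:39:40 +0200 wrote:
> Hi all,
>
> I am pleased to announce that `guiqwt` v2.2.0 has been released (http://guiqwt.googlecode.com).
>
> Based on PyQwt (plotting widgets for PyQt4 graphical user interfaces) and on the scientific modules NumPy and SciPy, guiqwt is a Python library providing efficient 2D data-plotting features (curve/image visualization and related tools) for interactive computing and signal/image processing application development.
>
> The Mercurial repository is now publicly available here:
> http://code.google.com/p/guiqwt/source/checkout
>
> Complete change log is available here:
> http://code.google.com/p/guiqwt/wiki/ChangeLog
>
> Documentation with examples, API reference, etc.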
> is available here:
> http://packages.python.org/guiqwt/
>
> This version of `guiqwt` includes a demo software, Sift (for Signal and Image Filtering Tool), based on `guidata` and `guiqwt`:
> http://packages.python.org/guiqwt/sift.html
> Windows users may even download the portable version of Sift 0.2.6 to test it without having to install anything:
> http://code.google.com/p/guiqwt/downloads/detail?name=sift-0.2.6-guiqwt-2.2-win32.zip
>
> When compared to the excellent module `matplotlib`, the main advantages of `guiqwt` are:
> * Performance: see http://packages.python.org/guiqwt/overview.html#performances
> * Interactivity: see for example http://packages.python.org/guiqwt/_images/plot.png
> * Powerful signal processing tools: see for example http://packages.python.org/guiqwt/_images/fit.png
> * Powerful image processing tools:
>   * Real-time contrast adjustment: http://packages.python.org/guiqwt/_images/contrast.png
>   * Cross sections (line/column, averaged and oblique cross sections!): http://packages.python.org/guiqwt/_images/cross_section.png
>   * Arbitrary affine transforms on images: http://packages.python.org/guiqwt/_images/transform.png
>   * Interactive filters: http://packages.python.org/guiqwt/_images/imagefilter.png
>   * Geometrical shapes/Measurement tools: http://packages.python.org/guiqwt/_images/image_plot_tools.png
>   * Perfect integration of `guidata` features for image data editing: http://packages.python.org/guiqwt/_images/simple_window.png
>
> But `guiqwt` is more than a plotting library; it also provides:
> * Framework for signal/image processing application development: see http://packages.python.org/guiqwt/examples.html
> * And many other features like making executable Windows programs easily (py2exe helpers): see http://packages.python.org/guiqwt/disthelpers.html
>
> guiqwt has been successfully tested on GNU/Linux and Windows platforms.
>
> Python package index page:
> http://pypi.python.org/pypi/guiqwt/
>
> Documentation, screenshots:
> http://packages.python.org/guiqwt/
>
> Downloads (source + Python(x,y) plugin):
> http://guiqwt.googlecode.com
>
> --
> Dr. Pierre Raybaut
> CEA - Commissariat à l'Energie Atomique et aux Energies Alternatives
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
-----------------------------------------------------------------------
| Alan K. Jackson        | To see a World in a Grain of Sand       |
| alan at ajackson.org      | And a Heaven in a Wild Flower,          |
| www.ajackson.org       | Hold Infinity in the palm of your hand  |
| Houston, Texas         | And Eternity in an hour. - Blake        |
-----------------------------------------------------------------------

From tmp50 at ukr.net Sun Sep 2 15:16:58 2012
From: tmp50 at ukr.net (Dmitrey)
Date: Sun, 02 Sep 2012 22:16:58 +0300
Subject: [SciPy-User] [ANN] New free tool for TSP solving
Message-ID: <30653.1346613418.12034604965145542656@ffe12.ukr.net>

Hi all,
A new free tool for TSP solving is available (for downloading as well) - the OpenOpt TSP class: TSP (traveling salesman problem).
It is written in Python, uses NetworkX graphs on input (another BSD-licensed Python library, the de facto standard graph library for Python programmers), can connect to MILP solvers like glpk, cplex and lpsolve, and has a couple of other solvers - sa (simulated annealing, Python code by John Montgomery) and interalg. If someone is interested, I could implement something from (or beyond) its future plans before the next OpenOpt stable release 0.41, due two weeks from now (Sept 15).

Regards, D.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From niki.spahiev at gmail.com Mon Sep 3 06:57:26 2012
From: niki.spahiev at gmail.com (Niki Spahiev)
Date: Mon, 03 Sep 2012 13:57:26 +0300
Subject: [SciPy-User] [ANN] New free tool for TSP solving
In-Reply-To: <30653.1346613418.12034604965145542656@ffe12.ukr.net>
References: <30653.1346613418.12034604965145542656@ffe12.ukr.net>
Message-ID:

> New free tool for TSP solving is available (for downloading as well) -
> OpenOpt TSP class: TSP (traveling salesman problem).

Hello Dmitrey,

Can this tool solve ATSP problems?

Thanks, Niki

From tmp50 at ukr.net Mon Sep 3 09:32:29 2012
From: tmp50 at ukr.net (Dmitrey)
Date: Mon, 03 Sep 2012 16:32:29 +0300
Subject: [SciPy-User] [ANN] New free tool for TSP solving
In-Reply-To: References: <30653.1346613418.12034604965145542656@ffe12.ukr.net>
Message-ID: <49578.1346679149.16454580772003840000@ffe12.ukr.net>

--- Original message ---
From: "Niki Spahiev"
To: scipy-user at scipy.org
Date: 3 September 2012, 13:57:49
Subject: Re: [SciPy-User] [ANN] New free tool for TSP solving

> > New free tool for TSP solving is available (for downloading as well) -
> > OpenOpt TSP class: TSP (traveling salesman problem).
>
> Hello Dmitrey,
> Can this tool solve ATSP problems?
> Thanks, Niki

Hi, yes - asymmetric (see the examples with networkx DiGraph), including multigraphs (networkx MultiDiGraph) as well.

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
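(For readers who want to try the asymmetric case, the input side is plain networkx; the OpenOpt call below is only sketched from the announcement - class and solver names as advertised, exact keywords unverified:)

import networkx as nx

# asymmetric costs: use a DiGraph, as suggested above
G = nx.DiGraph()
G.add_edge('A', 'B', weight=2.0)
G.add_edge('B', 'A', weight=5.0)  # the reverse direction costs more
G.add_edge('B', 'C', weight=1.0)
G.add_edge('C', 'B', weight=4.0)
G.add_edge('C', 'A', weight=3.0)
G.add_edge('A', 'C', weight=6.0)

# hypothetical OpenOpt usage, per the announcement (unverified):
# from openopt import TSP
# p = TSP(G)
# r = p.solve('sa')  # or 'glpk', 'cplex', 'lpsolve', 'interalg'

From denis-bz-gg at t-online.de Mon Sep 3 12:59:56 2012
From: denis-bz-gg at t-online.de (denis)
Date: Mon, 03 Sep 2012 18:59:56 +0200
Subject: [SciPy-User] looking for real testcases for Nelder-Mead fmin
Message-ID: <5044E20C.70209@t-online.de>

Folks,
I'm looking for real or realistic testcases for Nelder-Mead minimization of noisy functions, 2d to 10d or so, unconstrained or box constraints, preferably not sum-of-squares and not Rosenbrock et al., to wring out a new implementation that has restarts and verbose output. (Would like to discuss ways to restart too, but more ideas than test functions => never converge.)
First-cut doc is under http://htmlpreview.github.com/?https://github.com/denis-bz/nelder-mead/blob/master/doc/Nelder-Mead-py.html
(if anyone knows how to display html directly, please let me know, git noob.)

Thanks, cheers
-- denis

(For anyone unsure what kind of objective is meant, a synthetic sketch of a noisy function run through scipy's existing Nelder-Mead - synthetic, so not itself an answer to the request for *real* testcases:)

import numpy as np
from scipy.optimize import fmin  # scipy's Nelder-Mead implementation

def noisy_quadratic(x):
    # smooth bowl plus additive evaluation noise, minimum near the origin
    return np.sum(x ** 2) + 0.01 * np.random.randn()

x0 = np.ones(4)
xmin = fmin(noisy_quadratic, x0, xtol=1e-4, ftol=1e-4)

From Jerome.Kieffer at esrf.fr Mon Sep 3 15:04:08 2012
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Mon, 3 Sep 2012 21:04:08 +0200
Subject: [SciPy-User] [ANN] guiqwt v2.2.0
In-Reply-To: <20120901215301.553411b6@ajackson.org>
References: <20120901215301.553411b6@ajackson.org>
Message-ID: <20120903210408.85a5a9ec.Jerome.Kieffer@esrf.fr>

On Sat, 1 Sep 2012 21:53:01 -0500 Alan Jackson wrote:
> I looked through the license and I couldn't really understand it very
> well. Is this license a variant of BSD or GPL?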
GPL. In fact, it's an adaptation of the GPL to French law (translated back to English).

--
Jérôme Kieffer
Data analysis unit - ESRF

From helmrp at yahoo.com Mon Sep 3 22:14:56 2012
From: helmrp at yahoo.com (The Helmbolds)
Date: Mon, 3 Sep 2012 19:14:56 -0700 (PDT)
Subject: [SciPy-User] Naming Ideas
Message-ID: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>

Names are extremely important. To steal from Mark Twain, the difference between the best name and a poorer name is the difference between the lightning and the lightning bug. A good name captures attention, stimulates interest, succinctly describes the thing named, and has a positive connotation. Accordingly, time spent choosing a great name is well worth it.

Now, SciPy is a perfectly good name. It's well-established and widely known and respected. I urge that it be retained and used exactly the way it is currently defined. If there is a need for a name that includes more than just SciPy, then I suggest we consider something like the following:

MatSysPy, pronounced "mat-sis-pie". Short for "mathematical system for advanced, large-scale scientific, industrial, medical and engineering computation and graphical display." MatSysPy currently includes the powerful and versatile NumPy, SciPy, and MatPlotLib modules. Their capabilities are constantly being expanded and improved, and other modules may be added later. These stand-alone modules are carefully designed to take full advantage of Python's popular programming and scripting language. This enables users to easily invoke those module capabilities best suited to their challenging, large-scale computational and graphical display tasks. As of this writing, a unified overall package providing a fully integrated user interface to these modules' vast range of capabilities is in the planning stage. [or "A unified overall package providing a fully integrated user interface to these modules' vast range of capabilities is under consideration.", whichever is most accurate]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From odonnems at yahoo.com Tue Sep 4 10:53:26 2012
From: odonnems at yahoo.com (Michael ODonnell)
Date: Tue, 4 Sep 2012 07:53:26 -0700 (PDT)
Subject: [SciPy-User] inline weave, windows OS, and setup
Message-ID: <1346770406.78315.YahooMailNeo@web161702.mail.bf1.yahoo.com>

I would like to use the scipy weave inline and gcc compiler in a Windows 7 environment. I have tried installing cygwin, but the gcc compiler is not properly invoked from weave because Windows 7 does not support symlinks. If I direct the compiler to gcc-3.exe or 4, I get a different error. As far as I can tell from the weave documentation, I need mingw32 2.95.2, but the documentation does not seem up to date.

Is anyone using weave on Windows successfully, and if so, do you have any suggestions on setting this up? I used weave successfully about 5-7 years ago, but it has been some time and I cannot get it to run again.

thank you for your help,
mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 275438859 at qq.com Tue Sep 4 11:06:12 2012
From: 275438859 at qq.com (=?gb18030?B?0MTI59byueI=?=)
Date: Tue, 4 Sep 2012 23:06:12 +0800
Subject: [SciPy-User] encounter error while it's testing
Message-ID:

Hi, everybody. I have installed scipy with the command "pip install scipy". But I encountered an error while testing. BTW, the testing of numpy is OK.
I input:

>>> import scipy
>>> scipy.test()

And this is the response:

Running unit tests for scipy
NumPy version 1.8.0.dev-e60c70d
NumPy is installed in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy
SciPy version 0.12.0.dev-858610f
SciPy is installed in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy
Python version 2.7.3 (default, Apr 19 2012, 00:55:09) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]
nose version 1.1.2
.....................................................................................................................................................................................F.FFPython(334,0x7fff7d1a2960) malloc: *** error for object 0x7fa0544e7f28: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Tue Sep 4 14:41:07 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 4 Sep 2012 14:41:07 -0400
Subject: [SciPy-User] docs: properties of special functions
Message-ID:

The scipy.special special function docstrings are very skinny. What's a good source for quickly finding the properties of the special functions?

---

For example, the distributions make heavy use of the special functions, but I'm finding derivatives only by accident.

What's the derivative of i0? Answer: ask the mpmath documentation.

>>> mp.besseli(0, 1, derivative=0)
mpf('1.2660658777520084')
>>> mp.besseli(0, 1, derivative=1)
mpf('0.56515910399248503')
>>> special.i0(1)
1.2660658777520082
>>> special.i1(1)
0.56515910399248515

What's the derivative of gamma and loggamma? Ask sympy.

>>> import sympy as sy
>>> sy.diff(sy.gamma(a), a)
gamma(a)*polygamma(0, a)
>>> sy.diff(sy.loggamma(a), a)   # unknown derivative?
D(loggamma(a), a)
>>> import statsmodels.tools.numdiff as nd
>>> nd.approx_fprime_cs(np.array([5.]), special.gammaln).item()
1.5061176684318003
>>> special.polygamma(0, 5)
array(1.5061176684318003)

But going through the list in scipy.special, I actually find "derivative of the logarithm of the gamma function":

>>> special.psi(5)
1.5061176684318003

Josef
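(Both identities above are easy to sanity-check numerically with scipy alone; a quick central-difference sketch, step size chosen loosely:)

import numpy as np
from scipy import special

x, h = 1.0, 1e-6
# d/dx I0(x) should equal I1(x)
print (special.i0(x + h) - special.i0(x - h)) / (2 * h), special.i1(x)
# d/dx gammaln(x) should equal psi(x), the digamma function
print (special.gammaln(x + h) - special.gammaln(x - h)) / (2 * h), special.psi(x)

From rainexpected at theo.to Tue Sep 4 15:06:18 2012
From: rainexpected at theo.to (Ted To)
Date: Tue, 04 Sep 2012 15:06:18 -0400
Subject: [SciPy-User] scipy.optimize.fixed_point -- TypeError: can't multiply sequence by non-int of type 'float'
Message-ID: <5046512A.5040406@theo.to>

Hi,

I'm having trouble figuring out what I'm doing wrong here. I have a function br(p,q) defined where p and q are lists of length 2, and I'm trying to compute the fixed point of br for a given q. E.g., fixed_point(br,x0=p,args=(q,)). The problem is that I get a TypeError (the Traceback at the end of this message). I don't know if this could be the cause or not, but br is defined as the argmin of another function which I've defined (the code is currently pretty ugly but you can find it at http://pastebin.com/rEc6cKfd).

Any help or suggestions will be much appreciated.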
Thanks,
Ted To

TypeError                                 Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
    176     else:
    177         filename = fname
--> 178     __builtin__.execfile(filename, *where)

/home/ted/texfiles/smoking/quality-computations/quality-provision-equilibrium.py in ()
     66     return fixed_point(br,x0=p,args=(q,))
     67
---> 68 print eqP(p,q)
     69
     70 def gstar(q):

/home/ted/texfiles/smoking/quality-computations/quality-provision-equilibrium.py in eqP(p, q)
     63
     64 def eqP(p,q):
---> 65     print fixed_point(br,x0=p,args=(q,))
     66     return fixed_point(br,x0=p,args=(q,))
     67

/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.pyc in fixed_point(func, x0, args, xtol, maxiter)
    509     p1 = func(p0, *args)
    510     p2 = func(p1, *args)
--> 511     d = p2 - 2.0 * p1 + p0
    512     p = where(d == 0, p2, p0 - (p1 - p0)*(p1 - p0) / d)
    513     relerr = where(p0 == 0, p, (p-p0)/p0)

TypeError: can't multiply sequence by non-int of type 'float'

From dg.gmane at thesamovar.net Tue Sep 4 15:31:43 2012
From: dg.gmane at thesamovar.net (Dan Goodman)
Date: Tue, 04 Sep 2012 21:31:43 +0200
Subject: [SciPy-User] inline weave, windows OS, and setup
In-Reply-To: <1346770406.78315.YahooMailNeo@web161702.mail.bf1.yahoo.com>
References: <1346770406.78315.YahooMailNeo@web161702.mail.bf1.yahoo.com>
Message-ID: <5046571F.9070002@thesamovar.net>

I installed the Python(x,y) distribution on Windows 7 and weave.inline works fine with the bundled mingw. It's only the 32-bit version of Python, though.

Dan

On 04/09/2012 16:53, Michael ODonnell wrote:
> I would like to use the scipy weave inline and gcc compiler on a windows
> 7 environment. I have tried installing cygwin, but the gcc compiler is
> not properly invoked from weave because windows 7 does not support
> symlinks. If I direct the compiler to gcc-3.exe or 4, I get a different
> error. As far as I can tell from the weave documentation, I need mingw32
> 2.95.2, but the documentation does not seem up to date.
>
> Is anyone using weave on windows successful and if so do you have any
> suggestions on setting this up? I used weave successfully about 5-7
> years ago but it has been some time and I cannot get it to run again.
>
> thank you for your help, mike
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
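(A minimal smoke test for such a setup - the values are arbitrary; the point is just whether weave.inline can locate a working compiler:)

from scipy import weave

a, b = 3, 4
# compiler='gcc' forces the mingw/gcc toolchain; omit it to use the default
result = weave.inline("return_val = a + b;", ['a', 'b'], compiler='gcc')
print result  # should print 7

From dave.hirschfeld at gmail.com Tue Sep 4 15:33:20 2012
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Tue, 4 Sep 2012 19:33:20 +0000 (UTC)
Subject: [SciPy-User] Wrapping MKL Functions
Message-ID:

I'm cross-posting to scipy-user since I know there are people familiar with both cython and the MKL here, and I'm hoping someone can easily see what I'm doing wrong. I've tried my hand at wrapping the MKL axpby function, for both float and complex types.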
For float types I was successful:

In [10]: from numpy.random import randn
   ...: a = 2
   ...: x = randn(1e6)
   ...: b = 3
   ...: y = randn(1e6)

In [11]: np.allclose(a*x+b*y, axpby(a, x, b, y))
Out[11]: True

In [42]: %timeit a*x+b*y
10 loops, best of 3: 20.5 ms per loop

In [43]: %timeit axpby(a, x, b, y)
100 loops, best of 3: 8.86 ms per loop

But for complex types I got the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
----> 1 axpby(a,x,b,y)

a0ed.pyd in a0ed.axpby (a0ed.c:2674)()

ValueError: No value specified for struct attribute 'real'

I'm assuming that's because I'm passing in a pointer to a np.complex64_t type, which has a different layout to the MKL_Complex16 struct, but I'm not quite sure how to work around that, so I'd be keen to find out if anyone knows. Also, any pointers on best practice with respect to my function below would be greatly appreciated. I went down the route of having a (higher performance?) fused-type cpdef function, which AFAICT requires all inputs to be the same type. I then have a wrapper cdef function which will (up-)cast the inputs if required.

The mkl typedefs are:

/* MKL Complex type for single precision */
#ifndef MKL_Complex8
typedef struct _MKL_Complex8 {
    float real;
    float imag;
} MKL_Complex8;
#endif

/* MKL Complex type for double precision */
#ifndef MKL_Complex16
typedef struct _MKL_Complex16 {
    double real;
    double imag;
} MKL_Complex16;
#endif

Regards,
Dave

%%cython --force -I=C:\dev\bin\Intel\ComposerXE-2011\mkl\include -l=C:\dev\bin\Intel\ComposerXE-2011\mkl\lib\ia32\mkl_rt -lC:\dev\bin\Intel\ComposerXE-2011\mkl\lib\ia32\libiomp5md
cimport cython
from cpython cimport bool
import numpy as np
cimport numpy as np

ctypedef np.int8_t int8_t
ctypedef np.int32_t int32_t
ctypedef np.int64_t int64_t
ctypedef np.float32_t float32_t
ctypedef np.float64_t float64_t
ctypedef np.complex_t complex32_t
ctypedef np.complex64_t complex64_t

cdef extern from "mkl.h" nogil:
    ctypedef struct MKL_Complex8:
        float32_t real
        float32_t imag
    ctypedef struct MKL_Complex16:
        float64_t real
        float64_t imag
#
ctypedef fused mkl_float:
    float32_t
    float64_t
    MKL_Complex8
    MKL_Complex16
#
cdef extern from * nogil:
    ctypedef int32_t const_mkl_int "const int32_t"
    ctypedef float32_t const_float32 "const float32_t"
    ctypedef float64_t const_float64 "const float64_t"
    ctypedef MKL_Complex8* const_complex32_ptr "const MKL_Complex8*"
    ctypedef MKL_Complex16* const_complex64_ptr "const MKL_Complex16*"
#
cdef extern from "mkl.h" nogil:
    void saxpby(const_mkl_int *size, const_float32 *a, const_float32 *x,
                const_mkl_int *xstride, const_float32 *b, const_float32 *y,
                const_mkl_int *ystride)
    #
    void daxpby(const_mkl_int *size, const_float64 *a, const_float64 *x,
                const_mkl_int *xstride, const_float64 *b, const_float64 *y,
                const_mkl_int *ystride)
    #
    void caxpby(const_mkl_int *size, const_complex32_ptr a, const_complex32_ptr x,
                const_mkl_int *xstride, const_complex32_ptr b, const_complex32_ptr y,
                const_mkl_int *ystride)
    #
    void zaxpby(const_mkl_int *size, const_complex64_ptr a, const_complex64_ptr x,
                const_mkl_int *xstride, const_complex64_ptr b, const_complex64_ptr y,
                const_mkl_int *ystride)
#
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cpdef _axpby(mkl_float a, mkl_float[:] x, mkl_float b, mkl_float[:] y):
    cdef int32_t size = x.shape[0]
    cdef int32_t xstride = x.strides[0]/x.itemsize
    cdef int32_t ystride = y.strides[0]/y.itemsize
    if mkl_float is float32_t:
        saxpby(&size, &a, &x[0], &xstride, &b, &y[0], &ystride)
    elif mkl_float is float64_t:
        daxpby(&size, &a, &x[0], &xstride, &b, &y[0], &ystride)
    elif mkl_float is MKL_Complex8:
        caxpby(&size, &a, &x[0], &xstride, &b, &y[0], &ystride)
    elif mkl_float is MKL_Complex16:
        zaxpby(&size, &a, &x[0], &xstride, &b, &y[0], &ystride)
#
def axpby(a, x, b, y, bool overwrite_y=False):
    if (type(a) == np.complex64) or (type(b) == np.complex64) or \
       (x.dtype.type == np.complex64) or (y.dtype.type == np.complex64):
        x = np.asarray(x, dtype=np.complex64)
        y = np.array(y, dtype=np.complex64, copy=~overwrite_y)
        a = np.complex64(a)
        b = np.complex64(b)
        _axpby[MKL_Complex16](a, x, b, y)
    elif (type(a) == np.complex) or (type(b) == np.complex) or \
         (x.dtype.type == np.complex) or (y.dtype.type == np.complex):
        x = np.asarray(x, dtype=np.complex)
        y = np.array(y, dtype=np.complex, copy=~overwrite_y)
        a = np.complex(a)
        b = np.complex(b)
        _axpby[MKL_Complex8](a, x, b, y)
    elif (x.dtype.type == np.float64) or (y.dtype.type == np.float64):
        x = np.asarray(x, dtype=np.float64)
        y = np.array(y, dtype=np.float64, copy=~overwrite_y)
        a = np.float64(a)
        b = np.float64(b)
        _axpby[float64_t](a, x, b, y)
    elif (x.dtype.type == np.float32) or (y.dtype.type == np.float32):
        x = np.asarray(x, dtype=np.float32)
        y = np.array(y, dtype=np.float32, copy=~overwrite_y)
        a = np.float32(a)
        b = np.float32(b)
        _axpby[float32_t](a, x, b, y)
    else:
        raise Exception()
    return y
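(One possible direction, offered only as an untested sketch: numpy's complex128 scalars and buffers share the memory layout of MKL_Complex16 - two contiguous doubles - so the double-complex path can accept ordinary double complex memoryviews and cast pointers at the call site. That sidesteps asking Cython to build the MKL struct from a Python complex, which looks like the source of the "No value specified for struct attribute 'real'" error:)

# untested sketch, same Cython cell as above assumed
cdef _zaxpby(double complex a, double complex[:] x,
             double complex b, double complex[:] y):
    cdef int32_t size = x.shape[0]
    cdef int32_t xstride = x.strides[0] / x.itemsize
    cdef int32_t ystride = y.strides[0] / y.itemsize
    # reinterpret the complex128 data as MKL_Complex16* (same layout)
    zaxpby(&size, <MKL_Complex16*>&a, <MKL_Complex16*>&x[0], &xstride,
           <MKL_Complex16*>&b, <MKL_Complex16*>&y[0], &ystride)

From amueller at ais.uni-bonn.de Tue Sep 4 18:31:20 2012
From: amueller at ais.uni-bonn.de (Andreas Mueller)
Date: Tue, 04 Sep 2012 23:31:20 +0100
Subject: [SciPy-User] ANN: scikit-learn 0.12 Released
Message-ID: <50468138.6090000@ais.uni-bonn.de>

Dear fellow Pythonistas. I am pleased to announce the release of scikit-learn 0.12. This release adds several new features, for example multidimensional scaling (MDS), multi-task Lasso, and multi-output decision and regression forests. There has also been a lot of progress in documentation and ease of use. Details can be found on the what's new page.

Sources and windows binaries are available on sourceforge, through pypi (http://pypi.python.org/pypi/scikit-learn/0.12), or can be installed directly using pip:

pip install -U scikit-learn

I want to thank all of the developers who made this release possible and welcome our new contributors.

Keep on learning,
Andy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 275438859 at qq.com Wed Sep 5 03:50:18 2012
From: 275438859 at qq.com (=?gb18030?B?0MTI59byueI=?=)
Date: Wed, 5 Sep 2012 15:50:18 +0800
Subject: [SciPy-User] encounter error while it's testing
Message-ID:

Hi, everybody. I have installed scipy with the command "pip install scipy" (OSX Lion 10.7.4). But I encountered an error while testing. BTW, the test of numpy is OK.

I input:

>>> import scipy
>>> scipy.test()

And this is the response:

Running unit tests for scipy
NumPy version 1.8.0.dev-e60c70d
NumPy is installed in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy
SciPy version 0.12.0.dev-858610f
SciPy is installed in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy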
Python version 2.7.3 (default, Apr 19 2012, 00:55:09) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]
nose version 1.1.2
.....................................................................................................................................................................................F.FFPython(334,0x7fff7d1a2960) malloc: *** error for object 0x7fa0544e7f28: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From servant.mathieu at gmail.com Wed Sep 5 05:29:47 2012
From: servant.mathieu at gmail.com (servant mathieu)
Date: Wed, 5 Sep 2012 11:29:47 +0200
Subject: [SciPy-User] equivalent of R quantile function in scipy to compute percentiles?
Message-ID:

Dear Scipy users,

"quantile(x, probs)" is a simple function in the R statistical package for computing quantiles. Is there an equivalent of this function in scipy? Note that I want to compute quantiles, not vincentiles.

Cheers,
Mathieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From takowl at gmail.com Wed Sep 5 05:54:27 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Wed, 5 Sep 2012 10:54:27 +0100
Subject: [SciPy-User] Naming Ideas
In-Reply-To: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

On 4 September 2012 03:14, The Helmbolds wrote:
> If there is a need for a name that includes more than just SciPy, then I
> suggest we consider something like the following:
> MatSysPy, pronounced "mat-sis-pie".

Thanks, I guess you're referring to my recent thread about making a unified 'brand' for the scipy ecosystem? I agree that picking a good name is important.

To recap, names suggested so far:

Sciome - from the numfocus proposal (potential confusion with sciome.com)
PyScis - pronounced like pisces (potential confusion with PySCeS and PySci, two unrelated Python projects)
Scipy-base
Unipy
MatSysPy
Pyengine
Pycraft

List subscribers, what do you think? Does one of those names feel right to refer to the scipy/numpy/etc. stack? Would you like to rule any out? Or can you think of a better name yourself?

Thanks,
Thomas

From emmanuelle.gouillart at normalesup.org Wed Sep 5 06:56:04 2012
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Wed, 5 Sep 2012 12:56:04 +0200
Subject: [SciPy-User] equivalent of R quantile function in scipy to compute percentiles?
In-Reply-To: References: Message-ID: <20120905105603.GB12135@phare.normalesup.org>

Use scipy.stats.scoreatpercentile (quantiles evaluated from one realization) or, for a given distribution, the ppf (percent point function) method of the scipy.stats probability distributions, like:

>>> normal_dis = scipy.stats.norm()
>>> normal_dis.ppf(0.01)
-2.3263478740408408
>>> normal_dis.ppf(0.99)
2.3263478740408408

Emmanuelle

On Wed, Sep 05, 2012 at 11:29:47AM +0200, servant mathieu wrote:
> Dear Scipy users,
>
> "quantile(x, probs)" is a simple function in the R statistical package to
> compute quantiles. Is there an equivalent of this function in scipy? Note
> I want to compute quantiles, not vincentiles.
> Cheers,
> Mathieu
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
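(For the sample-based route mentioned first, a quick sketch - random data, so the printed value is only approximate:)

import numpy as np
from scipy import stats

x = np.random.randn(100000)
# empirical 99th percentile of the sample; should land near norm.ppf(0.99)
print stats.scoreatpercentile(x, 99)

From Jerome.Kieffer at esrf.fr Wed Sep 5 07:41:53 2012
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Wed, 5 Sep 2012 13:41:53 +0200
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID: <20120905134153.75a6b066.Jerome.Kieffer@esrf.fr>

Pierre Raybaut made a good choice with PythonXY...

--
Jérôme Kieffer
On-Line Data analysis / Software Group
ISDD / ESRF
tel +33 476 882 445

From josef.pktd at gmail.com Wed Sep 5 07:46:17 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 5 Sep 2012 07:46:17 -0400
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

On Wed, Sep 5, 2012 at 5:54 AM, Thomas Kluyver wrote:
> On 4 September 2012 03:14, The Helmbolds wrote:
>> If there is a need for a name that includes more than just SciPy, then I
>> suggest we consider something like the following:
>> MatSysPy, pronounced "mat-sis-pie".
>
> Thanks, I guess you're referring to my recent thread about making a
> unified 'brand' for the scipy ecosystem? I agree that picking a good
> name is important
>
> To recap, names suggested so far:
>
> Sciome - from the numfocus proposal (potential confusion with sciome.com)
> PyScis - pronounced like pisces (potential confusion with PySCeS and
> PySci, two unrelated Python projects)
> Scipy-base
> Unipy
> MatSysPy
> Pyengine
> Pycraft
>
> List subscribers, what do you think? Does one of those names feel
> right to refer to the scipy/numpy/etc. stack? Would you like to rule
> any out? Or can you think of a better name yourself?

I'm not a big fan of any.

just another one:

Sciotope or Scitope - the biotope for science (in python)
biotope has closer sounds in other languages than biome

Josef

From lists at hilboll.de Wed Sep 5 07:54:59 2012
From: lists at hilboll.de (Andreas Hilboll)
Date: Wed, 5 Sep 2012 13:54:59 +0200
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

> On Wed, Sep 5, 2012 at 5:54 AM, Thomas Kluyver wrote:
>> On 4 September 2012 03:14, The Helmbolds wrote:
>>> If there is a need for a name that includes more than just SciPy, then I
>>> suggest we consider something like the following:
>>> MatSysPy, pronounced "mat-sis-pie".
>>
>> Thanks, I guess you're referring to my recent thread about making a
>> unified 'brand' for the scipy ecosystem? I agree that picking a good
>> name is important
>>
>> To recap, names suggested so far:
>>
>> Sciome - from the numfocus proposal (potential confusion with sciome.com)
>> PyScis - pronounced like pisces (potential confusion with PySCeS and
>> PySci, two unrelated Python projects)
>> Scipy-base
>> Unipy
>> MatSysPy
>> Pyengine
>> Pycraft
>>
>> List subscribers, what do you think? Does one of those names feel
>> right to refer to the scipy/numpy/etc. stack? Would you like to rule
>> any out? Or can you think of a better name yourself?
>
> I'm not a big fan of any.
>
> just another one:
>
> Sciotope or Scitope the biotope for science (in python)
> biotope has closer sounds in other languages than biome

I like this one.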
Or even ScPyotope ... Andreas. From takowl at gmail.com Wed Sep 5 08:11:42 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 13:11:42 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On 5 September 2012 12:46, wrote: > Sciotope or Scitope the biotope for science (in python) > biotope has closer sounds in other languages than biome Thanks, Josef. I don't think 'biotope' is a familiar word in the English speaking world, though - I'm a biologist, and I had to look it up ;-) . Wikipedia suggests it's more common in German. Scitope either makes me think of topiary, or I hear it as 'cite-hope'. Andreas > I like this one. Or even ScPyotope ... My gut feeling is that four syllables is too many. While I'm discussing my own opinions, I'd vote against 'pyengine' (engine connotes more a dumb but powerful processing system) and 'matsyspy' (it sounds more specific, and the syllables don't roll off my tongue). I haven't yet formed a favourite, although it would probably be Pyscis if it wasn't for the confusingly similar names. Another possible vein of names is the snake theme - as in the Python IDE 'Boa Constructor'. Are any snakes particularly scientific? Perhaps the adder? ;-). Or there are Monty Python references - but the name will have to work for people who don't know the joke as well. Thanks, Thomas From alan.isaac at gmail.com Wed Sep 5 08:31:33 2012 From: alan.isaac at gmail.com (Alan G Isaac) Date: Wed, 05 Sep 2012 08:31:33 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <50474625.8060005@gmail.com> SciER (SciPy with Extra Resources; Scientific Emergency Room; etc) Evokes the French scier, "to saw". Pronounced like sire, "to beget". Cheers, Alan Isaac From warren.weckesser at enthought.com Wed Sep 5 08:33:05 2012 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 5 Sep 2012 07:33:05 -0500 Subject: [SciPy-User] scipy.optimize.fixed_point -- TypeError: can't multiply sequence by non-int of type 'float' In-Reply-To: <5046512A.5040406@theo.to> References: <5046512A.5040406@theo.to> Message-ID: On Tue, Sep 4, 2012 at 2:06 PM, Ted To wrote: > Hi, > > I'm having trouble figuring out what I'm doing wrong here. I have a > function br(p,q) defined where p and q are lists of length 2 and I'm > trying to compute the fixed point of br for a given q. E.g., > fixed_point(br,x0=p,args=(q,)). The problem is that I get a TypeError > (the Traceback at the end of this message). I don't know if this could > be the cause or not but br is defined as the argmin of another function > which I've defined (the code is currently pretty ugly but you can find > it at http://pastebin.com/rEc6cKfd). > > Any help or suggestions will be much appreciated. 
> Thanks,
> Ted To
>
> TypeError                                 Traceback (most recent call last)
> /usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in
> execfile(fname, *where)
>     176     else:
>     177         filename = fname
> --> 178     __builtin__.execfile(filename, *where)
>
> /home/ted/texfiles/smoking/quality-computations/quality-provision-equilibrium.py
> in ()
>      66     return fixed_point(br,x0=p,args=(q,))
>      67
> ---> 68 print eqP(p,q)
>      69
>      70 def gstar(q):
>
> /home/ted/texfiles/smoking/quality-computations/quality-provision-equilibrium.py
> in eqP(p, q)
>      63
>      64 def eqP(p,q):
> ---> 65     print fixed_point(br,x0=p,args=(q,))
>      66     return fixed_point(br,x0=p,args=(q,))
>      67
>
> /usr/lib/python2.7/dist-packages/scipy/optimize/minpack.pyc in
> fixed_point(func, x0, args, xtol, maxiter)
>     509     p1 = func(p0, *args)
>     510     p2 = func(p1, *args)
> --> 511     d = p2 - 2.0 * p1 + p0
>     512     p = where(d == 0, p2, p0 - (p1 - p0)*(p1 - p0) / d)
>     513     relerr = where(p0 == 0, p, (p-p0)/p0)
>
> TypeError: can't multiply sequence by non-int of type 'float'

I suspect your function is returning a list in some cases. In the multivariate case, the fixed_point function expects the return value of your function to be a numpy array. Modify your code to ensure that br always returns a numpy array, and see if that fixes the problem. (One option might be to pass in the parameters as arrays instead of lists, e.g. fixed_point(br, x0=p, args=(array(q),)), but that depends on how br is implemented.)

Warren

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
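(A toy illustration of the point, with a made-up two-player best-response map in place of the real br - the only essential detail is that the function returns a numpy array, not a list:)

import numpy as np
from scipy.optimize import fixed_point

def br(p, q):
    # made-up best response; note the numpy array return value
    return np.array([0.5 * (p[0] + q[0]), 0.5 * (p[1] + q[1])])

q = np.array([2.0, 3.0])
p0 = np.array([1.0, 1.0])
print fixed_point(br, p0, args=(q,))  # converges to [ 2.  3.]

From rainexpected at theo.to Wed Sep 5 08:37:44 2012
From: rainexpected at theo.to (Ted To)
Date: Wed, 05 Sep 2012 08:37:44 -0400
Subject: [SciPy-User] scipy.optimize.fixed_point -- TypeError: can't multiply sequence by non-int of type 'float'
In-Reply-To: References: <5046512A.5040406@theo.to>
Message-ID: <50474798.8040303@theo.to>

On 09/05/2012 08:33 AM, Warren Weckesser wrote:
> I suspect your function is returning a list in some cases. In the
> multivariate case, the fixed_point function expects the return value of
> your function to be a numpy array. Modify your code to ensure that br
> always returns a numpy array, and see if that fixes the problem. (One
> option might be to pass in the parameters as arrays instead of lists,
> e.g. fixed_point(br, x0=p, args=(array(q),)), but that depends on how br
> is implemented.)

Many thanks! That did the trick.

From warren.weckesser at enthought.com Wed Sep 5 08:37:58 2012
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 5 Sep 2012 07:37:58 -0500
Subject: [SciPy-User] scipy.optimize.fixed_point -- TypeError: can't multiply sequence by non-int of type 'float'
In-Reply-To: References: <5046512A.5040406@theo.to>
Message-ID:

On Wed, Sep 5, 2012 at 7:33 AM, Warren Weckesser <warren.weckesser at enthought.com> wrote:
> On Tue, Sep 4, 2012 at 2:06 PM, Ted To wrote:
>> Hi,
>>
>> I'm having trouble figuring out what I'm doing wrong here. I have a
>> function br(p,q) defined where p and q are lists of length 2 and I'm
>> trying to compute the fixed point of br for a given q. E.g.,
>> fixed_point(br,x0=p,args=(q,)). The problem is that I get a TypeError
>> (the Traceback at the end of this message).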
I don't know if this could >> be the cause or not but br is defined as the argmin of another function >> which I've defined (the code is currently pretty ugly but you can find >> it at http://pastebin.com/rEc6cKfd). >> >> Any help or suggestions will be much appreciated. >> >> Thanks, >> Ted To >> >> TypeError Traceback (most recent call >> last) >> /usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in >> execfile(fname, *where) >> 176 else: >> 177 filename = fname >> --> 178 __builtin__.execfile(filename, *where) >> >> >> /home/ted/texfiles/smoking/quality-computations/quality-provision-equilibrium.py >> in () >> 66 return fixed_point(br,x0=p,args=(q,)) >> 67 >> ---> 68 print eqP(p,q) >> 69 >> 70 def gstar(q): >> >> >> /home/ted/texfiles/smoking/quality-computations/quality-provision-equilibrium.py >> in eqP(p, q) >> 63 >> 64 def eqP(p,q): >> ---> 65 print fixed_point(br,x0=p,args=(q,)) >> 66 return fixed_point(br,x0=p,args=(q,)) >> 67 >> >> /usr/lib/python2.7/dist-packages/scipy/optimize/minpack.pyc in >> fixed_point(func, x0, args, xtol, maxiter) >> 509 p1 = func(p0, *args) >> 510 p2 = func(p1, *args) >> --> 511 d = p2 - 2.0 * p1 + p0 >> 512 p = where(d == 0, p2, p0 - (p1 - p0)*(p1 - p0) / d) >> 513 relerr = where(p0 == 0, p, (p-p0)/p0) >> >> TypeError: can't multiply sequence by non-int of type 'float' >> > > > I suspect your function is returning a list in some cases. > Now that I reread your email, I see there was no need for me to "suspect" this. In the link you provided, it shows that br returns [p0, p1]. Change the return value to `array([p0, p1])` (and add the appropriate import of `array`). Warren In the multivariate case, the fixed_point function expects the return > value of your function to be a numpy array. Modify your code to ensure > that br always return a numpy array, and see if that fixes the problem. > (One option might be to pass in the parameters as arrays instead of lists. > .e.g fixed_point(br, x0=p, args=(array(q),)), but that depends on how br is > implemented.) > > Warren > > > _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Wed Sep 5 09:01:24 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 14:01:24 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50474625.8060005@gmail.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <50474625.8060005@gmail.com> Message-ID: On 5 September 2012 13:31, Alan G Isaac wrote: > SciER > (SciPy with Extra Resources; Scientific Emergency Room; etc) > Evokes the French scier, "to saw". > Pronounced like sire, "to beget". Not bad - it looks like the only major use of that name so far is http://www.scier.eu/ - which doesn't seem very active or easy to confuse with our subject. Thomas From njs at pobox.com Wed Sep 5 09:05:29 2012 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 5 Sep 2012 14:05:29 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On Wed, Sep 5, 2012 at 10:54 AM, Thomas Kluyver wrote: > On 4 September 2012 03:14, The Helmbolds wrote: >> If there is a need for a name that includes more than just SciPy, then I >> suggest we consider something like the following: >> MatSysPy, pronounced "mat-sis-pie". 
> Thanks, I guess you're referring to my recent thread about making a
> unified 'brand' for the scipy ecosystem? I agree that picking a good
> name is important
>
> To recap, names suggested so far:
>
> Sciome - from the numfocus proposal (potential confusion with sciome.com)
> PyScis - pronounced like pisces (potential confusion with PySCeS and
> PySci, two unrelated Python projects)
> Scipy-base
> Unipy
> MatSysPy
> Pyengine
> Pycraft
>
> List subscribers, what do you think? Does one of those names feel
> right to refer to the scipy/numpy/etc. stack? Would you like to rule
> any out? Or can you think of a better name yourself?

I said something similar in the numfocus thread, but I'll throw it out again:

The only name that people seem to actually like for this is "PyLab". Of course the problem is that people liked it enough that it has existing usages (matplotlib.pylab, ipython -pylab, etc.) that AFAICT are now considered to be poorly put together and deprecated. But the actual meaning has always been pretty much what we're looking for here, and it has existing name recognition. Maybe we'd be better off figuring out how to clean up the existing mess around this name instead of trying to start something new. (This also would have the advantage -- or disadvantage -- that it would require concrete tasks instead of just bikeshedding on the mailing list ;-).)

-n

From alejandro.weinstein at gmail.com Wed Sep 5 09:07:44 2012
From: alejandro.weinstein at gmail.com (Alejandro Weinstein)
Date: Wed, 5 Sep 2012 07:07:44 -0600
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <50474625.8060005@gmail.com>
Message-ID:

Two more suggestions:

* knowPy (know as in knowledge, which is a synonym of science)
* skeiPy (skei is one of the roots of the word "science")

Alejandro.

From matthew.brett at gmail.com Wed Sep 5 09:21:40 2012
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 5 Sep 2012 14:21:40 +0100
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

Hi,

On Wed, Sep 5, 2012 at 2:05 PM, Nathaniel Smith wrote:
> On Wed, Sep 5, 2012 at 10:54 AM, Thomas Kluyver wrote:
>> On 4 September 2012 03:14, The Helmbolds wrote:
>>> If there is a need for a name that includes more than just SciPy, then I
>>> suggest we consider something like the following:
>>> MatSysPy, pronounced "mat-sis-pie".
>>
>> Thanks, I guess you're referring to my recent thread about making a
>> unified 'brand' for the scipy ecosystem? I agree that picking a good
>> name is important
>>
>> To recap, names suggested so far:
>>
>> Sciome - from the numfocus proposal (potential confusion with sciome.com)
>> PyScis - pronounced like pisces (potential confusion with PySCeS and
>> PySci, two unrelated Python projects)
>> Scipy-base
>> Unipy
>> MatSysPy
>> Pyengine
>> Pycraft
>>
>> List subscribers, what do you think? Does one of those names feel
>> right to refer to the scipy/numpy/etc. stack? Would you like to rule
>> any out? Or can you think of a better name yourself?
>
> I said something similar in the numfocus thread, but I'll throw it out again:
>
> The only name that people seem to actually like for this is "PyLab".
> Of course the problem is that people liked it enough that it has
> existing usages (matplotlib.pylab, ipython -pylab, etc.) that AFAICT
> are now considered to be poorly put together and deprecated.
But the > actual meaning has always been pretty much what we're looking for > here, and it has existing name recognition. Maybe we'd be better off > figuring out how to clean up the existing mess around this name > instead of trying to start something new. (This also would have the > advantage -- or disadvantage -- that it would require concrete tasks > instead of just bikeshedding on the mailing list ;-).) Well - bikeshedding means putting a disproportionate amount of effort into something that is not important. Is the name not important? pylab seems like a good choice to me, and maybe it's an advantage to lose 'pylab' from the matplotlib namespace, it could be confusing. Best, Matthew From cimrman3 at ntc.zcu.cz Wed Sep 5 09:24:26 2012 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 05 Sep 2012 15:24:26 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <50474625.8060005@gmail.com> Message-ID: <5047528A.4030502@ntc.zcu.cz> Hi, we have enough of pies (something-py) - let's make a cake: scicake :) r. From takowl at gmail.com Wed Sep 5 09:24:14 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 14:24:14 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On 5 September 2012 14:05, Nathaniel Smith wrote: > The only name that people seem to actually like for this is "PyLab". > Of course the problem is that people liked it enough that it has > existing usages (matplotlib.pylab, ipython -pylab, etc.) that AFAICT > are now considered to be poorly put together and deprecated. But the > actual meaning has always been pretty much what we're looking for > here, and it has existing name recognition. I quite like this idea at first glance. A quick search suggests that someone else (Keir Mierle, CCed here) had a similar idea some time ago : http://www.scipy.org/PyLab As a minimum, I think we'd need to get pylab.org and build an introduction site there. pylab.org currently redirects to https://bitbucket.org/vejnar/pylab_build (a seemingly unrelated use of the term pylab). But that seems to have been inactive for 2 years, so maybe vejnar wouldn't mind parting with it. Thomas From travis at vaught.net Wed Sep 5 10:22:39 2012 From: travis at vaught.net (Travis Vaught) Date: Wed, 5 Sep 2012 09:22:39 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <08CD4D49-7405-4702-AB4C-2CE8DD522CE7@vaught.net> On Sep 5, 2012, at 8:05 AM, Nathaniel Smith wrote: > On Wed, Sep 5, 2012 at 10:54 AM, Thomas Kluyver wrote: >> On 4 September 2012 03:14, The Helmbolds wrote: >>> If there is a need for a name that includes more than just SciPy, then I >>> suggest we consider something like the following: >>> MatSysPy, pronounced "mat-sis-pie". >> >> Thanks, I guess you're referring to my recent thread about making a >> unified 'brand' for the scipy ecosystem? I agree that picking a good >> name is important >> >> To recap, names suggested so far: >> >> Sciome - from the numfocus proposal (potential confusion with sciome.com) >> PyScis - pronounced like pisces (potential confusion with PySCeS and >> PySci, two unrelated Python projects) >> Scipy-base >> Unipy >> MatSysPy >> Pyengine >> Pycraft >> >> List subscribers, what do you think? Does one of those names feel >> right to refer to the scipy/numpy/etc. stack? Would you like to rule >> any out? 
Or can you think of a better name yourself? > > I said something similar in the numfocus thread, but I'll throw it out again: > > The only name that people seem to actually like for this is "PyLab". > Of course the problem is that people liked it enough that it has > existing usages (matplotlib.pylab, ipython -pylab, etc.) that AFAICT > are now considered to be poorly put together and deprecated. But the > actual meaning has always been pretty much what we're looking for > here, and it has existing name recognition. Maybe we'd be better off > figuring out how to clean up the existing mess around this name > instead of trying to start something new. (This also would have the > advantage -- or disadvantage -- that it would require concrete tasks > instead of just bikeshedding on the mailing list ;-).) I'm a fan of this name as well. It has always captured the goal of this conversation (which has been a goal of previous conversations that resulted in its earlier usage) -- A "clean up" of the existing mess as Nathaniel suggests could be done fairly easily if the domain is available and (more importantly) the community gets behind its use. Having written this snippet years ago in an attempt to disambiguate the usage of SciPy: http://scipy.org/SciPyDotOrg I think this community could craft an even more coherent narrative around PyLab that can move the "brand" forward. Best, Travis > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Wed Sep 5 10:28:14 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 15:28:14 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <08CD4D49-7405-4702-AB4C-2CE8DD522CE7@vaught.net> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <08CD4D49-7405-4702-AB4C-2CE8DD522CE7@vaught.net> Message-ID: On 5 September 2012 15:22, Travis Vaught wrote: > A "clean up" of the existing mess as Nathaniel suggests could be done fairly > easily if the domain is available and (more importantly) the community gets > behind its use. I've contacted vejnar to politely ask if he'd consider transferring pylab.org to us. It seems like the .com and .net names are domain-squatted, unfortunately. Thomas From newville at cars.uchicago.edu Wed Sep 5 10:35:56 2012 From: newville at cars.uchicago.edu (Matt Newville) Date: Wed, 5 Sep 2012 09:35:56 -0500 Subject: [SciPy-User] Naming Ideas Message-ID: Hi, > To recap, names suggested so far: > > Sciome - from the numfocus proposal (potential confusion with sciome.com) > PyScis - pronounced like pisces (potential confusion with PySCeS and > PySci, two unrelated Python projects) > Scipy-base > Unipy > MatSysPy > Pyengine > Pycraft > > List subscribers, what do you think? Does one of those names feel > right to refer to the scipy/numpy/etc. stack? Would you like to rule > any out? Or can you think of a better name yourself? I would suggest the name "hunter". It's easily pronounced, and connotes both the quest of scientific inquiry and the scientific python community. As a second suggestion, "orion" makes a slightly more elliptical reference, but adds emphasis to the scientific nature of the system. 
--Matt Newville

From mierle at gmail.com Wed Sep 5 11:11:37 2012
From: mierle at gmail.com (Keir Mierle)
Date: Wed, 5 Sep 2012 08:11:37 -0700
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

If you guys put the pylab name to good use, that would be awesome.

On Sep 5, 2012 6:24 AM, "Thomas Kluyver" wrote:
> On 5 September 2012 14:05, Nathaniel Smith wrote:
> > The only name that people seem to actually like for this is "PyLab".
> > Of course the problem is that people liked it enough that it has
> > existing usages (matplotlib.pylab, ipython -pylab, etc.) that AFAICT
> > are now considered to be poorly put together and deprecated. But the
> > actual meaning has always been pretty much what we're looking for
> > here, and it has existing name recognition.
>
> I quite like this idea at first glance. A quick search suggests that
> someone else (Keir Mierle, CCed here) had a similar idea some time ago:
> http://www.scipy.org/PyLab
>
> As a minimum, I think we'd need to get pylab.org and build an
> introduction site there. pylab.org currently redirects to
> https://bitbucket.org/vejnar/pylab_build (a seemingly unrelated use of
> the term pylab). But that seems to have been inactive for 2 years, so
> maybe vejnar wouldn't mind parting with it.
>
> Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From travis at continuum.io Wed Sep 5 11:43:58 2012
From: travis at continuum.io (Travis Oliphant)
Date: Wed, 5 Sep 2012 10:43:58 -0500
Subject: [SciPy-User] Naming Ideas
In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID: <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io>

The name pylab has been suggested in the past for the use-case we are discussing and would be a good name.

+1

-Travis

On Sep 5, 2012, at 10:11 AM, Keir Mierle wrote:
> If you guys put the pylab name to good use, that would be awesome.
> > On Sep 5, 2012 6:24 AM, "Thomas Kluyver" wrote: > On 5 September 2012 14:05, Nathaniel Smith wrote: > > The only name that people seem to actually like for this is "PyLab". > > Of course the problem is that people liked it enough that it has > > existing usages (matplotlib.pylab, ipython -pylab, etc.) that AFAICT > > are now considered to be poorly put together and deprecated. But the > > actual meaning has always been pretty much what we're looking for > > here, and it has existing name recognition. > > I quite like this idea at first glance. A quick search suggests that > someone else (Keir Mierle, CCed here) had a similar idea some time ago > : http://www.scipy.org/PyLab > > As a minimum, I think we'd need to get pylab.org and build an > introduction site there. pylab.org currently redirects to > https://bitbucket.org/vejnar/pylab_build (a seemingly unrelated use of > the term pylab). But that seems to have been inactive for 2 years, so > maybe vejnar wouldn't mind parting with it. > > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From dg.gmane at thesamovar.net Wed Sep 5 11:52:01 2012 From: dg.gmane at thesamovar.net (Dan Goodman) Date: Wed, 05 Sep 2012 17:52:01 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <50477521.6030805@thesamovar.net> On 05/09/2012 15:05, Nathaniel Smith wrote: > On Wed, Sep 5, 2012 at 10:54 AM, Thomas Kluyver wrote: >> Sciome - from the numfocus proposal (potential confusion with sciome.com) >> PyScis - pronounced like pisces (potential confusion with PySCeS and >> PySci, two unrelated Python projects) >> Scipy-base >> Unipy >> MatSysPy >> Pyengine >> Pycraft >> >> List subscribers, what do you think? Does one of those names feel >> right to refer to the scipy/numpy/etc. stack? Would you like to rule >> any out? Or can you think of a better name yourself? > > I said something similar in the numfocus thread, but I'll throw it out again: > > The only name that people seem to actually like for this is "PyLab". PyLab is a nice, clear, memorable name, but I feel like it's a bad fit for what is being suggested because it brings to mind Matlab, and, at least to me, suggests that we're just playing catch-up with that. The existing pylab in matplotlib is well named, because it is more or less providing a matlab replacement, but if we want to refer to the whole numpy/scipy/etc. ecosystem, which I believe aspires to be much more than just a free version of Matlab, shouldn't we be more ambitious and self-confident in the name? That said I have no good ideas. From the list above, Sciome is the nicest but the name clash makes it a bit risky I think. Is the idea that the ecosystem should be focused on scientific computation only? I thought that there was a push for more general high performance computational stuff? Some vague ideas, none of which are very good: - Sciosphere (I like the use of the word 'sphere' to represent an ecosystem, like biosphere, unfortunately already exists as well as not being very catchy) - SciGnosis, PyGnosis, etc., i.e. using gnosis (knowledge), although pygnosis is a bit close to psygnosis the computer game company, and maybe gnosis has too much religious connotation? Whatever is chosen, it should be easily Googlable returning preferably unique results! 
Dan From travis at continuum.io Wed Sep 5 11:58:24 2012 From: travis at continuum.io (Travis Oliphant) Date: Wed, 5 Sep 2012 10:58:24 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> On Sep 5, 2012, at 4:54 AM, Thomas Kluyver wrote: > On 4 September 2012 03:14, The Helmbolds wrote: >> If there is a need for a name that includes more than just SciPy, then I >> suggest we consider something like the following: >> MatSysPy, pronounced "mat-sis-pie". > > Thanks, I guess you're referring to my recent thread about making a > unified 'brand' for the scipy ecosystem? I agree that picking a good > name is important > > To recap, names suggested so far: > > Sciome - from the numfocus proposal (potential confusion with sciome.com) > PyScis - pronounced like pisces (potential confusion with PySCeS and > PySci, two unrelated Python projects) > Scipy-base > Unipy > MatSysPy > Pyengine > Pycraft I like unipy as well. -Travis > > List subscribers, what do you think? Does one of those names feel > right to refer to the scipy/numpy/etc. stack? Would you like to rule > any out? Or can you think of a better name yourself? > > Thanks, > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From perry at stsci.edu Wed Sep 5 11:59:35 2012 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 5 Sep 2012 11:59:35 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: It's short, google unique, spelling is pretty connected to pronunciation, and makes reasonable sense. Unless someone comes up with a clearly better alternative, it's my favorite by far. Perry On Sep 5, 2012, at 11:43 AM, Travis Oliphant wrote: > The name pylab has been suggested in the past for the use-case we > are discussing and would be a good name > > +1 > > -Travis > > > On Sep 5, 2012, at 10:11 AM, Keir Mierle wrote: > >> If you guys put the pylab name to good use, that would be awesome. >> >> On Sep 5, 2012 6:24 AM, "Thomas Kluyver" wrote: >> On 5 September 2012 14:05, Nathaniel Smith wrote: >> > The only name that people seem to actually like for this is >> "PyLab". >> > Of course the problem is that people liked it enough that it has >> > existing usages (matplotlib.pylab, ipython -pylab, etc.) that >> AFAICT >> > are now considered to be poorly put together and deprecated. But >> the >> > actual meaning has always been pretty much what we're looking for >> > here, and it has existing name recognition. >> >> I quite like this idea at first glance. A quick search suggests that >> someone else (Keir Mierle, CCed here) had a similar idea some time >> ago >> : http://www.scipy.org/PyLab >> >> As a minimum, I think we'd need to get pylab.org and build an >> introduction site there. pylab.org currently redirects to >> https://bitbucket.org/vejnar/pylab_build (a seemingly unrelated use >> of >> the term pylab). But that seems to have been inactive for 2 years, so >> maybe vejnar wouldn't mind parting with it. 
>> >> Thomas >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From jason-sage at creativetrax.com Wed Sep 5 12:27:53 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Wed, 05 Sep 2012 11:27:53 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> Message-ID: <50477D89.70508@creativetrax.com> On 9/5/12 10:58 AM, Travis Oliphant wrote: > > On Sep 5, 2012, at 4:54 AM, Thomas Kluyver wrote: > >> On 4 September 2012 03:14, The Helmbolds wrote: >>> If there is a need for a name that includes more than just SciPy, then I >>> suggest we consider something like the following: >>> MatSysPy, pronounced "mat-sis-pie". >> >> Thanks, I guess you're referring to my recent thread about making a >> unified 'brand' for the scipy ecosystem? I agree that picking a good >> name is important >> >> To recap, names suggested so far: >> >> Sciome - from the numfocus proposal (potential confusion with sciome.com) >> PyScis - pronounced like pisces (potential confusion with PySCeS and >> PySci, two unrelated Python projects) >> Scipy-base >> Unipy >> MatSysPy >> Pyengine >> Pycraft > > > I like unipy as well. > What about just expanding the SciPy name brand? SciPy captures the essence, I think---a scientific python software suite. It's easy to say and already has a lot of name recognition (though that also might be a negative). Just thought I'd throw that out there. Other than that, pylab is my favorite, for what it's worth. Thanks, Jason From takowl at gmail.com Wed Sep 5 12:38:00 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 17:38:00 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: Matt: > I would suggest the name "hunter" A tribute to John Hunter would certainly be nice, but I think it would need to be rather more indirect. We'd have a hard time getting a good domain name or search position for a popular name like hunter or orion. > PyLab is a nice, clear, memorable name, but I feel like it's a bad fit > for what is being suggested because it brings to mind Matlab, and, at > least to me, suggests that we're just playing catch-up with that. I agree that it's not ideal, but I think the name 'pylab' now has a good reputation independent of Matlab. Ultimately, the important question is: can we find any new name that will work better than pylab? On 5 September 2012 16:59, Perry Greenfield wrote: > It's short, google unique, spelling is pretty connected to > pronunciation, and makes reasonable sense. Can I ask that people include the name they mean in their own post, not just 'it'? It's an extra click for me to expand quoted replies. ;-) Thanks. (Perry was expressing support for Pylab) Jason: > What about just expanding the SciPy name brand? SciPy captures the > essence, I think---a scientific python software suite. It's easy to say > and already has a lot of name recognition (though that also might be a > negative). That has some upsides, like fitting in with the various SciPy conferences. 
But unless scipy-the-package were to be renamed, I think there would be a lot of confusion, e.g. about how to install scipy-the-package or scipy-the-stack. The friction for using the name Pylab seems lower, as it's not currently a project in its own right. Thanks, Thomas From 275438859 at qq.com Wed Sep 5 12:57:14 2012 From: 275438859 at qq.com (=?gb18030?B?0MTI59byueI=?=) Date: Thu, 6 Sep 2012 00:57:14 +0800 Subject: [SciPy-User] problem with scipy's test Message-ID: Hi, everybody. I encountered this error while running scipy's tests. I want to know why, and how to fix it. (OS X Lion 10.7.4) Here is part of the output: AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13 error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%) x: array([[ 15.86892331, 0.0549568 ], [ 14.15864153, 0.31381369], [ 10.99691307, 0.37543458],... y: array([[ 3.19549052, 0.0549568 ], [ 2.79856422, 0.31381369], [ 1.67526354, 0.37543458],... ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley') ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.7/site-packages/nose-1.1.2-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13 error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley (mismatch 100.0%) x: array([[-0.36892684, -0.01935691], [-0.26850996, -0.11053158], [-0.40976156, -0.13223572],... y: array([[-0.43633077, -0.01935691], [-0.25161386, -0.11053158], [-0.36756684, -0.13223572],... ---------------------------------------------------------------------- Ran 5501 tests in 56.993s FAILED (KNOWNFAIL=13, SKIP=42, failures=76)
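For context, a run like the one reported above can be reproduced from the interpreter; a minimal sketch, assuming the numpy.testing-based runner that scipy used at the time (the 'full' label and verbosity level are just one common choice, not what the reporter necessarily used):

    import scipy
    print scipy.__version__   # worth including in any report
    scipy.test('full', verbose=2)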
From fperez.net at gmail.com Wed Sep 5 13:29:01 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 5 Sep 2012 10:29:01 -0700 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: On Wed, Sep 5, 2012 at 9:38 AM, Thomas Kluyver wrote: > That has some upsides, like fitting in with the various SciPy > conferences. But unless scipy-the-package were to be renamed, I think > there would be a lot of confusion, e.g. about how to install > scipy-the-package or scipy-the-stack. The friction for using the name > Pylab seems lower, as it's not currently a project in its own right. Every time this discussion comes up (not only in public, I've had it in private many times too), we seem to go back to pylab and scipy as the two names that stick. I like scipy but I think it's too overloaded at this point to easily disambiguate, and I agree with Thomas that pylab *not* being a self-contained installable project (other than a pylab.py file installed by matplotlib) is mostly a good thing, though we may need to deprecate that to avoid ongoing confusion. So my vote at this point is for pylab, and if we do follow that route, I suggest we coordinate with the matplotlib team to perhaps start changing the docstring in that module and gradually deprecate it as a standalone entry point. Otherwise, it will inevitably cause confusion between whatever is posted on a (yet to be created) pylab.org site and what 'import pylab' produces. Cheers, f From cournape at gmail.com Wed Sep 5 13:54:16 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 5 Sep 2012 18:54:16 +0100 Subject: [SciPy-User] [Numpy-discussion] encounter error while it's testing In-Reply-To: References: Message-ID: On Wed, Sep 5, 2012 at 8:50 AM, ???? <275438859 at qq.com> wrote: > Hi, everybody. > I have installed scipy with the command "pip install scipy" (OS X Lion > 10.7.4), > but I encounter an error while testing. BTW, the tests of numpy are OK. gcc-llvm (the default gcc) is known not to work with scipy. It may be a bug in gcc-llvm (or, more unlikely, in scipy). I recommend you use the binaries with the python from python.org website, or use clang to build it on Lion. David
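A sketch of what that advice amounts to, by analogy with the gcc-4.2 recipe quoted in the next message (untested here, and note that scipy still needs a Fortran compiler such as gfortran regardless of which C compiler is used):

    export CC=clang
    export CXX=clang++
    export FFLAGS=-ff2c
    python setup.py build --fcompiler=gfortran
    python setup.py install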
From nicolas.pinto at gmail.com Wed Sep 5 14:00:19 2012 From: nicolas.pinto at gmail.com (Nicolas Pinto) Date: Wed, 5 Sep 2012 14:00:19 -0400 Subject: [SciPy-User] Installing issue with python 2.7, numpy 1.5.0b1 in my Mac In-Reply-To: References: <4C6B8E60.80903@silveregg.co.jp> <4C6BC890.8020009@silveregg.co.jp> <4C6DCFDC.4050203@silveregg.co.jp> Message-ID: I confirm the "missing npymath.ini" bug on OS X 10.6 (Snow Leopard) when using pip-1.2; the problem did not appear with pip-1.1. What would be the best way to fix this upstream (pip)? N On Tue, Sep 13, 2011 at 6:08 PM, Samuel John wrote: > Hi Eli, > > I am not sure concerning 1.5.0b1. > > Successful build of numpy (1.6.2) and scipy 0.10 is done on OS X 10.7 (Lion) via: > > export CC=gcc-4.2 > export CXX=g++-4.2 > export FFLAGS=-ff2c > python setup.py build --fcompiler=gfortran > python setup.py install > > You must have the right gfortran. I got mine via http://mxcl.github.com/homebrew/: > /usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)" > brew install gfortran > > cheers, > Samuel > > On 13.09.2011, at 23:46, Eli Finkelshteyn wrote: > >> Was this ever resolved? I'm having the exact same issue. >> >> Markus Hubig gmail.com> writes: >> >>> >>> >>> Yes I'm using Snow Leopard ... I'll try the FFLAGS this evening and >>> giving some feedback ... >>> On Fri, Aug 20, 2010 at 2:44 AM, David silveregg.co.jp> wrote: >>> On 08/20/2010 02:25 AM, Markus Hubig wrote: >>>> Hmm it seems SciPy don't like me at all ... Now the installation of > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Nicolas Pinto From takowl at gmail.com Wed Sep 5 15:23:09 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 20:23:09 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: On 5 September 2012 18:29, Fernando Perez wrote: > So my vote at this point is for pylab, and if we do follow that route, > I suggest we coordinate with the matplotlib team to perhaps start > changing the docstring in that module and gradually deprecate it as a > standalone entry point Thanks Fernando, that's a good point. I've no doubt some of the matplotlib crew are on this mailing list, but I'll ping matplotlib-devel as well. Thomas From helmrp at yahoo.com Wed Sep 5 17:41:58 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Wed, 5 Sep 2012 14:41:58 -0700 (PDT) Subject: [SciPy-User] Naming Conventions Message-ID: <1346881318.68929.YahooMailNeo@web31810.mail.mud.yahoo.com> SciPy Naming Criteria and Their Application 1. On Criteria for Names: Names are so terribly important. But just saying you "like" a name means nothing without reasons why it's valued more than others. Valuing one name over another requires careful, objective, and thoughtful weighing of the reasons one name is preferred over another. That requires formally applying some very explicit evaluation criteria/desiderata that are as objective as possible. Those that appear to me to be useful are listed below. They're not listed in any particular order and you may add your own as you see fit. The "uniqueness" criterion is pretty objective, while the rest are largely but not entirely objective. The criteria for names are: 1.1. Unique: Cannot use any trademarked or proprietary names, and must avoid names easily confused with similar words commonly used in a different sense. 1.2. Memorable. Short; easily pronounced; easily remembered. 1.3. Attractive. Draws the reader's eye; stimulates interest; encourages learning more about the thing named. 1.4. Informative. Clearly indicates what the name refers to. 1.5. Positive. Has strong positive connotations. Avoid negative connotations, as well as names easily parodied to become objects of derision (as happened to Microsoft's "Back Office"). 2. Application of These Criteria to Some Suggested Names: 2.1. KnowPy. Not informative. Could be a knowledge-base program or a primer on Python. 2.2. MatSysPy. Not unique enough. Conflicts with the existing "Matsys Corporation" and "MATSYS" home design company names. 2.3. NumLab. Not unique. Conflicts with the "NumLab" scripting language. 2.4. PyCraft. Not unique. Conflicts with "Pycraft Legal Services" of St. Augustine, FL, and with the "PyCraft" open-source game engine under development elsewhere. It's also not informative -- could refer to arts-and-crafts stuff like needlepoint. 2.5. PyEngine. Not unique. Conflicts with Mango's "PyEngine" and many others. It's also not informative -- could refer to automotive engines. 2.6. PyLab. Not unique. Well-known conflicts. (But one writer has suggested these might be resolvable!) 2.7. PyScis. Not unique enough. As noted by the proposer, it can be confused with PySCeS and PySci, two unrelated Python projects. Also has a negative connotation if pronounced like "pisces"
(suggesting that this is either a fishy, suspect thingy or parodied as just "pieces"). 2.8. SciER. Not unique. Conflicts with "SCIER - A technological simulation platform to managing natural resources". May also conflict with http://www.scier.eu/. May also connote medical science applications. 2.9. SciMatPy. Not unique enough. Numerous conflicts with "SciMat". 2.10. SciGnosis. Not unique. Numerous conflicts, including the "SCI GNOSIS" company in France. 2.11. PyGnosis. Not unique enough. Conflicts with "Psygnosis Games". Unfortunate parody to "pig nose". 2.12. Sciome. Not unique. Conflicts with the "SCIOME - Enabling Science via Analytical Informatics" company. Negative connotation and derisive parody if pronounced "Sigh!! Oh me!!!" 2.13. ScioSphere. Not unique. Numerous conflicts with existing usages. 2.14. SciPac. Not unique. Conflicts with the "SCIPAC" chemical reagent company. 2.15. SciPyPlus. I'd evaluate its criteria as follows: 2.15.1. It's unique and not likely to be confused with or conflict with other (possibly trademarked) names. 2.15.2. It's short, and rather easily pronounced. 2.15.3. It's attractive. Does draw attention. Does stimulate interest. Has the advantage of leveraging SciPy's current name and reputation. 2.15.4. It's informative -- obviously it refers to something more/better/newer/larger than SciPy alone. 2.15.5. It does have a positive connotation -- more is better! 2.16. Scipy-base. Not unique enough. Conflicts with existing numpy/base and scipy/base module names. 2.17. SciPyLab. Not unique enough. Too easily confused with "SciLab", an "open source, cross-platform numerical computational package and a high-level, numerically oriented programming language." Not as easily pronounced as SciPyPlus. 2.18. SkeiPy. Non-informative. What this refers to is not obvious at first glance. Possible parody as "sky-pie, pie-in-the-sky." Possible confusion with "Skype," especially in oral communication. 2.19. UniPy. Not unique enough; too easily confused with various "UniPay" financial payment services. Not informative; literally means "one Py," whatever that might be. 3. The best out of this list seems to be SciPyPlus, firmly based on the criteria used. 4. Are there better names, and how do they compare to SciPyPlus's rating on the same criteria/desiderata? From takowl at gmail.com Wed Sep 5 18:53:17 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 5 Sep 2012 23:53:17 +0100 Subject: [SciPy-User] Naming Conventions In-Reply-To: <1346881318.68929.YahooMailNeo@web31810.mail.mud.yahoo.com> References: <1346881318.68929.YahooMailNeo@web31810.mail.mud.yahoo.com> Message-ID: On 5 September 2012 22:41, The Helmbolds wrote: > 1.4. Informative. Clearly indicates what the name refers to. I'd definitely disagree with this: too many bland, uninspiring names come from attempts to squash a description into a name. Think of the most familiar or popular brands: it's hardly ever possible to relate the name to what they do. Python, Linux, Google, Apple, McDonalds... There are occasional counterexamples, like Microsoft or KFC, but a name certainly doesn't need to be descriptive. > 1.5. Positive. Has strong positive connotations. Avoid negative connotations, as well as names easily parodied to become objects of derision (as happened to Microsoft's "Back Office"). There were quite a few jokes about feminine hygiene products when Apple announced the iPad.
I think the product matters more than the humour value of the name. And parody isn't necessarily a bad thing: witness all the free publicity Mastercard has got from parodies of the 'priceless' ads. Although there are names that it's worth avoiding - I've heard rumours that an obstacle for the GIMP is that it's awkward to discuss it in formal contexts. > 2.6. PyLab. Not unique. Well-known conflicts. (But one writer has suggested these might be resolvable!) Its existing uses are somewhat similar to the new proposal, and it's a name that this community already 'owns', in that the most familiar uses of it relate to our software. So we could repurpose it a bit. Inevitably there would be some confusion, but it could also work to our advantage - we don't need to get everyone used to a new name. > 2.15.3. It's attractive. [SciPyPlus] I don't think attractiveness is something one person can definitively decide. I don't much care for that name, for instance. SciPyPlus sounds like a fork of scipy, and even past that it invites confusion ("SciPyPlus? I don't think I need anything complicated, maybe I should go and look for SciPy, that ought to be simpler while I get started. I'll look at the 'Plus' bits later." - user returns to all the complexity we were trying to save her from). Thanks, Thomas From ben.root at ou.edu Wed Sep 5 20:58:14 2012 From: ben.root at ou.edu (Benjamin Root) Date: Wed, 5 Sep 2012 20:58:14 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: On Wednesday, September 5, 2012, Thomas Kluyver wrote: > On 5 September 2012 18:29, Fernando Perez > > wrote: > > So my vote at this point is for pylab, and if we do follow that route, > > I suggest we coordinate with the matplotlib team to perhaps start > > changing the docstring in that module and gradually deprecate it as a > > standalone entry point > > Thanks Fernando, that's a good point. I've no doubt some of the > matplotlib crew are on this mailing list, but I'll ping > matplotlib-devel as well. > > Thomas Argh, this discussion again. We are obviously not marketers. Yes, pylab does keep coming up as a favorite (although I liked "unipy"). As much as pylab has been the bane of my existence as one of the maintainers, it is usually *the* entry point for newcomers, so this positions itself well for expansion of the domain/scope of its purpose. I am against deprecation because it serves an important purpose/niche. However, I can imagine spinning pylab off as a new project that serves its current purpose, but allows it to grow outside its current scope. This would be beneficial to many as it streamlines mpl, as well as allows us to clarify/target our docs for a particular audience while the pylab project could be better structured to introduce concepts and bring users up to speed. Thoughts? Ben Root From helmrp at yahoo.com Wed Sep 5 21:26:38 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Wed, 5 Sep 2012 18:26:38 -0700 (PDT) Subject: [SciPy-User] Naming Conventions In-Reply-To: References: <1346881318.68929.YahooMailNeo@web31810.mail.mud.yahoo.com> Message-ID: <1346894798.71498.YahooMailNeo@web31802.mail.mud.yahoo.com> OK. Exactly what criteria are you using, and how do the two compare on them? I've shown you mine. Your turn. Descriptive names are useful in attracting new "customers".
I first became aware of Python because I was interested in discrete event simulators and stumbled upon SimPy, which in turn led to Python. Otherwise, you're right -- I never would've suspected Python was a programming language. Or that it takes its name from a TV comedy rather than a snake. Of course, nobody wants to use any "bland, uninspiring" names no matter where they come from. But maybe I don't grasp exactly what it is you're all trying to find a good name for. I thought it was a name for a really easy-to-use, intuitive user interface to the power and versatility of NumPy + SciPy + MatPlotLib, all harnessed, synchronized, driven and deftly controlled by the Python programming/scripting language. If that's the case, then maybe "SciPyUI" would describe it even better than "SciPyPlus". Set me straight if I've not grasped the core of what's involved here. A good written definition of it would help to give better focus to this discussion of names. Otherwise we may be trying to "name an elephant" as in the oft-cited parable of the blind men describing an elephant. As for non-descriptive names like Apple, Google, Amazon, etc., when SciPy has the advertising budget they have, then you're right -- we can pick any name whatever, the cuter the better! In fact, non-descriptive names for large (or wannabe large) commercial enterprises are a super good idea, because then the business can engage in anything under the sun without ever changing its name (e.g., "Enron" or "Litton"). In the meantime, a good descriptive name for this SciPy thing helps to attract "customers". You know, as I read your post, I get the impression you kinda agree that usually names that too readily lend themselves to a nasty parody should be avoided. "Attractiveness" should be construed in the sense of drawing and stimulating intellectual attention and promoting the desire to learn more about the thing named. It's not intended to refer to "prettiness". But I do indeed agree with you that some acronyms and short names can have a repulsive or ugly-sounding ring to them, and those should be avoided. I find your suggestion about SciPyPlus implying that folks should get their feet wet by starting with SciPy interesting, and that may be something to consider. However, if PyLab can easily be adapted to the purpose at hand, I guess SciPyPlus could be, too. As someone at present still rather on the fringe of this "community", but still very much interested in it, concerned about its future, and anxious to help it grow and improve, I find "PyLab" not descriptive. To me, it sounds more like a chemistry or biological laboratory than a versatile computing/graphical suite of digital programs. Or else just a weak echo of, vaguely related to, or part of "MatLab." Also, I trust you did not mean to imply that only those in the "community" would be affected and influenced by the choice of a name. Indeed, I hope we find a name that encourages many current "outsiders" to join with, use, and contribute their talents to this "community." >________________________________ > From: Thomas Kluyver >To: The Helmbolds ; SciPy Users List >Sent: Wednesday, September 5, 2012 3:53 PM >Subject: Re: [SciPy-User] Naming Conventions > >On 5 September 2012 22:41, The Helmbolds wrote: >> 1.4. Informative. Clearly indicates what the name refers to. > >I'd definitely disagree with this: too many bland, uninspiring names >come from attempts to squash a description into a name.
Think of the >most familiar or popular brands: it's hardly ever possible to relate >the name to what they do. Python, Linux, Google, Apple, McDonalds... >There are occasional counterexamples, like Microsoft or KFC, but a >name certainly doesn't need to be descriptive. > >> 1.5. Positive. Has strong positive connotations. Avoid negative connotations, as well as names easily parodied to become objects of derision (as happened to Microsoft's "Back Office"). > >There were quite a few jokes about feminine hygiene products when >Apple announced the iPad. I think the product matters more than the >humour value of the name. And parody isn't necessarily a bad thing: >witness all the free publicity Mastercard has got from parodies of the >'priceless' ads. Although there are names that it's worth avoiding - >I've heard rumours that an obstacle for the GIMP is that it's awkward >to discuss it in formal contexts. > >> 2.6. PyLab. Not unique. Well-known conflicts. (But one writer has suggested these might be resolvable!) > >Its existing uses are somewhat similar to the new proposal, and it's a >name that this community already 'owns', in that the most familiar >uses of it relate to our software. So we could repurpose it a bit. >Inevitably there would be some confusion, but it could also work to >our advantage - we don't need to get everyone used to a new name. > >> 2.15.3. It's attractive. [SciPyPlus] > >I don't think attractiveness is something one person can definitively >decide. I don't much care for that name, for instance. > >SciPyPlus sounds like a fork of scipy, and even past that it invites >confusion ("SciPyPlus? I don't think I need anything complicated, >maybe I should go and look for SciPy, that ought to be simpler while I >get started. I'll look at the 'Plus' bits later." - user returns to >all the complexity we were trying to save her from). > >Thanks, >Thomas From tsyu80 at gmail.com Wed Sep 5 22:16:38 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Wed, 5 Sep 2012 22:16:38 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50477521.6030805@thesamovar.net> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <50477521.6030805@thesamovar.net> Message-ID: On Wed, Sep 5, 2012 at 11:52 AM, Dan Goodman wrote: > On 05/09/2012 15:05, Nathaniel Smith wrote: > > On Wed, Sep 5, 2012 at 10:54 AM, Thomas Kluyver > wrote: > >> Sciome - from the numfocus proposal (potential confusion with > sciome.com) > >> PyScis - pronounced like pisces (potential confusion with PySCeS and > >> PySci, two unrelated Python projects) > >> Scipy-base > >> Unipy > >> MatSysPy > >> Pyengine > >> Pycraft > >> > >> List subscribers, what do you think? Does one of those names feel > >> right to refer to the scipy/numpy/etc. stack? Would you like to rule > >> any out? Or can you think of a better name yourself? > > > > I said something similar in the numfocus thread, but I'll throw it out > again: > > > > The only name that people seem to actually like for this is "PyLab". > > PyLab is a nice, clear, memorable name, but I feel like it's a bad fit > for what is being suggested because it brings to mind Matlab, and, at > least to me, suggests that we're just playing catch-up with that. The > existing pylab in matplotlib is well named, because it is more or less > providing a matlab replacement, but if we want to refer to the whole > numpy/scipy/etc.
ecosystem, which I believe aspires to be much more than > just a free version of Matlab, shouldn't we be more ambitious and > self-confident in the name? > > That said I have no good ideas. From the list above, Sciome is the > nicest but the name clash makes it a bit risky I think. > > Is the idea that the ecosystem should be focused on scientific > computation only? I thought that there was a push for more general high > performance computational stuff? > > Some vague ideas, none of which are very good: > > - Sciosphere (I like the use of the word 'sphere' to represent an > ecosystem, like biosphere, unfortunately already exists as well as not > being very catchy) > > - SciGnosis, PyGnosis, etc., i.e. using gnosis (knowledge), although > pygnosis is a bit close to psygnosis the computer game company, and > maybe gnosis has too much religious connotation? > > Whatever is chosen, it should be easily Googlable returning preferably > unique results! > > Dan > FWIW, I'm more-or-less in favor of pylab (despite the name clash), but I'm really surprised no one has suggested scithon. (I know, it can be easily confused with cython.) -Tony From fperez.net at gmail.com Wed Sep 5 22:30:48 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 5 Sep 2012 19:30:48 -0700 Subject: [SciPy-User] Naming Conventions In-Reply-To: <1346881318.68929.YahooMailNeo@web31810.mail.mud.yahoo.com> References: <1346881318.68929.YahooMailNeo@web31810.mail.mud.yahoo.com> Message-ID: On Wed, Sep 5, 2012 at 2:41 PM, The Helmbolds wrote: > SciPy Naming Criteria and Their Application A quick netiquette note: it would be best if you could post in plain text mode; this mailing list (and all the others in this ecosystem) have a strong tradition of avoiding html email. Thanks, f From a.h.jaffe at gmail.com Thu Sep 6 05:12:14 2012 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Thu, 06 Sep 2012 10:12:14 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50477D89.70508@creativetrax.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> Message-ID: On 05/09/2012 17:27, Jason Grout wrote: > > What about just expanding the SciPy name brand? SciPy captures the > essence, I think---a scientific python software suite. It's easy to say > and already has a lot of name recognition (though that also might be a > negative). > > Just thought I'd throw that out there. Other than that, pylab is my > favorite, for what it's worth. > > Thanks, > > Jason > +1 -- It seems obviously the most appropriate name. I don't think it would be any more confusing to the community than re-purposing "pylab". But of course it does raise the question of what to call the current "scipy" in that case. From robert.kern at gmail.com Thu Sep 6 05:34:15 2012 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Sep 2012 10:34:15 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> Message-ID: On Thu, Sep 6, 2012 at 10:12 AM, Andrew Jaffe wrote: > On 05/09/2012 17:27, Jason Grout wrote: >> >> What about just expanding the SciPy name brand? SciPy captures the >> essence, I think---a scientific python software suite.
It's easy to say >> and already has a lot of name recognition (though that also might be a >> negative). >> >> Just thought I'd throw that out there. Other than that, pylab is my >> favorite, for what it's worth. >> >> Thanks, >> >> Jason >> > > +1 -- It seems obviously the most appropriate name. I don't think it > would be any more confusing to the community than re-purposing "pylab". > But of course it does raise the question of what to call the current > "scipy" in that case. For what it's worth, "SciPy" already *is* repurposed to refer to the broader project of encouraging Python in the sciences. The fact that no one knows this anymore should tell you exactly how well it's worked out. For example: http://mail.scipy.org/pipermail/scipy-dev/2009-August/012475.html -- Robert Kern From brennan.williams at visualreservoir.com Thu Sep 6 05:48:03 2012 From: brennan.williams at visualreservoir.com (Brennan Williams) Date: Thu, 06 Sep 2012 21:48:03 +1200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> Message-ID: <50487153.20002@visualreservoir.com> On 6/09/2012 9:34 p.m., Robert Kern wrote: > On Thu, Sep 6, 2012 at 10:12 AM, Andrew Jaffe wrote: >> On 05/09/2012 17:27, Jason Grout wrote: >>> What about just expanding the SciPy name brand? SciPy captures the >>> essence, I think---a scientific python software suite. It's easy to say >>> and already has a lot of name recognition (though that also might be a >>> negative). >>> >>> Just thought I'd throw that out there. Other than that, pylab is my >>> favorite, for what it's worth. >>> >>> Thanks, >>> >>> Jason >>> >> +1 -- It seems obviously the most appropriate name. I don't think it >> would be any more confusing to the community than re-purposing "pylab". >> But of course it does raise the question of what to call the current >> "scipy" in that case. > For what it's worth, "SciPy" already *is* repurposed to refer to the > broader project of encouraging Python in the sciences. The fact that > no one knows this anymore should tell you exactly how well it's worked > out. For example: > > http://mail.scipy.org/pipermail/scipy-dev/2009-August/012475.html > +1 to SciPy. I'm coming from an engineering view rather than a sciences view but this all has a sort of university feel to me, i.e. "sciences" vs "arts" where sciences could mean anything from physics to chemistry, marine biology etc etc and arts was history, english etc. So for me SciPy is already a broad name encompassing a number of domains/disciplines. Brennan From matthew.brett at gmail.com Thu Sep 6 05:59:36 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 6 Sep 2012 10:59:36 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50487153.20002@visualreservoir.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> <50487153.20002@visualreservoir.com> Message-ID: Hi, On Thu, Sep 6, 2012 at 10:48 AM, Brennan Williams wrote: > On 6/09/2012 9:34 p.m., Robert Kern wrote: >> On Thu, Sep 6, 2012 at 10:12 AM, Andrew Jaffe wrote: >>> On 05/09/2012 17:27, Jason Grout wrote: >>>> What about just expanding the SciPy name brand? SciPy captures the >>>> essence, I think---a scientific python software suite. It's easy to say >>>> and already has a lot of name recognition (though that also might be a >>>> negative). 
>>>> >>>> Just thought I'd throw that out there. Other than that, pylab is my >>>> favorite, for what it's worth. >>>> >>>> Thanks, >>>> >>>> Jason >>>> >>> +1 -- It seems obviously the most appropriate name. I don't think it >>> would be any more confusing to the community than re-purposing "pylab". >>> But of course it does raise the question of what to call the current >>> "scipy" in that case. >> For what it's worth, "SciPy" already *is* repurposed to refer to the >> broader project of encouraging Python in the sciences. The fact that >> no one knows this anymore should tell you exactly how well it's worked >> out. For example: >> >> http://mail.scipy.org/pipermail/scipy-dev/2009-August/012475.html >> > +1 to SciPy. > I'm coming from an engineering view rather than a sciences view but this > all has a sort of university feel to me, i.e. "sciences" vs "arts" where > sciences could mean anything from physics to chemistry, marine biology > etc etc and arts was history, english etc. > > So for me SciPy is already a broad name encompassing a number of > domains/disciplines. We have a package called 'nipy' and an umbrella project called 'nipy' that includes the package 'nipy'. As in Robert's email, we find ourselves writing nipy-the-package and nipy-the-community. If the name makes you stop and wonder what a person means, that's a strong vote against the name, in my view. In the case of pylab, there is no 'pylab' package - there is (mainly) a pylab module that is part of the matplotlib package. So, like others here, it seems to me that there would not be much 'which one do you mean?' going on if we repurpose 'pylab'. Cheers, Matthew From takowl at gmail.com Thu Sep 6 06:41:27 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 6 Sep 2012 11:41:27 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: On 6 September 2012 01:58, Benjamin Root wrote: > I am against deprecation because it serves an important purpose/niche. > However, I can imagine spinning pylab off as a new project that serves its > current purpose, but allows it to grow outside its current scope. This sounds reasonable. For instance, I've previously wanted to expand pylab to include bits from pandas, to make it more competitive with R. But the details of what goes in are a debate for another day, so let's not discuss that now. If we go down this route, I suggest that pylab should not include any code itself, so that we don't end up with pylab-the-package. Rather, it should just provide a namespace to access functions and classes from other projects (see the sketch after the summary below). To summarise, the top 3 names so far, with the advantages and drawbacks of each: - Pylab: For: our community already has the major use of the name, and it's used in a vaguely similar sense, so we get a running start. Against: Confusion with existing meaning of pylab, getting pylab.org domain (no response yet from the owner) - Scipy: For: our community already has the main use, and it's probably even closer to the intended meaning (as in the scipy conferences and scipy-central). Against: confusion with scipy-the-package. - Unipy: For: No direct confusion with existing names. Against: We'd have to build up name recognition from scratch, for a community and set of projects that are not new. Similarity to Unipay might hinder searchability.
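A minimal sketch of what such a code-free namespace could look like, as a hypothetical pylab/__init__.py (the exact re-exports are illustrative assumptions, not a decided list):

    # This package would implement nothing itself; it only re-exports
    # names from the projects that make up the stack.
    from numpy import *
    from numpy.fft import *
    from numpy.random import *
    from matplotlib.pyplot import *
    import numpy as np
    import matplotlib.pyplot as plt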
I suggest that, if we can get hold of the pylab.org domain, we go for that - it strikes a balance between the existing name recognition and the difficulty of repurposing a name. Thomas From brennan.williams at visualreservoir.com Thu Sep 6 07:42:09 2012 From: brennan.williams at visualreservoir.com (Brennan Williams) Date: Thu, 06 Sep 2012 23:42:09 +1200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> Message-ID: <50488C11.4010101@visualreservoir.com> On 6/09/2012 10:41 p.m., Thomas Kluyver wrote: > On 6 September 2012 01:58, Benjamin Root wrote: >> I am against deprecation because it serves an important purpose/niche. >> However, I can imagine spinning pylab off as a new project that serves its >> current purpose, but allows it to grow outside its current scope. > This sounds reasonable. For instance, I've previously wanted to expand > pylab to include bits from pandas, to make it more competitive with R. > But the details of what goes in are a debate for another day, so let's > not discuss that now. > > If we go down this route, I suggest that pylab should not include any > code itself, so that we don't end up with pylab-the-package. Rather, > it should just provide a namespace to access functions and classes > from other projects. > > To summarise, the top 3 names so far, with the advantages and drawbacks of each: > > - Pylab: For: our community already has the major use of the name, and > it's used in a vaguely similar sense, so we get a running start. > Against: Confusion with existing meaning of pylab, getting pylab.org > domain (no response yet from the owner) > - Scipy: For: our community already has the main use, and it's > probably even closer to the intended meaning (as in the scipy > conferences and scipy-central). Against: confusion with > scipy-the-package. > - Unipy: For: No direct confusion with existing names. Against: We'd > have to build up name recognition from scratch, for a community and > set of projects that are not new. Similarity to Unipay might hinder > searchability. > > I suggest that, if we can get hold of the pylab.org domain, we go for > that - it strikes a balance between the existing name recognition and > the difficulty of repurposing a name. > > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I don't think the Unipy vs Unipay searchability issue is that valid. The question is whether Unipy encompasses the intended meaning and for me it doesn't. Does it mean unify? Does it mean unicode? Obviously not the latter, probably more the former. I do agree with Thomas' comment about SciPy being closest to the intended meaning. I understand that there are issues with the scipy-the-family vs scipy-the-package etc type issues. However if you mentally step a few years into the future I think everyone might be comfortable with scipy being a "scipy related suite". I prefer scipy to pylab but my comment above about a "scipy related suite" would be equally applicable to a "pylab related suite". Possibly one issue with pylab is, as one previous poster noted, it implies a matlab clone approach. How much truth is in that I don't know as I've never used Matlab and Matlab compatibility has never been of interest to me. 
Brennan From josef.pktd at gmail.com Thu Sep 6 07:58:52 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Sep 2012 07:58:52 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50488C11.4010101@visualreservoir.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On Thu, Sep 6, 2012 at 7:42 AM, Brennan Williams wrote: > On 6/09/2012 10:41 p.m., Thomas Kluyver wrote: >> On 6 September 2012 01:58, Benjamin Root wrote: >>> I am against deprecation because it serves an important purpose/niche. >>> However, I can imagine spinning pylab off as a new project that serves its >>> current purpose, but allows it to grow outside its current scope. >> This sounds reasonable. For instance, I've previously wanted to expand >> pylab to include bits from pandas, to make it more competitive with R. >> But the details of what goes in are a debate for another day, so let's >> not discuss that now. >> >> If we go down this route, I suggest that pylab should not include any >> code itself, so that we don't end up with pylab-the-package. Rather, >> it should just provide a namespace to access functions and classes >> from other projects. >> >> To summarise, the top 3 names so far, with the advantages and drawbacks of each: >> >> - Pylab: For: our community already has the major use of the name, and >> it's used in a vaguely similar sense, so we get a running start. >> Against: Confusion with existing meaning of pylab, getting pylab.org >> domain (no response yet from the owner) >> - Scipy: For: our community already has the main use, and it's >> probably even closer to the intended meaning (as in the scipy >> conferences and scipy-central). Against: confusion with >> scipy-the-package. >> - Unipy: For: No direct confusion with existing names. Against: We'd >> have to build up name recognition from scratch, for a community and >> set of projects that are not new. Similarity to Unipay might hinder >> searchability. >> >> I suggest that, if we can get hold of the pylab.org domain, we go for >> that - it strikes a balance between the existing name recognition and >> the difficulty of repurposing a name. >> >> Thomas >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > I don't think the Unipy vs Unipay searchability issue is that valid. The > question is whether Unipy encompasses the intended meaning and for me it > doesn't. Does it mean unify? Does it mean unicode? Obviously not the > latter, probably more the former. > > I do agree with Thomas' comment about SciPy being closest to the > intended meaning. I understand that there are issues with the > scipy-the-family vs scipy-the-package etc type issues. However if you > mentally step a few years into the future I think everyone might be > comfortable with scipy being a "scipy related suite". > > I prefer scipy to pylab but my comment above about a "scipy related > suite" would be equally applicable to a "pylab related suite". Possibly > one issue with pylab is, as one previous poster noted, it implies a > matlab clone approach. How much truth is in that I don't know as I've > never used Matlab and Matlab compatibility has never been of interest to me. That's also what I'm thinking with pylab: matlab imitation and no namespaces. I hope whatever we are talking about will be more. 
(a numerical kitchen sink in **Python**.) Josef > > Brennan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From takowl at gmail.com Thu Sep 6 08:04:25 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 6 Sep 2012 13:04:25 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50488C11.4010101@visualreservoir.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On 6 September 2012 12:42, Brennan Williams wrote: > However if you > mentally step a few years into the future I think everyone might be > comfortable with scipy being a "scipy related suite". I guess my difficulty with this is that for what I do, I've rarely used the diverse numerical functions in SciPy. I use matplotlib quite a bit, and I know numpy underlies just about everything in this field, but to me, it doesn't feel like a "scipy related suite", more a Python suite including SciPy. I also haven't used Matlab, and I used pylab for some time before discovering that the name referred to Matlab. So personally I don't see that as a showstopper. Thomas From josh.k.lawrence at gmail.com Thu Sep 6 08:06:23 2012 From: josh.k.lawrence at gmail.com (Josh Lawrence) Date: Thu, 6 Sep 2012 07:06:23 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: What about SciPy backwards: Ypics (or yPicS) We could even pronounce it "epics" because having such a broad collection of packages would in some sense be "epic". --Josh On Sep 6, 2012, at 6:58 AM, josef.pktd at gmail.com wrote: > On Thu, Sep 6, 2012 at 7:42 AM, Brennan Williams > wrote: >> On 6/09/2012 10:41 p.m., Thomas Kluyver wrote: >>> On 6 September 2012 01:58, Benjamin Root wrote: >>>> I am against deprecation because it serves an important purpose/niche. >>>> However, I can imagine spinning pylab off as a new project that serves its >>>> current purpose, but allows it to grow outside its current scope. >>> This sounds reasonable. For instance, I've previously wanted to expand >>> pylab to include bits from pandas, to make it more competitive with R. >>> But the details of what goes in are a debate for another day, so let's >>> not discuss that now. >>> >>> If we go down this route, I suggest that pylab should not include any >>> code itself, so that we don't end up with pylab-the-package. Rather, >>> it should just provide a namespace to access functions and classes >>> from other projects. >>> >>> To summarise, the top 3 names so far, with the advantages and drawbacks of each: >>> >>> - Pylab: For: our community already has the major use of the name, and >>> it's used in a vaguely similar sense, so we get a running start. >>> Against: Confusion with existing meaning of pylab, getting pylab.org >>> domain (no response yet from the owner) >>> - Scipy: For: our community already has the main use, and it's >>> probably even closer to the intended meaning (as in the scipy >>> conferences and scipy-central). Against: confusion with >>> scipy-the-package. >>> - Unipy: For: No direct confusion with existing names. Against: We'd >>> have to build up name recognition from scratch, for a community and >>> set of projects that are not new. 
Similarity to Unipay might hinder >>> searchability. >>> >>> I suggest that, if we can get hold of the pylab.org domain, we go for >>> that - it strikes a balance between the existing name recognition and >>> the difficulty of repurposing a name. >>> >>> Thomas >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> I don't think the Unipy vs Unipay searchability issue is that valid. The >> question is whether Unipy encompasses the intended meaning and for me it >> doesn't. Does it mean unify? Does it mean unicode? Obviously not the >> latter, probably more the former. >> >> I do agree with Thomas' comment about SciPy being closest to the >> intended meaning. I understand that there are issues with the >> scipy-the-family vs scipy-the-package etc type issues. However if you >> mentally step a few years into the future I think everyone might be >> comfortable with scipy being a "scipy related suite". >> >> I prefer scipy to pylab but my comment above about a "scipy related >> suite" would be equally applicable to a "pylab related suite". Possibly >> one issue with pylab is, as one previous poster noted, it implies a >> matlab clone approach. How much truth is in that I don't know as I've >> never used Matlab and Matlab compatibility has never been of interest to me. > > That's also what I'm thinking with pylab: matlab imitation and no namespaces. > > I hope whatever we are talking about will be more. (a numerical > kitchen sink in **Python**.) > > Josef > > >> >> Brennan >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From alan.isaac at gmail.com Thu Sep 6 08:35:46 2012 From: alan.isaac at gmail.com (Alan G Isaac) Date: Thu, 06 Sep 2012 08:35:46 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> Message-ID: <504898A2.3090503@gmail.com> On 9/6/2012 5:34 AM, Robert Kern wrote: > http://mail.scipy.org/pipermail/scipy-dev/2009-August/012475.html The narrow goal of the current discussion as I understand it is to standardize a subset of packages for scientific computing and to provide a name that distributors can use to signal that they include this set. One can anticipate that this set might change over time. So here is a new name proposal (as if another were needed): SciPyDist, or SPD for short. Further I suggest that specific package sets be referred to by appending the year. E.g., SciPyDist2012 would be the set of packages agreed in 2012 to constitute the standard. (Of course some will prefer a standard versioning scheme.) The broader goal as I understand it is to allow researchers to easily know if a distribution claims to provide a certain core tool set and thereby to encourage those who create such distributions to certainly include these, making it simpler for researchers to count on a core tool set. The broad goal as I see it is to promote scientific computing and raise awareness of the utility of scipy-the-package and a few of its close friends. Robert's post implicitly raises the question of how much this will help advance the use of scipy-the-package. 
Aside from consuming time to pick the name and decide on the core packages, I don't see how it can hurt. But it would be nice to have a clear statement of how it will help. Cheers, Alan Isaac From takowl at gmail.com Thu Sep 6 09:00:38 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 6 Sep 2012 14:00:38 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <504898A2.3090503@gmail.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> <504898A2.3090503@gmail.com> Message-ID: On 6 September 2012 13:35, Alan G Isaac wrote: > The narrow goal of the current discussion as I understand it is to > standardize a subset of packages for scientific computing and to > provide a name that distributors can use to signal that they > include this set. Also, more importantly to my mind, having a clear name for what we sometimes call the 'scipy ecosystem', and a good starting point for new users. I see it as three components: - The name (the focus of discussion in this thread) - A website with an introduction and getting started instructions. - A standard, like Linux Standard Base, that distributions like EPD can conform to (In a similar vein, I'd like to make a Linux metapackage, so you can do "apt-get install pylab-base" -- see the sketch after this message) Thomas
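A minimal sketch of such a metapackage in Debian control-file form (the name "pylab-base" and the dependency list are assumptions for illustration; a metapackage installs no files of its own and exists only to declare dependencies):

    Package: pylab-base
    Section: python
    Architecture: all
    Depends: python-numpy, python-scipy, python-matplotlib, ipython
    Description: metapackage for the core scientific Python stack
     Installing this pulls in the packages agreed as the standard base,
     by analogy with the Linux Standard Base.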
> _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From helmrp at yahoo.com Thu Sep 6 16:57:15 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Thu, 6 Sep 2012 13:57:15 -0700 (PDT) Subject: [SciPy-User] SciPy ecosystem Message-ID: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> Can anyone give me a _precise_ definition of what the "SciPy ecosystem" is, and is not? For example, let's say I have or have seen some code somewhere. How do I tell quickly whether it is or is not included in this "SciPy ecosystem"? What characteristics of the code do I use to determine whether it is or is not part of this "ecosystem"? BobH From takowl at gmail.com Thu Sep 6 17:18:57 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 6 Sep 2012 22:18:57 +0100 Subject: [SciPy-User] SciPy ecosystem In-Reply-To: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> References: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> Message-ID: Hi Bob, On 6 September 2012 21:57, The Helmbolds wrote: > Can anyone give me a _precise_ definition of what the "SciPy ecosystem" is, and is not? > > For example, let's say I have or have seen some code somewhere. > How do I tell quickly whether it is or is not included in this "SciPy ecosystem"? > What characteristics of the code do I use to determine whether it is or is not part of this "ecosystem"? Well, an 'ecosystem' is a rather loose concept - it defines a web of interconnected things, not a strict in/out categorisation. At the core of the scipy ecosystem are numpy, scipy and matplotlib - there are strong connections between the projects, both in the code and in the people working on them. Then there's a ring of other closely related packages like IPython, pandas, and the various scikits. And of course, there's a wealth of more specialist code that relies on one or more of these packages. Distributions like EPD and Python(x,y) that focus on python in science are also part of the ecosystem, in a different way. In terms of the name we're looking for, there still needn't be a rigid definition of what it refers to, although there will be a defined core that distributions should consistently offer. The idea is that any project or analysis relying on some of the packages mentioned above can say that it uses Pylab (if that is indeed the name we pick). Then anyone else who wants to run that has a good starting point to install the dependencies. Best wishes, Thomas From jason-sage at creativetrax.com Thu Sep 6 17:51:18 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Thu, 06 Sep 2012 16:51:18 -0500 Subject: [SciPy-User] SciPy ecosystem In-Reply-To: References: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> Message-ID: <50491AD6.8060107@creativetrax.com> On 9/6/12 4:18 PM, Thomas Kluyver wrote: > Distributions like EPD and > Python(x,y) that focus on python in science are also part of the > ecosystem, in a different way. I would probably include Sage in the "distribution" list too...
Thanks, Jason From josef.pktd at gmail.com Thu Sep 6 18:02:37 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 6 Sep 2012 18:02:37 -0400 Subject: [SciPy-User] SciPy ecosystem In-Reply-To: <50491AD6.8060107@creativetrax.com> References: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> <50491AD6.8060107@creativetrax.com> Message-ID: On Thu, Sep 6, 2012 at 5:51 PM, Jason Grout wrote: > On 9/6/12 4:18 PM, Thomas Kluyver wrote: >> Distributions like EPD and >> Python(x,y) that focus on python in science are also part of the >> ecosystem, in a different way. > > I would probably include Sage in the "distribution" list too... and Neuro-Debian and Christoph Gohlke, with Yaroslav and Christoph currently doing a lot of compatibility testing in the ecosystem. and then there are the field packages in the future Neuro-scython Neuro-pylab Astro-scython Astro-pylab Fin-scython ... ... ??? Josef > > Thanks, > > Jason > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Fri Sep 7 04:38:55 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Sep 2012 01:38:55 -0700 Subject: [SciPy-User] John Hunter's memorial service: Oct 1, 2012 Message-ID: Hi all, I have just received the following information from John's family regarding the memorial service: John's memorial service will be held on Monday, October 1, 2012, at 11 a.m. at Rockefeller Chapel at the University of Chicago. The exact address is 5850 S. Woodlawn Ave, Chicago, IL 60615. The service is open to the public. The service will be fully planned and scripted with no room for people to eulogize; however, we will have a reception after the service, hosted by Tradelink, where people can talk. Regards, f From dg.gmane at thesamovar.net Wed Sep 5 11:52:01 2012 From: dg.gmane at thesamovar.net (Dan Goodman) Date: Wed, 05 Sep 2012 17:52:01 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <50477521.6030805@thesamovar.net> On 05/09/2012 15:05, Nathaniel Smith wrote: > On Wed, Sep 5, 2012 at 10:54 AM, Thomas Kluyver wrote: >> Sciome - from the numfocus proposal (potential confusion with sciome.com) >> PyScis - pronounced like pisces (potential confusion with PySCeS and >> PySci, two unrelated Python projects) >> Scipy-base >> Unipy >> MatSysPy >> Pyengine >> Pycraft >> >> List subscribers, what do you think? Does one of those names feel >> right to refer to the scipy/numpy/etc. stack? Would you like to rule >> any out? Or can you think of a better name yourself? > > I said something similar in the numfocus thread, but I'll throw it out again: > > The only name that people seem to actually like for this is "PyLab". PyLab is a nice, clear, memorable name, but I feel like it's a bad fit for what is being suggested because it brings to mind Matlab, and, at least to me, suggests that we're just playing catch-up with that. The existing pylab in matplotlib is well named, because it is more or less providing a matlab replacement, but if we want to refer to the whole numpy/scipy/etc. ecosystem, which I believe aspires to be much more than just a free version of Matlab, shouldn't we be more ambitious and self-confident in the name? That said I have no good ideas. From the list above, Sciome is the nicest but the name clash makes it a bit risky I think.
Is the idea that the ecosystem should be focused on scientific computation only? I thought that there was a push for more general high performance computational stuff? Some vague ideas, none of which are very good: - Sciosphere (I like the use of the word 'sphere' to represent an ecosystem, like biosphere; unfortunately it already exists, as well as not being very catchy) - SciGnosis, PyGnosis, etc., i.e. using gnosis (knowledge), although pygnosis is a bit close to psygnosis the computer game company, and maybe gnosis has too much religious connotation? Whatever is chosen, it should be easily Googlable, returning preferably unique results! Dan From almar.klein at gmail.com Thu Sep 6 07:57:23 2012 From: almar.klein at gmail.com (Almar Klein) Date: Thu, 6 Sep 2012 13:57:23 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: <50488C11.4010101@visualreservoir.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: I don't think that the name pylab implies a matlab *clone*. An alternative perhaps, which is true in many ways. almar On Sep 6, 2012 1:42 PM, "Brennan Williams" < brennan.williams at visualreservoir.com> wrote: > On 6/09/2012 10:41 p.m., Thomas Kluyver wrote: > > On 6 September 2012 01:58, Benjamin Root wrote: > >> I am against deprecation because it serves an important purpose/niche. > >> However, I can imagine spinning pylab off as a new project that serves its > >> current purpose, but allows it to grow outside its current scope. > > This sounds reasonable. For instance, I've previously wanted to expand > > pylab to include bits from pandas, to make it more competitive with R. > > But the details of what goes in are a debate for another day, so let's > > not discuss that now. > > > > If we go down this route, I suggest that pylab should not include any > > code itself, so that we don't end up with pylab-the-package. Rather, > > it should just provide a namespace to access functions and classes > > from other projects. > > > > To summarise, the top 3 names so far, with the advantages and drawbacks of each: > > > > - Pylab: For: our community already has the major use of the name, and > > it's used in a vaguely similar sense, so we get a running start. > > Against: Confusion with existing meaning of pylab, getting pylab.org > > domain (no response yet from the owner) > > - Scipy: For: our community already has the main use, and it's > > probably even closer to the intended meaning (as in the scipy > > conferences and scipy-central). Against: confusion with > > scipy-the-package. > > - Unipy: For: No direct confusion with existing names. Against: We'd > > have to build up name recognition from scratch, for a community and > > set of projects that are not new. Similarity to Unipay might hinder > > searchability. > > > > I suggest that, if we can get hold of the pylab.org domain, we go for > > that - it strikes a balance between the existing name recognition and > > the difficulty of repurposing a name. > > > > Thomas > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > I don't think the Unipy vs Unipay searchability issue is that valid. The > question is whether Unipy encompasses the intended meaning and for me it > doesn't. Does it mean unify? Does it mean unicode? Obviously not the > latter, probably more the former.
> > I do agree with Thomas' comment about SciPy being closest to the > intended meaning. I understand that there are issues with the > scipy-the-family vs scipy-the-package etc type issues. However if you > mentally step a few years into the future I think everyone might be > comfortable with scipy being a "scipy related suite". > > I prefer scipy to pylab but my comment above about a "scipy related > suite" would be equally applicable to a "pylab related suite". Possibly > one issue with pylab is, as one previous poster noted, it implies a > matlab clone approach. How much truth is in that I don't know as I've > never used Matlab and Matlab compatibility has never been of interest to > me. > > Brennan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From giacomo.boffi at polimi.it Fri Sep 7 09:48:46 2012 From: giacomo.boffi at polimi.it (Giacomo Boffi) Date: Fri, 7 Sep 2012 15:48:46 +0200 Subject: [SciPy-User] A unified face for the scipy ecosystem In-Reply-To: References: Message-ID: <20553.64318.731406.602439@aiuole.stru.polimi.it> Thomas Kluyver writes: > We think it's a problem for newcomers that there isn't a clear > starting point with the scipy ecosystem. Py4Science (the name used at UC Berkeley to refer to the same ecosystem) -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From fperez.net at gmail.com Fri Sep 7 10:11:08 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Sep 2012 07:11:08 -0700 Subject: [SciPy-User] A unified face for the scipy ecosystem In-Reply-To: <20553.64318.731406.602439@aiuole.stru.polimi.it> References: <20553.64318.731406.602439@aiuole.stru.polimi.it> Message-ID: On Fri, Sep 7, 2012 at 6:48 AM, Giacomo Boffi wrote: > Py4Science (the name used at UC Berkeley to refer to the same ecosystem) Speaking of which, I own the py4science.{org, com, net} domains; John Hunter and I bought them years ago since that was the moniker we always used for our workshops. I'd be happy to pass them on to numfocus if people think that's a good name to use for any community project, otherwise I'm likely to let them lapse. Thanks for reminding me of that, I'd totally forgotten. Cheers, f From fperez.net at gmail.com Fri Sep 7 10:41:07 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Sep 2012 07:41:07 -0700 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On Thu, Sep 6, 2012 at 4:57 AM, Almar Klein wrote: > I don't think that the name pylab implies a matlab *clone*. An alternative > perhaps, which is true in many ways. Agreed, no more than scilab is taken to be a matlab clone (which it isn't). It's just a union of python and lab, something that brings up ideas of science and experimentation. BTW, on another thread I just got reminded of the name John Hunter and I started using years ago for our workshops, which I've also kept at Berkeley for various activities: py4science. I own all the domains (org, net, com) and would be happy to transfer them if that name sticks.
f From jason-sage at creativetrax.com Fri Sep 7 11:13:46 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 07 Sep 2012 10:13:46 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1682FDE8-D90F-4936-ADEA-70C1D7C9DDF1@continuum.io> <50477D89.70508@creativetrax.com> <504898A2.3090503@gmail.com> Message-ID: <504A0F2A.7030403@creativetrax.com> On 9/6/12 8:00 AM, Thomas Kluyver wrote: > On 6 September 2012 13:35, Alan G Isaac wrote: >> The narrow goal of the current discussion as I understand it is to >> standardize a subset of packages for scientific computing and to >> provide a name that distributors can use to signal that they >> include this set. > > Also, more importantly to my mind, having a clear name for what we > sometimes call the 'scipy ecosystem', and a good starting point for > new users. I see it as three components: > > - The name (the focus of discussion in this thread) > - A website with an introduction and getting started instructions. > - A standard, like Linux Standard Base, that distributions like EPD > can conform to > (In a similar vein, I'd like to make a Linux metapackage, so you can > do "apt-get install pylab-base") To throw in another example of a successful system like this: this is reminding me a lot of TeXLive, and how it is both a standard collection of packages and a specific versioned distribution. Thanks, Jason From jkington at wisc.edu Fri Sep 7 11:25:12 2012 From: jkington at wisc.edu (Joe Kington) Date: Fri, 07 Sep 2012 10:25:12 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On Fri, Sep 7, 2012 at 9:41 AM, Fernando Perez wrote: > On Thu, Sep 6, 2012 at 4:57 AM, Almar Klein wrote: > > I don't think that the name pylab implies a matlab *clone*. An alternative > > perhaps, which is true in many ways. > > Agreed, no more than scilab is taken to be a matlab clone (which it > isn't). It's just a union of python and lab, something that brings up > ideas of science and experimentation. > > BTW, on another thread I just got reminded of the name John Hunter and > I started using years ago for our workshops, which I've also kept at > Berkeley for various activities: py4science. I own all the domains > (org, net, com) and would be happy to transfer them if that name > sticks. > > Just to add another voice (as if there weren't enough already), I think the "py4science" name is an excellent choice. It clearly emphasizes the "scientific computing in python" angle without any conflicts with existing packages. It's catchy and google-able, to boot. Just my two cents, at any rate. -Joe > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From eraldo.pomponi at gmail.com Fri Sep 7 12:11:11 2012 From: eraldo.pomponi at gmail.com (Eraldo Pomponi) Date: Fri, 7 Sep 2012 18:11:11 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: I was just following the discussion up to now but at this point I would say that py4science sounds really good.
It is simple, meaningful, and carries no reminder of commercial software. +1 Cheers, Eraldo On Fri, Sep 7, 2012 at 5:25 PM, Joe Kington wrote: > On Fri, Sep 7, 2012 at 9:41 AM, Fernando Perez wrote: > >> On Thu, Sep 6, 2012 at 4:57 AM, Almar Klein >> wrote: >> > I don't think that the name pylab implies a matlab *clone*. An >> alternative >> > perhaps, which is true in many ways. >> >> Agreed, no more than scilab is taken to be a matlab clone (which it >> isn't). It's just a union of python and lab, something that brings up >> ideas of science and experimentation. >> >> BTW, on another thread I just got reminded of the name John Hunter and >> I started using years ago for our workshops, which I've also kept at >> Berkeley for various activities: py4science. I own all the domains >> (org, net, com) and would be happy to transfer them if that name >> sticks. >> >> > Just to add another voice (as if there weren't enough already), I think > the "py4science" name is an excellent choice. > > It clearly emphasizes the "scientific computing in python" angle without > any conflicts with existing packages. It's catchy and google-able, to boot. > > Just my two cents, at any rate. > -Joe > > >> f >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From takowl at gmail.com Fri Sep 7 12:17:20 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 7 Sep 2012 17:17:20 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On 7 September 2012 17:11, Eraldo Pomponi wrote: > I was just following the discussion up to now but at this point > I would say that py4science sounds really good. > It is simple, meaningful, and carries no reminder of commercial software. It's OK, but it doesn't really seem like a name, just an abbreviation of "python for science". So I think pylab is still my first choice. But we don't know yet whether the owner of pylab.org will be happy to give us that domain, so other options are still important. Thanks, Thomas From fperez.net at gmail.com Fri Sep 7 12:23:34 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Sep 2012 09:23:34 -0700 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On Fri, Sep 7, 2012 at 9:17 AM, Thomas Kluyver wrote: > But we don't know yet whether the owner of pylab.org will be happy to > give us that domain, so other options are still important. I'm pretty sure I saw somewhere that he had agreed to pass it on to numfocus, but I could be wrong, can't find the link right now...
From travis at continuum.io Fri Sep 7 12:26:52 2012 From: travis at continuum.io (Travis Oliphant) Date: Fri, 7 Sep 2012 11:26:52 -0500 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: <0642DDDE-87CD-4800-9ACD-57592B8ECB42@continuum.io> Just my feedback: +1 on py4science I like py4sci a little better. My favorite is still pylab (but having the .org name is important). -Travis On Sep 7, 2012, at 11:17 AM, Thomas Kluyver wrote: > On 7 September 2012 17:11, Eraldo Pomponi wrote: >> I was just following the discussion up to now but at this point >> I would say that py4science sounds really good. >> It is simple, meaningful, and carries no reminder of commercial software. > > It's OK, but it doesn't really seem like a name, just an abbreviation > of "python for science". So I think pylab is still my first choice. > But we don't know yet whether the owner of pylab.org will be happy to > give us that domain, so other options are still important. > > Thanks, > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From takowl at gmail.com Fri Sep 7 12:30:22 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 7 Sep 2012 17:30:22 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On 7 September 2012 17:23, Fernando Perez wrote: > I'm pretty sure I saw somewhere that he had agreed to pass it on to > numfocus, but I could be wrong, can't find the link right now... I think I'm the only person who's tried to contact him during the present discussion - did someone ask him previously? Thomas From fperez.net at gmail.com Fri Sep 7 12:32:40 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Sep 2012 09:32:40 -0700 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> Message-ID: On Fri, Sep 7, 2012 at 9:30 AM, Thomas Kluyver wrote: > I think I'm the only person who's tried to contact him during the > present discussion - did someone ask him previously? We should ask again, as I may be confusing something and I can't find a link... Don't take my word for it yet :) From adnothing at gmail.com Fri Sep 7 13:26:46 2012 From: adnothing at gmail.com (Adrien) Date: Fri, 07 Sep 2012 19:26:46 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: <0642DDDE-87CD-4800-9ACD-57592B8ECB42@continuum.io> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> <0642DDDE-87CD-4800-9ACD-57592B8ECB42@continuum.io> Message-ID: <504A2E56.40404@gmail.com> +1 on py4sci (from the peanut gallery) It has a really nice ring to it (in English) and py4sci.{org,com} are not taken. A quick google search also suggests that there are no obvious collisions. Otherwise, we could also call it "The-Spanish-Inquisition".
Guess nobody expected that ;-) (For those who didn't get the - lame - joke, you're in for a Monty Python treat: http://www.youtube.com/watch?v=uprjmoSMJ-o) Adrien Le 07/09/2012 18:26, Travis Oliphant a écrit : > Just my feedback: > > +1 on py4science > > I like py4sci a little better. > > My favorite is still pylab (but having the .org name is important). > > -Travis > > > > On Sep 7, 2012, at 11:17 AM, Thomas Kluyver wrote: > >> On 7 September 2012 17:11, Eraldo Pomponi wrote: >>> I was just following the discussion up to now but at this point >>> I would say that py4science sounds really good. >>> It is simple, meaningful, and carries no reminder of commercial software. >> It's OK, but it doesn't really seem like a name, just an abbreviation >> of "python for science". So I think pylab is still my first choice. >> But we don't know yet whether the owner of pylab.org will be happy to >> give us that domain, so other options are still important. >> >> Thanks, >> Thomas >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Fri Sep 7 14:21:20 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Sep 2012 11:21:20 -0700 Subject: [SciPy-User] John Hunter's memorial service: Oct 1, 2012 In-Reply-To: References: Message-ID: I just received the official announcement, please note the RSVP requirement to Miriam at msierig at gmail.com. John Davidson Hunter, III 1968-2012 [image: Inline image 1] Our family invites you to join us to celebrate and remember the life of John Hunter Memorial Service Rockefeller Chapel 5850 South Woodlawn Chicago, IL 60637 Monday October 1, 2012 11am Service will be followed by a reception where family and friends may gather to share memories of John. Please RSVP to Miriam at msierig at gmail.com -------------- next part -------------- A non-text attachment was scrubbed... Name: jdhj.jpg Type: image/jpeg Size: 370050 bytes Desc: not available URL: From ralf.gommers at gmail.com Fri Sep 7 15:26:19 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 7 Sep 2012 21:26:19 +0200 Subject: [SciPy-User] looking for real testcases for Nelder-Mead fmin In-Reply-To: <5044E20C.70209@t-online.de> References: <5044E20C.70209@t-online.de> Message-ID: On Mon, Sep 3, 2012 at 6:59 PM, denis wrote: > Folks, > I'm looking for real or realistic testcases for Nelder-Mead > minimization of noisy functions, 2d to 10d or so, unconstrained > or box constraints, preferably not sum-of-squares and not Rosenbrock et al. > to wring out a new implementation that has restarts and verbose. > (Would like to discuss ways to restart too > but more ideas than test functions => never converge.) > > First-cut doc is under > > http://htmlpreview.github.com/?https://github.com/denis-bz/nelder-mead/blob/master/doc/Nelder-Mead-py.html > (if anyone knows how to display html directly, please let me know, > git noob.) > If you write it in reST, you can give it a .rst.txt extension and Github will render it correctly. And just in case you didn't see it, this may be useful: https://github.com/scipy/scipy/pull/90 Ralf From kmichael.aye at gmail.com Fri Sep 7 19:16:24 2012 From: kmichael.aye at gmail.com (K.-Michael Aye) Date: Fri, 7 Sep 2012 16:16:24 -0700 Subject: [SciPy-User] testieee NaN arithmetic did not perform per ieee spec with ifort Message-ID: Dear all, compiling the lapack 3.4.1 with Intel's ifort compiler results in the test program "testieee" not passing the NaN test: "NaN arithmetic did not perform per the ieee spec" When compiled with gfortran it passes this test. Should I worry? Will the ifort compile work so much faster that it's worth the little worry? (I will be working a lot with data binnings and griddings, interpolations, fittings...) Best regards, Michael From kmichael.aye at gmail.com Fri Sep 7 19:33:21 2012 From: kmichael.aye at gmail.com (K.-Michael Aye) Date: Fri, 7 Sep 2012 16:33:21 -0700 Subject: [SciPy-User] Naming Ideas References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: SPylab? ;) If not that, then I would comment that I don't think the name 'pylab' connotes a matlab clone. There are so many xxxlab names there. I also was thinking of scilab, but that's taken by some French numerical software package. I would somehow prefer scipy though, as it encompasses everything what I have in mind when thinking of doing science with python, but I can't comment on the inherent confusions that seem to appear recently regarding this. Michael From matthew.brett at gmail.com Fri Sep 7 19:53:38 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 8 Sep 2012 00:53:38 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <504A2E56.40404@gmail.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> <0642DDDE-87CD-4800-9ACD-57592B8ECB42@continuum.io> <504A2E56.40404@gmail.com> Message-ID: Hi, On Fri, Sep 7, 2012 at 6:26 PM, Adrien wrote: > +1 on py4sci (from the peanut gallery) I think all of pylab, py4sci, py4science are OK and better than the alternatives. A little corporate and boring, but, oh well. Now: > It has a really nice ring to it (in English) and py4sci.{org,com} are > not taken. A quick google search also suggests that there are no obvious > collisions. > > Otherwise, we could also call it "The-Spanish-Inquisition". Guess nobody > expected that ;-) Spanish Inquisition is a fine name. Funny, enjoyable, in a fine tradition (Unladen Swallow comes to mind). I have a feeling that there might be an urge to play it safe though. Thanks for the suggestion, it cheered me up, Matthew From josef.pktd at gmail.com Fri Sep 7 19:56:26 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 7 Sep 2012 19:56:26 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On Fri, Sep 7, 2012 at 7:33 PM, K.-Michael Aye wrote: > SPylab? ;) > > If not that, then I would comment that I don't think the name 'pylab' > connotes a matlab clone. There are so many xxxlab names > there. I also was thinking of scilab, but that's taken by some French > numerical software package. I thought scilab is a matlab imitation, or it's advertised as "close alternative" e.g.
first link that google gave http://ubuntuforums.org/showthread.php?t=594737 Josef I'm NOT a clone http://rlab.sourceforge.net/ > > I would somehow prefer scipy though, as it encompasses everything what > I have in mind when thinking of doing science with python, but I can't > comment on the inherent confusions that seem to appear recently > regarding this. > > Michael > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From kmichael.aye at gmail.com Fri Sep 7 21:25:30 2012 From: kmichael.aye at gmail.com (K.-Michael Aye) Date: Fri, 7 Sep 2012 18:25:30 -0700 Subject: [SciPy-User] Naming Ideas References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On 2012-09-07 23:56:26 +0000, josef.pktd at gmail.com said: > On Fri, Sep 7, 2012 at 7:33 PM, K.-Michael Aye wrote: >> SPylab? ;) >> >> If not that, then I would comment that I don't think the name 'pylab' >> connotes a matlab clone. There are so many xxxlab names >> there. I also was thinking of scilab, but that's taken by some French >> numerical software package. > > I thought scilab is a matlab imitation, or it's advertised as "close > alternative" That might still be true. But /my/ first google hit provides this: http://scilab.org and they have this in their legal notice: Scilab Enterprises Headquarters : 2 rue Jean Rostand, Parc Orsay Université 91893 Orsay Cedex - France So it /IS/ 'some' French numerical software package. ;) Have a good weekend! PS.: No likes for SPylab?? *sniff* > > e.g. first link that google gave > http://ubuntuforums.org/showthread.php?t=594737 > > Josef > I'm NOT a clone http://rlab.sourceforge.net/ > >> >> I would somehow prefer scipy though, as it encompasses everything what >> I have in mind when thinking of doing science with python, but I can't >> comment on the inherent confusions that seem to appear recently >> regarding this. >> >> Michael >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user From fboulogne at sciunto.org Sat Sep 8 01:23:40 2012 From: fboulogne at sciunto.org (François Boulogne) Date: Sat, 08 Sep 2012 07:23:40 +0200 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <504AD65C.70702@sciunto.org> Or SciPyLab? -- François Boulogne. https://www.sciunto.org From Jerome.Kieffer at esrf.fr Sat Sep 8 02:51:20 2012 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Sat, 8 Sep 2012 08:51:20 +0200 Subject: [SciPy-User] testieee NaN arithmetic did not perform per ieee spec with ifort In-Reply-To: References: Message-ID: <20120908085120.9f6be2af.Jerome.Kieffer@esrf.fr> On Fri, 7 Sep 2012 16:16:24 -0700 "K.-Michael Aye" wrote: > Dear all, > > compiling the lapack 3.4.1 with Intel's ifort compiler results in the > test program "testieee" not passing the NaN test: > > "NaN arithmetic did not perform per the ieee spec" > > When compiled with gfortran it passes this test. > Should I worry? Will the ifort compile work so much faster that it's > worth the little worry? (I will be working a lot with data binnings and > griddings, interpolations, fittings...) One should always use the "-mp" option in the Intel compiler suite to have "more precise" math, i.e. IEEE-compliant math. Then one realizes the Intel compilers are not that much faster than gcc (but still a bit). Cheers, -- Jérôme Kieffer Data analysis unit - ESRF PS: -mp could have changed its name since icc 10 From josef.pktd at gmail.com Sat Sep 8 07:38:01 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 8 Sep 2012 07:38:01 -0400 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On Fri, Sep 7, 2012 at 9:25 PM, K.-Michael Aye wrote: > On 2012-09-07 23:56:26 +0000, josef.pktd at gmail.com said: > >> On Fri, Sep 7, 2012 at 7:33 PM, K.-Michael Aye wrote: >>> SPylab? ;) >>> >>> If not that, then I would comment that I don't think the name 'pylab' >>> connotes a matlab clone. There are so many xxxlab names >>> there. I also was thinking of scilab, but that's taken by some French >>> numerical software package. >> >> I thought scilab is a matlab imitation, or it's advertised as "close >> alternative" > > That might still be true. But /my/ first google hit provides this: > > http://scilab.org > and they have this in their legal notice: > > Scilab Enterprises > Headquarters : 2 rue Jean Rostand, Parc Orsay Université > 91893 Orsay Cedex - France > So it /IS/ 'some' French numerical software package. ;) I didn't agree with that, scilab is older than the enterprise version Scilab Enterprises Copyright (c) 2011-2012 (Scilab Enterprises) Copyright (c) 1989-2012 (INRIA) Copyright (c) 1989-2007 (ENPC) and they have a large part of the documentation on matlab script conversion All I'm saying is that for me anything xxxlab sounds like it's a close alternative to matlab and that looks like the wrong advertising strategy to me. "If it's close to matlab, then we better use matlab since they have more development resources." If I were still working in an institute/consulting center that doesn't have problems spending the money to buy matlab licenses. (I installed scilab and octave years ago, but never used them since I had matlab.) The main reason to use python is not that it's a cheaper way of doing the same thing (but we can do more things in an easier way). For a long term strategy, I would prefer a new nice name, like "pandas". (or octave or ubuntu or debian or gretl) With some advertising after a year or two everyone who cares knows what it is. But we are just talking about a distribution standard not a fancy new package, so I don't really care a lot.
>>> >>> Michael >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From helmrp at yahoo.com Sat Sep 8 08:59:15 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sat, 8 Sep 2012 05:59:15 -0700 (PDT) Subject: [SciPy-User] Optimization Test Cases Message-ID: <1347109155.2903.YahooMailNeo@web31810.mail.mud.yahoo.com> On Mon, Sep 3, 2012 at 6:59 PM, denis wrote: > Folks, > I'm looking for real or realistic testcases for Nelder-Mead > minimization of noisy functions, 2d to 10d or so, unconstrained > or box constraints, preferably not sum-of-squares and not Rosenbrock et al. > to wring out a new implementation that has restarts and verbose. > (Would like to discuss ways to restart too > but more ideas than test functions => never converge.) ? Try some maximum liklihood fitting problems, where parameters are chosen to maximize the likelihood function of some statistical distribution function. All you need for the Weibull case?is in the attachment (in Microsoft Word format). Whatever thestatistical distribution you use, I suggest you begin by picking your own values for the parameters (then you'll know what the right answer is). Then generate a sample of values from that distribution/parameter combination. Feed that sample into your optimizatino program, and see if?it gives results close to the parameter values you used to generate the sample. Bob and Paula H -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Weibull Parameter Estimation Via Maximum Likelihood.doc Type: application/msword Size: 116224 bytes Desc: not available URL: From josef.pktd at gmail.com Sat Sep 8 10:22:26 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 8 Sep 2012 10:22:26 -0400 Subject: [SciPy-User] Optimization Test Cases In-Reply-To: <1347109155.2903.YahooMailNeo@web31810.mail.mud.yahoo.com> References: <1347109155.2903.YahooMailNeo@web31810.mail.mud.yahoo.com> Message-ID: On Sat, Sep 8, 2012 at 8:59 AM, The Helmbolds wrote: > On Mon, Sep 3, 2012 at 6:59 PM, denis wrote: > >> Folks, >> I'm looking for real or realistic testcases for Nelder-Mead >> minimization of noisy functions, 2d to 10d or so, unconstrained >> or box constraints, preferably not sum-of-squares and not Rosenbrock et >> al. >> to wring out a new implementation that has restarts and verbose. >> (Would like to discuss ways to restart too >> but more ideas than test functions => never converge.) > > Try some maximum liklihood fitting problems, where parameters are chosen to > maximize the likelihood function of some statistical distribution function. > All you need for the Weibull case is in the attachment (in Microsoft Word > format). > > Whatever thestatistical distribution you use, I suggest you begin by picking > your own values for the parameters (then you'll know what the right answer > is). Then generate a sample of values from that distribution/parameter > combination. Feed that sample into your optimizatino program, and see if it > gives results close to the parameter values you used to generate the sample. I started my reply in a similar direction: statsmodels has many cases with minimizing log likelihood. 
We don't keep a list of where we ran into problems, but the Negative Binomial that Vincent recently coded up has problems with where Nelder-Mead wasn't good, and Powell often went way off. (IIRC) (fmin_ncg with numerical derivatives works well.) I have an example with a 3 component mixture distribution (univariate with 8 parameters) with lot's of local minima, Nelder Mead is sensitive to starting values, but it will take a bit more time to clean it my code. --------- But I got distracted before checking whether I remember the details correctly. we have two kinds of problems with Nelder-Mead in maximum likelihood: one is the usual getting stuck in local minima the other one is that Nelder-Mead stops at something where the gradient is not very close to zero, which then might not even be a real local minimum. (on the other hand it's more robust when the starting values are not good, and it's slow.) Josef > > Bob and Paula H > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From helmrp at yahoo.com Sat Sep 8 13:31:12 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sat, 8 Sep 2012 10:31:12 -0700 (PDT) Subject: [SciPy-User] SciPy-User Digest, Vol 109, Issue 20 In-Reply-To: References: Message-ID: <1347125472.46197.YahooMailNeo@web31810.mail.mud.yahoo.com> Right you are!! But if the test cases are created using distribution parameters chosen for the test, that will at least supply a check on the result. Bob and Paula H >________________________________ > From: "scipy-user-request at scipy.org" >To: scipy-user at scipy.org >Sent: Saturday, September 8, 2012 10:00 AM >Subject: SciPy-User Digest, Vol 109, Issue 20 > >Send SciPy-User mailing list submissions to >??? scipy-user at scipy.org > >To subscribe or unsubscribe via the World Wide Web, visit >??? http://mail.scipy.org/mailman/listinfo/scipy-user >or, via email, send a message with subject or body 'help' to >??? scipy-user-request at scipy.org > >You can reach the person managing the list at >??? scipy-user-owner at scipy.org > >When replying, please edit your Subject line so it is more specific >than "Re: Contents of SciPy-User digest..." > > >Today's Topics: > >? 1. Re: Optimization Test Cases (josef.pktd at gmail.com) > > >---------------------------------------------------------------------- > >Message: 1 >Date: Sat, 8 Sep 2012 10:22:26 -0400 >From: josef.pktd at gmail.com >Subject: Re: [SciPy-User] Optimization Test Cases >To: SciPy Users List >Message-ID: >??? >Content-Type: text/plain; charset=ISO-8859-1 > >On Sat, Sep 8, 2012 at 8:59 AM, The Helmbolds wrote: >> On Mon, Sep 3, 2012 at 6:59 PM, denis wrote: >> >>> Folks, >>> I'm looking for real or realistic testcases for Nelder-Mead >>> minimization of noisy functions, 2d to 10d or so, unconstrained >>> or box constraints, preferably not sum-of-squares and not Rosenbrock et >>> al. >>> to wring out a new implementation that has restarts and verbose. >>> (Would like to discuss ways to restart too >>> but more ideas than test functions => never converge.) >> >> Try some maximum liklihood fitting problems, where parameters are chosen to >> maximize the likelihood function of some statistical distribution function. >> All you need for the Weibull case is in the attachment (in Microsoft Word >> format). >> >> Whatever thestatistical distribution you use, I suggest you begin by picking >> your own values for the parameters (then you'll know what the right answer >> is). 
Then generate a sample of values from that distribution/parameter >> combination. Feed that sample into your optimizatino program, and see if it >> gives results close to the parameter values you used to generate the sample. > >I started my reply in a similar direction: > >statsmodels has many cases with minimizing log likelihood. > >We don't keep a list of where we ran into problems, but the Negative >Binomial that Vincent recently coded up has problems with where >Nelder-Mead wasn't good, and Powell often went way off. (IIRC) >(fmin_ncg with numerical derivatives works well.) > >I have an example with a 3 component mixture distribution (univariate >with 8 parameters) with lot's of local minima, Nelder Mead is >sensitive to starting values, but it will take a bit more time to >clean it my code. >--------- > >But I got distracted before checking whether I remember the details correctly. > >we have two kinds of problems with Nelder-Mead in maximum likelihood: >one is the usual getting stuck in local minima >the other one is that Nelder-Mead stops at something where the >gradient is not very close to zero, which then might not even be a >real local minimum. > >(on the other hand it's more robust when the starting values are not >good, and it's slow.) > >Josef > > >> >> Bob and Paula H >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > >------------------------------ > >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user > > >End of SciPy-User Digest, Vol 109, Issue 20 >******************************************* > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.hirschfeld at gmail.com Sun Sep 9 05:38:25 2012 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Sun, 9 Sep 2012 09:38:25 +0000 (UTC) Subject: [SciPy-User] Naming Ideas References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> <0642DDDE-87CD-4800-9ACD-57592B8ECB42@continuum.io> <504A2E56.40404@gmail.com> Message-ID: Matthew Brett gmail.com> writes: > > Hi, > > On Fri, Sep 7, 2012 at 6:26 PM, Adrien gmail.com> wrote: > > +1 on py4sci (from the peanut gallery) > > I think all of pylab, py4sci, py4science are OK and better than the > alternatives. A little corporate and boring, but, oh well. Now: > > Matthew > I also like those the best of the various options. I'll also throw in ScipyStack if no-one else has done so yet. -Dave From takowl at gmail.com Sun Sep 9 09:25:57 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 9 Sep 2012 14:25:57 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1294F53F-AC8C-42D8-BB4D-E85BF8A2A134@continuum.io> <50488C11.4010101@visualreservoir.com> <0642DDDE-87CD-4800-9ACD-57592B8ECB42@continuum.io> <504A2E56.40404@gmail.com> Message-ID: The owner of the pylab.org domain (Charles) is willing to transfer it to us. I've suggested that the NumFOCUS foundation should hold the domain - does that make sense, or is there a better option? If that is OK, what details do we need to give him to make the transfer? 
Thanks, Thomas From david_baddeley at yahoo.com.au Sun Sep 9 17:33:59 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Sun, 9 Sep 2012 14:33:59 -0700 (PDT) Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <1347226439.51293.YahooMailNeo@web113419.mail.gq1.yahoo.com> My 2c would be for sticking with SciPy as the description - there's already scipy the package, and scipy the community/ecosystem, why not also add scipy the environment? - it worked for linux (kernel/OS/ecosystem). I'm sure this could be accomplished with only minor edits to the scipy.org site. pylab, as a second choice. if a new name is desired, I'll throw LabPy into the mix - it has many of the advantages of pylab, would fit nicely into a NumPy, SciPy, LabPy hierachy, is google-unique, none of the?associated?domains are registered, and is somewhat more distant from matlab.? On a side note, due to the nature of the scipy community (different packages for different tasks), what will constitute scipy-core (or whatever the minimum environment specification ends up being called). I have a feeling that past numpy, scipy, and matplotlib this is going to be difficult to agree on. My scipy-core would, for example, include PIL and pytables, but not pandas - but I'm sure many will dissagree. cheers, David ________________________________ From: "josef.pktd at gmail.com" To: SciPy Users List Sent: Saturday, 8 September 2012 11:38 PM Subject: Re: [SciPy-User] Naming Ideas On Fri, Sep 7, 2012 at 9:25 PM, K.-Michael Aye wrote: > On 2012-09-07 23:56:26 +0000, josef.pktd at gmail.com said: > >> On Fri, Sep 7, 2012 at 7:33 PM, K.-Michael Aye wrote: >>> SPylab ? ;) >>> >>> If not that, than I would comment that I don't think the name 'pylab' >>> connotates towards a matlab clone. there are so many xxxlab names >>> there. I also was thinking of scilab, but that's taken by some French >>> numerical software package. >> >> I thought scilab is a matlab imitation, or it's advertised as "close >> alternative" > > That might still be true. But /my/ first google hit provides this: > > http://scilab.org > and they have this in their legal notice: > > Scilab Enterprises > Headquarters : 2 rue Jean Rostand, Parc Orsay Universit? > 91893 Orsay Cedex - France > So it /IS/ 'some' french numerical software package. ;) I didn't agree with that, scilab is older than the enterprise version Scilab Enterprises Copyright (c) 2011-2012 (Scilab Enterprises) Copyright (c) 1989-2012 (INRIA) Copyright (c) 1989-2007 (ENPC) and they have a large part of the documentation on matlab script conversion All I'm saying is that for me anything xxxlab sounds like it's a close alternative to matlab and that looks like the wrong advertising strategy to me. "If it's close to matlab, then we better use matlab since they have more development resources." If I were still working in an institute/consulting center that doesn't have problems spending the money to buy matlab licenses. (I installed scilab and octave years ago, but never used them since I had matlab.) The main reason to use python is not that it's a cheaper way of doing the same thing (but we can do more things in an easier way). For a long term strategy, I would prefer a new nice name, like "pandas".? (or octave or ubuntu or debian or gretl) With some advertising after a year or two everyone who cares knows what it is. But we are just talking about a distribution standard not a fancy new package, so I don't really care a lot. 
Py4Sci? ? sounds ok to me for this? ? (translating into french might be difficult) Josef shutting up after this. > > Have a good weekend! > > PS.: No likes for SPylab?? *sniff* > >> >> e.g. first link that google gave >> http://ubuntuforums.org/showthread.php?t=594737 >> >> Josef >> I'm NOT a clone http://rlab.sourceforge.net/ >> >>> >>> I would somehow prefer scipy though, as it encompasses everything what >>> I have in mind when thinking of doing science with python, but I can't >>> comment on the inherent confusions that seem to appear recently >>> regarding this. >>> >>> Michael >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Sun Sep 9 17:40:44 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Sun, 9 Sep 2012 14:40:44 -0700 (PDT) Subject: [SciPy-User] SciPy ecosystem In-Reply-To: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> References: <1346965035.53929.YahooMailNeo@web31801.mail.mud.yahoo.com> Message-ID: <1347226844.43536.YahooMailNeo@web113415.mail.gq1.yahoo.com> As has been mentioned previously, the ecosystem is a fairly vague concept - an ad-hoc definition could however be that if a package uses/knows about numpy ndarrays or imports numpy, then it's part of the greater scipy ecosystem. That is not to say that all such packages should be included in a scipy/pylab distribution standard. It would be an interesting project to try and automatically scrape pipi looking for numpy imports, and to build a list of such packages. cheers, David ________________________________ From: The Helmbolds To: User SciPy Sent: Friday, 7 September 2012 8:57 AM Subject: [SciPy-User] SciPy ecosystem Can anyone give me a _precise_ definition of?what the "SciPy ecosystem" is, and is not? For example, let's say I have or have seen some code somewhere. How do I tell quickly whether it is or is not?included in this "SciPy ecosystem"? What characteristics of the code do I use to determine whether it is or is not part of this "ecosytem"? BobH? _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughperkins at gmail.com Sun Sep 9 21:19:19 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Mon, 10 Sep 2012 09:19:19 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? Message-ID: How to do efficiently do dot(dot( A.T, diag(d) ), A ) ? ... where d is of length n, A is n * k, and n >> k (obviously the i,j element of this is just sum_r A_{r,i) * A_{r,j} * d_r , which is in theory relatively fast, at least, a lot faster than two brute-force matrix muliplies! ) From josef.pktd at gmail.com Sun Sep 9 21:28:55 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 9 Sep 2012 21:28:55 -0400 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? 
In-Reply-To: References: Message-ID: On Sun, Sep 9, 2012 at 9:19 PM, Hugh Perkins wrote: > How to do efficiently do dot(dot( A.T, diag(d) ), A ) ? dot( A.T * d , A ) IIRC Josef > > ... where d is of length n, A is n * k, and n >> k > > (obviously the i,j element of this is just sum_r A_{r,i) * A_{r,j} * > d_r , which is in theory relatively fast, at least, a lot faster than > two brute-force matrix muliplies! ) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From daweonline at gmail.com Mon Sep 10 08:42:44 2012 From: daweonline at gmail.com (Davide Cittaro) Date: Mon, 10 Sep 2012 14:42:44 +0200 Subject: [SciPy-User] how to interpret scipy.signal.correlate2d Message-ID: Hi all, I would like to correlate two square matrices (same dimensions) with scipy.signal.correlate2d but I have some doubt on interpretation of results. In particular, I have a matrix A=(M, N) and another b=(m, n), when I apply correlate2d(A, b) I obtain a new matrix c=(M+m, N+n). What is the meaning of each element in the new matrix c? I'm trying to find out which regions of two images are similar. Thanks d --- Davide Cittaro daweonline at gmail.com http://sites.google.com/site/davidecittaro/ From tsyu80 at gmail.com Mon Sep 10 11:45:03 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 10 Sep 2012 11:45:03 -0400 Subject: [SciPy-User] how to interpret scipy.signal.correlate2d In-Reply-To: References: Message-ID: On Mon, Sep 10, 2012 at 8:42 AM, Davide Cittaro wrote: > Hi all, > I would like to correlate two square matrices (same dimensions) with > scipy.signal.correlate2d but I have some doubt on interpretation of > results. In particular, I have a matrix A=(M, N) and another b=(m, n), when > I apply correlate2d(A, b) I obtain a new matrix c=(M+m, N+n). What is the > meaning of each element in the new matrix c? > I'm trying to find out which regions of two images are similar. > > Thanks > > d > --- > Davide Cittaro > daweonline at gmail.com > http://sites.google.com/site/davidecittaro/ > > Hi Davide, The size of the output depends on how you treat the boundaries of the cross-correlation. By default, the boundary "mode" is "full". When "mode=full", correlate2d will return values when *any* part of the two matrices overlap. So, for example, the `c[0, 0]` value in the output would just be the product `A[0, 0] * b[-1, -1]` (only the bottom-right corner of `b` overlaps the top-left corner of `A`). You could also set "mode=valid" and "mode=same", which (I think) gives results (M-m+1, N-n+1) and (M, N), respectively. "mode=valid" only returns values when the `b` doesn't hang over the borders of `A`; "mode=same" uses just enough over hang (or zero-padding) to get an output that's the same size as `A`. BTW, the actual size for "mode=full" is (M+m-1, N+n-1). Also, you may be interested using a normalized cross-correlation, which is implemented in scikits-image: http://scikits-image.org/docs/dev/auto_examples/plot_template.html Cheers, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughperkins at gmail.com Mon Sep 10 12:34:18 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Tue, 11 Sep 2012 00:34:18 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? Message-ID: > > How to do efficiently do dot(dot( A.T, diag(d) ), A ) ? > > dot( A.T * d , A ) This is very good! 
Still, the second multiplication looks like it is doing a full brute-force matrix multiplication: >>> tic(); d = c.T * a; toc() Elapsed time: 0.00560903549194 >>> tic(); e = dot( d, c ); toc() Elapsed time: 0.110434055328 From travis at vaught.net Mon Sep 10 13:06:48 2012 From: travis at vaught.net (Travis Vaught) Date: Mon, 10 Sep 2012 12:06:48 -0500 Subject: [SciPy-User] Fwd: scipy domains References: Message-ID: <1AAE1367-051E-4130-A53E-8392C88F93AE@vaught.net> Forwarding to scipy-user, since I've not had any thoughts out of scipy-dev on this. Maybe this is relevant to the naming discussion. Anyone? Begin forwarded message: > From: Travis Vaught > Subject: scipy domains > Date: September 5, 2012 9:31:49 AM CDT > To: "Scipy-Dev at Scipy. Org" > > All, > > In a fit of nostalgia, I began to renew the domains scipy.com and scipy.net this morning. > > I've no idea whether they have any real use other than brand protection, but I'm more than willing to fund the renewals for as long as I have the means to do so. (Note: even though I'm the listed administrator of these domains, I consider Enthought -- with which I'm no longer formally affiliated -- to be their capable caretaker). > > In the renewal process, the company from which I register the domains 'suggested' that there are other domains I might be interested in registering. Namely, scipy.info, scipy.us, and scipy.biz. Is there any use for these? I'm happy to add them to the bill if someone can make a case for their use/protection. > > Any thoughts are appreciated. > > Best, > > Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Sep 10 13:11:18 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 10 Sep 2012 13:11:18 -0400 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: On Mon, Sep 10, 2012 at 12:34 PM, Hugh Perkins wrote: >> > How to do efficiently do dot(dot( A.T, diag(d) ), A ) ? >> >> dot( A.T * d , A ) > > This is very good! > > Still, the second multiplication looks like it is doing a full > brute-force matrix multiplication: except that the result is symmetric, I don't see any way around that. elementwise multiplication and sum will usually be slower. Josef > >>>> tic(); d = c.T * a; toc() > Elapsed time: 0.00560903549194 >>>> tic(); e = dot( d, c ); toc() > Elapsed time: 0.110434055328 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From david_baddeley at yahoo.com.au Mon Sep 10 18:00:33 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Mon, 10 Sep 2012 15:00:33 -0700 (PDT) Subject: [SciPy-User] how to interpret scipy.signal.correlate2d In-Reply-To: References: Message-ID: <1347314433.50049.YahooMailNeo@web113401.mail.gq1.yahoo.com> If you want to find out which _regions_ of images are similar then standard correlation is the wrong tool - it tells you how similar the entire images are at different relative shifts. The simplest test of regional similarity would be to subtract the mean from each image and then multiply them - regions with a high positive value are similar, those with a negative value are different. I suspect that this might work better if you smoothed the product image with a kernel of a size similar to the expected size of matching regions so you get a local average correlation. 
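To sketch that idea in code - a rough illustration only, with made-up image data and kernel size, using scipy.ndimage for the local averaging:

import numpy as np
from scipy import ndimage

img1 = np.random.rand(256, 256)   # stand-ins for the two images
img2 = np.random.rand(256, 256)

# subtract the mean from each image, then multiply
prod = (img1 - img1.mean()) * (img2 - img2.mean())

# local average correlation: smooth with a kernel roughly the size of the
# regions expected to match; positive values = locally similar,
# negative values = locally different
local_corr = ndimage.uniform_filter(prod, size=16)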
You might also want to do some form of normalisation if you want to then look at significance, or objectively compare results between different sets of images. What correlate2d is doing is taking this product, and then taking the sum over all pixels, for all possible relative shifts between the two images. cheers, David ________________________________ From: Davide Cittaro To: SciPy Users List Sent: Tuesday, 11 September 2012 12:42 AM Subject: [SciPy-User] how to interpret scipy.signal.correlate2d Hi all, I would like to correlate two square matrices (same dimensions) with scipy.signal.correlate2d but I have some doubt on interpretation of results. In particular, I have a matrix A=(M, N) and another b=(m, n), when I apply correlate2d(A, b) I obtain a new matrix c=(M+m, N+n). What is the meaning of each element in the new matrix c? I'm trying to find out which regions of two images are similar. Thanks d --- Davide Cittaro daweonline at gmail.com http://sites.google.com/site/davidecittaro/ _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Mon Sep 10 18:58:11 2012 From: e.antero.tammi at gmail.com (eat) Date: Tue, 11 Sep 2012 01:58:11 +0300 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: Hi, On Mon, Sep 10, 2012 at 7:34 PM, Hugh Perkins wrote: > > > How to do efficiently do dot(dot( A.T, diag(d) ), A ) ? > > > > dot( A.T * d , A ) > > This is very good! > > Still, the second multiplication looks like it is doing a full > brute-force matrix multiplication: > > >>> tic(); d = c.T * a; toc() > Elapsed time: 0.00560903549194 > >>> tic(); e = dot( d, c ); toc() > Elapsed time: 0.110434055328 > It's not so clear what kind of improvements you are looking for. Do you perhaps expect that there exist some magic to ignore half of the computations with dot product, when the result is symmetric? Anyway, here is a simple demonstration to show that dot product is fairly efficient already: In []: m, n= 500000, 5 In []: A, d= randn(m, n), randn(m) In []: %timeit A.T* d 10 loops, best of 3: 34.4 ms per loop In []: mps= m* n/ .0344 # multiplications per second In []: c= A.T* d In []: %timeit dot(c, A) 10 loops, best of 3: 68 ms per loop In []: masps= m* n** 2/ .068 # multiplications and summations per second In []: masps/ mps Out[]: 2.529411764705882 In []: allclose(dot(c, A), dot(A.T* d, A)) Out[]: True In []: sys.version Out[]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]' In []: np.version.version Out[]: '1.6.0' Thus multiplication and summation with dot product is some 2.5 times faster than simple multiplication. My 2 cents, -eat > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Mon Sep 10 20:37:23 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 10 Sep 2012 17:37:23 -0700 Subject: [SciPy-User] Fwd: scipy domains In-Reply-To: <1AAE1367-051E-4130-A53E-8392C88F93AE@vaught.net> References: <1AAE1367-051E-4130-A53E-8392C88F93AE@vaught.net> Message-ID: On Mon, Sep 10, 2012 at 10:06 AM, Travis Vaught wrote: > Forwarding to scipy-user, since I've not had any thoughts out of scipy-dev > on this. 
Thank you for keeping them up to date :) I personally don't think that it's crucial to blanket-own every imaginable scipy.*, as long as org/com/net are owned, I think that's enough. Python is just fine and it doesn't even have the .com domain, which seems to be simply parked. Cheers, f From hughperkins at gmail.com Mon Sep 10 22:43:09 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Tue, 11 Sep 2012 10:43:09 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? Message-ID: > It's not so clear what kind of improvements you are looking for. Do you > perhaps expect that there exist some magic to ignore half of the > computations with dot product, when the result is symmetric? Here's the same test in matlab: >> n = 10000; >> k = 100; >> a = spdiags(rand(n,1),0,n,n); >> c = rand(k,n); >> tic, d = c*a; toc Elapsed time is 0.007769 seconds. >> tic, d = d*c'; toc Elapsed time is 0.007782 seconds. (vs, in scipy: >>> tic(); d = c.T * a; toc() Elapsed time: 0.00560903549194 >>> tic(); e = dot( d, c ); toc() Elapsed time: 0.110434055328 ) From tony at maths.lth.se Tue Sep 11 04:08:50 2012 From: tony at maths.lth.se (Tony Stillfjord) Date: Tue, 11 Sep 2012 10:08:50 +0200 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: I'm not sure what's going on here, but when I run your MATLAB-code on my system (copy-pasted) I get results more in the line of Elapsed time is 0.007203 seconds. Elapsed time is 0.031520 seconds. i.e. much worse than you report but still better than the Python ones. With scipy I get 4.1 ms vs. 60 ms. 'Sparsifying' the code improves things on my end: In [73]: n = 10000 In [74]: k = 100 In [75]: r = rand(n) In [76]: a = scipy.sparse.spdiags(r, 0, n, n) In [77]: c = scipy.sparse.rand(k,n) In [78]: d = c.dot(a) In [79]: e = d.dot(c.T) In [80]: %timeit d = c.dot(a) 1000 loops, best of 3: 1.52 ms per loop In [81]: %timeit e = d.dot(c.T) 1000 loops, best of 3: 1.12 ms per loop Tony On Tue, Sep 11, 2012 at 4:43 AM, Hugh Perkins wrote: > > It's not so clear what kind of improvements you are looking for. Do you > > perhaps expect that there exist some magic to ignore half of the > > computations with dot product, when the result is symmetric? > > Here's the same test in matlab: > > >> n = 10000; > >> k = 100; > >> a = spdiags(rand(n,1),0,n,n); > >> c = rand(k,n); > >> tic, d = c*a; toc > Elapsed time is 0.007769 seconds. > >> tic, d = d*c'; toc > Elapsed time is 0.007782 seconds. > > (vs, in scipy: > > >>> tic(); d = c.T * a; toc() > Elapsed time: 0.00560903549194 > >>> tic(); e = dot( d, c ); toc() > Elapsed time: 0.110434055328 > ) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughperkins at gmail.com Tue Sep 11 05:10:25 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Tue, 11 Sep 2012 17:10:25 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? 
Message-ID:

> 'Sparsifying' the code improves things on my end:

It did for me too :-)

But then I noticed that sparse.rand produces a matrix with no elements:

>>> from scipy import sparse
>>> sparse.rand(2,2)
<2x2 sparse matrix of type ''
with 0 stored elements in COOrdinate format>
>>> sparse.rand(2,2).todense()
matrix([[ 0., 0.],
[ 0., 0.]])

You can make a sparse rand matrix from a dense one:

c = sparse.coo_matrix(scipy.rand(n,k))

This gives worse timings than a non-sparse c for me:

>>> c = rand(n,k);
>>> tic(); d = c.T * a; toc()
Elapsed time: 0.0610790252686
>>> tic(); e = dot( d, c ); toc()
Elapsed time: 0.239707946777

>>> import scipy.sparse as sparse
>>> c = sparse.coo_matrix(scipy.rand(n,k))
>>> tic(); d = c.T * a; toc()
Elapsed time: 0.137360095978
>>> tic(); e = dot( d, c ); toc()
Elapsed time: 1.74925804138

From tony at maths.lth.se Tue Sep 11 05:30:04 2012
From: tony at maths.lth.se (Tony Stillfjord)
Date: Tue, 11 Sep 2012 11:30:04 +0200
Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ?
In-Reply-To: References: Message-ID:

Oh. Of course. Sorry about that, I forgot about the density argument to sparse.rand and didn't check my results. That certainly explains why it was so fast... and since d can't be expected to be sparse the sparse approach is kind of dumb.

Tony

On Tue, Sep 11, 2012 at 11:10 AM, Hugh Perkins wrote:
> > 'Sparsifying' the code improves things on my end:
>
> It did for me too :-)
>
> But then I noticed that sparse.rand produces a matrix with no elements:
>
> >>> from scipy import sparse
> >>> sparse.rand(2,2)
> <2x2 sparse matrix of type ''
> with 0 stored elements in COOrdinate format>
> >>> sparse.rand(2,2).todense()
> matrix([[ 0., 0.],
> [ 0., 0.]])
>
> You can make a sparse rand matrix from a dense one:
>
> c = sparse.coo_matrix(scipy.rand(n,k))
>
> This gives worse timings than a non-sparse c for me:
>
> >>> c = rand(n,k);
> >>> tic(); d = c.T * a; toc()
> Elapsed time: 0.0610790252686
> >>> tic(); e = dot( d, c ); toc()
> Elapsed time: 0.239707946777
>
> >>> import scipy.sparse as sparse
> >>> c = sparse.coo_matrix(scipy.rand(n,k))
> >>> tic(); d = c.T * a; toc()
> Elapsed time: 0.137360095978
> >>> tic(); e = dot( d, c ); toc()
> Elapsed time: 1.74925804138
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andrew.collette at gmail.com Tue Sep 11 11:14:54 2012
From: andrew.collette at gmail.com (Andrew Collette)
Date: Tue, 11 Sep 2012 09:14:54 -0600
Subject: [SciPy-User] ANN: HDF5 for Python 2.1.0 BETA
Message-ID:

Announcing HDF5 for Python (h5py) 2.1 BETA
==========================================

We are proud to announce the availability of HDF5 for Python (h5py) 2.1-beta.

HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a mature scientific software library originally developed at NCSA, designed for the fast, flexible storage of enormous amounts of data.

From a Python programmer's perspective, HDF5 provides a robust way to store data, organized by name in a tree-like fashion. You can create datasets (arrays on disk) hundreds of gigabytes in size, and perform random-access I/O on desired sections. Datasets are organized in a filesystem-like hierarchy using containers called "groups", and accessed using the traditional POSIX /path/to/resource syntax.
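For instance, a minimal session along the lines of the above might look like this (the file and dataset names are invented for the example):

import numpy as np
import h5py

f = h5py.File('example.h5', 'w')          # create a new HDF5 file
f.create_dataset('mygroup/mydata', data=np.arange(100))
dset = f['/mygroup/mydata']               # POSIX-style path access
print dset[10:20]                         # random-access I/O on a slice
print dset.size                           # total number of elements
f.close()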
H5py 2.1-beta is available for Unix and Windows. The beta period will last approximately 2 weeks. Comments and suggestions are welcome, either at the project issue tracker or on the mailing list (h5py at Google Groups).

Downloads, FAQ and bug tracker are available at Google Code:

* Google code site: http://h5py.googlecode.com

Documentation is available at Alfven.org:

* http://h5py.alfven.org

What's new in h5py 2.1
-----------------------

* The HDF5 Dimension Scales API is now available, along with high-level integration with Dataset objects. Thanks to D. Dale for implementing this.
* Unicode scalar strings can now be stored in attributes.
* Dataset objects now expose a .size property giving the total number of elements.
* Many bug fixes.

From pav at iki.fi Tue Sep 11 13:15:27 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 11 Sep 2012 20:15:27 +0300
Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ?
In-Reply-To: References: Message-ID:

Hi,

First a quick note: Please, when posting any benchmark result, always include the full test code for each case.

11.09.2012 05:43, Hugh Perkins kirjoitti:
>> It's not so clear what kind of improvements you are looking for. Do you
>> perhaps expect that there exist some magic to ignore half of the
>> computations with dot product, when the result is symmetric?
>
> Here's the same test in matlab:
[clip]

You are here benchmarking the underlying BLAS libraries. Matlab comes with Intel MKL, whereas your Numpy/Scipy is likely linked with ATLAS. MKL can be faster than ATLAS, but on the other hand you can also link the Numpy/Scipy combination against MKL (provided you buy a license from Intel).

In your Matlab example, the first matrix product is a sparse-dense one, whereas the second is dense-dense (MKL-fast). That the speeds of the two operations happen to coincide is likely a coincidence.

In the Numpy case without sparse matrices, the first product is broadcast-multiplication (faster than a sparse-dense matrix product), whereas the second product is a dense-dense matrix multiplication.

Here are results on one (slowish) machine:

----
import numpy as np
n = 10000
k = 100
a = np.random.rand(n)
c = np.random.rand(k,n)
d = c*a
e = np.dot(d, c.T)
%timeit d = c*a
# -> 100 loops, best of 3: 11 ms per loop
%timeit e = np.dot(d, c.T)
# -> 10 loops, best of 3: 538 ms per loop
----
n = 10000;
k = 100;
a = spdiags(rand(n,1),0,n,n);
c = rand(k,n);
tic, d = c*a; toc
% -> Elapsed time is 0.018380 seconds.
tic, e = d*c'; toc
% -> Elapsed time is 0.138673 seconds.

--
Pauli Virtanen

From hughperkins at gmail.com Tue Sep 11 13:28:57 2012
From: hughperkins at gmail.com (Hugh Perkins)
Date: Wed, 12 Sep 2012 01:28:57 +0800
Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ?
In-Reply-To: References: Message-ID:

On Wed, Sep 12, 2012 at 1:15 AM, Pauli Virtanen wrote:
> You are here benchmarking the underlying BLAS libraries. Matlab comes
> with Intel MKL, whereas your Numpy/Scipy is likely linked with ATLAS.

Hi Pauli,

Ok, that's good information. I think that makes a lot of sense, and sounds plausible to me.

> MKL can be faster than ATLAS, but on the other hand you can also link
> the Numpy/Scipy combination against MKL (provided you buy a license from
> Intel).

Ok, that sounds reasonable to me.

It makes me wonder though. There is an opensource project called 'Eigen', for C++.
It seems to provide good performance for matrix-matrix multiplication, comparable to Intel MKL, and significantly better than ublas http://eigen.tuxfamily.org/index.php?title=Benchmark I'm not sure what the relationship is between ublas and BLAS? I wonder if it might be worth scipy providing an option to link with eigen3? I confess I don't personally have time to do that though. From hughperkins at gmail.com Tue Sep 11 13:36:13 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Wed, 12 Sep 2012 01:36:13 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: (Apparently Eigen can use multithreading too. This is one thing which matlab does automatically by the way: automatically parallelize out certain matrix operations amongst multiple cores. http://eigen.tuxfamily.org/dox/TopicMultiThreading.html ) From brad.malone at gmail.com Tue Sep 11 13:49:26 2012 From: brad.malone at gmail.com (Brad Malone) Date: Tue, 11 Sep 2012 13:49:26 -0400 Subject: [SciPy-User] scipy.interpolate.UnivariateSpline overshoot (with picture) Message-ID: Hi, I am trying to use scipy.interpolate.UnivariateSpline to interpolate a region of a plot I have where there are few data points. See the figure of my data here: http://tinypic.com/r/dq4o3s/6 The red and blue data points are my raw data (the blue points simply being the subset of the red data that I want to obtain an interpolated curve from). I created a sublist of the blue points and then called s=UnivariateSpline(newx,newy,s=1) The blue line is the spline interpolated on a grid of 1000 points between my initial blue raw data point and my final blue raw data point. As you can see, the spline works quite well, except for at the peak of the curve where it overshoots a tad bit. I was curious as to whether there was an easy fix to this (perhaps an option to UnivariateSpline that I am unaware of). If not, any other solutions to correct this overshoot? I appreciate any suggestions you can give. Best, Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Sep 11 14:21:10 2012 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 11 Sep 2012 21:21:10 +0300 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: 11.09.2012 20:28, Hugh Perkins kirjoitti: [clip] > It makes me wonder though. There is an opensource project called > 'Eigen', for C++. > It seems to provide good performance for matrix-matrix multiplication, > comparable to Intel MKL, and significantly better than ublas > http://eigen.tuxfamily.org/index.php?title=Benchmark I'm not sure > what the relationship is between ublas and BLAS? Eigen doesn't provide a BLAS interface, so it would be quite a lot of work to use it. Moreover, it probably derives some of its speed for small matrices from compile-time specialization, which is not available via a BLAS interface. However, OpenBLAS/GotoBLAS could be better than ATLAS, it seems to be also doing well in the benchmarks you linked to: https://github.com/xianyi/OpenBLAS If you are on Linux, you can easily swap the BLAS libraries used, like so: *** OpenBLAS: LD_PRELOAD=/usr/lib/openblas-base/libopenblas.so.0 ipython ... 
In [11]: %timeit e = np.dot(d, c.T) 100 loops, best of 3: 14.8 ms per loop *** ATLAS: LD_PRELOAD=/usr/lib/atlas-base/atlas/libblas.so.3gf ipython In [12]: %timeit e = np.dot(d, c.T) 10 loops, best of 3: 20.8 ms per loop *** Reference BLAS: LD_PRELOAD=/usr/lib/libblas/libblas.so.3gf:/usr/lib/libatlas.so ipython ... In [11]: %timeit e = np.dot(d, c.T) 10 loops, best of 3: 89.3 ms per loop Yet another thing to watch out is possible use of multiple processors at once (although I'm not sure how much that will matter in this particular case). -- Pauli Virtanen From pav at iki.fi Tue Sep 11 14:30:47 2012 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 11 Sep 2012 21:30:47 +0300 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: 11.09.2012 20:15, Pauli Virtanen kirjoitti: > Here are results on one (slowish) machine: Ok, just for completeness: the bad performance for `np.dot` on this machine comes from the fact that Numpy is not linked with any BLAS library, so it falls back to a slow and naive algorithm. (You can check this as follows: if `numpy.dot.__module__` is 'numpy.core._dotblas', you have some BLAS.) > ---- > import numpy as np > n = 10000 > k = 100 > a = np.random.rand(n) > c = np.random.rand(k,n) > d = c*a > e = np.dot(d, c.T) > %timeit d = c*a > # -> 100 loops, best of 3: 11 ms per loop > %timeit e = np.dot(d, c.T) > # -> 10 loops, best of 3: 538 ms per loop > ---- > n = 10000; > k = 100; > a = spdiags(rand(n,1),0,n,n); > c = rand(k,n); > tic, d = c*a; toc > % -> Elapsed time is 0.018380 seconds. > tic, e = d*c'; toc > % -> Elapsed time is 0.138673 seconds. > From hughperkins at gmail.com Tue Sep 11 14:57:11 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Wed, 12 Sep 2012 02:57:11 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: On Wed, Sep 12, 2012 at 2:21 AM, Pauli Virtanen wrote: > However, OpenBLAS/GotoBLAS could be better than ATLAS, it seems to be > also doing well in the benchmarks you linked to: > > If you are on Linux, you can easily swap the BLAS libraries used, like so: Ah great! This is 4 times faster for me: Using atlas: $ python testmult.py Elapsed time: 0.0221469402313 Elapsed time: 0.21438908577 Using goto/openblas: $ python testmult.py Elapsed time: 0.0214130878448 Elapsed time: 0.051687002182 Note that I had to do 'sudo update-alternatives --config liblapack.so.3gf', and pick liblapack. On Ubuntu, I didn't need to modify LD_LIBRARY_PATH: installing openblas-base is enough to half-select openblas, and actually, to break scipy, until the update-alternatives step above is done. http://stackoverflow.com/questions/12249089/how-to-use-numpy-with-openblas-instead-of-atlas-in-ubuntu Apparently openblas seems to support multicore too, though I haven't confirmed that that works in conjunction with scipy yet. From kevin.gullikson.signup at gmail.com Tue Sep 11 14:57:46 2012 From: kevin.gullikson.signup at gmail.com (Kevin Gullikson) Date: Tue, 11 Sep 2012 13:57:46 -0500 Subject: [SciPy-User] scipy.interpolate.UnivariateSpline overshoot (with picture) In-Reply-To: References: Message-ID: Brad, I think playing with the 's' parameter is the way to do this. That data looks like you could just straight interpolate it (s=0). s can be a float too, so maybe something like 0.5 would be better? 
-Kevin On Tue, Sep 11, 2012 at 12:49 PM, Brad Malone wrote: > Hi, I am trying to use scipy.interpolate.UnivariateSpline to interpolate a > region of a plot I have where there are few data points. See the figure of > my data here: http://tinypic.com/r/dq4o3s/6 > > The red and blue data points are my raw data (the blue points simply being > the subset of the red data that I want to obtain an interpolated curve > from). I created a sublist of the blue points and then called > > s=UnivariateSpline(newx,newy,s=1) > > > The blue line is the spline interpolated on a grid of 1000 points between > my initial blue raw data point and my final blue raw data point. As you can > see, the spline works quite well, except for at the peak of the curve where > it overshoots a tad bit. I was curious as to whether there was an easy fix > to this (perhaps an option to UnivariateSpline that I am unaware of). If > not, any other solutions to correct this overshoot? > > I appreciate any suggestions you can give. > > Best, > Brad > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brad.malone at gmail.com Tue Sep 11 15:03:33 2012 From: brad.malone at gmail.com (Brad Malone) Date: Tue, 11 Sep 2012 15:03:33 -0400 Subject: [SciPy-User] scipy.interpolate.UnivariateSpline overshoot (with picture) In-Reply-To: References: Message-ID: A quick follow up: I just tried simply setting s=0, and so not smoothing the spline, and it appeared to work correctly. There is no longer any overshoot. Best, Brad On Tue, Sep 11, 2012 at 1:49 PM, Brad Malone wrote: > Hi, I am trying to use scipy.interpolate.UnivariateSpline to interpolate a > region of a plot I have where there are few data points. See the figure of > my data here: http://tinypic.com/r/dq4o3s/6 > > The red and blue data points are my raw data (the blue points simply being > the subset of the red data that I want to obtain an interpolated curve > from). I created a sublist of the blue points and then called > > s=UnivariateSpline(newx,newy,s=1) > > > The blue line is the spline interpolated on a grid of 1000 points between > my initial blue raw data point and my final blue raw data point. As you can > see, the spline works quite well, except for at the peak of the curve where > it overshoots a tad bit. I was curious as to whether there was an easy fix > to this (perhaps an option to UnivariateSpline that I am unaware of). If > not, any other solutions to correct this overshoot? > > I appreciate any suggestions you can give. > > Best, > Brad > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Sep 11 15:27:03 2012 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 11 Sep 2012 22:27:03 +0300 Subject: [SciPy-User] scipy.interpolate.UnivariateSpline overshoot (with picture) In-Reply-To: References: Message-ID: 11.09.2012 20:49, Brad Malone kirjoitti: > Hi, I am trying to use scipy.interpolate.UnivariateSpline to interpolate > a region of a plot I have where there are few data points. See the > figure of my data here: http://tinypic.com/r/dq4o3s/6 > > The red and blue data points are my raw data (the blue points simply > being the subset of the red data that I want to obtain an interpolated > curve from). 
I created a sublist of the blue points and then called > > s=UnivariateSpline(newx,newy,s=1) You can check the advice on choosing the `s` parameter in the second answer here: http://stackoverflow.com/questions/7906126/spline-representation-with-scipy-interpolate-poor-interpolation-for-low-amplitu > The blue line is the spline interpolated on a grid of 1000 points > between my initial blue raw data point and my final blue raw data point. > As you can see, the spline works quite well, except for at the peak of > the curve where it overshoots a tad bit. I was curious as to whether > there was an easy fix to this (perhaps an option to UnivariateSpline > that I am unaware of). If not, any other solutions to correct this > overshoot? You can use `pchip` instead of `UnivariateSpline` if you want monotonic splines. This doesn't support smoothing, though. -- Pauli Virtanen From takowl at gmail.com Tue Sep 11 16:27:38 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 11 Sep 2012 21:27:38 +0100 Subject: [SciPy-User] Naming Ideas In-Reply-To: <1347226439.51293.YahooMailNeo@web113419.mail.gq1.yahoo.com> References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1347226439.51293.YahooMailNeo@web113419.mail.gq1.yahoo.com> Message-ID: I see someone has recently registered pylab as an organisation on Github. Hopefully that was one of us? Thanks, Thomas From travis at vaught.net Tue Sep 11 17:16:16 2012 From: travis at vaught.net (Travis Vaught) Date: Tue, 11 Sep 2012 16:16:16 -0500 Subject: [SciPy-User] scipy domains In-Reply-To: <1AAE1367-051E-4130-A53E-8392C88F93AE@vaught.net> References: <1AAE1367-051E-4130-A53E-8392C88F93AE@vaught.net> Message-ID: On Sep 10, 2012, at 12:06 PM, Travis Vaught wrote: > Forwarding to scipy-user, since I've not had any thoughts out of scipy-dev on this. > > Maybe this is relevant to the naming discussion. > > Anyone? > > > Begin forwarded message: > >> From: Travis Vaught >> Subject: scipy domains >> Date: September 5, 2012 9:31:49 AM CDT >> To: "Scipy-Dev at Scipy. Org" >> >> All, >> >> In a fit of nostalgia, I began to renew the domains scipy.com and scipy.net this morning. >> >> I've no idea whether they have any real use other than brand protection, but I'm more than willing to fund the renewals for as long as I have the means to do so. (Note: even though I'm the listed administrator of these domains, I consider Enthought -- with which I'm no longer formally affiliated -- to be their capable caretaker). >> >> In the renewal process, the company from which I register the domains 'suggested' that there are other domains I might be interested in registering. Namely, scipy.info, scipy.us, and scipy.biz. Is there any use for these? I'm happy to add them to the bill if someone can make a case for their use/protection. >> >> Any thoughts are appreciated. >> >> Best, >> >> Travis > Alright, I'm feeling a bit neglected on this, so I'll just renew the scipy.net and scipy.com domains and leave it at that. If anyone needs to admin the domains going forward, let me know. Best, Travis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From takowl at gmail.com Tue Sep 11 17:54:09 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 11 Sep 2012 22:54:09 +0100 Subject: [SciPy-User] scipy domains In-Reply-To: References: <1AAE1367-051E-4130-A53E-8392C88F93AE@vaught.net> Message-ID: On 11 September 2012 22:16, Travis Vaught wrote: > Alright, I'm feeling a bit neglected on this, so I'll just renew the > scipy.net and scipy.com domains and leave it at that. If anyone needs to > admin the domains going forward, let me know. In terms of the unified brand we've been discussing, I think we're going to go for Pylab - it looks like we can get pylab.org. I wouldn't worry about getting new domains like .biz and .info, unless someone intends to use them. They always look suspicious to me anyway, and I doubt SciPy is a high-value target for people trying to catch stray page views. Thomas From fperez.net at gmail.com Tue Sep 11 21:19:25 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 11 Sep 2012 18:19:25 -0700 Subject: [SciPy-User] Naming Ideas In-Reply-To: References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com> <1347226439.51293.YahooMailNeo@web113419.mail.gq1.yahoo.com> Message-ID: On Tue, Sep 11, 2012 at 1:27 PM, Thomas Kluyver wrote: > I see someone has recently registered pylab as an organisation on > Github. Hopefully that was one of us? Not me, I'm afraid... I know Jarrod did a lot of the setup for numpy/scipy on github initially, I'll ask him in case he's not reading the list... f From hughperkins at gmail.com Tue Sep 11 22:12:18 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Wed, 12 Sep 2012 10:12:18 +0800 Subject: [SciPy-User] Modify package dependencies, or issue a runtime warning, if a fast BLAS implementation not found? Message-ID: (Was "How to efficiently do dot(dot( A.T, diag(d) ), A ) ?") On Wed, Sep 12, 2012 at 2:30 AM, Pauli Virtanen wrote: > 11.09.2012 20:15, Pauli Virtanen kirjoitti: >> Here are results on one (slowish) machine: > > Ok, just for completeness: the bad performance for `np.dot` on this > machine comes from the fact that Numpy is not linked with any BLAS > library, so it falls back to a slow and naive algorithm. (You can check > this as follows: if `numpy.dot.__module__` is 'numpy.core._dotblas', you > have some BLAS.) Considering the amount of emails and effort that it took in order for one naive user (ie myself) to find this, perhaps it might be worth doing one or more of the following? : - ensure that openblas is a dependency of scipy in the ubuntu packages (and/or debian, redhat etc packages) - issue a check for a reasonably performing blas when these functions are used in scipy, and issue a warning if only reference blas is found By the way, on a base ubuntu, it seems there is actually a blas, but it's only the reference blas, and it looks like reference blas performance is really terrible! It's perhaps reasonable to assume that a majority of first-time scipy users will not know that scipy rubbishy performance is because they are using reference blas, and will just assume that it is a problem inherent to blas, and run off to buy a matlab license instead! From hughperkins at gmail.com Tue Sep 11 22:26:57 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Wed, 12 Sep 2012 10:26:57 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: Openblas does work tons better than atlas and reference. I'm fairly sure that multicore is not really supported though? or supported not very well. 
On a 12-core machine, using openblas: $ python testmult.py Elapsed time: 0.00923895835876 Elapsed time: 0.0242748260498 $ export OMP_NUM_THREADS=12 $ python testmult.py Elapsed time: 0.00923895835876 Elapsed time: 0.0255119800568 $ export OMP_NUM_THREADS=1 $ python testmult.py Elapsed time: 0.00781798362732 Elapsed time: 0.0324649810791 On the same machine, using matlab: >> n = 10000; >> k = 100; >> a = spdiags(rand(n,1),0,n,n); >> c = rand(k,n); >> tic, d = c*a; toc Elapsed time is 0.005955 seconds. >> tic, d = d*c'; toc Elapsed time is 0.006201 seconds. (code for testmult.py: from __future__ import division from scipy import * import scipy import scipy.sparse as sparse from tictoc import tic,toc n = 10000; k = 100; a = sparse.spdiags( rand(n), 0, n, n ) c = rand(n,k); tic(); d = c.T * a; toc() tic(); e = dot( d, c ); toc() tictoc.py: import time start = 0 def tic(): global start start = time.time() def toc(): global start print "Elapsed time: " + str( time.time() - start ) start = time.time() ) From hughperkins at gmail.com Wed Sep 12 00:46:30 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Wed, 12 Sep 2012 12:46:30 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: The good news is: it seems the scipy with openblas is *very* good on a dual-core machine: matlab: >> n = 200000; k = 100; a = spdiags(rand(n,1),0,n,n); c = rand(k,n); tic, d = c*a*c'; toc Elapsed time is 1.322931 seconds. >> tic, d = c*a*c'; toc Elapsed time is 1.308237 seconds. python: From hughperkins at gmail.com Wed Sep 12 00:56:55 2012 From: hughperkins at gmail.com (Hugh Perkins) Date: Wed, 12 Sep 2012 12:56:55 +0800 Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ? In-Reply-To: References: Message-ID: (pressed send by accident too soon, sorry :-( The good news is: if I use larger matrices, which is actually closer to my target scenario, then it seems the scipy with openblas is *very* good on a dual-core machine: matlab: >> n = 200000; k = 100; a = spdiags(rand(n,1),0,n,n); c = rand(k,n); tic, d = c*a*c'; toc tic, d = c*a*c'; toc Elapsed time is 1.322931 seconds. Elapsed time is 1.308237 seconds. python: python testmult.py Elapsed time: 0.408543109894 Elapsed time: 1.15053105354 However, on a 12-core machine, it seems that scipy/openblas is rather less competitive: matlab: >> n = 200000; >> k = 100; >> a = spdiags(rand(n,1),0,n,n); >> c = rand(k,n); >> tic, d = c*a; toc Elapsed time is 0.106401 seconds. >> tic, d = d*c'; toc Elapsed time is 0.098882 seconds. python: $ python testmult2.py Elapsed time: 0.168606996536 Elapsed time: 0.584333896637 python code: from __future__ import division from scipy import * import scipy.sparse as sparse import time start = 0 def tic(): global start start = time.time() def toc(): global start print "Elapsed time: " + str( time.time() - start ) start = time.time() n = 200000; k = 100; a = sparse.spdiags( rand(n), 0, n, n ) c = rand(n,k); tic(); d = c.T * a; toc() tic(); e = dot( d, c ); toc() Note that openblas seems to use multithreading by default: $ export OMP_NUM_THREADS=1 $ python testmult2.py Elapsed time: 0.145273923874 Elapsed time: 0.750689983368 $ unset OMP_NUM_THREADS $ python testmult2.py Elapsed time: 0.168558120728 Elapsed time: 0.578655004501 $ export OMP_NUM_THREADS=12 $ python testmult2.py Elapsed time: 0.168849945068 Elapsed time: 0.591191053391 ... 
but this is apparently not enough to get quite as good as matlab :-(

Looking at the ratio of the times for 12 threads and 1 thread, it looks like the openblas performance doesn't change much with the number of threads.

... but perhaps this is now out of the scope of scipy, and is something I should check with openblas?

From cournape at gmail.com Wed Sep 12 05:28:36 2012
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 12 Sep 2012 10:28:36 +0100
Subject: [SciPy-User] Modify package dependencies, or issue a runtime warning, if a fast BLAS implementation not found?
In-Reply-To: References: Message-ID:

On Wed, Sep 12, 2012 at 3:12 AM, Hugh Perkins wrote:
> (Was "How to efficiently do dot(dot( A.T, diag(d) ), A ) ?")
>
> On Wed, Sep 12, 2012 at 2:30 AM, Pauli Virtanen wrote:
>> 11.09.2012 20:15, Pauli Virtanen kirjoitti:
>>> Here are results on one (slowish) machine:
>>
>> Ok, just for completeness: the bad performance for `np.dot` on this
>> machine comes from the fact that Numpy is not linked with any BLAS
>> library, so it falls back to a slow and naive algorithm. (You can check
>> this as follows: if `numpy.dot.__module__` is 'numpy.core._dotblas', you
>> have some BLAS.)
>
> Considering the amount of emails and effort that it took in order for
> one naive user (ie myself) to find this, perhaps it might be worth
> doing one or more of the following? :
> - ensure that openblas is a dependency of scipy in the ubuntu packages
> (and/or debian, redhat etc packages)

This is out of our hands, and mostly an issue with distributions. Distributions often have different priorities than people looking for best performances, so there is always a tradeoff between distribution/platform support/performance.

> - issue a check for a reasonably performing blas when these functions
> are used in scipy, and issue a warning if only reference blas is found

you can do the check with scipy.show_config(), although I reckon its usage is not very obvious/straightforward.

David

From tjandacw at yahoo.com Tue Sep 11 10:16:41 2012
From: tjandacw at yahoo.com (Tim Williams)
Date: Tue, 11 Sep 2012 07:16:41 -0700 (PDT)
Subject: [SciPy-User] scipy.io.loadmat error for large MAT file on network drive
Message-ID: <86fbd57d-bdb7-44a2-be5a-334bd4a381fa@googlegroups.com>

Hi,

I'm getting an error trying to read a large (100MB) MAT file using scipy.io.loadmat (version 0.10.0b2, Python 2.7).
Here is the message I get:

>>> os.getcwd()
'X:\\Exp003\\DATA\\Porecounts\\Face'
>>> matdata=scipy.io.loadmat(os.path.join(os.getcwd(),'Exp003_025_A009_IR1_01_0_porecount'))
Traceback (most recent call last):
File "", line 1, in
File "C:\Python27\Lib\site-packages\scipy\io\matlab\mio.py", line 175, in loadmat
matfile_dict = MR.get_variables(variable_names)
File "C:\Python27\Lib\site-packages\scipy\io\matlab\mio5.py", line 292, in get_variables
res = self.read_var_array(hdr, process)
File "C:\Python27\Lib\site-packages\scipy\io\matlab\mio5.py", line 255, in read_var_array
return self._matrix_reader.array_from_header(header, process)
File "mio5_utils.pyx", line 624, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:5280)
File "mio5_utils.pyx", line 671, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:4940)
File "mio5_utils.pyx", line 900, in scipy.io.matlab.mio5_utils.VarReader5.read_struct (scipy\io\matlab\mio5_utils.c:7455)
File "mio5_utils.pyx", line 622, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix (scipy\io\matlab\mio5_utils.c:4533)
File "mio5_utils.pyx", line 653, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:4720)
File "mio5_utils.pyx", line 706, in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex (scipy\io\matlab\mio5_utils.c:5469)
File "mio5_utils.pyx", line 424, in scipy.io.matlab.mio5_utils.VarReader5.read_numeric (scipy\io\matlab\mio5_utils.c:3303)
File "mio5_utils.pyx", line 360, in scipy.io.matlab.mio5_utils.VarReader5.read_element (scipy\io\matlab\mio5_utils.c:3032)
File "streams.pyx", line 119, in scipy.io.matlab.streams.cStringStream.read_string (scipy\io\matlab\streams.c:1827)
IOError: could not read bytes

I thought it was a problem with just the file being on a mounted windows share drive, because when I copy it over to a local drive, I can read the file fine. I then found a thread on this list about problems reading large MAT files, so I went inside of MatLab and deleted one of the structure members that had the majority of the file's size. I could then read the smaller file with loadmat, so it seems to be a combination of the file being large, and being on a network.

For now, I can copy all the files I need to process to a local drive, but I'd like to be able to access these files from where they are.

Thanks for any help.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cimrman3 at ntc.zcu.cz Wed Sep 12 17:12:39 2012
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 12 Sep 2012 23:12:39 +0200
Subject: [SciPy-User] ANN: SfePy 2012.3
Message-ID: <5050FAC7.9020603@ntc.zcu.cz>

I am pleased to announce release 2012.3 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages. It is distributed under the new BSD license.

Home page: http://sfepy.org
Downloads, mailing list, wiki: http://code.google.com/p/sfepy/
Git (source) repository, issue tracker: http://github.com/sfepy

Highlights of this release
--------------------------

- several new terms
- material parameters can be defined per region using region names
- base function values can be defined per element
- support for global options

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).
Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order):

Alec Kalinin, Vladimír Lukeš

From davidmenhur at gmail.com Thu Sep 13 08:14:37 2012
From: davidmenhur at gmail.com (Daπid)
Date: Thu, 13 Sep 2012 14:14:37 +0200
Subject: [SciPy-User] Modify package dependencies, or issue a runtime warning, if a fast BLAS implementation not found?
In-Reply-To: References: Message-ID:

On Wed, Sep 12, 2012 at 11:28 AM, David Cournapeau wrote:
> This is out of our hands, and mostly an issue with distributions.
> Distributions often have different priorities than people looking for
> best performances, so there is always a tradeoff between
> distribution/platform support/performance.

Another option, as suggested recently elsewhere, would be to write a nice installation tutorial with both the "quick and dirty" way (plainly install the distributed version) and the optimized way (linking to BLAS, GotoBLAS and ATLAS). As far as I know, there are no good simple explanations on how to build numpy with these links.

From hughperkins at gmail.com Thu Sep 13 11:18:16 2012
From: hughperkins at gmail.com (Hugh Perkins)
Date: Thu, 13 Sep 2012 23:18:16 +0800
Subject: [SciPy-User] Modify package dependencies, or issue a runtime warning, if a fast BLAS implementation not found?
In-Reply-To: References: Message-ID:

Sounds good.

By the way, some additional information: I checked with openblas why the multithreading doesn't seem very good, and apparently the maximum number of threads is hard-coded at compile time to be equal to the number of threads on the build machine.
I think the problem is that the process is too arcane/complicated, and I strongly believe in making this easier instead of documenting all the idiosyncraties [1] cheers, David [1] For example, this WE, I added support so that we can now finally do this for NumPy: bentomaker configure --with-blas-lapack-type=atlas/openblas/accelerate/mkl --with-blas-lapack-dir=where_you_install_the_libs with the options clearly documented in bentomaker configure --help From davidmenhur at gmail.com Thu Sep 13 12:46:55 2012 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Thu, 13 Sep 2012 18:46:55 +0200 Subject: [SciPy-User] Modify package dependencies, or issue a runtime warning, if a fast BLAS implementation not found? In-Reply-To: References: Message-ID: On Thu, Sep 13, 2012 at 6:08 PM, David Cournapeau wrote: > The problem is that there is no such thing as a "quick and dirty" way > to build with optimized libraries Sorry for the lack of accuracy, for "quick and dirty" I was thinking in the basic installation, like the binary downloads or the Ubuntu repositories, where they are not linked to optimized libraries, or else pre-compiled not fully optimized versions are used. This is important for a newcomer, that may come to Python hearing of its simplicity (where its Plug&Play nature is, I think, a main reason for its sucess in science), so he just want to try it out, get started, and keep things easy. Once you actually need computing power, you can think about improving your installation. From sunghwanchoi91 at gmail.com Fri Sep 14 02:43:02 2012 From: sunghwanchoi91 at gmail.com (SungHwan Choi) Date: Fri, 14 Sep 2012 15:43:02 +0900 Subject: [SciPy-User] I have a problem with weave library_dirs option Message-ID: Dear all, I have the problem that the library files for my code are on both /usr/lib/ and some other directory. however, I want to make my code not refer it from /usr/local/ so I put library_dirs="PATH_THAT_I_WANT" option. It looks working well; but when I delete the library file on /usr/lib/ the code raise error. I filnally know that my code refer the library from not directroy that I want but simply /usr/local/ Is there anyone who can explain the priority that weave find the library as defalt? and also how to make my code refer the library from the path that I want? please help me.... I already waste several days. Best, Sunghwan Choi -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Sep 14 15:47:56 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 14 Sep 2012 12:47:56 -0700 Subject: [SciPy-User] [ANN] John Hunter has been awarded the first Distinguished Service Award by the PSF Message-ID: Hi folks, you may have already seen this, but in case you haven't, I'm thrilled to share that the Python Software Foundation has just created its newest and highest distinction, the Distinguished Service Award, and has chosen John as its first recipient: http://pyfound.blogspot.com/2012/09/announcing-2012-distinctive-service.html This is a fitting tribute to his many contributions. Cheers, f From tmp50 at ukr.net Sat Sep 15 06:51:55 2012 From: tmp50 at ukr.net (Dmitrey) Date: Sat, 15 Sep 2012 13:51:55 +0300 Subject: [SciPy-User] [ANN] OpenOpt Suite release 0.42 Message-ID: <69972.1347706315.12104809919653740544@ffe16.ukr.net> Hi all, I'm glad to inform you about new OpenOpt Suite release 0.42 (2012-Sept-15). 
Main changes:

* Some improvements for solver interalg, including handling of categorical variables
* Some parameters for solver gsubg
* Speedup objective function for de and pswarm on FuncDesigner models
* New global (GLP) solver: asa (adaptive simulated annealing)
* Some new classes for network problems: TSP (traveling salesman problem), STAB (maximum graph stable set), MCP (maximum clique problem)
* Improvements for FD XOR (and now it can handle many inputs)
* Solver de has parameter "seed"; also, it now works with PyPy
* Function sign is now available in FuncDesigner
* FuncDesigner interval analysis (and thus solver interalg) now can handle non-monotone splines of 1st order
* FuncDesigner now can handle parameter fixedVars as a Python dict
* Now scipy InterpolatedUnivariateSpline is used in FuncDesigner interpolator() instead of UnivariateSpline. This creates a backward incompatibility - you can no longer pass the smoothing parameter (s) to interpolator.
* SpaceFuncs: add Point weight, Disk, Ball and method contains(), bugfix for importing Sphere, some new examples
* Some improvements (essential speedup, new parameter interpolate for P()) for our (currently commercial) FuncDesigner Stochastic Programming addon
* Some bugfixes

On our website ( http://openopt.org ) you can vote for the most required OpenOpt Suite development direction(s) (the poll has been renewed; previous results are here).

Regards, D.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From d.s.seljebotn at astro.uio.no Mon Sep 17 04:17:29 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Mon, 17 Sep 2012 10:17:29 +0200
Subject: [SciPy-User] Naming Ideas
In-Reply-To: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
References: <1346724896.59623.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID: <5056DC99.3040800@astro.uio.no>

On 09/04/2012 04:14 AM, The Helmbolds wrote:
> Names are extremely important. To steal from Mark Twain, the difference
> between the best name and a poorer name is the difference between the
> lightning and the lightning bug.
> A good name captures attention, stimulates interest, succinctly
> describes the thing named, and has a positive connotation. Accordingly,
> time spent choosing a great name is well worth it.
> Now, SciPy is a perfectly good name. It's well-established and widely
> known and respected. I urge that it be retained and used /exactly/ the
> way it is currently defined.

The whole problem is that the name is NOT defined currently. The SciPy and EuroScipy conferences are emphatically NOT about the SciPy library. And http://scipy.org is not just about the SciPy library, it's some weird hybrid, but leans more towards presenting the numerical Python ecosystem than being about the library (even the Download link puts NumPy first in the list!)

> If there is a need for a name that includes more than just SciPy, then I
> suggest we consider something like the following:

The problem is that the SciPy name has been used for "more than just SciPy" for a long time.

Currently, the discussion about the pylab.org domain is going on in the NumFOCUS list. But are people willing to rename the conferences? Otherwise there will just be more confusion.

Dag Sverre

> *MatSysPy*, pronounced "mat-sis-pie". Short for "mathematical system for
> advanced, large-scale scientific, industrial, medical and engineering
> computation and graphical display." MatSysPy currently includes the
> powerful and versatile NumPy, SciPy, and MatPlotLib modules. Their
> capabilities are constantly being expanded and improved, and other
> modules may be added later. These stand-alone modules are carefully
> designed to take full advantage of Python's popular programming and
> scripting language. This enables users to easily invoke those module
> capabilities best suited to their challenging, large-scale computational
> and graphical display tasks. As of this writing, a unified overall
> package providing a fully integrated user interface to these
> modules' vast range of capabilities is in the planning stage. [or "A unified
> overall package providing a fully integrated user interface to these
> modules' vast range of capabilities is under consideration.", whichever
> is most accurate].
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From juanlu001 at gmail.com Mon Sep 17 13:19:03 2012
From: juanlu001 at gmail.com (Juan Luis Cano Rodríguez)
Date: Mon, 17 Sep 2012 19:19:03 +0200
Subject: [SciPy-User] Inconsistency using fft.fftfreq with real input FFT
Message-ID:

Hi all,

I was playing around with FFTs in NumPy and observed what seems to me an inconsistency with the output of fft.fftfreq. When I transform a general complex valued function using fft.fft, as it is stated in the documentation, A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, for even n. OTOH, for n even fft.fftfreq includes the Nyquist frequency in the negative ones. This is irrelevant in this case, as A[n/2] represents both the positive and the negative Nyquist frequency.

Nevertheless, if I transform a real valued signal and use fft.rfft, the negative frequencies are not calculated and the output is cropped at n/2, so its length is n/2+1. But if I choose to crop the output of fft.fftfreq in a similar way, it appears that all the frequencies are positive but the last one.

>>> fft.fftfreq(10, 0.1)[:10 / 2 + 1]
array([ 0., 1., 2., 3., 4., -5.])

This is uncomfortable because if I want to plot the result of fft.rfft vs fft.fftfreq[:N / 2 + 1] the last frequency is plotted far away on the left, for example. I wonder what the reasons are not to consider the Nyquist frequency positive.

http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftfreq.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Mon Sep 17 22:40:19 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 17 Sep 2012 22:40:19 -0400
Subject: [SciPy-User] skew-normal distribution and Owen's T function
Message-ID:

Triggered by a question on another mailing list, I started to look at whether we can add the skew-normal distribution to scipy.stats
http://en.wikipedia.org/wiki/Skew_normal_distribution

The pdf is easy, but the cdf needs Owen's T function
http://en.wikipedia.org/wiki/Owen%27s_T_function

I had never heard of that one before. Google shows that it is also used for the cdf of a bivariate normal distribution.
http://blog.wolfram.com/2010/10/07/why-you-should-care-about-the-obscure/

Any python and/or BSD compatible versions anywhere?

Josef

From opossumnano at gmail.com Tue Sep 18 05:47:06 2012
From: opossumnano at gmail.com (Tiziano Zito)
Date: Tue, 18 Sep 2012 11:47:06 +0200
Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ?
From opossumnano at gmail.com Tue Sep 18 05:47:06 2012
From: opossumnano at gmail.com (Tiziano Zito)
Date: Tue, 18 Sep 2012 11:47:06 +0200
Subject: [SciPy-User] How to efficiently do dot(dot( A.T, diag(d) ), A ) ?
In-Reply-To:
References:
Message-ID: <20120918094705.GA6153@bio230.biologie.hu-berlin.de>

> > However, OpenBLAS/GotoBLAS could be better than ATLAS, it seems to be
> > also doing well in the benchmarks you linked to:
> >
> > If you are on Linux, you can easily swap the BLAS libraries used, like so:
>
> Ah great! This is 4 times faster for me:
>
> Using atlas:
> $ python testmult.py
> Elapsed time: 0.0221469402313
> Elapsed time: 0.21438908577
>
> Using goto/openblas:
> $ python testmult.py
> Elapsed time: 0.0214130878448
> Elapsed time: 0.051687002182

What ATLAS package are you using? If you are on Debian/Ubuntu, the default libatlas3-base is *not* optimized for your CPU. With ATLAS, most optimizations happen at build time, so to take full advantage of ATLAS you *need* to compile it on the machine where you are going to use it. The binary package you get from Ubuntu/Debian has been compiled on a Debian developer's machine and is not going to be good for yours. You need to follow the (very simple) instructions found in /usr/share/doc/libatlas3-base/README.Debian to compile ATLAS on your CPU, so that ATLAS actually has a chance to optimize. In my experience this can make ATLAS a lot (up to 10x) faster.

For the lazy, here are the instructions:

# cd /tmp
# apt-get source atlas
# apt-get build-dep atlas
# cd atlas-3.8.4
# fakeroot debian/rules custom
# cd ..

this will produce a series of deb packages that you can install with

# dpkg -i *.deb

HTH,
Tiziano

From will at thearete.co.uk Tue Sep 18 10:04:28 2012
From: will at thearete.co.uk (William Furnass)
Date: Tue, 18 Sep 2012 15:04:28 +0100
Subject: [SciPy-User] Vectorizing functions where not known if each arg is (broadcast compatible) scalar or ndarray
Message-ID:

Hi,

I'm looking for a simple way to create a vectorized version of the following function that flexibly allows one or more of the inputs to be either an ndarray of constant length l or a scalar.

With simpler functions (such as the 'reynolds' function mentioned below) the np.vectorize function does the job, but I'm not sure how to vectorize the following, given the conditionals involving Re (which could be a vector or a scalar): e.g. how should the case for Re > 4000 be calculated, and how should D or k_s be indexed, without introducing quite a number of checks on whether Re, D or k_s are ndarrays or scalar floats?

Also, is there a numpy function that allows one to check whether a number of scalars and ndarrays are broadcast compatible?

Cheers,

Will

def friction_factor(D, Q, k_s, T=10.0, den=1000.0):
    Re = reynolds(D, Q, T, den)
    if Re == 0:
        f = 0
    elif Re < 2000:
        f = 64 / Re
    elif 2000 <= Re < 4000:
        y3 = -0.86859 * np.log((k_s / (3.7 * D)) + (5.74 / (4000**0.9)))
        y2 = (k_s / (3.7 * D)) + (5.74 / (Re**0.9))
        fa = y3**-2
        fb = fa * (2 - (0.00514215 / (y2*y3)))
        r = Re / 2000.
        x4 = r * (0.032 - (3. * fa) + (0.5 * fb))
        x3 = -0.128 + (13. * fa) - (2.0 * fb)
        x2 = 0.128 - (17. * fa) + (2.5 * fb)
        x1 = (7 * fa) - fb
        f = x1 + r * (x2 + r * (x3 + x4))
    elif Re >= 4000:
        f = 0.25 / (np.log10((k_s / (3.7 * D)) + (5.74 / (Re**0.9))))**2
    return f
friction_factor = np.vectorize(friction_factor)
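One way to sidestep per-element type checks entirely - in the spirit of the broadcast_arrays() suggestion in Yosef Meller's reply later in this digest - is to broadcast all inputs up front, evaluate every flow regime on the full arrays, and pick results with np.select. A rough sketch only: it assumes reynolds() itself broadcasts, and since all branches are evaluated everywhere, divide-by-zero noise is silenced:

import numpy as np

def friction_factor_vec(D, Q, k_s, T=10.0, den=1000.0):
    # assumes reynolds() accepts and returns broadcastable inputs
    Re, D, k_s = np.broadcast_arrays(np.asarray(reynolds(D, Q, T, den), dtype=float), D, k_s)
    with np.errstate(divide='ignore', invalid='ignore'):
        laminar = 64.0 / Re
        # transitional regime, same coefficients as the scalar version
        y3 = -0.86859 * np.log((k_s / (3.7 * D)) + (5.74 / (4000**0.9)))
        y2 = (k_s / (3.7 * D)) + (5.74 / (Re**0.9))
        fa = y3**-2
        fb = fa * (2 - (0.00514215 / (y2 * y3)))
        r = Re / 2000.
        x4 = r * (0.032 - (3. * fa) + (0.5 * fb))
        x3 = -0.128 + (13. * fa) - (2.0 * fb)
        x2 = 0.128 - (17. * fa) + (2.5 * fb)
        x1 = (7 * fa) - fb
        transitional = x1 + r * (x2 + r * (x3 + x4))
        turbulent = 0.25 / (np.log10((k_s / (3.7 * D)) + (5.74 / (Re**0.9))))**2
    # np.select takes the first condition that matches, so the order
    # encodes the regime boundaries; everything else is Re >= 4000
    return np.select([Re == 0, Re < 2000, Re < 4000],
                     [0.0, laminar, transitional],
                     default=turbulent)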
From takowl at gmail.com Tue Sep 18 18:26:06 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Tue, 18 Sep 2012 23:26:06 +0100
Subject: [SciPy-User] Pylab - standard packages
Message-ID:

Hi again,

It now looks like we're going to use Pylab as the name for the 'scipy stack'. Now I want to turn to the question of what it should include. The idea is that Python distributions can call themselves Pylab compliant if they provide at least a defined set of packages. Also, I hope that it will become a metapackage in Linux distributions, so that users can 'apt-get install pylab' or similar.

As a minimum, I assume we should require Python, Numpy, Scipy and Matplotlib. Does anyone disagree?

I also think we should specify minimum versions. The standard itself will be versioned, so we can raise these over time. For Python, I intend the requirement to be 2.x >= 2.6 or 3.x >= 3.2. What are sensible minimum versions of Numpy, Scipy and Matplotlib?

Should the standard include an interface? IPython, a more traditional IDE, or both? On the one hand, specifying a standard interface means users can share experience better and exchange richer files, like IPython notebooks or IDE project structures. Matlab, for instance, wins praise for including a powerful IDE by default. On the other hand, we've got several interesting UI efforts still taking shape - IPython notebooks, Spyder, IEP - and declaring one standard would make the alternatives less visible. I'm honestly torn on this - I can see good arguments for and against.

Other scientific packages we might consider include pandas (which provides functionality similar to core parts of R), Sympy, Cython, various scikits projects, h5py, and doubtless many others I haven't thought of. We could also specify general-purpose Python packages such as requests, or a GUI toolkit.

On the NumFOCUS list, Chris Kees raised the idea that there could be two or more levels of packages, e.g. 'core' and 'recommended'. I don't think we should add that kind of complexity in the first version, but keep in mind that we could differentiate it later.

Finally, I mean the standard to specify that the distribution must offer a way of installing arbitrary extra Python packages into it, so the standard shouldn't try to include everything you might need for scientific computing. The aim is to offer a key set of tools so you can get started without having to add things. "Get started with what?" is the key question we have to answer. In my field, for example, statistical tests are fundamental, while symbolic maths is hardly used.

All your opinions are welcome,

Thomas

From matthew.brett at gmail.com Tue Sep 18 18:44:07 2012
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 18 Sep 2012 23:44:07 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

Hi,

On Tue, Sep 18, 2012 at 11:26 PM, Thomas Kluyver wrote:
> It now looks like we're going to use Pylab as the name for the 'scipy
> stack'. Now I want to turn to the question of what it should include.
> [...]
> I also think we should specify minimum versions. The standard itself
> will be versioned, so we can raise these over time. For Python, I
> intend the requirement to be 2.x >= 2.6 or 3.x >= 3.2. What are
> sensible minimum versions of Numpy, Scipy and Matplotlib

I think I don't understand any more what is proposed. You mean pylab would be a sort of seal of conformity to the "pylab" standard? So you know you have pylab iff you have Python > .... etc?
Then distributions like EPD and Python XY would be pylab-certified in some sense?

> Should the standard include an interface? IPython, a more traditional
> IDE, or both? [...] I'm honestly torn on this - I can see
> good arguments for and against.

I can only say that I invariably install all of numpy, scipy, matplotlib and ipython. Is there enough agreement on the virtues of the more IDE-like GUIs to choose one? Is there a good reason not to include IPython? I have a hunch that IPython + notebook may well become standard soon, requiring some more dependencies such as zeromq, and providing that would be a significant benefit.

> Other scientific packages we might consider include pandas (which
> provides functionality similar to core parts of R), Sympy, Cython,
> various scikits projects, h5py, and doubtless many others I haven't
> thought of. We could also specify general purpose Python packages such
> as requests, or a GUI toolkit.

Cython seems to me a strong contender there - I would guess a high proportion of numerical Python developers who also use scipy will have had some need for Cython. Is there any data you know of to address that? I would personally love to see an hdf5 library included - we really need a good, fast, standard storage protocol that we can rely on being available on the user's system.

Best,

Matthew

From david_baddeley at yahoo.com.au Tue Sep 18 19:02:32 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 18 Sep 2012 16:02:32 -0700 (PDT)
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID: <1348009352.42172.YahooMailNeo@web113403.mail.gq1.yahoo.com>

I guess the current main 'Pylab compatible' distributions are EPD and Python(x,y). It might make sense to base the standard around their core components.

I'd definitely include Cython in the core package list, and would consider specifying that a C compiler should be bundled on platforms which don't provide one (i.e. Windows), and that Python (and numpy) headers be provided so that the distribution is build-ready (especially as both easy_install and pip install from source rather than downloading compiled packages).

I'd suggest making any specification of IDE packages relatively loose and say that they should be there, but leave the choice of package to the implementer, e.g.:

"A Pylab distribution should include:
- A GUI framework which is compatible with Matplotlib (e.g. wx or Qt - I think it makes sense to push people towards these 2 frameworks rather than, e.g., GTK or Tk)
- An editor which supports python syntax highlighting and ideally treats tab as 4 spaces (a minimalist install might fall back on IDLE)
- An interactive shell (e.g. IPython)"

________________________________
From: Thomas Kluyver
To: SciPy Users List
Sent: Wednesday, 19 September 2012 10:26 AM
Subject: [SciPy-User] Pylab - standard packages

[...]

From takowl at gmail.com Tue Sep 18 19:03:42 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Wed, 19 Sep 2012 00:03:42 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 18 September 2012 23:44, Matthew Brett wrote:
> I think I don't understand any more what is proposed. You mean pylab
> would be a sort of seal of conformity to the "pylab" standard? So you
> know you have pylab iff you have Python > .... etc? Then
> distributions like EPD and Python XY would be pylab-certified in some
> sense?

I see my proposal as having three parts: the name, the website (which newcomers find when they search for the name), and the standard, which distributions like EPD and Python(x,y) will, I hope, conform to. These distributions are then ways to install pylab - or a dedicated user can install the pieces individually and still end up with pylab. I'm not trying to make a canonical scientific Python distribution - that's a much bigger challenge.

> Is there a good reason not to include Ipython?
Perhaps other interfaces work better for some use cases, or some individuals? Python(x,y) favours Spyder, for instance, and ships an old version of IPython to maintain compatibility. If we included IPython in the standard, Python(x,y) would probably not meet the minimum specified version, and would not be Pylab compliant for the time being. With my IPython hat on, I hasten to add that we're working with Spyder developers to improve things, so that hopefully a future version of Python(x,y) can include an up-to-date version of IPython.

Thanks,

Thomas

From matthew.brett at gmail.com Tue Sep 18 19:16:02 2012
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 19 Sep 2012 00:16:02 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

Hi,

On Wed, Sep 19, 2012 at 12:03 AM, Thomas Kluyver wrote:
> I see my proposal as having three parts: the name, the website (which
> newcomers find when they search for the name), and the standard, which
> distributions like EPD and Python(x,y) will, I hope, conform to. These
> distributions are then ways to install pylab - or a dedicated user can
> install the pieces individually, and still end up with pylab. I'm not
> trying to make a canonical scientific Python distribution - that's a
> much bigger challenge.

OK - but the website and the name point us to the standard, and thence to some installers for that standard? You are not proposing any new installers, but that the standard basically says something like:

"""
You have Pylab if you have python >= 2.6, scipy > 0.9.0, ...
Your options for installing these are:
Windows: Python(x,y) or EPD, or individual download and install
OSX: EPD, or individual download and install
Linux: apt-get / yum / whatever install python-pylab
"""

So, if we are adding packages to this collection, we are more or less lobbying Python(x,y) or EPD for those changes?

Best,

Matthew

From takowl at gmail.com Tue Sep 18 19:51:18 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Wed, 19 Sep 2012 00:51:18 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 19 September 2012 00:16, Matthew Brett wrote:
> OK - but the website and the name point us to the standard, and thence
> to some installers for that standard? You are not proposing any new
> installers, but that the standard basically says something like:

Yes, that's the idea.

> So, if we are adding packages to this collection, we are more or less
> lobbying Python(x,y) or EPD for those changes?

I expect that the packages we're likely to specify are already included in those distributions. We might end up lobbying for additions to EPD Free, but I think that its description as 'Scientific Python essentials' fits with what we're trying to achieve, so hopefully Enthought are open to dialogue. This page shows what EPD Free currently includes (packages with tick marks): http://www.enthought.com/products/epdlibraries.php

David:
> consider specifying that a C compiler should be bundled on platforms which don't provide one (i.e. Windows),

Good point. Can someone more Windows-savvy suggest how practical this is?
I assume the VS compiler can't be redistributed, so is mingw sufficiently lightweight to expect all distributions to include it? Many packages with compiled components provide executable installers for Windows.

Also, I don't think Macs actually provide a C compiler - when I had to test stuff on a Mac, I had to install Xcode before I could do anything. Will distributions need to include a compiler on the Mac as well, or would the wording of the definition exclude that? So I'm leaning towards not requiring a compiler, but I could yet be persuaded.

Thanks,

Thomas

From travis at continuum.io Tue Sep 18 20:03:26 2012
From: travis at continuum.io (Travis Oliphant)
Date: Tue, 18 Sep 2012 19:03:26 -0500
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID: <34A5C03D-DCA5-4D65-8B89-5F653D647BC6@continuum.io>

On Sep 18, 2012, at 6:51 PM, Thomas Kluyver wrote:
> I expect that the packages we're likely to specify are already
> included in those distributions. We might end up lobbying for
> additions to EPD Free, [...] This page shows what EPD
> Free currently includes (packages with tick marks):
> http://www.enthought.com/products/epdlibraries.php

Don't forget about the newcomer to the freely-available binary distributions, Anaconda CE, which is a reference implementation as well and includes all the packages we are discussing: http://www.continuum.io/downloads.html

But, in general, the point of this conversation is not to lobby anyone to change their tools. It is to define what the community considers to be a reference implementation (however it is obtained) with version numbers of specific packages.

> Good point. Can someone more Windows-savvy suggest how practical this
> is? I assume the VS compiler can't be redistributed, so is mingw
> sufficiently lightweight to expect all distributions to include it?
> Many packages with compiled components provide executable installers
> for Windows.

It's easy to do. The next version of Anaconda CE is going to contain a C compiler for Windows, for example. You can't really include Cython in the standard without a C compiler. This, to me, makes the case for pylab-full (i.e. you want to have some definition that includes Cython, and you need a compiler to put Cython in there).

> Also, I don't think Macs actually provide a C compiler - when I had to
> test stuff on a Mac, I had to install Xcode before I could do
> anything. Will distributions need to include a compiler on the Mac as
> well, or would the wording of the definition exclude that?

The best thing to do is just encourage people to install XCode, I think.

-Travis
From takowl at gmail.com Tue Sep 18 20:15:56 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Wed, 19 Sep 2012 01:15:56 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <34A5C03D-DCA5-4D65-8B89-5F653D647BC6@continuum.io>
References: <34A5C03D-DCA5-4D65-8B89-5F653D647BC6@continuum.io>
Message-ID:

On 19 September 2012 01:03, Travis Oliphant wrote:
> The next version of Anaconda CE is going to contain a C-compiler for
> Windows, for example.

Thanks for this info. Do you have a list of all the packages in Anaconda CE, for comparison with the lists for EPD [1] and Python(x,y) [2]?

[1] http://www.enthought.com/products/epdlibraries.php
[2] http://code.google.com/p/pythonxy/wiki/StandardPlugins

> The best thing to do is just encourage people to install XCode, I think.

So would the standard specifically treat Windows differently with respect to bundling a C compiler, or would the wording somehow differentiate platforms that don't provide a C compiler, but you've probably already installed one?

Best wishes,

Thomas

From jason-sage at creativetrax.com Tue Sep 18 20:22:45 2012
From: jason-sage at creativetrax.com (Jason Grout)
Date: Tue, 18 Sep 2012 19:22:45 -0500
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <1348009352.42172.YahooMailNeo@web113403.mail.gq1.yahoo.com>
References: <1348009352.42172.YahooMailNeo@web113403.mail.gq1.yahoo.com>
Message-ID: <50591055.5080705@creativetrax.com>

On 9/18/12 6:02 PM, David Baddeley wrote:
> I guess the current main 'Pylab compatible' distributions are EPD and
> Python(x,y).

Sage probably also qualifies as a "scientific python distribution" (as well as a lot more...). The package list is here: http://sagemath.org/packages/standard/

(and I hasten to add that we're just about finished with the review needed for us to upgrade to IPython 0.13...)

Thanks,

Jason

From travis at continuum.io Tue Sep 18 20:24:32 2012
From: travis at continuum.io (Travis Oliphant)
Date: Tue, 18 Sep 2012 19:24:32 -0500
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <34A5C03D-DCA5-4D65-8B89-5F653D647BC6@continuum.io>
Message-ID: <35DB8498-0B55-4ACF-AACC-8CD88E8A2608@continuum.io>

On Sep 18, 2012, at 7:15 PM, Thomas Kluyver wrote:
> Thanks for this info. Do you have a list of all the packages in
> Anaconda CE, for comparison with the lists for EPD [1] and Python(x,y)
> [2]?

https://store.continuum.io/cshop/anaconda

At the bottom of the page there is a link to this pop-up window.
anaconda launcher
bitarray 0.8.0
bitey
cython 0.16
dateutil 1.5
disco 0.4.2 (Linux only)
erlang (Linux only)
flask 0.9
gevent 0.13.7
gevent-websocket 0.3.6
gevent_zeromq 0.2.5
greenlet 0.4.0
h5py 2.0.1
hdf5 1.8.9
PIL 1.1.7
ipython 0.13
jinja2 2.6
libevent 2.0.20
llvm 3.1
llvmpy 0.8.2.dev
matplotlib 1.1.1
mpi4py 1.3
mpich2 1.4.1p1
networkx 1.7
nose 1.1.2
numba 0.1.dev
numexpr 2.0.1
numpy 1.7.rc1
opencv 2.4.2
openssl 1.0.1c
pandas 0.8.1
pip 1.1
pixman 0.26.2
py2cairo 1.10.0
pycurl 7.19.0
pygments 1.5
pysal 1.4.0
pysam 0.6
pytables 2.4.0
python 2.7.3
pytz 2012d
pyyaml 3.10
pyzmq 2.2.0
redis 2.4.15 (Linux only)
redis-py 2.4.13
requests 0.13.5
scikit-learn 0.11
scikits-image 0.6.1
scipy 0.11.0rc2
sqlalchemy 0.7.8
sqlite 3.7.13
statsmodels 0.4.3
sympy 0.7.1
theano 0.5.0
tornado 2.3
werkzeug 0.8.3

But this is not complete, because the Windows version does include spyder and pyside.

>> The best thing to do is just encourage people to install XCode, I think.
> So would the standard specifically treat Windows differently with
> respect to bundling a C compiler, or would the wording somehow
> differentiate platforms that don't provide a C compiler, but you've
> probably already installed one?

I think the full standard would assume Cython, but then implementations would get to pick how they supported Cython (i.e. requiring Xcode install on Mac, vs bundling with Windows).

-Travis

From josef.pktd at gmail.com Tue Sep 18 20:50:09 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 18 Sep 2012 20:50:09 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <35DB8498-0B55-4ACF-AACC-8CD88E8A2608@continuum.io>
References: <34A5C03D-DCA5-4D65-8B89-5F653D647BC6@continuum.io> <35DB8498-0B55-4ACF-AACC-8CD88E8A2608@continuum.io>
Message-ID:

On Tue, Sep 18, 2012 at 8:24 PM, Travis Oliphant wrote:
> The next version of Anaconda CE is going to contain a C-compiler for
> Windows, for example.

What C compiler do you get for 64-bit?

I don't know what the audience is for pylab, but if I want to run 64-bit Python on Windows then I might not need mingw (if there are still problems with the 64-bit version), and the only alternative is the Microsoft compilers.

My 64-bit Python 3.2 is all Gohlke binaries, without any compiler available yet. One of my 32-bit Pythons is Python(x,y) with a bundled C compiler.

So, a C compiler looks optional to me.

Josef

> Thanks for this info. Do you have a list of all the packages in
> Anaconda CE, for comparison with the lists for EPD [1] and Python(x,y)
> [2]?
>
> https://store.continuum.io/cshop/anaconda
>
> At the bottom of the page there is a link to this pop-up window.
> [...]

From cgohlke at uci.edu Wed Sep 19 02:44:35 2012
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Tue, 18 Sep 2012 23:44:35 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID: <505969D3.4030801@uci.edu>

On 9/18/2012 3:26 PM, Thomas Kluyver wrote:
> It now looks like we're going to use Pylab as the name for the 'scipy
> stack'. Now I want to turn to the question of what it should include.
> [...]
> All your opinions are welcome,
> Thomas

Hello,

the recent poll "Scientific Python packages: Popularity check" by Pierre Raybaut [1] might be of interest.

In addition to EPD, Pythonxy, AnacondaCE, and Sage, also consider WinPython [2] and ActivePython [3] as potential "Pylab compatible" distributions.

[1] http://www.doodle.com/rzssq2dbnus4a34r
[2] http://code.google.com/p/winpython/wiki/PackageIndex
[3] http://www.activestate.com/activepython/python-financial-scientific-modules

Christoph

From a.klein at science-applied.nl Wed Sep 19 04:25:27 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Wed, 19 Sep 2012 10:25:27 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <34A5C03D-DCA5-4D65-8B89-5F653D647BC6@continuum.io> <35DB8498-0B55-4ACF-AACC-8CD88E8A2608@continuum.io>
Message-ID:

On 19 September 2012 02:50, wrote:
> What C compiler do you get for 64-bit?

There is the mingw-w64 project: http://mingw-w64.sourceforge.net/. I played with that a while ago. I got it working (when I had the right build), at least for compiling some of my Cython code. Both 32-bit and 64-bit mingw compilers can be relatively easily shipped with a distribution. They're 120 MB and 240-ish MB (on disk), respectively.

> The best thing to do is just encourage people to install XCode, I think.

I believe there are now easier ways to install a C compiler on a Mac, but I cannot find the link right now.

> implementations would get to pick how they supported Cython (i.e. requiring
> Xcode install on Mac, vs bundling with Windows).

I'm personally also in favor of including Cython. I agree that a distribution does not need to come with a C compiler, as long as it comes with proper instructions on how to get one.

Idea: maybe on Windows we can together make a C-compiler-installer-for-pylab that Windows distributions can share.
regards,
Almar

From a.klein at science-applied.nl Wed Sep 19 04:49:53 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Wed, 19 Sep 2012 10:49:53 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

> I also think we should specify minimum versions.

I think Fernando proposed in the NumFOCUS thread to specify *exact* versions, so that you can exactly reproduce scientific results. I liked that idea. Having said that, I think it's probably best to use minimum versions first and work on exact versions once we've got the other parts of pylab straightened out...

> Should the standard include an interface? IPython, a more traditional
> IDE, or both? [...] I'm honestly torn on this - I can see
> good arguments for and against.

I think the goal should be to share functionality. Experience is much more subjective. By specifying a single interface you're making it harder for users to use other/new interfaces. A lot of interesting stuff is still going on in these areas, and I don't think there is one interface that is as widely used as numpy is for numerics. You have a good point with sharing of richer files, but maybe we should try keeping it easy to share code between interfaces. In any case, I don't think it justifies selecting one interface in the standard.

The nicety of shipping an IDE with the framework (like Matlab) is something that should be left to the distributions IMO. Python(x,y) does that, and we're working on something similar based on IEP.

One thing that I've liked from the start about this pylab idea is that it makes distributions "siblings" under one parent (i.e. the pylab standard). It has a unifying effect, and I'm afraid that we lose that if we select one interface.

> On the NumFOCUS list, Chris Kees raised the idea that there could be
> two or more levels of packages, e.g. 'core' and 'recommended'. [...]

I liked the idea (I can't remember where I've read/heard it) that different groups can maintain their own page with packages that are relevant to their field. As someone working in medical imaging I would suggest skimage and pydicom should be in the standard, but as a biologist you may not even know what pydicom is :) So it's kind of two levels, but the second level is partitioned into different topics.

regards,
Almar

--
Almar Klein, PhD
Science Applied
phone: +31 6 19268652
e-mail: a.klein at science-applied.nl
From takowl at gmail.com Wed Sep 19 05:29:45 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Wed, 19 Sep 2012 10:29:45 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <505969D3.4030801@uci.edu>
References: <505969D3.4030801@uci.edu>
Message-ID:

On 19 September 2012 07:44, Christoph Gohlke wrote:
> the recent poll "Scientific Python packages: Popularity check" by Pierre
> Raybaut [1] might be of interest.

Thanks Christoph, that is interesting. After numpy, scipy & matplotlib (each around 270 votes), other popular modules include PyQt (162), PIL (118), SymPy (112) and ETS (102). N.B. Cython and IPython weren't included in the poll.

Almar:
> I liked the idea (can't remember where I've read/heard it) that different
> groups can maintain their own page with packages that are relevant to their
> field. As someone working in medical imaging I would suggest skimage and
> pydicom should be in the standard, but as a biologist you may not even know
> what pydicom is :) So it's kind of two levels, but the second level is
> partitioned in different topics.

I quite like this idea as well, although it's not without its own troubles. The different fields aren't neatly delineated, so it will be rather subjective who should find what package useful. And it's probably too much complexity for distributions to provide multiple bundled pylab profiles, so users would still be installing those extra packages themselves.

My issue with the simpler multiple-levels (core vs recommended/full) idea is that it's not clear who the target audience is. Given that this is a user-facing name, I think each level needs a clear "get this version if you ..." story. If we decide that Cython is important and it should be in the 'full' level, who only needs the 'core' set? The answer can't include 'if you need to interface with C libraries', because newcomers won't know what they need in that kind of detail.

Best wishes,

Thomas

From a.klein at science-applied.nl Wed Sep 19 05:49:16 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Wed, 19 Sep 2012 11:49:16 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <505969D3.4030801@uci.edu>
Message-ID:

On 19 September 2012 11:29, Thomas Kluyver wrote:
> I quite like this idea as well, although it's not without its own
> troubles. The different fields aren't neatly delineated, so it will be
> rather subjective who should find what package useful. And it's
> probably too much complexity for distributions to provide multiple
> bundled pylab profiles, so users would still be installing those extra
> packages themselves.

I agree.
One thing that might happen is that some fields get their own distribution. Or maybe we will finally get packaging right and it won't be a problem...

One argument in favor of grouping advanced packages in topics is that it will scale better. As more and more fields start using Python, and more and more functionality becomes available, a full version of pylab would become huge, even though most users only need a specific set of packages.

> My issue with the simpler multiple-levels (core vs recommended/full)
> idea is that it's not clear who the target audience is. [...]
> because newcomers won't know what they need in that kind of detail.

Good point. Plus it adds confusion. On the other hand, a user can still upgrade later if he finds out he needs to.

Almar

From jh at physics.ucf.edu Wed Sep 19 06:21:03 2012
From: jh at physics.ucf.edu (Joe Harrington)
Date: Wed, 19 Sep 2012 12:21:03 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: (scipy-user-request@scipy.org)
Message-ID:

At first the idea of a "Pylab" standard appealed to me (it still does, but with caveats, which see below). Like everyone, my lab has a standard Python stack on all our machines, and it's a pretty long list. It would be nice to cut it in half, and let someone else worry about the integration issues. So I tried to make my list of what to include that wasn't specific to my discipline.

...And I discovered that this idea is a wickedly slippery slope. While I have my own reasons for choosing certain packages over others, the community has organized to encourage lots of topical subpackages, from broad-use packages like matplotlib to specialist things like astropy. Even within an area, such as plotting, there are options, and competition is critical to get great packages.

BUT if your package is in Pylab(TM) and your competition isn't, you have a *big* advantage. You'll get installed by every Linux distro, and the computational Python distros will want to be in compliance as well, so you'll show up there. Regardless of intent, if the community makes this a standard and a distro does not follow it, it will look strange: tutorials written to the Pylab standard will not work with that distro, commonly shared code will not run in it without add-ons, etc. So, distro maintainers will want to be in compliance if they can be. By choosing among packages, Pylab will stifle competition, regardless of our pure intent.

I still want there to be a Pylab, for selfish reasons cited above. I can see a couple ways out, gotten to by asking, WHOSE NEEDS ARE WE TRYING TO SERVE?

1. Make the set only include packages for which there is essentially no competition. numpy+scipy+ipython+binary file formats? I think there is consensus that Matplotlib is standard, but are we really ready to formalize the "next-best" status of all other plotting packages? I think we actually might be, but I don't think that's true for any other package with serious competition. So numpy+matplotlib+scipy+ipython+binary file formats. I am edgy about ipython, but let's face it, almost everyone uses it.
The fact that it contains some new material for which the competition hasn't been decided (i.e., notebook and related features) is a complicating factor. We will want to jam other packages in here, like cython and sympy. If we're preserving competition, it will have serious holes. It will be unwieldy for beginners (too big) and will require lots of add-ons for experienced people and production use. DOES IT SERVE ANYONE? I don't think anyone will like it.

2. Include *everything*, and don't make choices among competing packages. DOES IT SERVE ANYONE? Maybe big labs with lots of disk space.

3. Define several levels of Pylab:

Level 1: Stuff needed for beginner tutorials, Numerics 101 classes, etc. I think just numpy+matplotlib+scipy+ipython+binary file formats. Declare that beginner tutorials stick to this set to be Pylab-approved tutorials and get a link from the Pylab-level-1 page. The tutorials would also have to comply with coding standards like "import numpy as np", etc. We may also declare that compliant tutorials will not use certain features of IPython. I see things like common scientific binary file formats as very important, as anyone switching from another language will have data. DOES IT SERVE ANYONE? Yes, beginners (and their teachers).

Level 2: The above plus things like cython, symbolic math, serious financial functions, unit conversions, serious stats, C, C++, FORTRAN, etc., but not selecting any package where there is still competition. Yes, this will leave holes, but people operating at this level are capable of making choices, and distros are welcome to fill in. DOES IT SERVE ANYONE? It's a good, smallish base for technical users to build upon.

Level 3: The above plus ALL stable, general-purpose packages that meet a certain standard, including different competitive variants for the same task. DOES IT SERVE ANYONE? Big installations with one specialty.

Level 4: The above plus specialist packages like astropy. DOES IT SERVE ANYONE? Big installations with many specialties.

I like option 3. In all my thinking on this problem, I could not find a minimal set that was useful to advanced users and didn't stifle competition. So, instead we have a minimal set that isn't geared toward advanced users, and that encourages consistency across tutorials. Then we have levels of increasing inclusiveness, size, and utility to advanced users, none of which stifles competition. Most importantly, the "in" list for any level certifies that all included packages are stable, actively maintained, well documented, and do not fight with the rest of the packages.

I think that requiring functionality rather than specific packages is not very useful. What utility is there in saying you have an IDE, a syntax-highlighting editor, etc., without specifying which one? I can see saying "a C compiler", though. Perhaps saying "one or more of ..." and listing specific IDEs/editors/etc. would be ok. That at least provides the service of certifying that those listed actually work well with the stack.

Finally, I think that there should be installers. Certifying the work of others is nice, but the real benefit will be in actually having something to install. But if there are two entities, one that certifies and one that makes an installer for each defined Pylab level, I'm fine with that. I'd just hate to see a defined Pylab level for which no pure installer (i.e., no installer for just those packages) is available.

Thanks, and sorry for the length.

--jh--

Prof. Joseph Harrington
Planetary Sciences Group
Department of Physics
PS 441
4000 Central Florida Blvd.
University of Central Florida
Orlando, FL 32816-2385 USA

On sabbatical at:
Max-Planck-Institut für Astronomie
Königstuhl 17
D-69117 Heidelberg Germany

jh at physics.ucf.edu
planets.ucf.edu

From yosefmel at post.tau.ac.il Wed Sep 19 06:38:32 2012
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Wed, 19 Sep 2012 13:38:32 +0300
Subject: [SciPy-User] Vectorizing functions where not known if each arg is (broadcast compatible) scalar or ndarray
In-Reply-To:
References:
Message-ID: <56170526.aHFurYSvCH@yosef-flow-lab>

Hi,

The way I did this before was to use broadcast_arrays() at the beginning of the function, to make sure all arrays have the same size.

HTH,
Yosef.

On Tuesday 18 September 2012 15:04:28 William Furnass wrote:
> I'm looking for a simple way to create a vectorized version of the
> following function that flexibly allows one or more of the inputs to
> be either an ndarray of constant length l or a scalar.
> [...]
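On William's side question - whether there is a numpy function to check broadcast compatibility - np.broadcast can serve exactly that purpose, since it raises ValueError when the shapes cannot be broadcast together. A small sketch (broadcast_compatible is a made-up helper name, not a numpy function):

import numpy as np

def broadcast_compatible(*arrays):
    # np.broadcast raises ValueError for incompatible shapes
    try:
        np.broadcast(*arrays)
        return True
    except ValueError:
        return False

# broadcast_compatible(np.ones((3, 1)), np.ones(4), 2.0)  -> True
# broadcast_compatible(np.ones(3), np.ones(4))            -> False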
From takowl at gmail.com Wed Sep 19 07:35:08 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Wed, 19 Sep 2012 12:35:08 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

Hi Joe,

On 19 September 2012 11:21, Joe Harrington wrote:
> BUT if your package is in Pylab(TM) and your competition isn't, you have a
> *big* advantage. [...] By
> choosing among packages, Pylab will stifle competition, regardless of
> our pure intent.

Inevitably, yes, we reduce competition to some extent. But I'm not sure this is such a bad thing: we want there to be 'one obvious way to do things'. Having a lot of alternatives can be confusing for users, and it can fragment developer effort, so none of the alternatives are as good as they could be. Of course, we don't want to suppress competition too much - alternatives can try valuable new ideas and add features that would be difficult in the standard packages. The standard wouldn't stop people installing alternatives, or distributions from including those alternatives as well. That we specify matplotlib as standard doesn't stop anyone using Chaco, Mayavi or Visvis. And the standard will be versioned, so if in a couple of years a new package has become indispensable, we can add it to the standard.

> Make the set only include packages for which there is essentially no
> competition. numpy+scipy+ipython+binary file formats?

That's tricky to define. Depending on who you ask, for example, IPython has several competing projects. And if a new competitor to Numpy were to emerge, would we avoid taking sides, even though our whole stack depends on Numpy? We even limit the choice for Python itself, by requiring packages that use the C-API. We have to take sides in some debates - not as a partisan move to bolster our favourite projects, but so that users get a coherent stack of useful tools. Useful tools have competition, and we can't say every alternative is important.

As an aside, you refer to 'binary file formats' - is there a particular project for working with those that I should be aware of?

> 3. Define several levels of Pylab:

And now we're up to 4 levels, I see. That's too complex. I feel strongly that if we're going for multiple levels, it should be no more than 2, at least for now. And even for the more inclusive one, I don't want to build a kitchen-sink specification that includes a big list of packages. That's for distributions to define, not the standard. As I mentioned before, I think we need simple descriptions of 'why you want version X'.

> Level 1: Stuff needed for beginner tutorials, Numerics 101 classes, etc.

What 'beginners' need isn't necessarily the base of the stack, though. In fact, more often than not, beginners use high-level packages, and you learn about the stuff lower down the stack as you move on. And the needs of 'beginners' might be very different in different fields. In biology, we begin with statistical tests, not array manipulation. Also, hard drives are large and University internet connections are fast - why shouldn't a beginner just download the more inclusive 'recommended' level, and leave some parts unused?

I'm focussing on the user here. I don't want to define multiple levels just to push the tricky decisions about inclusion downstream to distributions and users. If the standard is to have multiple flavours, we need a clear story about how it benefits users.

> Finally, I think that there should be installers.

Sorry, this is not part of my plan, at least for now. Maintaining a distribution for several different OSs is a much bigger task, and it's not one that I want to take on. I'm optimistic that existing distributions will meet that need. Also, I intend the name Pylab to mean a particular set of packages, however you've installed them. If we make a Pylab distribution, it gives the impression that EPD, for example, is not Pylab. Then we'd be back in the current situation, but with one more Python distribution.
Thanks, Thomas From zhibin_dai at ynao.ac.cn Tue Sep 18 12:14:34 2012 From: zhibin_dai at ynao.ac.cn (Zhibin Dai) Date: Wed, 19 Sep 2012 00:14:34 +0800 Subject: [SciPy-User] how to fix this error from scipy Message-ID: <7EAC3BE0-A665-40F4-8727-B592A84A729E@ynao.ac.cn> Dear sir, I have installed numpy and scipy successfully. And the test of numpy has been passed. But, I got an error when testing scipy. The details of error have been attached as follow: ------------------------------------- sh-3.2# python Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test('full') Running unit tests for scipy NumPy version 1.6.2 NumPy is installed in /Library/Python/2.7/site-packages/numpy-1.6.2-py2.7-macosx-10.7-intel.egg/numpy SciPy version 0.10.1 SciPy is installed in /Library/Python/2.7/site-packages/scipy-0.10.1-py2.7-macosx-10.7-intel.egg/scipy Python version 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] nose version 1.2.1 ...................................................................................................................................................................................F.python(16154) malloc: *** error for object 0x7fe9125e7fa8: incorrect checksum for freed object - object was probably modified after being freed. *** set a breakpoint in malloc_error_break to debug Abort trap: 6 ------------------------------------- I also generate the following information in one command: ------------------------------------- sh-3.2# python -c 'from numpy.f2py.diagnose import run; run()' ------ os.name='posix' ------ sys.platform='darwin' ------ sys.version: 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. 
build 5658) (LLVM build 2335.15.00)] ------ sys.prefix: /System/Library/Frameworks/Python.framework/Versions/2.7 ------ sys.path=':/Library/Python/2.7/site-packages/pip-1.2.1-py2.7.egg:/Library/Python/2.7/site-packages/nose-1.2.1-py2.7.egg:/Library/Python/2.7/site-packages/numpy-1.6.2-py2.7-macosx-10.7-intel.egg:/Library/Python/2.7/site-packages/scipy-0.10.1-py2.7-macosx-10.7-intel.egg:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC:/Library/Python/2.7/site-packages' ------ Found new numpy version '1.6.2' in /Library/Python/2.7/site-packages/numpy-1.6.2-py2.7-macosx-10.7-intel.egg/numpy/__init__.pyc Found f2py2e version '2' in /Library/Python/2.7/site-packages/numpy-1.6.2-py2.7-macosx-10.7-intel.egg/numpy/f2py/f2py2e.pyc Found numpy.distutils version '0.4.0' in '/Library/Python/2.7/site-packages/numpy-1.6.2-py2.7-macosx-10.7-intel.egg/numpy/distutils/__init__.pyc' ------ Importing numpy.distutils.fcompiler ... ok ------ Checking availability of supported Fortran compilers: Gnu95FCompiler instance properties: archiver = ['/usr/local/bin/gfortran', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/local/bin/gfortran', '-Wall', '-ffixed-form', '- fno-second-underscore', '-arch', 'i686', '-arch', 'x86_64', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = ['/usr/local/bin/gfortran', '-Wall', '-fno-second- underscore', '-arch', 'i686', '-arch', 'x86_64', '-fPIC', '-O3', '-funroll-loops'] compiler_fix = ['/usr/local/bin/gfortran', '-Wall', '-ffixed-form', '- fno-second-underscore', '-Wall', '-fno-second-underscore', '-arch', 'i686', '-arch', 'x86_64', '-fPIC', '-O3', '- funroll-loops'] libraries = ['gfortran'] library_dirs = [] linker_exe = ['/usr/local/bin/gfortran', '-Wall', '-Wall'] linker_so = ['/usr/local/bin/gfortran', '-Wall', '-arch', 'i686', '- arch', 'x86_64', '-Wall', '-undefined', 'dynamic_lookup', '-bundle'] object_switch = '-o ' ranlib = ['/usr/local/bin/gfortran'] version = LooseVersion ('4.2.3') version_cmd = ['/usr/local/bin/gfortran', '--version'] Fortran compilers found: --fcompiler=gnu95 GNU Fortran 95 compiler (4.2.3) Compilers available for this platform, but not found: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu GNU Fortran 77 compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=pg Portland Group Fortran Compiler Compilers not available on this platform: --fcompiler=compaq Compaq Fortran Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for 64-bit apps --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 
  --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps
  --fcompiler=lahey    Lahey/Fujitsu Fortran 95 Compiler
  --fcompiler=mips     MIPSpro Fortran Compiler
  --fcompiler=none     Fake Fortran compiler
  --fcompiler=pathf95  PathScale Fortran Compiler
  --fcompiler=sun      Sun or Forte Fortran 95 Compiler
  --fcompiler=vast     Pacific-Sierra Research Fortran 90 Compiler
For compiler details, run 'config_fc --verbose' setup command.
------
Importing numpy.distutils.cpuinfo ... ok
------
CPU information: CPUInfoBase__get_nbits getNCPUs is_64bit is_i386
------
-------------------------------------
Do you have any idea about my error? Thanks in advance; I look forward to your reply. With best regards, Zhibin Dai From cournape at gmail.com Wed Sep 19 12:22:20 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 19 Sep 2012 17:22:20 +0100 Subject: [SciPy-User] how to fix this error from scipy In-Reply-To: <7EAC3BE0-A665-40F4-8727-B592A84A729E@ynao.ac.cn> References: <7EAC3BE0-A665-40F4-8727-B592A84A729E@ynao.ac.cn> Message-ID: Hi Zhibin, On Tue, Sep 18, 2012 at 5:14 PM, Zhibin Dai wrote: > Dear sir, > > I have installed numpy and scipy successfully, and the numpy test suite passes. > > But I got an error when testing scipy. The details of the error are attached below: You should avoid compiling scipy with gcc-llvm, and use clang itself. We have had many bug reports about scipy compiled with gcc-llvm on the Mac. David From ralf.gommers at gmail.com Wed Sep 19 15:24:19 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 19 Sep 2012 21:24:19 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 12:26 AM, Thomas Kluyver wrote: > Hi again, > > It now looks like we're going to use Pylab as the name for the 'scipy > stack'. Now I want to turn to the question of what it should include. > The idea is that Python distributions can call themselves Pylab > compliant if they provide at least a defined set of packages. Also, I > hope that it will become a metapackage in Linux distributions, so that > users can 'apt-get install pylab' or similar. > > As a minimum, I assume we should require Python, Numpy, Scipy and > Matplotlib. Does anyone disagree? > > I also think we should specify minimum versions. The standard itself > will be versioned, so we can raise these over time. For Python, I > intend the requirement to be 2.x >= 2.6 or 3.x >= 3.2. What are > sensible minimum versions of Numpy, Scipy and Matplotlib? I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. EPD, Python(x,y) and AnacondaCE all fulfill those requirements. Sage likely does too, but they don't list versions on their website. Ralf > Should the standard include an interface? IPython, a more traditional > IDE, or both? On the one hand, specifying a standard interface means > users can share experience better, and exchange richer files, like > IPython notebooks or IDE project structures. Matlab, for instance, > wins praise for including a powerful IDE by default. On the other > hand, we've got several interesting UI efforts still taking shape - > IPython notebooks, Spyder, IEP - and declaring one standard would make > the alternatives less visible. I'm honestly torn on this - I can see > good arguments for and against.
> > Other scientific packages we might consider include pandas (which > provides functionality similar to core parts of R), Sympy, Cython, > various scikits projects, h5py, and doubtless many others I haven't > thought of. We could also specify general purpose Python packages such > as requests, or a GUI toolkit. > > On the NumFOCUS list, Chris Kees raised the idea that there could be > two or more levels of packages, e.g. 'core' and 'recommended'. I don't > think we should add that kind of complexity in the first version, but > keep in mind that we could differentiate it later. > > Finally, I mean the standard to specify that the distribution must > offer a way of installing arbitrary extra Python packages into it, so > the standard shouldn't try to include everything you might need for > scientific computing. The aim is to offer a key set of tools so you > can get started without having to add things. "Get started with what?" > is the key question we have to answer. In my field, for example, > statistical tests are fundamental, while symbolic maths is hardly > used. > > All your opinions are welcome, > > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From ralf.gommers at gmail.com Wed Sep 19 15:24:55 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 19 Sep 2012 21:24:55 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 1:03 AM, Thomas Kluyver wrote: > On 18 September 2012 23:44, Matthew Brett wrote: > > > Is there a good reason not to include Ipython? > > Perhaps other interfaces work better for some use cases, or some > individuals? Python(x,y) favours Spyder, for instance, and ships an > old version of IPython to maintain compatibility. If we included > IPython in the standard, Python(x,y) would probably not meet the > minimum specified version, and would not be Pylab compliant for the > time being. The part of IPython that's pretty much universal is the command line interface, so I don't see a problem with setting the minimum version to 0.10. Ralf From takowl at gmail.com Wed Sep 19 16:23:05 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 19 Sep 2012 21:23:05 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 19 September 2012 20:24, Ralf Gommers wrote: > I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. Thanks Ralf, those all sound like sensible requirements. Best wishes, Thomas From travis at continuum.io Wed Sep 19 16:46:15 2012 From: travis at continuum.io (Travis Oliphant) Date: Wed, 19 Sep 2012 15:46:15 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sep 19, 2012, at 3:23 PM, Thomas Kluyver wrote: > On 19 September 2012 20:24, Ralf Gommers wrote: >> I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. > > Thanks Ralf, those all sound like sensible requirements. > I had been thinking that pylab would specify exact versions of the packages, but this also makes sense: a pylab specification names a range of release numbers for each package.
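A range-based specification is also easy to check mechanically. As a rough, hypothetical sketch (the helper function and the minimum versions below are placeholders, not an agreed tool or standard):

    from distutils.version import LooseVersion

    # Placeholder spec: package name -> minimum version
    PYLAB_SPEC = {'numpy': '1.5.1', 'scipy': '0.10.0', 'matplotlib': '1.1'}

    def pylab_compliance_report(spec=PYLAB_SPEC):
        """Return a dict of packages that are missing or too old."""
        problems = {}
        for name, minimum in spec.items():
            try:
                module = __import__(name)
            except ImportError:
                problems[name] = 'not installed'
                continue
            if LooseVersion(module.__version__) < LooseVersion(minimum):
                problems[name] = '%s < %s' % (module.__version__, minimum)
        return problems

A distribution, or a user, could run such a check to see at a glance whether an installation satisfies a given version of the specification.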
To avoid a split in the discussion, it would be helpful to join this thread to the conversation on numfocus at googlegroups.com -Travis > Best wishes, > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From jason-sage at creativetrax.com Wed Sep 19 16:55:20 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Wed, 19 Sep 2012 15:55:20 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: <505A3138.3090605@creativetrax.com> On 9/19/12 2:24 PM, Ralf Gommers wrote: > I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. EPD, > Python(x,y) and AnacondaCE all fulfill those requirements. Sage likely > does too, but they don't list versions on their website. According to http://sagemath.org/packages/standard/, we almost meet those requirements (we have scipy 0.9). At this point, it seems best to wait until 0.11, which is coming soon, right? Thanks, Jason From newville at cars.uchicago.edu Wed Sep 19 17:31:19 2012 From: newville at cars.uchicago.edu (Matt Newville) Date: Wed, 19 Sep 2012 16:31:19 -0500 Subject: [SciPy-User] ANN: lmfit 0.7 Message-ID: Hi All, I've posted version 0.7 of lmfit-py, which extends scipy.optimize so that optimization problems can use Parameters, which can take bounds, be frozen, or be written as algebraic constraints. Version 0.7 fixes a few bugs and adds two new improvements in functionality: a) If the uncertainties package (http://packages.python.org/uncertainties/) is available, uncertainties will be propagated to constrained parameters. b) Support for many scalar optimization methods from scipy.optimize.minimize() (scipy 0.11 and higher). While 'leastsq' is still the default fitting method (and the only one that automatically calculates uncertainties in optimized parameters), this allows easy comparison of different fitting methods. While the objective function for the scalar methods is intended to return a scalar value, if a numpy.ndarray is returned by the objective function, the sum-of-squares (array*array).sum() will be used. This permits the same objective function to be used for all methods. Of course, if an objective function returns a scalar value, it will work correctly for the scalar minimization methods. While some of the methods supported by scipy.optimize.minimize() have some way to support constraints or bounds, those features are not used by lmfit. That is to say, they are not necessary, as the Parameters used in lmfit already provide them. In effect lmfit provides *all* fitting methods with a consistent way to specify constraints and Parameter bounds, including those methods (Levenberg-Marquardt, Nelder-Mead, etc.) that don't natively support them. Code is available at http://pypi.python.org/pypi/lmfit/ and https://github.com/newville/lmfit-py/ Documentation is at http://newville.github.com/lmfit-py/ Any feedback, bug reports, and suggestions are welcome.
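As a minimal sketch of how bounded Parameters look in practice (the model, data, and parameter names here are illustrative, not from the announcement):

    import numpy as np
    from lmfit import minimize, Parameters

    # Illustrative data: a noisy exponential decay
    x = np.linspace(0, 10, 101)
    data = 9.5 * np.exp(-x / 2.3) + np.random.normal(scale=0.1, size=x.size)

    def residual(params, x, data):
        # Pull the current values out of the Parameters object
        amp = params['amp'].value
        decay = params['decay'].value
        return data - amp * np.exp(-x / decay)

    params = Parameters()
    params.add('amp', value=10.0, min=0.0)    # bounded below at zero
    params.add('decay', value=1.0, min=0.01)  # could also be frozen with vary=False
    result = minimize(residual, params, args=(x, data))  # 'leastsq' by default

Because this residual returns an ndarray, the same function would also work unchanged with the scalar methods described above, since lmfit sums the squares for them.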
Thanks, --Matt Newville From ralf.gommers at gmail.com Wed Sep 19 17:39:26 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 19 Sep 2012 23:39:26 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <505A3138.3090605@creativetrax.com> References: <505A3138.3090605@creativetrax.com> Message-ID: On Wed, Sep 19, 2012 at 10:55 PM, Jason Grout wrote: > On 9/19/12 2:24 PM, Ralf Gommers wrote: > > I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. EPD, > > Python(x,y) and AnacondaCE all fulfill those requirements. Sage likely > > does too, but they don't list versions on their website. > > According to http://sagemath.org/packages/standard/, we almost meet > those requirements (we have scipy 0.9). At this point, it seems best to > wait until 0.11, which is coming soon, right? It is coming soon, so indeed it would be better to wait for the final release. Ralf From ralf.gommers at gmail.com Wed Sep 19 17:44:08 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 19 Sep 2012 23:44:08 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 10:46 PM, Travis Oliphant wrote: > > On Sep 19, 2012, at 3:23 PM, Thomas Kluyver wrote: > > > On 19 September 2012 20:24, Ralf Gommers wrote: > >> I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. > > > > Thanks Ralf, those all sound like sensible requirements. > > > > I had been thinking that pylab would specify exact versions of the > packages, but this also makes sense: a pylab specification names a > range of release numbers for each package. A set of specific versions sounds nice in theory (for reproducible research), but then there's no distribution to install that particular set. So the only benefit I see is that you could easily specify all package versions in your own stack. Am I missing something? Ralf > To avoid a split in the discussion, it would be helpful to join this > thread to the conversation on numfocus at googlegroups.com > > -Travis > > > > Best wishes, > > Thomas > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From travis at vaught.net Thu Sep 20 10:16:57 2012 From: travis at vaught.net (Travis Vaught) Date: Thu, 20 Sep 2012 09:16:57 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: <31FED538-C3BC-47ED-8C61-4BBF9F59CE4C@vaught.net> On Sep 18, 2012, at 6:16 PM, Matthew Brett wrote: >> ... > > OK - but the website and the name point us to the standard, and thence > to some installers for that standard? You are not proposing any new > installers, but that the standard basically says something like: > > """"You have Pylab if you have python >= 2.6, scipy > 0.9.0, ... > > Your options for installing these are: > Windows : - python x, y or EPD or individual download and install > OSX : EPD or individual download and install > Linux : apt-get / yum / whatever install python-pylab > """ > ? For the record... EPD (and EPD-Free) is also available as a binary install on Linux.
From travis at continuum.io Thu Sep 20 11:22:00 2012 From: travis at continuum.io (Travis Oliphant) Date: Thu, 20 Sep 2012 10:22:00 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sep 19, 2012, at 4:44 PM, Ralf Gommers wrote: > > > On Wed, Sep 19, 2012 at 10:46 PM, Travis Oliphant wrote: > > On Sep 19, 2012, at 3:23 PM, Thomas Kluyver wrote: > > > On 19 September 2012 20:24, Ralf Gommers wrote: > >> I'd propose numpy >= 1.5.1, scipy >= 0.10.0, matplotlib >= 1.1. > > > > Thanks Ralf, those all sound like sensible requirements. > > > > I had been thinking that pylab would specify exact versions of the packages, but this also makes sense: a pylab specification names a range of release numbers for each package. > > A set of specific versions sounds nice in theory (for reproducible research), but then there's no distribution to install that particular set. So the only benefit I see is that you could easily specify all package versions in your own stack. Am I missing something? On Linux it would *just* work as there would be a meta-package with the same name that would download and install dependencies. On other platforms, I would assume that certain Pylab versions would be available from the various vendors. It's a bit of a chicken-and-egg, but in this case I know that if we define the standard, there will be distributions that emerge (they are there already but each marching to their own drum --- this provides some cadence and input from a larger community). -Travis From fperez.net at gmail.com Thu Sep 20 21:19:38 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 20 Sep 2012 18:19:38 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 4:35 AM, Thomas Kluyver wrote: > Inevitably, yes, we reduce competition to some extent. But I'm not > sure this is such a bad thing: we want there to be 'one obvious way to > do things'. Having a lot of alternatives can be confusing for users, > and it can fragment developer effort, so none of the alternatives are > as good as they could be. I think this can be framed by analogy to the python standard library: yes, it "picks sides"; no, that doesn't prevent competition. We're now on the second option-handling package in the stdlib after argparse out-did optparse enough to be included, there's similar talk of putting a new regex package in, elementtree went in despite other xml tools being there, etc. There's *enormous* value in giving users some basic guidance, and hopefully as these 'blessed' tools establish improved interoperability practices, documentation and packaging guidelines, etc, it will also make the process of incorporating third-party packages easier. Obviously, we should clearly indicate when alternatives exist to the base tools and point out how they can be a better fit for some users/tasks (say Chaco instead of MPL if you're building an interactive app with the traits reactive programming model). > We have to take sides in some debates - not as a partisan move to > bolster our favourite projects, but so that users get a coherent stack > of useful tools. Useful tools have competition, and we can't say every > alternative is important. Absolutely. It's not like this is going to make Google stop working, so people will always be free to try new things.
But what will happen is that hopefully we'll develop practices that will make the whole ecosystem, core packages and external ones, in general integrate better for end users, with a smoother installation/documentation/usage experience. The development of this will be driven by the core but the resulting conventions and tools will be usable by all projects. Ultimately we want an ecosystem similar to say the R one, where many (even competing) packages can exist, but there's a clear core to start from and an easy path for users to bring in new functionality. >> 3. Define several levels of Pylab: I think this has been danced around but not really discussed with enough precision: a clear dividing line should be drawn between "needs a compiler" and not. Because the complexities of getting a compiler off the ground on some platforms are not trivial, and the details change over time, I think that the 'base level' should consist of very broadly applicable tools that do *not* need a C compiler to be installed for working. The 2nd level would require a C compiler, thus putting Cython (and in the future numba or similar tools if llvm becomes more widely accepted as the path forward) squarely in that camp. I think it would be a huge mistake to make a compiler a requirement for the base level: not that I'm not a huge fan of Cython and related tools, but we really need the on-ramp to be a very, very easy one for newcomers. And unfortunately, between 32- and 64-bit windows, mingw vs the MS compilers, the vagaries of Xcode versions on OSX and how to install it, etc, it's a bag of thorns likely to put many newcomers off. I've had for a while this basic 'layering' of the ecosystem in my mind that I use as a starting point for these conversations: https://speakerdeck.com/u/fperez/p/1204-biofrontiers-boulder?slide=21 I think if you take all that minus Cython and Mayavi (for dependency complexity reasons, VTK is a non-trivial beast to deal with too), the rest is a pretty decent core that covers a lot of what a good fraction of undergraduate courses in the sciences would broadly need. Not every last discipline-specific problem is there, but it hits matlab at all the right points as well as making a very credible case against the core of R, and I think that's how we should think of it. The base system should be a very solid replacement for typical usage of a base matlab or R installation (which is why I think that the triad of pandas, statsmodels and sklearn is absolutely essential, given where science and data analysis are going right now). Cheers, f From fperez.net at gmail.com Thu Sep 20 21:22:35 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 20 Sep 2012 18:22:35 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 12:24 PM, Ralf Gommers wrote: > > The part of IPython that's pretty much universal is the command line > interface, so I don't see a problem with setting the minimum version to > 0.10. I really hope not. The internals of IPython, the configuration, the usage, everything, is *completely* different in the 0.10 series and afterwards. We don't maintain or support 0.10 anymore. It would be a huge disservice to users to get them going on something that's already obsolete. Obviously we can discuss whether IPython shouldn't go in. But if it's going in, it should absolutely be a useful, up to date and widely supported version. Let's make this a forward-looking standard, not something that needs to run on python2.4 on CentOS 3.0.
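To make the scale of that break concrete, even the way a pylab-style session is started changed between the series (a small illustration; exact configuration paths vary by platform):

    ipython -pylab     # 0.10.x: old-style single-dash options, rc-file configuration
    ipython --pylab    # 0.11 and later: GNU-style options, profile-based configuration

So a tutorial written against 0.10 misleads users of every later version, and vice versa.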
Cheers, f From takowl at gmail.com Fri Sep 21 06:52:53 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 21 Sep 2012 11:52:53 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: Thanks Fernando, you've coherently explained what I was trying to say about why we can and should take sides. On 21 September 2012 02:19, Fernando Perez wrote: > I think this has been danced around but not really discussed with > enough precision: a clear dividing line should be drawn between "needs > a compiler" and not. Because the complexities of getting a compiler > off the ground on some platforms are not trivial, and the details > change over time, I think that the 'base level' should consist of very > broadly applicable tools that do *not* need a C compiler to be > installed for working. The 2nd level would require a C compiler, thus > putting Cython (and in the future numba or similar tools if llvm > becomes more widely accepted as the path forward) squarely in that > camp. I like this way of drawing a clear, objective distinction between levels. We would still need to work out how to present the different levels to users, but that's something I think we could resolve. > I've had for a while this basic 'layering' of the ecosystem in my mind > that I use as a starting point for these conversations: > > https://speakerdeck.com/u/fperez/p/1204-biofrontiers-boulder?slide=21 > > I think if you take all that minus Cython and Mayavi (for dependency > complexity reasons, VTK is a non-trivial beast to deal with too), the > rest is a pretty decent core that covers a lot of what a good fraction > of undergraduate courses in the sciences would broadly need. To save people a click, Fernando's tiers look like this: Python --- Numpy --- IPython, Scipy, Matplotlib, SymPy --- pandas, StatsModels, scikits-learn, scikits-image, PyTables, NetworkX That seems like a vision of a much more comprehensive environment than we had been discussing, but all those packages are familiar names at Scipy conferences, and it would inarguably make a much more capable environment out of the box than just numpy+scipy+mpl. If we were to use that as a starting point, would anyone like to argue against including some of those packages? Almar has already spoken against specifying an interface. I'm actually leaning the other way, although I accept that I could be biased by my role in IPython. For introductory tutorials, I think it would be very valuable to have a common interface, so we can describe, say, what to press to run some code.
> Otherwise, users would be put off by having to > try to apply a generalised tutorial to their particular environment, > and interpret screenshots that don't match what they see. In > particular, the IPython notebook is a very different model from most > IDEs. Of course, I'm biased too, by my role in IEP :) I suppose I could live with specifying IPython as part of the base, for the reasons that you point out. As an analogy, Tk is included with Python, but GUI toolkits like Qt can still flourish. My main objection is that any chosen interface should not be implied as *the* interface. For instance, Python(x,y) should still be pylab compliant even though it uses Spyder as an interface. I suppose it's not hard for Python(x,y) to include the IPython executable in the distribution. (As opposed to integrating an IPython kernel with Spyder, which is obviously much harder.) Further, you made a point about being able to share richer code documents specific to the chosen interface. I strongly think that we should make sharing code independent of the interface. Of course, within a specific user group, users are free to use specific formats; my point is that it should not be encouraged by pylab. Regards, Almar From josef.pktd at gmail.com Fri Sep 21 07:53:40 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 21 Sep 2012 07:53:40 -0400 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 6:52 AM, Thomas Kluyver wrote: > Thanks Fernando, you've coherently explained what I was trying to say > about why we can and should take sides. > > On 21 September 2012 02:19, Fernando Perez wrote: >> I think this has been danced around but not really discussed with >> enough precision: a clear dividing line should be drawn between "needs >> a compiler" and not. Because the complexities of getting a compiler >> off the ground on some platforms are not trivial, and the details >> change over time, I think that the 'base level' should consist of very >> broadly applicable tools that do *not* need a C compiler to be >> installed for working. The 2nd level would require a C compiler, thus >> putting Cython (and in the future numba or similar tools if llvm >> becomes more widely accepted as the path forward) squarely in that >> camp. > > I like this way of drawing a clear, objective distinction between > levels. We would still need to work out how to present the different > levels to users, but that's something I think we could resolve.
> > To save people a click, Fernando's tiers look like this: > > Python > --- > Numpy > --- > IPython, Scipy, Matplotlib, SymPy > --- > pandas, StatsModels, scikits-learn, scikits-image, > PyTables, NetworkX > > That seems like a vision of a much more comprehensive environment than > we had been discussing, but all those packages are familiar names at > Scipy conferences, and it would inarguably make a much more capable > environment out of the box than just numpy+scipy+mpl. If we were to > use that as a starting point, would anyone like to argue against > including some of those packages? > > Almar has already spoken against specifying an interface. I'm actually > leaning the other way, although I accept that I could be biased by my > role in IPython. For introductory tutorials, I think it would be very > valuable to have a common interface, so we can describe, say, what to > press to run some code. Otherwise, users would be put off by having to > try to apply a generalised tutorial to their particular environment, > and interpret screenshots that don't match what they see. In > particular, the IPython notebook is a very different model from most > IDEs. I think for an out of the box working environment, I would include both ipython and spyder. One reason is that it requires packaging or instructions for their dependencies, especially pyside/pyqt (I have inconsistent versions in my Windows python installs, but I'm not interested enough to figure out how to get the qtconsole to work after each update.) ipython's popularity is unquestionable; in statsmodels we are starting to include notebooks as part of the documentation. spyder gives a GUI similar to other packages that Windows users will be familiar with (Matlab, Stata without the stats-specific menus, ...) my 2c Josef > > What do other people think? If we did specify an interface, is there > anything we could do to maintain interest in the alternatives? > > Best wishes, > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From njs at pobox.com Fri Sep 21 09:01:39 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 14:01:39 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 12:53 PM, wrote: > On Fri, Sep 21, 2012 at 6:52 AM, Thomas Kluyver wrote: >> Thanks Fernando, you've coherently explained what I was trying to say >> about why we can and should take sides. >> >> On 21 September 2012 02:19, Fernando Perez wrote: >>> I think this has been danced around but not really discussed with >>> enough precision: a clear dividing line should be drawn between "needs >>> a compiler" and not. Because the complexities of getting a compiler >>> off the ground on some platforms are not trivial, and the details >>> change over time, I think that the 'base level' should consist of very >>> broadly applicable tools that do *not* need a C compiler to be >>> installed for working. The 2nd level would require a C compiler, thus >>> putting Cython (and in the future numba or similar tools if llvm >>> becomes more widely accepted as the path forward) squarely in that >>> camp. >> >> I like this way of drawing a clear, objective distinction between >> levels. We would still need to work out how to present the different >> levels to users, but that's something I think we could resolve.
>> >>> I've had for a while this basic 'layering' of the ecosystem in my mind >>> that I use as a starting point for these conversations: >>> >>> https://speakerdeck.com/u/fperez/p/1204-biofrontiers-boulder?slide=21 >>> >>> I think if you take all that minus Cython and Mayavi (for dependency >>> complexity reasons, VTK is a non-trivial beast to deal with too), the >>> rest is a pretty decent core that covers a lot of what a good fraction >>> of undergraduate courses in the sciences would broadly need. >> >> To save people a click, Fernando's tiers look like this: >> >> Python >> --- >> Numpy >> --- >> IPython, Scipy, Matplotlib, SymPy >> --- >> pandas, StatsModels, scikits-learn, scikits-image, >> PyTables, NetworkX >> >> That seems like a vision of a much more comprehensive environment than >> we had been discussing, but all those packages are familiar names at >> Scipy conferences, and it would inarguably make a much more capable >> environment out of the box than just numpy+scipy+mpl. If we were to >> use that as a starting point, would anyone like to argue against >> including some of those packages? >> >> Almar has already spoken against specifying an interface. I'm actually >> leaning the other way, although I accept that I could be biased by my >> role in IPython. For introductory tutorials, I think it would be very >> valuable to have a common interface, so we can describe, say, what to >> press to run some code. Otherwise, users would be put off by having to >> try to apply a generalised tutorial to their particular environment, >> and interpret screenshots that don't match what they see. In >> particular, the IPython notebook is a very different model from most >> IDEs. > > I think for an out of the box working environment, I would include > both ipython and spyder. > One reason is that it requires packaging or instructions for their > dependencies, especially pyside/pyqt > (I have inconsistent versions in my Windows python installs, but I'm > not interested enough to figure out how to get the qtconsole to work > after each update.) > > ipython's popularity is unquestionable; in statsmodels we are starting to > include notebooks as part of the documentation. > spyder gives a GUI similar to other packages that Windows users will be > familiar with (Matlab, Stata without the stats-specific menus, ...) I'm not sure the "pylab brand" can really dictate which interface actual distributions should include... there are multiple that have good reasons to exist and aim at somewhat different niches. I doubt the Python(x,y) folks are going to stop recommending Spyder just because the IPython folk suggest it :-). My guess is that the best we can do is to document and push for standardization in the places where there's consensus. (There's that word again...) Some ideas: - No matter which overall interface you use, you will at some point want a REPL. We should recommend that for a "pylab environment" this always defaults to an IPython shell. (Spyder for example supports both the vanilla ">>>" shell and the IPython "In [n]" shell, both accessed through a menu that confuses *me*, never mind newbies...) This seems much less controversial than specifying the overall UI, and would already be very valuable to those of us trying to write docs, because every time we write down an example we have to pick one! It's a very visible difference to newbies, and very trivial to fix. - A "pylab shell" should include some standard, tasteful set of stuff pre-imported.
(I'm guessing this should be less than what you get right now from "from pylab import *", maybe no more than "import numpy as np; import matplotlib.pyplot as plt", but I don't have a strong opinion.) - Calling matplotlib plotting functions from a "pylab shell" should Just Work with no further configuration. Of course what that means exactly depends on the details of the interface (pop up a window? pop up a tab in your IDE? insert a graph in-line in the output?), but that's ok. - I'd actually like to see some guidelines for how installing packages should work. Obviously we're limited by the Python Packaging Mess(tm), but honestly some small environment configuration rules could make the end-user experience *much* better than it is right now. In R, you can just type install.packages("some-package") at the command prompt, and it'll just fetch it from their PyPI-equivalent and stick it in a user-local directory with no fuss. Perhaps a requirement for being a "pylab shell" is that the distributor has to make sure that a user-writeable site-packages directory is available and used by default. More ambitiously, I tend to think that requiring the availability of a compiler really is a good idea, given that there are freely distributable ones for all relevant platforms... and some sort of virtualenv-management tools wouldn't be a bad idea either... - I'd like to see some simple conventions for things like "given a package name, here is how you find its full sphinx docs" (to enable things like "search all the docs of all installed packages"), "given a function, here is how you find machine-readable example code" (in R, you can type example(anyfunction) and it will auto-magically copy-paste a nice example into your prompt right there), "given a package name, here is how you run its tests". This is more of a project in that it requires writing some code, not just listing recommendations on a wiki page somewhere, but maybe "pylab" can be a banner to inspire people to do that. Basically I feel like the main value a "pylab brand" can provide is by finding places where user quality-of-life can be improved by standardizing and documenting conventions, and then evangelizing those to package and distribution authors. -n From takowl at gmail.com Fri Sep 21 09:59:11 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 21 Sep 2012 14:59:11 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 21 September 2012 12:40, Almar Klein wrote: > My main objection it that any chosen interface should not be implied as > *the* interface. For instance, Python(x,y) should still be pylab compliant > even though it uses Spyder as an interface. I suppose it's not hard for > Python(x,y) to include the IPython executable in the distribution. (As > opposed to integrating an IPython kernel with Spyder, which is obviosuly > much harder.) Of course, distributions would still be able to ship more than one interface, and write their own documentation pointing people towards Spyder, for example. But a Pylab tutorial might say e.g. "go to a terminal and start 'ipython notebook'", and we'd want that to work for anyone who had installed a pylab distribution. > Further, you made a point about being able to share richer code documents > specific to the chosen interface. I strongly think that we should make > sharing code independent of the interface. Of course, withing a specific > user group, users are free to use specific formats, my point is that it > should not be encouraged from pylab. 
There is a desire to store data besides raw code, and the community is already starting to pick up ipynb files for that. I'm not sure about encouraging it, but I think the standard should enable people to exchange those files, at least. Nathaniel: > - A "pylab shell" I'd rather not say 'it has to work x much like IPython, but it needn't be IPython'. If we do that, most distributions will ship IPython, and users of the odd one that doesn't will be stuck unable to use features everyone else has, and documentation might assume they have. Another argument for specifying IPython is that all the distributions we've discussed so far already ship it. Only Python(x,y) has an old version, but we're working on that. EPD (inc EPD Free), Anaconda, Sage, QSnake and WinPython all seem to have a more recent version. It is a de facto standard for scientific Python distributions. I see this as a push to formalise the de facto standards, the packages that we all know about, so newcomers don't need to hunt these out themselves. Again, I'll draw a parallel with R. R ships with a lightweight IDE, and tutorials can assume you have that, but it hasn't stopped alternatives like RStudio and Tinn-R gaining popularity. Once programmers are comfortable with the language, they go looking for the tools that suit them best. But I think we would be doing users a disservice if introductory documentation couldn't assume any particular interface. Best wishes, Thomas P.S. Nathaniel: you describe trying to set conventions for how to test packages or find documentation, not to mention installing packages. I agree with all of that, but I'd like to put it under 'battles to fight another day'. If we're going to do anything, we can't try to do everything. From njs at pobox.com Fri Sep 21 10:39:07 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 15:39:07 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 2:59 PM, Thomas Kluyver wrote: > On 21 September 2012 12:40, Almar Klein wrote: >> My main objection it that any chosen interface should not be implied as >> *the* interface. For instance, Python(x,y) should still be pylab compliant >> even though it uses Spyder as an interface. I suppose it's not hard for >> Python(x,y) to include the IPython executable in the distribution. (As >> opposed to integrating an IPython kernel with Spyder, which is obviosuly >> much harder.) > > Of course, distributions would still be able to ship more than one > interface, and write their own documentation pointing people towards > Spyder, for example. But a Pylab tutorial might say e.g. "go to a > terminal and start 'ipython notebook'", and we'd want that to work for > anyone who had installed a pylab distribution. I think by definition, a "pylab tutorial" would have to be one that is aimed at a somewhat amorphous set of configurations (different OSes, distributions, etc.), and has some other goal beyond describing how to start up a Python REPL. Being able to say "start up a Pylab shell and ..." will already get us quite a long way, so long as all "pylab shells" are equivalent in the ways that matter for the tutorial. Being able to plot matters much more than where exactly in the UI those plot windows end up located. >> Further, you made a point about being able to share richer code documents >> specific to the chosen interface. I strongly think that we should make >> sharing code independent of the interface. 
Of course, withing a specific >> user group, users are free to use specific formats, my point is that it >> should not be encouraged from pylab. > > There is a desire to store data besides raw code, and the community is > already starting to pick up ipynb files for that. I'm not sure about > encouraging it, but I think the standard should enable people to > exchange those files, at least. > > Nathaniel: >> - A "pylab shell" > > I'd rather not say 'it has to work x much like IPython, but it needn't > be IPython'. Indeed, that'd be a terrible idea. I mean "pylab shell" as "the shell you get by default when using a pylab-oriented environment (which will be implemented by ipython, and also satisfy these other potential requirements)". > If we do that, most distributions will ship IPython, and > users of the odd one that doesn't will be stuck unable to use features > everyone else has, and documentation might assume they have. > > Another argument for specifying IPython is that all the distributions > we've discussed so far already ship it. Only Python(x,y) has an old > version, but we're working on that. EPD (inc EPD Free), Anaconda, > Sage, QSnake and WinPython all seem to have a more recent version. It > is a de facto standard for scientific Python distributions. I see this > as a push to formalise the de facto standards, the packages that we > all know about, so newcomers don't need to hunt these out themselves. Right. And IPython-the-REPL is a de facto standard, but IPython notebooks are not. (At least not yet. Obviously with your IPython hat on you're working on changing that :-).) So I think we'll get more bang-for-buck if we focus on the former right now. > Again, I'll draw a parallel with R. R ships with a lightweight IDE, > and tutorials can assume you have that, That lightweight IDE doesn't exist on Linux (I just run R in the terminal), and in fact I never knew it existed until after I'd been using R for some years. It's not actually mentioned in any tutorials I ever read. I don't think I'm a particularly representative data point, but it's a data point :-). > P.S. Nathaniel: you describe trying to set conventions for how to test > packages or find documentation, not to mention installing packages. I > agree with all of that, but I'd like to put it under 'battles to fight > another day'. If we're going to do anything, we can't try to do > everything. Really I'm trying to work toward consensus on what "pylab" should mean and what's in principle in scope. If we agree that these are the kinds of things that pylab should stand for, then they can go on the todo list until someone actually gets around to them. Anyway, volunteer effort isn't a conserved quantity. Having a long todo list can actually increase your chances of helpers showing up :-). -n From josef.pktd at gmail.com Fri Sep 21 10:39:30 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 21 Sep 2012 10:39:30 -0400 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:59 AM, Thomas Kluyver wrote: > On 21 September 2012 12:40, Almar Klein wrote: >> My main objection it that any chosen interface should not be implied as >> *the* interface. For instance, Python(x,y) should still be pylab compliant >> even though it uses Spyder as an interface. I suppose it's not hard for >> Python(x,y) to include the IPython executable in the distribution. (As >> opposed to integrating an IPython kernel with Spyder, which is obviosuly >> much harder.) 
> > Of course, distributions would still be able to ship more than one > interface, and write their own documentation pointing people towards > Spyder, for example. But a Pylab tutorial might say e.g. "go to a > terminal and start 'ipython notebook'", and we'd want that to work for > anyone who had installed a pylab distribution. > >> Further, you made a point about being able to share richer code documents >> specific to the chosen interface. I strongly think that we should make >> sharing code independent of the interface. Of course, withing a specific >> user group, users are free to use specific formats, my point is that it >> should not be encouraged from pylab. > > There is a desire to store data besides raw code, and the community is > already starting to pick up ipynb files for that. I'm not sure about > encouraging it, but I think the standard should enable people to > exchange those files, at least. > > Nathaniel: >> - A "pylab shell" > > I'd rather not say 'it has to work x much like IPython, but it needn't > be IPython'. If we do that, most distributions will ship IPython, and > users of the odd one that doesn't will be stuck unable to use features > everyone else has, and documentation might assume they have. > > Another argument for specifying IPython is that all the distributions > we've discussed so far already ship it. Only Python(x,y) has an old > version, but we're working on that. EPD (inc EPD Free), Anaconda, > Sage, QSnake and WinPython all seem to have a more recent version. It > is a de facto standard for scientific Python distributions. I see this > as a push to formalise the de facto standards, the packages that we > all know about, so newcomers don't need to hunt these out themselves. > > Again, I'll draw a parallel with R. R ships with a lightweight IDE, > and tutorials can assume you have that, but it hasn't stopped > alternatives like RStudio and Tinn-R gaining popularity. Once > programmers are comfortable with the language, they go looking for the > tools that suit them best. But I think we would be doing users a > disservice if introductory documentation couldn't assume any > particular interface. R is awful, and it's starting to clean up the IDE situation recently (I haven't tried R Studio yet). The included interface is much worse than IDLE. The last full GUI with statistics menu crashed R after a few hours of work. The recommendations for GUIs, IDEs are all over the place. Stata and matlab have a much nicer interaction of shell and editor, and data viewer, and ... and they work after clicking a few install buttons (and paying first of course). Obviously I'm in a minority, I like Windows (TM), GUIs, spyder, git gui and use ipython only in emergencies. Josef > > Best wishes, > Thomas > > P.S. Nathaniel: you describe trying to set conventions for how to test > packages or find documentation, not to mention installing packages. I > agree with all of that, but I'd like to put it under 'battles to fight > another day'. If we're going to do anything, we can't try to do > everything. 
> _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From njs at pobox.com Fri Sep 21 10:50:38 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 15:50:38 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 3:39 PM, wrote: > Stata and matlab have a much nicer interaction of shell and editor, > and data viewer, and ... > and they work after clicking a few install buttons (and paying first of course). > > Obviously I'm in a minority, I like Windows (TM), GUIs, spyder, git > gui and use ipython only in emergencies. But within spyder, you still surely use a python shell of some sort, yes? And isn't ipython a superior option to the regular python repl for these purposes? (Personally I just use ipython at the terminal, and I've really only trained my fingers to always start ipython instead of python in the last few months.) -n From josef.pktd at gmail.com Fri Sep 21 11:10:11 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 21 Sep 2012 11:10:11 -0400 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 10:50 AM, Nathaniel Smith wrote: > On Fri, Sep 21, 2012 at 3:39 PM, wrote: >> Stata and matlab have a much nicer interaction of shell and editor, >> and data viewer, and ... >> and they work after clicking a few install buttons (and paying first of course). >> >> Obviously I'm in a minority, I like Windows (TM), GUIs, spyder, git >> gui and use ipython only in emergencies. > > But within spyder, you still surely use a python shell of some sort, > yes? And isn't ipython a superior option to the regular python repl > for these purposes? I'm not, and I'm not using the fancy spyder shell with import *. Most of the time I'm using the plainest shell available. spyder is very fast opening a new shell each time, and I can make sure I don't have any leftover variables, imports, ... and no reloads: python -i "this_script.py". Tab completion, object inspector, variable viewer and so on are always just a click or key away. Besides that, right now I have incompatible versions of spyder and ipython. > > (Personally I just use ipython at the terminal, and I've really only > trained my fingers to always start ipython instead of python in the > last few months.) My memory for commandline commands and syntax is too small to work without a GUI. Josef > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From travis at continuum.io Fri Sep 21 11:12:37 2012 From: travis at continuum.io (Travis Oliphant) Date: Fri, 21 Sep 2012 10:12:37 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sep 21, 2012, at 9:50 AM, Nathaniel Smith wrote: > On Fri, Sep 21, 2012 at 3:39 PM, wrote: >> Stata and matlab have a much nicer interaction of shell and editor, >> and data viewer, and ... >> and they work after clicking a few install buttons (and paying first of course). >> >> Obviously I'm in a minority, I like Windows (TM), GUIs, spyder, git >> gui and use ipython only in emergencies. > > But within spyder, you still surely use a python shell of some sort, > yes? And isn't ipython a superior option to the regular python repl > for these purposes?
It would be, but it sometimes isn't configured very well with the GUI, so that plotting doesn't work as well as you would hope. The Spyder normal command line (as well as the IEP command line) does provide some of the nice features of IPython at a "standard-looking" prompt. IPython-the-command-line should be part of the standard in my opinion, but not necessarily IPython-the-notebook, as that is just one of many ways to interact with the standard. I also really like the idea of a pylab default workspace where one has done

import numpy as np
import matplotlib.pyplot as plt
import scipy as sp

But, already in IPython the %pylab command (or ipython --pylab) creates a workspace where at least numpy names and matplotlib plotting names are available --- this should be resolved so that either there are two approaches clearly labeled or just one. -Travis > > (Personally I just use ipython at the terminal, and I've really only > trained my fingers to always start ipython instead of python in the > last few months.) > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From takowl at gmail.com Fri Sep 21 11:50:40 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 21 Sep 2012 16:50:40 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 21 September 2012 15:39, Nathaniel Smith wrote: > I think by definition, a "pylab tutorial" would have to be one that is > aimed at a somewhat amorphous set of configurations (different OSes, > distributions, etc.), and has some other goal beyond describing how to > start up a Python REPL. Being able to say "start up a Pylab shell and > ..." will already get us quite a long way, so long as all "pylab > shells" are equivalent in the ways that matter for the tutorial. Being > able to plot matters much more than where exactly in the UI those plot > windows end up located. If it was as simple as where plot windows turned up, yes. But there are a whole host of issues from starting the interface to syntax like "foo?" that differ. They might seem trivial, but they're all things that will confuse new users going through a tutorial. So I think a standard interface is important.
Pylab distributions should include IPython, but need not be able to run IPython notebook, i.e. pyzmq and tornado would not be required packages. >> P.S. Nathaniel: you describe trying to set conventions for how to test >> packages or find documentation, not to mention installing packages. I >> agree with all of that, but I'd like to put it under 'battles to fight >> another day'. If we're going to do anything, we can't try to do >> everything. > > Really I'm trying to work toward consensus on what "pylab" should mean > and what's in principle in scope. If we agree that these are the kinds > of things that pylab should stand for, then they can go on the todo > list until someone actually gets around to them. Anyway, volunteer > effort isn't a conserved quantity. Having a long todo list can > actually increase your chances of helpers showing up :-). I think those things are out of scope for 'Pylab', in that they shouldn't be part of the standard. But in doing this sort of integration, we will highlight those inconsistencies, and I hope this community will play a key role in fixing them. So by all means put them on the to do list, but let's not get distracted by them at the moment. Travis: > But, already in IPython the %pylab command (or ipython --pylab) creates a workspace where at least numpy > names and matplotlib plotting names are available --- this should be resolved so that either there are two > approaches clearly labeled or just one. To clarify the background, installing matplotlib also installs a 'pylab' module which imports a number of names from matplotlib and numpy. IPython's pylab mode then does 'from pylab import *'. One of my secondary goals for this effort is to disentangle that namespace from being part of matplotlib, but that's for another day. Thanks, Thomas From ralf.gommers at gmail.com Fri Sep 21 12:22:24 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Sep 2012 18:22:24 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 5:50 PM, Thomas Kluyver wrote: > On 21 September 2012 15:39, Nathaniel Smith wrote: > > I think by definition, a "pylab tutorial" would have to be one that is > > aimed at a somewhat amorphous set of configurations (different OSes, > > distributions, etc.), and has some other goal beyond describing how to > > start up a Python REPL. Being able to say "start up a Pylab shell and > > ..." will already get us quite a long way, so long as all "pylab > > shells" are equivalent in the ways that matter for the tutorial. Being > > able to plot matters much more than where exactly in the UI those plot > > windows end up located. > > If it was as simple as where plot windows turned up, yes. But there > are a whole host of issues from starting the interface to syntax like > "foo?" that differ. They might seem trivial, but they're all things > that will confuse new users going through a tutorial. So I think a > standard interface is important. > +1 if you mean the IPython REPL with "interface". This is quite comprehensive already as an introductory tutorial: http://scipy-lectures.github.com/. And pretty much the first thing it introduces is IPython. > >> I'd rather not say 'it has to work x much like IPython, but it needn't > >> be IPython'. > > > > Indeed, that'd be a terrible idea. I mean "pylab shell" as "the shell > > you get by default when using a pylab-oriented environment (which will > > be implemented by ipython, and also satisfy these other potential > > requirements)". 
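The kind of interface-specific syntax at issue is easy to show. For example, in an IPython session (np.mean is just an arbitrary object to inspect):

    In [1]: import numpy as np

    In [2]: np.mean?     # a trailing '?' displays the signature and docstring

    In [3]: np.mean??    # '??' also shows the source, where available

Neither form is valid at a plain '>>>' prompt, which is exactly why a tutorial needs to know which shell the reader is sitting at.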
> > But what your requirements describe is a subset of IPython's > functionality, and I expect we'd extend that subset - the 'foo?' > syntax I mentioned is extremely useful, for instance. Rather than > people reimplementing that, why not simply push the code we've already > written for the purpose, i.e. IPython? > > > Right. And IPython-the-REPL is a de facto standard, but IPython > > notebooks are not. (At least not yet. Obviously with your IPython hat > > on you're working on changing that :-).) > > I'll agree that the notebook isn't quite a de facto standard, but it > has gained a lot of traction in a short time. At Euroscipy 2011, > Fernando demoed the first cut of the notebook, which had been merged > days earlier. At Euroscipy 2012, we had several demonstrations given > from notebooks. So even if the notebook isn't in the first version of > the standard, I hope it will get a place in the future. > > Is this a position we can find consensus on? Pylab distributions > should include IPython, but need not be able to run IPython notebook, > i.e. pyzmq and tornado would not be required packages. > +1 > > >> P.S. Nathaniel: you describe trying to set conventions for how to test > >> packages or find documentation, not to mention installing packages. I > >> agree with all of that, but I'd like to put it under 'battles to fight > >> another day'. If we're going to do anything, we can't try to do > >> everything. > > > > Really I'm trying to work toward consensus on what "pylab" should mean > > and what's in principle in scope. If we agree that these are the kinds > > of things that pylab should stand for, then they can go on the todo > > list until someone actually gets around to them. Anyway, volunteer > > effort isn't a conserved quantity. Having a long todo list can > > actually increase your chances of helpers showing up :-). > > I think those things are out of scope for 'Pylab', in that they > shouldn't be part of the standard. I agree with Nathaniel that they're important, and once the work is done to get all major distributions to do the right thing here, I don't see why they shouldn't become part of the standard then. Ralf > But in doing this sort of > integration, we will highlight those inconsistencies, and I hope this > community will play a key role in fixing them. So by all means put > them on the to do list, but let's not get distracted by them at the > moment. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 21 12:32:47 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Sep 2012 18:32:47 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 3:22 AM, Fernando Perez wrote: > On Wed, Sep 19, 2012 at 12:24 PM, Ralf Gommers > wrote: > > > > The part of IPython that's pretty much universal is the command line > > interface, so I don't see a problem with setting the minimum version to > > 0.10. > > I really hope not. The internals of IPython, the configuration, the > usage, everything, is *completely* different in the 0.10 series and > afterwards. We don't maintain or support 0.10 anymore. It would be a > huge disservice to users to get them going on something that's already > obsolete. > The point of that remark was that we shouldn't set requirements that will say "Python(x,y) / Spyder isn't compliant", nothing more than that. Of course I'd prefer the latest and greatest. If IPython in Spyder will be updated soon it's a non-issue. 
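For what it's worth, a compliance requirement like this could eventually be checked mechanically. A rough sketch with stdlib-only tools -- the 0.12 threshold is just the number under discussion, nothing settled:

    # sketch: check one package of a draft pylab spec
    from distutils.version import LooseVersion
    import IPython

    if LooseVersion(IPython.__version__) < LooseVersion('0.12'):
        print("not compliant: IPython >= 0.12 required")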
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...

From takowl at gmail.com Fri Sep 21 13:04:06 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Fri, 21 Sep 2012 18:04:06 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

To stay focussed, here's a current draft, based on Fernando's slide of the pylab ecosystem:

Python (2.x >= 2.6 or 3.x* >= 3.2)
Numpy (>= 1.5)
Scipy (>= 0.10)
Matplotlib (>= 1.1)
IPython (>= 0.12; hopefully we can get this sorted out for Python(x,y). Notebook ability not a requirement.)
SymPy

Versions still to be determined:
pandas
StatsModels
scikits-learn
scikits-image
PyTables
NetworkX

* About Python 3: it's currently blocked by matplotlib (fixed by the imminent 1.2 release), SymPy (work done by a GSoC student, waiting for a release?) and PyTables (commit log shows work towards Python 3 support). Hopefully in a few months it will be possible to build a Python 3 distribution meeting the standard.

My only concern about including this many packages is that over half of them aren't in EPD Free, and I'm not sure how Enthought would feel about expanding their free tier that much (I'm sure there are Enthought people on this list - feel free to chime in). I think it's important that there is a free, cross-platform distribution that users can download without worrying about licensing. Maybe we should point users to the more comprehensive Anaconda CE.

Thanks,
Thomas

From jsseabold at gmail.com Fri Sep 21 13:15:17 2012
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 21 Sep 2012 13:15:17 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 1:04 PM, Thomas Kluyver wrote:
> To stay focussed, here's a current draft, based on Fernando's slide of
> the pylab ecosystem:
>
> Python (2.x >= 2.6 or 3.x* >= 3.2)
> Numpy (>= 1.5)
> Scipy (>= 0.10)
> Matplotlib (>= 1.1)
> IPython (>= 0.12; hopefully we can get this sorted out for
> Python(x,y). Notebook ability not a requirement.)
> SymPy
>
> Versions still to be determined:
> pandas
> StatsModels
> scikits-learn
> scikits-image
> PyTables
> NetworkX

This sounds great. A few others I usually put in a fresh install:

mpmath
sphinx

I also like the idea of having (configurable) default imports (**with namespaces**), though I'd think we might discuss what exactly gets imported by default and what the standards will be. Everyone has different needs of course, but I'm sure there's some commonality. Mine looks something like

import numpy as np
np.set_printoptions(suppress=True)
import statsmodels.api as sm
import pandas
import sklearn
from scipy import stats, optimize
import matplotlib.pyplot as plt

Skipper
-------------- next part --------------
An HTML attachment was scrubbed...

From takowl at gmail.com Fri Sep 21 13:36:26 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Fri, 21 Sep 2012 18:36:26 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On 21 September 2012 18:15, Skipper Seabold wrote:
> This sounds great. A few others I usually put in a fresh install:
>
> mpmath
> sphinx

I don't know about mpmath. I probably wouldn't include Sphinx in the spec, as it's important more when you're developing and releasing packages. But it does return us to the question about general-purpose Python packages. Should we require distribute, for example - or just specify that there must be a package installation mechanism? What about popular tools like requests?
Or things like GUI toolkits that are difficult to install separately? Although PyQt would rather increase the minimum size.

> I also like the idea of having (configurable) default imports (**with
> namespaces**)

I wondered when this question would come up. ;-) One of the pieces of baggage the Pylab name comes with is the relatively flat namespace of the pylab module. I think we need to leave that reasonably intact, just to avoid annoying all the people who're familiar with it. We could define something like a pylab2 module, and encourage people to use that. But I suggest the namespaces vs. flat debate is something we postpone to a later discussion.

Thanks,
Thomas

From jsseabold at gmail.com Fri Sep 21 13:47:41 2012
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 21 Sep 2012 13:47:41 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 1:36 PM, Thomas Kluyver wrote:
> On 21 September 2012 18:15, Skipper Seabold wrote:
> > This sounds great. A few others I usually put in a fresh install:
> >
> > mpmath
> > sphinx
>
> I don't know about mpmath. I probably wouldn't include Sphinx in the
> spec, as it's important more when you're developing and releasing
> packages. But it does return us to the question about general-purpose
> Python packages. Should we require distribute, for example - or just
> specify that there must be a package installation mechanism? What
> about popular tools like requests? Or things like GUI toolkits that
> are difficult to install separately? Although PyQt would rather
> increase the minimum size.

I like the idea of trying to emulate something like R's install.package (eventually). This, to me, is one of the reasons it's so successful. The target audience, as I think it is for pylab, is users - people that are proficient at writing scripts and generally smart problem solvers but not necessarily extremely great programmers. For example, I don't think there's an assumption that the average R user has working knowledge of how to build a package from scratch. Developers, on the other hand, don't need too much hand holding to get the other tools they need - e.g., compilers, sphinx probably falls in here, etc. If having things like distribute in the package helps move us in this direction (would it?), then I think that's a good argument for including it.

> > I also like the idea of having (configurable) default imports (**with
> > namespaces**)
>
> I wondered when this question would come up. ;-) One of the pieces of
> baggage the Pylab name comes with is the relatively flat namespace
> of the pylab module. I think we need to leave that reasonably intact,
> just to avoid annoying all the people who're familiar with it. We
> could define something like a pylab2 module, and encourage people to
> use that. But I suggest the namespaces vs. flat debate is something we
> postpone to a later discussion.

I will reserve my counter-arguments (and shouting and wild gesticulations) until this later date.

Skipper
-------------- next part --------------
An HTML attachment was scrubbed...

From ralf.gommers at gmail.com Fri Sep 21 14:25:49 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 21 Sep 2012 20:25:49 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 7:47 PM, Skipper Seabold wrote:
> On Fri, Sep 21, 2012 at 1:36 PM, Thomas Kluyver wrote:
>> On 21 September 2012 18:15, Skipper Seabold wrote:
>> > This sounds great.
A few others I usually put in a fresh install. >> > >> > mpmath >> > sphinx >> >> I don't know about mpmath. I probably wouldn't include Sphinx in the >> spec, as it's important more when you're developing and releasing >> packages. But it does return us to the question about general-purpose >> Python packages. Should we require distribute, for example - or just >> specify that there must be a package installation mechanism? > > > What about popular tools like requests? > > Never heard about it before just now, and not really related to scientific computing. Or things like GUI toolkits that are difficult to install separately? >> Although PyQt would rather >> increase the minimum size. >> > We're talking about the base (non-compiler) version, right? Then -1 on GUI toolkits. We need nose to run tests for almost all packages. I'd further suggest PIL (pillow) -- still popular .... sigh. Or better actually, make sure FreeImage is in with scikits-image. h5py seems to be close to PyTables in popularity and growing faster, perhaps include both? I like the idea of trying to emulate something like R's install.package > (eventually). This, to me, is one of the reasons it's so successful. The > target audience, as I think it is for pylab, is users - people that are > proficient at writing scripts and generally smart problem solvers but not > necessarily extremely great programmers. For example, I don't think there's > an assumption that the average R user has working knowledge of how to build > a package from scratch. Developers, on the other hand, don't need too much > hand holding to get the other tools they need - e.g., compilers, sphinx > probably falls in here, etc. If having things like distribute in the > package helps move us in this direction (would it?), then I think that's a > good argument for including it. > Before something like a robust "install.package" is a reality, I'm not sure requiring setuptools/distribute/pip/... is useful. It breaks all the time, which will give new users a poor impression of Pylab (or Python). Python(x,y)'s solution of plugins as .exe files is much less likely to break if done right. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Sep 21 15:39:37 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 20:39:37 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 4:50 PM, Thomas Kluyver wrote: > On 21 September 2012 15:39, Nathaniel Smith wrote: >> I think by definition, a "pylab tutorial" would have to be one that is >> aimed at a somewhat amorphous set of configurations (different OSes, >> distributions, etc.), and has some other goal beyond describing how to >> start up a Python REPL. Being able to say "start up a Pylab shell and >> ..." will already get us quite a long way, so long as all "pylab >> shells" are equivalent in the ways that matter for the tutorial. Being >> able to plot matters much more than where exactly in the UI those plot >> windows end up located. > > If it was as simple as where plot windows turned up, yes. But there > are a whole host of issues from starting the interface to syntax like > "foo?" that differ. They might seem trivial, but they're all things > that will confuse new users going through a tutorial. So I think a > standard interface is important. I'm not sure how we keep talking past each other here. Let me try again :-). 
What I'm suggesting is that in a "pylab" system, the default thing that happens when you ask for a shell is that: - you get an IPython REPL - and also this IPython REPL has plotting mainloop integration junk already set up for you - and also this IPython REPL has some stuff added to its default namespace - etc. I just think we should remain agnostic for now about how you "ask for a shell" -- whether that's opening an ipython notebook, or clicking Interpreter -> New in Spyder, or whatever. >>> P.S. Nathaniel: you describe trying to set conventions for how to test >>> packages or find documentation, not to mention installing packages. I >>> agree with all of that, but I'd like to put it under 'battles to fight >>> another day'. If we're going to do anything, we can't try to do >>> everything. >> >> Really I'm trying to work toward consensus on what "pylab" should mean >> and what's in principle in scope. If we agree that these are the kinds >> of things that pylab should stand for, then they can go on the todo >> list until someone actually gets around to them. Anyway, volunteer >> effort isn't a conserved quantity. Having a long todo list can >> actually increase your chances of helpers showing up :-). > > I think those things are out of scope for 'Pylab', in that they > shouldn't be part of the standard. But in doing this sort of > integration, we will highlight those inconsistencies, and I hope this > community will play a key role in fixing them. So by all means put > them on the to do list, but let's not get distracted by them at the > moment. Can you elaborate on what you think Pylab's scope is, then? My attempt is this quote from the original email: "Basically I feel like the main value a 'pylab brand' can provide is by finding places where user quality-of-life can be improved by standardizing and documenting conventions, and then evangelizing those to package and distribution authors." I think of Pylab as a banner for making the cross-package overall user experience better. That certainly includes creating standards for available packages and UI conventions, but making documentation searchable and making it easy to install domain-specific packages would clearly be in scope as well. But that's just my image of what pylab could mean, you should share yours too... -n From njs at pobox.com Fri Sep 21 15:54:45 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 20:54:45 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 7:25 PM, Ralf Gommers wrote: > On Fri, Sep 21, 2012 at 7:47 PM, Skipper Seabold > wrote: >> On Fri, Sep 21, 2012 at 1:36 PM, Thomas Kluyver wrote: >>> >>> On 21 September 2012 18:15, Skipper Seabold wrote: >>> > This sounds great. A few others I usually put in a fresh install. >>> > >>> > mpmath >>> > sphinx >>> >>> I don't know about mpmath. I probably wouldn't include Sphinx in the >>> spec, as it's important more when you're developing and releasing >>> packages. But it does return us to the question about general-purpose >>> Python packages. Should we require distribute, for example - or just >>> specify that there must be a package installation mechanism? I'd be inclined to include the core packages that might be expected for simple package development: virtualenv, distribute, sphinx, nose. Going back to the R example, my experience is that a *lot* of people write and distribute tiny R packages. 
Including people I wouldn't have expected to, and they're not always terribly professional, but they sort of dip their toes in the water and go from there. We should encourage a gentle slope from hacking together some algorithm for a paper -> releasing that algorithm on PyPI. Putting together a simple Python package is *really* easy once you know how -- 10 lines of setup.py, 'python setup.py register; python setup.py sdist upload' -- and sphinx gives a compelling infrastructure for writing docs, etc.

>>> What about popular tools like requests?
>
> Never heard about it before just now, and not really related to scientific
> computing.
>
>>> Or things like GUI toolkits that are difficult to install separately?
>>> Although PyQt would rather
>>> increase the minimum size.
>
> We're talking about the base (non-compiler) version, right? Then -1 on GUI
> toolkits.

PyQt will generally get dragged along with matplotlib, won't it?

>> I like the idea of trying to emulate something like R's install.package
>> (eventually). This, to me, is one of the reasons it's so successful. The
>> target audience, as I think it is for pylab, is users - people that are
>> proficient at writing scripts and generally smart problem solvers but not
>> necessarily extremely great programmers. For example, I don't think there's
>> an assumption that the average R user has working knowledge of how to build
>> a package from scratch. Developers, on the other hand, don't need too much
>> hand holding to get the other tools they need - e.g., compilers, sphinx
>> probably falls in here, etc. If having things like distribute in the package
>> helps move us in this direction (would it?), then I think that's a good
>> argument for including it.
>
> Before something like a robust "install.package" is a reality, I'm not sure
> requiring setuptools/distribute/pip/... is useful. It breaks all the time,
> which will give new users a poor impression of Pylab (or Python).
> Python(x,y)'s solution of plugins as .exe files is much less likely to break
> if done right.

It works great for plugins that they've put together and distributed and are up to date with the version you need and etc., but there are >24,000 packages on PyPI. IME pip failures come down to:
- packages that use numpy.distutils but numpy isn't installed
- packages that need a compiler
- packages that have some elaborate library dependencies (like suitesparse or whatever)

The first two are easily solved. And if that only gives people access to 22,000 packages or so, then oh well... I can't see how it would be better to have *no* library installation method.

I think every project I've used R on I've ended up wanting to upgrade some package at some point. E.g. if I want pandas 0.8 and I'm using Python(x,y), then 'pip' will probably just work, and it's currently the *only* option (they're still distributing 0.7, but they do include a compiler).

(Okay, I admit that part of this is just that I want to stop twitching every time I see a tutorial that includes the phrase "sudo python setup.py...".)

-n

From ralf.gommers at gmail.com Fri Sep 21 16:19:04 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 21 Sep 2012 22:19:04 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 9:54 PM, Nathaniel Smith wrote:
> On Fri, Sep 21, 2012 at 7:25 PM, Ralf Gommers wrote:
>
> >>> Or things like GUI toolkits that are difficult to install separately?
> >>> Although PyQt would rather > >>> increase the minimum size. > > > > > > We're talking about the base (non-compiler) version, right? Then -1 on > GUI > > toolkits. > > PyQt will generally get dragged along with matplotlib, won't it? > No, at least I've never got it that way. PyQt is an optional dependency for matplotlib. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 21 16:28:35 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Sep 2012 22:28:35 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:54 PM, Nathaniel Smith wrote: > On Fri, Sep 21, 2012 at 7:25 PM, Ralf Gommers > wrote: > > On Fri, Sep 21, 2012 at 7:47 PM, Skipper Seabold > > wrote: > > >> I like the idea of trying to emulate something like R's install.package > >> (eventually). This, to me, is one of the reasons it's so successful. The > >> target audience, as I think it is for pylab, is users - people that are > >> proficient at writing scripts and generally smart problem solvers but > not > >> necessarily extremely great programmers. For example, I don't think > there's > >> an assumption that the average R user has working knowledge of how to > build > >> a package from scratch. Developers, on the other hand, don't need too > much > >> hand holding to get the other tools they need - e.g., compilers, sphinx > >> probably falls in here, etc. If having things like distribute in the > package > >> helps move us in this direction (would it?), then I think that's a good > >> argument for including it. > > > > > > Before something like a robust "install.package" is a reality, I'm not > sure > > requiring setuptools/distribute/pip/... is useful. It breaks all the > time, > > which will give new users a poor impression of Pylab (or Python). > > Python(x,y)'s solution of plugins as .exe files is much less likely to > break > > if done right. > > It works great for plugins that they've put together and distributed > and are up to date with the version you need and etc., but there are > >24,000 packages on PyPI. IME pip failures come down to: > - packages that use numpy.distutils but numpy isn't installed > - packages that need a compiler > - packages that have some elaborate library dependencies (like > suitesparse or whatever) > The first two are easily solved. I thought we're first talking about the non-compiler-basic-pylab. So item 2 isn't solved. Add to your list "I work in a company, and yes they have a firewall, and no I don't have admin rights". Ralf And if that only gives people access > to 22,000 packages or so, then oh well... I can't see how it would be > better to have *no* library installation method. > > I think every project I've used R on I've ended up wanting to upgrade > some package at some point. E.g. if I want pandas 0.8 and I'm using > Python(x,y), then 'pip' will probably just work, and it's currently > the *only* option (they're still distributing 0.7, but they do include > a compiler). > > (Okay, I admit that part of this is just that I want to stop twitching > every time I see a tutorial that includes the phrase "sudo python > setup.py...".) > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From josef.pktd at gmail.com Fri Sep 21 16:32:12 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 21 Sep 2012 16:32:12 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 3:54 PM, Nathaniel Smith wrote:
> [...]
> I'd be inclined to include the core packages that might be expected
> for simple package development: virtualenv, distribute, sphinx, nose.
> Going back to the R example, my experience is that a *lot* of people
> write and distribute tiny R packages. Including people I wouldn't have
> expected to, and they're not always terribly professional, but they
> sort of dip their toes in the water and go from there. We should
> encourage a gentle slope from hacking together some algorithm for a
> paper -> releasing that algorithm on PyPI. Putting together a simple
> Python package is *really* easy once you know how -- 10 lines of
> setup.py, 'python setup.py register; python setup.py sdist upload' --
> and sphinx gives a compelling infrastructure for writing docs, etc.

I think this could be very helpful, especially with a template package structure -- something along the lines of the sketch below. There once was a scikits template, but it was ages out of date when I last looked at it.
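For reference, the kind of 10-line setup.py a template could start from -- a minimal, hypothetical sketch (name and metadata made up, pure-Python package, plain distutils):

    # setup.py for a hypothetical pure-Python package 'mypkg'
    from distutils.core import setup

    setup(
        name='mypkg',
        version='0.1',
        description='Algorithm from my paper',
        author='A. Scientist',
        author_email='a.scientist@example.org',
        url='http://example.org/mypkg',
        packages=['mypkg'],  # the mypkg/ directory containing __init__.py
    )

and then 'python setup.py register; python setup.py sdist upload', as above.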
>> Before something like a robust "install.package" is a reality, I'm not sure
>> requiring setuptools/distribute/pip/... is useful. It breaks all the time,
>> which will give new users a poor impression of Pylab (or Python).
>> Python(x,y)'s solution of plugins as .exe files is much less likely to break
>> if done right.
>
> It works great for plugins that they've put together and distributed
> and are up to date with the version you need and etc., but there are
> >24,000 packages on PyPI. IME pip failures come down to:
> - packages that use numpy.distutils but numpy isn't installed
> - packages that need a compiler
> - packages that have some elaborate library dependencies (like
> suitesparse or whatever)

The main failure of pip: no installation of binaries. easy_install xxx is still my favorite (on Windows), except when it doesn't find any compatible binaries.

Josef

> The first two are easily solved. And if that only gives people access
> to 22,000 packages or so, then oh well... I can't see how it would be
> better to have *no* library installation method.
>
> I think every project I've used R on I've ended up wanting to upgrade
> some package at some point. E.g. if I want pandas 0.8 and I'm using
> Python(x,y), then 'pip' will probably just work, and it's currently
> the *only* option (they're still distributing 0.7, but they do include
> a compiler).
>
> (Okay, I admit that part of this is just that I want to stop twitching
> every time I see a tutorial that includes the phrase "sudo python
> setup.py...".)
>
> -n

From fperez.net at gmail.com Fri Sep 21 16:38:50 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 13:38:50 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

Warning: what follows is a highly opinionated, completely biased post. I'll be using a 'we' that refers to the IPython developers, because the credit for much of what I talk about goes to the whole team, but ultimately the rant is my responsibility, so flame me if need be.

self.put_hat(kind='IPython').

I think it's important to address directly the question of the IPython notebook. I realize that not everybody uses it, and it has some extra dependencies (though they are really easy ones to satisfy). But I also think it's an important discussion that goes to the question of whether we are simply trying to play catch-up with what matlab/R-Rstudio offer, or to be truly forward-looking and rethink how scientific computing will be done for the coming decade. Needless to say, I have little interest in the former and am putting all my energy into the latter: if it were otherwise, I'd have been contributing to Octave for the last 10 years instead.

My argument, in short: we should consider *some* notebook-type tool as a first-class citizen of this effort, for the simple reason that such an approach is one whose time has come. A notebook environment is the only tool that truly tackles in an integrated manner the problem that we've been referring to as the 'lifecycle of a scientific idea' (https://speakerdeck.com/u/fperez/p/ipython-tools-for-the-lifecycle-of-research-computing?slide=3).

Context: all disciplines are becoming intensely computational, the need for real-time collaboration on live computational analysis is great, the pressures for moving towards truly reusable, reproducible work are coming from multiple angles (major journals, funding agencies, ...), we need a much smoother transition between analysis codes and publications, and we need better ways to share our analysis work over the internet, for education and for archival purposes.
Having a good IDE is a really important point, and my hat is off to the stellar work the Spyder team has done (and coincidentally, another Colombian physicist, Carlos Córdoba, is leading the charge on the spyder/ipython integration work). But to be blunt, a matlab-style IDE does not tackle the important questions above in any meaningful way.

In the last decade's worth of the pylab world (using our new moniker in its intended fashion), we've certainly taken inspiration from the major systems out there, but it has always been that: *inspiration*, never simple copying:

- John Hunter's brilliance with matplotlib was not so much to copy the high-level API and look/feel of plot windows to ease the transition from matlab. It was to rethink the question of what a plotting library should be, abstracting over GUI toolkits and an elegant OO architecture underneath the familiar scripting interface.

- Numpy's arrays are similar to matlab/fortran ones, obviously, but when used with the full power of slicing, fancy indexing and structured dtypes, they make matlab's look like the 1970's relic they are. Jim Hugunin, Perry and Travis led the way to build something that has no match.

- The one-man army that is Wes McKinney had R's DataFrame squarely in his sights when he built pandas, but he went far, far beyond the basic ideas in R to provide one of the most powerful packages we've seen in recent memory.

- etc... you get my point.

Now, as I said above, the scientific computing world is changing, and more importantly, a lot of things in the broader scientific world are also undergoing very drastic changes: the push for open access, data sharing and reproducibility of results is likely to make a lot of things look very different in 10 years than they do now. We can argue that the whole online education wave of Coursera/Udacity/EdX is a bit of a bubble, but there's no denying the internet will play a role in how scientists are trained both in and out of traditional academia.

I argue that, after having spent the last decade building up the pylab foundations to be competitive with the 'big boys', we are uniquely well positioned to stop following and actually lead on many of these problems. And for that, my contention is that it is absolutely necessary to have:

- A tool that bridges the gaps between exploratory work, collaboration, production, publication and education.

- An open format for sharing, publishing and archiving executable computational work.

- A system that is accessible through the browser, so that computation can be located where the data is, since we can't move the data to the desktop anymore. Remote collaboration also is most sensibly tackled via a browser, as google docs has amply demonstrated.

Up until now I have *not* said that we should use the *IPython* notebook. Our efforts on this front are, I am sure, full of limitations and imperfections. But if we're not going to tackle the problems above, I would like it to be with an explicit decision on whether it is because:

1. this community only wants to stick to a traditional shell+editor/IDE approach.

2. the IPython solution is the wrong one, it has technical flaws, etc.

If it's #1, I think it would be a huge, huge mistake and one of lack of foresight, ambition and vision. If that's the decision, I'm sure that we in the IPython team will simply continue fighting for that vision on our own, as we are pretty convinced it's the right thing to do.
And evidence is mounting that others think the same too: - Michigan State University is teaching *two* courses on advanced genomics that are heavily notebook based: http://ged.msu.edu/angus/beacon-2012/index.html, https://github.com/ngs-docs/ngs-notebooks. - At Berkeley we have (but this is not driven by me) both an intensive bootcamp and a semester-long course on scientific python with the same: https://github.com/profjsb/python-bootcamp, https://github.com/profjsb/python-seminar. - We can now blog straight off the notebook (http://blog.fperez.org/2012/09/blogging-with-ipython-notebook.html), and Jose Unpingco is effectively writing a full book on signal processing as a series of blog posts that are notebooks: http://python-for-signal-processing.blogspot.com. - there's more, just google it. Now, if the reluctance is to go with the *IPython* notebook, then I'd like to know what the alternative is. We have effectively put 10 years of work into this problem, and the current implementation is the third or fourth attempt (http://blog.fperez.org/2012/01/ipython-notebook-historical.html). We know it's by no means perfect, but honestly I think it would be a lot more sensible to fix whatever our limitations are than to start yet once more from scratch. So by all means beat on the format, work with us to improve it so it meets your needs, let us know what's wrong with it or help us improve the tooling around it (ipython itself, the nbconvert tools, the nbviewer.ipython.org site, etc...). But to be blunt, please don't think that ignoring 10 years of work on this problem is the right approach. In summary, I think that sticking to a shell+editor/IDE view of the problem would be missing a huge opportunity to play a key role in shaping the next decade's worth of scientific computing. And by the way, it's not like the others are standing still here: - Wolfram is busy at work promoting a closed, highly proprietary idea (http://www.wolfram.com/cdf-player). - Matlab is building a solution around Microsoft Word: http://www.mathworks.com/help/matlab/matlab_prog/create-a-matlab-notebook-with-microsoft-word.html. They have a huge market share and resources, so they can and will push pretty deep with this. - The R community has rapidly banded behind knitr (http://yihui.name/knitr). If the pylab community decides to not tackle this problem (and opportunity!) head-on, at least from IPython we will continue. I currently have 5 grants in the pipeline all of which would provide, if funded, some measure of support for this kind of work. We all know funding is a crap shoot, but even if only some of them go through we should have a decent amount of resources not only for our (this includes Brian, who's also involved with several) own time but also for students, postdocs and developers, to tackle this. And I simply view it as too important not to continue fighting in this direction. Now, after all this rant, I want to make clear that I'm *not* saying that we should stop talking about the simple shell or that everyone should switch to *only* using notebooks. One important property of the IPython notebooks is that it is very easy to generate a pure .py script out of any notebook, any time (and we know how to improve those conversion facilities quite a bit). 
So even if a project decides to ship all of its examples as notebooks, it's trivial to ensure that they are also accessible in pure script form to be run from the command line or loaded into spyder/IDLE/etc as well as converted to clean html in the sphinx-built documentation. Furthermore, the notebook is not the tool for building large-scale library code, so there will always be a place for emacs/vim/textmate/spyder, where the focus is more on the 'development' than the interactive exploration/analysis. But having notebooks in the projects, once we also build tools for cross-project help indexing, will let us provide users with powerful help that can search for a term across all the installed pylab-compliant tools and will give one-click access to live, executable examples they can modify immediately. Mathematica has had this for over a decade and it is absolutely extraordinary. The same tools can also index the pure .py versions, of course, but after 5 years of not having a Mathematica license, I still miss this every time I have to trawl multiple online galleries looking for something in the pylab world. OK, I doubt anyone is reading by now, so I'll stop here... Flame away. f From fperez.net at gmail.com Fri Sep 21 16:40:18 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 13:40:18 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:22 AM, Ralf Gommers wrote: > I agree with Nathaniel that they're important, and once the work is done to > get all major distributions to do the right thing here, I don't see why they > shouldn't become part of the standard then. I also think that the user-facing install and cross-package federated conventions for help, docs, etc, are hugely important. A discussion for a later day, to be sure, but a key part of this vision as well stated by Nathaniel. f From fperez.net at gmail.com Fri Sep 21 16:42:27 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 13:42:27 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:32 AM, Ralf Gommers wrote: > The point of that remark was that we shouldn't set requirements that will > say "Python(x,y) / Spyder isn't compliant", nothing more than that. Actually, we *should* define something that is almost certainly *not* met by today's versions of epd/pythonxy/anaconda/sage/whatever. If we don't, the logical implication is that we'll be defining the minimum common denominator of all of them, and that would be way wrong. What we need to find is a bar that all these projects can clear in a reasonable time-frame (say 6 months, give or take) and without undue burden on their development resources. But not bury the bar underground so that everyone has already cleared it where they're sitting. Cheers, f From njs at pobox.com Fri Sep 21 16:43:08 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 21:43:08 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:28 PM, Ralf Gommers wrote: > > > On Fri, Sep 21, 2012 at 9:54 PM, Nathaniel Smith wrote: >> >> On Fri, Sep 21, 2012 at 7:25 PM, Ralf Gommers >> wrote: >> > On Fri, Sep 21, 2012 at 7:47 PM, Skipper Seabold >> > wrote: >> >> >> I like the idea of trying to emulate something like R's install.package >> >> (eventually). This, to me, is one of the reasons it's so successful. 
>> >> The >> >> target audience, as I think it is for pylab, is users - people that are >> >> proficient at writing scripts and generally smart problem solvers but >> >> not >> >> necessarily extremely great programmers. For example, I don't think >> >> there's >> >> an assumption that the average R user has working knowledge of how to >> >> build >> >> a package from scratch. Developers, on the other hand, don't need too >> >> much >> >> hand holding to get the other tools they need - e.g., compilers, sphinx >> >> probably falls in here, etc. If having things like distribute in the >> >> package >> >> helps move us in this direction (would it?), then I think that's a good >> >> argument for including it. >> > >> > >> > Before something like a robust "install.package" is a reality, I'm not >> > sure >> > requiring setuptools/distribute/pip/... is useful. It breaks all the >> > time, >> > which will give new users a poor impression of Pylab (or Python). >> > Python(x,y)'s solution of plugins as .exe files is much less likely to >> > break >> > if done right. >> >> It works great for plugins that they've put together and distributed >> and are up to date with the version you need and etc., but there are >> >24,000 packages on PyPI. IME pip failures come down to: >> - packages that use numpy.distutils but numpy isn't installed >> - packages that need a compiler >> >> - packages that have some elaborate library dependencies (like >> suitesparse or whatever) >> The first two are easily solved. > > I thought we're first talking about the non-compiler-basic-pylab. So item 2 > isn't solved. Item 2 is easily solved by including a compiler, which is why including a compiler is such a good idea :-). But even so, I don't see how it would be such a tragedy if the users who clicked the "Minimal install" link ended up getting an error saying hey, you can't install this package unless you download a "full pylab". That still counts as easily solved from the user's point of view, it's not an indictment of Pylab as a whole. > Add to your list "I work in a company, and yes they have a > firewall, and no I don't have admin rights". If your firewall doesn't allow you to download files over HTTP, then you have my condolences, but I think anybody who is really in that position is already in a sufficiently adversarial relationship with their sysadmin that it won't occur to them to blame Pylab/Python for the problem. Not sure what you mean about admin rights. Except if you mean Python's obnoxious habit of defaulting to installing things systemwide, which is the straightforwardly-fixable problem that I was suggesting pylab might want to intervene on in the first place. (Did you know that at least on Linux, and with moderately recent version of the various pieces involved, you can do 'pip install --user ' and you just get a single-user install, no muss or fuss? I only discovered this *yesterday*, and no idea whether it also works on other platforms or whether there's some slightly differently-colored hoop you have to jump through to get the same behaviour. But it's lovely -- finally I can stop beginning every new-user tutorial with "okay, first, let's set you up with a local virtualenv and modify your .bashrc to activate it". If only it were the default...) 
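To spell that out, the whole per-user workflow is roughly the following -- a sketch, assuming a Linux shell and a pip new enough to support --user; pandas is just an example package:

    # install into ~/.local instead of system-wide -- no sudo needed
    pip install --user pandas

    # scripts installed by packages land in ~/.local/bin, which may
    # still need adding to PATH by hand
    export PATH="$HOME/.local/bin:$PATH"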
-n From fperez.net at gmail.com Fri Sep 21 16:43:25 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 13:43:25 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 10:47 AM, Skipper Seabold wrote: > > I like the idea of trying to emulate something like R's install.package > (eventually). This, to me, is one of the reasons it's so successful. The > target audience, as I think it is for pylab, is users - people that are > proficient at writing scripts and generally smart problem solvers but not > necessarily extremely great programmers. For example, I don't think there's > an assumption that the average R user has working knowledge of how to build > a package from scratch. Developers, on the other hand, don't need too much > hand holding to get the other tools they need - e.g., compilers, sphinx > probably falls in here, etc. If having things like distribute in the package > helps move us in this direction (would it?), then I think that's a good > argument for including it. Sorry, I only credited before Nathaniel for some of these ideas, but I was also thinking about this post. Big +1 on this. From fperez.net at gmail.com Fri Sep 21 16:47:59 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 13:47:59 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 11:25 AM, Ralf Gommers wrote: > We're talking about the base (non-compiler) version, right? Then -1 on GUI > toolkits. As much as it pains me, I agree: the situation with Qt is really not easy. I don't know if pyside is yet a full replacement for PyQt, at least from what I've seen pyqt still works a lot better. And what with Nokia jettisoning Qt overboard, its future is by no means a clear one. That's a project of a complexity far too large for our limited resources to tackle, I'm afraid. There are three things that are hugely important but that due to their complexity I think we should shy away from, at least at the start and for the 'base' spec: - cython/numba/f2py: compilers needed (and possibly three different ones!) - mayavi: VTK and a GUI toolkit. - Qt I do use all of the above (well, not numba yet), but it's one thing to deploy this stuff on a personally-managed up to date linux box, another altogether to define a spec we can be sure will be met by everyone who will face the packaging/distribution nightmare across all platforms. f From cournape at gmail.com Fri Sep 21 16:49:52 2012 From: cournape at gmail.com (David Cournapeau) Date: Fri, 21 Sep 2012 21:49:52 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 6:47 PM, Skipper Seabold wrote: > On Fri, Sep 21, 2012 at 1:36 PM, Thomas Kluyver wrote: >> >> On 21 September 2012 18:15, Skipper Seabold wrote: >> > This sounds great. A few others I usually put in a fresh install. >> > >> > mpmath >> > sphinx >> >> I don't know about mpmath. I probably wouldn't include Sphinx in the >> spec, as it's important more when you're developing and releasing >> packages. But it does return us to the question about general-purpose >> Python packages. Should we require distribute, for example - or just >> specify that there must be a package installation mechanism? What >> about popular tools like requests? Or things like GUI toolkits that >> are difficult to install separately? Although PyQt would rather >> increase the minimum size. 
>
> I like the idea of trying to emulate something like R's install.package
> (eventually). This, to me, is one of the reasons it's so successful. The
> target audience, as I think it is for pylab, is users - people that are
> proficient at writing scripts and generally smart problem solvers but not
> necessarily extremely great programmers. For example, I don't think there's
> an assumption that the average R user has working knowledge of how to build
> a package from scratch. Developers, on the other hand, don't need too much
> hand holding to get the other tools they need - e.g., compilers, sphinx
> probably falls in here, etc. If having things like distribute in the package
> helps move us in this direction (would it?), then I think that's a good
> argument for including it.

I don't think distribute (which is just setuptools with a different set of bugs) is a solution to any problem we are facing. I agree R's install.package is a killer feature. FWIW, that has always been part of what I want to achieve with bento (I spent quite a bit of time around the R system when announcing what would become bento at SciPy India in 2009).

David

From fperez.net at gmail.com Fri Sep 21 16:52:44 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 13:52:44 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 12:54 PM, Nathaniel Smith wrote:
> I'd be inclined to include the core packages that might be expected
> for simple package development: virtualenv, distribute, sphinx, nose.
> Going back to the R example, my experience is that a *lot* of people
> write and distribute tiny R packages. Including people I wouldn't have
> expected to, and they're not always terribly professional, but they
> sort of dip their toes in the water and go from there. We should
> encourage a gentle slope from hacking together some algorithm for a
> paper -> releasing that algorithm on PyPI. Putting together a simple
> Python package is *really* easy once you know how -- 10 lines of
> setup.py, 'python setup.py register; python setup.py sdist upload' --
> and sphinx gives a compelling infrastructure for writing docs, etc.

Big +1 to this *idea*, though for the sake of realism and resources, I'd be inclined to tackle only the 'pure user' spec at first, let the dust settle, and then go for this approach a bit later. But you're right on the money that in the long term, we *must* make it really, really easy to erase the boundary between 'user' and 'developer' on all fronts.

From fperez.net at gmail.com Fri Sep 21 16:53:53 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 13:53:53 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 1:19 PM, Ralf Gommers wrote:
> No, at least I've never got it that way. PyQt is an optional dependency for
> matplotlib.

Yup, push comes to shove, Tk works: matplotlib supports it, and it's typically installed in most well-done Python builds. Though I think on OSX even the official binaries have lately been having all kinds of bizarre problems; I read something about that a while back on python-dev...
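For anyone checking what their own install actually falls back to, matplotlib will report its active backend -- a one-liner sketch:

    python -c "import matplotlib; print(matplotlib.get_backend())"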
From josef.pktd at gmail.com Fri Sep 21 16:57:11 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 21 Sep 2012 16:57:11 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: References: Message-ID:

On Fri, Sep 21, 2012 at 4:38 PM, Fernando Perez wrote:
> Warning: what follows is a highly opinionated, completely biased post.
> [...]
> OK, I doubt anyone is reading by now, so I'll stop here... Flame away.

No argument from me.
I just spent a day getting notebooks into the statsmodels documentation and trying to improve our html repr rendering. Notebooks are a great way of getting rendered and commented code, and all of the many previous attempts were half-baked. That's for the teaching side; I don't know much about the future of collaborative, parallel, cloud, ... interpreters. (Anaconda seems to have it built in.) (even if my development environment is spyder and eclipse.)

Josef

>
> f
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From fperez.net at gmail.com  Fri Sep 21 16:59:16 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 13:59:16 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 1:43 PM, Nathaniel Smith wrote:

> (Did you know that at least on Linux, and with moderately recent > versions of the various pieces involved, you can do 'pip install --user > <package>' and you just get a single-user install, no muss or fuss? I > only discovered this *yesterday*, and no idea whether it also works on > other platforms or whether there's some slightly differently-colored > hoop you have to jump through to get the same behaviour. But it's > lovely -- finally I can stop beginning every new-user tutorial with > "okay, first, let's set you up with a local virtualenv and modify your > .bashrc to activate it". If only it were the default...)

This was added to distutils itself in python 2.6: http://docs.python.org/whatsnew/2.6.html#pep-370-per-user-site-packages-directory. I don't know when pip started exposing it easily; I know you used to have to pass it in a more awkward way, but it was still possible.

You still have to do one thing at the bashrc level: any scripts a package provides will go into ~/.local/bin, which will need to be added to the user's *PATH*; it's only the handling of PYTHONPATH that becomes automatic now.
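To spell out where things land, the PEP 370 locations are easy to inspect from Python itself; a minimal sketch (the paths in the comments are Linux-flavored and purely illustrative):

    import os
    import site

    # PEP 370 (python >= 2.6): the per-user scheme that
    # 'pip install --user <package>' installs into.
    print(site.USER_BASE)   # e.g. ~/.local
    print(site.USER_SITE)   # e.g. ~/.local/lib/python2.7/site-packages

    # Scripts land here, and this is the one piece that is NOT
    # automatic: it has to be added to PATH by hand.
    print(os.path.join(site.USER_BASE, "bin"))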
Cheers,

f

From ralf.gommers at gmail.com  Fri Sep 21 17:00:19 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 21 Sep 2012 23:00:19 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 10:49 PM, David Cournapeau wrote:

> On Fri, Sep 21, 2012 at 6:47 PM, Skipper Seabold wrote:
> > On Fri, Sep 21, 2012 at 1:36 PM, Thomas Kluyver wrote:
> >> On 21 September 2012 18:15, Skipper Seabold wrote:
> >> > This sounds great. A few others I usually put in a fresh install.
> >> >
> >> > mpmath
> >> > sphinx
> >>
> >> I don't know about mpmath. I probably wouldn't include Sphinx in the >> spec, as it's important more when you're developing and releasing >> packages. But it does return us to the question about general-purpose >> Python packages. Should we require distribute, for example - or just >> specify that there must be a package installation mechanism? What >> about popular tools like requests? Or things like GUI toolkits that >> are difficult to install separately? Although PyQt would rather >> increase the minimum size.
> >
> > I like the idea of trying to emulate something like R's install.package > > (eventually). This, to me, is one of the reasons it's so successful. The > > target audience, as I think it is for pylab, is users - people that are > > proficient at writing scripts and generally smart problem solvers but not > > necessarily extremely great programmers. For example, I don't think there's > > an assumption that the average R user has working knowledge of how to build > > a package from scratch. Developers, on the other hand, don't need too much > > hand holding to get the other tools they need - e.g., compilers, sphinx > > probably falls in here, etc. If having things like distribute in the package > > helps move us in this direction (would it?), then I think that's a good > > argument for including it.
>
> I don't think distribute (which is just setuptools with a different > set of bugs) is a solution to any problem we are facing. I agree R > install.package is a killer feature. FWIW, that has always been part > of what I want to achieve with bento (I spent quite a bit of time > around the R system when announcing what would become bento at scipy india > in 2009).

So how far off is bentoshop?

Ralf

From cournape at gmail.com  Fri Sep 21 17:22:42 2012
From: cournape at gmail.com (David Cournapeau)
Date: Fri, 21 Sep 2012 22:22:42 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 10:00 PM, Ralf Gommers wrote:
> [...]
> So how far off is bentoshop?
Pretty far, but closer than 3 years ago :) I was merely pointing out that getting there will be difficult, and should IMO be orthogonal to the discussion.

David

From njs at pobox.com  Fri Sep 21 17:43:50 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 21 Sep 2012 22:43:50 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

Hi Fernando,

Excellent rant. To be clear, I have no objection at all to the idea of supporting notebooks in "pylab" -- my only concern is from the other direction entirely, that if the "pylab" idea is going to make a difference it will be because it gets people thinking and talking about how to work together towards shared goals that they already have, and by documenting and clarifying existing consensus. The IPython notebook has tons of buzz, you should obviously be proud of it, and maybe by next year it'll seem ridiculous to everyone that you would ever start a new Python user on anything else. But empirically, that's not true yet, and the way to get there is for you guys to continue kicking ass, not for "pylab" to legislate something. Trying would alienate people. So it's just a process and scope objection, nothing to do with the notebook idea at all.

-n

On Fri, Sep 21, 2012 at 9:38 PM, Fernando Perez wrote:
> [...]

From njs at pobox.com  Fri Sep 21 18:00:55 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 21 Sep 2012 23:00:55 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 9:47 PM, Fernando Perez wrote:

> On Fri, Sep 21, 2012 at 11:25 AM, Ralf Gommers wrote:
> >> We're talking about the base (non-compiler) version, right? Then -1 on GUI >> toolkits.
>
> As much as it pains me, I agree: the situation with Qt is really not > easy. I don't know if pyside is yet a full replacement for PyQt, at > least from what I've seen pyqt still works a lot better. And what > with Nokia jettisoning Qt overboard, its future is by no means a clear > one. That's a project of a complexity far too large for our limited > resources to tackle, I'm afraid.

This is off-topic, but my impression is that Qt is still incredibly strong. The team got "jettisoned" to another company who actually wants to continue developing Qt (and the project has been profitable forever), still have ~dozens of full-time contributors, and in the last few years have become a real open-source project with many outside contributors. And they have this strange foundation structure where basically if the KDE folks ever decide that whichever company supports Qt is screwing things up, then they can pull the plug and convert it into a pure open-source project.

Qt is really the only credible way of writing a serious cross-platform GUI. And even without that, in the X11 world it's clearly technically superior to GTK+ at this point, and AFAIK it's the best option for writing a pure Windows GUI with Python.

> There are three things that are hugely important but that due to their > complexity I think we should shy away from, at least at the start and > for the 'base' spec:
>
> - cython/numba/f2py: compilers needed (and possibly three different ones!)

FWIW, AFAICT every distribution we're talking about as "pylab candidates" already *does* include compilers (at least EPD, Anaconda, Python(x,y) do, and so does every Linux distribution, plus I assume most ways of getting a usable OS X system require this).
Numba's a different matter since I assume it has a rather more intimate relationship with LLVM internals, but cython at least is a pretty fundamental tool, at least as much so as many of these other packages we're talking about like pandas. > - mayavi: VTK and a GUI toolkit. > - Qt > > I do use all of the above (well, not numba yet), but it's one thing to > deploy this stuff on a personally-managed up to date linux box, > another altogether to define a spec we can be sure will be met by > everyone who will face the packaging/distribution nightmare across all > platforms. -n From njs at pobox.com Fri Sep 21 18:04:26 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 23:04:26 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:42 PM, Fernando Perez wrote: > On Fri, Sep 21, 2012 at 9:32 AM, Ralf Gommers wrote: >> The point of that remark was that we shouldn't set requirements that will >> say "Python(x,y) / Spyder isn't compliant", nothing more than that. > > Actually, we *should* define something that is almost certainly *not* > met by today's versions of epd/pythonxy/anaconda/sage/whatever. If we > don't, the logical implication is that we'll be defining the minimum > common denominator of all of them, and that would be way wrong. > > What we need to find is a bar that all these projects can clear in a > reasonable time-frame (say 6 months, give or take) and without undue > burden on their development resources. But not bury the bar > underground so that everyone has already cleared it where they're > sitting. BTW, is there currently a problem where these distributions are failing to update packages in a timely fashion? I'm a little confused about what these minimum version requirements are trying to accomplish. If we just want some distro to update some package, then filing a bug report seems like a more polite approach than forming a whole new "pylab system definition" :-). -n From njs at pobox.com Fri Sep 21 18:14:35 2012 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 21 Sep 2012 23:14:35 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 9:32 PM, wrote: > main failure of pip: no installation of binaries > > easy_install xxx > is still my favorite (on Windows) except when it doesn't find any > compatible binaries. Apparently this is a legitimately horrible technical issue, because it turns out that you and I can have two builds of the exact same version of Python on the exact same version of Windows and yet they will not be ABI compatible. (E.g., for version <3.3, you can choose different widths for the unicode type at compile time.) I suppose a possible medium-term solution for our purposes would be to add "must be ABI compatible with python.org-distributed python" to our list of requirements, and then hack our installation interface to check for and install such binaries. ("medium-term" because it would take too much work to count as a short-term solution, but significantly less than implementing a proper binary package and getting bentoshop off the ground.) 
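(To see which flavor a given build is, at least the unicode width is easy to probe from the interpreter -- a quick illustrative check:)

    import sys

    # Narrow builds store unicode in 2-byte units, wide builds in
    # 4-byte units; extensions compiled against one will not load
    # safely in the other.  PEP 393 removes the distinction in 3.3.
    if sys.maxunicode > 0xFFFF:
        print("wide (UCS-4) unicode build")
    else:
        print("narrow (UCS-2) unicode build")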
-n

From fperez.net at gmail.com  Fri Sep 21 18:17:00 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 15:17:00 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 3:00 PM, Nathaniel Smith wrote:
> FWIW, AFAICT every distribution we're talking about as "pylab > candidates" already *does* include compilers (at least EPD, Anaconda, > Python(x,y) do, and so does every Linux distribution, plus I assume > most ways of getting a usable OS X system require this).

Mmh, I thought none of them covered the hugely important OSX case. On OSX, Xcode is a separate download, or at least it used to be. It may have changed, I don't use OSX.

And speaking of windows, do they all include a compiler that works for both 32- and 64-bit versions, and that can build cython extensions regardless of how Python itself was compiled? I've never understood the soup of compatibility problems on Windows between the MS compilers and others...

Finally, none of them ship a fortran compiler that I know...

But in any case, *if* getting Cython working out of the box on OSX and windows 32- and 64-bit is actually easy and/or already done by all of them, that's awesome! I'd love to be proven wrong on my fears here...

f

From fperez.net at gmail.com  Fri Sep 21 18:26:49 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 15:26:49 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 2:43 PM, Nathaniel Smith wrote:
> But empirically, > that's not true yet, and the way to get there is for you guys to > continue kicking ass, not for "pylab" to legislate something. Trying > would alienate people. So it's just a process and scope objection, > nothing to do with the notebook idea at all.

Well, but the point of pylab *is* partly to 'legislate', since we're defining a spec. So it's a valid, relevant and I would argue important question. My contention is that

- *not* putting *a* notebook system into the spec is a mistake,
- if one is going to go in, the ipython one is the sensible choice.

Of course, the overall community may disagree and decide that they want pylab to be a spec that stays bounded by the 'shell + editor/ide' idea. I contend that's a mistake akin to saying that Octave is an intellectually interesting project, but I only have one vote here out of many.

BTW, Sage has (IMHO wisely, and we've obviously learned a ton from their work) squarely made the choice that putting a notebook system front and center in their efforts is the right approach. They have 6 years of empirical success backing that. I may not agree with William on all his choices, but I always take my hat off to his willingness to shoot crazy high and risk failure.

Cheers,

f

From fperez.net at gmail.com  Fri Sep 21 18:29:36 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 15:29:36 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 3:04 PM, Nathaniel Smith wrote:
> I'm a little confused > about what these minimum version requirements are trying to > accomplish.

I guess it's to ensure a known api level. The mental model I'm following is the LSB, but perhaps I'm off-base here...
http://en.wikipedia.org/wiki/Linux_Standard_Base

f

From josef.pktd at gmail.com  Fri Sep 21 19:04:30 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 21 Sep 2012 19:04:30 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 6:14 PM, Nathaniel Smith wrote:
> On Fri, Sep 21, 2012 at 9:32 PM,  wrote:
> >> main failure of pip: no installation of binaries
> >>
> >> easy_install xxx
> >> is still my favorite (on Windows) except when it doesn't find any >> compatible binaries.
>
> Apparently this is a legitimately horrible technical issue, because it > turns out that you and I can have two builds of the exact same version > of Python on the exact same version of Windows and yet they will not > be ABI compatible. (E.g., for version <3.3, you can choose different > widths for the unicode type at compile time.)

I've never seen problems with different Python build types, nor heard of different unicode widths on Windows.

Examples: pandas is compiled against numpy 1.6 and is ABI incompatible with numpy 1.5. I never managed to install a sklearn binary and I don't know which numpy they are built against. cvxopt build script requires python 2.7, and the Gohlke binaries require the mkl build of numpy.

but most of the time it just works, especially when I'm not using an outdated python 2.5 or 2.6 and numpy 1.5.

Josef

> I suppose a possible medium-term solution for our purposes would be to > add "must be ABI compatible with python.org-distributed python" to our > list of requirements, and then hack our installation interface to > check for and install such binaries. ("medium-term" because it would > take too much work to count as a short-term solution, but > significantly less than implementing a proper binary package and > getting bentoshop off the ground.)
>
> -n
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com  Fri Sep 21 19:27:16 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 21 Sep 2012 19:27:16 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 6:17 PM, Fernando Perez wrote:
> [...]
> But in any case, *if* getting Cython working out of the box on OSX and > windows 32- and 64-bit is actually easy and/or already done by all of > them, that's awesome! I'd love to be proven wrong on my fears here...

pythonxy, and I guess the others, only have MinGW for 32-bit; it includes gfortran, which is still incompatible with scipy. (Skipper is building the statsmodels binaries with Microsoft compilers. A quick check: sklearn binaries for 64-bit Windows point to Gohlke, which "Requires Numpy-MKL", not stock numpy; pymc (requires Fortran) is 32-bit binaries only.)

I don't see anything for 64-bit Windows outside of Microsoft and Intel compilers, and binary distributions that use those.
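For concreteness, "building an extension" here means a recipe like the classic Cython one below, and whichever C compiler distutils picks up (MSVC, MinGW, ...) is exactly where the incompatibilities bite. A minimal sketch, with a hypothetical hello.pyx module:

    # setup.py -- classic Cython + distutils build; run with
    #   python setup.py build_ext --inplace
    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Distutils import build_ext

    setup(
        name="hello",
        cmdclass={"build_ext": build_ext},
        ext_modules=[Extension("hello", ["hello.pyx"])],
    )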
Josef

From takowl at gmail.com  Fri Sep 21 19:46:00 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 22 Sep 2012 00:46:00 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 21 September 2012 23:26, Fernando Perez wrote:
> Well, but the point of pylab *is* partly to 'legislate', since we're > defining a spec. So it's a valid, relevant and I would argue > important question. My contention is that
>
> - *not* putting *a* notebook system into the spec is a mistake,
> - if one is going to go in, the ipython one is the sensible choice.

I see where both of you are coming from, but I'll argue from Nathaniel's side for a moment.

Fernando sees Pylab as a way to push forward a system that outstrips the alternatives. Nathaniel views it as a formalisation of existing norms, which is also the angle I've come from. By analogy with Linux distributions, we are defining something like Linux Standard Base, and it's up to individual distributions to push what they consider the best possible experience. Imagine the protests if LSB specified a desktop environment. ;-)

We already have major distributions which ship able to run the IPython notebook, and I don't expect them to drop that. If (when?) it becomes a de facto standard, we can add it as a requirement to a future version of the Pylab standard.

In a sense, the minimum common denominator *is* what I'm aiming for, although that's obviously not how I would phrase it. I absolutely agree that the community should aim high, and we have some amazing tools to offer. But I don't think a standard is the right place to promote the latest and greatest. It's the launch pad, not the rocket.

Nathaniel (some way above):
> I just think we should remain agnostic for now about how you "ask for > a shell" -- whether that's opening an ipython notebook, or clicking > Interpreter -> New in Spyder, or whatever.

This is exactly the *wrong* place to be agnostic. The very first thing in the tutorial should be specified as precisely as possible - start a terminal, enter 'ipython --pylab'. Of course, the distribution can provide and promote whatever interfaces they want in addition to that - embedded shells, notebooks, IDEs, and so on. Curious users will always explore beyond the tutorial. But I think it's very important that there's a consistent, predictable way to bring up a shell across all distributions. And as we discussed before, IPython-the-REPL *is* a de facto standard.

Thanks,
Thomas

From josef.pktd at gmail.com  Fri Sep 21 19:50:30 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 21 Sep 2012 19:50:30 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 7:46 PM, Thomas Kluyver wrote:
> [...]
> This is exactly the *wrong* place to be agnostic. The very first thing > in the tutorial should be specified as precisely as possible - start a > terminal, enter 'ipython --pylab'. Of course, the distribution can

If this goes back to the import * issue, then I call Skipper. All our tutorials and examples start with full imports and no stars, and I doubt that will change.

Josef

> provide and promote whatever interfaces they want in addition to that > - embedded shells, notebooks, IDEs, and so on. Curious users will > always explore beyond the tutorial. But I think it's very important > that there's a consistent, predictable way to bring up a shell across > all distributions. And as we discussed before, IPython-the-REPL *is* a > de facto standard.
>
> Thanks,
> Thomas
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From a.klein at science-applied.nl  Fri Sep 21 20:11:45 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Sat, 22 Sep 2012 02:11:45 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

Fernando, I enjoyed your rant and think you make great points (although I don't see why a notebook in an IDE would not be able to work as well as from a browser).

> - *not* putting *a* notebook system into the spec is a mistake,
> - if one is going to go in, the ipython one is the sensible choice.
>
> Of course, the overall community may disagree and decide that they > want pylab to be a spec that stays bounded by the 'shell + editor/ide' > idea.

The community should not decide anything about what interface the user should use. By all means, let that be the choice of the user! I think the question for Pylab is not "do we, or do we not go notebook-style". It's about specifying base packages. And later maybe documentation, packaging etc. I really like IPython's idea of the notebook.
And if you're right about them being the future, their popularity will surely grow over time. But let's not force that on our users right now.

As I said earlier, I have no objection to including IPython in the base packages. But I do have an objection against picking one interface and saying that is *the* one. If IPython is to be included it should be for the reasons Thomas indicated; to have something usable that's always there and on which all the tutorials are based.

Just my opinion,
Almar

From fperez.net at gmail.com  Fri Sep 21 20:26:06 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 17:26:06 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 4:46 PM, Thomas Kluyver wrote:
> Fernando sees Pylab as a way to push forward a system that outstrips > the alternatives. Nathaniel views it as a formalisation of existing > norms, which is also the angle I've come from. By analogy with Linux > distributions, we are defining something like Linux Standard Base, and > it's up to individual distributions to push what they consider the > best possible experience. Imagine the protests if LSB specified a > desktop environment. ;-)

Well, but that's because there's gnome, kde, xfce, unity, E, etc. There are multiple choices for that layer, so the LSB doesn't want to dictate one. But we basically have in the pylab world *one* notebook tool, that's it. So I'm not talking about promoting a yet-to-be-proven tool over the alternatives[1], but rather about whether we want a notebook approach in the first place or not. As mentioned before, Sage thought already in 2006 that the answer to that should be a clear yes, and I think their success is a data point worth considering. If it was obvious 6 years ago (when arguably it was a truly bold move by William), it's only clearer now, I think.

I guess what you are saying is "pylab shouldn't include a notebook tool, and stick to the shell+editor/ide level", then. I find that truly odd, since it seems to take a 2003 perspective of the problem, not a 2013 one. But if that's what the community wants to go with, as I said, I'm fully cognizant that I only have one vote out of many (and I won't rant like a lunatic further, no worries :).

In the end it may prove to be a moot discussion: the notebook is obviously rising in usage rapidly absent the existence of a 'pylab spec', so it's not like I'm worried about this 'harming' ipython in any way. And my own efforts will be spent squarely on that vision of scientific computing: as much as I use the shell ipython and emacs every day, that's where I see the future of the battles that matter. But I do think it's a missed opportunity for our community.

Cheers,

f

[1] If anything, on the IDE front the discussion is far more wide open: while spyder is awesome, I keep hearing good things about IdleX and its dependencies are much lighter than Qt, and Canopy is over the horizon backed by Enthought's not inconsequential expertise and talent...
From fperez.net at gmail.com Fri Sep 21 20:43:16 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 17:43:16 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 5:11 PM, Almar Klein wrote: > Fernando, I enjoyed your rant and think you make great points (although I > don't see why a notebook in an IDE would not be able to work as good as from > a browser). Oh, it does! Here's the Emacs implementation, with lisp websockets and all: http://tkf.github.com/emacs-ipython-notebook/ And right now the web server is necessary but that's an implementation detail, one could write something that speaks zmq directly to the kernels, without any http traffic at all. That would be trickier to use remotely, but it would be a perfectly sensible local implementation of an IDE client. >> - *not* putting *a* notebook system into the spec is a mistake, >> - if one is going to go in, the ipython one is the sensible choice. >> >> Of course, the overall community may disagree and decide that they >> want pylab to be a spec that stays bounded by the 'shell + editor/ide' >> idea. > > > The community should not decide anything about what interface the user > should use. By all means, let that be the choice of the user! I think the > question for Pylab is not "do we, or do we not go notebook-style". It's > about specifying base packages. And later maybe documentation, packaging > etc. I really like IPython's idea of the notebook. And if you're right about > them being the future, their popularity will surely grow over time. But > let's not force that on our users right now. > > As I said earlier, I have no objection of including IPython in the base > packages. But I do have an objection against picking one interface and > saying that is *the* one. If IPython is to be included it should be for the > reasons Thomas indicated; to have something usable that's always there and > on which all the tutorials are based. Actually, I don't really care about the interface. What I care about is: - the format, which is open and which is what will have archival value. there's nothing ipython-specific there, and in the future hopefully not even *python* specific (we want to use it for R, matlab, etc). - the protocol, which we've also specified openly and can be implemented by others (sage already uses it in their new work). It has some python-isms but they are cosmetic and should be trimmed out over time. The emacs client above is an example of a third-party notebook client built for users who'd rather use emacs than a browser. *That* is what matters to me; the ipython implementation of the above two things can be thought of as a 'reference' implementation, Takafumi's EIN is another one, and there could easily be more. And BTW, I've never thought of saying "let's only introduce people to a notebook approach", far from it. The shell is too useful, robust and important a tool not to start there and have it as the base point for everything. My take is that if we want pylab to be a forward-looking platform for modern scientific computing, it should include a notion of a notebook system from the get-go, like Mathematica did 20 years ago and Sage 6 years ago. It doesn't have to be the ipython UI, that's for sure: Spyder could have an awesome Qt client, for example, similar to how the Emacs one was implemented. 
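To make the earlier format point concrete: a notebook on disk is a single JSON file. A rough sketch of the skeleton (the field names below follow the current v3-style layout, but treat the details as illustrative, since they will evolve) -- which also shows why "notebook to pure .py script" is nearly trivial:

    import json

    nb = {
        "nbformat": 3,
        "metadata": {"name": "example"},
        "worksheets": [{
            "cells": [
                {"cell_type": "markdown", "source": ["# A demo"]},
                {"cell_type": "code", "language": "python",
                 "input": ["print('hello pylab')"], "outputs": []},
            ],
        }],
    }
    with open("example.ipynb", "w") as f:
        json.dump(nb, f, indent=1)

    # Extracting a plain script is just concatenating the code cells:
    with open("example.ipynb") as f:
        nb = json.load(f)
    print("\n\n".join("".join(cell["input"])
                      for ws in nb["worksheets"]
                      for cell in ws["cells"]
                      if cell["cell_type"] == "code"))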
Human-facing UI layers are indeed deeply personal, and what for some is perfect for others is intolerable (I know the lack of solid vim editing in a browser drives many people mad in the sage/ipython notebooks, for example). I see a chance for a format and a protocol to have an impact (and I haven't heard of any alternatives).

But again, one vote out of many, etc. I'll go back to writing the last of my grants instead, so that we actually have funding to make this happen regardless of what pylab decides ;)

Cheers,

f

From a.klein at science-applied.nl  Fri Sep 21 20:47:28 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Sat, 22 Sep 2012 02:47:28 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

> I guess what you are saying is "pylab shouldn't include a notebook > tool, and stick to the shell+editor/ide level", then. I find that > truly odd, since it seems to take a 2003 perspective of the problem, > not a 2013 one.

I suspect you missed my post because you were writing yours, but I'll just say it again: not including a notebook tool in pylab is not equivalent to sticking with the shell+editor or IDE approach; it is merely leaving that choice open for the user.

Almar

From fperez.net at gmail.com  Fri Sep 21 20:56:25 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 21 Sep 2012 17:56:25 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 5:47 PM, Almar Klein wrote:
> I suspect you missed my post because you were writing yours, but I'll just > say it again: not including a notebook tool in pylab is not equivalent to > sticking with the shell+editor or IDE approach; it is merely leaving that > choice open for the user.

Well, it is sticking to that approach *in pylab* :) Obviously users are, have always been, and will always be, free to add any tools they want to their personal workflow (on the rare occasions I use matlab I do so from Emacs with its great matlab mode, not using the Mathworks IDE which is horrific in linux, like all desktop Java apps in linux).

But no worries, I've ranted enough and won't push further, as I've made my arguments as clearly (or madly) as I can. If people don't want to do that, it's fine. We'll continue building those tools from our side regardless. I just wanted to leave the imprint of a lunatic in the discussion, and I think I've succeeded at that ;)

Cheers,

f

From andrew.collette at gmail.com  Fri Sep 21 21:02:30 2012
From: andrew.collette at gmail.com (Andrew Collette)
Date: Fri, 21 Sep 2012 19:02:30 -0600
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

>>> On 21 September 2012 18:15, Skipper Seabold wrote:
> h5py seems to be close to PyTables in popularity and growing faster, perhaps > include both?

As a regular scipy user I have been following this discussion with interest (Disclaimer: I'm also the lead developer of h5py). I think the PyLab concept is great! I just wanted to confirm that h5py is BSD licensed and supports Python 3, in case that's a consideration for inclusion. I was pleased to see that both PyTables and h5py did well in Pierre Raybaut's Doodle poll. This is great news for open binary formats.

I'm also certainly willing to do any integration work, provide metadata, etc. for this package if that's necessary on the PyLab side.
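To give a flavor of the library for anyone who hasn't tried it, a minimal round-trip looks roughly like this (file and dataset names are just illustrative):

    import numpy as np
    import h5py

    # Write an array plus a little metadata to an HDF5 file.
    f = h5py.File("demo.h5", "w")
    dset = f.create_dataset("temperature", data=np.arange(100.0))
    dset.attrs["units"] = "kelvin"
    f.close()

    # Read it back; datasets slice like numpy arrays.
    f = h5py.File("demo.h5", "r")
    print(f["temperature"][:10])
    print(f["temperature"].attrs["units"])
    f.close()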
Andrew Collette

From a.klein at science-applied.nl  Fri Sep 21 21:17:36 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Sat, 22 Sep 2012 03:17:36 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 22 September 2012 02:43, Fernando Perez wrote:
> [...]
> My take is that if we want pylab to be a forward-looking platform for > modern scientific computing, it should include a notion of a notebook > system from the get-go, like Mathematica did 20 years ago and Sage 6 > years ago. It doesn't have to be the ipython UI, that's for sure: > Spyder could have an awesome Qt client, for example, similar to how > the Emacs one was implemented. > So if I understand correctly, you want to put the concept of the notebook into pylab. Kind of based around the open protocols so that different tools can get a similar user experience. Well, it is sticking to that approach *in pylab* :) Obviously users > are, have always been, and will always be, free to add any tools they > want to their personal workflow. Trying to get at the subtleties... so what if IPython with notebook feature* is a part of the base. So that every user with a pylab-compliant distro can fire up a notebook. But, distros can (and will) ship additionally an IDE (like IEP or Spyder) that does *not* have IPython or a notebook-like interface, but these interfaces are still considered Pylab-compliant. That does sound reasonable? * What exactly are the extra dependencies for that? > But no worries, I've ranted enough and won't push further, as I've > made my arguments as clearly (or madly) as I can. If people don't > want to do that, it's fine. We'll continue building those tools from > our side regardless. I just wanted to leave the imprint of a lunatic > in the discussion, and I think I've succeeded at that ;) > Not a lunatic at all. I think this discussion is very useful. I like (most) your ideas, but I just want to protect some things that I care for. Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Sep 21 22:31:01 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 19:31:01 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 6:17 PM, Almar Klein wrote: > So if I understand correctly, you want to put the concept of the notebook > into pylab. Kind of based around the open protocols so that different tools > can get a similar user experience. Precisely. > Trying to get at the subtleties... so what if IPython with notebook > feature* is a part of the base. So that every user with a pylab-compliant > distro can fire up a notebook. But, distros can (and will) ship additionally > an IDE (like IEP or Spyder) that does *not* have IPython or a notebook-like > interface, but these interfaces are still considered Pylab-compliant. That > does sound reasonable? Yes, that's been my take all along (it's hard to read precise ideas within the ravings of a lunatic, you know :). We'd be *thrilled* if IEP/Spyder, in addition to talking to the ipython shell like it does now, also were to build a cool, Qt-based local notebook client. In IPython we even had a Summer of Code student who got started on a Qt notebook client, but unfortunately for him at the time too many pieces were barely ready and he couldn't really go very far beyond the prototype stage. I don't see us taking that task on anymore, as we just don't have the resources, but just like the Emacs client I pointed out, it would be a perfect project for a different team. The reason why I want the notebook in the spec is so that we can ship tutorials and documentation, and build rich examples that are searchable but immediately executable, and have that work in any pylab-compliant distro. 
It does *not* mean that we should say that using the notebook is the only way to work/develop, or that the examples shouldn't be also available as pure scripts (which is trivial to satisfy), or that IEP/Spyder/Canopy aren't a better tool for certain workflows and situations. And if in the future those local clients develop support for the notebook format/protocol, that would be even better, but I would never dream of saying that IEP/Spyder/etc has to support the notebook *itself* to be pylab compliant. If you got that impression, I'm sorry for any miscommunication, because that idea never, ever crossed my mind. > * What exactly are the extra dependencies for that? PyZMQ and Tornado, and likely in the near future jinja. Pyzmq has cython code in it, the other two are pure python so pretty lightweight and easy to deal with. And pyzmq has a very robust build that hasn't really caused (that we know) problems to EPD since they started shipping it over a year ago, as it's also a dependency for the qt console. > Not a lunatic at all. I think this discussion is very useful. I like (most) > your ideas, but I just want to protect some things that I care for. And I very much like the fact that even someone like Thomas, who is "in our camp", wants to see a different side of the question. Cheers, f From jason-sage at creativetrax.com Fri Sep 21 23:59:58 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 21 Sep 2012 22:59:58 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: <505D37BE.3030207@creativetrax.com> This conversation moves fast! Replies inline below. On 9/21/12 7:43 PM, Fernando Perez wrote: > And right now the web server is necessary but that's an implementation > detail, one could write something that speaks zmq directly to the > kernels, without any http traffic at all. That would be trickier to > use remotely, but it would be a perfectly sensible local > implementation of an IDE client. Which also brings up that zmq is also a dependency of pylab which many distributions are not likely to have currently... > >>> - *not* putting *a* notebook system into the spec is a mistake, >>> - if one is going to go in, the ipython one is the sensible choice. >>> >>> Of course, the overall community may disagree and decide that they >>> want pylab to be a spec that stays bounded by the 'shell + editor/ide' >>> idea. >> >> >> The community should not decide anything about what interface the user >> should use. By all means, let that be the choice of the user! I think the >> question for Pylab is not "do we, or do we not go notebook-style". It's >> about specifying base packages. And later maybe documentation, packaging >> etc. I really like IPython's idea of the notebook. And if you're right about >> them being the future, their popularity will surely grow over time. But >> let's not force that on our users right now. >> >> As I said earlier, I have no objection of including IPython in the base >> packages. But I do have an objection against picking one interface and >> saying that is *the* one. If IPython is to be included it should be for the >> reasons Thomas indicated; to have something usable that's always there and >> on which all the tutorials are based. > > Actually, I don't really care about the interface. What I care about is: > > - the format, which is open and which is what will have archival > value. there's nothing ipython-specific there, and in the future > hopefully not even *python* specific (we want to use it for R, matlab, > etc). 
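(For concreteness, a saved notebook appears to be a single JSON dictionary; in Python terms, roughly the following. I reconstructed this by hand from files I have lying around, so take the exact keys with a grain of salt:)

# Hand-reconstructed sketch of the on-disk structure, not an official schema.
nb = {
    "metadata": {"name": "example"},
    "nbformat": 3,           # major version of the format
    "nbformat_minor": 0,     # minor version, for compatible additions
    "worksheets": [{
        "cells": [{
            "cell_type": "code",
            "language": "python",
            "input": ["x = 1\n", "x + 1"],
            "outputs": [],
            "metadata": {},
        }],
    }],
}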
How stable is the current format? For that matter, what *is* the current IPython "Computable Document Format" :). Is it documented somewhere? The best I could find was at http://ipython.org/ipython-doc/dev/interactive/htmlnotebook.html#the-notebook-format It seems like there were discussions a little while ago on the IPython list about changing the format, or adding something to the format, or something. I don't recall exactly what it was off the top of my head. Right now, it seems that the format is defined by the IPython implementation. That's fine, but the legislation in the standard probably should have something a little more formalized. > > - the protocol, which we've also specified openly and can be > implemented by others (sage already uses it in their new work). It > has some python-isms but they are cosmetic and should be trimmed out > over time. The protocol is documented more clearly, I think. But it also has been changing, like the top-level metadata field we added (thanks again!). There also have been questions about possibly having custom user message types. I'd like to push that discussion further too. So maybe it's time to introduce version numbers for the protocol, so we can nail down a specific something to talk about? > *That* is what matters to me; the ipython implementation of the above > two things can be thought of as a 'reference' implementation, > Takafumi's EIN is another one, and there could easily be more. +1 to having a reference implementation in the spec of some sort of computable document format. And +1 to having something in the pylab that can serve as a backend for a frontend (for example, speaking the IPython protocol), along with a reference frontend. > > And BTW, I've never thought of saying "let's only introduce people to > a notebook approach", far from it. The shell is too useful, robust > and important a tool not to start there and have it as the base point > for everything. > > My take is that if we want pylab to be a forward-looking platform for > modern scientific computing, it should include a notion of a notebook > system from the get-go, like Mathematica did 20 years ago and Sage 6 > years ago. It doesn't have to be the ipython UI, that's for sure: > Spyder could have an awesome Qt client, for example, similar to how > the Emacs one was implemented. > > Human-facing UI layers are indeed deeply personal, and what for some > is perfect for others is intolerable (I know the lack of solid vim > editing in a browser drives many people mad in the sage/ipython > notebooks, for example). (codemirror apparently has vim keybindings, by the way: http://codemirror.net/demo/vim.html) > I see a chance for a format and a protocol to have an impact (and I > haven't heard of any alternatives). But again, one vote out of many, > etc. I'll go back to writing the last of my grants instead, so that > we actually have funding to make this happen regardless of what pylab > decides ;) Hooray for getting grants! I guess you're working on buying more votes :). Jason From fperez.net at gmail.com Sat Sep 22 00:23:46 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 21 Sep 2012 21:23:46 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <505D37BE.3030207@creativetrax.com> References: <505D37BE.3030207@creativetrax.com> Message-ID: On Fri, Sep 21, 2012 at 8:59 PM, Jason Grout wrote: > This conversation moves fast! Replies inline below. 
Busy day on the intertubes, indeed :) > On 9/21/12 7:43 PM, Fernando Perez wrote: > >> And right now the web server is necessary but that's an implementation >> detail, one could write something that speaks zmq directly to the >> kernels, without any http traffic at all. That would be trickier to >> use remotely, but it would be a perfectly sensible local >> implementation of an IDE client. > > Which also brings up that zmq is also a dependency of pylab which many > distributions are not likely to have currently... EPD and Anaconda both ship it, and given the brutal complexity of Sage's deployment, even if you guys aren't shipping it right now, I can't imagine it would be a particular burden to do so. Pythonxy doesn't have it, but they do carry a ton of other complex things and all they need is a binary windows installer, which Min has done a great job providing, so I don't see that as a big deal. On Linux all the major distros ship it. > How stable is the current format? For that matter, what *is* the > current IPython "Computable Document Format" :). Is it documented > somewhere? The best I could find was at > http://ipython.org/ipython-doc/dev/interactive/htmlnotebook.html#the-notebook-format Ah, the attentive reader will have noticed how I said an 'open' format, I never said 'documented' :) Indeed, we've been derelict on that front, and have enjoyed the luxury of being the only implementation, which lets us get away with it. There is code and tests, but we should document more explicitly the structure of the format. > It seems like there were discussions a little while ago on the IPython > list about changing the format, or adding something to the format, or > something. I don't recall exactly what it was off the top of my head. > Right now, it seems that the format is defined by the IPython > implementation. That's fine, but the legislation in the standard > probably should have something a little more formalized. Absolutely. > The protocol is documented more clearly, I think. But it also has been > changing, like the top-level metadata field we added (thanks again!). > There also have been questions about possibly having custom user message > types. I'd like to push that discussion further too. So maybe it's > time to introduce version numbers for the protocol, so we can nail down > a specific something to talk about? Yes, so far it has evolved based on our needs and the feedback of people like you (well, mostly you :). We should indeed formalize this further so that these two ideas, format and protocol, are really clearly laid out. If we also specify cleanly what the ipython 'syntax' is in a single place, that will round up all of the ways in which ipython has gone 'beyond python'. Thomas has been doing great work on rationalizing this on IPEP 2 (https://github.com/ipython/ipython/issues/2293) so I think we'll get there eventually. Because ultimately, that's all that 'makes ipython': 1. a "python VM on steroids" with additional syntax, magics, OS access, etc. A lot of this is used in tutorials, books and talks so defining its boundaries more clearly is important. And by now it's pretty stable, the only change we've had in years is the %% for cell magics and I don't foresee much need for new stuff. 2. a protocol to control this VM (and by controlling more than one of them at a time, you get parallel stuff). 3. a format for storing the interactions with this VM as a file. 
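(To make point 2 concrete: everything a frontend does boils down to JSON messages over zmq. Schematically -- and I stress this is from memory, not from the docs we still owe everyone -- a request to run code looks something like:)

# Schematic execute_request; field names are from memory, so check the
# messaging documentation before building anything on this.
msg = {
    "header": {
        "msg_id": "a1b2c3",            # unique id for this message
        "session": "s-123",
        "username": "fperez",
        "msg_type": "execute_request",
    },
    "parent_header": {},               # ties replies back to their request
    "metadata": {},                    # the extensible field Jason mentioned
    "content": {
        "code": "x = arange(10); x.sum()",
        "silent": False,
    },
}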
We need to formalize all of this more explicitly, so that IPython can be *one* implementation, the reference one, but not necessarily the only one. In a way, it's almost like the process that python itself had, where CPython went from the only implementation to being the reference one. In a small way I think we can follow that model, adapted to the needs of scientific work. > +1 to having a reference implementation in the spec of some sort of > computable document format. And +1 to having something in the pylab > that can serve as a backend for a frontend (for example, speaking the > IPython protocol), along with a reference frontend. > (codemirror apparently has vim keybindings, by the way: > http://codemirror.net/demo/vim.html) Yes, we've just never done the work of exposing UI for people to access them and store their preferences, that's the problem. > Hooray for getting grants! I guess you're working on buying more votes :). You may hear from us soon, believe me :) Cheers, f From takowl at gmail.com Sat Sep 22 05:56:19 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 22 Sep 2012 10:56:19 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 22 September 2012 02:17, Almar Klein wrote: > Trying to get at the subtleties... so what if IPython with notebook > feature* is a part of the base. So that every user with a pylab-compliant > distro can fire up a notebook. But, distros can (and will) ship additionally > an IDE (like IEP or Spyder) that does *not* have IPython or a notebook-like > interface, but these interfaces are still considered Pylab-compliant. That > does sound reasonable? I think this - roughly - has been my position all along. Whatever interface we mandate merely has to be included: distributions don't have to make it the only interface, or even the primary interface. Python(x,y) can keep promoting Spyder, and your own Pyzo can promote IEP. But Pylab tutorials will naturally be written for whatever interface is common to all distributions, whether that's a notebook or a REPL. A minor quibble over semantics: as I envisage it, it is a distribution that is Pylab compliant or not, not an interface. Within Pyzo, for instance, IEP can be an interface to Pylab, but it would not be a 'Pylab interface', because it's not part of the spec. Similarly for Spyder, IdleX or Reinteract. Even the >>> shell is an interface to Pylab if you can import the necessary packages. Fernando: I can see where you're coming from with standardising a document format and protocol. I think you are making some headway with convincing the sceptics, so do keep that discussion going. But the notebook is still young, and only beginning to settle down: notebooks saved in 0.13 can't be opened in 0.12, which is the version in the most recent Ubuntu release. So if the community feels it's not quite ready yet, I think it's something we'll definitely revisit in a future revision of the spec. Andrew: Thanks for the info about h5py. As I don't use HDF5 myself, can someone describe, as impartially as possible, the differences between PyTables and h5py: how do the APIs differ, any speed difference, how well known are they, what do they depend on, and what depends on them (e.g. I think pandas can use PyTables?). If it's sensible to include both, we can do so, but I'd like to get a feel for what they each are. 
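(As a concrete anchor for answers: the kind of h5py snippet I've seen on its website looks roughly like this -- quoted from memory by a non-user, so please correct anything that's wrong:)

import numpy as np
import h5py

# Sketch of h5py's advertised NumPy-style API, based on its docs rather
# than my own experience; file and dataset names here are made up.
f = h5py.File("data.hdf5", "w")
dset = f.create_dataset("measurements", (100,), dtype="f8")
dset[:10] = np.arange(10)        # datasets slice like NumPy arrays
grp = f.create_group("run1")     # groups behave like dictionaries
names = list(f.keys())           # e.g. ["measurements", "run1"]
f.close()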
Thanks, Thomas From a.klein at science-applied.nl Sat Sep 22 06:55:18 2012 From: a.klein at science-applied.nl (Almar Klein) Date: Sat, 22 Sep 2012 12:55:18 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 22 September 2012 11:56, Thomas Kluyver wrote: > On 22 September 2012 02:17, Almar Klein > wrote: > > Trying to get at the subtleties... so what if IPython with notebook > > feature* is a part of the base. So that every user with a pylab-compliant > > distro can fire up a notebook. But, distros can (and will) ship > additionally > > an IDE (like IEP or Spyder) that does *not* have IPython or a > notebook-like > > interface, but these interfaces are still considered Pylab-compliant. > That > > does sound reasonable? > > I think this - roughly - has been my position all along. Whatever > interface we mandate merely has to be included: distributions don't > have to make it the only interface, or even the primary interface. > Python(x,y) can keep promoting Spyder, and your own Pyzo can promote > IEP. But Pylab tutorials will naturally be written for whatever > interface is common to all distributions, whether that's a notebook or > a REPL. > > A minor quibble over semantics: as I envisage it, it is a distribution > that is Pylab compliant or not, not an interface. Within Pyzo, for > instance, IEP can be an interface to Pylab, but it would not be a > 'Pylab interface', because it's not part of the spec. Similarly for > Spyder, IdleX or Reinteract. Even the >>> shell is an interface to > Pylab if you can import the necessary packages. > That's how I see it too. Thanks for wording it more clearly. So I think we're converging towards an agreement. I don't know enough about the notebook to decide whether it should be included now or later. I go with whatever you guys think is best. Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sat Sep 22 07:15:23 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 22 Sep 2012 12:15:23 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 22 September 2012 11:55, Almar Klein wrote: > That's how I see it too. Thanks for wording it more clearly. So I think > we're converging towards an agreement. I don't know enough about the > notebook to decide whether it should be included now or later. I go with > whatever you guys think is best. Thanks Almar. I think it would be nice to write tutorials around something a bit more advanced and more friendly than a terminal interface, but I can also see the argument that it's not quite mature enough to put in a standard yet. So I put it to the community. To get a better overview, I've made a poll. By all means express your opinions here as well, but please go and vote on whether or not to require IPython notebook now: http://www.misterpoll.com/polls/567610 Thanks, Thomas From ralf.gommers at gmail.com Sat Sep 22 07:31:54 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 22 Sep 2012 13:31:54 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sat, Sep 22, 2012 at 1:15 PM, Thomas Kluyver wrote: > On 22 September 2012 11:55, Almar Klein > wrote: > > That's how I see it too. Thanks for wording it more clearly. So I think > > we're converging towards an agreement. I don't know enough about the > > notebook to decide whether it should be included now or later. I go with > > whatever you guys think is best. > > Thanks Almar. 
I think it would be nice to write tutorials around > something a bit more advanced and more friendly than a terminal > interface, but I can also see the argument that it's not quite mature > enough to put in a standard yet. So I put it to the community. > > To get a better overview, I've made a poll. By all means express your > opinions here as well, but please go and vote on whether or not to > require IPython notebook now: > Can you please first explain what the IPython devs think about stability of the format? Are there any backward or forward incompatible changes in the pipeline? Thanks, Ralf > http://www.misterpoll.com/polls/567610 > > Thanks, > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sat Sep 22 08:16:23 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 22 Sep 2012 13:16:23 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 22 September 2012 12:31, Ralf Gommers wrote: > Can you please first explain what the IPython devs think about stability of > the format? Are there any backward or forward incompatible changes in the > pipeline? The format, we should say, is versioned. IPython 0.12 saves in format version 2, and 0.13 in version 3. I think we have made some *backwards compatible* changes since then (so notebooks saved with the current master can be opened in 0.13), and I don't think we have any incompatible changes being planned. If I've missed some, hopefully Fernando will correct me. The idea is that any backwards incompatible change increments the format version. The versioned format means it should always be forwards compatible - you can open a notebook saved with any earlier version. In this case, you get a pop-up alerting you that saving will update the format. Thanks, Thomas From ralf.gommers at gmail.com Sat Sep 22 09:46:57 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 22 Sep 2012 15:46:57 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sat, Sep 22, 2012 at 2:16 PM, Thomas Kluyver wrote: > On 22 September 2012 12:31, Ralf Gommers wrote: > > Can you please first explain what the IPython devs think about stability > of > > the format? Are there any backward or forward incompatible changes in the > > pipeline? > > The format, we should say, is versioned. IPython 0.12 saves in format > version 2, and 0.13 in version 3. I think we have made some *backwards > compatible* changes since then (so notebooks saved with the current > master can be opened in 0.13), and I don't think we have any > incompatible changes being planned. If I've missed some, hopefully > Fernando will correct me. The idea is that any backwards incompatible > change increments the format version. > > The versioned format means it should always be forwards compatible - > you can open a notebook saved with any earlier version. In this case, > you get a pop-up alerting you that saving will update the format. > I think you have forwards and backwards compatible the wrong way around. Forwards compatible would be that you can open version 4 files, if that's introduced in IPython 0.14 or later, in 0.13. In my opinion you need either forward compatibility or by default keep saving in version 3 even after you have version 4, if you want to make the format a standard. 
Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.collette at gmail.com Sat Sep 22 11:04:47 2012 From: andrew.collette at gmail.com (Andrew Collette) Date: Sat, 22 Sep 2012 09:04:47 -0600 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sat, Sep 22, 2012 at 3:56 AM, Thomas Kluyver wrote: > Andrew: Thanks for the info about h5py. As I don't use HDF5 myself, > can someone describe, as impartially as possible, the differences > between PyTables and h5py: how do the APIs differ, any speed > difference, how well known are they, what do they depend on, and what > depends on them (e.g. I think pandas can use PyTables?). If it's > sensible to include both, we can do so, but I'd like to get a feel for > what they each are. I'm certainly not unbiased, but while we're waiting for others to rejoin the discussion I can give my perspective on this question. I never saw h5py and PyTables as direct competitors; they have different design goals. To me the basic difference is that PyTables is both a way to talk to HDF5 and a really great database-like interface with things like indexing, searching, etc. (both NumExpr and Blosc came out of work on PyTables, I believe). In contrast, h5py arose by asking "how can we map the basic HDF5 abstractions to Python in a direct but still Pythonic way". The API for h5py has both a high-level and low-level component; like PyTables, the high-level component is oriented around files, datasets and groups, allows iteration over elements in the file, etc. The emphasis in h5py is to use existing objects and abstractions from NumPy; for example, datasets have .dtype and .shape attributes and can be sliced like NumPy arrays. Groups are treated like dictionaries, are iterable, have .keys() and .iteritems() and friends, etc. The "main" high level interface in h5py also rests on a huge low-level interface written in Cython (http://h5py.alfven.org/docs/low/index.html), which exposes the majority of the HDF5 C API in a Pythonic, object-oriented way. The goal here is anything you can do with HDF5 in C, you can do in Python. It has no dependencies beyond NumPy and Python itself; I will let others chime in for specific projects which depend on h5py. As a rough proxy for popularity, h5py has roughly 30k downloads over the life of the project (10k in the past year). I have never benchmarked PyTables against h5py, but I strongly suspect PyTables is faster. Most of the development effort that has recently gone into h5py has been focused in other areas like API coverage, Python 3 support, Unicode, and thread safety; we've never done careful performance testing. I am eager to hear other perspectives, especially from the PyTables team. Andrew From takowl at gmail.com Sat Sep 22 11:04:56 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 22 Sep 2012 16:04:56 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 22 September 2012 14:46, Ralf Gommers wrote: > I think you have forwards and backwards compatible the wrong way around. > Forwards compatible would be that you can open version 4 files, if that's > introduced in IPython 0.14 or later, in 0.13. You're quite right, I had the terms backwards. Sorry about that; I do find them confusing. > In my opinion you need either > forward compatibility or by default keep saving in version 3 even after you > have version 4, if you want to make the format a standard. 
Well, if we make version 4, it would be because we were making a change that couldn't be opened by the v3 code - that's what the version numbers are for. But, as I understand it, we've made v3 considerably more extensible, so most of the things we've considered adding wouldn't need a version number bump. In the future, I envisage that if/when we define a new version, we'll have a transition period where the notebook can open version n+1, but defaults to saving as version n. Then once that ability is widespread, we'd make the next release save as n+1 by default. I've CCed the ipython-dev list: Fernando, Brian, Min, can you give some more insight on how stable the notebook format is now? Thanks, Thomas From lou_boog2000 at yahoo.com Sat Sep 22 11:22:49 2012 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sat, 22 Sep 2012 08:22:49 -0700 (PDT) Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> ________________________________ From: Fernando Perez To: SciPy Users List Sent: Friday, September 21, 2012 4:38 PM Subject: Re: [SciPy-User] Pylab - standard packages Warning: what follows is a highly opinionated, completely biased post. I'll be using a 'we' that refers to the IPython developers because the credit for much of what I talk about goes to the whole team, but ultimately the rant is my responsibility, so flame me if need be. self.put_hat(kind='IPython'). I think it's important to address directly the question of the IPython notebook. I realize that not everybody uses it, and it has some extra dependencies (though they are really easy ones to satisfy). But I also think it's an important discussion that goes to the question of whether we simply are trying to play catch-up with what matlab/R-Rstudio offer, or to be truly forward-looking and rethink how scientific computing will be done for the coming decade. Needless to say, I have little interest in the former and am putting all my energy into the latter: if it were otherwise, I'd have been contributing to Octave for the last 10 years instead. [cut] + Many I am not a developer of open source software, but I am a big user, especially of things Python. But I like the vision even though it will bring more issues than just catching up. That's worth it. Because of my viewpoint, I would urge a more nuanced view of how programming languages, libraries, etc. are used by people like me in science. I would not divide the community into two communities: (1) developers with the edit, compile, run cycle and (2) the users who only want to use the libraries and packages as they are. Yes, the latter would benefit greatly from a nice notebook, interactive app, but I would argue that many of us actually work in between the two styles. I am often developing my own software that leverages many other things (numpy, Sage, etc.), but I am also doing research as I develop. I run the software as I go to see the outputs. Many times I find new things that I can do or that I do not quite have the right idea about solving the problems. I am constantly developing and analyzing, simultaneously. In fact I am often sending results to my collaborators so they can continue their research as I continue to develop and refine my software. I bring this up because as cool as IPython is, I have never found a good way to use it for my style of research/development.
I often just run from scripts instead since the code has to be continually re-interpreted (I'm assuming Python here) and IPython seems to put up road blocks to using software while modifying it. At least I haven't found a way to do it. If you could do that seamlessly, that would be a big advance! If I'm missing something, please let me know. Otherwise, good luck on the project. I hope it's successful because it would provide a good gateway to the world of Python. -- Lou Pecora, my views are my own. From takowl at gmail.com Sat Sep 22 12:41:48 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 22 Sep 2012 17:41:48 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> Message-ID: Hi Lou, On 22 September 2012 16:22, Lou Pecora wrote: > I bring this up because as cool as IPython is, I have never found a good way to use it for my style of research/development. I often just run from scripts instead since the code has to be continually re-interpreted (I'm assuming Python here) and IPython seems to put up road blocks to using software while modifying it. At least I haven't found a way to do it. If you could do that seamlessly, that would be a big advance! Well, that's definitely the sort of use case we're thinking about - a mixture of developing code and doing research with it. A few ideas about how you can use code and modify it, depending on the size of the code: - Small functions/classes can live in a notebook cell: modify it and rerun the cell to redefine it. - For scripts, you can use the %run magic function. It works a lot like running 'python script.py' at a command line, but when something goes wrong, you get a much better traceback, and a %debug magic command will drop you into the debugger. There's also a %edit magic function to edit a file. - You can import code and use the autoreload extension [1] to reload the modules every time you change them. - If you have a lot of modules importing from one another, there's a deepreload function [2] that will reload all of them in one go. [1] http://ipython.org/ipython-doc/stable/config/extensions/autoreload.html [2] http://ipython.org/ipython-doc/stable/interactive/reference.html?highlight=deepreload#recursive-reload I hope that gives you some ideas. Best wishes, Thomas From fperez.net at gmail.com Sat Sep 22 14:10:25 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 22 Sep 2012 11:10:25 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sat, Sep 22, 2012 at 6:46 AM, Ralf Gommers wrote: > I think you have forwards and backwards compatible the wrong way around. > Forwards compatible would be that you can open version 4 files, if that's > introduced in IPython 0.14 or later, in 0.13. In my opinion you need either > forward compatibility or by default keep saving in version 3 even after you > have version 4, if you want to make the format a standard. The format now has two numbers, major and minor. Right now we're at 3.0. The idea is that small, forward-compatible changes will only increment the minor number. It may be that a feature is added that is only understood for certain functionality in version 3.1, but a 3.0 IPython would be able to read, use and save 3.1 notebooks without any *data loss*.
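(Schematically, the contract for a reading implementation is simple -- this is illustrative pseudo-logic, not what IPython literally does internally:)

SUPPORTED_MAJOR, SUPPORTED_MINOR = 3, 0

def check_format(nb):
    # nb is the dict loaded from an .ipynb file's JSON
    major = nb.get("nbformat", 1)
    minor = nb.get("nbformat_minor", 0)
    if major > SUPPORTED_MAJOR:
        # a layout we genuinely can't interpret: refuse rather than corrupt
        raise ValueError("notebook format v%d is too new" % major)
    # same major, newer minor: unknown fields are carried along untouched,
    # so the file can be read, edited and saved without data loss
    return nb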
Only if there is a change that makes it impossible to really understand the notebooks themselves by an older version of the code (say because something in the JSON layout changes) would the major number be bumped. That's what happened when we went from v2 to v3. Now, I don't know if I'm misunderstanding your comment above that we have to keep saving in v3 forever if we want the notebook in the spec. If you really mean that, then we absolutely *don't* want ipython in the spec, ever. Because we can't commit to never in the future evolving the format. But by that token, then we'd say that the api for numpy, or matplotlib, or scipy, can never ever change in a backwards-incompatible manner if they are to go into the spec. So will we not put numpy in the spec because in-place operations in the current 1.7 betas break code that was valid up until now? So if putting anything in the spec means it can never change, then by all means let's leave it out. Because we can't commit to freezing IPython development forever. But we have put a lot of thought into trying to ensure that we won't need format changes for a long time, because we are keenly aware of how disruptive a file format change is. And now that there are a ton of notebooks 'in the wild', we know it would be a real major annoyance to make such changes willy-nilly. So we will do everything we can to keep developing, for as long as possible, on the v3 format. We will try to encapsulate all new use cases and functionality into the extensible metadata fields that are already defined in there. The course I foresee is that the *user interface* will evolve to expose and make better use of these notebooks, so certain fancier features (say a metadata-based slideshow mode, for example) might not exist in older versions simply because the code hasn't been written yet. To be more specific with the slideshow idea (which is in the works): let's say that we add tags to the metadata that will be interpreted as slide transitions in a yet-to-be completed slideshow implementation for IPython 0.14. IPython current (0.13) wouldn't show the slideshow because it simply doesn't have the feature. But it would open a notebook saved with slideshow metadata without losing any of it, and it could be worked on, edited, etc, without any danger. Cheers, f From takowl at gmail.com Sat Sep 22 15:33:13 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 22 Sep 2012 20:33:13 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On 22 September 2012 19:10, Fernando Perez wrote: > Now, I don't know if I'm misunderstanding your comment above that we > have to keep saving in v3 forever if we want the notebook in the spec. > If you really mean that, then we absolutely *don't* want ipython in > the spec, ever. What about the transition period I described: if we define version 4, we should keep saving as v3 for a while until we judge most people have updated to a version that can open v4. Obviously for the standard we need some certainty that a future upgrade won't break collaboration, but I think we can do that without promising eternal forwards compatibility. This must be a common issue for file formats and protocols. The IPv6 rollout is an example on a much bigger scale - software to handle it was being rolled out for years before anyone started turning it on.
Thomas From fperez.net at gmail.com Sat Sep 22 15:43:36 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 22 Sep 2012 12:43:36 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sat, Sep 22, 2012 at 12:33 PM, Thomas Kluyver wrote: > What about the transition period I described: if we define version 4, > we should keep saving as v3 for a while until we judge most people > have updated to a version that can open v4. Obviously for the standard > we need some certainty that a future upgrade won't break > collaboration, but I think we can do that without promising eternal > forwards compatibility. This must be a common issue for file formats > and protocols. The IPV6 rollout is an example on a much bigger scale - > software to handle it was being rolled out for years before anyone > started turning it on. Yes, I think that's a very sensible strategy. The uptake of the format in the wild has been very rapid, so from now on we'll want to be really, really careful with how we evolve it. While that is a constraint, fortunately we're seeing pretty non-trivial things done with the format in its current form, and the design of our metadata machinery (already in place, even if it's unexposed yet UI-wise) makes me reasonably confident that we'll be able to keep this for a while. Ah, the burden of users ;) f From jason-sage at creativetrax.com Sat Sep 22 21:32:10 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Sat, 22 Sep 2012 20:32:10 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> Message-ID: <505E669A.1010401@creativetrax.com> On 9/22/12 11:41 AM, Thomas Kluyver wrote: > - You can import code and use the autoreload extension [1] to reload > the modules every time you change them. In Sage, we also have the %attach magic, which effectively re-runs a file every time it changes. This is a bit different than autoreload; maybe it would make a good addition to IPython to complement %run? Thanks, Jason From takowl at gmail.com Sun Sep 23 06:06:54 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 11:06:54 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <505E669A.1010401@creativetrax.com> References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On 23 September 2012 02:32, Jason Grout wrote: > In Sage, we also have the %attach magic, which effectively re-runs a > file every time it changes. This is a bit different than autoreload; > maybe it would make a good addition to IPython to complement %run? Sounds intriguing. Let's not go into it in this thread, but if you come and discuss what it entails on ipython-dev, I think we'd be interested. Thanks, Thomas From ralf.gommers at gmail.com Sat Sep 22 17:43:38 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 22 Sep 2012 23:43:38 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: On Sat, Sep 22, 2012 at 8:10 PM, Fernando Perez wrote: > On Sat, Sep 22, 2012 at 6:46 AM, Ralf Gommers > wrote: > > I think you have forwards and backwards compatible the wrong way around. > > Forwards compatible would be that you can open version 4 files, if that's > > introduced in IPython 0.14 or later, in 0.13. 
In my opinion you need > either > > forward compatibility or by default keep saving in version 3 even after > you > > have version 4, if you want to make the format a standard. > > The format has now two numbers, major and minor. Right now we're at > 3.0. The idea is that small, forward-compatible changes will only > increment the minor number. It may be that a feature is added that is > only understood for certain functionality in version 3.1, but a 3.0 > IPython would be able to read, use and save 3.1 notebooks without any > *data loss*. > > Only if there is a change that makes it impossible to really > understand the notebooks themselves by an older version of the code > (say because something in the JSON layout changes) would the major > number be bumped. That's what happened when we went from v2 to v3. > > Now, I don't know if I'm misunderstanding your comment above that we > have to keep saving in v3 forever if we want the notebook in the spec. > If you really mean that, then we absolutely *don't* want ipython in > the spec, ever. Because we can't commit to never in the future > evolving the format. Forever is obviously too long:) Thomas' suggestion of a transition period makes sense. And I'm glad to hear you are confident that you won't have to make any changes for the foreseeable future. > But by that token, then we'd say that the api for > numpy, or matplotlib, or scipy, can never ever change in a > backwards-incompatible manner if they are to go into the spec. So > will we not put numpy in the spec because in-place operations in the > current 1.7 betas break code that was valid up until now? > That's absolutely the wrong comparison. The impact would be more like breaking the numpy ABI, or not being able to load .py files made in fancy editor X in my plain old vim. Or for a proprietary example, moving from .doc to .docx in MS Word. > So if putting anything in the spec means it can never change, then by > all means let's leave it out. Because we can't commit to freezing > IPython development forever. > > But we have put a lot of thought into trying to ensure that we won't > need format changes for a long time, because we are keenly aware of > how disruptive a file format change is. And now that there are a ton > of notebooks 'in the wild', we know it would be a real major annoyance > to make such changes nilly-willy. > > So we will do everything we can to keep developing, for as long as > possible, on the v3 format. We will try to encapsulate all new use > cases and functionality into the extensible metadata fields that are > already defined in there. The course I foresee is that the *user > interface* will evolve to expose and make better use of these > notebooks, so certain fancier features (say a metadata-based slideshow > mode, for example) might not exist in older versions simply because > the code hasn't been written yet. > > To be more specific with the slideshow idea (which is in the works): > let's say that we add tags to the metadata that will be interpreted as > slide transitions in a yet-to-be completed slideshow implementation > for IPython 0.14. IPython current (0.13) wouldn't show the slideshow > because it simply doesn't have the feature. But it would open a > notebook saved with slideshow metadata without losing any of it, and > it could be worked on, edited, etc, without any danger. > Thanks for the detailed reply. The 3 versions in the last 1.5 year worried me, but it seems the dust has settled now. I'll go and vote +1 on including this in the spec. 
Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From lou_boog2000 at yahoo.com Sun Sep 23 08:22:01 2012 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 23 Sep 2012 05:22:01 -0700 (PDT) Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: <1348402921.26841.YahooMailNeo@web160204.mail.bf1.yahoo.com> ----- Original Message ----- From: Thomas Kluyver To: SciPy Users List Cc: Sent: Saturday, September 22, 2012 12:41 PM Subject: Re: [SciPy-User] Pylab - standard packages Hi Lou, On 22 September 2012 16:22, Lou Pecora wrote: > I bring this up because as cool as IPython is, I have never found a good way to use it for my style of research/development. I often just run from scripts instead since the code has to be continually re-interpreted (I'm assuming Python here) and IPython seems to put up road blocks to using software while modifying it. At least I haven't found a way to do it. If you could do that seamlessly, that would be a big advance! Well, that's definitely the sort of use case we're thinking about - a mixture of developing code and doing research with it. A few ideas about how you can use code and modify it, depending on the size of the code: - Small functions/classes can live in a notebook cell: modify it and rerun the cell to redefine it. - For scripts, you can use the %run magic function. It works a lot like running 'python script.py' at a command line, but when something goes wrong, you get a much better traceback, and a %debug magic command will drop you into the debugger. There's also a %edit magic function to edit a file. - You can import code and use the autoreload extension [1] to reload the modules every time you change them. - If you have a lot of modules importing from one another, there's a deepreload function [2] that will reload all of them in one go. [1] http://ipython.org/ipython-doc/stable/config/extensions/autoreload.html [2] http://ipython.org/ipython-doc/stable/interactive/reference.html?highlight=deepreload#recursive-reload I hope that gives you some ideas. Best wishes, Thomas -------------------------------------------------- Thanks, Thomas. Good information. I'm glad you are focusing on that usage. -- Lou Pecora, my views are my own. From lou_boog2000 at yahoo.com Sun Sep 23 08:24:30 2012 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 23 Sep 2012 05:24:30 -0700 (PDT) Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <505E669A.1010401@creativetrax.com> References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: <1348403070.12496.YahooMailNeo@web160206.mail.bf1.yahoo.com> ----- Original Message ----- From: Jason Grout To: scipy-user at scipy.org Cc: Sent: Saturday, September 22, 2012 9:32 PM Subject: Re: [SciPy-User] Pylab - standard packages On 9/22/12 11:41 AM, Thomas Kluyver wrote: > - You can import code and use the autoreload extension [1] to reload > the modules every time you change them. In Sage, we also have the %attach magic, which effectively re-runs a file every time it changes. This is a bit different than autoreload; maybe it would make a good addition to IPython to complement %run? Thanks, Jason --------------------------------------------- Big +1, Jason. Thanks. I use the Sage package (which is awesome), but haven't explored it enough obviously. I would vote for that addition to IPython. Just speaking as a user.
-- Lou Pecora, my views are my own. From takowl at gmail.com Sun Sep 23 08:35:53 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 13:35:53 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <505E669A.1010401@creativetrax.com> References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: I've put a draft version of the specification up here: https://github.com/pylab/website/blob/master/specification.rst It's still based on the slide Fernando posted. A couple of packages are in active discussion: - The IPython notebook has prompted a lot of debate. If you haven't yet voted, please do so: http://www.misterpoll.com/polls/567610 - The author of h5py has offered it for inclusion, but no-one else has spoken up for or against it. I'd like to hear from users of both that and PyTables (another interface to HDF5 files). If there are any other packages you think are important enough to be in the standard, or you feel that some of those currently included should not be, now is the time to speak up. We also still need to pick sensible minimum versions of SymPy, pandas, statsmodels, scikits-learn, scikits-image, PyTables and NetworkX. As a rough guide, if you were writing code to post on a blog, what version can you assume most readers should have? I'm thinking of desktop installations, not servers, so we needn't be too conservative. Thanks, Thomas From ralf.gommers at gmail.com Sun Sep 23 09:41:13 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Sep 2012 15:41:13 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On Sun, Sep 23, 2012 at 2:35 PM, Thomas Kluyver wrote: > I've put a draft version of the specification up here: > https://github.com/pylab/website/blob/master/specification.rst > > How about adding at least a placeholder for the with-compiler pylab? > It's still based on the slide Fernando posted. A couple of packages > are in active discussion: > > - The IPython notebook has prompted a lot of debate. If you haven't > yet voted, please do so: http://www.misterpoll.com/polls/567610 > - The author of h5py has offered it for inclusion, but no-one else has > spoken up for or against it. I'd like to hear from users of both that > and PyTables (another interface to HDF5 files). > > If there are any other packages you think are important enough to be > in the standard, or you feel that some of those currently included > should not be, now is the time to speak up. > At least `nose`, we have to be able to run tests for packages. > > We also still need to pick sensible minimum versions of SymPy, pandas, > statsmodels, scikits-learn, scikits-image, PyTables and NetworkX. As a > rough guide, if you were writing code to post on a blog, what version > can you assume most readers should have? I'm thinking of desktop > installations, not servers, so we needn't be too conservative. > If we're going with Fernando's "set the bar high" principle, how about the last released version of each package? Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From takowl at gmail.com Sun Sep 23 09:58:13 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 14:58:13 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On 23 September 2012 14:41, Ralf Gommers wrote: > At least `nose`, we have to be able to run tests for packages. We do, but for new users, unit testing is something they're unlikely to need for a while. Installing nose also doesn't involve any complex requirements. So I'd be inclined not to specify it, but I'd like to hear what others think. > If we're going with Fernando's "set the bar high" principle, how about the > last released version of each package? That's a reasonable starting point, but I'd like to get a little more insight - e.g. how long ago was the last release, and what kind of changes did it have from the release before it. I'll look into it for that list of packages and put it in another reply. Thanks, Thomas From ralf.gommers at gmail.com Sun Sep 23 10:21:58 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Sep 2012 16:21:58 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On Sun, Sep 23, 2012 at 3:41 PM, Ralf Gommers wrote: > > > On Sun, Sep 23, 2012 at 2:35 PM, Thomas Kluyver wrote: > >> I've put a draft version of the specification up here: >> https://github.com/pylab/website/blob/master/specification.rst >> >> How about adding at least a placeholder for the with-compiler pylab? > > >> It's still based on the slide Fernando posted. A couple of packages >> are in active discussion: >> >> - The IPython notebook has prompted a lot of debate. If you haven't >> yet voted, please do so: http://www.misterpoll.com/polls/567610 >> - The author of h5py has offered it for inclusion, but no-one else has >> spoken up for or against it. I'd like to hear from users of both that >> and PyTables (another interface to HDF5 files). >> >> If there are any other packages you think are important enough to be >> in the standard, or you feel that some of those currently included >> should not be, now is the time to speak up. >> > I had already suggested one of: - PIL / pillow - FreeImage (with scikits-image) Being able to load images seems kind of useful. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sun Sep 23 10:32:00 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 15:32:00 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On 23 September 2012 15:21, Ralf Gommers wrote: > I had already suggested one of: > - PIL / pillow > - FreeImage (with scikits-image) > Being able to load images seems kind of useful. I'm not familiar with sk-image. Can someone clarify what, if anything, it is capable of loading on its own? 
Thanks, Thomas From ralf.gommers at gmail.com Sun Sep 23 10:44:59 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Sep 2012 16:44:59 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On Sun, Sep 23, 2012 at 4:32 PM, Thomas Kluyver wrote: > On 23 September 2012 15:21, Ralf Gommers wrote: > > I had already suggested one of: > > - PIL / pillow > > - FreeImage (with scikits-image) > > Being able to load images seems kind of useful. > > I'm not familiar with sk-image. Can someone clarify what, if anything, > it is capable of loading on its own? > It has an I/O plugin framework, which can use Matplotlib / PIL / FreeImage when available, that's it. So without PIL or FreeImage I think the only thing that can be read is pngs, through MPL. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sun Sep 23 10:58:48 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 15:58:48 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: Latest major versions of things: Numpy: 1.6 (May 2011) Scipy: 0.10 (November 2011) Matplotlib: 1.1 (February 2012) IPython: 0.13 (30 June 2012) SymPy: 0.7 (28 June 2011) pandas: 0.8 (29 June 2012) - requires Numpy >= 1.6 statsmodels: 0.4 (~April 2012) scikits-learn: 0.12 (4 September 2012) - 0.11 was 7 May 2012 scikits-image: 0.6 (24 June 2012) PyTables: 2.4 (20 July 2012) NetworkX: 1.7 (5 July 2012) The most recent release is from scikits-learn, where I've noted the preceding release as well. The others are all a couple of months old. So for the first version, I'd say we specify the most recent major release of all those except scikits-learn, where we'll specify 0.11 for now. Thanks, Thomas From takowl at gmail.com Sun Sep 23 11:00:50 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 16:00:50 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On 23 September 2012 15:44, Ralf Gommers wrote: > It has an I/O plugin framework, which can use Matplotlib / PIL / FreeImage > when available, that's it. So without PIL or FreeImage I think the only > thing that can be read is pngs, through MPL. OK, I think it makes sense to include one of those, then. Can you summarise differences between PIL & FreeImage in familiarity, simplicity of building, how cross platform they are, what formats they support, and anything else that seems relevant? Thanks, Thomas From ralf.gommers at gmail.com Sun Sep 23 11:28:02 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Sep 2012 17:28:02 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On Sun, Sep 23, 2012 at 5:00 PM, Thomas Kluyver wrote: > On 23 September 2012 15:44, Ralf Gommers wrote: > > It has an I/O plugin framework, which can use Matplotlib / PIL / > FreeImage > > when available, that's it. So without PIL or FreeImage I think the only > > thing that can be read is pngs, through MPL. > > OK, I think it makes sense to include one of those, then. 
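(The user-facing entry point would presumably stay the same either way -- something like this, if I've understood Ralf's description of the plugin framework; I haven't used scikits-image myself, so the plugin names are guesses on my part:)

import skimage.io as io

# Illustrative use of the I/O plugin framework as I understand Ralf's
# description; the plugin names ("pil", "freeimage") are assumptions.
img = io.imread("photo.png")     # uses whichever I/O plugin is available
io.use_plugin("pil")             # or "freeimage", if that's installed
tif = io.imread("stack.tiff")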
Can you > summarise differences between PIL & FreeImage in familiarity, > simplicity of building, how cross platform they are, what formats they > support, and anything else that seems relevant? > After years of frustration with PIL's broken tiff support and with how it's managed, I don't think I can answer that question objectively...... Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sun Sep 23 11:37:06 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 23 Sep 2012 16:37:06 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: On 23 September 2012 16:28, Ralf Gommers wrote: > After years of frustration with PIL's broken tiff support and with how it's > managed, I don't think I can answer that question objectively...... Well, if you think FreeImage is a better option, you're welcome to voice that. Anyone who disagrees can also speak up ;-) Thomas From 275438859 at qq.com Sun Sep 23 12:23:37 2012 From: 275438859 at qq.com (=?gb18030?B?0MTI59byueI=?=) Date: Mon, 24 Sep 2012 00:23:37 +0800 Subject: [SciPy-User] errors of scipy build Message-ID: Hi,all. I have installed numpy and scipy step by step carefully as the instructions from website:http://www.scipy.org/Installing_SciPy/Mac_OS_X But still get many errors and warnings while it's building. (OSX lion 10.7.4 /Xcode 4.5 /clang /gfortran4.2.3) Do scipy and numpy must be built by gcc?? Such as: 1 error generated. _configtest.c:5:28: error: 'test_array' declared as an array with a negative size static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) == 4)]; ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1 error generated. C compiler: clang -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes /////////////////////////////////////////////////////////// Constructing wrapper function "drotm"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = drotm(x,y,param,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sun Sep 23 13:11:30 2012 From: cournape at gmail.com (David Cournapeau) Date: Sun, 23 Sep 2012 18:11:30 +0100 Subject: [SciPy-User] [Numpy-discussion] errors of scipy build In-Reply-To: References: Message-ID: On Sun, Sep 23, 2012 at 5:23 PM, ???? <275438859 at qq.com> wrote: > Hi,all. > I have installed numpy and scipy step by step carefully as the > instructions from website:http://www.scipy.org/Installing_SciPy/Mac_OS_X > But still get many errors and warnings while it's building. > (OSX lion 10.7.4 /Xcode 4.5 /clang /gfortran4.2.3) Those error are parts of the configuration. As long as the build runs until the end, the build is successful. You should not build scipy with gcc on mac os x 10.7, as it is known to cause issues. 
David

From njs at pobox.com Sun Sep 23 15:00:31 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 23 Sep 2012 20:00:31 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 11:26 PM, Fernando Perez wrote:
> On Fri, Sep 21, 2012 at 2:43 PM, Nathaniel Smith wrote:
>> But empirically, that's not true yet, and the way to get there is for
>> you guys to continue kicking ass, not for "pylab" to legislate
>> something. Trying would alienate people. So it's just a process and
>> scope objection, nothing to do with the notebook idea at all.
>
> Well, but the point of pylab *is* partly to 'legislate', since we're
> defining a spec.

But that's not how specs work. We're just some people on a mailing list, we don't have taxation power or anything.

I think of it like, we want to have an impact. If we suggest that someone do something that they would do anyway, then we have low impact. If we suggest that someone do something and they ignore us, then we also have low impact. We need to find places where suggesting that people (like distribution and package maintainers) do certain things will cause them to do those things, where they wouldn't have otherwise.

Since distribution and package maintainers are generally smart and well-meaning people, they don't really need us to explain their jobs to them. They already know how to serve their users when it comes to changes that they can make on their own. So the main way specs provide value is in situations where everyone benefits *if* they do the same thing, and not otherwise. You sort of have to do it by consensus, because if relevant parties can't/won't do what your spec tells them to do, then it just won't happen.

> So it's a valid, relevant and I would argue important question. My
> contention is that
>
> - *not* putting *a* notebook system into the spec is a mistake,
> - if one is going to go in, the ipython one is the sensible choice.

Obviously ipython in general and the notebook in specific are hugely popular, and obviously any distro who cares in the slightest is going to want to make their handling of both as good as possible! Concretely, the only reason that I know for why the notebook wouldn't be supported by any pylab-relevant distro is the technical incompatibility between new ipythons and spyder. I assume that notebooks will be supported by Python(x,y) iff that is fixed. This seems like a low impact area for pylab to me, because I don't see how putting something in a spec will make any difference. The way to make an impact here is to go fix that code.

> Of course, the overall community may disagree and decide that they
> want pylab to be a spec that stays bounded by the 'shell + editor/ide'
> idea.

No-one is saying that, but we're talking about a consensus-based spec whose goal is to unify our scattered community. We can't *rule out* the editor/ide approach so long as there exist significant groups of users and developers who use it. The ipython/spyder incompatibility means that right now, putting ipython notebooks in the spec would rule out spyder and Python(x,y). Better that pylab stay out of that than try to pick sides.
-n

From lists at hilboll.de Sun Sep 23 15:17:31 2012
From: lists at hilboll.de (Andreas Hilboll)
Date: Sun, 23 Sep 2012 21:17:31 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com>
Message-ID: <505F604B.5080306@hilboll.de>

On 23.09.2012 14:35, Thomas Kluyver wrote:
> I've put a draft version of the specification up here:
> https://github.com/pylab/website/blob/master/specification.rst
>
> It's still based on the slide Fernando posted. A couple of packages
> are in active discussion:
>
> - The IPython notebook has prompted a lot of debate. If you haven't
> yet voted, please do so: http://www.misterpoll.com/polls/567610
> - The author of h5py has offered it for inclusion, but no-one else has
> spoken up for or against it. I'd like to hear from users of both that
> and PyTables (another interface to HDF5 files).

I'm in favor of including both pytables and h5py. Otherwise, there will be people disappointed by pylab because a script given to them by someone else doesn't work.

Additionally, I'm also in favor of including both pyhdf (for HDF4 I/O) and the netCDF4 module. Why? At least in my field, most data are in netCDF4 or HDF4 or HDF5 formats. And there are many scientists who've been working with Matlab/IDL/... for years. It's hard to get them to even look at pylab. To make pylab as attractive as possible to newcomers (to pylab, not to computational science altogether), it should be as easy as possible to work with the most basic file formats -- that's ascii (through numpy/pandas), images, and some standard binary formats.

Of course, that would let the list of binary dependencies grow (libnetcdf4, libhdf). Maybe it's not a good idea to include them in the base standard. There would be room in some 'pylab-full', or some other flavoured specs.

Just my 2 ct ...

Andreas.

From takowl at gmail.com Sun Sep 23 15:18:15 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 23 Sep 2012 20:18:15 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 23 September 2012 20:00, Nathaniel Smith wrote:
> The ipython/spyder incompatibility means that right now, putting
> ipython notebooks in the spec would rule out spyder and Python(x,y).

To elaborate on this: I'm not aiming to specifically exclude any distribution, but I'm also not aiming to specifically *include* any distribution, nor to include all scientific Python distributions. If we think that a certain distribution is missing things that should be part of the standard, we shouldn't relax the standard to accept that. At present, for example, EPD Free will not meet the draft standard.

Whether or not we specify the IPython notebook, we will almost certainly specify a version of IPython that *can* run the notebook. We will work with Spyder and Python(x,y) to get a newer version of IPython in, but the current version of Python(x,y) would not meet the specified version of IPython either way. As you mentioned, there's no value in suggesting people do only what they're already doing.
Thanks, Thomas

From takowl at gmail.com Sun Sep 23 15:34:29 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 23 Sep 2012 20:34:29 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <505F604B.5080306@hilboll.de>
References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> <505F604B.5080306@hilboll.de>
Message-ID:

On 23 September 2012 20:17, Andreas Hilboll wrote:
> I'm in favor of including both pytables and h5py. Otherwise, there will
> be people disappointed by pylab because a script given to them by
> someone else doesn't work.

This argument can be used for any package, though, and I don't want to make the specification massive. The question is, if you're writing a script, should you be able to assume users have pytables, h5py, or both?

With respect to the other formats: can any of the packages handle more than one of the three? Can anyone give a sense of how widely used HDF4 and netCDF4 are, beyond specific fields? Of course, you can install extra packages to meet specific needs, and a specific distribution can still ship them by default.

A quick survey of existing distributions suggests that EPD and Python(x,y) have pyhdf and netCDF4, while Anaconda has neither. It's not clear whether SAGE and QSnake do.

Thanks,
Thomas

From njs at pobox.com Sun Sep 23 15:42:42 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 23 Sep 2012 20:42:42 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Sun, Sep 23, 2012 at 8:18 PM, Thomas Kluyver wrote:
> On 23 September 2012 20:00, Nathaniel Smith wrote:
>> The ipython/spyder incompatibility means that right now, putting
>> ipython notebooks in the spec would rule out spyder and Python(x,y).
>
> To elaborate on this: I'm not aiming to specifically exclude any
> distribution, but I'm also not aiming to specifically *include* any
> distribution, nor to include all scientific Python distributions. If
> we think that a certain distribution is missing things that should be
> part of the standard, we shouldn't relax the standard to accept that.
> At present, for example, EPD Free will not meet the draft standard.

Then we should talk to the EPD folk about whether that can be fixed. The whole point of this standard is that people actually use it, right? People will keep using EPD Free no matter what we do here, and people writing docs will continue to care about what EPD Free does no matter what pylab says.

> Whether or not we specify the IPython notebook, we will almost
> certainly specify a version of IPython that *can* run the notebook. We
> will work with Spyder and Python(x,y) to get a newer version of
> IPython in, but the current version of Python(x,y) would not meet the
> specified version of IPython either way. As you mentioned, there's no
> value in suggesting people do only what they're already doing.

And then in the next sentence I also mentioned that there's no value in telling people to do things that they won't do...?

And it destroys the point of this project. As you just mentioned in another email, one of the points of this is to codify a set of common tools that we can assume people will have available, which makes it easier to write tutorials, blog posts, etc. But I'm not going to write a tutorial that assumes that no-one is using EPD or Python(x,y)! I'm going to aim my tutorial at the users that actually exist.
And if the pylab spec doesn't describe the users that actually exist, then it's just a nice fairytale, not something I can depend on.

I'm also worried that if we just show up with a list of package versions, then the distro authors will react the same way any of us would if someone showed up at our projects waving a todo list for us and insisting that this is what we had to do for our next release... it's a good recipe to make people grumpy.

Perhaps a better next step at this point would be to start a pylab list, move the discussion there, and make sure that the various stakeholders are specifically invited?

-n

From takowl at gmail.com Sun Sep 23 16:09:18 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 23 Sep 2012 21:09:18 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 23 September 2012 20:42, Nathaniel Smith wrote:
> Then we should talk to the EPD folk about whether that can be fixed.

Certainly. But if they disagree (it would be a big expansion, and they offer a more complete commercial version), we needn't completely revise the standard to accommodate that.

> And then in the next sentence I also mentioned that there's no value
> in telling people to do things that they won't do...?

I hope that including a newer version of IPython is something that Python(x,y) *will* do. This is the intermediate point you were describing: not something that everyone already has done, but something that we can reasonably expect people to do.

I'm taking a longer term view: Pylab needn't be a description of the userbase on the day it launches. Many users don't use a Python distribution, so there's almost no minimum set of packages you can assume people have installed today. But with plenty of communication and elbow grease, I hope that in, say, 6 months, the idea will have enough traction with users and distributions that you can write code and say "runs on Pylab 2.0", and users will either have it or be able to get it easily.

We're certainly not insisting that distributions implement the spec at once, but we're giving them a (fairly reasonable, I think) set of targets to work towards. Some distributions already meet it, others may need to update a package or two.

Best wishes,
Thomas

From a.klein at science-applied.nl Sun Sep 23 16:30:10 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Sun, 23 Sep 2012 22:30:10 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

> Whether or not we specify the IPython notebook, we will almost
> certainly specify a version of IPython that *can* run the notebook. We
> will work with Spyder and Python(x,y) to get a newer version of
> IPython in, but the current version of Python(x,y) would not meet the
> specified version of IPython either way.

I think you can interpret "Python(x,y) has IPython" in two ways. The one you are thinking of is that Spyder has an IPython shell of sufficiently high version. The other is that Python(x,y) ships the IPython package and its startup script as an executable. The latter is *much* easier to realize.

Almar

--
Almar Klein, PhD
Science Applied
phone: +31 6 19268652
e-mail: a.klein at science-applied.nl
From a.klein at science-applied.nl Sun Sep 23 16:44:42 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Sun, 23 Sep 2012 22:44:42 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com>
Message-ID:

On 23 September 2012 17:37, Thomas Kluyver wrote:
> On 23 September 2012 16:28, Ralf Gommers wrote:
> > After years of frustration with PIL's broken tiff support and with
> > how it's managed, I don't think I can answer that question
> > objectively......
>
> Well, if you think FreeImage is a better option, you're welcome to
> voice that. Anyone who disagrees can also speak up ;-)

Now is probably a good time to mention imageio (http://imageio.readthedocs.org). This is a project that spun off from the FreeImage wrapper in sk-image (which was written mostly by Zach Pincus), and is intended to replace PIL for reading/writing images. It has a plugin system so that additional (scientific) formats can be added relatively easily. Right now FreeImage is the only plugin, but it gives support for lots of popular image formats. It started just this summer, and I consider it beta, but the codebase is small (FreeImage does most of the work), so a stable version should not be far off.

I don't advocate putting it in the pylab spec now, but wanted to mention it because PIL was brought up (I'd like to avoid PIL). Oh, one important thing is that imageio was designed with easy distribution in mind from the start; it's pure Python with a cpython wrapper for the FreeImage lib (which is automatically downloaded at install-time).
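Basic usage is meant to mirror what people already do with PIL, something along these lines (a sketch against the current beta, so the exact names may still shift; 'photo.png' is a placeholder):

import imageio

im = imageio.imread('photo.png')      # comes back as a numpy array
imageio.imsave('photo_copy.png', im)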
Almar

From njs at pobox.com Sun Sep 23 16:53:50 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 23 Sep 2012 21:53:50 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Sun, Sep 23, 2012 at 9:09 PM, Thomas Kluyver wrote:
> On 23 September 2012 20:42, Nathaniel Smith wrote:
>> Then we should talk to the EPD folk about whether that can be fixed.
>
> Certainly. But if they disagree (it would be a big expansion, and they
> offer a more complete commercial version), we needn't completely
> revise the standard to accommodate that.

Right, but we should figure that out based on conversation and figuring out the exact trade-offs involved. It might be worth cutting back the spec slightly, if that turned out to be the difference between a useful spec and wishful thinking. Especially if they had a good reason for whatever came up, like Python(x,y) does. You wrote we "shouldn't relax the standard"; I'm saying we should reserve our judgement for now. I doubt we'll *need* to relax the standard anyway, but if you want people to feel involved in something you have to make clear that their input matters, not rule things out up front.

>> And then in the next sentence I also mentioned that there's no value
>> in telling people to do things that they won't do...?
>
> I hope that including a newer version of IPython is something that
> Python(x,y) *will* do. This is the intermediate point you were
> describing: not something that everyone already has done, but
> something that we can reasonably expect people to do.

I think this is a misunderstanding. The intermediate point I was talking about was the one where they change what they do *because* of discussions/a spec/etc. That's what I was addressing when I wrote:

| the only reason that I know for why the notebook wouldn't
| be supported by any pylab-relevant distro is the technical
| incompatibility between new ipythons and spyder. I assume that
| notebooks will be supported by Python(x,y) iff that is fixed. This
| seems like a low impact area for pylab to me, because I don't see how
| putting something in a spec will make any difference. The way to make
| an impact here is to go fix that code.

Actually if anything it looks like the opposite -- right now it's you and Fernando et al who are actively interested in IPython's role in the spec, not the spyder folks. So keeping the notebook out of the spec for now will motivate you guys to go get that code fixed :-).

I'm sort of joking, but only sort of...

> I'm taking a longer term view: Pylab needn't be a description of the
> userbase on the day it launches. Many users don't use a Python
> distribution, so there's almost no minimum set of packages you can
> assume people have installed today. But with plenty of communication
> and elbow grease, I hope that in, say, 6 months, the idea will have
> enough traction with users and distributions that you can write code
> and say "runs on Pylab 2.0", and users will either have it or be able
> to get it easily.
>
> We're certainly not insisting that distributions implement the spec at
> once, but we're giving them a (fairly reasonable, I think) set of
> targets to work towards. Some distributions already meet it, others
> may need to update a package or two.

The way you get traction is by getting the relevant people involved in the decision-making, and making it as useful as possible on day one. We don't want to "give them targets", we want to work *with* them on common goals. There's no reason we can't start with a reality-based spec and then try to improve it. The nice thing is that the way you improve a reality-based spec is to improve reality, so even though the spec itself is just a webpage somewhere, it encourages changes that matter.

What do you think of this suggestion from above?: "start a pylab list, move the discussion there, and make sure that the various stakeholders are specifically invited"?

-n

From josef.pktd at gmail.com Sun Sep 23 17:04:50 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 23 Sep 2012 17:04:50 -0400
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Sun, Sep 23, 2012 at 4:53 PM, Nathaniel Smith wrote:
[...]
> What do you think of this suggestion from above?: "start a pylab list,
> move the discussion there, and make sure that the various stakeholders
> are specifically invited"?

My impression is that scipy-user is still the wider audience for the general discussion, before you go off to another mailing list. (and split the discussion 3-way)

Josef

From takowl at gmail.com Sun Sep 23 17:36:01 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 23 Sep 2012 22:36:01 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On 23 September 2012 21:53, Nathaniel Smith wrote:
> So keeping the notebook out of the spec for now will motivate you guys
> to go get that code fixed :-).
We discussed it recently with Carlos Cordoba (a Spyder dev), but I'm not sure what the issue is, nor whether IPython, Spyder or both need to fix things. I've just opened an (IPython) issue to work this out: https://github.com/ipython/ipython/issues/2425

As Josef says, the aim of having this discussion on scipy-user was to reach a broad audience. I think we need to bootstrap this some more before it's worth making a new mailing list for it. It looks like no Spyder/Python(x,y) representatives are reading this, so I'll post a message on their discussion group to let them know about it.

Almar: thanks for the mention of imageio. By the sounds of it, that makes two votes (yours and Ralf's) in favour of FreeImage over PIL?

Thanks,
Thomas

From fperez.net at gmail.com Sun Sep 23 19:27:01 2012
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 23 Sep 2012 16:27:01 -0700
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <505E669A.1010401@creativetrax.com>
References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com>
Message-ID:

On Sat, Sep 22, 2012 at 6:32 PM, Jason Grout wrote:
> In Sage, we also have the %attach magic, which effectively re-runs a
> file every time it changes. This is a bit different than autoreload;
> maybe it would make a good addition to IPython to complement %run?

How does it detect changes? Does it poll in the background or does it use posix-only tools? Remember, ipython runs on windows natively :)

But if the solution is cross-platform, then it sounds like a great addition indeed. Ideally it would have support for some of the %run options that make sense (such as -t to report timing or -n to not set __name__ to '__main__').

Cheers,
f

From ondrej at certik.cz Sun Sep 23 19:31:34 2012
From: ondrej at certik.cz (Ondrej Certik)
Date: Sun, 23 Sep 2012 16:31:34 -0700
Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails
Message-ID:

Hi,

I noticed that hyp1f1(0.5, 1.5, -2000) fails:

In [5]: scipy.special.hyp1f1(0.5, 1.5, 0)
Out[5]: 1.0

In [6]: scipy.special.hyp1f1(0.5, 1.5, -20)
Out[6]: 0.19816636482997368

In [7]: scipy.special.hyp1f1(0.5, 1.5, -200)
Out[7]: 0.062665706865775023

In [8]: scipy.special.hyp1f1(0.5, 1.5, -2000)
Warning: invalid value encountered in hyp1f1
Out[8]: nan

The values [5], [6] and [7] are correct. The value [8] should be:

0.019816636488030055...

Ondrej

From travis at continuum.io Mon Sep 24 02:33:12 2012
From: travis at continuum.io (Travis Oliphant)
Date: Mon, 24 Sep 2012 01:33:12 -0500
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

On Sep 23, 2012, at 4:36 PM, Thomas Kluyver wrote:
> On 23 September 2012 21:53, Nathaniel Smith wrote:
>> So keeping the notebook out of the spec for now will motivate you
>> guys to go get that code fixed :-).
>
> We discussed it recently with Carlos Cordoba (a Spyder dev), but I'm
> not sure what the issue is, nor whether IPython, Spyder or both need
> to fix things. I've just opened an (IPython) issue to work this out:
> https://github.com/ipython/ipython/issues/2425
>
> As Josef says, the aim of having this discussion on scipy-user was to
> reach a broad audience. I think we need to bootstrap this some more
> before it's worth making a new mailing list for it. It looks like no
> Spyder/Python(x,y) representatives are reading this, so I'll post a
> message on their discussion group to let them know about it.
My suggestion is to move this discussion to the numfocus at googlegroups.com list because NumFOCUS is already *very* interested in supporting a project like this and could bless the work with an actual distribution and not *just* a list on a web-page. There, the discussion does not just have to be abstract and only about trying to encourage other people to do something. We can discuss what should actually go into the pylab meta-package that NumFOCUS can be supporting. A reference implementation of the pylab distribution is well within the scope of NumFOCUS.

If you are not already subscribed to numfocus at googlegroups.com it is easy to subscribe: Just send a message to numfocus+subscribe at googlegroups.com

-Travis

From a.klein at science-applied.nl Mon Sep 24 04:17:41 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Mon, 24 Sep 2012 10:17:41 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID:

> We discussed it recently with Carlos Cordoba (a Spyder dev), but I'm
> not sure what the issue is, nor whether IPython, Spyder or both need
> to fix things. I've just opened an (IPython) issue to work this out:
> https://github.com/ipython/ipython/issues/2425

I realize my previous comment on this was rather vague. What I was trying to say is that I do not think specifying the latest IPython version in pylab will cause serious problems for Python(x,y). It should easily be able to include the IPython executable in the way that other people use it, i.e. independent from its IDE (Spyder).

The integration of the newer IPython (or a notebook interface) in Spyder is much more difficult. It would be great if in time this would be available too, but I don't think it's a prerequisite for Python(x,y) to be pylab compliant.

But as you pointed out, it would be nice to hear what Pierre et al think about this.

> Almar: thanks for the mention of imageio. By the sounds of it, that
> makes two votes (yours and Ralf's) in favour of FreeImage over PIL?

Well, FreeImage is only a C++ library. sk-image uses it as one of its ways to load images. So my take on this would be to let sk-image do the image loading for now, and include imageio in the standard if it is more mature. sk-image can then use imageio as a plugin.

(note that there are one or two other wrappers around the imageio library, but Zach started fresh because their codebases weren't very nice.)

It would be great if we can avoid including PIL in the standard, but I think that many people depend on it. So others might argue differently ...

Almar

From francesc at continuum.io Mon Sep 24 05:46:10 2012
From: francesc at continuum.io (Francesc Alted)
Date: Mon, 24 Sep 2012 11:46:10 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References:
Message-ID: <50602BE2.7020101@continuum.io>

Hey, nice to see this conversation going on. I'm not currently an active developer of PyTables anymore (other brave people like Antonio Valentino, Anthony Scopatz and Josh Moore took over the project during the past year), but I created and led its development for many years, and I still provide some feedback on the PyTables mailing list, so I'd be glad to contribute my view here too (although definitely it will not be impartial because, what the heck, I still consider it as my little boy ;)

On 9/22/12 5:04 PM, Andrew Collette wrote:
> On Sat, Sep 22, 2012 at 3:56 AM, Thomas Kluyver wrote:
>> Andrew: Thanks for the info about h5py.
>> As I don't use HDF5 myself,
>> can someone describe, as impartially as possible, the differences
>> between PyTables and h5py: how do the APIs differ, any speed
>> difference, how well known are they, what do they depend on, and what
>> depends on them (e.g. I think pandas can use PyTables?). If it's
>> sensible to include both, we can do so, but I'd like to get a feel for
>> what they each are.

PyTables is a high-level API for the HDF5 library, which includes advanced features that are not present in HDF5 itself, like indexing (via OPSI), out-of-core operations (via numexpr) and very fast compression (via Blosc). PyTables, contrarily to h5py, does not try to expose the complete HDF5 API, and it is more centered around providing an easy-to-use and performant interface to the HDF5 library.

> I'm certainly not unbiased, but while we're waiting for others to
> rejoin the discussion I can give my perspective on this question. I
> never saw h5py and PyTables as direct competitors; they have different
> design goals. To me the basic difference is that PyTables is both a
> way to talk to HDF5 and a really great database-like interface with
> things like indexing, searching, etc. (both NumExpr and Blosc came out
> of work on PyTables, I believe). In contrast, h5py arose by asking
> "how can we map the basic HDF5 abstractions to Python in a direct but
> still Pythonic way".

Yeah, I agree that h5py and PyTables cannot be seen as direct competitors (although many people, including myself at times :), see them as such). As I said before, performance is one of the aspects that is *extremely* important for PyTables, and you are right in that both OPSI and Blosc were developments done for the sake of PyTables.

The numexpr case is somewhat different, since it was originally developed outside of the project (by David M. Cooke), and adopted and largely enhanced for allowing fast queries and out-of-core computations in PyTables. Of course, all these enhancements were contributed back to the original numexpr project and it continues to be a stand-alone library that is useful in many scenarios different than PyTables, in a typical case of fertile project cross-pollination.

> The API for h5py has both a high-level and low-level component; like
> PyTables, the high-level component is oriented around files, datasets
> and groups, allows iteration over elements in the file, etc. The
> emphasis in h5py is to use existing objects and abstractions from
> NumPy; for example, datasets have .dtype and .shape attributes and can
> be sliced like NumPy arrays. Groups are treated like dictionaries,
> are iterable, have .keys() and .iteritems() and friends, etc.

So for PyTables the approach in this regard is very close to h5py, with the exception that the group hierarchy is primarily (but not only) accessed via a natural naming approach, i.e. something like:

file.root.group1.group2.dataset

instead of the h5py approach:

file['group1']['group2']['dataset']

I find the former much more useful for structure discovery (by hitting the TAB key in REPL interactive environments), but this is probably a matter of tastes.
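In code, the contrast looks roughly like this (a toy sketch: 'data.h5' and its /group1/dataset layout are made up, and the camelCase names are the PyTables 2.x API):

# PyTables: natural naming
import tables
f = tables.openFile('data.h5', mode='r')
arr = f.root.group1.dataset.read()
f.close()

# h5py: dictionary-style access
import h5py
f = h5py.File('data.h5', 'r')
arr = f['group1']['dataset'][...]
f.close()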
> The "main" high level interface in h5py also rests on a huge low-level
> interface written in Cython
> (http://h5py.alfven.org/docs/low/index.html), which exposes the
> majority of the HDF5 C API in a Pythonic, object-oriented way. The
> goal here is anything you can do with HDF5 in C, you can do in Python.

Yeah, as it has already been said, here lies one of the big differences between the two projects: PyTables does not come with a low-level interface to HDF5. This, however, has been a deliberate design goal, as the HDF5 C API can be rather complex and cumbersome for the large majority of Python users, and it was estimated that most people were not interested in delving into HDF5 intricacies (those PyTables users interested in accessing such HDF5 capabilities can always take the Cython sources and build new features on top of them, which I find a more sensible approach, specially if performance is interesting for the user).

> It has no dependencies beyond NumPy and Python itself; I will let
> others chime in for specific projects which depend on h5py. As a
> rough proxy for popularity, h5py has roughly 30k downloads over the
> life of the project (10k in the past year).

I cannot tell how many downloads PyTables has had over its almost 10 years of existence (the first public release was made back in October 2002), but probably a lot. Sourceforge reports that it received more than 50K downloads for the 2.3 series (released one year ago) and more than 6.5K downloads for the recent 2.4.0 version released a couple of months ago. However that's a bit tricky because PyTables is shipped in most Linux distributions, and Windows binaries are not available in SF anymore, but through independent Windows distributions like Gohlke's, EPD, Python(x,y) or Anaconda, so likely the actual number would be much more than that (but the same should apply to h5py).

> I have never benchmarked PyTables against h5py, but I strongly suspect
> PyTables is faster.

Yes, without knowing about anybody having done an exhaustive comparison in most of the scenarios, my own benchmarks confirm that PyTables is generally faster than h5py. It is true that both projects use HDF5 as the basic I/O library, but when combining things like OPSI, Blosc and numexpr, this can make a huge difference. For example, in some benchmarks that I did some months ago, the difference in performance was in the range from 10 thousand to more than 100 thousand times, specially when browsing and querying medium-size on-disk tables (100 thousand rows long):

http://www.digipedia.pl/usenet/thread/16009/26243/#post26257

Also, Gaël Varoquaux blogged on some real-life benchmarks about how the ultra-fast Blosc (and LZO) compressors integrated in PyTables can accelerate the I/O:

http://gael-varoquaux.info/blog/?p=159

Anyway, I think the PyTables home page does a good job expressing how important performance is for the project:

http://www.pytables.org/moin

> Most of the development effort that has recently
> gone into h5py has been focused in other areas like API coverage,
> Python 3 support, Unicode, and thread safety; we've never done careful
> performance testing.

Yep, probably here h5py is more advanced than PyTables, specially because the latter does not provide full Python 3 support yet. However, Antonio made great strides in this area, and most of the pieces are already there, the most outstanding missing part being Python 3 support for numexpr. In fact, Antonio already kindly provided patches for this, but I still need to check them out and release a new version of numexpr. I think I should stop procrastinating and give this a go sooner than later (Antonio, if you are reading this, be ready for some questions about your patches soon :).
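For anyone who wants to try the compression side, enabling Blosc (or LZO) is just a matter of passing a Filters instance. A minimal sketch, again with the PyTables 2.x camelCase API and made-up file/array names:

import numpy as np
import tables

f = tables.openFile('compressed.h5', mode='w')
filters = tables.Filters(complevel=5, complib='blosc')  # or 'lzo', 'zlib'
ca = f.createCArray(f.root, 'data', tables.Float64Atom(),
                    shape=(1000, 1000), filters=filters)
ca[:] = np.random.rand(1000, 1000)  # compressed transparently on write
f.close()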
--
Francesc Alted

From takowl at gmail.com Mon Sep 24 06:30:17 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Mon, 24 Sep 2012 11:30:17 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To: <50602BE2.7020101@continuum.io>
References: <50602BE2.7020101@continuum.io>
Message-ID:

Thanks everyone for your comments.

Re IPython: Carlos confirms [1] that integration with Spyder is improving considerably. Python(x,y) will probably throw the switch on a newer version of IPython once Spyder 2.2 is released, which sounds like it will be quite soon.

FreeImage: So does sk-image interface directly with the C++ library, or does it need some existing wrapper like FreeImagePy? From the sounds of it, we should include FreeImage and any wrapper sk-image needs, with a view to including imageio in the future. Or perhaps we should require FreeImage or PIL, and let distributions pick?

NumFOCUS: Thanks Travis, I think the support of NumFOCUS will be invaluable for taking this forwards. For this discussion about the standard, though, I think we're benefitting from a much wider audience on scipy-user.

HDF5: Thanks for that description, Francesc. I'm leaning towards specifying both PyTables and h5py. EPD, Anaconda, Python(x,y) and WinPython already include both. I'm not clear on whether Sage & QSnake do - I see no mention of HDF5 on Sage's package list. For the first spec version, I think we should leave the inclusion of packages to handle HDF4 and netCDF4 up to distributions, as it seems they're more specialist.

[1] https://github.com/ipython/ipython/issues/2425

Best wishes,
Thomas

From a.klein at science-applied.nl Mon Sep 24 07:36:48 2012
From: a.klein at science-applied.nl (Almar Klein)
Date: Mon, 24 Sep 2012 13:36:48 +0200
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <50602BE2.7020101@continuum.io>
Message-ID:

> FreeImage: So does sk-image interface directly with the C++ library,
> or does it need some existing wrapper like FreeImagePy? From the
> sounds of it, we should include FreeImage and any wrapper sk-image
> needs, with a view to including imageio in the future. Or perhaps we
> should require FreeImage or PIL, and let distributions pick?

The FreeImage plugin of sk-image interfaces directly with the library; the plugin and imageio both originated from the same code (but probably diverged a bit by now).

Almar

From eric.moore2 at nih.gov Mon Sep 24 08:15:25 2012
From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F])
Date: Mon, 24 Sep 2012 08:15:25 -0400
Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails
To: 'SciPy Users List'
Message-ID:
Content-Type: text/plain; charset="us-ascii"

> -----Original Message-----
> From: Ondrej Certik [mailto:ondrej at certik.cz]
> Sent: Sunday, September 23, 2012 7:32 PM
> To: SciPy Users List
> Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails
>
> Hi,
>
> I noticed that hyp1f1(0.5, 1.5, -2000) fails:
>
> In [5]: scipy.special.hyp1f1(0.5, 1.5, 0)
> Out[5]: 1.0
>
> In [6]: scipy.special.hyp1f1(0.5, 1.5, -20)
> Out[6]: 0.19816636482997368
>
> In [7]: scipy.special.hyp1f1(0.5, 1.5, -200)
> Out[7]: 0.062665706865775023
>
> In [8]: scipy.special.hyp1f1(0.5, 1.5, -2000)
> Warning: invalid value encountered in hyp1f1
> Out[8]: nan
>
> The values [5], [6] and [7] are correct. The value [8] should be:
>
> 0.019816636488030055...
> Ondrej

It's blowing up around -709:

In [60]: s.hyp1f1(0.5, 1.5, -709.7827128933)
Out[60]: 0.03326459435722777

In [61]: s.hyp1f1(0.5, 1.5, -709.7827128934)
Out[61]: inf
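That is suspiciously close to log(DBL_MAX) ~= 709.7827, so presumably an exp() overflows somewhere internally. For this particular (a, b) there is also a closed form that sidesteps the problem, since 1F1(1/2, 3/2, -x) = sqrt(pi/x)*erf(sqrt(x))/2. As a stopgap -- just a sketch for x > 0, not a general fix:

import numpy as np
from scipy.special import erf

def hyp1f1_half_threehalves(x):
    # 1F1(1/2, 3/2, -x) for x > 0, via the erf identity
    s = np.sqrt(x)
    return np.sqrt(np.pi) * erf(s) / (2 * s)

print(hyp1f1_half_threehalves(2000.0))  # 0.01981663648803..., the expected value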
Eric

From 275438859 at qq.com Mon Sep 24 09:39:22 2012
From: 275438859 at qq.com (=?gb18030?B?0MTI59byueI=?=)
Date: Mon, 24 Sep 2012 21:39:22 +0800
Subject: [SciPy-User] Re: [Numpy-discussion] errors of scipy build
Message-ID:

Hi David,
thanks for your help. But I don't think the numpy and scipy builds were successful, because their tests fail. Here's the situation:

>>> import numpy as np
>>> np.test('full')
Running unit tests for numpy
Traceback (most recent call last):
  File "", line 1, in
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/nosetester.py", line 340, in test
    self._show_system_info()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/nosetester.py", line 193, in _show_system_info
    nose = import_nose()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/testing/nosetester.py", line 71, in import_nose
    raise ImportError(msg)
ImportError: Need nose >= 0.10.0 for tests - see http://somethingaboutorange.com/mrl/projects/nose

------------------ Original Message ------------------
From: "David Cournapeau";
Date: Mon, Sep 24, 2012, 1:11 AM
To: "Discussion of Numerical Python";
Cc: "scipy-user"; "paparazzi-devel";
Subject: Re: [SciPy-User] [Numpy-discussion] errors of scipy build
[...]

From njs at pobox.com Mon Sep 24 09:45:21 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 24 Sep 2012 14:45:21 +0100
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com>
Message-ID:

On Sun, Sep 23, 2012 at 2:58 PM, Thomas Kluyver wrote:
> On 23 September 2012 14:41, Ralf Gommers wrote:
>> At least `nose`, we have to be able to run tests for packages.
>
> We do, but for new users, unit testing is something they're unlikely
> to need for a while. Installing nose also doesn't involve any complex
> requirements. So I'd be inclined not to specify it, but I'd like to
> hear what others think.

IMHO nose should absolutely be on the list. The first thing I teach people is the boilerplate to put at the bottom of their modules:

if __name__ == "__main__":
    import nose
    nose.runmodule()

And the target audience isn't just brand new users anyway. Sample code should contain tests, so it's helpful to have a convention for how they're named and how you write assertions, and both of these come from nose as the de facto standard.
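For instance, a complete little module (all names made up) might look like:

import numpy as np
from numpy.testing import assert_allclose

def normalize(v):
    # the toy function under test
    return v / np.linalg.norm(v)

def test_normalize():
    # nose collects functions whose names start with 'test'
    assert_allclose(np.linalg.norm(normalize(np.array([3.0, 4.0]))), 1.0)

if __name__ == "__main__":
    import nose
    nose.runmodule()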
-n

From cournape at gmail.com Mon Sep 24 09:47:16 2012
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 24 Sep 2012 14:47:16 +0100
Subject: Re: [SciPy-User] Re: [Numpy-discussion] errors of scipy build
In-Reply-To:
References:
Message-ID:

On Mon, Sep 24, 2012 at 2:39 PM, ???? <275438859 at qq.com> wrote:
> Hi David,
> thanks for your help. But I don't think the numpy and scipy builds were
> successful, because their tests fail. Here's the situation:
> [...]
> ImportError: Need nose >= 0.10.0 for tests - see
> http://somethingaboutorange.com/mrl/projects/nose

This message indicates that you need to install nose to run numpy/scipy unit tests. Nose is not mandatory to use numpy/scipy, but required for testing.

David

From jason-sage at creativetrax.com Mon Sep 24 10:14:14 2012
From: jason-sage at creativetrax.com (Jason Grout)
Date: Mon, 24 Sep 2012 09:14:14 -0500
Subject: [SciPy-User] Pylab - standard packages
In-Reply-To:
References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com>
Message-ID: <50606AB6.9020401@creativetrax.com>

On 9/23/12 6:27 PM, Fernando Perez wrote:
> On Sat, Sep 22, 2012 at 6:32 PM, Jason Grout wrote:
>> In Sage, we also have the %attach magic, which effectively re-runs a
>> file every time it changes. This is a bit different than autoreload;
>> maybe it would make a good addition to IPython to complement %run?
>
> How does it detect changes? Does it poll in the background or does it
> use posix-only tools? Remember, ipython runs on windows natively :)
>
> But if the solution is cross-platform, then it sounds like a great
> addition indeed. Ideally it would have support for some of the %run
> options that make sense (such as -t to report timing or -n to not set
> __name__ to '__main__').

Before any code block is run, it checks the modified time on attached files. If the modified time is newer than the last time we ran it, it runs the file again before running the code block. So I think it can do its work in the pre-run-code hook. But Thomas is right that this conversation should probably be moved to the ipython mailing list.
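In rough Python, it's just mtime bookkeeping -- something like this sketch (invented names, not Sage's actual code):

import os

attached = {}  # filename -> mtime when the file was last run

def run_file(fname):
    execfile(fname)  # stand-in for whatever %run actually does

def attach(fname):
    attached[fname] = os.path.getmtime(fname)
    run_file(fname)

def pre_run_code_hook():
    # called before each code block executes
    for fname, last in attached.items():
        mtime = os.path.getmtime(fname)
        if mtime > last:
            attached[fname] = mtime
            run_file(fname)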
> > (note that there are one or two other wrappers around the imageio library, > but Zach started fresh because their codebases weren't very nice.) > > It would be great if we can avoid including PIL in the standard, but I > think that many people depend on it. So others might argue differently ... > > Almar > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20120924/31ca759f/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Mon, 24 Sep 2012 11:46:10 +0200 > From: Francesc Alted > Subject: Re: [SciPy-User] Pylab - standard packages > To: scipy-user at scipy.org > Cc: NumFOCUS > Message-ID: <50602BE2.7020101 at continuum.io> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Hey, nice to see this conversation going on. I'm not currently an > active developer of PyTables anymore (other brave people like Antonio > Valentino, Anthony Scopatz and Josh Moore took over the project during > the past year), but I created and lead its development for many years, > and I still provide some feedback on the PyTables mailing list, so I'd > be glad to contribute my view here too (although definitely it will be > not impartial because, what the heck, I still consider it as my little > boy ;) > > On 9/22/12 5:04 PM, Andrew Collette wrote: >> On Sat, Sep 22, 2012 at 3:56 AM, Thomas Kluyver wrote: >> >>> Andrew: Thanks for the info about h5py. As I don't use HDF5 myself, >>> can someone describe, as impartially as possible, the differences >>> between PyTables and h5py: how do the APIs differ, any speed >>> difference, how well known are they, what do they depend on, and what >>> depends on them (e.g. I think pandas can use PyTables?). If it's >>> sensible to include both, we can do so, but I'd like to get a feel for >>> what they each are. > > PyTables is a high-level API for the HDF5 library, that includes > advanced features that are not present in HDF5 itself, like indexing > (via OPSI), out-of-core operations (via numexpr) and very fast > compression (via Blosc). PyTables, contrarily to h5py, does not try to > expose the complete HDF5 API, and it is more centered around providing > an easy-to-use and performant interface to the HDF5 library. > >> I'm certainly not unbiased, but while we're waiting for others to >> rejoin the discussion I can give my perspective on this question. I >> never saw h5py and PyTables as direct competitors; they have different >> design goals. To me the basic difference is that PyTables is both a >> way to talk to HDF5 and a really great database-like interface with >> things like indexing, searching, etc. (both NumExpr and Blosc came out >> of work on PyTables, I believe). In contrast, h5py arose by asking >> "how can we map the basic HDF5 abstractions to Python in a direct but >> still Pythonic way". > > Yeah, I agree that the h5py and PyTables cannot be seen as direct > competitors (although many people, including myself at times :), see > them as such). As I said before, performance is one of the aspects that > is *extremely* important for PyTables, and you are right in that both > OPSI and Blosc were developments done for the sake of PyTables. > > Numexpr case is somewhat different, since it is was originally developed > outside of the project (by David M. Cooke), and adopted and largely > enhanced for allowing fast queries and out of core computations in > PyTables. 
Of course, all these enhancements where contributed back to > the original numexpr project and it continues to be an stand-alone > library that is useful in many scenarios different than PyTables, in a > typical case of fertile project cross-polinization. > >> >> The API for h5py has both a high-level and low-level component; like >> PyTables, the high-level component is oriented around files, datasets >> and groups, allows iteration over elements in the file, etc. The >> emphasis in h5py is to use existing objects and abstractions from >> NumPy; for example, datasets have .dtype and .shape attributes and can >> be sliced like NumPy arrays. Groups are treated like dictionaries, >> are iterable, have .keys() and .iteritems() and friends, etc. > > So for PyTables the approach in this regard is very close to h5py, with > the exception that the group hierarchy is primarily (but not only) > accessed via a natural naming approach, i.e. something like: > > file.root.group1.group2.dataset > > instead of the h5py approach: > > file['group1']['group2']['dataset'] > > I find the former extremely more useful for structure discovering (by > hitting the TAB key in REPL interactive environments), but this is > probably a matter of tastes. > >> >> The "main" high level interface in h5py also rests on a huge low-level >> interface written in Cython >> (http://h5py.alfven.org/docs/low/index.html), which exposes the >> majority of the HDF5 C API in a Pythonic, object-oriented way. The >> goal here is anything you can do with HDF5 in C, you can do in Python. > > Yeah, as it has already been said, here it lies one of the big > differences between both projects: PyTables does not come with a > low-level interface to HDF5. This, however, has been a deliberate > design goal, as the HDF5 C API which can be rather complex and > cumbersome for the large majority of Python users, and it was estimated > that the large majority of the people was not interested in delving with > HDF5 intricacies (those PyTables users interested in accessing to such > HDF5 capabilities can always take the Cython sources and build new > features on top of it, which I find a more sensible approach, specially > if performance is interesting for the user). > >> >> It has no dependencies beyond NumPy and Python itself; I will let >> others chime in for specific projects which depend on h5py. As a >> rough proxy for popularity, h5py has roughly 30k downloads over the >> life of the project (10k in the past year). > > I cannot tell how many downloads PyTables has had over its almost 10 > years of existence (the first public release was made back in October > 2002), but probably a lot. Sourceforge reports that it received more > than 50K downloads for the 2.3 series (released one year ago) and more > than 6.5K downloads for the recent 2.4.0 version released a couple of > months ago. However that's is a bit tricky because PyTables is shipped > in most of Linux distributions, and Windows binaries are not available > in SF anymore, but through independent Windows distributions like > Gohlke's, EPD, Python(x,y) or Anaconda, so likely the actual number > would be much more than that (but the same should apply to h5py). > >> >> I have never benchmarked PyTables against h5py, but I strongly suspect >> PyTables is faster. > > Yes, without knowing about anybody having done an exhaustive comparison > in most of the scenarios, my own benchmarks confirm that PyTables is > generally faster than h5py. 
It is true that both projects use HDF5 as > the basic I/O library, but when combining things like OPSI, Blosc and > numexpr, this can make a huge difference. For example, in some > benchmarks that I did some months ago, the difference in performance was > in the range from 10 thousand to more than 100 thousand times, especially > when browsing and querying medium-size on-disk tables (100 thousand rows > long): > > http://www.digipedia.pl/usenet/thread/16009/26243/#post26257 > > Also, Gaël Varoquaux blogged about some real-life benchmarks showing how the > ultra-fast Blosc (and LZO) compressors integrated in PyTables can > accelerate the I/O: > > http://gael-varoquaux.info/blog/?p=159 > > Anyway, I think the home page about PyTables does a good job expressing > how important performance is for the project: > > http://www.pytables.org/moin > >> Most of the development effort that has recently >> gone into h5py has been focused in other areas like API coverage, >> Python 3 support, Unicode, and thread safety; we've never done careful >> performance testing. > > Yep, probably here h5py is more advanced than PyTables, especially > because the latter does not provide full Python 3 support yet. However, > Antonio made great strides in this area, and most of the pieces are > already there, the most outstanding missing part being Python 3 > support for numexpr. In fact, Antonio already kindly provided patches > for this, but I still need to check them out and release a new version > of numexpr. I think I should stop procrastinating and give this a go > sooner rather than later (Antonio, if you are reading this, be ready for some > questions about your patches soon :). > > -- > Francesc Alted > > > > ------------------------------ > > Message: 3 > Date: Mon, 24 Sep 2012 11:30:17 +0100 > From: Thomas Kluyver > Subject: Re: [SciPy-User] Pylab - standard packages > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > Thanks everyone for your comments. > > Re IPython: Carlos confirms [1] that integration with Spyder is > improving considerably. Python(x,y) will probably throw the switch on > a newer version of IPython once Spyder 2.2 is released, which sounds > like it will be quite soon. > > FreeImage: So does sk-image interface directly with the C++ library, > or does it need some existing wrapper like FreeImagePy? From the > sounds of it, we should include FreeImage and any wrapper sk-image > needs, with a view to including imageio in the future. Or perhaps we > should require FreeImage or PIL, and let distributions pick? > > NumFOCUS: Thanks Travis, I think the support of NumFOCUS will be > invaluable for taking this forward. For this discussion about the > standard, though, I think we're benefiting from a much wider audience > on scipy-central. > > HDF5: Thanks for that description, Francesc. I'm leaning towards > specifying both PyTables and h5py. EPD, Anaconda, Python(x,y) and > WinPython already include both. I'm not clear on whether Sage & QSnake > do - I see no mention of HDF5 on Sage's package list. For the first > spec version, I think we should leave the inclusion of packages to > handle HDF4 and netCDF4 up to distributions, as it seems they're more > specialist.
> > [1] https://github.com/ipython/ipython/issues/2425 > > Best wishes, > Thomas > > > ------------------------------ > > Message: 4 > Date: Mon, 24 Sep 2012 13:36:48 +0200 > From: Almar Klein > Subject: Re: [SciPy-User] Pylab - standard packages > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > >> >> FreeImage: So does sk-image interface directly with the C++ library, >> or does it need some existing wrapper like FreeImagePy? From the >> sounds of it, we should include FreeImage and any wrapper sk-image >> needs, with a view to including imageio in the future. Or perhaps we >> should require FreeImage or PIL, and let distributions pick? >> > > The FreeImage plugin of sk-image interfaces directly with the library; the > plugin and imageio both originated from the same code (but have probably > diverged a bit by now). > > Almar > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20120924/58e4f495/attachment-0001.html > > ------------------------------ > > Message: 5 > Date: Mon, 24 Sep 2012 08:15:25 -0400 > From: "Moore, Eric (NIH/NIDDK) [F]" > Subject: Re: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails > To: 'SciPy Users List' > Message-ID: > > Content-Type: text/plain; charset="us-ascii" > >> -----Original Message----- >> From: Ondrej Certik [mailto:ondrej at certik.cz] >> Sent: Sunday, September 23, 2012 7:32 PM >> To: SciPy Users List >> Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails >> >> Hi, >> >> I noticed that hyp1f1(0.5, 1.5, -2000) fails: >> >> In [5]: scipy.special.hyp1f1(0.5, 1.5, 0) >> Out[5]: 1.0 >> >> In [6]: scipy.special.hyp1f1(0.5, 1.5, -20) >> Out[6]: 0.19816636482997368 >> >> In [7]: scipy.special.hyp1f1(0.5, 1.5, -200) >> Out[7]: 0.062665706865775023 >> >> In [8]: scipy.special.hyp1f1(0.5, 1.5, -2000) >> Warning: invalid value encountered in hyp1f1 >> Out[8]: nan >> >> >> The values [5], [6] and [7] are correct. The value [8] should be: >> >> 0.019816636488030055...
>> >> Ondrej >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > It's blowing up around -709: > > In [60]: s.hyp1f1(0.5, 1.5, -709.7827128933) > Out[60]: 0.03326459435722777 > > In [61]: s.hyp1f1(0.5, 1.5, -709.7827128934) > Out[61]: inf > > Eric > > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 109, Issue 58 > ******************************************* From jjhelmus at gmail.com Mon Sep 24 11:06:10 2012 From: jjhelmus at gmail.com (Jonathan Helmus) Date: Mon, 24 Sep 2012 11:06:10 -0400 Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails In-Reply-To: References: Message-ID: <506076E2.5000107@gmail.com> On 09/24/2012 08:15 AM, Moore, Eric (NIH/NIDDK) [F] wrote: >> -----Original Message----- >> From: Ondrej Certik [mailto:ondrej at certik.cz] >> Sent: Sunday, September 23, 2012 7:32 PM >> To: SciPy Users List >> Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails >> >> Hi, >> >> I noticed that hyp1f1(0.5, 1.5, -2000) fails: >> >> In [5]: scipy.special.hyp1f1(0.5, 1.5, 0) >> Out[5]: 1.0 >> >> In [6]: scipy.special.hyp1f1(0.5, 1.5, -20) >> Out[6]: 0.19816636482997368 >> >> In [7]: scipy.special.hyp1f1(0.5, 1.5, -200) >> Out[7]: 0.062665706865775023 >> >> In [8]: scipy.special.hyp1f1(0.5, 1.5, -2000) >> Warning: invalid value encountered in hyp1f1 >> Out[8]: nan >> >> >> The values [5], [6] and [7] are correct. The value [8] should be: >> >> 0.019816636488030055... >> >> Ondrej >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > It's blowing up around -709: > > In [60]: s.hyp1f1(0.5, 1.5, -709.7827128933) > Out[60]: 0.03326459435722777 > > In [61]: s.hyp1f1(0.5, 1.5, -709.7827128934) > Out[61]: inf > > Eric > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user exp(709.78) is right at the edge of causing an overflow in double precision, 709.79 causes an overflow. In [2]: np.exp(np.array([709.78], dtype='float64')) Out[2]: array([ 1.79282279e+308]) In [3]: np.exp(np.array([709.79], dtype='float64')) /home/jhelmus/bin/ipython:1: RuntimeWarning: overflow encountered in exp #!/home/jhelmus/bin/epd-7.3-1-rh5-x86_64/bin/python Out[3]: array([ inf]) Line 5695 in specfun.f is calculating an exponential (and I believe it is causing this overflow). The complex version of this subroutine (CCHG) also suffers from this problem. I do not know enough about this type of function to suggest a meaningful fix other than suggesting a check that abs(x3) <= np.log(np.finfo('d').max).
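A user-level workaround along these lines might look something like the following (an untested sketch; it assumes the optional mpmath package for the arbitrary-precision fallback, which is not a scipy dependency):

import numpy as np
import mpmath
from scipy import special

LOG_DBL_MAX = np.log(np.finfo('d').max)  # ~709.78, where exp() overflows

def hyp1f1_safe(a, b, x):
    # use the fast scipy routine where the internal exp() stays finite,
    # fall back to slow arbitrary-precision evaluation otherwise
    if abs(x) <= LOG_DBL_MAX:
        return special.hyp1f1(a, b, x)
    return float(mpmath.hyp1f1(a, b, x))

print hyp1f1_safe(0.5, 1.5, -2000)   # ~0.019816636488030055

But the real fix would of course belong in specfun.f itself.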
- Jonathan Helmus From robert.kern at gmail.com Mon Sep 24 11:05:32 2012 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Sep 2012 10:05:32 -0500 Subject: [SciPy-User] SciPy-User Digest, Vol 109, Issue 58 In-Reply-To: <740FCE80-9044-4F27-9DA3-C45F271687F4@yahoo.com> References: <740FCE80-9044-4F27-9DA3-C45F271687F4@yahoo.com> Message-ID: On Mon, Sep 24, 2012 at 9:57 AM, Robaula wrote: > I also get seriously irritated when after all that I discover that to use SciPy or PyLab functions FFF and GGG, I have to make sure I've first imported modules MM1 and MM2 -- and many times scratch around until I find what modules contain FFF and the other function(s) I need. Why should _I_ do that? Why doesn't SciPy or PyLab know where its functions are and just automatically import the right module(s)? Seems like all it would take is to add a line to each function importing the appropriate module(s). Can you provide specific examples of this? I know of nothing that works like this. -- Robert Kern From takowl at gmail.com Mon Sep 24 11:06:26 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Mon, 24 Sep 2012 16:06:26 +0100 Subject: [SciPy-User] SciPy-User Digest, Vol 109, Issue 58 In-Reply-To: <740FCE80-9044-4F27-9DA3-C45F271687F4@yahoo.com> References: <740FCE80-9044-4F27-9DA3-C45F271687F4@yahoo.com> Message-ID: On 24 September 2012 15:57, Robaula wrote: > I'll be seriously discouraged if, to start getting results after installing SciPy or PyLab, I next have to install A. But to install A, I first have to install B. Then I next have to install C, and then D, etc. Well, that is the sort of thing we're aiming to improve. But equally, we can't know everything you're going to want to use, so depending on what you want to do, you might still have to install one or more extra packages. > And I don't want to learn or use Linux along the way. Ironically, this is precisely what Linux distributions do very well: dependency management. So when I ask it to install A, it works out that B, C and D are needed, downloads and installs them in the right order. Package management systems like APT are miles ahead of the Python packaging stuff. Thomas From josef.pktd at gmail.com Mon Sep 24 11:14:16 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Sep 2012 11:14:16 -0400 Subject: [SciPy-User] SciPy-User Digest, Vol 109, Issue 58 In-Reply-To: <740FCE80-9044-4F27-9DA3-C45F271687F4@yahoo.com> References: <740FCE80-9044-4F27-9DA3-C45F271687F4@yahoo.com> Message-ID: On Mon, Sep 24, 2012 at 10:57 AM, Robaula wrote: > As a Windows-only user, and interested mainly in quickly getting my application running and generating results, I want something that installs everything I need and lets me get on with it. And I don't want to learn or use Linux along the way. > > I'll be seriously discouraged if, to start getting results after installing SciPy or PyLab, I next have to install A. But to install A, I first have to install B. Then I next have to install C, and then D, etc. > > I also get seriously irritated when after all that I discover that to use SciPy or PyLab functions FFF and GGG, I have to make sure I've first imported modules MM1 and MM2 -- and many times scratch around until I find what modules contain FFF and the other function(s) I need. Why should _I_ do that? Why doesn't SciPy or PyLab know where its functions are and just automatically import the right module(s)? Seems like all it would take is to add a line to each function importing the appropriate module(s).
Because the program/package cannot or should not guess which function you want: scipy.linalg or numpy.linalg, numpy.any or Python's any, and so on. When you load some R packages, you get warnings that such-and-such function has been overwritten by the package that was just loaded. In Stata there is a chapter describing the search path and the priority. It is similar in Matlab: I always had to guess where a function was coming from and whether the function I wanted had been replaced by something else. And I was careful, adding a prefix to my function names, so I wouldn't make mistakes. Namespaces are one honking ... >>> import this The Windows help file makes searching and finding the location of a function very easy. Josef > > Sent from Robaula's iPad > helmrp at yahoo.com > 2645 E Southern, A241 > Tempe, AZ 85282 > VOX: 480-831-3611 > CELL: 602-568-6948 (but not often turned on) > From travis at continuum.io Mon Sep 24 11:41:21 2012 From: travis at continuum.io (Travis Oliphant) Date: Mon, 24 Sep 2012 10:41:21 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <1348327369.23629.YahooMailNeo@web160204.mail.bf1.yahoo.com> <505E669A.1010401@creativetrax.com> Message-ID: <3EB64ADA-97FA-41BA-BBD9-789BC6CE431F@continuum.io> nose is a dependency of NumPy, so it is on the list by default, I would think. -Travis On Sep 24, 2012, at 8:45 AM, Nathaniel Smith wrote: > On Sun, Sep 23, 2012 at 2:58 PM, Thomas Kluyver wrote: >> On 23 September 2012 14:41, Ralf Gommers wrote: >>> At least `nose`, we have to be able to run tests for packages. >> >> We do, but for new users, unit testing is something they're unlikely >> to need for a while. Installing nose also doesn't involve any complex >> requirements. So I'd be inclined not to specify it, but I'd like to >> hear what others think. > > IMHO nose should absolutely be on the list. The first thing I teach > people is the boilerplate to put at the bottom of their modules: > > if __name__ == "__main__": > import nose > nose.runmodule() > > And the target audience isn't just brand new users anyway. Sample code > should contain tests, so it's helpful to have a convention for how > they're named and how you write assertions, and both of these come > from nose as the de facto standard. > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From pierre.raybaut at gmail.com Mon Sep 24 15:22:38 2012 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Mon, 24 Sep 2012 21:22:38 +0200 Subject: [SciPy-User] ANN: WinPython v2.7.3.0 Message-ID: Hi all, I'm pleased to introduce my new contribution to the Python community: WinPython. WinPython v2.7.3.0 has been released and is available for 32-bit and 64-bit Windows platforms: http://code.google.com/p/winpython/ WinPython is a free open-source portable distribution of Python for Windows, designed for scientists.
It is a full-featured (see http://code.google.com/p/winpython/wiki/PackageIndex) Python-based scientific environment: * Designed for scientists (thanks to the integrated libraries NumPy, SciPy, Matplotlib, guiqwt, etc.): * Regular *scientific users*: interactive data processing and visualization using Python with Spyder * *Advanced scientific users and software developers*: Python application development with Spyder, version control with Mercurial and other development tools (like gettext) * *Portable*: preconfigured, it should run out of the box on any machine under Windows (without any installation requirements) and the folder containing WinPython can be moved to any location (local, network or removable drive) * *Flexible*: one can install (or should I write "use" as it's portable) as many WinPython versions as necessary (like isolated and self-consistent environments), even if those versions are running different versions of Python (2.7, 3.x in the near future) or different architectures (32bit or 64bit) on the same machine * *Customizable*: using the integrated package manager (wppm, as WinPython Package Manager), it's possible to install, uninstall or upgrade Python packages (see http://code.google.com/p/winpython/wiki/WPPM for more details on supported package formats). *WinPython is not an attempt to replace Python(x,y)*; it is just something different (see http://code.google.com/p/winpython/wiki/Roadmap): more flexible, easier to maintain, movable and less invasive for the OS, but certainly less user-friendly, with fewer packages/contents and without any integration into Windows Explorer [*]. [*] Actually there is an optional integration into Windows Explorer, providing the same features as the official Python installer regarding file associations and context menu entry (this option may be activated through the WinPython Control Panel). Enjoy! -Pierre From ecarlson at eng.ua.edu Mon Sep 24 17:56:55 2012 From: ecarlson at eng.ua.edu (Eric Carlson) Date: Mon, 24 Sep 2012 16:56:55 -0500 Subject: [SciPy-User] ANN: WinPython v2.7.3.0 In-Reply-To: References: Message-ID: Hello Pierre, I discovered this last week, and I want to tell you how wonderful this is. The self-contained environment was a perfect cure for some misery I had. Thanks for putting this great system together. Now all I need is to figure out how to do a comparable thing for OS X... Cheers, Eric Carlson From michael at yanchepmartialarts.com.au Mon Sep 24 23:35:39 2012 From: michael at yanchepmartialarts.com.au (Brickle Macho) Date: Tue, 25 Sep 2012 11:35:39 +0800 Subject: [SciPy-User] "from scipy import signal" problem Message-ID: <5061268B.9090806@gmail.com> I am getting an error, see below, when I try: from scipy import signal SciPy was installed via MacPorts. I am not sure where to start to fix this. Do I need to rebuild scipy? If so, do I uninstall from MacPorts (and associated dependencies) first, or try something else? Any help appreciated. Brickle.
---- $ python test.py Traceback (most recent call last): File "test.py", line 3, in from scipy import signal File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/__init__.py", line 198, in from spline import * ImportError: dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so, 2): Symbol not found: ___ieee_divdc3 Referenced from: /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so Expected in: flat namespace in /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so From will at thearete.co.uk Tue Sep 25 05:24:26 2012 From: will at thearete.co.uk (William Furnass) Date: Tue, 25 Sep 2012 10:24:26 +0100 Subject: [SciPy-User] Pylab - standard packages Message-ID: Something that could raise awareness and expedite the adoption of the Pylab standard under Linux would be the availability of repositories for some of the most common distributions (e.g. Ubuntu, Fedora, RHEL-a-likes). As a number of distributions take a while to catch up with the latest releases of ipython etc., making Ubuntu PPA and repos.fedorapeople.org repositories available could provide people with a familiar, quick and easy means to install Pylab. Not sure whether we would want distribution package managers automatically upgrading packages though when new Pylab standards/packages are released. Any thoughts? On a related note it might be a good opportunity to bring the fairly-official-looking Scipy PPA [1] up to date. [1] https://launchpad.net/~scipy/+archive/ppa Cheers, Will From lists at hilboll.de Tue Sep 25 05:27:37 2012 From: lists at hilboll.de (Andreas Hilboll) Date: Tue, 25 Sep 2012 11:27:37 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: Message-ID: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> > Something that could raise awareness and expedite the adoption of the > Pylab standard under Linux would be the availability of repositories > for some of the most common distributions (e.g. Ubuntu, Fedora, > RHEL-a-likes). As a number of distributions take a while to catch up > with the latest releases of ipython etc., making Ubuntu PPA and > repos.fedorapeople.org repositories available could provide people > with a familiar, quick and easy means to install Pylab. Not sure > whether we would want distribution package managers automatically > upgrading packages though when new Pylab standards/packages are > released. Any thoughts? > > On a related note it might be a good opportunity to bring the > fairly-official-looking Scipy PPA [1] up to date. > > [1] https://launchpad.net/~scipy/+archive/ppa +1 Just some days ago I registered the pylab team on launchpad, with a 'pylab-stable' PPA: https://launchpad.net/~pylab/+archive/stable However, no packages yet.
Anyone wanting to participate send me a note, > and I'll take you on the team. Thanks Andreas - I'd be keen to participate in that. Thomas From tdimiduk at physics.harvard.edu Tue Sep 25 17:11:42 2012 From: tdimiduk at physics.harvard.edu (Tom Dimiduk) Date: Tue, 25 Sep 2012 17:11:42 -0400 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> Message-ID: <50621E0E.1030605@physics.harvard.edu> On 09/25/2012 05:27 AM, Andreas Hilboll wrote: >> Something that could raise awareness and expedite the adoption of the >> Pylab standard under Linux would be the availability of repositories >> for some of the most common distributions (e.g. Ubuntu, Fedora, >> RHEL-a-likes). As a number of distributions take a while to catch up >> with the latest releases of ipython etc., making Ubuntu PPA and >> repos.fedorapeople.org repositories available could provide people >> with a familiar, quick and easy means to install Pylab. Not sure >> whether we would want distribution package managers automatically >> upgrading packages though when new Pylab standards/packages are >> released. Any thoughts? >> >> On a related note it might be a good opportunity to bring the >> fairly-official-looking Scipy PPA [1] up to date. >> >> [1] https://launchpad.net/~scipy/+archive/ppa > +1 > > Just some days ago I registered the pylab team on launchpad, with a > 'pylab-stable' PPA: > > https://launchpad.net/~pylab/+archive/stable > > However, no packages yet. Anyone wanting to participate send me a note, > and I'll take you on the team. > > Cheers, Andreas. > Big +1 on this. I am currently stuck supporting ancient versions of numpy because people in my lab (probably rightfully) are not eager to move off their LTS Ubuntus. An officially blessed pylab PPA that they would be willing to trust would be awesome. Tom From travis at continuum.io Tue Sep 25 17:50:48 2012 From: travis at continuum.io (Travis Oliphant) Date: Tue, 25 Sep 2012 16:50:48 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> Message-ID: <799A78EB-B00E-4CE9-8A59-1C64DD00D3E3@continuum.io> There is already a Github pylab organization. Can we keep things on Github? -Travis On Sep 25, 2012, at 4:27 AM, Andreas Hilboll wrote: >> Something that could raise awareness and expedite the adoption of the >> Pylab standard under Linux would be the availability of repositories >> for some of the most common distributions (e.g. Ubuntu, Fedora, >> RHEL-a-likes). As a number of distributions take a while to catch up >> with the latest releases of ipython etc., making Ubuntu PPA and >> repos.fedorapeople.org repositories available could provide people >> with a familiar, quick and easy means to install Pylab. Not sure >> whether we would want distribution package managers automatically >> upgrading packages though when new Pylab standards/packages are >> released. Any thoughts? >> >> On a related note it might be a good opportunity to bring the >> fairly-official-looking Scipy PPA [1] up to date. >> >> [1] https://launchpad.net/~scipy/+archive/ppa > > +1 > > Just some days ago I registered the pylab team on launchpad, with a > 'pylab-stable' PPA: > > https://launchpad.net/~pylab/+archive/stable > > However, no packages yet.
Anyone wanting to participate send me a note, > and I'll take you on the team. > > Cheers, Andreas. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From travis at continuum.io Tue Sep 25 17:55:55 2012 From: travis at continuum.io (Travis Oliphant) Date: Tue, 25 Sep 2012 16:55:55 -0500 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> Message-ID: <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> I guess LaunchPad has some nice integrations with PPA concepts. I withdraw my recommendation about Github for this purpose. But, there is a pylab github account that can be used. -Travis On Sep 25, 2012, at 4:27 AM, Andreas Hilboll wrote: >> Something that could raise awareness and expedite the adoption of the >> Pylab standard under Linux would be the availability of repositories >> for some of the most common distributions (e.g. Ubuntu, Fedora, >> RHEL-a-likes). As a number of distributions take a while to catch up >> with the latest releases of ipython etc., making Ubuntu PPA and >> repos.fedorapeople.org repositories available could provide people >> with a familiar, quick and easy means to install Pylab. Not sure >> whether we would want distribution package managers automatically >> upgrading packages though when new Pylab standards/packages are >> released. Any thoughts? >> >> On a related note it might be a good opportunity to bring the >> fairly-official-looking Scipy PPA [1] up to date. >> >> [1] https://launchpad.net/~scipy/+archive/ppa > > +1 > > Just some days ago I registered the pylab team on launchpad, with a > 'pylab-stable' PPA: > > https://launchpad.net/~pylab/+archive/stable > > However, no packages yet. Anyone wanting to participate send me a note, > and I'll take you on the team. > > Cheers, Andreas. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Tue Sep 25 18:01:33 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 25 Sep 2012 15:01:33 -0700 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> Message-ID: On Tue, Sep 25, 2012 at 2:55 PM, Travis Oliphant wrote: > I guess LaunchPad has some nice integrations with PPA concepts. I withdraw my recommendation about Github for this purpose. But, there is a pylab github account that can be used. Yes a PPA is strictly so that debian/ubuntu/debian-derivative users can add it as a source for automatically-updated packages. Though I'd like to add that our wonderful friends at neurodebian already fill much of this role, I'm not really sure we need a new PPA. They pretty much package everything we're talking about here, do it on time, and I'd rather encourage reuse and support of their extraordinary efforts than build a new one next to theirs. If upon closer inspection we find good reasons *not* to use neurodebian, so be it; but let's not *start* by effectively redoing a small piece of what Yarik and Michael have already built up with so much work.
Cheers, f From josef.pktd at gmail.com Tue Sep 25 18:15:21 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 25 Sep 2012 18:15:21 -0400 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> Message-ID: On Tue, Sep 25, 2012 at 6:01 PM, Fernando Perez wrote: > On Tue, Sep 25, 2012 at 2:55 PM, Travis Oliphant wrote: >> I guess LaunchPad has some nice integrations with PPA concepts. I withdraw my recommendation about Github for this purpose. But, there is a pylab github account that can be used. > > Yes a PPA is strictly so that debian/ubuntu/debian-derivative users > can add it as a source for automatically-updated packages. > > Though I'd like to add that our wonderful friends at neurodebian > already fill much of this role, I'm not really sure we need a new PPA. > They pretty much package everything we're talking about here, do it > on time, and I'd rather encourage reuse and support of their > extraordinary efforts than build a new one next to theirs. If upon > closer inspection we find good reasons *not* to use neurodebian, so be > it; but let's not *start* by effectively redoing a small piece of what > Yarik and Michael have already built up with so much work. As far as I understand (not being a Linux user) the ppa for pythonxy is mostly a repackaging of the Debian version, which for statsmodels and related is mostly Yaroslav's and NeuroDebian's work https://launchpad.net/~pythonxy (advantage for statsmodels: integrated tests of the devel version on Ubuntu) Josef > > Cheers, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From helmrp at yahoo.com Tue Sep 25 21:55:49 2012 From: helmrp at yahoo.com (Robaula) Date: Tue, 25 Sep 2012 18:55:49 -0700 Subject: [SciPy-User] Flaws in the cobyla routine In-Reply-To: References: Message-ID: <3BCEF0D1-EBFF-40CC-A859-834903699A15@yahoo.com> I consider the following to be flaws (not bugs) in the cobyla, or "constrained optimization by (successive) linear approximation" routine. 1. The routine returns only the optimal argument value. No description of the status at the end of the run, such as would be provided by the "Results" class, is included. Consequently the user receives no information on the trustworthiness of the returned value. 2. Although "disp" and "iprint" are among the options in cobyla's calling signature, assigning different values to them has absolutely no effect on the output. 3. The docstring's Examples section mentions several values that would be included in the "Results", but with no indication of where they come from or how the users could get at them. 4. The cobyla routine calls the _minimize_cobyla routine, which prepares and returns the "Results" class information. This includes: the optimal value of the argument, the status at the end, the success flag value, a message expressing the success flag in English, the optimal value of the function, and the maximum constraint violation. Then the cobyla routine "throws away" all but the first of these values. That occurs because after the cobyla routine receives the "Results" dictionary of values from _minimize_cobyla, it returns, in effect, "Results['x']", i.e., just the "Results" dictionary entry for key 'x'. This is done in line 165 of the cobyla routine. Recommend revision to correct these flaws.
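To illustrate point 1, a small sketch (a toy problem; the returned values are approximate):

>>> from scipy.optimize import fmin_cobyla
>>> objective = lambda x: x[0]**2 + x[1]**2      # minimize x^2 + y^2
>>> constraint = lambda x: x[0] + x[1] - 1       # subject to x + y >= 1
>>> x = fmin_cobyla(objective, [1.0, 1.0], [constraint])
>>> x                                            # roughly array([ 0.5,  0.5])

That array is the only thing returned; the end status, message, function value and maximum constraint violation are all discarded.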
While doing that, consider the following remarks. In addition to reporting the maximum constraint violation, it would be helpful to also report which constraint(s) caused the maximum constraint violation. It would also be useful to report which constraints were "tight" and which were "loose." Bob & Paula H From lists at hilboll.de Wed Sep 26 02:40:43 2012 From: lists at hilboll.de (Andreas Hilboll) Date: Wed, 26 Sep 2012 08:40:43 +0200 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> Message-ID: <793f3c2fb08a0cd3981b1053098eaf8f.squirrel@srv2.s4y.tournesol-consulting.eu> > On Tue, Sep 25, 2012 at 2:55 PM, Travis Oliphant > wrote: >> I guess LaunchPad has some nice integrations with PPA concepts. I >> withdraw my recommendation about Github for this purpose. But, there >> is a pylab github account that can be used. > > Yes a PPA is strictly so that debian/ubuntu/debian-derivative users > can add it as a source for automatically-updated packages. > > Though I'd like to add that our wonderful friends at neurodebian > already fill much of this role, I'm not really sure we need a new PPA. > They pretty much package everything we're talking about here, do it > on time, and I'd rather encourage reuse and support of their > extraordinary efforts than build a new one next to theirs. If upon > closer inspection we find good reasons *not* to use neurodebian, so be > it; but let's not *start* by effectively redoing a small piece of what > Yarik and Michael have already built up with so much work. Yes, Neurodebian looks extremely helpful. However, do they package numpy and scipy? I couldn't find it on the package list at http://neuro.debian.net/pkglists/pkgs-by_release-ubuntu_12.04_lts_%22precise_pangolin%22_%28precise%29.html#pkgs-by-release-ubuntu-12-04-lts-precise-pangolin-precise Ubuntu 12.04 is still at scipy 0.9 ... Apart from that, I agree that there isn't any need to duplicate efforts. Somewhere on the pylab homepage it should then clearly say something about Neurodebian, and there should be a pylab metapackage on Neurodebian. Cheers, Andreas. From denis at laxalde.org Wed Sep 26 04:01:57 2012 From: denis at laxalde.org (Denis Laxalde) Date: Wed, 26 Sep 2012 10:01:57 +0200 Subject: [SciPy-User] Flaws in the cobyla routine In-Reply-To: <3BCEF0D1-EBFF-40CC-A859-834903699A15@yahoo.com> References: <3BCEF0D1-EBFF-40CC-A859-834903699A15@yahoo.com> Message-ID: <5062B675.90902@laxalde.org> Robaula wrote: > 1. The routine returns only the optimal argument value. No description of the status at the end of the run, such as would be provided by the "Results" class, is included. Consequently the user receives no information on the trustworthiness of the returned value. You mean `fmin_cobyla`, I guess. See answer to point 4. > 2. Although "disp" and "iprint" are among the options in cobyla's calling signature, assigning different values to them has absolutely no effect on the output. This is documented. `disp` overrides `iprint`, which is deprecated. > 3. The docstring's Examples section mentions several values that would be included in the "Results", but with no indication of where they come from or how the users could get at them. Which docstring? Most of the information comes from the internal solver. As mentioned in its docstring, `Result` is a subclass of `dict` with attribute accessors.
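For instance, a minimal sketch against 0.11:

>>> from scipy.optimize import minimize
>>> res = minimize(lambda x: (x[0] - 1.0)**2, [0.0], method='cobyla')
>>> res.x
>>> res['success'], res.message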
You may then access attributes with a dot notation or as a regular dict. > 4. The cobyla routine calls the _minimize_cobyla routine, which prepares and returns the "Results" class information. This includes: the optimal value of the argument, the status at the end, the success flag value, a message expressing the success flag in English, the optimal value of the function, and the maximum constraint violation. Then the cobyla routine "throws away" all but the first of these values. That occurs because after the cobyla routine receives the "Results" dictionary of values from _minimize_cobyla, it returns, in effect, "Results['x']", i.e., just the "Results" dictionary entry for key 'x'. This is done in line 165 of the cobyla routine. The `Result` object was introduced recently, for the `minimize` interface. It is not returned by `fmin_cobyla` for backwards compatibility. If you want the full results, use `minimize(..., method='cobyla', ...)`. The old `fmin_` functions and the new `minimize` interface are present in the 0.11 release, which is good for transition purposes but is also confusing. We could deprecate the former in the next release, I guess. > In addition to reporting the maximum constraint violation, it would be helpful to also report which constraint(s) caused the maximum constraint violation. It would also be useful to report which constraints were "tight" and which were "loose." You may file a bug report (enhancement) for this or, better, submit a pull request. -- Denis Laxalde From takowl at gmail.com Wed Sep 26 07:18:45 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 26 Sep 2012 12:18:45 +0100 Subject: [SciPy-User] Pylab - standard packages In-Reply-To: References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> Message-ID: On 25 September 2012 23:01, Fernando Perez wrote: > Though I'd like to add that our wonderful friends at neurodebian > already fill much of this role, I'm not really sure we need a new PPA. As I see it, the PPA should be a one-stop-shop for any packages we specify which aren't in the distribution, or where the version in the distribution is too old. If someone else has already packaged it, it should be simple to upload their package, so we won't be duplicating a lot of work. I also intend to make a 'pylab' metapackage which will depend on all the packages in the spec, so users can 'apt-get install pylab'. Returning to earlier discussions: It sounds like we want to specify nose as well. The 1.2 release is quite recent, so let's specify >= 1.1. In the poll about IPython notebook, 30 votes have been cast in favour of including it, and 9 in favour of leaving it out for now. Those who didn't think it should be included: are you willing to concede the point? I'd prefer to avoid rehashing the discussions we've already had, but if there's some key issue that's not yet been mentioned, you can still bring it up.
Thanks, Thomas From lists at onerussian.com Wed Sep 26 10:20:07 2012 From: lists at onerussian.com (Yaroslav Halchenko) Date: Wed, 26 Sep 2012 07:20:07 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Pylab - standard packages In-Reply-To: References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> Message-ID: <34482439.post@talk.nabble.com> josef.pktd wrote: > > On Tue, Sep 25, 2012 at 6:01 PM, Fernando Perez > wrote: >> On Tue, Sep 25, 2012 at 2:55 PM, Travis Oliphant >> wrote: >>> I guess LaunchPad has some nice integrations with PPA concepts. I >>> withdraw my recommendation about Github for this purpose. But, there >>> is a pylab github account that can be used. >> >> Yes a PPA is strictly so that debian/ubuntu/debian-derivative users >> can add it as a source for automatically-updated packages. >> >> Though I'd like to add that our wonderful friends at neurodebian >> already fill much of this role, I'm not really sure we need a new PPA. >> They pretty much package everything we're talking about here, do it >> on time, and I'd rather encourage reuse and support of their >> extraordinary efforts than build a new one next to theirs. If upon >> closer inspection we find good reasons *not* to use neurodebian, so be >> it; but let's not *start* by effectively redoing a small piece of what >> Yarik and Michael have already built up with so much work. > > As far as I understand (not being a Linux user) > > the ppa for pythonxy is mostly a repackaging of the Debian version, > which for statsmodels and related is mostly Yaroslav's and > NeuroDebian's work > https://launchpad.net/~pythonxy > > (advantage for statsmodels: integrated tests of the devel version on > Ubuntu) > I saw this lovely thread and could not resist following up... First of all, thank you Fernando for your kind words! Indeed it would be a pity if our work gets duplicated... or worse -- claimed to be done by someone other than the original author (there were precedents) ;) But it might be possible to avoid duplication at the slight cost of proliferation of 'supplemental repositories' (e.g. NeuroDebian, and PPAs) if implemented correctly. My question (echoing Fernando) would be though -- what contribution would such a PPA bring? One of my main (personal) concerns about Canonical's PPA is -- its Ubuntu-egocentricity, and lack of Debian builds. So, myself, I would not use it, nor depend on it, nor recommend it to our (Debian/NeuroDebian) users. BUT if those repackaged daily builds enabled testing (both pandas and statsmodels debian/rules were patched with DEB_BUILD_OPTIONS=nocheck, thus there are no "integrated tests", Josef, ATM) -- I would see a benefit in such a PPA existing even for myself and for other developers to get free CI based on Debian packaging -- it would help me to eliminate pre- or post-release rebuild rounds catching residual defects. For the users -- I am not quite sure, besides relatively rare use cases for people needing to install a bleeding-edge, unreleased development version. For the released missing ones -- I would invite people just to create Debian-quality packages (it is not rocket science) and contribute to NeuroDebian and_thus/or to Debian (since we upload to the root cause of this beauty). Otherwise, with an ad-hoc PPA -- these non-official, possibly mediocre packages, which might not follow the evolution of the official package in Debian (and thus Ubuntu) -- might diverge from the packaging in official repositories.
Then having differently packaged official packages and these PPA-provided development packages might be confusing, might break upgrade paths, might cause unexpected installation conflicts, etc., so users might get "hurt" in the long run. And that is where proliferation of PPAs would hurt everyone. Regarding Andreas's comment on fresh numpy/scipy in NeuroDebian: injecting such core libraries into the released systems where leaf-packages might not be compatible with them (and there is no reliable way to check since not every package has 100% test coverage exercised at build time) would be ... suboptimal and might break a lot of things; that is why I doubt we would ever do that for NeuroDebian unless the Python world comes up with a good uniform way to specify version requirements at import time for co-available multiple versions of the modules. Summary: a PPA closely following changes in the official packages, with daily builds against upstream code and build-time testing -- gets my vote. An additional PPA with simple rebuilds of our packages -- I could not care less. A PPA providing core packages replacing system-wide installed ones -- I will be glad such PPAs have no Debian builds. -- View this message in context: http://old.nabble.com/Pylab---standard-packages-tp34450032p34482439.html Sent from the Scipy-User mailing list archive at Nabble.com. From takowl at gmail.com Wed Sep 26 11:34:40 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 26 Sep 2012 16:34:40 +0100 Subject: [SciPy-User] [SciPy-user] Pylab - standard packages In-Reply-To: <34482439.post@talk.nabble.com> References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> <34482439.post@talk.nabble.com> Message-ID: Hi Yaroslav, On 26 September 2012 15:20, Yaroslav Halchenko wrote: > For the users -- I am not quite sure, besides relatively rare use cases for > people needing to install a bleeding-edge, unreleased development version. > For the released missing ones -- I would invite people just to create > Debian-quality packages (it is not rocket science) and contribute to > NeuroDebian and_thus/or to Debian (since we upload to the root cause of this > beauty). Otherwise, with an ad-hoc PPA -- these non-official, possibly > mediocre packages, which might not follow the evolution of the official > package in Debian (and thus Ubuntu) -- might diverge from the packaging in > official repositories. The aim certainly isn't to have 'mediocre' packages. I envisage something similar to what you're doing, but for a much smaller set of packages, and a broader audience than neuroscientists. It looks like you're already maintaining ~all of the packages we're interested in, and you've clearly got the repository infrastructure running nicely. From a technical point of view, we could just point users at NeuroDebian, and say "you don't need to be a neuroscientist, just install these general packages". But that doesn't work in terms of perception - people looking for Pylab will think that's a kludge. If I can be bold, perhaps there's a way forward that suits everyone. Would you be prepared to separate out the Pylab packages into another repository that people could use without seeing the entirety of Neurodebian? The source lists for Neurodebian would include the Pylab repository as well, although we'd have to work out what to do for existing subscribers. I imagine there's a bit more effort involved in managing two repositories, but you may have more volunteers to help with the Pylab stuff.
Then we could point at the Pylab repository as the best way to get Pylab on Debian & Ubuntu - and naturally we'd credit Neurodebian there. Thanks, Thomas From ralf.gommers at gmail.com Wed Sep 26 14:58:09 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 26 Sep 2012 20:58:09 +0200 Subject: [SciPy-User] Flaws in the cobyla routine In-Reply-To: <5062B675.90902@laxalde.org> References: <3BCEF0D1-EBFF-40CC-A859-834903699A15@yahoo.com> <5062B675.90902@laxalde.org> Message-ID: On Wed, Sep 26, 2012 at 10:01 AM, Denis Laxalde wrote: > > The old `fmin_` functions and the new `minimize` interface are present > in the 0.11 release, which is good for transition purposes but is also > confusing. We could deprecate the former in the next release, I guess. Don't think that's a good idea; those functions are too widely used. We can just add in the docstring of the fmin_ functions that it's recommended to use the new minimize() interface instead. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Sep 26 16:23:01 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 26 Sep 2012 22:23:01 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 release Message-ID: Hi, I am pleased to announce the availability of SciPy 0.11.0. For this release many new features have been added, and over 120 tickets and pull requests have been closed. Also noteworthy is that the number of contributors for this release has risen to over 50. We hope to see this number continuing to increase! The highlights of this release are: - A new module, sparse.csgraph, has been added which provides a number of common sparse graph algorithms. - New unified interfaces to the existing optimization and root finding functions have been added. Sources and binaries can be found at http://sourceforge.net/projects/scipy/files/scipy/0.11.0/, release notes are copied below. Thanks to everyone who contributed to this release, Ralf ========================== SciPy 0.11.0 Release Notes ========================== .. contents:: SciPy 0.11.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. Highlights of this release are: - A new module has been added which provides a number of common sparse graph algorithms. - New unified interfaces to the existing optimization and root finding functions have been added. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Our development attention will now shift to bug-fix releases on the 0.11.x branch, and on adding new features on the master branch. This release requires Python 2.4-2.7 or 3.1-3.2 and NumPy 1.5.1 or greater. New features ============ Sparse Graph Submodule ---------------------- The new submodule :mod:`scipy.sparse.csgraph` implements a number of efficient graph algorithms for graphs stored as sparse adjacency matrices.
Available routines are: - :func:`connected_components` - determine connected components of a graph - :func:`laplacian` - compute the laplacian of a graph - :func:`shortest_path` - compute the shortest path between points on a positive graph - :func:`dijkstra` - use Dijkstra's algorithm for shortest path - :func:`floyd_warshall` - use the Floyd-Warshall algorithm for shortest path - :func:`breadth_first_order` - compute a breadth-first order of nodes - :func:`depth_first_order` - compute a depth-first order of nodes - :func:`breadth_first_tree` - construct the breadth-first tree from a given node - :func:`depth_first_tree` - construct a depth-first tree from a given node - :func:`minimum_spanning_tree` - construct the minimum spanning tree of a graph ``scipy.optimize`` improvements ------------------------------- The optimize module has received a lot of attention this release. In addition to added tests, documentation improvements, bug fixes and code clean-up, the following improvements were made: - A unified interface to minimizers of univariate and multivariate functions has been added. - A unified interface to root finding algorithms for multivariate functions has been added. - The L-BFGS-B algorithm has been updated to version 3.0. Unified interfaces to minimizers ```````````````````````````````` Two new functions ``scipy.optimize.minimize`` and ``scipy.optimize.minimize_scalar`` were added to provide a common interface to minimizers of multivariate and univariate functions respectively. For multivariate functions, ``scipy.optimize.minimize`` provides an interface to methods for unconstrained optimization (`fmin`, `fmin_powell`, `fmin_cg`, `fmin_ncg`, `fmin_bfgs` and `anneal`) or constrained optimization (`fmin_l_bfgs_b`, `fmin_tnc`, `fmin_cobyla` and `fmin_slsqp`). For univariate functions, ``scipy.optimize.minimize_scalar`` provides an interface to methods for unconstrained and bounded optimization (`brent`, `golden`, `fminbound`). This allows for easier comparison of and switching between solvers. Unified interface to root finding algorithms ```````````````````````````````````````````` The new function ``scipy.optimize.root`` provides a common interface to root finding algorithms for multivariate functions, embedding `fsolve`, `leastsq` and `nonlin` solvers. ``scipy.linalg`` improvements ----------------------------- New matrix equation solvers ``````````````````````````` Solvers for the Sylvester equation (``scipy.linalg.solve_sylvester``), discrete and continuous Lyapunov equations (``scipy.linalg.solve_lyapunov``, ``scipy.linalg.solve_discrete_lyapunov``) and discrete and continuous algebraic Riccati equations (``scipy.linalg.solve_continuous_are``, ``scipy.linalg.solve_discrete_are``) have been added to ``scipy.linalg``. These solvers are often used in the field of linear control theory. QZ and QR Decomposition ```````````````````````` It is now possible to calculate the QZ, or Generalized Schur, decomposition using ``scipy.linalg.qz``. This function wraps the LAPACK routines sgges, dgges, cgges, and zgges. The function ``scipy.linalg.qr_multiply``, which allows efficient computation of the matrix product of Q (from a QR decomposition) and a vector, has been added. Pascal matrices ``````````````` A function for creating Pascal matrices, ``scipy.linalg.pascal``, was added.
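A quick example (a sketch; output formatting may vary)::

    >>> from scipy.linalg import pascal
    >>> pascal(4)
    array([[ 1,  1,  1,  1],
           [ 1,  2,  3,  4],
           [ 1,  3,  6, 10],
           [ 1,  4, 10, 20]])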
Sparse matrix construction and operations ----------------------------------------- Two new functions, ``scipy.sparse.diags`` and ``scipy.sparse.block_diag``, were added to easily construct diagonal and block-diagonal sparse matrices respectively. ``scipy.sparse.csc_matrix`` and ``csr_matrix`` now support the operations ``sin``, ``tan``, ``arcsin``, ``arctan``, ``sinh``, ``tanh``, ``arcsinh``, ``arctanh``, ``rint``, ``sign``, ``expm1``, ``log1p``, ``deg2rad``, ``rad2deg``, ``floor``, ``ceil`` and ``trunc``. Previously, these operations had to be performed by operating on the matrices' ``data`` attribute. LSMR iterative solver --------------------- LSMR, an iterative method for solving (sparse) linear and linear least-squares systems, was added as ``scipy.sparse.linalg.lsmr``. Discrete Sine Transform ----------------------- Bindings for the discrete sine transform functions have been added to ``scipy.fftpack``. ``scipy.interpolate`` improvements ---------------------------------- For interpolation in spherical coordinates, the three classes ``scipy.interpolate.SmoothSphereBivariateSpline``, ``scipy.interpolate.LSQSphereBivariateSpline``, and ``scipy.interpolate.RectSphereBivariateSpline`` have been added. Binned statistics (``scipy.stats``) ----------------------------------- The stats module has gained functions to do binned statistics, which are a generalization of histograms, in 1-D, 2-D and multiple dimensions: ``scipy.stats.binned_statistic``, ``scipy.stats.binned_statistic_2d`` and ``scipy.stats.binned_statistic_dd``. Deprecated features =================== ``scipy.sparse.cs_graph_components`` has been made a part of the sparse graph submodule, and renamed to ``scipy.sparse.csgraph.connected_components``. Calling the former routine will result in a deprecation warning. ``scipy.misc.radon`` has been deprecated. A more full-featured radon transform can be found in scikits-image. ``scipy.io.save_as_module`` has been deprecated. A better way to save multiple Numpy arrays is the ``numpy.savez`` function. The `xa` and `xb` parameters for all distributions in ``scipy.stats.distributions`` were already unused; they have now been deprecated. Backwards incompatible changes ============================== Removal of ``scipy.maxentropy`` ------------------------------- The ``scipy.maxentropy`` module, which was deprecated in the 0.10.0 release, has been removed. Logistic regression in scikits.learn is a good and modern alternative for this functionality. Minor change in behavior of ``splev`` ------------------------------------- The spline evaluation function now behaves similarly to ``interp1d`` for size-1 arrays. Previous behavior:: >>> from scipy.interpolate import splev, splrep, interp1d >>> x = [1,2,3,4,5] >>> y = [4,5,6,7,8] >>> tck = splrep(x, y) >>> splev([1], tck) 4. >>> splev(1, tck) 4. Corrected behavior:: >>> splev([1], tck) array([ 4.]) >>> splev(1, tck) array(4.) This also affects the ``UnivariateSpline`` classes. Behavior of ``scipy.integrate.complex_ode`` ------------------------------------------- The behavior of the ``y`` attribute of ``complex_ode`` is changed. Previously, it expressed the complex-valued solution in the form:: z = ode.y[::2] + 1j * ode.y[1::2] Now, it is directly the complex-valued solution:: z = ode.y Minor change in behavior of T-tests ----------------------------------- The T-tests ``scipy.stats.ttest_ind``, ``scipy.stats.ttest_rel`` and ``scipy.stats.ttest_1samp`` have been changed so that 0 / 0 now returns NaN instead of 1.
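For example, the 0 / 0 case arises for two identical constant samples (a sketch; output formatting may vary)::

    >>> import numpy as np
    >>> from scipy import stats
    >>> a = np.ones(10)
    >>> stats.ttest_ind(a, a)   # zero mean difference over zero variance
    (nan, nan)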
Other changes ============= The SuperLU sources in ``scipy.sparse.linalg`` have been updated to version 4.3 from upstream. The function ``scipy.signal.bode``, which calculates magnitude and phase data for a continuous-time system, has been added. The two-sample T-test ``scipy.stats.ttest_ind`` gained an option to compare samples with unequal variances, i.e. Welch's T-test. ``scipy.misc.logsumexp`` now takes an optional ``axis`` keyword argument. Authors ======= This release contains work by the following people (contributed at least one patch to this release, names in alphabetical order): * Jeff Armstrong * Chad Baker * Brandon Beacher + * behrisch + * borishim + * Matthew Brett * Lars Buitinck * Luis Pedro Coelho + * Johann Cohen-Tanugi * David Cournapeau * dougal + * Ali Ebrahim + * endolith + * Björn Forsman + * Robert Gantner + * Sebastian Gassner + * Christoph Gohlke * Ralf Gommers * Yaroslav Halchenko * Charles Harris * Jonathan Helmus + * Andreas Hilboll + * Marc Honnorat + * Jonathan Hunt + * Maxim Ivanov + * Thouis (Ray) Jones * Christopher Kuster + * Josh Lawrence + * Denis Laxalde + * Travis Oliphant * Joonas Paalasmaa + * Fabian Pedregosa * Josef Perktold * Gavin Price + * Jim Radford + * Andrew Schein + * Skipper Seabold * Jacob Silterra + * Scott Sinclair * Alexis Tabary + * Martin Teichmann * Matt Terry + * Nicky van Foreest + * Jacob Vanderplas * Patrick Varilly + * Pauli Virtanen * Nils Wagner + * Darryl Wally + * Stefan van der Walt * Liming Wang + * David Warde-Farley + * Warren Weckesser * Sebastian Werk + * Mike Wimmer + * Tony S Yu + A total of 55 people contributed to this release. People with a "+" by their names contributed a patch for the first time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jeremy.Solbrig at nrlmry.navy.mil Wed Sep 26 16:25:53 2012 From: Jeremy.Solbrig at nrlmry.navy.mil (Solbrig, Mr. Jeremy) Date: Wed, 26 Sep 2012 13:25:53 -0700 Subject: [SciPy-User] Convolving an ndarray by a function Message-ID: Hi all, I have run into a situation where I need to calculate the standard deviation for each NxM box within an ndarray. I am wondering if there is a function available that would allow me to convolve (may not be the correct word here) an ndarray by a function and return an array of the same size as the original array. Something like this: >>> foo = np.arange(10000).reshape([100,100]) >>> stddev_arr = foo.funcconvolve([3, 3], np.std) >>> stddev_arr.shape == foo.shape True Such that each point within stddev_arr is the standard deviation of a 3x3 box around each point in foo. I'm sure I could code this in a loop, but I expect that there is a better solution.
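Poking around, scipy.ndimage.generic_filter looks like it might be close to what I want, e.g. something like this (untested):

>>> from scipy import ndimage
>>> stddev_arr = ndimage.generic_filter(foo.astype(float), np.std, size=3)
>>> stddev_arr.shape == foo.shape
True

(the cast to float is so the output dtype can hold the result), but I'm not sure whether that's the intended tool, or whether there's something faster.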
From ondrej.certik at gmail.com Mon Sep 24 13:01:06 2012
From: ondrej.certik at gmail.com (Ondřej Čertík)
Date: Mon, 24 Sep 2012 10:01:06 -0700
Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails
In-Reply-To: <506076E2.5000107@gmail.com>
References: <506076E2.5000107@gmail.com>
Message-ID: 

On Mon, Sep 24, 2012 at 8:06 AM, Jonathan Helmus wrote:
> On 09/24/2012 08:15 AM, Moore, Eric (NIH/NIDDK) [F] wrote:
>>> -----Original Message-----
>>> From: Ondrej Certik [mailto:ondrej at certik.cz]
>>> Sent: Sunday, September 23, 2012 7:32 PM
>>> To: SciPy Users List
>>> Subject: [SciPy-User] hyp1f1(0.5, 1.5, -2000) fails
>>>
>>> Hi,
>>>
>>> I noticed that hyp1f1(0.5, 1.5, -2000) fails:
>>>
>>> In [5]: scipy.special.hyp1f1(0.5, 1.5, 0)
>>> Out[5]: 1.0
>>>
>>> In [6]: scipy.special.hyp1f1(0.5, 1.5, -20)
>>> Out[6]: 0.19816636482997368
>>>
>>> In [7]: scipy.special.hyp1f1(0.5, 1.5, -200)
>>> Out[7]: 0.062665706865775023
>>>
>>> In [8]: scipy.special.hyp1f1(0.5, 1.5, -2000)
>>> Warning: invalid value encountered in hyp1f1
>>> Out[8]: nan
>>>
>>> The values [5], [6] and [7] are correct. The value [8] should be:
>>>
>>> 0.019816636488030055...
>>>
>>> Ondrej
>>
>> It's blowing up around -709:
>>
>> In [60]: s.hyp1f1(0.5, 1.5, -709.7827128933)
>> Out[60]: 0.03326459435722777
>>
>> In [61]: s.hyp1f1(0.5, 1.5, -709.7827128934)
>> Out[61]: inf
>>
>> Eric
>
> exp(709.78) is right at the edge of causing an overflow in double
> precision, 709.79 causes an overflow.
>
> In [2]: np.exp(np.array([709.78], dtype='float64'))
> Out[2]: array([ 1.79282279e+308])
>
> In [3]: np.exp(np.array([709.79], dtype='float64'))
> /home/jhelmus/bin/ipython:1: RuntimeWarning: overflow encountered in exp
>   #!/home/jhelmus/bin/epd-7.3-1-rh5-x86_64/bin/python
> Out[3]: array([ inf])
>
> Line 5695 in specfun.f is calculating an exponential (and I believe is
> causing this overflow). The complex version of this subroutine (CCHG)
> also suffers from this problem. I do not know enough about this type of
> function to suggest a meaningful fix other than suggesting a check that
> abs(x3) <= np.log(np.finfo('d').max()).

I think the Fortran implementation is not optimal. One probably has to use
log_gamma() and logs when doing intermediate calculations to avoid this
issue and only exponentiate the final answer, since we know it is
0.019816636488030055..., which is a nice number, not too small, not too
big.

Ondrej

From hasslerjc at comcast.net Wed Sep 26 16:51:39 2012
From: hasslerjc at comcast.net (John Hassler)
Date: Wed, 26 Sep 2012 16:51:39 -0400
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: 
Message-ID: <50636ADB.80304@comcast.net>

An HTML attachment was scrubbed...
URL: 
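For the hyp1f1(0.5, 1.5, -2000) case above there happens to be a closed
form to check against: hyp1f1(0.5, 1.5, -x**2) equals
sqrt(pi) * erf(x) / (2*x). A minimal sketch of that cross-check (not a
general fix for the overflow):

    import numpy as np
    from scipy.special import erf

    # hyp1f1(0.5, 1.5, -x**2) == sqrt(pi) * erf(x) / (2 * x)
    x = np.sqrt(2000.0)
    print(np.sqrt(np.pi) * erf(x) / (2 * x))  # 0.0198166364880..., no overflow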
From ralf.gommers at gmail.com Wed Sep 26 16:57:43 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 26 Sep 2012 22:57:43 +0200
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: <50636ADB.80304@comcast.net>
References: <50636ADB.80304@comcast.net>
Message-ID: 

On Wed, Sep 26, 2012 at 10:51 PM, John Hassler wrote:

> On 9/26/2012 4:23 PM, Ralf Gommers wrote:
>
> Hi,
>
> I am pleased to announce the availability of SciPy 0.11.0. For this
> release many new features have been added, and over 120 tickets and pull
> requests have been closed. Also noteworthy is that the number of
> contributors for this release has risen to over 50. We hope to see this
> number continuing to increase! The highlights of this release are:
>
>    - A new module, sparse.csgraph, has been added which provides a number
>    of common sparse graph algorithms.
>    - New unified interfaces to the existing optimization and root finding
>    functions have been added.
>
> Sources and binaries can be found at
> http://sourceforge.net/projects/scipy/files/scipy/0.11.0/, release notes
> are copied below.
>
> Thanks to everyone who contributed to this release,
> Ralf
>
>
> Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)]
> on win32
> Type "copyright", "credits" or "license()" for more information.
> >>> import scipy
> >>> scipy.test()
> Running unit tests for scipy
> NumPy version 1.6.2
> NumPy is installed in C:\Python27\lib\site-packages\numpy
> SciPy version 0.11.0
> SciPy is installed in C:\Python27\lib\site-packages\scipy
> Python version 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit
> (Intel)]
> nose version 0.11.2
>
> RuntimeError: module compiled against API version 7 but this version of
> numpy is 6
>

That shouldn't happen. It indicates that scipy was compiled against numpy
master. Did you just download the superpack installer from Sourceforge?

Ralf

From helmrp at yahoo.com Wed Sep 26 17:28:23 2012
From: helmrp at yahoo.com (Robaula)
Date: Wed, 26 Sep 2012 14:28:23 -0700
Subject: [SciPy-User] (no subject)
Message-ID: 

Bob & Paula H

From ralf.gommers at gmail.com Wed Sep 26 17:30:53 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 26 Sep 2012 23:30:53 +0200
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: <50636ADB.80304@comcast.net>
Message-ID: 

On Wed, Sep 26, 2012 at 10:57 PM, Ralf Gommers wrote:

> On Wed, Sep 26, 2012 at 10:51 PM, John Hassler wrote:
>
>> On 9/26/2012 4:23 PM, Ralf Gommers wrote:
>>
>> ...snip...
>> Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)]
>> on win32
>> Type "copyright", "credits" or "license()" for more information.
>> >>> import scipy
>> >>> scipy.test()
>> Running unit tests for scipy
>> NumPy version 1.6.2
>> NumPy is installed in C:\Python27\lib\site-packages\numpy
>> SciPy version 0.11.0
>> SciPy is installed in C:\Python27\lib\site-packages\scipy
>> Python version 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit
>> (Intel)]
>> nose version 0.11.2
>>
>> RuntimeError: module compiled against API version 7 but this version of
>> numpy is 6
>>
> That shouldn't happen. It indicates that scipy was compiled against numpy
> master. Did you just download the superpack installer from Sourceforge?

OK, found the problem. Something weird with my 2.7 install under Wine. All
the other installers on SourceForge are OK. Will recompile and upload a new
one for 2.7, should be up in an hour or so. Apologies for that.

Ralf

From hasslerjc at comcast.net Wed Sep 26 17:46:08 2012
From: hasslerjc at comcast.net (John Hassler)
Date: Wed, 26 Sep 2012 17:46:08 -0400
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: <50636ADB.80304@comcast.net>
Message-ID: <506377A0.8010603@comcast.net>

An HTML attachment was scrubbed...
URL: 

From helmrp at yahoo.com Wed Sep 26 18:44:43 2012
From: helmrp at yahoo.com (Robaula)
Date: Wed, 26 Sep 2012 15:44:43 -0700
Subject: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: 
References: 
Message-ID: <8DBA3877-D812-499D-9898-31D53B8A5303@yahoo.com>

On Sep 26, 2012, at 12:57 AM, scipy-user-request at scipy.org Denis Laxalde
wrote:

> Robaula a écrit :
>> 1. The routine returns only the optimal argument value. No description
>> of the status at the end of the run, such as would be provided by the
>> "Results" class, is included. Consequently the user receives no
>> information on the trustworthiness of the returned value.
>
> You mean `fmin_cobyla`, I guess. See answer to point 4.
>
>> 2. Although "disp" and "iprint" are among the options in cobyla's
>> calling signature, assigning different values to them has absolutely
>> no effect on the output.
>
> This is documented. `disp` overrides `iprint`, which is deprecated.

Sorry, the documentation is flat wrong. Both disp and iprint are
completely, utterly, and totally ignored. See fmin_cobyla line 165.

>> 3. The docstring's Examples section mentions several values that would
>> be included in the "Results", but with no indication of where they come
>> from or how the users could get at them.
>
> Which docstring?
> Most of the information comes from the internal solver.
> As mentioned in its docstring, `Result` is a subclass of `dict` with
> attribute accessors. You may then access attributes with a dot notation
> or as a regular dict.

I guess we're reading the same documentation, specifically the SciPy
Reference Guide of 05 June 2012, pages 401-403, and the fmin_cobyla
docstrings.

>> 4. The cobyla routine calls the _minimize_cobyla routine, which
>> prepares and returns the "Results" class information. This includes:
>> the optimal value of the argument, the status at end, the success flag
>> value, a message expressing the success flag in English, the optimal
>> value of the function, and the maximum constraint violation. Then the
>> cobyla routine "throws away" all but the first of these values.
That occurs because after the cobyla routine receives the "Results"
dictionary of values from _minimize_cobyla, it returns, in effect,
"Results['x']", i.e., just the "Results" dictionary entry for key 'x'.
This is done in line 165 of the cobyla routine.

> The `Result` object was introduced recently, for the `minimize`
> interface. It is not returned by `fmin_cobyla` for backwards
> compatibility. If you want the full results, use `minimize(...,
> method='cobyla', ...)`.
>
> The old `fmin_` functions and the new `minimize` interface are present
> in the 0.11 release, which is good for transition purpose but is also
> confusing. We could deprecate the former in the next release I guess.

Then I guess we'd have to deprecate nearly all of the optimize and
root-finding stuff mentioned in the SciPy Reference Guide on pages
388-442?? I note that Ralf Gommers thinks we should find a better way.
Cleaning up the fmin_cobyla documentation can certainly be a start on
that, and I'll take that on.

However, I think there are other, and more important issues here, namely,
"How should the new 'minimize' interfaces be documented? How should
existing documentation for the "old" interfaces be changed in light of the
new 'minimize' approach? How to bring the two of them more nearly into
alignment? Are changes to the documentation standards advisable to
facilitate this?" Sure, "minimize's" docstring is an excellent start, tho
I do see a couple areas where a little more explanation and/or some
examples would be helpful. Among other things, I think it would be a lot
better to just come out and tell folks what options are available for each
of the various "methods", rather than telling them the code sequence to
use to view them. Certainly the cobyla options for 'minimize' differ
noticeably from those for the fmin_cobyla approach. For example, using
fmin_cobyla ignores disp and iprint and never returns 'Results', while
using 'minimize' has no iprint option, ignores the disp option, and always
returns 'Results'.

I'd welcome good ideas on how best to document the new 'minimize'
interface, and how best to coordinate that with the "old style" method
documentation. I'm not looking for ideas that are merely the programmer's
personal reminders--the current documentation is just fine for that!
Instead, I'm looking for documentation suitable for use by folks who might
be described as: first year grad students just slightly familiar with
Python who have a one week deadline to solve their problem and desperately
need _very_ clear, unambiguous, and detailed instructions on exactly how
to proceed. Try to remember _your_ first year grad experiences!!

From david_baddeley at yahoo.com.au Thu Sep 27 00:17:29 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Wed, 26 Sep 2012 21:17:29 -0700 (PDT)
Subject: Re: [SciPy-User] Convolving an ndarray by a function
In-Reply-To: 
References: 
Message-ID: <1348719449.10458.YahooMailNeo@web113404.mail.gq1.yahoo.com>

Hi Jeremy,

what you are after is scipy.ndimage.generic_filter. For your particular
case (std. deviation), there is also a neat approximate solution using
just standard uniform filters, which, although not strictly accurate,
might be close enough for many applications and is considerably faster (a
quick test gives a speedup of ~ 300x).
Assuming your data array is called x:

import numpy as np
from scipy import ndimage

# calculate the mean of a 3x3 ROI round each point
xm = ndimage.uniform_filter(x, 3)

# the standard deviation is the mean of the sum of squared differences to
# the mean
# note that here we are subtracting the mean of the local neighbourhood of
# each pixel, rather than that of the central pixel
sigma = np.sqrt(ndimage.uniform_filter((x - xm)**2, 3))

cheers,
David

________________________________
From: "Solbrig, Mr. Jeremy"
To: "'scipy-user at scipy.org'"
Sent: Thursday, 27 September 2012 8:25 AM
Subject: [SciPy-User] Convolving an ndarray by a function

Hi all,

I have run into a situation where I need to calculate the standard
deviation for each NxM box within an ndarray. I am wondering if there is a
function available that would allow me to convolve (may not be the correct
word here) an ndarray by a function and return an array of the same size
as the original array. Something like this:

>>> foo = np.arange(10000).reshape([100,100])
>>> stddev_arr = foo.funcconvolve([3, 3], np.std)
>>> stddev_arr.shape == foo.shape
True

Such that each point within stddev_arr is the standard deviation of a 3x3
box around each point in foo. I'm sure I could code this in a loop, but I
expect that there is a better solution.

Thanks for your help,
Jeremy

Jeremy Solbrig
NRL Monterey
jeremy.solbrig at nrlmry.navy.mil
(831) 656-4885

From denis at laxalde.org Thu Sep 27 05:26:14 2012
From: denis at laxalde.org (Denis Laxalde)
Date: Thu, 27 Sep 2012 11:26:14 +0200
Subject: Re: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: <8DBA3877-D812-499D-9898-31D53B8A5303@yahoo.com>
References: <8DBA3877-D812-499D-9898-31D53B8A5303@yahoo.com>
Message-ID: <50641BB6.9090600@laxalde.org>

Robaula a écrit :
>>> 2. Although "disp" and "iprint" are among the options in
>>> cobyla's calling signature, assigning different values to them
>>> has absolutely no effect on the output.
>>
>> This is documented. `disp` overrides `iprint`, which is
>> deprecated.
>
> Sorry, the documentation is flat wrong. Both disp and iprint are
> completely, utterly, and totally ignored. See fmin_cobyla line 165.

disp and iprint parameters are not ignored, they are passed from
fmin_cobyla to _minimize_cobyla through the opts dictionary. So:

- with disp != 0, fmin_cobyla prints things whatever the value of iprint,
  because iprint=disp;
- with disp unspecified or None and iprint != 0, you get the same
  behaviour;
- minimize just considers Boolean values of disp and switches between
  iprint=0 or iprint=1, as do most other methods.

So what's wrong? Am I missing something?

> Then I guess we'd have to deprecate nearly all of the optimize and
> root-finding stuff mentioned in the SciPy Reference Guide on pages
> 388-442?? I note that Ralf Gommers thinks we should find a better way.
> Cleaning up the fmin_cobyla documentation can certainly be a start on
> that, and I'll take that on.
>
> However, I think there are other, and more important issues here,
> namely, "How should the new 'minimize' interfaces be documented? How
> should existing documentation for the "old" interfaces be changed in
> light of the new 'minimize' approach? How to bring the two of them
> more nearly into alignment? Are changes to the documentation
> standards advisable to facilitate this?"
If the fmin_ functions aren't deprecated, maybe they should at least not
appear in the reference guide at the same level as the minimize and root
interfaces. The current situation is clearly confusing IMO.

> Sure, "minimize's" docstring is an excellent start, tho I do see a
> couple areas where a little more explanation and/or some examples would
> be helpful. Among other things, I think it would be a lot better to just
> come out and tell folks what options are available for each of the
> various "methods", rather than telling them the code sequence to use to
> view them.

This particular point was discussed during implementation of the
interface, and there was an agreement that the docstring should not be
filled with options specific to each method, hence the advice to refer to
show_minimize() for the latter. There's probably room for improvements
though.

> Certainly the cobyla options for 'minimize' differ noticeably from
> those for the fmin_cobyla approach.

The idea was to streamline the options amongst solvers.

> I'm looking for documentation suitable for use by folks who might be
> described as: first year grad students just slightly familiar with
> Python who have a one week deadline to solve their problem and
> desperately need _very_ clear, unambiguous, and detailed instructions
> on exactly how to proceed. Try to remember _your_ first year grad
> experiences!!

This is what the optimize tutorial [1] is for, I guess.

1: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html

--
Denis Laxalde

From rpyle at post.harvard.edu Thu Sep 27 10:01:52 2012
From: rpyle at post.harvard.edu (Robert Pyle)
Date: Thu, 27 Sep 2012 10:01:52 -0400
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: 
Message-ID: 

Hi,

I ran into a segfault with AnacondaCE on OSX 10.8.2

$ python
Python 2.7.3 AnacondaCE (default, Sep 4 2012, 10:42:42)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy as sp
>>> sp.test()
Running unit tests for scipy
NumPy version 1.7.0b2
NumPy is installed in /opt/anaconda/lib/python2.7/site-packages/numpy
SciPy version 0.11.0
SciPy is installed in /opt/anaconda/lib/python2.7/site-packages/scipy
Python version 2.7.3 AnacondaCE (default, Sep 4 2012, 10:42:42) [GCC 4.0.1 (Apple Inc. build 5493)]
nose version 1.1.2
.....................................................................................................................................................................................F.FFFSegmentation fault: 11

I installed 0.11.0rc2 which seems to run fine for my purposes but where
scipy.test() gives

FAILED (KNOWNFAIL=13, SKIP=29, errors=11, failures=73)

I attach a zipped file of the error log, in case that's useful.

And a heartfelt thanks to all who work on this. You make my life so much
easier!

Bob Pyle
Cambridge, MA

On Sep 26, 2012, at 4:23 PM, Ralf Gommers wrote:

> Hi,
>
> I am pleased to announce the availability of SciPy 0.11.0. For this
> release many new features have been added, and over 120 tickets and pull
> requests have been closed. Also noteworthy is that the number of
> contributors for this release has risen to over 50. We hope to see this
> number continuing to increase! The highlights of this release are:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: scipy0_11_0rc2_test.rtf.zip
Type: application/zip
Size: 9129 bytes
Desc: not available
URL: 

From josef.pktd at gmail.com Thu Sep 27 11:15:48 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 27 Sep 2012 11:15:48 -0400
Subject: [SciPy-User] fmin_slsqp exit mode 8
Message-ID: 

in statsmodels we have a case where fmin_slsqp ends with mode=8
"POSITIVE DIRECTIONAL DERIVATIVE FOR LINESEARCH"

Does anyone know what it means and whether it's possible to get around it?
the fortran source file doesn't have an explanation.

Thanks,

Josef

From ralf.gommers at gmail.com Thu Sep 27 12:32:17 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 27 Sep 2012 18:32:17 +0200
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: 
Message-ID: 

On Thu, Sep 27, 2012 at 4:01 PM, Robert Pyle wrote:

> Hi,
>
> I ran into a segfault with AnacondaCE on OSX 10.8.2
>
> ...snip...
>
> nose version 1.1.2
> .....................................................................................................................................................................................F.FFFSegmentation
> fault: 11
>

Could you run the tests with "scipy.test(verbose=2)" to see where the
issue occurs?

> I installed 0.11.0rc2 which seems to run fine for my purposes but where
> scipy.test() gives
>

Did you install both rc2 and final from the dmgs on SourceForge with
macosx-10.6 in the name?

> FAILED (KNOWNFAIL=13, SKIP=29, errors=11, failures=73)
>
> I attach a zipped file of the error log, in case that's useful.
>

The test errors/failures in that log are all known. The
IndexError/ValueError ones are because of an issue in numpy 1.7.0b2, which
should be fixed before the final 1.7.0 release. The other ones are due to
issues with Apple's Accelerate Framework. So doesn't look related.

Can anyone comment on how Anaconda CE was built? I can imagine problems
could occur if it was built on OS X 10.7 or 10.8 while scipy 0.11.0 was
built on OS X 10.6.

Ralf

From kurt.bendl at nrel.gov Thu Sep 27 12:24:29 2012
From: kurt.bendl at nrel.gov (Kurt Bendl)
Date: Thu, 27 Sep 2012 16:24:29 +0000 (UTC)
Subject: [SciPy-User] Python position in Golden, CO
Message-ID: 

Hi folks,

I just found out that NREL, in Golden, Colorado, is looking for a
"Scientific Python Applications Programmer". It's a contract position
(that's how most of us start here.)

Here is a link to the job description:
http://www.redcanyonsoftware.com/jobs/index.php?id=33

If anyone is interested, please send a resume. I used to contract through
these guys, and they were pretty reasonable.
Best,
Kurt

Kurt Bendl
Computational Sciences
National Renewable Energy Laboratory
15013 Denver West Parkway
Golden, CO 80401-3305
desk: 303-275-4617

From Jeremy.Solbrig at nrlmry.navy.mil Thu Sep 27 14:20:57 2012
From: Jeremy.Solbrig at nrlmry.navy.mil (Solbrig, Mr. Jeremy)
Date: Thu, 27 Sep 2012 11:20:57 -0700
Subject: Re: [SciPy-User] Convolving an ndarray by a function
In-Reply-To: <1348719449.10458.YahooMailNeo@web113404.mail.gq1.yahoo.com>
References: <1348719449.10458.YahooMailNeo@web113404.mail.gq1.yahoo.com>
Message-ID: 

David,

Thanks, that is exactly what I was looking for. I had looked at ndimage,
but apparently I didn't look deeply enough.

Jeremy

From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org]
On Behalf Of David Baddeley
Sent: Wednesday, September 26, 2012 9:17 PM
To: SciPy Users List
Subject: Re: [SciPy-User] Convolving an ndarray by a function

Hi Jeremy,

what you are after is scipy.ndimage.generic_filter. For your particular
case (std. deviation), there is also a neat approximate solution using
just standard uniform filters, which, although not strictly accurate,
might be close enough for many applications and is considerably faster (a
quick test gives a speedup of ~ 300x).

Assuming your data array is called x:

import numpy as np
from scipy import ndimage

# calculate the mean of a 3x3 ROI round each point
xm = ndimage.uniform_filter(x, 3)

# the standard deviation is the mean of the sum of squared differences to
# the mean
# note that here we are subtracting the mean of the local neighbourhood of
# each pixel, rather than that of the central pixel
sigma = np.sqrt(ndimage.uniform_filter((x - xm)**2, 3))

cheers,
David

________________________________
From: "Solbrig, Mr. Jeremy"
To: "'scipy-user at scipy.org'"
Sent: Thursday, 27 September 2012 8:25 AM
Subject: [SciPy-User] Convolving an ndarray by a function

Hi all,

I have run into a situation where I need to calculate the standard
deviation for each NxM box within an ndarray. I am wondering if there is a
function available that would allow me to convolve (may not be the correct
word here) an ndarray by a function and return an array of the same size
as the original array. Something like this:

>>> foo = np.arange(10000).reshape([100,100])
>>> stddev_arr = foo.funcconvolve([3, 3], np.std)
>>> stddev_arr.shape == foo.shape
True

Such that each point within stddev_arr is the standard deviation of a 3x3
box around each point in foo. I'm sure I could code this in a loop, but I
expect that there is a better solution.

Thanks for your help,
Jeremy

Jeremy Solbrig
NRL Monterey
jeremy.solbrig at nrlmry.navy.mil
(831) 656-4885

From helmrp at yahoo.com Thu Sep 27 15:47:58 2012
From: helmrp at yahoo.com (Robaula)
Date: Thu, 27 Sep 2012 12:47:58 -0700
Subject: Re: [SciPy-User] SciPy-User Digest, Vol 109, Issue 66
In-Reply-To: 
References: 
Message-ID: <14B10461-3DBD-4CDF-9890-8A8B3980BD7A@yahoo.com>

Denis Laxalde writes:

...snip...

> So what's wrong? Am I missing something?

I think so. I think you're missing line 165 of the fmin_cobyla routine.

> This is what the optimize tutorial [1] is for, I guess.
> 1: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html

Inadequate and IMHO, useless. Also, doesn't mention fmin_cobyla, cobyla,
or COBYLA.

Bob H
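As a concrete illustration of the fmin_cobyla/minimize difference discussed
in this thread, here is a minimal toy sketch (the objective, constraint and
starting point are invented for the example):

    import numpy as np
    from scipy.optimize import fmin_cobyla, minimize

    # Toy problem: minimize x**2 + y**2 subject to x + y >= 1.
    f = lambda v: v[0]**2 + v[1]**2
    con = lambda v: v[0] + v[1] - 1.0   # feasible when con(v) >= 0

    x = fmin_cobyla(f, [1.0, 1.0], cons=[con], rhoend=1e-7)
    print(x)                   # only the solution array comes back

    res = minimize(f, [1.0, 1.0], method='COBYLA',
                   constraints={'type': 'ineq', 'fun': con})
    print(res.x, res.success)  # full Result object with status information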
From eric.moore2 at nih.gov Thu Sep 27 16:14:40 2012
From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F])
Date: Thu, 27 Sep 2012 16:14:40 -0400
Subject: Re: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: <14B10461-3DBD-4CDF-9890-8A8B3980BD7A@yahoo.com>
References: <14B10461-3DBD-4CDF-9890-8A8B3980BD7A@yahoo.com>
Message-ID: 

> -----Original Message-----
> From: Robaula [mailto:helmrp at yahoo.com]
> Sent: Thursday, September 27, 2012 3:48 PM
> To: scipy-user at scipy.org
> Subject: Re: [SciPy-User] SciPy-User Digest, Vol 109, Issue 66
>
> Denis Laxalde writes:
>
> ...snip...
>
> > So what's wrong? Am I missing something?
>
> I think so. I think you're missing line 165 of the fmin_cobyla routine.
>
> > This is what the optimize tutorial [1] is for, I guess.
> > 1: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
>
> Inadequate and IMHO, useless. Also, doesn't mention fmin_cobyla,
> cobyla, or COBYLA.
>
> Bob H

The various data produced by the fmin_cobyla routine are printed using a
direct call to PRINT from within the fortran routine. This is less than
optimal because if you are not running in a terminal (i.e., at python or
ipython) you won't see any of the output.

So try executing the example or changing the disp parameter while running
from the console, and they will work as expected.

I'd say this is a big gotcha that should be noted in the docs at least.
The better choice would really be to patch cobyla2.f so that this would
work even in the ipython qtconsole or wherever Bob is running his code.

If there really is something to line 165
(https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L165)
it's not obvious to me. Could you elaborate?

Eric

From jh at physics.ucf.edu Thu Sep 27 16:27:22 2012
From: jh at physics.ucf.edu (Joe Harrington)
Date: Thu, 27 Sep 2012 22:27:22 +0200
Subject: Re: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: (scipy-user-request@scipy.org)
Message-ID: 

Robaula wrote:

> However, I think there are other, and more important issues here,
> namely, "How should the new 'minimize' interfaces be documented? How
> should existing documentation for the "old" interfaces be changed in
> light of the new 'minimize' approach? How to bring the two of them more
> nearly into alignment? Are changes to the documentation standards
> advisable to facilitate this?" Sure, "minimize's" docstring is an
> excellent start, tho I do see a couple areas where a little more
> explanation and/or some examples would be helpful. Among other things,
> I think it would be a lot better to just come out and tell folks what
> options are available for each of the various "methods", rather than
> telling them the code sequence to use to view them. Certainly the
> cobyla options for 'minimize' differ noticeably from those for the
> fmin_cobyla approach. For example, using fmin_cobyla ignores disp and
> iprint and never returns 'Results', while using 'minimize' has no
> iprint option, ignores the disp option, and always returns 'Results'.
>
> I'd welcome good ideas on how best to document the new 'minimize'
> interface, and how best to coordinate that with the "old style" method
> documentation. I'm not looking for ideas that are merely the
> programmer's personal reminders--the current documentation is just fine
> for that!
> Instead, I'm looking for documentation suitable for use by folks who
> might be described as: first year grad students just slightly familiar
> with Python who have a one week deadline to solve their problem and
> desperately need _very_ clear, unambiguous, and detailed instructions
> on exactly how to proceed. Try to remember _your_ first year grad
> experiences!!

A couple of things to remember about doc writing.

First, we developed the current docstring standard mainly with numpy in
mind. NumPy's structure is a lot flatter than SciPy's, and we recognized
that some additional kinds of doc pages might (or might not) be needed for
it. The Matplotlib folks made some adjustments (with our blessing, not
that they needed it) to allow the central documentation of large sets of
parameters shared by many functions (e.g., for line thickness, color,
style, etc.), something that doesn't happen in NumPy. So, if SciPy needs
something special because of its structure that isn't covered in the
standard, please bring it up on the scipy-dev mailing list, where such
discussions occur.

Rather than being a large, flat collection of loosely related and mostly
non-interacting functions, most/all of SciPy is arranged in deep modules
with lots of underlying structure and shared concepts. It does not make
sense to document such structures only on the pages of a few functions,
nor on the page of every function. Rather, you want to push that up to the
module level, and you can augment that with topical doc pages that hang
off the module, as long as they are referenced from the module doc and
those of any functions affected. That way, the doc of each function only
contains what's special to that function, and refers to the module doc for
the overview and more info.

...and all THAT said, remember that this is REFERENCE documentation. It
should talk about how the package is set up and how to get things done in
it. It needs to give examples that distinguish cases from one another and
to assist with basic use. However, exercises, descriptions of what this
area of math is for, philosophy, and extended examples involving much more
than the minimum complexity to make a point belong in a tutorial document,
not a reference document. The reference docs are not the place to get a
math education. Actually, the tutorial docs are not the best place for
that, either, though they do sometimes play that role. Rather, a good doc
writer will identify several good, FREE, online sources, often including a
Wikipedia article (but READ the article, they're not uniformly good or
correct), as well as some WIDELY-USED textbook sections, and put those in
the references section of the docs.

--jh--
Joe Harrington

From rpyle at post.harvard.edu Thu Sep 27 16:43:51 2012
From: rpyle at post.harvard.edu (Robert Pyle)
Date: Thu, 27 Sep 2012 16:43:51 -0400
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: 
Message-ID: 

Hi Ralf,

I installed scipy from source on Anaconda, reasoning that the dmg was only
for the python.org installation. I have Apple's latest Xcode, and the
installation went through without any (apparent) errors.

I misspoke when I said I installed 0.11.0rc2; that came with Anaconda CE
from Continuum.

I have also installed 0.11.0 on python.org 2.7.3 from
scipy-0.11.0-py2.7-python.org-macosx10.6.dmg. This doesn't segfault of
course, but I do get 9 errors and 9 unknown failures. I'm attaching zipped
logs for both (verbose=2 for the Anaconda segfault, as you asked). I trust
the file names are self-explanatory.
Bob

-------------- next part --------------
A non-text attachment was scrubbed...
Name: AnacondaCE2.7.3_scipy0.11.0.rtf.zip
Type: application/zip
Size: 2588 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pythonorg2.7.3_scipy0.11.0.rtf.zip
Type: application/zip
Size: 2775 bytes
Desc: not available
URL: 

On Sep 27, 2012, at 12:32 PM, Ralf Gommers wrote:

> On Thu, Sep 27, 2012 at 4:01 PM, Robert Pyle wrote:
> Hi,
>
> I ran into a segfault with AnacondaCE on OSX 10.8.2
>
> ...snip...
>
> Could you run the tests with "scipy.test(verbose=2)" to see where the
> issue occurs?
>
> I installed 0.11.0rc2 which seems to run fine for my purposes but where
> scipy.test() gives
>
> Did you install both rc2 and final from the dmgs on SourceForge with
> macosx-10.6 in the name?
>
> FAILED (KNOWNFAIL=13, SKIP=29, errors=11, failures=73)
>
> I attach a zipped file of the error log, in case that's useful.
>
> The test errors/failures in that log are all known. The
> IndexError/ValueError ones are because of an issue in numpy 1.7.0b2,
> which should be fixed before the final 1.7.0 release. The other ones are
> due to issues with Apple's Accelerate Framework. So doesn't look related.
>
> Can anyone comment on how Anaconda CE was built? I can imagine problems
> could occur if it was built on OS X 10.7 or 10.8 while scipy 0.11.0 was
> built on OS X 10.6.
>
> Ralf
>
> And a heartfelt thanks to all who work on this. You make my life so much
> easier!
>
> Bob Pyle
> Cambridge, MA

From pav at iki.fi Thu Sep 27 16:57:42 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 27 Sep 2012 23:57:42 +0300
Subject: Re: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: 
References: <14B10461-3DBD-4CDF-9890-8A8B3980BD7A@yahoo.com>
Message-ID: 

On 27.09.2012 23:14, Moore, Eric (NIH/NIDDK) [F] wrote:
>> -----Original Message-----
>> From: Robaula [mailto:helmrp at yahoo.com]
[clip]
>> I think so. I think you're missing line 165 of the fmin_cobyla routine.
[clip]
> The various data produced by the fmin_cobyla routine are printed using
> a direct call to PRINT from within the fortran routine. This is less
> than optimal because if you are not running in a terminal (i.e., at
> python or ipython) you won't see any of the output.
> So try executing the example or changing the disp parameter
> while running from the console, and they will work as expected.
>
> I'd say this is a big gotcha that should be noted in the docs at least.
> The better choice would really be to patch cobyla2.f so that this would
> work even in the ipython qtconsole or wherever Bob is running his code.
>
> If there really is something to line 165
> (https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L165)
> it's not obvious to me. Could you elaborate?

Both iprint and disp are passed correctly down to the Fortran routine.

Fixing the issue with converting output from stdout to Python's sys.stdout
can be done by passing a Python callback function down to the fortran
code, and replacing prints with it + some Fortran string processing. There
are also a couple of other cases of this, in Qhull and some of the
integrators in scipy.integrate.

Ideally, IPython consoles et al. would redirect stdout/stderr file handles
so that also the output that comes from outside Python land would be shown
to the user. However, the correct behavior for Scipy routines would be to
print via Python's I/O.

--
Pauli Virtanen

From ralf.gommers at gmail.com Thu Sep 27 17:00:35 2012
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 27 Sep 2012 23:00:35 +0200
Subject: Re: [SciPy-User] ANN: SciPy 0.11.0 release
In-Reply-To: 
References: 
Message-ID: 

On Thu, Sep 27, 2012 at 10:43 PM, Robert Pyle wrote:

> Hi Ralf,
>
> I installed scipy from source on Anaconda, reasoning that the dmg was
> only for the python.org installation. I have Apple's latest Xcode, and
> the installation went through without any (apparent) errors.
>
> I misspoke when I said I installed 0.11.0rc2; that came with Anaconda CE
> from Continuum.
>

That's good to know, since no changes after 0.11.0rc2 could explain this
issue. Probably just following the Anaconda build method would fix your
segfault.

> I have also installed 0.11.0 on python.org 2.7.3 from
> scipy-0.11.0-py2.7-python.org-macosx10.6.dmg. This doesn't segfault of
> course, but I do get 9 errors and 9 unknown failures. I'm attaching
> zipped logs for both (verbose=2 for the Anaconda segfault, as you
> asked). I trust the file names are self-explanatory.
>

The errors are due to the beta version of numpy being shipped by Anaconda,
scipy.sparse.csgraph should work as advertised. With numpy 1.7.0 (once
released) those errors should be gone. The single precision failures are
due to Apple's Accelerate Framework, only known solution is to not use it
and compile against ATLAS instead.

Ralf

> On Sep 27, 2012, at 12:32 PM, Ralf Gommers wrote:
>
> > On Thu, Sep 27, 2012 at 4:01 PM, Robert Pyle wrote:
> > Hi,
> >
> > I ran into a segfault with AnacondaCE on OSX 10.8.2
> >
> > ...snip...
> >
> > Could you run the tests with "scipy.test(verbose=2)" to see where the
> > issue occurs?
> > I installed 0.11.0rc2 which seems to run fine for my purposes but
> > where scipy.test() gives
> >
> > Did you install both rc2 and final from the dmgs on SourceForge with
> > macosx-10.6 in the name?
> >
> > FAILED (KNOWNFAIL=13, SKIP=29, errors=11, failures=73)
> >
> > I attach a zipped file of the error log, in case that's useful.
> >
> > The test errors/failures in that log are all known. The
> > IndexError/ValueError ones are because of an issue in numpy 1.7.0b2,
> > which should be fixed before the final 1.7.0 release. The other ones
> > are due to issues with Apple's Accelerate Framework. So doesn't look
> > related.
> >
> > Can anyone comment on how Anaconda CE was built? I can imagine
> > problems could occur if it was built on OS X 10.7 or 10.8 while scipy
> > 0.11.0 was built on OS X 10.6.
> >
> > Ralf
> >
> > And a heartfelt thanks to all who work on this. You make my life so
> > much easier!
> >
> > Bob Pyle
> > Cambridge, MA

From njs at pobox.com Thu Sep 27 17:58:10 2012
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 27 Sep 2012 22:58:10 +0100
Subject: Re: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: 
References: <14B10461-3DBD-4CDF-9890-8A8B3980BD7A@yahoo.com>
Message-ID: 

On Thu, Sep 27, 2012 at 9:57 PM, Pauli Virtanen wrote:
> Ideally, IPython consoles et al. would redirect stdout/stderr file
> handles so that also the output that comes from outside Python land
> would be shown to the user.

This is possible: redirect the underlying file descriptors to local
pipes, then spawn threads to read those pipes and call
sys.std{out,err}.write:
http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/redirecting-standard-io.html

Maybe someone should file a bug on IPython...

-n

From helmrp at yahoo.com Thu Sep 27 18:18:43 2012
From: helmrp at yahoo.com (Robaula)
Date: Thu, 27 Sep 2012 15:18:43 -0700
Subject: Re: [SciPy-User] SciPy-User Digest, Vol 109, Issue 68
In-Reply-To: 
References: 
Message-ID: 

Bob & Paula H

On Sep 27, 2012, at 1:39 PM, scipy-user-request at scipy.org wrote:

> Message: 4
> Date: Thu, 27 Sep 2012 16:14:40 -0400
> From: "Moore, Eric (NIH/NIDDK) [F]"
> Subject: Re: [SciPy-User] Flaws in fmin_cobyla
> To: "scipy-user at scipy.org"
> Message-ID:
> Content-Type: text/plain; charset="us-ascii"

.......snip.......

> The various data produced by the fmin_cobyla routine are printed using
> a direct call to PRINT from within the fortran routine. This is less
> than optimal because if you are not running in a terminal (i.e., at
> python or ipython) you won't see any of the output.
>
> So try executing the example or changing the disp parameter while
> running from the console, and they will work as expected.
>
> I'd say this is a big gotcha that should be noted in the docs at least.
> The better choice would really be to patch cobyla2.f so that this would
> work even in the ipython qtconsole or wherever Bob is running his code.
>
> If there really is something to line 165
> (https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L165)
> it's not obvious to me. Could you elaborate?
>
> Eric

Thanks! This explains a lot of this "now you see it, now you don't"
business.
FYI, I haven't even seen a terminal in the last 25-30 years. I'm retired
with no access to any commercial or academic support, and running on a
Gateway desktop under Windows 7 and the latest Python 2.7.x and
NumPy/SciPy versions. So I'd love to have a sentence or two from you
noting the difference in returns depending on whether console is being
used or not. I'd like to use them in my prospective revision of
fmin_cobyla's docstring.

Wonder how many other SciPy wrapped-Fortran programs have the same
behavior?

Well, I think it's line number 165. Whatever the right number, it's the
return command from "def fmin_cobyla". That line says to return from
fmin_cobyla the Results stuff it got from fmin_cobyla's call to
_minimize_cobyla. But -- when you find it, pay close attention to its last
five characters, which are ['x']. Those five characters pick out just the
part of the Results dictionary that corresponds to the key 'x', i.e., the
final value of the argument of the objective function. One "fix" would be
to modify the return statement to return just the argument, and to return
all of the Results stuff otherwise. OTOH, Ralf Gommers might properly
object on the grounds of its effect on existing code.

As far as "gotcha", well, it certainly got me! Again, thanks for your
astute observations!

From evilper at gmail.com Fri Sep 28 04:51:58 2012
From: evilper at gmail.com (Per Nielsen)
Date: Fri, 28 Sep 2012 10:51:58 +0200
Subject: [SciPy-User] Problems with scipy.integrate.quad and sinusoidal weights
Message-ID: 

Hi all,

I am getting some strange (wrong or highly inaccurate) results when I try
to use scipy.integrate.quad to integrate highly oscillatory functions.
Please consider the following code:

In [40]: from scipy.integrate import quad

In [41]: from math import exp, sin, cos

In [42]: fsin = lambda x: exp(-x) * sin(100*x)

In [43]: quad(fsin, 0., 10., args=(), weight='sin', wvar=100.)
Out[43]: (0.4974066723952844, 0.0005303917238325333)

In [44]: quad(fsin, 0., 10., args=())
Out[44]: (0.0099990111781435, 0.0006814046027542903)

where the last line, without the weight function gives the correct result.

I have looked at the documentation for quad at:

https://github.com/scipy/scipy/blob/master/scipy/integrate/quadpack.py#L134

and to me this should be the way to integrate fsin. Am I misunderstanding
the arguments or what's going on? :)

Cheers,
Per

From josef.pktd at gmail.com Fri Sep 28 08:13:07 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 28 Sep 2012 08:13:07 -0400
Subject: Re: [SciPy-User] Problems with scipy.integrate.quad and sinusoidal weights
In-Reply-To: 
References: 
Message-ID: 

On Fri, Sep 28, 2012 at 4:51 AM, Per Nielsen wrote:
> Hi all,
>
> I am getting some strange (wrong or highly inaccurate) results when I
> try to use scipy.integrate.quad to integrate highly oscillatory
> functions. Please consider the following code:
>
> ...snip...
>
> where the last line, without the weight function gives the correct
> result.
> I have looked at the documentation for quad at:
>
> https://github.com/scipy/scipy/blob/master/scipy/integrate/quadpack.py#L134
>
> and to me this should be the way to integrate fsin. Am I misunderstanding
> the arguments or what's going on? :)

the weight function is multiplied into the integrand automatically

>>> integrate.quad(lambda x: np.exp(-x)*np.sin(100*x), 0., 10., args=())
Warning: The maximum number of subdivisions (50) has been achieved.
  If increasing the limit yields no improvement it is advised to analyze
  the integrand in order to determine the difficulties. If the position
  of a local difficulty can be determined (singularity, discontinuity)
  one will probably gain from splitting up the interval and calling the
  integrator on the subranges. Perhaps a special-purpose integrator
  should be used.
(0.0099990111781434986, 0.00068140460275433325)

>>> integrate.quad(lambda x: np.exp(-x), 0., 10., args=(), weight='sin', wvar=100.)
(0.0099987410521618428, 7.0983734843630013e-10)

(I never used wvar before, just guessing.)

Josef

> Cheers,
> Per

From eric.moore2 at nih.gov Fri Sep 28 09:00:42 2012
From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F])
Date: Fri, 28 Sep 2012 09:00:42 -0400
Subject: Re: [SciPy-User] SciPy-User Digest, Vol 109, Issue 68
In-Reply-To: 
References: 
Message-ID: 

> -----Original Message-----
> From: Robaula [mailto:helmrp at yahoo.com]
> Sent: Thursday, September 27, 2012 6:19 PM
> To: scipy-user at scipy.org
> Subject: Re: [SciPy-User] SciPy-User Digest, Vol 109, Issue 68
>
> Bob & Paula H
>
> On Sep 27, 2012, at 1:39 PM, scipy-user-request at scipy.org wrote:
>
> ...snip...
>
> Thanks! This explains a lot of this "now you see it, now you don't"
> business.
>
> ...snip...
>
> Well, I think it's line number 165.
> Whatever the right number, it's the return command from "def
> fmin_cobyla". That line says to return from fmin_cobyla the Results
> stuff it got from fmin_cobyla's call to _minimize_cobyla. But -- when
> you find it, pay close attention to its last five characters, which are
> ['x']. Those five characters pick out just the part of the Results
> dictionary that corresponds to the key 'x', i.e., the final value of
> the argument of the objective function. One "fix" would be to modify
> the return statement to return just the argument, and to return all of
> the Results stuff otherwise. OTOH, Ralf Gommers might properly object
> on the grounds of its effect on existing code.
>
> As far as "gotcha", well, it certainly got me! Again, thanks for your
> astute observations!

I suppose I should have been clearer. I don't mean a hardware terminal. I
mean a terminal emulator running on your computer. So for Windows 7, run
cmd.exe, and execute python.exe, which if it isn't in your path, will
likely be c:\Python27\python.exe.

My tests were done with scipy 0.9 on Windows 7, but there is no reason to
think that this isn't still true with newer versions of scipy.

Eric

From takowl at gmail.com Fri Sep 28 09:06:52 2012
From: takowl at gmail.com (Thomas Kluyver)
Date: Fri, 28 Sep 2012 14:06:52 +0100
Subject: Re: [SciPy-User] Flaws in fmin_cobyla
In-Reply-To: 
References: <14B10461-3DBD-4CDF-9890-8A8B3980BD7A@yahoo.com>
Message-ID: 

On 27 September 2012 22:58, Nathaniel Smith wrote:
> This is possible: redirect the underlying file descriptors to local
> pipes, then spawn threads to read those pipes and call
> sys.std{out,err}.write:
> http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/redirecting-standard-io.html
>
> Maybe someone should file a bug on IPython...

We already have one open: ;-)
https://github.com/ipython/ipython/issues/1230

Thanks for the link, I'll add it to that issue.

Thomas

From evilper at gmail.com Fri Sep 28 10:02:36 2012
From: evilper at gmail.com (Per Nielsen)
Date: Fri, 28 Sep 2012 16:02:36 +0200
Subject: Re: [SciPy-User] Problems with scipy.integrate.quad and sinusoidal weights
In-Reply-To: 
References: 
Message-ID: 

> the weight function is multiplied into the integrand automatically
>
> ...snip...
>
> (I never used wvar before, just guessing.)
>
> Josef

Ahh, you are completely right. I was thinking that the weight function
meant what basis function it used in the numerical integration, but of
course it is simply multiplied into the integrand.

Thanks for making that clear to me :)

Per
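The integral in this thread also has an elementary closed form, so the
weighted quad call can be cross-checked directly; a minimal sketch:

    import numpy as np
    from scipy import integrate

    a = 100.0
    # Antiderivative of exp(-x)*sin(a*x) is
    # -exp(-x)*(sin(a*x) + a*cos(a*x)) / (1 + a*a).
    F = lambda x: -np.exp(-x) * (np.sin(a * x) + a * np.cos(a * x)) / (1.0 + a * a)
    exact = F(10.0) - F(0.0)

    # With weight='sin', quad multiplies sin(wvar*x) in itself,
    # so pass only the exp(-x) factor.
    num, err = integrate.quad(lambda x: np.exp(-x), 0.0, 10.0,
                              weight='sin', wvar=a)
    print(exact, num)  # ~0.0099990 and ~0.0099987, in close agreement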
From nils106 at googlemail.com Thu Sep 27 15:00:59 2012
From: nils106 at googlemail.com (Nils Wagner)
Date: Thu, 27 Sep 2012 20:00:59 +0100
Subject: [SciPy-User] Several test failures
Message-ID: 

>>> numpy.__version__
'1.8.0.dev-1ea1592'
>>> scipy.__version__
'0.12.0.dev-fd68897'

======================================================================
ERROR: test_interpolate.TestInterp1D.test_bounds
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 224, in test_bounds
    self._bounds_check(kind)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 191, in _bounds_check
    extrap10(11.2),
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 396, in __call__
    y_new = self._call(x_new)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 372, in _call_spline
    result = spleval(self._spline,x_new.ravel())
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 835, in spleval
    res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv)
IndexError: too many indices

======================================================================
ERROR: test_interpolate.TestInterp1D.test_complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 316, in test_complex
    self._check_complex(np.complex64, kind)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 304, in _check_complex
    assert_array_almost_equal(y[:-1], c(x)[:-1])
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 396, in __call__
    y_new = self._call(x_new)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 372, in _call_spline
    result = spleval(self._spline,x_new.ravel())
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 832, in spleval
    res[sl].real = _fitpack._bspleval(xx,xj,cvals.real[sl],k,deriv)
IndexError: too many indices

======================================================================
ERROR: Check the actual implementation of spline interpolation.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 144, in test_cubic
    interp10(self.x10),
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 396, in __call__
    y_new = self._call(x_new)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 372, in _call_spline
    result = spleval(self._spline,x_new.ravel())
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 835, in spleval
    res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv)
IndexError: too many indices

======================================================================
ERROR: test_interpolate.TestInterp1D.test_nd
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 294, in test_nd
    self._nd_check_interp(kind)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_interpolate.py", line 234, in _nd_check_interp
    interp10(np.array([[3., 5.], [2., 7.]])),
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 396, in __call__
    y_new = self._call(x_new)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 372, in _call_spline
    result = spleval(self._spline,x_new.ravel())
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 835, in spleval
    res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv)
IndexError: too many indices

======================================================================
ERROR: test_ndgriddata.TestGriddata.test_1d
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_ndgriddata.py", line 73, in test_1d
    assert_allclose(griddata(x, y, x, method=method), y,
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/ndgriddata.py", line 178, in griddata
    return ip(xi)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 396, in __call__
    y_new = self._call(x_new)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 372, in _call_spline
    result = spleval(self._spline,x_new.ravel())
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 835, in spleval
    res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv)
IndexError: too many indices

======================================================================
ERROR: test_ndgriddata.TestGriddata.test_1d_unsorted
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/tests/test_ndgriddata.py", line 85, in test_1d_unsorted
    assert_allclose(griddata(x, y, x, method=method), y,
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/ndgriddata.py", line 178, in griddata
    return ip(xi)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 396, in __call__
    y_new = self._call(x_new)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 372, in _call_spline
    result = spleval(self._spline,x_new.ravel())
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/interpolate/interpolate.py", line 835, in spleval
    res[sl] = _fitpack._bspleval(xx,xj,cvals[sl],k,deriv)
IndexError: too many indices

======================================================================
ERROR: test_kdtree.test_vectorization.test_vectorized_query
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 159, in test_vectorized_query
    d, i = self.kdtree.query(np.zeros((2,4,3)))
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 434, in query
    for c in np.ndindex(retshape):
  File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__
    x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape))
  File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided
    return np.asarray(DummyArray(interface, base=x))
  File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: negative dimensions are not allowed

======================================================================
ERROR: test_kdtree.test_vectorization.test_vectorized_query_all_neighbors
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 187, in test_vectorized_query_all_neighbors
    d, i = self.kdtree.query(np.zeros((2,4,3)),k=None,distance_upper_bound=1.1)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 434, in query
    for c in np.ndindex(retshape):
  File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__
    x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape))
  File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided
    return np.asarray(DummyArray(interface, base=x))
  File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: negative dimensions are not allowed

======================================================================
ERROR: test_kdtree.test_vectorization.test_vectorized_query_multiple_neighbors
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 175, in
test_vectorized_query_multiple_neighbors d, i = self.kdtree.query(np.zeros((2,4,3)),k=kk) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 434, in query for c in np.ndindex(retshape): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ====================================================================== ERROR: test_kdtree.test_random_ball_vectorized ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 332, in test_random_ball_vectorized r = T.query_ball_point(np.random.randn(2,3,m),1) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/kdtree.py", line 544, in query_ball_point for c in np.ndindex(retshape): File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ====================================================================== ERROR: test_kdtree.test_random_ball_vectorized_compiled ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.7/site-packages/scipy/spatial/tests/test_kdtree.py", line 342, in test_random_ball_vectorized_compiled r = T.query_ball_point(np.random.randn(2,3,m),1) File "ckdtree.pyx", line 1399, in scipy.spatial.ckdtree.cKDTree.query_ball_point (scipy/spatial/ckdtree.c:11875) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/index_tricks.py", line 536, in __init__ x = as_strided(_nx.zeros(1), shape=shape, strides=_nx.zeros_like(shape)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/lib/stride_tricks.py", line 28, in as_strided return np.asarray(DummyArray(interface, base=x)) File "/home/nwagner/local/lib64/python2.7/site-packages/numpy/core/numeric.py", line 324, in asarray return array(a, dtype, copy=False, order=order) ValueError: negative dimensions are not allowed ---------------------------------------------------------------------- Ran 5537 tests in 166.811s FAILED (KNOWNFAIL=14, SKIP=28, errors=11) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cgohlke at uci.edu Fri Sep 28 12:39:30 2012 From: cgohlke at uci.edu (Christoph Gohlke) Date: Fri, 28 Sep 2012 09:39:30 -0700 Subject: [SciPy-User] Several test failures In-Reply-To: References: Message-ID: <5065D2C2.8050208@uci.edu> On 9/27/2012 12:00 PM, Nils Wagner wrote: >>>> numpy.__version__ > '1.8.0.dev-1ea1592' >>>> scipy.__version__ > '0.12.0.dev-fd68897' > [... full test log quoted in the original; snipped here, identical to the message above ...]
> Ran 5537 tests in 166.811s > > FAILED (KNOWNFAIL=14, SKIP=28, errors=11) Those are known errors, related to https://github.com/numpy/numpy/pull/445 Christoph From gilles.rochefort at gmail.com Fri Sep 28 13:04:11 2012 From: gilles.rochefort at gmail.com (Gilles Rochefort) Date: Fri, 28 Sep 2012 19:04:11 +0200 Subject: [SciPy-User] fmin_slsqp exit mode 8 In-Reply-To: References: Message-ID: <5065D88B.9090200@gmail.com> Could you provide an example that produces such a mode? Regards, Gilles. > in statsmodels we have a case where fmin_slsqp ends with mode=8 > "POSITIVE DIRECTIONAL DERIVATIVE FOR LINESEARCH" > > Does anyone know what it means and whether it's possible to get around it? > > the fortran source file doesn't have an explanation. > > Thanks, > > Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Fri Sep 28 13:41:10 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 28 Sep 2012 13:41:10 -0400 Subject: [SciPy-User] fmin_slsqp exit mode 8 In-Reply-To: <5065D88B.9090200@gmail.com> References: <5065D88B.9090200@gmail.com> Message-ID: On Fri, Sep 28, 2012 at 1:04 PM, Gilles Rochefort wrote: > Could you provide an example that produces such a mode? It will not be easy or possible to get the example standalone. The example is an L1-penalized maximum likelihood estimation for a Poisson regression https://github.com/statsmodels/statsmodels/pull/465/files#L2R94 the slsqp part is here https://github.com/statsmodels/statsmodels/pull/465/files#diff-7 The full code for this is spread over 3 classes (using inheritance) (or 5 counting all class inheritance levels). fmin_slsqp works pretty well for Logit, Probit and Multinomial Logit, but for Poisson with a large regularization parameter, we get exit code 8. The optimized values also look reasonable in that case. (The pull request will be merged within a few days.) Josef > > Regards, > Gilles. > >> in statsmodels we have a case where fmin_slsqp ends with mode=8 >> "POSITIVE DIRECTIONAL DERIVATIVE FOR LINESEARCH" >> >> Does anyone know what it means and whether it's possible to get around it? >> >> the fortran source file doesn't have an explanation. >> >> Thanks, >> >> Josef >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Fri Sep 28 14:09:27 2012 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 28 Sep 2012 21:09:27 +0300 Subject: [SciPy-User] fmin_slsqp exit mode 8 In-Reply-To: References: Message-ID: 27.09.2012 18:15, josef.pktd at gmail.com wrote: > in statsmodels we have a case where fmin_slsqp ends with mode=8 > "POSITIVE DIRECTIONAL DERIVATIVE FOR LINESEARCH" > > Does anyone know what it means and whether it's possible to get around it? > > the fortran source file doesn't have an explanation. Guessing without wading through the F77 goto spaghetti: it could mean that the optimizer has wound up with a search direction in which the function increases (or doesn't decrease fast enough). If it's a termination condition, it probably also means that the optimizer is not able to recover from this.
Some googling seems to indicate that this depends on the scaling of the problem, so it may also be some sort of a precision issue (or an issue with wrong tolerances): http://www.mail-archive.com/nlopt-discuss at ab-initio.mit.edu/msg00208.html -- Pauli Virtanen From pwang at streamitive.com Fri Sep 28 16:31:13 2012 From: pwang at streamitive.com (Peter Wang) Date: Fri, 28 Sep 2012 15:31:13 -0500 Subject: [SciPy-User] PyData NYC 2012 Speakers and Talks announced! Message-ID: Hi everyone, The PyData NYC team and Continuum Analytics are proud to announce the full lineup of talks and speakers for the PyData NYC 2012 event! We're thrilled with the exciting lineup of workshops, hands-on tutorials, and talks about real-world uses of Python for data analysis. http://nyc2012.pydata.org/schedule The list of presenters and talk abstracts is also available and is linked from the schedule page. For those who will be in town on Thursday evening of October 25th, there will be a special PyData edition of Drinks & Data at Dewey's Flatiron. It'll be a great chance to socialize and meet with PyData presenters and other attendees. Register here: http://drinks-and-data-pydata-conf-ny.eventbrite.com/ We're also proud to be part of the NYC DataWeek: http://oreilly.com/dataweek/?cmp=tw-strata-ev-dr. The week of October 22nd is going to be a great time to be in New York! Lastly, we are still looking for sponsors! If you want to get your company recognition in front of a few hundred Python data hackers and hardcore developers, PyData will be a premier venue to showcase your products or recruit exceptional talent. Please visit http://nyc2012.pydata.org/sponsors/becoming/ to inquire about sponsorship. In addition to the conference sponsorship, charter sponsorships for dinner Friday night, as well as the Sunday Hack-a-thon event, are all open. Please help us promote the conference! Tell your friends, email your meetup groups, and follow @PyDataConf on Twitter. Early registration ends in just a few weeks, so register today! http://pydata.eventbrite.com/ See you there! -Peter Wang Organizer, PyData NYC 2012 From josef.pktd at gmail.com Fri Sep 28 19:24:18 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 28 Sep 2012 19:24:18 -0400 Subject: [SciPy-User] fmin_slsqp exit mode 8 In-Reply-To: References: Message-ID: On Fri, Sep 28, 2012 at 2:09 PM, Pauli Virtanen wrote: > 27.09.2012 18:15, josef.pktd at gmail.com wrote: >> in statsmodels we have a case where fmin_slsqp ends with mode=8 >> "POSITIVE DIRECTIONAL DERIVATIVE FOR LINESEARCH" >> >> Does anyone know what it means and whether it's possible to get around it? >> >> the fortran source file doesn't have an explanation. > > Guessing without wading through the F77 goto spaghetti: it could mean > that the optimizer has wound up with a search direction in which the > function increases (or doesn't decrease fast enough). If it's a > termination condition, it probably also means that the optimizer is not > able to recover from this. I had tried some randomization as new starting values, but in this example this didn't help.
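For reference, the failure can be checked programmatically by asking fmin_slsqp for full output; a sketch, where the badly scaled quadratic is only a made-up stand-in for our penalized loglikelihood:

import numpy as np
from scipy.optimize import fmin_slsqp

# made-up, badly scaled objective standing in for the penalized loglikelihood
func = lambda params: 1e7 * np.sum((params - 1.0) ** 2)

out, fx, its, imode, smode = fmin_slsqp(func, np.zeros(3), acc=1e-10,
                                        full_output=True)
print(imode, smode)  # a nonzero imode (e.g. 8) signals a failed line search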
> > Some googling seems to indicate that this depends on the scaling of the > problem, so it may also be some sort of a precision issue (or an issue > with wrong tolerances): > > http://www.mail-archive.com/nlopt-discuss at ab-initio.mit.edu/msg00208.html scaling might be a problem in this example. The hessian, second derivative of the unpenalized likelihood function: >>> np.linalg.eigvals(poisson_l1_res._results.model.hessian(poisson_l1_res.params)) array([-16078553.93225711, -1374997.42454279, -299647.67457668, -138719.26843099, -15800.99493306, -1091.16078941, -10258.71018359, -3800.22940286, -7530.7029302 , -6540.09128479]) Maybe it's just a bad example to use for L1 penalization. ---- I tried to scale down the objective function and gradient, and it works np.linalg.eigvals(poisson_l1_res._results.model.hessian(poisson_l1_res.params)) array([-588.82869149, -64.89601886, -13.81251974, -6.90900488, -0.74415772, -0.48190709, -0.03863475, -0.34855895, -0.28063095, -0.16671642]) I can impose a high penalization factor and still get a successful mode=0 convergence. I'm not sure the convergence has actually improved in relative terms. (Now I just have to figure out if we want to consistently change the scaling of the loglikelihood, or just hack it into L1 optimization.) Thanks for the hint, Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From helmrp at yahoo.com Fri Sep 28 23:20:43 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Fri, 28 Sep 2012 20:20:43 -0700 (PDT) Subject: [SciPy-User] COBYLA Message-ID: <1348888843.27463.YahooMailNeo@web31805.mail.mud.yahoo.com> Pauli Virtanen writes: >>Date: Thu, 27 Sep 2012 23:57:42 +0300 >>From: Pauli Virtanen >>Subject: Re: [SciPy-User] Flaws in fmin_cobyla (SNIP) >>Ideally, Ipython consoles et al. would redirect stdout/stderr file >>handles so that also the output that comes outside from Python land >>would be shown to the user. However, the correct behavior for Scipy >>routines would be to print via Python's I/O. That's OK for "one-shot" optimizations. Not so good for real-world "multi-shot" optimizations -- unless I'm missing something here. By multi-shot optimizations I mean something like the following: An engine designer has an 18-dimensional design variable, with 24 constraints and 60 args, and wants to run 70 to 100 cases by varying the args. She writes a Python program that reads the args from a file of cases, prepares the inputs to cobyla, turns the cobyla crank and saves its outputs to a file for further analysis. She wants to save all of the Results information from each run to this output file. That's not convenient to do if the Results information is not actually _returned_ by COBYLA. Bob and Paula H "Many of life's failures are people who did not realize how close they were to success when they gave up." (Thomas Edison) From pav at iki.fi Sat Sep 29 07:09:03 2012 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 29 Sep 2012 14:09:03 +0300 Subject: [SciPy-User] COBYLA In-Reply-To: <1348888843.27463.YahooMailNeo@web31805.mail.mud.yahoo.com> References: <1348888843.27463.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: 29.09.2012 06:20, The Helmbolds wrote: [clip] > An engine designer has an 18-dimensional design variable, with > 24 constraints and 60 args, and wants to run 70 to 100 cases > by varying the args.
> She writes a Python program that reads > the args from a file of cases, prepares the inputs to cobyla, > turns the cobyla crank and saves its outputs to a file for > further analysis. She wants to save all of the Results information from each run to this output file. > That's not convenient to do if the Results information is not actually _returned_ by COBYLA. Good point. This is a good reason why library routines should never print anything. Luckily, it is not more difficult to implement than performing I/O via Python. -- Pauli Virtanen From ralf.gommers at gmail.com Sat Sep 29 09:53:43 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 29 Sep 2012 15:53:43 +0200 Subject: [SciPy-User] "from scipy import signal" problem In-Reply-To: <5061268B.9090806@gmail.com> References: <5061268B.9090806@gmail.com> Message-ID: On Tue, Sep 25, 2012 at 5:35 AM, Brickle Macho < michael at yanchepmartialarts.com.au> wrote: > I am getting an error, see below, when I try: > > from scipy import signal > > Scipy was installed via MacPorts. Not sure where to start to fix this? > Do I need to rebuild scipy? If so, do I uninstall from Macports (and > associated dependencies) or try > > Any help appreciated. > This seems to be a Macports bug: https://trac.macports.org/ticket/35141. Rebuilding with gcc-4.4 seems to solve the issue. I don't know what the recommended way is for uninstalling things in Macports. In general you're probably better off using a complete distribution like EPD instead of Macports I think. Ralf > > Brickle. > > > ---- > $ python test.py > Traceback (most recent call last): > File "test.py", line 3, in > from scipy import signal > File > "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/__init__.py", > line 198, in > from spline import * > ImportError: > dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so, > 2): Symbol not found: ___ieee_divdc3 > Referenced from: > /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so > Expected in: flat namespace > in > /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From indiajoe at gmail.com Fri Sep 28 14:45:21 2012 From: indiajoe at gmail.com (Joe Philip Ninan) Date: Sat, 29 Sep 2012 00:15:21 +0530 Subject: [SciPy-User] Fitting Gaussian in spectra Message-ID: Hi, I have a spectrum with multiple Gaussian emission lines over a noisy continuum. My primary objective is to find the areas under all the Gaussian peaks. For that, the following is the algorithm I have in mind. 1) fit the continuum and subtract it. 2) find the peaks 3) do a least-squares fit of a Gaussian at each peak to find the area under each Gaussian peak. I am basically stuck at the first step itself. A simple 2nd or 3rd order polynomial fit is not working because the contribution from the peaks is significant. Does any tool exist to fit the continuum while ignoring the peaks? For finding peaks, I tried find_peaks_cwt in the signal module of scipy. But it seems to be quite sensitive to the peak width and was picking up non-existent peaks also. The wavelet used was the default Mexican hat. Is there any better wavelet I should try?
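For reference, the call I am using looks roughly like this (a sketch; the toy spectrum below only stands in for my data):

import numpy as np
from scipy import signal

# toy spectrum: two gaussian peaks on a sloped continuum plus noise
x = np.linspace(0.0, 100.0, 1000)
flux = (0.01 * x + np.exp(-(x - 30.0) ** 2 / 2.0)
        + np.exp(-(x - 70.0) ** 2 / 8.0)
        + 0.05 * np.random.randn(x.size))

# 'widths' is the range of peak widths (in samples) to test; the result
# seems quite sensitive to this range
peak_indices = signal.find_peaks_cwt(flux, widths=np.arange(5, 40))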
Or is there any other module in python/scipy which I should give a try? Thanking you. -cheers joe -- /--------------------------------------------------------------- "GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Sep 29 07:05:27 2012 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 29 Sep 2012 14:05:27 +0300 Subject: [SciPy-User] fmin_slsqp exit mode 8 In-Reply-To: References: Message-ID: <5066D5F7.7030103@iki.fi> 29.09.2012 02:24, josef.pktd at gmail.com wrote: [clip] > I tried to scale down the objective function and gradient, and it works > > np.linalg.eigvals(poisson_l1_res._results.model.hessian(poisson_l1_res.params)) > array([-588.82869149, -64.89601886, -13.81251974, -6.90900488, > -0.74415772, -0.48190709, -0.03863475, -0.34855895, > -0.28063095, -0.16671642]) > > I can impose a high penalization factor and still get a successful > mode=0 convergence. > I'm not sure the convergence has actually improved in relative terms. > > (Now I just have to figure out if we want to consistently change the > scaling of the loglikelihood, or just hack it into L1 optimization.) Ideally, the SLSQP algorithm itself would be scale invariant, but apparently something inside the code assumes that the function values (and maybe gradients) are "of the order of one". -- Pauli Virtanen From takowl at gmail.com Sat Sep 29 11:16:44 2012 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 29 Sep 2012 16:16:44 +0100 Subject: [SciPy-User] [SciPy-user] Pylab - standard packages In-Reply-To: References: <76b6b0e2f78755096dd3545e87ced475.squirrel@srv2.s4y.tournesol-consulting.eu> <01D91AC9-ACAA-4D5F-BB9C-B3BC179D39E0@continuum.io> <34482439.post@talk.nabble.com> Message-ID: Following discussion on the Numfocus list, it looks likely that we *won't* be using the name Pylab after all - the Matlab association is felt to be too strong. Scipy is once again the preferred candidate. Thomas From josef.pktd at gmail.com Sat Sep 29 11:31:18 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 29 Sep 2012 11:31:18 -0400 Subject: [SciPy-User] fmin_slsqp exit mode 8 In-Reply-To: <5066D5F7.7030103@iki.fi> References: <5066D5F7.7030103@iki.fi> Message-ID: On Sat, Sep 29, 2012 at 7:05 AM, Pauli Virtanen wrote: > 29.09.2012 02:24, josef.pktd at gmail.com wrote: > [clip] >> I tried to scale down the objective function and gradient, and it works >> >> np.linalg.eigvals(poisson_l1_res._results.model.hessian(poisson_l1_res.params)) >> array([-588.82869149, -64.89601886, -13.81251974, -6.90900488, >> -0.74415772, -0.48190709, -0.03863475, -0.34855895, >> -0.28063095, -0.16671642]) >> >> I can impose a high penalization factor and still get a successful >> mode=0 convergence. >> I'm not sure the convergence has actually improved in relative terms. >> >> (Now I just have to figure out if we want to consistently change the >> scaling of the loglikelihood, or just hack it into L1 optimization.) > > Ideally, the SLSQP algorithm itself would be scale invariant, but > apparently something inside the code assumes that the function values > (and maybe gradients) are "of the order of one". That sounds like the right explanation. I was also surprised that it only has one precision parameter, acc, though I didn't figure out where it is used (maybe everywhere), but we needed to make it smaller than the default.
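The workaround is then just to wrap the objective and gradient, roughly like the following sketch (loglike and score are made-up stand-ins for the real statsmodels functions):

import numpy as np
from scipy.optimize import fmin_slsqp

# made-up stand-ins for the real, badly scaled loglikelihood and its gradient
loglike = lambda params: 1e7 * np.sum((params - 1.0) ** 2)
score = lambda params: 1e7 * 2.0 * (params - 1.0)

scale = 1e-7  # bring the function values down to the order of one
res = fmin_slsqp(lambda p: scale * loglike(p), np.zeros(3),
                 fprime=lambda p: scale * score(p),  # gradient scaled consistently
                 acc=1e-10, full_output=True)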
Josef > > -- > Pauli Virtanen > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From denis at laxalde.org Sun Sep 30 03:27:20 2012 From: denis at laxalde.org (Denis Laxalde) Date: Sun, 30 Sep 2012 09:27:20 +0200 Subject: [SciPy-User] COBYLA In-Reply-To: <1348888843.27463.YahooMailNeo@web31805.mail.mud.yahoo.com> References: <1348888843.27463.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: <5067F458.9000103@laxalde.org> The Helmbolds wrote: > An engine designer has an 18-dimensional design variable, with 24 > constraints and 60 args, and wants to run 70 to 100 cases by varying > the args. She writes a Python program that reads the args from a file > of cases, prepares the inputs to cobyla, turns the cobyla crank and > saves its outputs to a file for further analysis. She wants to save > all of the Results information from each run to this output file. > That's not convenient to do if the Results information is > not actually _returned_ by COBYLA. If by "Results" you mean the Result object that we discussed previously, you would get it using minimize(..., method='cobyla', ...) instead of fmin_cobyla, which is unlikely to change in this respect. -- Denis Laxalde From Jerome.Kieffer at esrf.fr Sun Sep 30 04:54:05 2012 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Sun, 30 Sep 2012 10:54:05 +0200 Subject: [SciPy-User] Fitting Gaussian in spectra In-Reply-To: References: Message-ID: <20120930105405.9fb85e88.Jerome.Kieffer@esrf.fr> On Sat, 29 Sep 2012 00:15:21 +0530 Joe Philip Ninan wrote: > 1) fit the continuum and subtract it. > Or is there any other module in python/scipy which I should give a try? > Thanking you. Iteratively apply a Savitzky-Golay filter with a large width (>10) and a low order (2). At the beginning you will only smear out the noise, then you start removing peaks. SG filters are really fast to apply. -- Jérôme Kieffer Data analysis unit - ESRF From ckkart at hoc.net Sun Sep 30 08:13:27 2012 From: ckkart at hoc.net (Christian K.) Date: Sun, 30 Sep 2012 09:13:27 -0300 Subject: [SciPy-User] Fitting Gaussian in spectra In-Reply-To: References: Message-ID: > I have a spectrum with multiple Gaussian emission lines over a noisy > continuum. > My primary objective is to find the areas under all the Gaussian peaks. > For that, the following is the algorithm I have in mind. > 1) fit the continuum and subtract it. > 2) find the peaks > 3) do a least-squares fit of a Gaussian at each peak to find the area under > each Gaussian peak. > I am basically stuck at the first step itself. A simple 2nd or 3rd order > polynomial fit is not working because the contribution from the peaks is > significant. Does any tool exist to fit the continuum while ignoring the peaks? Try to fit all at once and subtract only those parts of the model which best describe the background. In fact, doing so, you do not even need to subtract the continuum. If you were using peak-o-mat you could e.g. try a model like CB DEC GA GA GA GA (constant background, exponential decay, gauss), assuming in this case that the continuum can be described by an exponential function plus a constant offset. Then you would evaluate those parts of the model which are related to the continuum and subtract them from the original one. peak-o-mat, however, cannot find the peaks on its own; as you said, peak finding can be quite tricky. Tell me if that helps or if you need more information.
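In plain scipy the same fit-everything-at-once idea would look roughly like the sketch below (only two Gaussians, and toy data standing in for a real spectrum; curve_fit needs reasonable starting values in p0):

import numpy as np
from scipy.optimize import curve_fit

def model(x, c, ad, tau, a1, mu1, s1, a2, mu2, s2):
    # constant background + exponential decay + two gaussian peaks
    background = c + ad * np.exp(-x / tau)
    peak1 = a1 * np.exp(-(x - mu1) ** 2 / (2.0 * s1 ** 2))
    peak2 = a2 * np.exp(-(x - mu2) ** 2 / (2.0 * s2 ** 2))
    return background + peak1 + peak2

# toy data standing in for the real spectrum
x = np.linspace(0.0, 100.0, 500)
y = model(x, 1.0, 5.0, 30.0, 4.0, 35.0, 2.0, 2.5, 70.0, 3.0)
y = y + 0.1 * np.random.randn(x.size)

p0 = [1.0, 4.0, 25.0, 3.0, 34.0, 3.0, 2.0, 69.0, 4.0]  # rough initial guesses
popt, pcov = curve_fit(model, x, y, p0=p0)

continuum = popt[0] + popt[1] * np.exp(-x / popt[2])  # background part alone
area1 = popt[3] * popt[5] * np.sqrt(2.0 * np.pi)      # area of each gaussian
area2 = popt[6] * popt[8] * np.sqrt(2.0 * np.pi)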
Regards, Christian From kevin.gullikson at gmail.com Sat Sep 29 11:28:17 2012 From: kevin.gullikson at gmail.com (Kevin Gullikson) Date: Sat, 29 Sep 2012 10:28:17 -0500 Subject: [SciPy-User] Fitting Gaussian in spectra In-Reply-To: References: Message-ID: I'm not sure about the peak-finding part, but for fitting the continuum by ignoring peaks I often do an iterative fit such as below. It ignores more and more points from the peaks. I use this kind of an algorithm for fitting the continuum on absorption spectra pretty often.

import numpy  # x, y (the spectrum) and order (polynomial degree) are assumed defined

done = False
while not done:
    done = True
    # fit a polynomial to the points that are still kept
    fit = numpy.poly1d(numpy.polyfit(x, y, order))
    residuals = y - fit(x)
    std = numpy.std(residuals)
    # points far above the fit belong to peaks; drop them and refit
    badindices = numpy.where(residuals > 5*std)[0]
    if badindices.size > 0:
        done = False
        x = numpy.delete(x, badindices)
        y = numpy.delete(y, badindices)

That solution may not be the best for you if you have a very large array though, because deleting indices from a numpy array is inefficient. Cheers, Kevin Gullikson On Fri, Sep 28, 2012 at 1:45 PM, Joe Philip Ninan wrote: > Hi, > I have a spectrum with multiple Gaussian emission lines over a noisy > continuum. > My primary objective is to find the areas under all the Gaussian peaks. > For that, the following is the algorithm I have in mind. > 1) fit the continuum and subtract it. > 2) find the peaks > 3) do a least-squares fit of a Gaussian at each peak to find the area under > each Gaussian peak. > I am basically stuck at the first step itself. A simple 2nd or 3rd order > polynomial fit is not working because the contribution from the peaks is > significant. Does any tool exist to fit the continuum while ignoring the peaks? > For finding peaks, I tried find_peaks_cwt in the signal module of scipy. But > it seems to be quite sensitive to the peak width and was picking up > non-existent peaks also. > The wavelet used was the default Mexican hat. Is there any better wavelet I > should try? > > Or is there any other module in python/scipy which I should give a try? > Thanking you. > -cheers > joe > -- > /--------------------------------------------------------------- > "GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Sun Sep 30 14:02:44 2012 From: davidmenhur at gmail.com (Daπid) Date: Sun, 30 Sep 2012 20:02:44 +0200 Subject: [SciPy-User] Fitting Gaussian in spectra In-Reply-To: References: Message-ID: On Sat, Sep 29, 2012 at 5:28 PM, Kevin Gullikson wrote: > That solution may not be the best for you if you have a very large array > though, because deleting indices from a numpy array is inefficient. Alternatively, you could use a masked array, and just mask out the peaks. No deletion needed. What polynomial order do you use? From newville at cars.uchicago.edu Sun Sep 30 14:21:43 2012 From: newville at cars.uchicago.edu (Matt Newville) Date: Sun, 30 Sep 2012 13:21:43 -0500 Subject: [SciPy-User] Fitting Gaussian in spectra In-Reply-To: References: Message-ID: Hi Joe, On Fri, Sep 28, 2012 at 1:45 PM, Joe Philip Ninan wrote: > Hi, > I have a spectrum with multiple Gaussian emission lines over a noisy > continuum. > My primary objective is to find the areas under all the Gaussian peaks. > For that, the following is the algorithm I have in mind. > 1) fit the continuum and subtract it.
Setting disp to True or False either displays the Results dictionary in the command box or not (respectively). I don't think the Results dictionary gets to the command box via stdout. ??? 2.2 The 'minimize' function always returns the Result dictionary, regardless of the function call's disp setting.??Setting disp to True or False either displays the Results dictionary in the command box or not (respectively).?I don't think the Results dictionary gets to the command box via stdout. My thanks to all who helped clarify this situation. Bob?H?? From tsyu80 at gmail.com Sun Sep 30 17:23:54 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Sun, 30 Sep 2012 17:23:54 -0400 Subject: [SciPy-User] ANN: scikits-image 0.7.0 release Message-ID: Announcement: scikits-image 0.7.0 ================================= We're happy to announce the 7th version of scikits-image! Scikits-image is an image processing toolbox for SciPy that includes algorithms for segmentation, geometric transformations, color space manipulation, analysis, filtering, morphology, feature detection, and more. For more information, examples, and documentation, please visit our website http://skimage.org New Features ------------ It's been only 3 months since scikits-image 0.6 was released, but in that short time, we've managed to add plenty of new features and enhancements, including - Geometric image transforms - 3 new image segmentation routines (Felsenzwalb, Quickshift, SLIC) - Local binary patterns for texture characterization - Morphological reconstruction - Polygon approximation - CIE Lab color space conversion - Image pyramids - Multispectral support in random walker segmentation - Slicing, concatenation, and natural sorting of image collections - Perimeter and coordinates measurements in regionprops - An extensible image viewer based on Qt and Matplotlib, with plugins for edge detection, line-profiling, and viewing image collections Plus, this release adds a number of bug fixes, new examples, and performance enhancements. Contributors to this release ---------------------------- This release was only possible due to the efforts of many contributors, both new and old. - Andreas Mueller - Andreas Wuerl - Andy Wilson - Brian Holt - Christoph Gohlke - Dharhas Pothina - Emmanuelle Gouillart - Guillaume Gay - Josh Warner - James Bergstra - Johannes Schonberger - Jonathan J. Helmus - Juan Nunez-Iglesias - Leon Tietz - Marianne Corvellec - Matt McCormick - Neil Yager - Nicolas Pinto - Nicolas Poilvert - Pavel Campr - Petter Strandmark - Stefan van der Walt - Tim Sheerman-Chase - Tomas Kazmar - Tony S Yu - Wei Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From helmrp at yahoo.com Sun Sep 30 18:41:05 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sun, 30 Sep 2012 15:41:05 -0700 (PDT) Subject: [SciPy-User] Request help on fsolve Message-ID: <1349044865.39688.YahooMailNeo@web31813.mail.mud.yahoo.com> Please help me out here. I?m trying to rewrite the docstring for the `fsolve.py` routine located on my machine in: C:/users/owner/scipy/scipy/optimize/minpack.py ? The specific issue I?m having difficulty with is understanding the outputs described in fsolve?s docstring as: ?? 'fjac': the orthogonal matrix, q, produced by the QR factorization of the final approximate Jacobian matrix, stored column wise ?? 'r': upper triangular matrix produced by QR factorization of same matrix ? These are described in SciPy?s minpack/hybrd.f file as: ?? ?fjac? 
is an output n by n array which contains the orthogonal matrix q produced by the qr factorization of the final approximate jacobian. ?? ?r? is an output array of length lr which contains the upper triangular matrix produced by the qr factorization of the final approximate jacobian, stored rowwise. ? For ease in writing, in what follows let?s use the symbols ?Jend? for the final approximate Jacobian matrix, and use ?Q? and ?R? for its QR decomposition matrices. Now consider the problem of finding the solution to the following three nonlinear equations in three unknowns (u, v, w), which we will refer to as ?E?: ??? 2 * a * u + b * v + d - w * v = 0 ?? ?b * u + 2 * c * v + e - w * u = 0 ??? u * v - f = 0 where (a, b, c, d, e, f ) = (2, 3, 7, 8, 9, 2). For inputs to fsolve, we identify (u, v, w) = (x[0], x[1], x[2]). ? Now fsolve gives the solution array: ?[uend vend wend] = [? 1.79838825?? 1.11210691? 16.66195357]. With these values, the above three equations E are satisfied to an accuracy of about 9 significant figures. The Jacobian matrix for the three LHS functions in E is: ? ?J = np.matrix([[2*a, b-w, -v], [b-w, 2*c, -u], [v, u, 0.]]) Note that it?s symmetric, and if we compute its value using the above fsolve?s ?end? solution values we get: ?Jend = [[? 4.????????? 19.66195357?? 1.11210691], ??????????? [ 19.66195357? 14.?????????? 1.79838825],? ??????????? [? 1.11210691?? 1.79838825?? 0.??????? ]] Using SciPy?s linalg package, this Jend has the QR decomposition: ?Qend =? [[-0.28013447 -0.91516674 -0.28981807] ??????????? [ 0.95679602 -0.24168763 -0.16164302] ??????????? [ 0.07788487 -0.32257856? 0.94333293]] ?Rend =? [[-14.278857??? 17.08226116? -1.40915124] ??????????? [ -0.?????????? 9.69946027?? 1.45241144] ??????????? [ -0.?????????? 0.?????????? 0.61300558]] and Qend * Rend = Jend to within about 15 significant figures. However, fsolve gives the QR decomposition: ? qretm =? [[-0.64093238? 0.75748326? 0.1241966 ] ??????????? [-0.62403598 -0.60841098? 0.4903215 ] ??????????? [-0.44697291 -0.23675978 -0.8626471 ]] ? rret =? [ -7.77806716? 30.02199802? -0.819055?? -10.74878184?? 2.00090268? 1.02706198] and converting rret to a NumPy matrix gives: ?? ?rretm =? [[ -7.77806716? 30.02199802? -0.819055? ] ??????????? [? 0.???????? -10.74878184?? 2.00090268] ??????????? [? 0.?????????? 0.?????????? 1.02706198]] Now qret and rretm bear no obvious relation to Qend and Rend. Although qretm is orthogonal to about 16 significant figures, we find the product: ?qretm * rretm =? [[? 4.98521509 -27.38409295?? 2.16816676] ??????????? [? 4.85379376 -12.19513008? -0.2026608 ] ??????????? [? 3.47658529 -10.87414051? -0.99362993]] which bears no obvious relationship to Jend. The hybrdj.f routine in minpack refers to a permutation matrix, p, such that we should have in our notation: ?p*Jend = qretm*rretm, but fsolve apparently does not return the matrix p, and I don?t see any permutation of Jend that would equal qretm*rretm. ? If we reinterpret rret as meaning the matrix: ?rretaltm =? [[ -7.77806716? 30.02199802 -10.74878184] ??????????? [? 0.????????? -0.819055???? 2.00090268] ??????????? [? 0.?????????? 0.?????????? 1.02706198]] then we get the product: ?qretm * rretaltm =? [[? 4.98521509 -19.86249109?? 8.53245022] ??????????? [? 4.85379376 -18.2364849??? 5.99384603] ??????????? [? 3.47658529 -13.22510045?? 3.44468895]] which again bears no obvious relationship to Jend. Using the transpose of qretm in the above product is no help. So please help me out here. 
What are the fjac and r values that fsolve returns? How are they related to the above Qend, Rend, and Jend? How is the user supposed to use them? Bob?H