From ondrej at certik.cz Sat Jan 3 02:18:43 2009
From: ondrej at certik.cz (Ondrej Certik)
Date: Fri, 2 Jan 2009 23:18:43 -0800
Subject: [SciPy-dev] tagging 0.7.0rc1 this weekend? (was Numerical Recipes ...)
In-Reply-To: 
References: 
Message-ID: <85b5c3130901022318j43978625x8cbf918547870ebd@mail.gmail.com>

On Mon, Dec 29, 2008 at 2:02 AM, Jarrod Millman wrote:
> The NR code review is basically complete. All the NR copyright issues
> have been resolved (the only thing left is to verify that the
> docstrings clearly explain the code's relation to NR). A big thanks
> to everyone who helped resolve the NR issues!
>
> Since this was the only thing blocking the release candidate, I am
> planning to create a 0.7.x branch and tag 0.7.0rc1 at the end of the
> weekend. Once I branch for 0.7.x, the trunk will be open for 0.8
> development.

That's cool. I should hurry up with fixing the Debian numpy 1.2.1 package, but I have been ultra busy lately. If anyone is using Debian/Ubuntu, would like to have numpy 1.2.1 and scipy 0.7.0 (which depends on it), and would like to help out, let me know. I most probably cannot work on this for at least one more week.

Ondrej

From jason-sage at creativetrax.com Sat Jan 3 17:45:52 2009
From: jason-sage at creativetrax.com (jason-sage at creativetrax.com)
Date: Sat, 03 Jan 2009 15:45:52 -0700
Subject: [SciPy-dev] almost doubled the number of tests
In-Reply-To: 
References: 
Message-ID: <495FEAA0.5020302@creativetrax.com>

Jarrod Millman wrote:
> In preparation for the upcoming release, I was comparing the number of
> tests in 0.6.0 to the number on the trunk:
> 0.6.0: Ran 2266 tests in 203.503s
> Trunk: Ran 3990 tests in 392.357s

This is great. Do you know what the function coverage is (i.e., what percentage of functions have direct tests)?

Thanks,

Jason

> I knew that a large number of tests had been written over the last
> year, but I was very pleased to see that we nearly doubled the number.
> Thanks to everyone who contributed new tests over the last year.
> Increasing our test coverage is extremely important. Importantly, it
> makes it easier to refactor the code without worrying about
> regressions creeping in without being noticed. Hopefully, this trend
> will continue and we will add nearly 2000 new tests over the next
> year.
>
> Jarrod
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
>

From millman at berkeley.edu Sun Jan 4 06:56:40 2009
From: millman at berkeley.edu (Jarrod Millman)
Date: Sun, 4 Jan 2009 03:56:40 -0800
Subject: [SciPy-dev] tagging 0.7.0rc1 this weekend? (was Numerical Recipes ...)
In-Reply-To: 
References: 
Message-ID: 

On Mon, Dec 29, 2008 at 2:02 AM, Jarrod Millman wrote:
> Since this was the only thing blocking the release candidate, I am
> planning to create a 0.7.x branch and tag 0.7.0rc1 at the end of the
> weekend. Once I branch for 0.7.x, the trunk will be open for 0.8
> development.

Just a reminder that I will be creating a 0.7.x branch on Monday. I will create an rc1 tag from the branch. Once I branch, I will send an email to the list and the trunk will be open for 0.8 development. David and Stefan are working on a few final clean-ups before we branch, and I plan to go over the release notes.
I would appreciate it if everyone could review the release notes and let me know if there is anything missing:

http://scipy.org/scipy/scipy/milestone/0.7.0

Also, it would be very useful for anyone to go over the remaining open tickets and check to see if there are any that should be closed:

http://scipy.org/scipy/scipy/query?status=new&status=assigned&status=reopened&milestone=0.7.0

It is possible that some of these have been resolved, but no one has closed the ticket.

Thanks,
Jarrod

From p.c.degroot at tudelft.nl Sun Jan 4 10:24:03 2009
From: p.c.degroot at tudelft.nl (Pieter Cristiaan de Groot)
Date: Sun, 04 Jan 2009 16:24:03 +0100
Subject: [SciPy-dev] sse related crash in numpy-1.2.1 and scipy-0.7.0b1
Message-ID: <4960D493.7020003@tudelft.nl>

Hello all,

In addition to this post:
http://www.nabble.com/Re:-scipy-on-old-CPU-crashes-td20787430.html

I would like to report that numpy-1.2.1 and scipy-0.7.0b1 crash in their corresponding .test() scripts. It is a Windows XP machine with an Athlon XP 2200 processor (does have SSE, doesn't have SSE2), using Python 2.5.4. I found out the hard way that at least "leastsq" from "scipy.optimize" crashes. The Windows crash report points towards "_minpack.dll", if I remember correctly. Digging into the problem, it turned out that both numpy and scipy crash in their .test() scripts. Going back to numpy 1.2.0 and scipy-0.6.0-P3 solved the problem. I think it is a reasonable guess (but not more than that) that this is related to SSE(2) instructions (see link above). Unfortunately I didn't store any output of the test scripts, and since it is a critical machine I cannot experiment too much anymore with different versions. Still, I hope that this report may help.

Pieter

p.s. I'm a relatively new user and was confused by the 'superpack' name in the download section. After getting familiar with the problems described above, I think I understand what it means now. Would it be a good idea to put a small explanation of what 'superpack' means on the website (just before, or in, the download section)?

From cournape at gmail.com Sun Jan 4 11:05:13 2009
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 5 Jan 2009 01:05:13 +0900
Subject: [SciPy-dev] sse related crash in numpy-1.2.1 and scipy-0.7.0b1
In-Reply-To: <4960D493.7020003@tudelft.nl>
References: <4960D493.7020003@tudelft.nl>
Message-ID: <5b8d13220901040805n3539ea34m8d21c23de9007702@mail.gmail.com>

On Mon, Jan 5, 2009 at 12:24 AM, Pieter Cristiaan de Groot wrote:
> Hello all,
>
> In addition to this post:
> http://www.nabble.com/Re:-scipy-on-old-CPU-crashes-td20787430.html
>
> I would like to report that numpy-1.2.1 and scipy-0.7.0b1 crash in their
> corresponding .test() scripts.
> It is a Windows XP machine with an Athlon XP 2200 processor (does have
> SSE, doesn't have SSE2), using Python 2.5.4.
>
The issue is known - and you're right about the reason. The issue should not occur for the release of scipy 0.7.

David

From stefan at sun.ac.za Sun Jan 4 17:03:45 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 5 Jan 2009 00:03:45 +0200
Subject: [SciPy-dev] Stopping F2Py from generating verbose output
Message-ID: <9457e7c80901041403i7de7e67dk4455922625d9e53f@mail.gmail.com>

Hi all,

It seems that somewhere along the line f2py inserts debugging prints for scipy.lib.blas.fblas. E.g.
$ python -c "from scipy.lib.blas import fblas; fblas.zaxpy([1, 2], [3, 4], n=4)" zaxpy:n=4 Traceback (most recent call last): File "", line 1, in fblas.error: (len(y)-offy>(n-1)*abs(incy)) failed for 1st keyword n Does anybody know how to suppress those "zaxpy:n=4" style messages, or even just where they are generated? Thanks, St?fan From aisaac at american.edu Sun Jan 4 17:20:00 2009 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 04 Jan 2009 17:20:00 -0500 Subject: [SciPy-dev] Numerical Recipes (was tagging 0.7rc1 this weekend?) In-Reply-To: References: <4947CA8C.40008@american.edu> Message-ID: <49613610.8060407@american.edu> > On Tue, Dec 16, 2008 at 7:34 AM, Alan G Isaac wrote: >> Travis O. is listed as writing this orthogonal.py in 2000 >> and doing bug fixes in 2003. Note that the >> reference to Numerical Recipes is one of two >> references documenting an algebraic recursion >> relation and is not referring to an implementation. >> http://www.fizyka.umk.pl/nrbook/c4-5.pdf >> http://www-lsp.ujf-grenoble.fr/recherche/a3t2/a3t2a2/bahram/biblio/abramowitz_and_stegun/page_774.htm >> I do not see any problem with the code. Fernando Perez wrote: > Other than the fact that these codes both use the same > piece of mathematics, I see no problem here whatsoever. > It might be worth clarifying in the docstring Am I correct that the documentation project does not give access to the module-level docstrings? I would have been happy to make this change in orthogonal.py but did not see how to add documentation via the project. Anyway, here is the reST for the GW cite and what I think the AS citation info is. (Not sure about the address.) fwiw, Alan Isaac For the mathematical background, see [golub.welsch-1969-mathcomp]_ and [abramowitz.stegun-1965]_. .. [golub.welsch-1969-mathcomp] Golub, Gene H, and John H Welsch. 1969. Calculation of Gauss Quadrature Rules. *Mathematics of Computation* 23, 221-230+s1--s10. .. [abramowitz.stegun-1965] Abramowitz, Milton, and Irene A Stegun. (1965) *Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables*. Gaithersburg, MD: National Bureau of Standards. 
http://www.math.sfu.ca/~cbm/aands/ From doutriaux1 at llnl.gov Mon Jan 5 17:31:23 2009 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Mon, 5 Jan 2009 14:31:23 -0800 Subject: [SciPy-dev] scipy fails with 2.6.1 on Mac OS X Message-ID: <1C34EFB9-743C-4FE3-8853-6859A825ABBC@llnl.gov> Hi, I'm trying to build scipy on my mac 10.5 with python 2.6.1 I get the following error message: /usr/local/bin/gfortran -Wall -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-i386-2.6/build/src.macosx-10.3-i386-2.6/build/ src.macosx-10.3-i386-2.6/scipy/sparse/linalg/isolve/iterative/ _iterativemodule.o build/temp.macosx-10.3-i386-2.6/build/ src.macosx-10.3-i386-2.6/fortranobject.o build/temp.macosx-10.3- i386-2.6/build/src.macosx-10.3-i386-2.6/scipy/sparse/linalg/isolve/ iterative/STOPTEST2.o build/temp.macosx-10.3-i386-2.6/build/ src.macosx-10.3-i386-2.6/scipy/sparse/linalg/isolve/iterative/ getbreak.o build/temp.macosx-10.3-i386-2.6/build/src.macosx-10.3- i386-2.6/scipy/sparse/linalg/isolve/iterative/BiCGREVCOM.o build/ temp.macosx-10.3-i386-2.6/build/src.macosx-10.3-i386-2.6/scipy/sparse/ linalg/isolve/iterative/BiCGSTABREVCOM.o build/temp.macosx-10.3- i386-2.6/build/src.macosx-10.3-i386-2.6/scipy/sparse/linalg/isolve/ iterative/CGREVCOM.o build/temp.macosx-10.3-i386-2.6/build/ src.macosx-10.3-i386-2.6/scipy/sparse/linalg/isolve/iterative/ CGSREVCOM.o build/temp.macosx-10.3-i386-2.6/build/src.macosx-10.3- i386-2.6/scipy/sparse/linalg/isolve/iterative/GMRESREVCOM.o build/ temp.macosx-10.3-i386-2.6/build/src.macosx-10.3-i386-2.6/scipy/sparse/ linalg/isolve/iterative/QMRREVCOM.o -L/usr/local/lib/gcc/i386-apple- darwin8.11.1/4.3.0 -Lbuild/temp.macosx-10.3-i386-2.6 -lgfortran -o build/lib.macosx-10.3-i386-2.6/scipy/sparse/linalg/isolve/ _iterative.so -Wl,-framework -Wl,Accelerate ld warning: duplicate dylib /usr/local/lib/libgcc_s.1.dylib building 'scipy.sparse.linalg.dsolve._zsuperlu' extension compiling C sources C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall - Wstrict-prototypes compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 -I/lgm/cdat/ 2.6.1/lib/python2.6/site-packages/numpy/core/include -I/lgm/cdat/2.6.1/ include/python2.6 -c' extra options: '-msse3' gcc: scipy/sparse/linalg/dsolve/_superluobject.c In file included from scipy/sparse/linalg/dsolve/_superluobject.h:8, from scipy/sparse/linalg/dsolve/_superluobject.c:5: scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for '_Py_c_abs' /lgm/cdat/2.6.1/include/python2.6/complexobject.h:30: error: previous declaration of '_Py_c_abs' was here In file included from scipy/sparse/linalg/dsolve/_superluobject.h:8, from scipy/sparse/linalg/dsolve/_superluobject.c:5: scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for '_Py_c_abs' /lgm/cdat/2.6.1/include/python2.6/complexobject.h:30: error: previous declaration of '_Py_c_abs' was here error: Command "gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 -I/lgm/cdat/ 2.6.1/lib/python2.6/site-packages/numpy/core/include -I/lgm/cdat/2.6.1/ include/python2.6 -c scipy/sparse/linalg/dsolve/_superluobject.c -o build/temp.macosx-10.3-i386-2.6/scipy/sparse/linalg/dsolve/ _superluobject.o -msse3" failed with exit status 1 I can provide you with the full scipy build log if you want. Thanks for your help, C. 
From wnbell at gmail.com Mon Jan 5 17:54:46 2009
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 5 Jan 2009 17:54:46 -0500
Subject: [SciPy-dev] scipy fails with 2.6.1 on Mac OS X
In-Reply-To: <1C34EFB9-743C-4FE3-8853-6859A825ABBC@llnl.gov>
References: <1C34EFB9-743C-4FE3-8853-6859A825ABBC@llnl.gov>
Message-ID: 

On Mon, Jan 5, 2009 at 5:31 PM, Charles سمير Doutriaux wrote:
> Hi,
>
> I'm trying to build scipy on my mac 10.5
> with python 2.6.1
>
> I get the following error message:
>
> In file included from scipy/sparse/linalg/dsolve/_superluobject.h:8,
> from scipy/sparse/linalg/dsolve/_superluobject.c:5:
> scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h:60: error:
> conflicting types for '_Py_c_abs'

This should be fixed in SciPy 0.7

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From doutriaux1 at llnl.gov Mon Jan 5 17:58:07 2009
From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=)
Date: Mon, 5 Jan 2009 14:58:07 -0800
Subject: [SciPy-dev] scipy fails with 2.6.1 on Mac OS X
In-Reply-To: 
References: <1C34EFB9-743C-4FE3-8853-6859A825ABBC@llnl.gov>
Message-ID: <22C3C77D-6DA0-4414-ADB7-84E7D223539D@llnl.gov>

Hi Nathan,

Thanks, I just realized I wasn't building against the trunk... I tested it against the trunk and it went fine.

Thanks again,

C.

On Jan 5, 2009, at 2:54 PM, Nathan Bell wrote:

> On Mon, Jan 5, 2009 at 5:31 PM, Charles سمير Doutriaux
> wrote:
>> Hi,
>>
>> I'm trying to build scipy on my mac 10.5
>> with python 2.6.1
>>
>> I get the following error message:
>>
>> In file included from scipy/sparse/linalg/dsolve/_superluobject.h:8,
>> from scipy/sparse/linalg/dsolve/_superluobject.c:5:
>> scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h:60: error:
>> conflicting types for '_Py_c_abs'
>
> This should be fixed in SciPy 0.7
>
> --
> Nathan Bell wnbell at gmail.com
> http://graphics.cs.uiuc.edu/~wnbell/
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

From benfrantzdale at gmail.com Mon Jan 5 21:30:21 2009
From: benfrantzdale at gmail.com (Ben FrantzDale)
Date: Mon, 5 Jan 2009 21:30:21 -0500
Subject: [SciPy-dev] Abstract vectors in optimization
Message-ID: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com>

I've started playing with SciPy for numerical optimization. So far, it looks like a great set of useful algorithm implementations.

If I understand the architecture correctly, there is one change that could make these libraries even more powerful, particularly for large-scale scientific computing. The optimization algorithms seem to be designed to work only with NumPy vectors. Mathematically, though, many of these algorithms can operate on an arbitrary Hilbert space, so the algorithms can be slightly rewritten to work with any objects that support addition, scalar multiplication, and provide an inner product, and a few other things.

If these functions could be made more general in this way, I could see the SciPy optimization tools being extremely useful for large-scale optimization problems in which people already have the cost function, its gradient, and the representation of a state vector implemented (e.g., in C++). It would be easy for them to provide a Hilbert-space interface to Python.

For example, the nonlinear CG solver, fmin_cg, takes a cost function and an initial state, x0, and among other things, a number describing a norm to use to test for convergence.
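To make that concrete, here is the kind of call I have in mind (a toy sketch only -- the quadratic cost is artificial, and I am only using fmin_cg's basic f/x0/fprime/gtol arguments):

from numpy import array, dot
from scipy.optimize import fmin_cg

# A toy convex cost and its gradient, both written against plain NumPy vectors.
def cost(x):
    return 0.5 * dot(x, x)

def grad(x):
    return x

x0 = array([1.0, -2.0, 3.0])
xmin = fmin_cg(cost, x0, fprime=grad, gtol=1e-6)  # converges to the zero vector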
At present, it requires that the following functions (among others) work:

x0 = asarray(x0).flatten()
len(x0)
vecnorm(gfk, ord=norm) # where gfk is the derivative vector and norm is passed in
numpy.dot(gfk,gfk)

It was easy for me (new to Python, seasoned in C++) to rewrite this function to work with a class providing a Hilbert-space interface, thereby removing the dependence on any particular representation of the vector and gradient. For the sake of testing, I wrote a wrapper class that wraps a NumPy array. When rewriting fmin_cg to use that class, I don't need asarray(x0).flatten(), len(x0) is provided as __len__(self), norm(self, norm=2) is provided, and inner_product(self,other) is provided, along with the arithmetic operations.

Has anyone considered this approach? It really seems like a small code change that would both simplify the optimization code and make it significantly more flexible. I've tried similar approaches for generic high-performance optimization functions in C++, but I think Python's type system and memory management make it a better language to write this sort of solver.

--Ben
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david at ar.media.kyoto-u.ac.jp Tue Jan 6 00:27:23 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 06 Jan 2009 14:27:23 +0900
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com>
Message-ID: <4962EBBB.20201@ar.media.kyoto-u.ac.jp>

Hi Ben,

Ben FrantzDale wrote:
> I've started playing with SciPy for numerical optimization. So far, it
> looks like a great set of useful algorithm implementations.
>
> If I understand the architecture correctly, there is one change that
> could make these libraries even more powerful, particularly for
> large-scale scientific computing. The optimization algorithms seem to
> be designed to work only with NumPy vectors. Mathematically, though,
> many of these algorithms can operate on an arbitrary Hilbert space, so
> the algorithms can be slightly rewritten to work with any objects that
> support addition, scalar multiplication, and provide an inner product,
> and a few other things.

It may be true mathematically, but I don't think it is such a useful abstraction for implementation. The few other things are what matter the most, actually. For example, many if not most scipy algorithms depend on advanced features provided by numpy, such as slicing, broadcasting, etc... Implementing everything in scipy in terms of an abstract object with only the operations implied by Euclidean structures would be quite awkward.

>
> If these functions could be made more general in this way, I could see
> the SciPy optimization tools being extremely useful for large-scale
> optimization problems in which people already have the cost function,
> its gradient, and the representation of a state vector implemented
> (e.g., in C++).

You can already do that with some effort, depending on the memory representation of your state vector: you make a numpy array from your data (you can of course almost always convert to a numpy array anyway, but I assume you want to avoid copies, since you talk about large-scale optimization).

I want to ask: why would you *not* want to use numpy ? What does it bring to you ?
cheers,

David

From robert.kern at gmail.com Tue Jan 6 01:01:01 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 6 Jan 2009 00:01:01 -0600
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <4962EBBB.20201@ar.media.kyoto-u.ac.jp>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730901052201u58b61179h57d4889042a9b2e0@mail.gmail.com>

On Mon, Jan 5, 2009 at 23:27, David Cournapeau wrote:
> I want to ask: why would you *not* want to use numpy ? What does it
> bring to you ?

If you want to implement optimization in a curved space with a non-Euclidean metric (say in the space of correlation matrices or SO(3)), you usually can't just use numpy arrays in the optimization implementations that we have. You really do need to rewrite the algorithm in terms of the abstract operations. For example,

http://www-math.mit.edu/~lippert/sgmin.html

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Tue Jan 6 01:09:07 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 6 Jan 2009 00:09:07 -0600
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com>
Message-ID: <3d375d730901052209q37ecb9fbk8ea23ccaab4c605f@mail.gmail.com>

2009/1/5 Ben FrantzDale :
> Has anyone considered this approach? It really seems like a small code
> change that would both simplify the optimization code and make it
> significantly more flexible. I've tried similar approaches for generic
> high-performance optimization functions in C++, but I think Python's type
> system and memory management make it a better language to write this sort
> of solver.

I've looked at this before. Many of the solvers are implemented in FORTRAN, so it's not all that easy to replace their innards. Fortunately, they are almost black-box; they just need to be modified to accept an inner-product function. The ones in Python might be easier to replace, but I'm skeptical that this can be done in either language without adversely affecting performance in the typical case. It might be feasible to "copy-and-modify" a handful of the Python ones as needed, though.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Tue Jan 6 01:06:01 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 06 Jan 2009 15:06:01 +0900
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <3d375d730901052201u58b61179h57d4889042a9b2e0@mail.gmail.com>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <3d375d730901052201u58b61179h57d4889042a9b2e0@mail.gmail.com>
Message-ID: <4962F4C9.7020600@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
> On Mon, Jan 5, 2009 at 23:27, David Cournapeau
> wrote:
>> I want to ask: why would you *not* want to use numpy ? What does it
>> bring to you ?
>
> If you want to implement optimization in a curved space with a
> non-Euclidean metric (say in the space of correlation matrices or
> SO(3)), you usually can't just use numpy arrays in the optimization
> implementations that we have. You really do need to rewrite the
> algorithm in terms of the abstract operations. For example,
>
> http://www-math.mit.edu/~lippert/sgmin.html

Sure, I did not mean that more abstract arrays are never useful; I understood the OP's question as asking why not write the whole of scipy without numpy. For more specific algorithms, I don't see much difference between "abstract arrays" for optimization and another kind of class - unless you need the whole numpy interface and linear algebra ?

David

P.S: Thanks for the link, BTW, I did not know about this work, it looks quite interesting; did you implement some of the methods mentioned in the paper "Nonlinear Eigenvalue Problems With Orthogonality Constraints" ?

From robert.kern at gmail.com Tue Jan 6 01:52:36 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 6 Jan 2009 00:52:36 -0600
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <4962F4C9.7020600@ar.media.kyoto-u.ac.jp>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <3d375d730901052201u58b61179h57d4889042a9b2e0@mail.gmail.com> <4962F4C9.7020600@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730901052252g11ad9d0elb75bf47a73a33f28@mail.gmail.com>

On Tue, Jan 6, 2009 at 00:06, David Cournapeau wrote:
> Robert Kern wrote:
>> On Mon, Jan 5, 2009 at 23:27, David Cournapeau
>> wrote:
>>> I want to ask: why would you *not* want to use numpy ? What does it
>>> bring to you ?
>>
>> If you want to implement optimization in a curved space with a
>> non-Euclidean metric (say in the space of correlation matrices or
>> SO(3)), you usually can't just use numpy arrays in the optimization
>> implementations that we have. You really do need to rewrite the
>> algorithm in terms of the abstract operations. For example,
>>
>> http://www-math.mit.edu/~lippert/sgmin.html
>
> Sure, I did not mean that more abstract arrays are never useful; I
> understood the OP's question as asking why not write the whole of scipy
> without numpy. For more specific algorithms, I don't see much difference
> between "abstract arrays" for optimization and another kind of class -
> unless you need the whole numpy interface and linear algebra ?

Not everything, but you do need to change code (even the pure Python functions) in order to actually be black-box and support other metrics. There are Euclidean/ndarray assumptions in the code. For example, I initially thought that the iterative FORTRAN solvers that we have were already black-box enough, but after asking Lippert's advice you need a bit more. The iterative solvers are abstracted at roughly the level an ndarray-like class would be.

> P.S: Thanks for the link, BTW, I did not know about this work, it looks
> quite interesting; did you implement some of the methods mentioned in
> the paper "Nonlinear Eigenvalue Problems With Orthogonality Constraints" ?

Some. It was at the stage of using the scipy.sparse.linalg iterative solvers before I realized that they weren't sufficient. Also, this was some time ago, and the interfaces changed a bit, so it currently doesn't even almost work. It follows the work done in LRCM MIN for finding low-rank correlation matrices, which is a derivative of SG MIN.
http://pietersz.org/lrcm%20min.htm

In order to make it work, the iterative solvers actually need to be re-implemented.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cholesky_optimize.py
Type: text/x-python-script
Size: 22163 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: randcorr.py
Type: text/x-python-script
Size: 1525 bytes
Desc: not available
URL: 

From stefan at sun.ac.za Tue Jan 6 03:58:02 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 6 Jan 2009 10:58:02 +0200
Subject: [SciPy-dev] Patching fmin_l_bfgs_b in scipy.optimize
In-Reply-To: <4948598C.7050000@gmail.com>
References: <49484F0E.4000409@gmail.com> <49485775.9010506@creativetrax.com> <4948598C.7050000@gmail.com>
Message-ID: <9457e7c80901060058j2f79bd8ax8abf85641a00865b@mail.gmail.com>

Hi Gilles

If your patch hasn't yet been applied, please file a ticket on the developer page http://projects.scipy.org/scipy/scipy so that we don't forget to attend to this issue.

Cheers
Stéfan

2008/12/17 Gilles Rochefort :
> Thanks for the advice,
> here is my new patch !
>
> Regards,
> Gilles.
>
> jason-sage at creativetrax.com a écrit :
>
> Gilles Rochefort wrote:
>
>
> Hi,
>
> I had some trouble getting a stable version of fmin_l_bfgs_b. I got a
> segmentation fault as soon as I passed a value other than -1 for the
> iprint argument.

From stefan at sun.ac.za Tue Jan 6 04:29:44 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 6 Jan 2009 11:29:44 +0200
Subject: [SciPy-dev] make 0.7.0b1 lintian clean
In-Reply-To: <85b5c3130812080831j3bc1e78jf3c91c2be34e965f@mail.gmail.com>
References: <85b5c3130812080831j3bc1e78jf3c91c2be34e965f@mail.gmail.com>
Message-ID: <9457e7c80901060129r240e6e3eq741fe0008aec238f@mail.gmail.com>

2008/12/8 Ondrej Certik :
> I am testing the debian packaging of 0.7.0b1 and lintian says:
[...]
> The attached patch fixes this in the scipy trunk. If you agree, please apply.
[...]

Thanks, Ondrej! I applied your patch.

Cheers
Stéfan

From stefan at sun.ac.za Tue Jan 6 04:55:00 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 6 Jan 2009 11:55:00 +0200
Subject: [SciPy-dev] Single precision FFT
In-Reply-To: 
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp>
Message-ID: <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com>

2008/12/1 Anne Archibald :
> I'd also like to suggest that, if possible, it would be nice if
> single-precision FFTs were not a separate module, or even a separate
> function, but instead the usual fft function selected them when handed
> a single-precision input.

Has there been any progress on the single precision FFT front? It seems that this is a highly requested feature, so it should go on the TODO list for 0.8.
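If I understand Anne's suggestion correctly, the public wrapper would simply dispatch on the input dtype. A rough sketch, for illustration only -- the single-precision branch below is stubbed with a downcast of numpy's double-precision FFT, where a real implementation would call a float32 FFTPACK wrapper instead:

import numpy as np

def fft(x, n=None, axis=-1):
    # Dispatch on dtype: keep single-precision inputs in single precision
    # instead of silently upcasting them to double.
    x = np.asarray(x)
    if x.dtype in (np.float32, np.complex64):
        # Stub: a real single-precision backend would be called here.
        return np.fft.fft(x, n, axis).astype(np.complex64)
    return np.fft.fft(x, n, axis)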
Cheers
Stéfan

From david at ar.media.kyoto-u.ac.jp Tue Jan 6 04:44:03 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 06 Jan 2009 18:44:03 +0900
Subject: [SciPy-dev] Single precision FFT
In-Reply-To: <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com>
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com>
Message-ID: <496327E3.9060105@ar.media.kyoto-u.ac.jp>

Stéfan van der Walt wrote:
> 2008/12/1 Anne Archibald :
>
>> I'd also like to suggest that, if possible, it would be nice if
>> single-precision FFTs were not a separate module, or even a separate
>> function, but instead the usual fft function selected them when handed
>> a single-precision input.
>>
>
> Has there been any progress on the single precision FFT front?

Not that I am aware of - for sure nothing has been committed to the scipy tree. There was some discussion recently about someone interested in doing the work in numpy - but I think this should be done in scipy (to avoid putting more code in numpy - and because the underlying Fortran code for single precision is already there in scipy's case).

For anyone moderately familiar with f2py and numpy, it would not take much time, since it would mostly be a copy and paste of the existing code for the low-level wrapper, and a bit of careful work to avoid breaking any public API for the high-level wrapper.

cheers,

David

From pearu at cens.ioc.ee Tue Jan 6 06:40:19 2009
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 6 Jan 2009 13:40:19 +0200 (EET)
Subject: [SciPy-dev] Stopping F2Py from generating verbose output
In-Reply-To: <9457e7c80901041403i7de7e67dk4455922625d9e53f@mail.gmail.com>
References: <9457e7c80901041403i7de7e67dk4455922625d9e53f@mail.gmail.com>
Message-ID: <39446.172.17.0.4.1231242019.squirrel@cens.ioc.ee>

On Mon, January 5, 2009 12:03 am, Stéfan van der Walt wrote:
> Hi all,
>
> It seems that somewhere along the line f2py inserts debugging prints
> for scipy.lib.blas.fblas. E.g.
>
> $ python -c "from scipy.lib.blas import fblas; fblas.zaxpy([1, 2], [3, 4],
> n=4)"
> zaxpy:n=4
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
> fblas.error: (len(y)-offy>(n-1)*abs(incy)) failed for 1st keyword n
>
> Does anybody know how to suppress those "zaxpy:n=4" style messages, or
> even just where they are generated?

These messages are generated by f2py generated extension modules and there is no switch implemented to disable these messages because they are part of the exception. Sometimes the error message is not sufficient to determine the cause of the exception and printing out the actual values of the arguments is useful. IMHO, a proper solution would be to include the printed messages to the exception value instead of just removing them.

Pearu

From benfrantzdale at gmail.com Tue Jan 6 08:00:24 2009
From: benfrantzdale at gmail.com (Ben FrantzDale)
Date: Tue, 6 Jan 2009 08:00:24 -0500
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <4962EBBB.20201@ar.media.kyoto-u.ac.jp>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp>
Message-ID: <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com>

David,

Thanks for the reply.
Let me describe the problem I've thought most about to give you a sense of where I am coming from:

The optimization problem I have worked most with is atomistic simulation (particularly relaxation) in which you have millions of atoms all interacting with a force function. The force function is sparse but as the atoms move, the sparseness pattern changes (with sparseness structures that can be updated automatically when the state vector is "moved"). Furthermore, the atoms can be distributed via MPI, so while they happen to be a size-3n array of doubles locally, local atom 1,234 at one moment may not be the same atom as local atom 1,234 at the next step. (I don't know the details of PyMPI or how garbage collection might interact with MPI, but using SciPy for massively-parallel optimization is another motivation for this post.)

A common solver to use to relax the state of this sort of system is nonlinear CG. However, I've seen many separate implementations of CG hand-written to operate on these data structures. As a software engineer, this is sad to see; really one nonlinear CG solver should be reusable enough that it can be dropped in and used. I tried writing a sufficiently-generic CG solver in C++, but designing the right family of types, particularly with memory-management issues in mind, got out of hand. Given the cost function is the expensive part of the sorts of problems I am interested in, the overhead of Python is tiny, and SciPy has nice optimization libraries, which led me to consider generalizing SciPy functions.

I agree with Robert's comment about non-Euclidean metrics, such as working with SO(3). Similarly, I have thought it would be interesting to optimize function spaces (as in finite-element solvers) in which it should be possible to add two functions over the same domain but with different discretizations, so that __add__ would implicitly remesh one or the other function rather than having __add__ require the same basis functions on the LHS and RHS so it can just add corresponding scalar coefficients.

As for the "few other things", the only example I tried changing is nonlinear CG, and that really didn't require too many things. See below for the SimpleHilbertVector class I tested with, which just hides a numpy array behind a Hilbert-space interface. Obviously methods that require second-order approximations (i.e., a Hessian), such as Levenberg-Marquardt, would need some way to represent the Hessian, so that is a method that would at least require more thought to implement in an interface-only way.

As for slicing, broadcasting, etc., I didn't use any of this in fmin_cg. I see one use of slicing in optimize.py (in brute), but nothing about broadcasting. Can you point to what you mean?

--Ben

# Apologies to whatever degree the following is non-Pythonic:

from numpy import array, dot, sqrt, Inf

class HilbertVector:
    """Abstract base class: subclasses provide inner_product()."""
    def norm(self, norm = 2):
        if norm == 2:
            return sqrt(self.inner_product(self))
        else:
            raise ValueError("only the 2-norm is handled here")

class SimpleHilbertVector(HilbertVector):
    """
    This is a simple Hilbert vector: a thin wrapper hiding a numpy
    array behind the Hilbert-space interface.
    """
    def __init__(self, arr):
        if isinstance(arr, SimpleHilbertVector):
            self.data = arr.data
        else:
            self.data = array(arr)

    def inner_product(self, other):
        return dot(self.data, other.data)

    def norm(self, norm = 2):
        try:
            return HilbertVector.norm(self, norm)  # the 'return' was missing
        except ValueError:
            if norm == Inf:
                return max(abs(self.data))
            raise

    # Scalar multiplication and division.
    def __mul__(self, other):
        return SimpleHilbertVector(self.data * other)
    __rmul__ = __mul__
    def __div__(self, other):
        return SimpleHilbertVector(self.data / other)
    __truediv__ = __div__
    def __rdiv__(self, other):
        return SimpleHilbertVector(other / self.data)

    # Vector addition and subtraction; the constructor coerces the
    # other operand so that mixed operands behave sensibly.
    def __add__(self, other):
        return SimpleHilbertVector(self.data + SimpleHilbertVector(other).data)
    __radd__ = __add__
    def __sub__(self, other):
        return SimpleHilbertVector(self.data - SimpleHilbertVector(other).data)
    def __rsub__(self, other):
        return SimpleHilbertVector(SimpleHilbertVector(other).data - self.data)
    def __neg__(self):
        return SimpleHilbertVector(-self.data)

    def __len__(self):
        return len(self.data)
    def __str__(self):
        return str(self.data) + " (SimpleHilbertVector)"
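As a quick sanity check of the wrapper above (values worked out by hand):

v = SimpleHilbertVector([3.0, 4.0])
w = SimpleHilbertVector([1.0, 0.0])
print v.inner_product(w)   # 3.0
print v.norm()             # 5.0 (Euclidean norm)
print (v - w).norm(Inf)    # 4.0 (sup norm)
print len(2.0 * v + w)     # 2; the vector-space operations return wrappers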
""" def __init__(self, arr): if isinstance(arr, SimpleHilbertVector): self.data = arr.data else: self.data = array(arr) def inner_product(self, other): return dot(self.data, other.data) def norm(self, norm = 2): try: HilbertVector.norm(self, norm) except: if norm == Inf: return max(abs(self.data)) raise def __mul__(self, other): return SimpleHilbertVector(self.data.__mul__(other)) def __div__(self, other): return SimpleHilbertVector(self.data.__div__(other)) def __truediv__(self, other): return SimpleHilbertVector(self.data.__truediv__(other)) def __rdiv__(self, other): return SimpleHilbertVector(self.data.__rdiv__(other)) def __rmul__(self, other): return SimpleHilbertVector(self.data.__mul__(other)) def __add__(self, other): return SimpleHilbertVector(other.data.__radd__(self.data)) def __radd__(self, other): return SimpleHilbertVector(other.data.__add__(self.data)) def __sub__(self, other): return SimpleHilbertVector(other.__rsub__(self.data)) def __rsub__(self, other): return SimpleHilbertVector(other.__sub__(self.data)) def __neg__(self): return SimpleHilbertVector(self.data.__neg__()) def __len__(self): return self.data.__len__() def __str__(self): return self.data.__str__() + " (SimpleHilbertVector)" On Tue, Jan 6, 2009 at 12:27 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Hi Ben, > > Ben FrantzDale wrote: > > I've started playing with SciPy for numerical optimization. So far, it > > looks like a great set of useful algorithm implementations. > > > > If I understand the architecture correctly, there is one change that > > could make these libraries even more powerful, particularly for > > large-scale scientific computing. The optimization algorithms seem to > > be designed to work only with NumPy vectors. Mathematically, though, > > many of these algorithms can operate on an arbitrary Hilbert space, so > > the algoirhtms can be slightly rewritten to work with any objects that > > support addition, scalar multiplication, and provide an inner product, > > and a few other things. > > It may be true mathematically, but I don't think it is such a useful > abstraction for implementation. The few other things is what matters the > most, actually. For example, many if not most scipy algorithms depend on > advanced features provided by numpy, such as slicing, broadcasting, > etc... Implementing everything in scipy in terms of an abstract object > with only the operations implied by Euclidean structures would be quite > awkward. > > > > > If these functions could be made more general in this way, I could see > > the SciPy optimization tools being extremely useful for large-scale > > optimization problems in which people already have the cost function, > > its gradient, and the representation of a state vector implemented > > (e.g., in C++). > > You can already do that with some effort, depending on the memory > representation of your state vector: you make a numpy array from your > data (you can of course almost always convert to a numpy array anyway, > but I assume you want to avoid copies, since you talk about large-scale > optimization). > > I want to ask: why would you *not* want to use numpy ? What does it > bring to you ? > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From stefan at sun.ac.za Tue Jan 6 08:22:58 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 6 Jan 2009 15:22:58 +0200
Subject: [SciPy-dev] Stopping F2Py from generating verbose output
In-Reply-To: <39446.172.17.0.4.1231242019.squirrel@cens.ioc.ee>
References: <9457e7c80901041403i7de7e67dk4455922625d9e53f@mail.gmail.com> <39446.172.17.0.4.1231242019.squirrel@cens.ioc.ee>
Message-ID: <9457e7c80901060522w594be284hb89c201bd9d85c59@mail.gmail.com>

2009/1/6 Pearu Peterson :
> These messages are generated by f2py generated extension modules
> and there is no switch implemented to disable these messages
> because they are part of the exception. Sometimes the error message
> is not sufficient to determine the cause of the exception and
> printing out the actual values of the arguments is useful.
> IMHO, a proper solution would be to include the printed messages
> to the exception value instead of just removing them.

That sounds like a better plan. The reason I wanted to remove them is because they pop up everywhere during unit testing. Would you like to have a look at it?

Thanks for the pointer,
Stéfan

From stefan at sun.ac.za Tue Jan 6 08:24:38 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 6 Jan 2009 15:24:38 +0200
Subject: [SciPy-dev] Single precision FFT
In-Reply-To: <496327E3.9060105@ar.media.kyoto-u.ac.jp>
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp>
Message-ID: <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com>

2009/1/6 David Cournapeau :
>> Has there been any progress on the single precision FFT front?
>
> Not that I am aware of - for sure nothing has been committed to the
> scipy tree. There was some discussion recently about someone
> interested in doing the work in numpy - but I think this should be done
> in scipy (to avoid putting more code in numpy - and because the
> underlying Fortran code for single precision is already there in
> scipy's case).
>
> For anyone moderately familiar with f2py and numpy, it would not take
> much time, since it would mostly be a copy and paste of the existing
> code for the low-level wrapper, and a bit of careful work to avoid
> breaking any public API for the high-level wrapper.

I think this would make a very good GSOC project. I heard that some students were interested in working on Scipy.

Regards
Stéfan

From david at ar.media.kyoto-u.ac.jp Tue Jan 6 08:28:07 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 06 Jan 2009 22:28:07 +0900
Subject: [SciPy-dev] Abstract vectors in optimization
In-Reply-To: <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com>
References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com>
Message-ID: <49635C67.8000307@ar.media.kyoto-u.ac.jp>

Ben FrantzDale wrote:
> David,
>
> Thanks for the reply. Let me describe the problem I've thought most
> about to give you a sense of where I am coming from:
>
> The optimization problem I have worked most with is atomistic
> simulation (particularly relaxation) in which you have millions of
> atoms all interacting with a force function.
> The force function is sparse but as the atoms move, the sparseness
> pattern changes (with sparseness structures that can be updated
> automatically when the state vector is "moved"). Furthermore, the atoms
> can be distributed via MPI, so while they happen to be a size-3n array
> of doubles locally, local atom 1,234 at one moment may not be the same
> atom as local atom 1,234 at the next step. (I don't know the details of
> PyMPI or how garbage collection might interact with MPI, but using
> SciPy for massively-parallel optimization is another motivation for
> this post.)
>
> A common solver to use to relax the state of this sort of system is
> nonlinear CG. However, I've seen many separate implementations of CG
> hand-written to operate on these data structures. As a software
> engineer, this is sad to see; really one nonlinear CG solver should be
> reusable enough that it can be dropped in and used. I tried writing a
> sufficiently-generic CG solver in C++, but designing the right family
> of types, particularly with memory-management issues in mind, got out
> of hand. Given the cost function is the expensive part of the sorts of
> problems I am interested in, the overhead of Python is tiny, and SciPy
> has nice optimization libraries, which led me to consider
> generalizing SciPy functions.
>

I think I simply did not understand what you meant by scipy; for me, scipy means the whole package - and in that case, slicing and other advanced features of numpy certainly are useful. But it looks like you had in mind only the optimize package; in that context, Robert's answer is certainly better than mine.

>
> I agree with Robert's comment about non-Euclidean metrics, such as
> working with SO(3). Similarly, I have thought it would be interesting
> to optimize function spaces (as in finite-element solvers) in which it
> should be possible to add two functions over the same domain but with
> different discretizations, so that __add__ would implicitly remesh one
> or the other function rather than having __add__ require the same
> basis functions on the LHS and RHS so it can just add corresponding
> scalar coefficients.

That can definitely be useful, and not only for optimization. Optimization is not a field I can claim to know a lot about, but those more abstract arrays with overridable 'core' aspects certainly are useful in other problems as well. Taking an example nearer to my own field, a package called lastwave for wavelets has some neat tricks related to indexing, including automatic interpolation, which can be very useful for signal processing and is a bit similar to your example of 'remeshing', if I understand correctly.

cheers,

David

From josef.pktd at gmail.com Tue Jan 6 10:07:43 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 6 Jan 2009 10:07:43 -0500
Subject: [SciPy-dev] Is scipy.stci an orphan?
Message-ID: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com>

I was poking around in the source tree and was curious what scipy.stci is. But there is no basic info available

http://docs.scipy.org/scipy/docs/scipy.stsci/
is empty

The setup.py describes it as "image array convolution functions"

Is there any relationship to ndimage? Or is this orphaned code?

Josef

From chanley at stsci.edu Tue Jan 6 10:11:29 2009
From: chanley at stsci.edu (Christopher Hanley)
Date: Tue, 06 Jan 2009 10:11:29 -0500
Subject: [SciPy-dev] Is scipy.stci an orphan?
In-Reply-To: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> Message-ID: <496374A1.3040000@stsci.edu> josef.pktd at gmail.com wrote: > I was poking around in the source tree and was curious what scipy.stci > is. But there is no basic info available > > http://docs.scipy.org/scipy/docs/scipy.stsci/ > is empty > > The setup.py describes it as "image array convolution functions" > > Is there any relationship to ndimage? Or is this orphaned code? > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev It is not orphaned code. It was distributed with numarray but moved to scipy when we moved to numpy. It does not have the same capabilities as ndimage. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From pearu at cens.ioc.ee Tue Jan 6 10:52:16 2009 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 6 Jan 2009 17:52:16 +0200 (EET) Subject: [SciPy-dev] Stopping F2Py from generating verbose output In-Reply-To: <9457e7c80901060522w594be284hb89c201bd9d85c59@mail.gmail.com> References: <9457e7c80901041403i7de7e67dk4455922625d9e53f@mail.gmail.com> <39446.172.17.0.4.1231242019.squirrel@cens.ioc.ee> <9457e7c80901060522w594be284hb89c201bd9d85c59@mail.gmail.com> Message-ID: <56220.172.17.0.4.1231257136.squirrel@cens.ioc.ee> On Tue, January 6, 2009 3:22 pm, St?fan van der Walt wrote: > 2009/1/6 Pearu Peterson : >> These messages are generated by f2py generated extension modules >> and there is no switch implemented to disable these messages >> because they are part of the exception. Sometimes the error message >> is not sufficient to determine the cause of the exception and >> printing out the actual values of the arguments is useful. >> IMHO, a proper solution would be to include the printed messages >> to the exception value instead of just removing them. > > That sounds like a better plan. The reason I wanted to remove them is > because they pop up everywhere during unit testing. Would you like to > have a look at it? Hmm, the messages should pop up only when there is a failure. If all tests pass then the messages are shown by the tests checking the failures. I'll take a look what I can do quickly in f2py if this is the case. Pearu From david at ar.media.kyoto-u.ac.jp Tue Jan 6 10:51:42 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 07 Jan 2009 00:51:42 +0900 Subject: [SciPy-dev] Stopping F2Py from generating verbose output In-Reply-To: <56220.172.17.0.4.1231257136.squirrel@cens.ioc.ee> References: <9457e7c80901041403i7de7e67dk4455922625d9e53f@mail.gmail.com> <39446.172.17.0.4.1231242019.squirrel@cens.ioc.ee> <9457e7c80901060522w594be284hb89c201bd9d85c59@mail.gmail.com> <56220.172.17.0.4.1231257136.squirrel@cens.ioc.ee> Message-ID: <49637E0E.6050707@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > > Hmm, the messages should pop up only when there is a failure. If all > tests pass then the messages are shown by the tests checking the failures. > That's exactly what's happening, David From josef.pktd at gmail.com Tue Jan 6 12:34:00 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 6 Jan 2009 12:34:00 -0500 Subject: [SciPy-dev] Is scipy.stci an orphan? 
In-Reply-To: <496374A1.3040000@stsci.edu> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> Message-ID: <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> On Tue, Jan 6, 2009 at 10:11 AM, Christopher Hanley wrote: > josef.pktd at gmail.com wrote: >> I was poking around in the source tree and was curious what scipy.stci >> is. But there is no basic info available >> >> http://docs.scipy.org/scipy/docs/scipy.stsci/ >> is empty >> >> The setup.py describes it as "image array convolution functions" >> >> Is there any relationship to ndimage? Or is this orphaned code? >> >> Josef >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > It is not orphaned code. It was distributed with numarray but moved to > scipy when we moved to numpy. > > It does not have the same capabilities as ndimage. > > Chris > > > -- > Christopher Hanley > Senior Systems Software Engineer > Space Telescope Science Institute > 3700 San Martin Drive > Baltimore MD, 21218 > (410) 338-4338 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > To expose it to the public, I tried to add scipy.stsci to the docs at http://docs.scipy.org/scipy/docs/scipy-docs/stsci.rst/ but the links are not working yet. Someone familiar with it, might want to check the information and maybe add some explanation. Josef Josef From chanley at stsci.edu Tue Jan 6 12:37:04 2009 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Jan 2009 12:37:04 -0500 Subject: [SciPy-dev] Is scipy.stci an orphan? In-Reply-To: <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> Message-ID: <496396C0.7080102@stsci.edu> josef.pktd at gmail.com wrote: > On Tue, Jan 6, 2009 at 10:11 AM, Christopher Hanley wrote: >> josef.pktd at gmail.com wrote: >>> I was poking around in the source tree and was curious what scipy.stci >>> is. But there is no basic info available >>> >>> http://docs.scipy.org/scipy/docs/scipy.stsci/ >>> is empty >>> >>> The setup.py describes it as "image array convolution functions" >>> >>> Is there any relationship to ndimage? Or is this orphaned code? >>> >>> Josef >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >> It is not orphaned code. It was distributed with numarray but moved to >> scipy when we moved to numpy. >> >> It does not have the same capabilities as ndimage. >> >> Chris >> >> >> -- >> Christopher Hanley >> Senior Systems Software Engineer >> Space Telescope Science Institute >> 3700 San Martin Drive >> Baltimore MD, 21218 >> (410) 338-4338 >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > To expose it to the public, I tried to add scipy.stsci to the docs at > http://docs.scipy.org/scipy/docs/scipy-docs/stsci.rst/ but the links > are not working yet. > > Someone familiar with it, might want to check the information and > maybe add some explanation. 
> > Josef > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev OK. Thanks. I'll look into updating the information. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From josef.pktd at gmail.com Tue Jan 6 12:58:17 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 6 Jan 2009 12:58:17 -0500 Subject: [SciPy-dev] Is scipy.stci an orphan? In-Reply-To: <496396C0.7080102@stsci.edu> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> Message-ID: <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> On Tue, Jan 6, 2009 at 12:37 PM, Christopher Hanley wrote: > josef.pktd at gmail.com wrote: >> On Tue, Jan 6, 2009 at 10:11 AM, Christopher Hanley wrote: >>> josef.pktd at gmail.com wrote: >>>> I was poking around in the source tree and was curious what scipy.stci >>>> is. But there is no basic info available >>>> >>>> http://docs.scipy.org/scipy/docs/scipy.stsci/ >>>> is empty >>>> >>>> The setup.py describes it as "image array convolution functions" >>>> >>>> Is there any relationship to ndimage? Or is this orphaned code? >>>> >>>> Josef >>>> _______________________________________________ >>>> Scipy-dev mailing list >>>> Scipy-dev at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>> It is not orphaned code. It was distributed with numarray but moved to >>> scipy when we moved to numpy. >>> >>> It does not have the same capabilities as ndimage. >>> >>> Chris >>> >>> >>> -- >>> Christopher Hanley >>> Senior Systems Software Engineer >>> Space Telescope Science Institute >>> 3700 San Martin Drive >>> Baltimore MD, 21218 >>> (410) 338-4338 >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>> >> >> To expose it to the public, I tried to add scipy.stsci to the docs at >> http://docs.scipy.org/scipy/docs/scipy-docs/stsci.rst/ but the links >> are not working yet. >> >> Someone familiar with it, might want to check the information and >> maybe add some explanation. >> >> Josef >> >> Josef >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > OK. Thanks. I'll look into updating the information. > > Chris > > > -- > Christopher Hanley > Senior Systems Software Engineer > Space Telescope Science Institute > 3700 San Martin Drive > Baltimore MD, 21218 > (410) 338-4338 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > I think the imports are also not correctly exposed. * convolve and image are not exposed on the stsci module level (no __all__ anywhere_) and have to be imported individually. 
I don't know if this is on purpose, but it might confuse the doc generation * there is a `scipy.stsci.convolve.lineshape` that really looks orphaned, import doesn't work and it doesn't seem to be used, but I didn't check carefully * test coverage = 0 >>> stsci.convolve Traceback (most recent call last): File "", line 1, in stsci.convolve AttributeError: 'module' object has no attribute 'convolve' >>> from scipy.stsci import convolve >>> import scipy.stsci.convolve.lineshape as lsh Traceback (most recent call last): File "", line 1, in import scipy.stsci.convolve.lineshape as lsh File "\Programs\Python25\Lib\site-packages\scipy\stsci\convolve\lineshape.py", line 43, in from convolve._lineshape import * ImportError: No module named convolve._lineshape Josef From chanley at stsci.edu Tue Jan 6 13:08:25 2009 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Jan 2009 13:08:25 -0500 Subject: [SciPy-dev] Is scipy.stci an orphan? In-Reply-To: <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> Message-ID: <49639E19.30908@stsci.edu> josef.pktd at gmail.com wrote: > I think the imports are also not correctly exposed. > > * convolve and image are not exposed on the stsci module level (no > __all__ anywhere_) and have to be imported individually. I don't know > if this is on purpose, but it might confuse the doc generation > > * there is a `scipy.stsci.convolve.lineshape` that really looks > orphaned, import doesn't work and it doesn't seem to be used, but I > didn't check carefully > > * test coverage = 0 > >>>> stsci.convolve > Traceback (most recent call last): > File "", line 1, in > stsci.convolve > AttributeError: 'module' object has no attribute 'convolve' >>>> from scipy.stsci import convolve > > >>>> import scipy.stsci.convolve.lineshape as lsh > Traceback (most recent call last): > File "", line 1, in > import scipy.stsci.convolve.lineshape as lsh > File "\Programs\Python25\Lib\site-packages\scipy\stsci\convolve\lineshape.py", > line 43, in > from convolve._lineshape import * > ImportError: No module named convolve._lineshape > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev OK. I'll check the imports and see about making the stsci module level stuff more scipy friendly. There are doctests for image and convolve. I will look into exposing them. We normally package these as stand alone modules with our software distributions. We haven't yet wanted our users to need to install scipy. Obviously we need to make this more scipy friendly as well. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From cohen at lpta.in2p3.fr Tue Jan 6 13:13:14 2009 From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann) Date: Tue, 06 Jan 2009 19:13:14 +0100 Subject: [SciPy-dev] Is scipy.stci an orphan? 
In-Reply-To: <49639E19.30908@stsci.edu> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> <49639E19.30908@stsci.edu> Message-ID: <49639F3A.7080505@lpta.in2p3.fr> hi there, sorry for my pedestrian question, but if you distribute it in standalone somewhere else anyway, what is the argument to keep it in scipy? :) Johann Christopher Hanley wrote: > josef.pktd at gmail.com wrote: > >> I think the imports are also not correctly exposed. >> >> * convolve and image are not exposed on the stsci module level (no >> __all__ anywhere_) and have to be imported individually. I don't know >> if this is on purpose, but it might confuse the doc generation >> >> * there is a `scipy.stsci.convolve.lineshape` that really looks >> orphaned, import doesn't work and it doesn't seem to be used, but I >> didn't check carefully >> >> * test coverage = 0 >> >> >>>>> stsci.convolve >>>>> >> Traceback (most recent call last): >> File "", line 1, in >> stsci.convolve >> AttributeError: 'module' object has no attribute 'convolve' >> >>>>> from scipy.stsci import convolve >>>>> >> >>>>> import scipy.stsci.convolve.lineshape as lsh >>>>> >> Traceback (most recent call last): >> File "", line 1, in >> import scipy.stsci.convolve.lineshape as lsh >> File "\Programs\Python25\Lib\site-packages\scipy\stsci\convolve\lineshape.py", >> line 43, in >> from convolve._lineshape import * >> ImportError: No module named convolve._lineshape >> >> Josef >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > OK. I'll check the imports and see about making the stsci module level > stuff more scipy friendly. > > There are doctests for image and convolve. I will look into exposing them. > > We normally package these as stand alone modules with our software > distributions. We haven't yet wanted our users to need to install > scipy. Obviously we need to make this more scipy friendly as well. > > Chris > > > From chanley at stsci.edu Tue Jan 6 14:02:34 2009 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Jan 2009 14:02:34 -0500 Subject: [SciPy-dev] Is scipy.stci an orphan? In-Reply-To: <49639F3A.7080505@lpta.in2p3.fr> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> <49639E19.30908@stsci.edu> <49639F3A.7080505@lpta.in2p3.fr> Message-ID: <4963AACA.3060302@stsci.edu> I don't think it is for us (STScI) to decide if image and convolve should be part of scipy. I will leave that decision to the scipy community. I will say that image and convolve were once distributed with the numarray open source project. When that project was discontinued there was some desire within the community to keep the projects available to developers. Numpy was not an appropriate location given its desire to be minimalistic in size and scope. Scipy was the logical solution at the time. Perhaps that is not the case anymore. Although we happily accept patches and make our code freely available the stsci_python distribution is not as readily modified by external developers as a project like scipy. 
This is due to our need to support
a software production environment for the HST and JWST.

So that is some of the history that went into getting image and convolve
where they are now. I'll let others weigh the pros and cons of keeping
them in scipy and then delete them if desired.

Thanks,
Chris

Cohen-Tanugi Johann wrote:
> hi there, sorry for my pedestrian question,
> but if you distribute it in standalone somewhere else anyway, what is
> the argument to keep it in scipy?
> :)
> Johann
>
> Christopher Hanley wrote:
>> josef.pktd at gmail.com wrote:
>>
>>> I think the imports are also not correctly exposed.
>>>
>>> * convolve and image are not exposed on the stsci module level (no
>>> __all__ anywhere_) and have to be imported individually. I don't know
>>> if this is on purpose, but it might confuse the doc generation
>>>
>>> * there is a `scipy.stsci.convolve.lineshape` that really looks
>>> orphaned, import doesn't work and it doesn't seem to be used, but I
>>> didn't check carefully
>>>
>>> * test coverage = 0
>>>
>>>>>> stsci.convolve
>>> Traceback (most recent call last):
>>> File "", line 1, in
>>> stsci.convolve
>>> AttributeError: 'module' object has no attribute 'convolve'
>>>
>>>>>> from scipy.stsci import convolve
>>>
>>>>>> import scipy.stsci.convolve.lineshape as lsh
>>> Traceback (most recent call last):
>>> File "", line 1, in
>>> import scipy.stsci.convolve.lineshape as lsh
>>> File "\Programs\Python25\Lib\site-packages\scipy\stsci\convolve\lineshape.py",
>>> line 43, in
>>> from convolve._lineshape import *
>>> ImportError: No module named convolve._lineshape
>>>
>>> Josef
>>> _______________________________________________
>>> Scipy-dev mailing list
>>> Scipy-dev at scipy.org
>>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>>>
>> OK. I'll check the imports and see about making the stsci module level
>> stuff more scipy friendly.
>>
>> There are doctests for image and convolve. I will look into exposing them.
>>
>> We normally package these as stand alone modules with our software
>> distributions. We haven't yet wanted our users to need to install
>> scipy. Obviously we need to make this more scipy friendly as well.
>>
>> Chris
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

--
Christopher Hanley
Senior Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338

From josef.pktd at gmail.com Tue Jan 6 15:04:40 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 6 Jan 2009 15:04:40 -0500
Subject: [SciPy-dev] Is scipy.stci an orphan?
In-Reply-To: <4963AACA.3060302@stsci.edu>
References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> <49639E19.30908@stsci.edu> <49639F3A.7080505@lpta.in2p3.fr> <4963AACA.3060302@stsci.edu>
Message-ID: <1cd32cbb0901061204y2a007dan195aff3f1856a2fb@mail.gmail.com>

On Tue, Jan 6, 2009 at 2:02 PM, Christopher Hanley wrote:
> I don't think it is for us (STScI) to decide if image and convolve
> should be part of scipy. I will leave that decision to the scipy community.
>
> I will say that image and convolve were once distributed with the
> numarray open source project.
When that project was discontinued there > was some desire within the community to keep the projects available to > developers. Numpy was not an appropriate location given its desire to > be minimalistic in size and scope. Scipy was the logical solution at > the time. Perhaps that is not the case anymore. > > Although we happily accept patches and make our code freely available > the stsci_python distribution is not as readily modified by external > developers as a project like scipy. This is due to our need to support > a software production environment for the HST and JWST. > > So that is some of the history that went into getting image and convolve > where they are now. I'll let others way the pros and cons of keeping > them in scipy and then delete them if desired. > > Thanks, > Chris > > > Cohen-Tanugi Johann wrote: >> hi there, sorry for my pedestrian question, >> but if you distribute it in standalone somewhere else anyway, what is >> the argument to keep it in scipy? >> :) >> Johann >> >> Christopher Hanley wrote: >>> josef.pktd at gmail.com wrote: >>> >>>> I think the imports are also not correctly exposed. >>>> >>>> * convolve and image are not exposed on the stsci module level (no >>>> __all__ anywhere_) and have to be imported individually. I don't know >>>> if this is on purpose, but it might confuse the doc generation >>>> >>>> * there is a `scipy.stsci.convolve.lineshape` that really looks >>>> orphaned, import doesn't work and it doesn't seem to be used, but I >>>> didn't check carefully >>>> >>>> * test coverage = 0 >>>> >>>> >>>>>>> stsci.convolve >>>>>>> >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> stsci.convolve >>>> AttributeError: 'module' object has no attribute 'convolve' >>>> >>>>>>> from scipy.stsci import convolve >>>>>>> >>>> >>>>>>> import scipy.stsci.convolve.lineshape as lsh >>>>>>> >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> import scipy.stsci.convolve.lineshape as lsh >>>> File "\Programs\Python25\Lib\site-packages\scipy\stsci\convolve\lineshape.py", >>>> line 43, in >>>> from convolve._lineshape import * >>>> ImportError: No module named convolve._lineshape >>>> >>>> Josef >>>> _______________________________________________ >>>> Scipy-dev mailing list >>>> Scipy-dev at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>>> >>> OK. I'll check the imports and see about making the stsci module level >>> stuff more scipy friendly. >>> >>> There are doctests for image and convolve. I will look into exposing them. >>> >>> We normally package these as stand alone modules with our software >>> distributions. We haven't yet wanted our users to need to install >>> scipy. Obviously we need to make this more scipy friendly as well. >>> >>> Chris >>> >>> >>> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > > -- > Christopher Hanley > Senior Systems Software Engineer > Space Telescope Science Institute > 3700 San Martin Drive > Baltimore MD, 21218 > (410) 338-4338 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > I'm relying more and more on the new windows help files which make browsing and searching very easy. And when I see modules and functions in the source that are missing in the docs, I'm getting curious why they are not exposed to the public. 
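Just to make concrete what "exposed" would mean: I would have expected
the package __init__ to pull in the subpackages and declare them
public, along the lines of this untested sketch (convolve and image are
the names I found in the tree; whether they are meant to be the public
interface is exactly my question):

    # scipy/stsci/__init__.py -- untested sketch, not the current code
    from scipy.stsci import convolve, image

    __all__ = ['convolve', 'image']

With an __all__ like that in place, help(), np.lookfor and the doc
generation would all have something to work with.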
As a relatively new user of scipy, I'm still trying to find my way around and the representation of the modules in the docs and exposure to np.lookfor is useful to find out what is available. So, all I would like to see is that modules and functions are properly exposed, so that a "generic" user can find and make use of them. (Which, of course, makes it also more likely to expose any problems with the code, as I saw in stats) Josef From millman at berkeley.edu Tue Jan 6 15:08:56 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 6 Jan 2009 12:08:56 -0800 Subject: [SciPy-dev] Is scipy.stci an orphan? In-Reply-To: <1cd32cbb0901061204y2a007dan195aff3f1856a2fb@mail.gmail.com> References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> <49639E19.30908@stsci.edu> <49639F3A.7080505@lpta.in2p3.fr> <4963AACA.3060302@stsci.edu> <1cd32cbb0901061204y2a007dan195aff3f1856a2fb@mail.gmail.com> Message-ID: On Tue, Jan 6, 2009 at 12:04 PM, wrote: > I'm relying more and more on the new windows help files which make > browsing and searching very easy. And when I see modules and functions > in the source that are missing in the docs, I'm getting curious why > they are not exposed to the public. > > As a relatively new user of scipy, I'm still trying to find my way > around and the representation of the modules in the docs and exposure > to np.lookfor is useful to find out what is available. > > So, all I would like to see is that modules and functions are properly > exposed, so that a "generic" user can find and make use of them. > (Which, of course, makes it also more likely to expose any problems > with the code, as I saw in stats) I think that it would be great to make sure we are fully exposing everything we ship. As an aside, at some point we need to review the stsci code and decide whether it belongs in scipy. So far no one has had an opportunity to do that. If we decide we want to keep the functionality, we will need to decide where it belongs. The name 'stsci' isn't very intuitive and doesn't conform with the other sub-package names. Jarrod From chanley at stsci.edu Tue Jan 6 15:44:26 2009 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Jan 2009 15:44:26 -0500 Subject: [SciPy-dev] Is scipy.stci an orphan? In-Reply-To: References: <1cd32cbb0901060707j4eec5ce1t1114687d690e359c@mail.gmail.com> <496374A1.3040000@stsci.edu> <1cd32cbb0901060934h66a7676ctbc7de5885b4d0817@mail.gmail.com> <496396C0.7080102@stsci.edu> <1cd32cbb0901060958u19772396yec5cd58dd2a1b7c1@mail.gmail.com> <49639E19.30908@stsci.edu> <49639F3A.7080505@lpta.in2p3.fr> <4963AACA.3060302@stsci.edu> <1cd32cbb0901061204y2a007dan195aff3f1856a2fb@mail.gmail.com> Message-ID: <4963C2AA.2050902@stsci.edu> Jarrod Millman wrote: > On Tue, Jan 6, 2009 at 12:04 PM, wrote: >> I'm relying more and more on the new windows help files which make >> browsing and searching very easy. And when I see modules and functions >> in the source that are missing in the docs, I'm getting curious why >> they are not exposed to the public. >> >> As a relatively new user of scipy, I'm still trying to find my way >> around and the representation of the modules in the docs and exposure >> to np.lookfor is useful to find out what is available. 
>> >> So, all I would like to see is that modules and functions are properly >> exposed, so that a "generic" user can find and make use of them. >> (Which, of course, makes it also more likely to expose any problems >> with the code, as I saw in stats) > > I think that it would be great to make sure we are fully exposing > everything we ship. > > As an aside, at some point we need to review the stsci code and decide > whether it belongs in scipy. So far no one has had an opportunity to > do that. If we decide we want to keep the functionality, we will need > to decide where it belongs. The name 'stsci' isn't very intuitive and > doesn't conform with the other sub-package names. > > Jarrod > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev I can definitely look into better exposing the existing code and enabling the tests. There had earlier been talk of an imaging module that would encompass these packages in addition to ndimage but I don't think anything came of that discussion. I will await more community input before acting on any such changes. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From robert.kern at gmail.com Tue Jan 6 16:53:22 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 6 Jan 2009 15:53:22 -0600 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> Message-ID: <3d375d730901061353i4df32315na9d931b32c9f618a@mail.gmail.com> 2009/1/6 Ben FrantzDale : > David, > > Thanks for the reply. Let me describe the problem I've thought most about to > give you a sense of where I am coming from: > > The optimization problem I have worked most with is atomistic simulation > (particularly relaxation) in which you have millions of atoms all > interacting with a force function. The force function is sparse but as the > atoms move, the sparseness pattern changes (with sparseness structures that > can be updated automatically when the state vector is "moved"). Furthermore, > the atoms can be distributed via MPI, so while they happen to be a size-3n > array of doubles locally, local atom 1,234 at one moment may not be the same > atom as local atom 1,234 at the next step. (I don't know the details of > PyMPI or how garbage collection might interact with MPI, but using SciPy for > massively-parallel optimization is another motiviation for this post.) > > A common solver to use to relax the state of this sort of system is > nonlinear CG. However, I've seen many separate implementations of CG > hand-written to operate on these data structures. As a software engineer, > this is sad to see; really one nonlinear CG solver should be reusable enough > that it can be dropped in and used. I tried writing a sufficiently-generic > CG solver in C++, but designing the right family of types, particularly with > memory-management issues in mind, got out of hand. Given the cost function > is the expensive part of the sorts of problems I am interested in, the > overhead of Python is tiny, and SciPy has nice optimization libraries, which > lead me to consider generalizing SciPy functions. 
The solvers in scipy.sparse.linalg are already black-boxed enough for this. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Jan 6 17:14:10 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 6 Jan 2009 16:14:10 -0600 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <3d375d730901061353i4df32315na9d931b32c9f618a@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <3d375d730901061353i4df32315na9d931b32c9f618a@mail.gmail.com> Message-ID: <3d375d730901061414o2ede5e02oa8238fc930c08a39@mail.gmail.com> On Tue, Jan 6, 2009 at 15:53, Robert Kern wrote: > The solvers in scipy.sparse.linalg are already black-boxed enough for this. I'm sorry. That was dumb. Ignore me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From doutriaux1 at llnl.gov Tue Jan 6 18:39:47 2009 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Tue, 6 Jan 2009 15:39:47 -0800 Subject: [SciPy-dev] release? Message-ID: Hi there, I was wondering if 0.7.0 was going to be released soon? We're about to do a release ourselves and I would like to pack in the latest scipy. Thanks, C. From millman at berkeley.edu Tue Jan 6 20:23:21 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 6 Jan 2009 17:23:21 -0800 Subject: [SciPy-dev] release? In-Reply-To: References: Message-ID: On Tue, Jan 6, 2009 at 3:39 PM, Charles ???? Doutriaux wrote: > I was wondering if 0.7.0 was going to be released soon? It is imminent. I was planning on tagging the rc1 earlier today, but there are a couple of test failures that I am looking into. I will send more details later. Jarrod From millman at berkeley.edu Tue Jan 6 20:50:57 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 6 Jan 2009 17:50:57 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch Message-ID: I have been meaning to look into this today, but haven't had enough time. So I thought I would just send it to the list, in case some else can get to this before me. I am seeing the following errors: ====================================================================== ERROR: Tests Obrien transform ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/stats/tests/test_mstats_basic.py", line 469, in test_obrientransform assert_almost_equal(np.round(mstats.obrientransform(*args).T,4), File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/stats/mstats_basic.py", line 1732, in obrientransform raise ValueError("Lack of convergence in obrientransform.") ValueError: Lack of convergence in obrientransform. 
====================================================================== ERROR: no_test_no_check_return (test_wx_spec.TestWxConverter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jarrod/usr/local/lib/python2.5/site-packages/numpy/testing/decorators.py", line 82, in skipper return f(*args, **kwargs) File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py", line 114, in no_test_no_check_return mod.compile() File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/build_tools.py", line 272, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/jarrod/usr/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/usr/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -I/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave -I/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/scxx -I/home/jarrod/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c /r/d0/jarrod/src/scipy-projects/wx_return.cpp -o /tmp/jarrod/python25_intermediate/compiler_03f32eef5dd808ceec230e3ab4055356/r/d0/jarrod/src/scipy-projects/wx_return.o" failed with exit status 1 ====================================================================== ERROR: test_var_in (test_wx_spec.TestWxConverter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jarrod/usr/local/lib/python2.5/site-packages/numpy/testing/decorators.py", line 82, in skipper return f(*args, **kwargs) File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py", line 73, in test_var_in mod.compile() File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/build_tools.py", line 272, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/jarrod/usr/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/usr/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -I/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave -I/home/jarrod/usr/local/lib/python2.5/site-packages/scipy/weave/scxx -I/usr/include/gtk-1.2 -I/usr/include/glib-1.2 -I/usr/lib/glib/include -I/home/jarrod/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c /r/d0/jarrod/src/scipy-projects/wx_var_in.cpp -o /tmp/jarrod/python25_intermediate/compiler_03f32eef5dd808ceec230e3ab4055356/r/d0/jarrod/src/scipy-projects/wx_var_in.o 
-I/usr/lib/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread -I/usr/lib/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread" failed with exit status 1 The last two have been around for awhile (I don't think they were being run until the switch to nose). They don't bother me too much, since they are related to wx support for weave and appear to have been broken for some time. The first error seems a little more serious. Does anyone have any idea how to fix them? If not, should we just mark them as known errors and go ahead and make the release? Thoughts? Thanks, Jarrod From benfrantzdale at gmail.com Tue Jan 6 22:26:45 2009 From: benfrantzdale at gmail.com (Ben FrantzDale) Date: Tue, 6 Jan 2009 22:26:45 -0500 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <49635C67.8000307@ar.media.kyoto-u.ac.jp> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <49635C67.8000307@ar.media.kyoto-u.ac.jp> Message-ID: <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> David, Robert, et al. I think you understand what I am talking about. This is just about optimization (although in general I would argue that functions should take as general an interface as possible for the same reasons). Attached is all the code I changed to make fmin_cg work with a SimpleHilbertVector class. If people are receptive to this approach, I have a few next questions: 1. What is the right interface to provide? Is the interface I described the right (Pythonic) way to do it, or is there a better interface? (If it were Templated C++, I'd do it with overloaded free functions, but I think it needs to/should be done with class methods.) 2. Which functions should be changed? (fmin_cg, clearly; what else? The full list in optimize.py is fmin, fmin_powell, fmin_bfgs, fmin_ncg, fmin_cg, and fminbound.) 3. How would one deal with numpy arrays? Is it best to wrap them in the interface (as shown in the attached code) or is it better to add methods to them? If the right way is to wrap them, should all the optimization functions have that as a first step -- check for numpy arrays and wrap as needed? Given an appropriate interface, the changes to optimize.py are essentially search-and-replace and I would be happy to do it. Does Travis Oliphant (whose name is on optimize.py) read this list? ?Ben On Tue, Jan 6, 2009 at 8:28 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Ben FrantzDale wrote: > > David, > > > > Thanks for the reply. Let me describe the problem I've thought most > > about to give you a sense of where I am coming from: > > > > The optimization problem I have worked most with is atomistic > > simulation (particularly relaxation) in which you have millions of > > atoms all interacting with a force function. The force function is > > sparse but as the atoms move, the sparseness pattern changes (with > > sparseness structures that can be updated automatically when the state > > vector is "moved"). Furthermore, the atoms can be distributed via MPI, > > so while they happen to be a size-3n array of doubles locally, local > > atom 1,234 at one moment may not be the same atom as local atom 1,234 > > at the next step. 
(I don't know the details of PyMPI or how garbage > > collection might interact with MPI, but using SciPy for > > massively-parallel optimization is another motiviation for this post.) > > > > A common solver to use to relax the state of this sort of system is > > nonlinear CG. However, I've seen many separate implementations of CG > > hand-written to operate on these data structures. As a software > > engineer, this is sad to see; really one nonlinear CG solver should be > > reusable enough that it can be dropped in and used. I tried writing a > > sufficiently-generic CG solver in C++, but designing the right family > > of types, particularly with memory-management issues in mind, got out > > of hand. Given the cost function is the expensive part of the sorts of > > problems I am interested in, the overhead of Python is tiny, and SciPy > > has nice optimization libraries, which lead me to consider > > generalizing SciPy functions. > > > > I think I simply did not understand what you meant by scipy; for me, > scipy means the whole package - and in that case, slicing and other > advanced features of numpy certainly are useful. But it looks like you > had in mind only the optimize package; in that context, Robert's answer > is certainly better than mine. > > > > > > > I agree with Robert's comment about non-Euclidean metrics, such as > > working with SO(3). Similarly, I have thought it would be interesting > > to optimize function spaces (as in finite-element solvers) in which it > > should be possible to add two functions over the same domain but with > > different discretizations, so that __add__ would implicitly remesh one > > or the other functions rather than having __add__ require the same > > basis functions on the LHS and RHS so it can just add corresponding > > scalar coefficients. > > That can definitely be useful, and not only for optimization. > Optimization is not a field I can claim to know a lot about, but those > more abstract arrays with overridable 'core' aspects certainly is useful > in other problems as well. Taking an example nearer to my own field, a > package called lastwave for wavelets has some neat tricks related to > indexing, including automatic interpolation which can be very useful for > signal processing, and is a bit similar to your example of 'remeshing' > if I understand correctly. > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Hilbert_cg_example.py Type: text/x-python Size: 17465 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testit.py Type: text/x-python Size: 726 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Tue Jan 6 22:25:04 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 07 Jan 2009 12:25:04 +0900 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: References: Message-ID: <49642090.2090202@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > The last two have been around for awhile (I don't think they were > being run until the switch to nose). They don't bother me too much, > since they are related to wx support for weave and appear to have been > broken for some time. The first error seems a little more serious. 
> Does anyone have any idea how to fix them? If not, should we just > mark them as known errors and go ahead and make the release? > I would say just branch for the RC anyway, without tagging, and we'll fix the tests in the branch. Otherwise, I am afraid we will keep in the loop of 'just make this one change which changes nothing - break the trunk again - fix it - step 1'. FWIW, I have not seen the wx-related on any of my machines, David From robert.kern at gmail.com Tue Jan 6 22:56:51 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 6 Jan 2009 21:56:51 -0600 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <49635C67.8000307@ar.media.kyoto-u.ac.jp> <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> Message-ID: <3d375d730901061956t2331629ao45b32df9313b052b@mail.gmail.com> 2009/1/6 Ben FrantzDale : > David, Robert, et al. > > I think you understand what I am talking about. This is just about > optimization (although in general I would argue that functions should take > as general an interface as possible for the same reasons). I'm not sure I would push it that far. Premature generalization can be just as bad as premature optimization. I find that a use case-based approach is a better principle. Write the code for the problem in front of you. Generalize when you have a use case for doing so. Now, we do have a handful of use cases here, but I think we should have some care before modifying the functions in place. Generalization could cost us performance in the typical case. > Attached is all the code I changed to make fmin_cg work with a > SimpleHilbertVector class. Hmm, can you show us a .diff of the changes that you made? I'd like to see more clearly exactly what you changed. > If people are receptive to this approach, I have a few next questions: > 1. What is the right interface to provide? Is the interface I described the > right (Pythonic) way to do it, or is there a better interface? (If it were > Templated C++, I'd do it with overloaded free functions, but I think it > needs to/should be done with class methods.) I still don't think this interface supports the manifold-optimization use case. The inner product and norm implementations need more information than just the vectors. > 2. Which functions should be changed? (fmin_cg, clearly; what else? The full > list in optimize.py is fmin, fmin_powell, fmin_bfgs, fmin_ncg, fmin_cg, and > fminbound.) > 3. How would one deal with numpy arrays? Is it best to wrap them in the > interface (as shown in the attached code) or is it better to add methods to > them? We're not going to add methods to them. > If the right way is to wrap them, should all the optimization > functions have that as a first step -- check for numpy arrays and wrap as > needed? Possibly, but that could have a significant performance hit that will need to be tested. > Given an appropriate interface, the changes to optimize.py are essentially > search-and-replace and I would be happy to do it. > > Does Travis Oliphant (whose name is on optimize.py) read this list? Yeah. He's pretty busy this week, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From josef.pktd at gmail.com Tue Jan 6 23:29:36 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 6 Jan 2009 23:29:36 -0500
Subject: [SciPy-dev] test errors blocking 0.7.x branch
In-Reply-To: <49642090.2090202@ar.media.kyoto-u.ac.jp>
References: <49642090.2090202@ar.media.kyoto-u.ac.jp>
Message-ID: <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com>

I think the mstats obrien transform error is numpy related.

Nils reported it on 12/22, and I looked briefly into it.
He had the error with
>>> numpy.__version__
'1.3.0.dev6188'

I am using (on Windows XP)
>>> np.version.version
'1.3.0.dev6139'
and never got this error.

I also didn't see any changes in scipy that would have caused this
error to appear. So, my prime suspect would be changes in numpy for
masked arrays, which I don't follow.

I also wouldn't hold up the release process for this.

Josef

From josef.pktd at gmail.com Wed Jan 7 00:17:05 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 7 Jan 2009 00:17:05 -0500
Subject: [SciPy-dev] test errors blocking 0.7.x branch
In-Reply-To: <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com>
References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com>
Message-ID: <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com>

But, if possible, I would like one additional test in weave to be
skipped on windows with the mingw compiler:

scipy\weave\tests\test_ext_tools.py:test_with_include

Printing to stdout with

std::cout << std::endl;
std::cout << "test printing a value:" << a << std::endl;

crashes python (segfault). (I never filed a ticket.) The test is only
run with test('full'), which you recommend in the release notes.

Without the segfaulting test, I get 2 known failures and 7 skips in
weave. For the wx tests in weave I get the 7 skips:

(test_wx_spec.TestWxConverter) ... SKIP: (error was Could not locate
wxPython base directory.)

Josef

From millman at berkeley.edu Wed Jan 7 00:30:16 2009
From: millman at berkeley.edu (Jarrod Millman)
Date: Tue, 6 Jan 2009 21:30:16 -0800
Subject: [SciPy-dev] test errors blocking 0.7.x branch
In-Reply-To: <49642090.2090202@ar.media.kyoto-u.ac.jp>
References: <49642090.2090202@ar.media.kyoto-u.ac.jp>
Message-ID:

On Tue, Jan 6, 2009 at 7:25 PM, David Cournapeau wrote:
> I would say just branch for the RC anyway, without tagging, and we'll
> fix the tests in the branch. Otherwise, I am afraid we will keep in the
> loop of 'just make this one change which changes nothing - break the
> trunk again - fix it - step 1'.

OK, I went ahead and branched.

> FWIW, I have not seen the wx-related on any of my machines,

Sorry for asking, but can you verify that the tests aren't being
skipped on your machine?

Thanks,
Jarrod

From millman at berkeley.edu Wed Jan 7 00:44:22 2009
From: millman at berkeley.edu (Jarrod Millman)
Date: Tue, 6 Jan 2009 21:44:22 -0800
Subject: [SciPy-dev] trunk open for 0.8 development
Message-ID:

I created a 0.7.x branch and the trunk is now open for 0.8
development. There are a few small issues that need to be resolved
before tagging the 0.7.0rc1, but those should be resolved shortly. The
first release candidate will be out in a day or two with 0.7.0 to
hopefully follow quickly afterward.
Cheers, Jarrod From millman at berkeley.edu Wed Jan 7 00:59:12 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 6 Jan 2009 21:59:12 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> Message-ID: On Tue, Jan 6, 2009 at 8:29 PM, wrote: > I think the mstats obrien transform error is numpy related, > > Nils reported it on 12/22, and I looked briefly into it. > He had the error with >>>> numpy.__version__ > '1.3.0.dev6188' > > I am using (on windowXP) >>>> np.version.version > '1.3.0.dev6139' > and never got this error Hmm. I am still getting the same error with the numpy 1.2.1 release (on Fedora Linux). From millman at berkeley.edu Wed Jan 7 01:22:59 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 6 Jan 2009 22:22:59 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> Message-ID: On Tue, Jan 6, 2009 at 9:17 PM, wrote: > But,if possible, I would like one additional test in weave to be > skipped on windows with mingw compiler > > scipy\weave\tests\test_ext_tools.py:test_with_include I made the test a known failure on the branch: http://projects.scipy.org/scipy/scipy/changeset/5340 From matthieu.brucher at gmail.com Wed Jan 7 03:01:31 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 7 Jan 2009 09:01:31 +0100 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> Message-ID: Hi Jarrod, It won't be enough, the segfault will always be there unless the test is not executed or modified so that nothing is displayed. Matthieu 2009/1/7 Jarrod Millman : > On Tue, Jan 6, 2009 at 9:17 PM, wrote: >> But,if possible, I would like one additional test in weave to be >> skipped on windows with mingw compiler >> >> scipy\weave\tests\test_ext_tools.py:test_with_include > > I made the test a known failure on the branch: > http://projects.scipy.org/scipy/scipy/changeset/5340 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From eads at soe.ucsc.edu Wed Jan 7 03:11:11 2009 From: eads at soe.ucsc.edu (Damian Eads) Date: Wed, 7 Jan 2009 03:11:11 -0500 Subject: [SciPy-dev] trunk open for 0.8 development In-Reply-To: References: Message-ID: <91b4b1ab0901070011s188061ebi88745c26da6df8a0@mail.gmail.com> Jarrod, Thanks so much to everyone for their hard work toward the 0.7 release. SciPy has matured significantly as a result. As Jarrod pointed out, there are twice the number of tests, better documentation, hundreds of bug fixes, and many new features, as well as David's new build infrastructure. This is a big milestone, and I look forward to helping out with the 0.8 development effort. 
From the penn turnpike @ 3 am,
Damian

On 1/7/09, Jarrod Millman wrote:
> I created a 0.7.x branch and the trunk is now open for 0.8
> development. There are a few small issues that need to be resolved before
> tagging the 0.7.0rc1, but those should be resolved shortly. The first
> release candidate will be out in a day or two with 0.7.0 to hopefully
> follow quickly afterward.
>
> Cheers,
> Jarrod
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

--
Sent from my mobile device

-----------------------------------------------------
Damian Eads
Ph.D. Student
Jack Baskin School of Engineering, UCSC E2-489
1156 High Street, Machine Learning Lab
Santa Cruz, CA 95064
http://www.soe.ucsc.edu/~eads

From david at ar.media.kyoto-u.ac.jp Wed Jan 7 03:40:26 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 07 Jan 2009 17:40:26 +0900
Subject: [SciPy-dev] test errors blocking 0.7.x branch
In-Reply-To:
References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com>
Message-ID: <49646A7A.6070704@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
> Hi Jarrod,
>
> It won't be enough, the segfault will always be there unless the test
> is not executed or modified so that nothing is displayed.
>
Do you know what causes the problem ? It is some bad interaction between
the MS C++ runtime and mingw for cout ?

cheers,

David

From matthieu.brucher at gmail.com Wed Jan 7 03:59:27 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 7 Jan 2009 09:59:27 +0100
Subject: [SciPy-dev] test errors blocking 0.7.x branch
In-Reply-To: <49646A7A.6070704@ar.media.kyoto-u.ac.jp>
References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> <49646A7A.6070704@ar.media.kyoto-u.ac.jp>
Message-ID:

2009/1/7 David Cournapeau :
> Matthieu Brucher wrote:
>> Hi Jarrod,
>>
>> It won't be enough, the segfault will always be there unless the test
>> is not executed or modified so that nothing is displayed.
>
> Do you know what causes the problem ? It is some bad interaction between
> the MS C++ runtime and mingw for cout ?

No, I don't know (I never use mingw), but I think there was much
discussion here on the interactions between mingw and the C++ runtime
causing segfaults, wasn't there ?

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From stefan at sun.ac.za Wed Jan 7 04:16:49 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 7 Jan 2009 11:16:49 +0200
Subject: [SciPy-dev] Recipe: distribution of non-linear transformation of continuous variables
In-Reply-To: <1cd32cbb0810290542y46652eb0xa96293d3d3f2b81@mail.gmail.com>
References: <1cd32cbb0810282149y2893d90asca5dac38057c417b@mail.gmail.com> <9457e7c80810290043hbbead4bm4389457b2d0cfd9c@mail.gmail.com> <1cd32cbb0810290542y46652eb0xa96293d3d3f2b81@mail.gmail.com>
Message-ID: <9457e7c80901070116s154b7c5bufb1749ce94edaf4@mail.gmail.com>

Transforming distributions comes in handy. Robert, do you think this
would be a useful addition to scipy.stats?

Stéfan

2008/10/29 :
> Here is the script file as an attachment.
>
> Josef

From opossumnano at gmail.com Wed Jan 7 04:48:14 2009
From: opossumnano at gmail.com (Tiziano Zito)
Date: Wed, 7 Jan 2009 10:48:14 +0100
Subject: [SciPy-dev] Single precision FFT
In-Reply-To: <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com>
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp> <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com>
Message-ID: <20090107094813.GA19351@localhost>

> >> Has there been any progress on the single precision FFT front?
> > For anyone moderately familiar with f2py and numpy, it would not take
> > much time, since it would mostly be a copy and paste of the existing
> > code for the low-level wrapper, and a bit careful work to avoid breaking
> > any public API for the high-level wrapper,
>
> I think this would make a very good GSOC project. I heard that some
> students were interested in working on Scipy.

If this is going to be scheduled for the 0.8 release, I think I can
try to add the single precision fft wrappers in the next months. I'll
report here if things get more complicated than expected!

ciao,
tiziano

From stefan at sun.ac.za Wed Jan 7 09:15:23 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 7 Jan 2009 16:15:23 +0200
Subject: [SciPy-dev] Bug in scipy.weave.catalog.default_dir()
In-Reply-To: <4dacb2560810081435v3a161811wa79a1b97fbd41edd@mail.gmail.com>
References: <4dacb2560810081435v3a161811wa79a1b97fbd41edd@mail.gmail.com>
Message-ID: <9457e7c80901070615y5665ca69h6bed2f4171157d1@mail.gmail.com>

Hi Joseph

2008/10/8 Joseph Turian :
> I am using scipy.weave in an environment in which my home directory is
> not accessible.
> However, scipy.weave attempts to write to $HOME/.python25_compiled,
> even though I set PYTHONCOMPILED.
>
> Looking a little deeper, I see that scipy.weave.catalog.default_dir()
> is implemented incorrectly.
> The documentation says:
> "If for some reason it isn't possible to build a default directory
> in the user's home, /tmp/_pythonXX_compiled is used."
>
> What actually happens is that if $HOME is defined,
> $HOME/.python25_compiled is used and no exception is caught if writing
> fails.
> I would prefer that, as per the documentation, if
> $HOME/.python25_compiled is not writeable then the exception is caught
> and /tmp/_pythonXX_compiled is used instead.

This should now be fixed.

Regards
Stéfan

From sturla at molden.no Wed Jan 7 09:58:41 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 07 Jan 2009 15:58:41 +0100
Subject: [SciPy-dev] parallelizing cKDTRee
Message-ID: <4964C321.6090504@molden.no>

Speed is very important when searching kd-trees; otherwise we should not
be using kd-trees but brute force. Thus exploiting multiple processors
is important as well.

1. Multiprocessing:
Must add support for pickling and unpickling to cKDTree (i.e. __reduce__
and __setstate__ methods). This would be useful for saving to disk as well.

2. Multithreading (Python):
cKDTree.query calls cKDTree.__query with the GIL released (i.e. a 'with
nogil:' block). I think this will be safe.

3. Multithreading (Cython):
We could simply call cKDTree.__query in parallel using OpenMP pragmas.
It would be a simple and quite portable hack.

Which do you prefer? All three?

(Forgive me for cross-posting.
I did not know which list is the more appropriate.) Regards, Sturla Molden From rmay31 at gmail.com Wed Jan 7 10:07:04 2009 From: rmay31 at gmail.com (Ryan May) Date: Wed, 07 Jan 2009 09:07:04 -0600 Subject: [SciPy-dev] Bug in scipy.weave.catalog.default_dir() In-Reply-To: <9457e7c80901070615y5665ca69h6bed2f4171157d1@mail.gmail.com> References: <4dacb2560810081435v3a161811wa79a1b97fbd41edd@mail.gmail.com> <9457e7c80901070615y5665ca69h6bed2f4171157d1@mail.gmail.com> Message-ID: <4964C518.9060401@gmail.com> St?fan van der Walt wrote: > Hi Joseph > > 2008/10/8 Joseph Turian : >> I am using scipy.weave in an environment in which my home directory is >> not accessable. >> However, scipy.weave attempts to write to $HOME/.python25_compiled, >> even though I set PYTHONCOMPILED. >> >> Looking a little deeper, I see that scipy.weave.catalog.default_dir() >> is implemented incorrectly. >> The documentation says: >> "If for some reason it isn't possible to build a default directory >> in the user's home, /tmp/_pythonXX_compiled is used." >> >> What actually happens is that if $HOME is defined, >> $HOME/.python25_compiled is used and no exception is caught if writing >> fails. >> I would prefer that, as per the documentation, if >> $HOME/.python25_compiled is not writeable then the exception is caught >> and /tmp/_pythonXX_compiled is used instead. > > This should now be fixed. Should this change also be done on the 0.7.x branch? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From peridot.faceted at gmail.com Wed Jan 7 10:55:38 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 7 Jan 2009 10:55:38 -0500 Subject: [SciPy-dev] parallelizing cKDTRee In-Reply-To: <4964C321.6090504@molden.no> References: <4964C321.6090504@molden.no> Message-ID: 2009/1/7 Sturla Molden : > Speed is very important when searching kd-trees; otherwise we should not > be using kd-trees but brute force. Thus exploiting multiple processors > are important as well. Very sensible. > 1. Multiprocessing: > Must add support for pickling and unpickling to cKDTree (i.e. __reduce__ > and __setstate__ methods). This would be useful for saving to disk as well. This would be a good idea. One labor-saving device would be to take advantage of the fact that construction is pretty cheap and have the pickle method pickle only the construction parameters and the underlying array. Not that pickling the tree structure would be very hard either. One could also implement conversion beween C and python kdtrees, which would make it easier to modify existing kdtrees or implement new algorithms. > 2. Multithreading (Python): > cKDTree.query calls cKDTree.__query with the GIL released (i.e. a 'with > nogil:' block). I think this will be safe. This looks very easy and should pose no problems. > 3. Multithreading (Cython): > We could simply call cKDTree.__query in parallel using OpenMP pragmas. > It would be a simple and quite portable hack. Are there compilation issues with using OpenMP? Otherwise this should be fairly easy too, although selecting the right degree of parallelism may be a problem (I have no experience with OpenMP). > Which do you prefer? All three? I think all three are good ideas; I'd start with the low-hanging fruit, which is just releasing the GIL. Then pickling, which is useful for other purposes, and then OpenMP if it's useful. I won't have time to do anything about this for a week or two. 
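Concretely, the construction-parameter pickling could be as small as
this (an untested sketch; I'm assuming the tree keeps its data array
and leafsize around as readable attributes, which may first need to be
added to ckdtree.pyx):

    import numpy as np
    from scipy.spatial import cKDTree

    class PicklableCKDTree(cKDTree):
        def __reduce__(self):
            # Pickle only the construction parameters; the tree
            # structure is rebuilt from scratch on unpickling,
            # which is cheap compared to the queries we care about.
            return (self.__class__,
                    (np.asarray(self.data), self.leafsize))

Unpickling then just reruns the constructor in the worker process.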
Anne

From david at ar.media.kyoto-u.ac.jp Wed Jan 7 10:45:23 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 08 Jan 2009 00:45:23 +0900
Subject: [SciPy-dev] Single precision FFT
In-Reply-To: <20090107094813.GA19351@localhost>
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp> <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com> <20090107094813.GA19351@localhost>
Message-ID: <4964CE13.50306@ar.media.kyoto-u.ac.jp>

Tiziano Zito wrote:
>>>> Has there been any progress on the single precision FFT front?
>>>>
>>> For anyone moderately familiar with f2py and numpy, it would not take
>>> much time, since it would mostly be a copy and paste of the existing
>>> code for the low-level wrapper, and a bit careful work to avoid breaking
>>> any public API for the high-level wrapper,
>>>
>> I think this would make a very good GSOC project. I heard that some
>> students were interested in working on Scipy.
>>
>
> If this is going to be scheduled for the 0.8 release, I think I can
> try to add the single precision fft wrappers in the next months.
>

I got bored during a meeting: single precision is now implemented in
scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real
input -> hermitian output for rfft). It should be transparent, e.g.:

from scipy.fftpack import fft, rfft
import numpy as np

fft(np.random.randn(10)).dtype == np.cdouble
fft(np.random.randn(10).astype(np.complex64)).dtype == np.complex64

Note that not everything works yet - float32 input to fft does not work
yet, for example (it is upcasted to double precision, as before). It is
not super-tested either (not less than the double versions, though).

cheers,

David

From sturla at molden.no Wed Jan 07 12:39:11 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 07 Jan 2009 18:39:11 +0100
Subject: [SciPy-dev] parallelizing cKDTRee
In-Reply-To:
References: <4964C321.6090504@molden.no>
Message-ID: <4964E8BF.7060700@molden.no>

On 1/7/2009 4:55 PM, Anne Archibald wrote:
> This would be a good idea. One labor-saving device would be to take
> advantage of the fact that construction is pretty cheap and have the
> pickle method pickle only the construction parameters and the
> underlying array. Not that pickling the tree structure would be very
> hard either.

Yes. I have an example of that in the Cookbook.

I'm actually looking at adding __reduce__ and __setstate__ now. I'll
post the code tomorrow or on Friday.

>> 3. Multithreading (Cython):
>> We could simply call cKDTree.__query in parallel using OpenMP pragmas.
>> It would be a simple and quite portable hack.
>
> Are there compilation issues with using OpenMP? Otherwise this should
> be fairly easy too, although selecting the right degree of parallelism
> may be a problem (I have no experience with OpenMP).

Yes. There are compilers that don't support it (but gcc does). And
there is the Cython to C compilation, which tampers with the variable
names. The pragma must thus be added to the C code and possibly use
variable names present in the autogenerated C.
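(Option 2 needs nothing on the C side, by comparison. Once
cKDTree.__query runs inside a 'with nogil:' block, callers can get the
parallelism with plain Python threads -- an untested sketch, with the
thread count picked by hand:

    import threading
    import numpy as np

    def parallel_query(tree, x, k=1, nthreads=2):
        # One chunk of query points per thread; the tree is only
        # read, so the threads can share it without locking.
        chunks = np.array_split(x, nthreads)
        results = [None] * nthreads
        def worker(i):
            results[i] = tree.query(chunks[i], k=k)
        threads = [threading.Thread(target=worker, args=(i,))
                   for i in range(nthreads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        dd = np.concatenate([r[0] for r in results])
        ii = np.concatenate([r[1] for r in results])
        return dd, ii

This only pays off once the GIL is actually released inside __query,
of course.)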
Since there is no need for any synchronization among OpenMP threads here, I guess this should suffice: #pragma omp parallel \ private(__pyx_v_c, __pyx_v_k, __pyx_v_dd, \ __pyx_v_ii, __pyx_v_xx, \ __pyx_v_self, \ __pyx_v_eps, __pyx_v_p, \ __pyx_v_distance_upper_bound) It should be placed right before the line that says: for (__pyx_v_c = 0; __pyx_v_c < __pyx_8; __pyx_v_c+=1) { /* "/usr/data/david/src/dsp/scipy/trunk/scipy/spatial/ckdtree.pyx":583 Regards, Sturla Molden From sturla at molden.no Wed Jan 7 12:54:29 2009 From: sturla at molden.no (Sturla Molden) Date: Wed, 07 Jan 2009 18:54:29 +0100 Subject: [SciPy-dev] parallelizing cKDTRee In-Reply-To: <4964E8BF.7060700@molden.no> References: <4964C321.6090504@molden.no> <4964E8BF.7060700@molden.no> Message-ID: <4964EC55.6070202@molden.no> On 1/7/2009 6:39 PM, Sturla Molden wrote: >> This would be a good idea. One labor-saving device would be to take >> advantage of the fact that construction is pretty cheap and have the >> pickle method pickle only the construction parameters and the >> underlying array. Not that pickling the tree structure would be very >> hard either. > > Yes. I have an example of that in the Cookbook. > > I'm actually looking at adding __reduce__ and __setstate__ now. I'll > post the code tomorrow or on friday. Oh, I did not think of that :) I pickled the whole kd-tree, but it would be even cheaper to just have the worker processes build their own from the data. If the workers equal the number of processors, the extra overhead will be zero. Which means there would be no need for supporting the pickle protocol at all. Sturla Molden From pgmdevlist at gmail.com Wed Jan 7 13:17:40 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 7 Jan 2009 13:17:40 -0500 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> Message-ID: <9F7256E5-C21C-4EAA-AAC1-E281244492D3@gmail.com> On Jan 6, 2009, at 11:29 PM, josef.pktd at gmail.com wrote: > I think the mstats obrien transform error is numpy related, > I confirm josef's diagnostic: that was indeed a pb in numpy.ma.core, a bug in __isub__ recently introduced when I tried to make the operations more data friendly (as in leaving masked data unaffected by the operation). Should be fixed in numpy r6299 From josef.pktd at gmail.com Wed Jan 7 13:26:23 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 7 Jan 2009 13:26:23 -0500 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> <49646A7A.6070704@ar.media.kyoto-u.ac.jp> Message-ID: <1cd32cbb0901071026s595c556br6fc028701003d12f@mail.gmail.com> On Wed, Jan 7, 2009 at 3:59 AM, Matthieu Brucher wrote: > 2009/1/7 David Cournapeau : >> Matthieu Brucher wrote: >>> Hi Jarrod, >>> >>> It won't be enough, the segfault will always be there unless the test >>> is not executed or modified so that nothing is displayed. >> >> Do you know what causes the problem ? It is some bad interaction between >> the MS C++ runtime and mingw for cout ? > > No, I don't know (I never use mingw), but I think there was much > discussion here on the interactions between mingw and the C++ runtime > causing segfaults, wasn't there ? 
> > Matthieu > -- > Information System Engineer, Ph.D. > Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > The changed test file with the skip decorator is missing `import sys`: File "C:\Programs\Python25\Lib\site-packages\scipy\weave\tests\test_ext_tools. py", line 37, in TestExtModule @dec.knownfailureif(sys.platform == 'win32', NameError: name 'sys' is not defined (I didn't rebuild scipy, I just copied the test file into my install.) With the skip, I get a KnownFailureTest error but no crash Thanks I think, Robert Kern mentioned that the crash with writing to cout is a bug in MingW 3.45 Josef From millman at berkeley.edu Wed Jan 7 13:59:50 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 7 Jan 2009 10:59:50 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: <1cd32cbb0901071026s595c556br6fc028701003d12f@mail.gmail.com> References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> <49646A7A.6070704@ar.media.kyoto-u.ac.jp> <1cd32cbb0901071026s595c556br6fc028701003d12f@mail.gmail.com> Message-ID: On Wed, Jan 7, 2009 at 10:26 AM, wrote: > The changed test file with the skip decorator is missing `import sys`: Sorry, fixed in r5372. From millman at berkeley.edu Wed Jan 7 14:01:49 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 7 Jan 2009 11:01:49 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <1cd32cbb0901062117i7599d8f7xed8df562433b74c@mail.gmail.com> Message-ID: On Wed, Jan 7, 2009 at 12:01 AM, Matthieu Brucher wrote: > It won't be enough, the segfault will always be there unless the test > is not executed or modified so that nothing is displayed. Tests decorated as known failures are skipped, so they shouldn't actually be run. Jarrod From millman at berkeley.edu Wed Jan 7 14:41:23 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 7 Jan 2009 11:41:23 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: <9F7256E5-C21C-4EAA-AAC1-E281244492D3@gmail.com> References: <49642090.2090202@ar.media.kyoto-u.ac.jp> <1cd32cbb0901062029l2a15954brb35daf9f60e71a65@mail.gmail.com> <9F7256E5-C21C-4EAA-AAC1-E281244492D3@gmail.com> Message-ID: On Wed, Jan 7, 2009 at 10:17 AM, Pierre GM wrote: > I confirm josef's diagnostic: that was indeed a pb in numpy.ma.core, a > bug in __isub__ recently introduced when I tried to make the > operations more data friendly (as in leaving masked data unaffected by > the operation). > Should be fixed in numpy r6299 Thanks, that works for me. 
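(Aside for readers following along: the skip-decorator pattern discussed
above looks roughly like the sketch below. This is a reconstruction, not
the actual contents of test_ext_tools.py, and the failure message is
made up.)

    import sys
    from numpy.testing import dec

    @dec.knownfailureif(sys.platform == 'win32',
                        "writing to cout segfaults under MinGW")
    def test_module_with_printing():
        # test body elided; the point is that the module defining the
        # decorated test must itself do `import sys` at the top for
        # the condition above to evaluate
        pass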
From robert.kern at gmail.com Wed Jan 7 15:30:25 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 7 Jan 2009 15:30:25 -0500 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <4964CE13.50306@ar.media.kyoto-u.ac.jp> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp> <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com> <20090107094813.GA19351@localhost> <4964CE13.50306@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730901071230l725cd875nd5f0ae136515e5c6@mail.gmail.com> On Wed, Jan 7, 2009 at 10:45, David Cournapeau wrote: > I got bored during a meeting: single precision is now implemented in > scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real > input -> hermitian output for rfft). Hear, hear! Lets schedule you some more meetings! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Jan 7 15:36:57 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 7 Jan 2009 15:36:57 -0500 Subject: [SciPy-dev] default license for mathworks fileexchange ? Message-ID: <1cd32cbb0901071236r3f5eb3c2g6ff2b177f7035056@mail.gmail.com> I didn't find much information for the default license for mathworks fileexchange ? Does anyone know what the license restriction for translated code from the fileexchange is if the m-file and the description don't contain any licensing information? What's the policy for included translated code in scipy? I guess there is no problem including translated code in cookbook recipes. ? Josef From robert.kern at gmail.com Wed Jan 7 15:43:11 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 7 Jan 2009 15:43:11 -0500 Subject: [SciPy-dev] default license for mathworks fileexchange ? In-Reply-To: <1cd32cbb0901071236r3f5eb3c2g6ff2b177f7035056@mail.gmail.com> References: <1cd32cbb0901071236r3f5eb3c2g6ff2b177f7035056@mail.gmail.com> Message-ID: <3d375d730901071243r4aac131yc6dbad89c7f8be42@mail.gmail.com> On Wed, Jan 7, 2009 at 15:36, wrote: > I didn't find much information for the default license for mathworks > fileexchange ? > > Does anyone know what the license restriction for translated code from > the fileexchange is if the m-file and the description don't contain > any licensing information? Contact the author. > What's the policy for included translated code in scipy? Get a suitable license for it if it is copyrightable. > I guess there is no problem including translated code in cookbook recipes. ? You should make the license status and the original source of the code clear. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Jan 7 16:04:30 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 7 Jan 2009 16:04:30 -0500 Subject: [SciPy-dev] default license for mathworks fileexchange ? 
In-Reply-To: <3d375d730901071243r4aac131yc6dbad89c7f8be42@mail.gmail.com> References: <1cd32cbb0901071236r3f5eb3c2g6ff2b177f7035056@mail.gmail.com> <3d375d730901071243r4aac131yc6dbad89c7f8be42@mail.gmail.com> Message-ID: <1cd32cbb0901071304w2c971749s63fcab466a318bed@mail.gmail.com> On Wed, Jan 7, 2009 at 3:43 PM, Robert Kern wrote: > On Wed, Jan 7, 2009 at 15:36, wrote: >> I didn't find much information for the default license for mathworks >> fileexchange ? >> >> Does anyone know what the license restriction for translated code from >> the fileexchange is if the m-file and the description don't contain >> any licensing information? > > Contact the author. > >> What's the policy for included translated code in scipy? > > Get a suitable license for it if it is copyrightable. > >> I guess there is no problem including translated code in cookbook recipes. ? > > You should make the license status and the original source of the code clear. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Thanks for the clarification, I was hoping that it can be considered as public domain recipes. Josef From stefan at sun.ac.za Wed Jan 7 16:17:10 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 7 Jan 2009 23:17:10 +0200 Subject: [SciPy-dev] Bug in scipy.weave.catalog.default_dir() In-Reply-To: <4964C518.9060401@gmail.com> References: <4dacb2560810081435v3a161811wa79a1b97fbd41edd@mail.gmail.com> <9457e7c80901070615y5665ca69h6bed2f4171157d1@mail.gmail.com> <4964C518.9060401@gmail.com> Message-ID: <9457e7c80901071317p273954b2oa403e74178378062@mail.gmail.com> Hey Ryan It's not a particularly serious problem, and should only occur if you have an existing .python25_compiled directory in ${HOME} that is not writable any longer. Also, it isn't easy to write a unit test for this specific case, so I'd feel safer if more people would try it out before it propagates too far. Cheers St?fan 2009/1/7 Ryan May : > St?fan van der Walt wrote: >> Hi Joseph >> >> 2008/10/8 Joseph Turian : >>> I am using scipy.weave in an environment in which my home directory is >>> not accessable. >>> However, scipy.weave attempts to write to $HOME/.python25_compiled, >>> even though I set PYTHONCOMPILED. 
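(For anyone hitting the same problem before a fix lands: the workaround
under discussion looks roughly like this. A sketch only: the cache path
is made up, and it assumes PYTHONCOMPILED is consulted before falling
back to $HOME.)

    import os
    # point weave's catalog at a writable directory *before* weave runs
    os.environ['PYTHONCOMPILED'] = '/scratch/weave-cache'

    from scipy import weave
    # compiled extension modules should now be cached under the
    # directory above instead of $HOME/.python25_compiled
    result = weave.inline('return_val = 1;')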
>>> From peridot.faceted at gmail.com Wed Jan 7 16:27:05 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 7 Jan 2009 16:27:05 -0500 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <3d375d730901071230l725cd875nd5f0ae136515e5c6@mail.gmail.com> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp> <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com> <20090107094813.GA19351@localhost> <4964CE13.50306@ar.media.kyoto-u.ac.jp> <3d375d730901071230l725cd875nd5f0ae136515e5c6@mail.gmail.com> Message-ID: 2009/1/7 Robert Kern : > On Wed, Jan 7, 2009 at 10:45, David Cournapeau > wrote: >> I got bored during a meeting: single precision is now implemented in >> scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real >> input -> hermitian output for rfft). > > Hear, hear! Lets schedule you some more meetings! Make sure they're adequately boring. Nice work! Anne From stefan at sun.ac.za Wed Jan 7 16:48:40 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 7 Jan 2009 23:48:40 +0200 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <4964CE13.50306@ar.media.kyoto-u.ac.jp> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp> <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com> <20090107094813.GA19351@localhost> <4964CE13.50306@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80901071348i2878347flbae4d56cb16e04ec@mail.gmail.com> 2009/1/7 David Cournapeau : > I got bored during a meeting: single precision is now implemented in > scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real > input -> hermitian output for rfft). It should be transparent, e.g: And here I was thinking this could be a GSoC task -- I should recalibrate my estimator when it comes to bored Frenchmen in Japan! Thanks, David. Cheers St?fan From fperez.net at gmail.com Wed Jan 7 18:29:55 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Jan 2009 15:29:55 -0800 Subject: [SciPy-dev] test errors blocking 0.7.x branch In-Reply-To: References: Message-ID: On Tue, Jan 6, 2009 at 5:50 PM, Jarrod Millman wrote: > ====================================================================== > ERROR: no_test_no_check_return (test_wx_spec.TestWxConverter) > ---------------------------------------------------------------------- As a note, an easy way of seeing all the wx problems is to run the test suite inside a screen session, which doesn't set $DISPLAY. In that case, you start seeing a lot of: ====================================================================== ERROR: no_test_no_check_return (test_wx_spec.TestWxConverter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/fperez/usr/opt/lib64/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py", line 30, in setUp self.app = wx.App() File "/usr/lib/python2.5/site-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7880, in __init__ raise SystemExit(msg) SystemExit: Unable to access the X Display, is $DISPLAY set properly? 
IMHO, the WX support in weave might as well be ripped out altogether. I'd be very surprised if it really works at all, and it seems to be a remnant from the early days when scipy had lots of plotting and GUI support. Now that scipy's focus is pretty much algorithms, and scientific computing (which includes data formats, I/O and a few non-numerical things, I know), it seems to me that keeping WX code around is just creating unnecessary noise and maintenance headaches. Does anyone know if the WX code in weave actually works correctly under any circumstances? I'm honestly curious, especially because seeing code like: searched_locations = ['c:\third\wxpython*', '/usr/lib/wx*'] makes me a bit suspicious :) Just an opinion from the peanut gallery... Cheers, f From fperez.net at gmail.com Wed Jan 7 18:40:44 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Jan 2009 15:40:44 -0800 Subject: [SciPy-dev] Numerical Recipes (was tagging 0.7rc1 this weekend?) In-Reply-To: <49613610.8060407@american.edu> References: <4947CA8C.40008@american.edu> <49613610.8060407@american.edu> Message-ID: Hi Alan, On Sun, Jan 4, 2009 at 2:20 PM, Alan G Isaac wrote: > Anyway, here is the reST for the GW cite and > what I think the AS citation info is. > (Not sure about the address.) I just took the liberty of committing this to trunk so it doesn't fall through the cracks, with due credit. Jarrod will backport it now to the release branch. Thanks! f From millman at berkeley.edu Wed Jan 7 20:06:30 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 7 Jan 2009 17:06:30 -0800 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) Message-ID: I changed the subject line for this because I wanted to make sure everyone reads it. The only remaining issue that needs to be resolved before I tag the 0.7.0rc1 is what to do about the wx-related weave tests. Depending on your configuration and how you run the tests, the wx-related weave code will generate a number of test errors. I think that given the huge amount effort that has gone into making a very polished 0.7.0 release, we need to deal with these errors quickly. After looking at the code, I don't believe we are going to be able to fix these errors so there are only two options that I see: 1. Mark all the wx-related weave tests as known failures 2. Remove the wx-related weave code and tests Fernando's email below advocates the 2nd option, which is the one I am slightly leaning to at this point. I would rather not add a number of known failures to our test system at this point, especially for code that doesn't seem to me to be very likely to work on many systems given the number of hard-coded paths in wx_spec.py (stefan fixed this somewhat in r3266). I would like to know if anyone is using this functionality, so if you are or know someone who is please speak up immediately. On Wed, Jan 7, 2009 at 3:29 PM, Fernando Perez wrote: > On Tue, Jan 6, 2009 at 5:50 PM, Jarrod Millman wrote: > >> ====================================================================== >> ERROR: no_test_no_check_return (test_wx_spec.TestWxConverter) >> ---------------------------------------------------------------------- > > As a note, an easy way of seeing all the wx problems is to run the > test suite inside a screen session, which doesn't set $DISPLAY. 
In
> that case, you start seeing a lot of:
>
> ======================================================================
> ERROR: no_test_no_check_return (test_wx_spec.TestWxConverter)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/fperez/usr/opt/lib64/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py",
> line 30, in setUp
>     self.app = wx.App()
>   File "/usr/lib/python2.5/site-packages/wx-2.8-gtk2-unicode/wx/_core.py",
> line 7880, in __init__
>     raise SystemExit(msg)
> SystemExit: Unable to access the X Display, is $DISPLAY set properly?
>
>
> IMHO, the WX support in weave might as well be ripped out altogether.
> I'd be very surprised if it really works at all, and it seems to be a
> remnant from the early days when scipy had lots of plotting and GUI
> support. Now that scipy's focus is pretty much algorithms, and
> scientific computing (which includes data formats, I/O and a few
> non-numerical things, I know), it seems to me that keeping WX code
> around is just creating unnecessary noise and maintenance headaches.
>
> Does anyone know if the WX code in weave actually works correctly
> under any circumstances? I'm honestly curious, especially because
> seeing code like:
>
>     searched_locations = ['c:\third\wxpython*',
>                           '/usr/lib/wx*']
>
> makes me a bit suspicious :)
>
> Just an opinion from the peanut gallery...
>
> Cheers,
>
> f

From stefan at sun.ac.za Thu Jan 8 02:44:24 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 8 Jan 2009 09:44:24 +0200
Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch)
In-Reply-To:
References:
Message-ID: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>

2009/1/8 Jarrod Millman :
> fix these errors so there are only two options that I see:
>
> 1. Mark all the wx-related weave tests as known failures
> 2. Remove the wx-related weave code and tests

I made change #2, and took out the VTK wrapper as well (TVTK
supercedes this functionality). If anyone objects, feel free to
revert.

Regards
St?fan

From tom.grydeland at gmail.com Thu Jan 8 03:00:28 2009
From: tom.grydeland at gmail.com (Tom Grydeland)
Date: Thu, 8 Jan 2009 09:00:28 +0100
Subject: [SciPy-dev] Single precision FFT
In-Reply-To: <4964CE13.50306@ar.media.kyoto-u.ac.jp>
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com>
	<492FE5FA.70606@ar.media.kyoto-u.ac.jp>
	<2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com>
	<4933667B.3090200@ar.media.kyoto-u.ac.jp>
	<9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com>
	<496327E3.9060105@ar.media.kyoto-u.ac.jp>
	<9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com>
	<20090107094813.GA19351@localhost>
	<4964CE13.50306@ar.media.kyoto-u.ac.jp>
Message-ID:

On Wed, Jan 7, 2009 at 4:45 PM, David Cournapeau wrote:

> I got bored during a meeting: single precision is now implemented in
> scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real
> input -> hermitian output for rfft). It should be transparent, e.g:
>
> from scipy.fftpack import fft, rfft
> import numpy as np
>
> fft(np.random.randn(10)).dtype == np.cdouble
> fft(np.random.randn(10).astype(np.complex64)).dtype == np.complex64

Excellent!

I'd like to bring the docstrings up to date on this as well.

"If the input is single-precision (real or complex), the output will
be single-precision also"

or do you have other suggestions?
> David

--
Tom Grydeland

From millman at berkeley.edu Thu Jan 8 03:57:27 2009
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 8 Jan 2009 00:57:27 -0800
Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch)
In-Reply-To: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>
References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>
Message-ID:

On Wed, Jan 7, 2009 at 11:44 PM, St?fan van der Walt wrote:
> I made change #2, and took out the VTK wrapper as well (TVTK
> supercedes this functionality). If anyone objects, feel free to
> revert.

Thanks for taking care of this. I went ahead and backported your
change in r5379. (I also backported r5377 and r5378.) I am going to
wait until tomorrow to see if anyone has any objections. If no one
says anything, I will go ahead and tag 0.7.0rc1.

Jarrod

From david at ar.media.kyoto-u.ac.jp Thu Jan 8 03:53:17 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 08 Jan 2009 17:53:17 +0900
Subject: [SciPy-dev] Single precision FFT
In-Reply-To:
References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com>
	<2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com>
	<4933667B.3090200@ar.media.kyoto-u.ac.jp>
	<9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com>
	<496327E3.9060105@ar.media.kyoto-u.ac.jp>
	<9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com>
	<20090107094813.GA19351@localhost>
	<4964CE13.50306@ar.media.kyoto-u.ac.jp>
	<3d375d730901071230l725cd875nd5f0ae136515e5c6@mail.gmail.com>
Message-ID: <4965BEFD.5040704@ar.media.kyoto-u.ac.jp>

Anne Archibald wrote:
>
> Make sure they're adequately boring.
>

Note that I did not say they were boring, but shamefully, my Japanese is
still poor enough such that formal presentations mostly elude me :)

I fixed fft such that the complex transform handles complex numbers with
0 imaginary output - effectively making fft of float32 array work in
single precision. I also discovered at the same time that fftpack has
some cosine and sine transforms, which is great - I will be able to
implement some missing functions compared to the matlab signal toolbox
easily with this.

I discovered something weird - though unrelated: real fft (rfft) in
numpy and scipy are different in their output format (numpy puts only
one 'side' of the hermitian output - scipy puts all the numbers in an
array of reals, which is a bit strange). I don't think the scipy format
makes much sense ? In C, I can see the advantage of having input/output
of the same type/size, but in python, I am not sure. Modifying this
would be a major breakage, though,

cheers,

David

From prabhu at aero.iitb.ac.in Thu Jan 8 08:42:09 2009
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Thu, 08 Jan 2009 19:12:09 +0530
Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch)
In-Reply-To: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>
References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>
Message-ID: <496602B1.5050402@aero.iitb.ac.in>

St?fan van der Walt wrote:
> 2009/1/8 Jarrod Millman :
>> fix these errors so there are only two options that I see:
>>
>> 1. Mark all the wx-related weave tests as known failures
>> 2. Remove the wx-related weave code and tests
>
> I made change #2, and took out the VTK wrapper as well (TVTK
> supercedes this functionality). If anyone objects, feel free to
> revert.

As the author of the VTK weave wrapper, I object.
I can't revert since I do not have SVN access to scipy/numpy. What the VTK wrapper for weave does is let you embed c++ code which operates on a VTK Python object. This is not something that TVTK does or provides for at all. I think it is useful functionality that is not available anywhere else and should not be removed. cheers, prabhu From stefan at sun.ac.za Thu Jan 8 09:48:48 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 8 Jan 2009 16:48:48 +0200 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <496602B1.5050402@aero.iitb.ac.in> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> Message-ID: <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> Hi Prabhu 2009/1/8 Prabhu Ramachandran : > St?fan van der Walt wrote: > As the author of the VTK weave wrapper, I object. I can't revert since I > do not have SVN access to scipy/numpy. What the VTK wrapper for weave > does is let you embed c++ code which operates on a VTK Python object. > This is not something that TVTK does or provides for at all. I think it > is useful functionality that is not available anywhere else and should > not be removed. Thanks for your input, I'll add it back. Would you please help us to document and test it? Regards St?fan From benfrantzdale at gmail.com Thu Jan 8 09:59:22 2009 From: benfrantzdale at gmail.com (Ben FrantzDale) Date: Thu, 8 Jan 2009 09:59:22 -0500 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <416c2a310901070640o17f87c63t835e47f619026c25@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <49635C67.8000307@ar.media.kyoto-u.ac.jp> <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> <3d375d730901061956t2331629ao45b32df9313b052b@mail.gmail.com> <416c2a310901070640o17f87c63t835e47f619026c25@mail.gmail.com> Message-ID: <416c2a310901080659r667ed33bv6a44b59968b1a065@mail.gmail.com> On Tue, Jan 6, 2009 at 10:56 PM, Robert Kern wrote: > 2009/1/6 Ben FrantzDale : > > David, Robert, et al. > > > > I think you understand what I am talking about. This is just about > > optimization (although in general I would argue that functions should > take > > as general an interface as possible for the same reasons). > > I'm not sure I would push it that far. Premature generalization can be > just as bad as premature optimization. I find that a use case-based > approach is a better principle. Write the code for the problem in > front of you. Generalize when you have a use case for doing so. > I completely agree; I was just speculating :-) > Now, we do have a handful of use cases here, but I think we should > have some care before modifying the functions in place. Generalization > could cost us performance in the typical case. > > > Attached is all the code I changed to make fmin_cg work with a > > SimpleHilbertVector class. > > Hmm, can you show us a .diff of the changes that you made? I'd like to > see more clearly exactly what you changed. > See attached. Run this: $ patch Hilbert_cg_example_original.py Hilbert_cg_example.diff -o Hilbert_cg_example.py to get Hilbert_example.py. (I can't fit all the files in one email to this list.) I cleaned it up a bit and included all the functions I changed, all in one file to make a diff make sense. 
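(For readers who don't want to apply the patch: the wrapper class under
discussion looks roughly like the sketch below. This is reconstructed
from the description in this thread rather than copied from the
attachments, so the details beyond inner_product and norm are
assumptions.)

    import numpy as np

    class SimpleHilbertVector(object):
        """Minimal Hilbert-space vector wrapping a 1-d numpy array."""
        def __init__(self, data):
            self.data = np.asarray(data, dtype=float)

        def inner_product(self, other):
            # <u, v>; replaces numpy.dot(u, v) inside fmin_cg
            return float(np.dot(self.data, other.data))

        def norm(self, order=2):
            # replaces vecnorm(v, norm)
            return float(np.linalg.norm(self.data, order))

        def __add__(self, other):
            return SimpleHilbertVector(self.data + other.data)

        def __iadd__(self, other):
            self.data += other.data
            return self

        def __mul__(self, alpha):          # scalar multiplication
            return SimpleHilbertVector(alpha * self.data)
        __rmul__ = __mul__

        def __neg__(self):
            return SimpleHilbertVector(-self.data)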
The diff lines are almost all just numpy.dot(u,v) -> inner_product(u,v) and vecnorm(v,norm) -> v.norm(norm). > > If people are receptive to this approach, I have a few next questions: > > 1. What is the right interface to provide? Is the interface I described > the > > right (Pythonic) way to do it, or is there a better interface? (If it > were > > Templated C++, I'd do it with overloaded free functions, but I think it > > needs to/should be done with class methods.) > > I still don't think this interface supports the manifold-optimization > use case. The inner product and norm implementations need more > information than just the vectors. > Good point. I've been focused on optimizing within a vector space not on a manifold. For nonlinear CG, I suppose the gradient needs to be a vector in the tangent space, which itself must be a Hilbert space, but the state "vector" can be in an affine space or on a manifold. What is the mathematically-correct operation for moving such a "vector" on its manifold? That operation would take the place of the line xk += alpha_k * pk where pk is a vector in the tangent space. --Ben PS If you have references for optimizing on a manifold, I would be very curious to read more about it. Wikipedia suggests I should look into Development (differential geometry), Geodesics, Affine connection, and Parallel transport, and Retraction. > > 2. Which functions should be changed? (fmin_cg, clearly; what else? The > full > > list in optimize.py is fmin, fmin_powell, fmin_bfgs, fmin_ncg, fmin_cg, > and > > fminbound.) > > 3. How would one deal with numpy arrays? Is it best to wrap them in the > > interface (as shown in the attached code) or is it better to add methods > to > > them? > > We're not going to add methods to them. > > > If the right way is to wrap them, should all the optimization > > functions have that as a first step -- check for numpy arrays and wrap as > > needed? > > Possibly, but that could have a significant performance hit that will > need to be tested. > > > Given an appropriate interface, the changes to optimize.py are > essentially > > search-and-replace and I would be happy to do it. > > > > Does Travis Oliphant (whose name is on optimize.py) read this list? > > Yeah. He's pretty busy this week, though. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Hilbert_cg_example_original.py Type: text/x-python Size: 11799 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Hilbert_cg_example.diff Type: application/octet-stream Size: 3152 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: testit.py Type: text/x-python Size: 1371 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Thu Jan 8 10:19:05 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 09 Jan 2009 00:19:05 +0900 Subject: [SciPy-dev] Scipy 0.7, weave, windows Message-ID: <49661969.40905@ar.media.kyoto-u.ac.jp> Hi, I just did a full build/install/test dance of scipy 0.7 on windows, and things look good - except weave, which brings 205 errors when the full test suite is run. Do people use weave on windows ? I would think not many, because we discovered with Stefan some weave functions using python code not available at least since python 2.4, but I would like to make sure. Otherwise, I believe we will finally be able to release scipy 0.7, almost one year and a half after 0.6 :) thanks, David From charlesr.harris at gmail.com Thu Jan 8 14:09:50 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Jan 2009 12:09:50 -0700 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <4965BEFD.5040704@ar.media.kyoto-u.ac.jp> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <9457e7c80901060155y4dc5c972g50859229012e89f5@mail.gmail.com> <496327E3.9060105@ar.media.kyoto-u.ac.jp> <9457e7c80901060524y57a78fddoa5b0c8feab85f8b@mail.gmail.com> <20090107094813.GA19351@localhost> <4964CE13.50306@ar.media.kyoto-u.ac.jp> <3d375d730901071230l725cd875nd5f0ae136515e5c6@mail.gmail.com> <4965BEFD.5040704@ar.media.kyoto-u.ac.jp> Message-ID: On Thu, Jan 8, 2009 at 1:53 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Anne Archibald wrote: > > > > Make sure they're adequately boring. > > > > Note that I did not say they were boring, but shamefully, my Japanese is > sill poor enough such as formal presentations mostly elude me :) > > I fixed fft such as the complex transform handles complex numbers with 0 > imaginary output - effectively making fft of float32 array work in > single precision. I also discovered at the same time that fftpack has > some cosine and sine transforms, which is great - I will be able to > implement some missing functions compared to the matlab signal toolbox > easily with this. > > I discovered something weird - though unrelated: real fft (rfft) in > numpy and scipy are different in their output format (numpy puts only > one 'side' of the hermitian output - scipy put all the number in an > array of reals, which is a bit strange). I don't think the scipy format > makes much sense ? In C, I can see the advantage of having input/output > of the same type/size, but in python, I am not sure. Modifying this > would be a major breakage, though, > There was a lot of discussion about this several years back. The "natural" way for the real transform to store the results of an in place transform is with the DC and Nyquist frequencies, which are both reals, stored as the real and imaginary parts of the DC value. This keeps the input and output arrays the same size but it is somewhat unnatural from the users point of view. The values may also be shuffled so that the results appear in real, complex, ... , complex, real order. I forget which way fftpack does it. I prefer the numpy way for higher level usage, especially as it works better with ndarrays. Now that fftpack is the only fft version in scipy and there is only one version of the real transform to deal with it might be a good time to revisit the question and settle things before we hit version 1.0. 
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Jan 8 14:41:24 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Jan 2009 12:41:24 -0700 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <416c2a310901080659r667ed33bv6a44b59968b1a065@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <49635C67.8000307@ar.media.kyoto-u.ac.jp> <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> <3d375d730901061956t2331629ao45b32df9313b052b@mail.gmail.com> <416c2a310901070640o17f87c63t835e47f619026c25@mail.gmail.com> <416c2a310901080659r667ed33bv6a44b59968b1a065@mail.gmail.com> Message-ID: On Thu, Jan 8, 2009 at 7:59 AM, Ben FrantzDale wrote: > On Tue, Jan 6, 2009 at 10:56 PM, Robert Kern wrote: > >> 2009/1/6 Ben FrantzDale : >> > David, Robert, et al. >> > >> > I think you understand what I am talking about. This is just about >> > optimization (although in general I would argue that functions should >> take >> > as general an interface as possible for the same reasons). >> >> I'm not sure I would push it that far. Premature generalization can be >> just as bad as premature optimization. I find that a use case-based >> approach is a better principle. Write the code for the problem in >> front of you. Generalize when you have a use case for doing so. >> > > I completely agree; I was just speculating :-) > > >> Now, we do have a handful of use cases here, but I think we should >> have some care before modifying the functions in place. Generalization >> could cost us performance in the typical case. >> >> > Attached is all the code I changed to make fmin_cg work with a >> > SimpleHilbertVector class. >> >> Hmm, can you show us a .diff of the changes that you made? I'd like to >> see more clearly exactly what you changed. >> > > See attached. > Run this: > $ patch Hilbert_cg_example_original.py Hilbert_cg_example.diff -o > Hilbert_cg_example.py > to get Hilbert_example.py. (I can't fit all the files in one email to this > list.) > > I cleaned it up a bit and included all the functions I changed, all in one > file to make a diff make sense. The diff lines are almost all just > numpy.dot(u,v) -> inner_product(u,v) and vecnorm(v,norm) -> v.norm(norm). > > >> > If people are receptive to this approach, I have a few next questions: >> > 1. What is the right interface to provide? Is the interface I described >> the >> > right (Pythonic) way to do it, or is there a better interface? (If it >> were >> > Templated C++, I'd do it with overloaded free functions, but I think it >> > needs to/should be done with class methods.) >> >> I still don't think this interface supports the manifold-optimization >> use case. The inner product and norm implementations need more >> information than just the vectors. >> > > Good point. I've been focused on optimizing within a vector space not on a > manifold. For nonlinear CG, I suppose the gradient needs to be a vector in > the tangent space, which itself must be a Hilbert space, but the state > "vector" can be in an affine space or on a manifold. What is the > mathematically-correct operation for moving such a "vector" on its manifold? > Moving vectors on manifolds is a can of worms, leading to covariant differentiation via connections, parallel transport, Christoffel symbols, etc., etc.. 
The general case also probably needs a set of coordinate maps for computation. I'm not sure we want to go there. It's easier to embed the manifold in a higher dimensional space and use vectors and inner products, moving the vectors in the embedding space and projecting onto the manifold. Control systems do quaternions ( 3 sphere) that way, reducing the problem to constrained optimization. I suppose much depends much on the problem at hand... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Jan 8 15:39:28 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 8 Jan 2009 15:39:28 -0500 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <416c2a310901080659r667ed33bv6a44b59968b1a065@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <49635C67.8000307@ar.media.kyoto-u.ac.jp> <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> <3d375d730901061956t2331629ao45b32df9313b052b@mail.gmail.com> <416c2a310901070640o17f87c63t835e47f619026c25@mail.gmail.com> <416c2a310901080659r667ed33bv6a44b59968b1a065@mail.gmail.com> Message-ID: <3d375d730901081239i4f5a804bxda69b9320d2e51df@mail.gmail.com> On Thu, Jan 8, 2009 at 09:59, Ben FrantzDale wrote: > On Tue, Jan 6, 2009 at 10:56 PM, Robert Kern wrote: >> I still don't think this interface supports the manifold-optimization >> use case. The inner product and norm implementations need more >> information than just the vectors. > > Good point. I've been focused on optimizing within a vector space not on a > manifold. For nonlinear CG, I suppose the gradient needs to be a vector in > the tangent space, which itself must be a Hilbert space, but the state > "vector" can be in an affine space or on a manifold. What is the > mathematically-correct operation for moving such a "vector" on its manifold? > That operation would take the place of the line > xk += alpha_k * pk > where pk is a vector in the tangent space. You basically need to define a move(x0, dx, alpha) function. > PS > If you have references for optimizing on a manifold, I would be very curious > to read more about it. Wikipedia suggests I should look into Development > (differential geometry), Geodesics, Affine connection, and Parallel > transport, and Retraction. http://www-math.mit.edu/~lippert/sgmin.html Also look at the code I posted for an example of a particular manifold. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From scott.sinclair.za at gmail.com Fri Jan 9 00:49:52 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Fri, 9 Jan 2009 07:49:52 +0200 Subject: [SciPy-dev] Code refactoring can result in loss of documentation Message-ID: <6a17e9ee0901082149l2b0bad72kb1a2ead4c49d2552@mail.gmail.com> I noticed yesterday that when Pierre GM fixed ticket #961 (http://scipy.org/scipy/numpy/ticket/961), the revised docstrings for np.ma.getdata and np.ma.getmask in the doc-editor reverted to what is in SVN. These functions previously resolved to np.ma.core.get_data and np.core.get_mask, but now resolve to np.core.getdata and np.core.getmask. 
In retrospect, it's obvious that this will happen when functions get
renamed (even if the API and behaviour of Numpy remain unchanged, as in
this case), but not something that occurred to me. Any work done in the
doc-editor can be recovered because the doc-editor's repository must
contain the enhanced docstrings even though they no longer appear on the
web interface, but it is easy for a similar situation to pass unnoticed
in future. In this case it was noticed and resolved, so no harm done.

This post is intended to draw developers' attention to the possibility
of this occurring. Changes to docs in SVN are propagated once daily to
the doc-editor, but must be done manually in the opposite direction for
quality control. I'd suggest that a "best practice" when doing similar
refactoring should be to check for revised docstrings in the doc-editor,
do a quick or, preferably, thorough review and commit the updated
docstring to SVN before refactoring.

Cheers,
Scott

From stijn.deweirdt at ugent.be Fri Jan 9 04:26:28 2009
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Fri, 09 Jan 2009 10:26:28 +0100
Subject: [SciPy-dev] scipy with icc/ifort 11
Message-ID: <1231493188.4051.18.camel@spike.ugent.be>

hi all,

finally found some time again to play with python/icc/ifort. i'm using
python 2.5.4, icc/ifort 11.0.074 and mkl 10.1.0.015. (python 2.5.4 has
no ctypes support in this build i'm using. i still need to retry with
ctypes built with system libffi)

i reported some patches i needed in a previous mail, but then i was
stuck at a segfault. i'm now using numpy 1.2.1 and yesterday's svn
checkout of scipy (so almost 0.7.0rc1). numpy test results aren't
changed (a few problems like reported
http://www.nabble.com/ANN:-NumPy-1.2.0-td19683018.html)

running the scipy tests still segfaults, but i looked at a core dump and
found

test_dsyev (test_esv.TestEsv) ... Program received signal SIGSEGV
mkl_serv_allocate () in
/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/lib/lapack/flapack.so

then i read the mkl user guide and found that mkl does some optimised
memory management, so i decided to turn it off using

export MKL_DISABLE_FAST_MM=1

now it doesn't segfault anymore, but still a few tests fail. i attached
output at the end. if someone could shed some light on it, that would be
great.

btw, scipy test with 0.6.0 segfaults but for completely different
reasons (or so it seems). i'm not bothering anymore.
stijn ------------------------------------------------------------------------ ====================================================================== ERROR: Failure: ImportError (/cvos/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/linalg/atlas_version.so: undefined symbol: ATL_buildinfo) ---------------------------------------------------------------------- Traceback (most recent call last): File "/cvos/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/cvos/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/cvos/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/cvos/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/linalg/tests/test_atlas_version.py", line 8, in import scipy.linalg.atlas_version ImportError: /cvos/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/linalg/atlas_version.so: undefined symbol: ATL_buildinfo ====================================================================== ERROR: Failure: ImportError (No module named convolve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/stsci/__init__.py", line 2, in import image File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/stsci/image/__init__.py", line 2, in from _image import * File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/stsci/image/_image.py", line 2, in import convolve ImportError: No module named convolve ====================================================================== FAIL: test_lorentz (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 292, in test_lorentz 3.7798193600109009e+00]), File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/numpy/testing/utils.py", line 311, in assert_array_almost_equal header='Arrays are not almost equal') File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/numpy/testing/utils.py", line 296, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.00000000e+03, 1.00000000e-01, 3.80000000e+00]) y: array([ 1.43067808e+03, 1.33905090e-01, 3.77981936e+00]) ====================================================================== FAIL: test_multi (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 188, in test_multi 0.5101147161764654, 0.5173902330489161]), File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/numpy/testing/utils.py", line 311, in assert_array_almost_equal header='Arrays are not almost equal') File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/numpy/testing/utils.py", line 296, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4. , 2. , 7. , 0.4, 0.5]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ====================================================================== FAIL: test_pearson (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 235, in test_pearson np.array([ 5.4767400299231674, -0.4796082367610305]), File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/numpy/testing/utils.py", line 311, in assert_array_almost_equal header='Arrays are not almost equal') File "/shared/apps/python/2.5.4-ictce-3.2.020/lib/python2.5/site-packages/numpy/testing/utils.py", line 296, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1., 1.]) y: array([ 5.47674003, -0.47960824]) From benfrantzdale at gmail.com Fri Jan 9 09:34:08 2009 From: benfrantzdale at gmail.com (Ben FrantzDale) Date: Fri, 9 Jan 2009 09:34:08 -0500 Subject: [SciPy-dev] Abstract vectors in optimization In-Reply-To: <3d375d730901081239i4f5a804bxda69b9320d2e51df@mail.gmail.com> References: <416c2a310901051830w3271f789rf491a54e41a7512b@mail.gmail.com> <4962EBBB.20201@ar.media.kyoto-u.ac.jp> <416c2a310901060500h404e837eya1b8b00a2b88f691@mail.gmail.com> <49635C67.8000307@ar.media.kyoto-u.ac.jp> <416c2a310901061926h64df3fc4i63d42d781249bd0f@mail.gmail.com> <3d375d730901061956t2331629ao45b32df9313b052b@mail.gmail.com> <416c2a310901070640o17f87c63t835e47f619026c25@mail.gmail.com> <416c2a310901080659r667ed33bv6a44b59968b1a065@mail.gmail.com> <3d375d730901081239i4f5a804bxda69b9320d2e51df@mail.gmail.com> Message-ID: <416c2a310901090634y4a5d2f3dr4d42f2e8688afb3d@mail.gmail.com> On Thu, Jan 8, 2009 at 3:39 PM, Robert Kern wrote: > On Thu, Jan 8, 2009 at 09:59, Ben FrantzDale > wrote: > > On Tue, Jan 6, 2009 at 10:56 PM, Robert Kern > wrote: > > >> I still don't think this interface supports the manifold-optimization > >> use case. The inner product and norm implementations need more > >> information than just the vectors. > > > > Good point. I've been focused on optimizing within a vector space not on > a > > manifold. For nonlinear CG, I suppose the gradient needs to be a vector > in > > the tangent space, which itself must be a Hilbert space, but the state > > "vector" can be in an affine space or on a manifold. What is the > > mathematically-correct operation for moving such a "vector" on its > manifold? > > That operation would take the place of the line > > xk += alpha_k * pk > > where pk is a vector in the tangent space. > > You basically need to define a move(x0, dx, alpha) function. 
>

I think if you are moving a finite distance you need to transport your
gradient vector as well, otherwise you'll wind up (at least in CG)
taking the inner product of a vector from the tangent space of x_i with
a vector from the tangent space of x_i+1, which doesn't make much sense.
With that in mind, the code could easily be generalized by adding the
following:

    def move(self, dx, alpha, *tangentVectorsToTransport):
        # Move self, transporting any vectors passed in.

This interface could be provided by an abstract ManifoldPoint class and
HilbertVector could inherit it and provide move as self += dx*alpha with
the identity operation for transport.

(Since Python is dynamically typed, is it recommended to provide an
"abstract" base class that just raises exceptions? On the one hand it
would be helpful to show what interface someone needs to implement; on
the other hand, creating a ManifoldPoint class with methods that just
raise exceptions essentially adds nothing. What's the Python Way?)

I'll read up on this and post an updated version of the code.

--Ben

> > PS
> > If you have references for optimizing on a manifold, I would be very
> > curious to read more about it. Wikipedia suggests I should look into
> > Development (differential geometry), Geodesics, Affine connection,
> > and Parallel transport, and Retraction.
>
> http://www-math.mit.edu/~lippert/sgmin.html
>
> Also look at the code I posted for an example of a particular manifold.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no Fri Jan 9 11:37:31 2009
From: sturla at molden.no (Sturla Molden)
Date: Fri, 9 Jan 2009 17:37:31 +0100 (CET)
Subject: [SciPy-dev] denoising spatial point process data
Message-ID: <5e12a746973e53ef7bcce833fe7bbf1a.squirrel@webmail.uio.no>

If it is of interest I'll donate a useful algorithm for denoising spatial
point process data to scipy.spatial. It works by fitting a mixture of two
Poisson processes (signal + noise) using an EM algorithm. It was
originally developed for US DoD to detect minefields using reconnaissance
aircraft images. I use it on neurophysiological spike data to remove
artifact waveforms. It seems to be very efficient.

Regards,
Sturla Molden


# Written by Sturla Molden

import numpy
from scipy.special import gamma


def nnclean(K, Dk, ndim, convergence=0.01):
    """
    K-th nearest neighbour clutter removal

    Algorithm from:
    Byers, S & Raftery, A (1998). Nearest-Neighbor Clutter
    Removal for estimating Features in Spatial Point Processes.
    J. Am. Stat. Assoc., 93: 577-585.

    Input:
      K: neighbour to use (K=5 often works well)
      Dk: distance to K-th nearest neighbour. Obtain by searching
          a scipy.spatial.cKDTree with p=2
      ndim: number of dimensions in the data set from which Dk
          was computed.

    Output:
      signal: indices of signal points
      noise: indices of clutter points
      delta: likelihood that a point is signal
      loglike: total log-likelihood of the fit
    """
    # size of data
    d, n = ndim, len(Dk)

    # calculate volume of N-D hypersphere
    ad = (2*numpy.pi**(d/2.0))/(d*gamma(d/2.0))

    #select a random starting point
    delta = numpy.random.rand(n)
    l1 = K * delta.sum()/(ad*numpy.sum((Dk**d)*delta))
    l2 = K*numpy.sum(1.0-delta)/(ad*numpy.sum((Dk**d)*(1.0-delta)))
    #l1[numpy.where(l1<1E-12)[0]] = 1E-12
    #l2[numpy.where(l2<1E-12)[0]] = 1E-12
    p = numpy.sum(delta)/n
    old_loglike = numpy.sum( -p*l1*ad*((Dk**d)*delta)
        - (1.0-p)*l2*ad*((Dk**d)*(1.0-delta))
        + delta*K*numpy.log(l1*ad)
        + (1.0-delta)*K*numpy.log(l2*ad) )

    # estimate Poisson mixture by EM updates
    term = 100
    loglike = [old_loglike]
    counter = 0
    run = 1
    old_classification = numpy.round(delta)
    while run:
        new_delta, l1, l2, p = EM_update(Dk,K,n,d,ad,delta,l1,l2,p)
        #l1[numpy.where(l1<1E-12)[0]] = 1E-12
        #l2[numpy.where(l2<1E-12)[0]] = 1E-12
        new_loglike = numpy.sum( -p*l1*ad*((Dk**d)*new_delta)
            - (1-p)*l2*ad*((Dk**d)*(1.0-new_delta))
            + new_delta*K*numpy.log(l1*ad)
            + (1.0-new_delta)*K*numpy.log(l2*ad) )
        update = 100*(new_loglike - old_loglike)/new_loglike
        # Force at least 3 EM updates, quit after 50.
        # Quit if EM starts to oscillate.
        counter = counter + 1
        if counter < 3:
            run = 1
        elif counter > 50:
            run = 0
        elif old_loglike > new_loglike:
            run = 0
            classification = old_classification
            new_loglike = old_loglike; new_delta = delta
        elif update < convergence:
            run = 0
        old_loglike = new_loglike
        loglike.append(new_loglike)
        delta = new_delta
    if l2 > l1:
        delta = 1.0 - delta
    idx, = numpy.where(delta < 1E-6)
    delta[idx] = 1E-6
    signal = numpy.where(delta > .5)[0]
    noise = numpy.where(delta <= .5)[0]
    # done
    return signal, noise, delta, loglike


#Single EM update
def EM_update(Dk,K,n,d,ad,delta,l1,l2,p):
    factorial = lambda K : numpy.arange(1.0,K+1.0).prod() if K > 1 else 1.0
    pdf = lambda x,K,l,d: numpy.exp(-l*(x**d)) * 2 * (l**K) * x**(d*K - 1) / factorial(K-1)
    # E-step
    P1 = pdf(Dk,K,l1*ad,d)
    P2 = pdf(Dk,K,l2*ad,d)
    index, = numpy.where((P1 == 0.0) * (P2 == 0.0))
    delta[index] = .5
    index, = numpy.where((P1 != 0.0) + (P2 != 0.0))
    delta[index] = p*P1[index]/(p*P1[index] + (1.0-p)*P2[index])
    # M-step
    l1 = K*numpy.sum(delta)/(ad*numpy.sum((Dk**d)*delta))
    l2 = K*numpy.sum(1.0-delta)/(ad*numpy.sum((Dk**d)*(1.0-delta)))
    p = numpy.sum(delta)/n
    return delta, l1, l2, p
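(A usage sketch to make the cKDTree plumbing concrete. The data here are
invented, and K=5 follows the docstring's suggestion:

    import numpy as np
    from scipy.spatial import cKDTree

    # hypothetical 2-D point set: a tight cluster plus uniform clutter
    cluster = 0.1 * np.random.randn(200, 2)
    clutter = np.random.uniform(-3.0, 3.0, size=(400, 2))
    points = np.vstack((cluster, clutter))

    K = 5
    tree = cKDTree(points)
    # k=K+1 because the nearest "neighbour" of each point is itself
    dist, idx = tree.query(points, k=K + 1, p=2)
    Dk = dist[:, K]   # distance to the K-th true neighbour

    signal, noise, delta, loglike = nnclean(K, Dk, ndim=2)
)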
As such, I wonder whether it belongs more in with the clustering/machine learning code? One nice aspect of having scipy.spatial is that other scipy code can use it regardless of where that code actually is. Anne From sturla at molden.no Fri Jan 9 13:30:01 2009 From: sturla at molden.no (Sturla Molden) Date: Fri, 9 Jan 2009 19:30:01 +0100 (CET) Subject: [SciPy-dev] denoising spatial point process data In-Reply-To: References: <5e12a746973e53ef7bcce833fe7bbf1a.squirrel@webmail.uio.no> Message-ID: <4082c0daadf2fb994b4c526108b15d9f.squirrel@webmail.uio.no> > 2009/1/9 Sturla Molden : > As such, I wonder > whether it belongs more in with the clustering/machine learning code? It is not correct to call it a 'clustering' technique, but machine learning is an acceptable label. Similar to many clustering techniques (e.g. k-means) it fits a mixture model using the EM agorithm. It does not fit clusters, but separates target from clutter, based on the idea that clutter points will be widely scattered without adjacent neighbours. As for clustering: A clustering technique related to nnclean is 'superparamagnetic clustering'. It uses KNN distances and some form of MCMC (Swendsen-Wang or Wolff's algorithm). It is to my knowledge the only clustering method that is impervious to initialization, oblivious to the number of clusters in advance, can fit clusters of arbitrary shape, as well as guaranteed to converge to the globally correct solution. I have an implementation of that as well (it needs some more testing). Unfortunately it seems to be protected by a patent. I am not a lawyer, but it seems strange that a pure numerical method can be patented. At least in Europe, patents must include some sort of physical action, not just plain mathematics. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.77.5569 http://www.wikipatents.com/6021383.html From robert.kern at gmail.com Fri Jan 9 16:28:46 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 9 Jan 2009 15:28:46 -0600 Subject: [SciPy-dev] denoising spatial point process data In-Reply-To: <4082c0daadf2fb994b4c526108b15d9f.squirrel@webmail.uio.no> References: <5e12a746973e53ef7bcce833fe7bbf1a.squirrel@webmail.uio.no> <4082c0daadf2fb994b4c526108b15d9f.squirrel@webmail.uio.no> Message-ID: <3d375d730901091328h40b869adk41fe31cd04f40225@mail.gmail.com> On Fri, Jan 9, 2009 at 12:30, Sturla Molden wrote: >> 2009/1/9 Sturla Molden : > >> As such, I wonder >> whether it belongs more in with the clustering/machine learning code? > > It is not correct to call it a 'clustering' technique, but machine > learning is an acceptable label. Similar to many clustering techniques > (e.g. k-means) it fits a mixture model using the EM agorithm. It does not > fit clusters, but separates target from clutter, based on the idea that > clutter points will be widely scattered without adjacent neighbours. > > As for clustering: > > A clustering technique related to nnclean is 'superparamagnetic > clustering'. It uses KNN distances and some form of MCMC (Swendsen-Wang or > Wolff's algorithm). It is to my knowledge the only clustering method that > is impervious to initialization, oblivious to the number of clusters in > advance, can fit clusters of arbitrary shape, as well as guaranteed to > converge to the globally correct solution. I have an implementation of > that as well (it needs some more testing). Unfortunately it seems to be > protected by a patent. I am not a lawyer, but it seems strange that a pure > numerical method can be patented. 
> At least in Europe, patents must include
> some sort of physical action, not just plain mathematics.
>
> http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.77.5569
> http://www.wikipatents.com/6021383.html

In the US, algorithms have been patentable; the patents just need to be
written such that they refer to running the algorithm on a computing
device. A recent court decision has suggested that software patents
that only need a general computer rather than a more specific kind of
hardware may not be patentable, but we'll see how that holds up. Until
that decision gets some more support, I think we're still going to
avoid patented algorithms in scipy, though.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From sturla at molden.no  Fri Jan  9 16:44:49 2009
From: sturla at molden.no (Sturla Molden)
Date: Fri, 9 Jan 2009 22:44:49 +0100 (CET)
Subject: [SciPy-dev] denoising spatial point process data
In-Reply-To: <3d375d730901091328h40b869adk41fe31cd04f40225@mail.gmail.com>
References: <5e12a746973e53ef7bcce833fe7bbf1a.squirrel@webmail.uio.no>
	<4082c0daadf2fb994b4c526108b15d9f.squirrel@webmail.uio.no>
	<3d375d730901091328h40b869adk41fe31cd04f40225@mail.gmail.com>
Message-ID: 

> On Fri, Jan 9, 2009 at 12:30, Sturla Molden wrote:

> Until that
> decision gets some more support, I think we're still going to avoid
> patented algorithms in scipy, though.

That is what I thought as well. As far as I know, nnclean is not
covered by a patent, at least I have not found any. And there are
freely available implementations for S and R.

Sturla Molden

From pav at iki.fi  Sun Jan 11 12:42:37 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 11 Jan 2009 17:42:37 +0000 (UTC)
Subject: [SciPy-dev] Numerical Recipes (was tagging 0.7rc1 this weekend?)
References: <4947CA8C.40008@american.edu> <49613610.8060407@american.edu>
Message-ID: 

Sun, 04 Jan 2009 17:20:00 -0500, Alan G Isaac wrote:
>> On Tue, Dec 16, 2008 at 7:34 AM, Alan G Isaac
>> wrote:
>>> Travis O. is listed as writing this orthogonal.py in 2000 and doing
>>> bug fixes in 2003. Note that the reference to Numerical Recipes is
>>> one of two references documenting an algebraic recursion relation and
>>> is not referring to an implementation.
>>> http://www.fizyka.umk.pl/nrbook/c4-5.pdf
>>> http://www-lsp.ujf-grenoble.fr/recherche/a3t2/a3t2a2/bahram/biblio/abramowitz_and_stegun/page_774.htm
>>> I do not see any problem with the code.
>
> Fernando Perez wrote:
>> Other than the fact that these codes both use the same piece of
>> mathematics, I see no problem here whatsoever. It might be worth
>> clarifying in the docstring
>
> Am I correct that the documentation project does not give access to the
> module-level docstrings? I would have been happy to make this change in
> orthogonal.py but did not see how to add documentation via the project.

It does, here:

http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/

--
Pauli Virtanen

From aisaac at american.edu  Sun Jan 11 15:08:23 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Sun, 11 Jan 2009 15:08:23 -0500
Subject: [SciPy-dev] Numerical Recipes (was tagging 0.7rc1 this weekend?)
In-Reply-To: 
References: <4947CA8C.40008@american.edu> <49613610.8060407@american.edu>
Message-ID: <496A51B7.3070507@american.edu>

On 1/11/2009 12:42 PM Pauli Virtanen apparently wrote:
> http://docs.scipy.org/scipy/docs/scipy.special.orthogonal/

OK, thanks.
I cleaned that up a bit. Alan From nwagner at iam.uni-stuttgart.de Mon Jan 12 03:04:51 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 12 Jan 2009 09:04:51 +0100 Subject: [SciPy-dev] ERROR: Failure: ImportError (No module named convolve) Message-ID: >>> numpy.__version__ '1.3.0.dev6316' >>> scipy.__version__ '0.8.0.dev5437' ====================================================================== ERROR: Failure: ImportError (No module named convolve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/__init__.py", line 2, in import image File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/image/__init__.py", line 2, in from _image import * File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/image/_image.py", line 2, in import convolve ImportError: No module named convolve ---------------------------------------------------------------------- Ran 3572 tests in 58.251s FAILED (KNOWNFAIL=2, SKIP=17, errors=1) From chanley at stsci.edu Mon Jan 12 08:07:24 2009 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 12 Jan 2009 08:07:24 -0500 Subject: [SciPy-dev] ERROR: Failure: ImportError (No module named convolve) In-Reply-To: References: Message-ID: <496B408C.5090303@stsci.edu> Nils Wagner wrote: >>>> numpy.__version__ > '1.3.0.dev6316' >>>> scipy.__version__ > '0.8.0.dev5437' > > ====================================================================== > ERROR: Failure: ImportError (No module named convolve) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/loader.py", > line 364, in loadTestsFromName > addr.filename, addr.module) > File > "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", > line 39, in importFromPath > return self.importFromDir(dir_path, fqname) > File > "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", > line 84, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/__init__.py", > line 2, in > import image > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/image/__init__.py", > line 2, in > from _image import * > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/image/_image.py", > line 2, in > import convolve > ImportError: No module named convolve > > ---------------------------------------------------------------------- > Ran 3572 tests in 58.251s > > FAILED (KNOWNFAIL=2, SKIP=17, errors=1) > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev OK. This is my fault. I'll fix it this morning. 
Chris

--
Christopher Hanley
Senior Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338

From nwagner at iam.uni-stuttgart.de  Mon Jan 12 08:24:28 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 12 Jan 2009 14:24:28 +0100
Subject: [SciPy-dev] ERROR: Failure: ImportError (No module named convolve)
In-Reply-To: <496B408C.5090303@stsci.edu>
References: <496B408C.5090303@stsci.edu>
Message-ID: 

On Mon, 12 Jan 2009 08:07:24 -0500
 Christopher Hanley wrote:
> Nils Wagner wrote:
>>>>> numpy.__version__
>> '1.3.0.dev6316'
>>>>> scipy.__version__
>> '0.8.0.dev5437'
>>
>> ======================================================================
>> ERROR: Failure: ImportError (No module named convolve)
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName
>>     addr.filename, addr.module)
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", line 39, in importFromPath
>>     return self.importFromDir(dir_path, fqname)
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", line 84, in importFromDir
>>     mod = load_module(part_fqname, fh, filename, desc)
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/__init__.py", line 2, in
>>     import image
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/image/__init__.py", line 2, in
>>     from _image import *
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/stsci/image/_image.py", line 2, in
>>     import convolve
>> ImportError: No module named convolve
>>
>> ----------------------------------------------------------------------
>> Ran 3572 tests in 58.251s
>>
>> FAILED (KNOWNFAIL=2, SKIP=17, errors=1)
>>
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
> OK. This is my fault. I'll fix it this morning.
>
> Chris
>
>
> --
> Christopher Hanley
> Senior Systems Software Engineer
> Space Telescope Science Institute
> 3700 San Martin Drive
> Baltimore MD, 21218
> (410) 338-4338
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

Hi Chris,

Thank you for your response. Could you please also close the
corresponding ticket:

http://projects.scipy.org/scipy/scipy/ticket/841

Cheers,

Nils

From daniel.wheeler2 at gmail.com  Mon Jan 12 14:59:52 2009
From: daniel.wheeler2 at gmail.com (Daniel Wheeler)
Date: Mon, 12 Jan 2009 14:59:52 -0500
Subject: [SciPy-dev] [SciPy-user] Scipy 0.7, weave, windows
In-Reply-To: <5b8d13220901090431u7b9b0de4n4ce26d187fc5e20d@mail.gmail.com>
References: <49661969.40905@ar.media.kyoto-u.ac.jp>
	<80b160a0901080749r3d419de0vf9c7dd65c508ec31@mail.gmail.com>
	<5b8d13220901080815r1eb9b82r19c93a56e79e559b@mail.gmail.com>
	<80b160a0901081200v1745d5dch9f3198a86d1ab18f@mail.gmail.com>
	<5b8d13220901090431u7b9b0de4n4ce26d187fc5e20d@mail.gmail.com>
Message-ID: <80b160a0901121159m4f1695e9o1c4b8cca7f95033a@mail.gmail.com>

Hi David,

I ran all the weave tests in FiPy (on windows) with the binary posted
and I get absolutely no errors.
I was a little confused by this because we generally get a mountain of
doctest errors the first time we run the weave tests. These are
associated with the "weave compiling" output noise. I deleted the
".python25_compiled" directory and I only get one doctest error
associated with the creation of that directory.

Anyway, the long and the short of it is that everything seems cool and,
in addition, weave is no longer puking that annoying "weave compiling"
noise that breaks all the doctests.

Cheers

On Fri, Jan 9, 2009 at 7:31 AM, David Cournapeau wrote:
> On Fri, Jan 9, 2009 at 5:00 AM, Daniel Wheeler
> wrote:
>> On Thu, Jan 8, 2009 at 11:15 AM, David Cournapeau wrote:
>>> On Fri, Jan 9, 2009 at 12:49 AM, Daniel Wheeler
>>> wrote:
>>>> On Thu, Jan 8, 2009 at 10:19 AM, David Cournapeau
>>>> wrote:
>>>>> Hi,
>>>>>
>>>>> I just did a full build/install/test dance of scipy 0.7 on windows,
>>>>> and things look good - except weave, which brings 205 errors when the
>>>>> full test suite is run. Do people use weave on windows ?
>>>>
>>>> Yes. Our test suite for fipy currently passes all its weave tests on
>>>> windows with python 2.5 and scipy version 0.6.0 and that includes a
>>>> lot of auto generated weave code.
>>>
>>> Thanks for the info. Would you mind testing it with scipy 0.7.x branch
>>> ? There are some test failures which showed some old code which could
>>> not have worked (like using python code which was removed from python
>>> svn 5 years ago), but as I am not a weave user myself, I can't really
>>> assess what's significant and what's not.
>>>
>>> I could make a binary installer if that makes it easier for you to test,
>>
>> That would be great if you have it set to build quickly and easily.
>> Don't fancy figuring out how to build scipy on windows. Cheers.
>
> No need to worry, I am the one who coded the tools for the windows
> binary installer, so hopefully I am still familiar with it :)
>
> Here we are:
>
> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/scipy/scipy-0.7.0.dev5410-win32-superpack-python2.5.exe
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Daniel Wheeler

From millman at berkeley.edu  Mon Jan 12 22:34:33 2009
From: millman at berkeley.edu (Jarrod Millman)
Date: Mon, 12 Jan 2009 19:34:33 -0800
Subject: [SciPy-dev] 2009 Google Summer of Code
Message-ID: 

Google has announced that it will be running the Summer of Code and the
Google Highly Open Participation Contest again this summer. Due to the
current economic climate, the SoC will be smaller this year, but the
participant stipends will remain the same as last year.

SciPy has participated in the SoC for the last several years. Our
involvement in this program has been extremely productive. Last year
there were several NumPy/SciPy-related projects, including:

NumPy test framework enhancement
Alan McIntyre

Developing Cython for Easy NumPy Integration
Dag Sverre Seljebotn

Moving OpenOpt Forward
Dmitrey Kroshko

While it isn't certain that we will get any student proposals
accepted, it is important to start developing proposals now.
Please feel free to send ideas to the list or use the wiki: http://scipy.org/scipy/scipy/wiki/SummerofCodeIdeas For instance, here are a few ideas for SoC projects that I have been thinking about: - Port NumPy/SciPy to Python 3.0 - Date/time types http://projects.scipy.org/scipy/numpy/browser/trunk/doc/neps/datetime-proposal3.rst - integrate Jonathan Taylor's statistical models into scipy.stats. - improve datasource and integrate it into all the numpy/scipy io http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/_datasource.py - clean up, refactor scipy package structure: scipy.lib, scipy.misc, scipy.stsci - modernize, clean-up scipy.weave/numpy.f2py, integrate fast-vectorize Thanks, Jarrod From michael.abshoff at googlemail.com Mon Jan 12 22:45:32 2009 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Mon, 12 Jan 2009 19:45:32 -0800 Subject: [SciPy-dev] 2009 Google Summer of Code In-Reply-To: References: Message-ID: <496C0E5C.9020500@gmail.com> Jarrod Millman wrote: Hi, > Google has announced that it will be running Summer of Code and Google > Highly Open Participation Contest again this summer. Due to the > current economic climate, the SoC will be smaller this year. But the > participant stipends will remain the same amount as last year. > > SciPy has participated in the SoC for the last several years. Our > involvement in this program has been extremely productive. Last year > there were several NumPy/SciPy related projects including: > > NumPy test framework enhancement > Alan McIntyre > > Developing Cython for Easy NumPy Integration > Dag Sverre Seljebotn > > Moving OpenOpt Forward > Dmitrey Kroshko > > While it isn't certain that we will get any student proposals > accepted, it is important to start developing proposals now. Please > feel free to send ideas to the list or use the wiki: > http://scipy.org/scipy/scipy/wiki/SummerofCodeIdeas > > For instance, here are a few ideas for SoC projects that I have been > thinking about: > > - Port NumPy/SciPy to Python 3.0 This would certainly rock and be the top pick if I had any say in it. Depending on how my summer looks I would even be interested in doing this :) > - Date/time types > http://projects.scipy.org/scipy/numpy/browser/trunk/doc/neps/datetime-proposal3.rst > > - integrate Jonathan Taylor's statistical models into scipy.stats. 
>
> - improve datasource and integrate it into all the numpy/scipy io
> http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/_datasource.py
>
> - clean up, refactor scipy package structure:
> scipy.lib, scipy.misc, scipy.stsci
>
> - modernize, clean-up scipy.weave/numpy.f2py, integrate fast-vectorize
>
> Thanks,
> Jarrod

Cheers,

Michael

> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From nwagner at iam.uni-stuttgart.de  Tue Jan 13 05:45:03 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 13 Jan 2009 11:45:03 +0100
Subject: [SciPy-dev] IOError in fftpack/tests
Message-ID: 

>>> scipy.__version__
'0.8.0.dev5461'

======================================================================
ERROR: Failure: IOError ([Errno 2] No such file or directory:
'/data/home/nwagner/local/lib/python2.5/site-packages/scipy/fftpack/tests/test.mat')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName
    addr.filename, addr.module)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/fftpack/tests/test_real_transforms.py", line 11, in
    squeeze_me=True, struct_as_record=True, mat_dtype=True)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/io/matlab/mio.py", line 110, in loadmat
    MR = mat_reader_factory(file_name, appendmat, **kwargs)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/io/matlab/mio.py", line 67, in mat_reader_factory
    byte_stream = open(full_name, 'rb')
IOError: [Errno 2] No such file or directory:
'/data/home/nwagner/local/lib/python2.5/site-packages/scipy/fftpack/tests/test.mat'

From david at ar.media.kyoto-u.ac.jp  Tue Jan 13 10:58:40 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 14 Jan 2009 00:58:40 +0900
Subject: [SciPy-dev] IOError in fftpack/tests
In-Reply-To: 
References: 
Message-ID: <496CBA30.2070202@ar.media.kyoto-u.ac.jp>

Nils Wagner wrote:
>>>> scipy.__version__
>>>>
> '0.8.0.dev5461'
>

Forgot to add one test file, should be fixed,

David

From josef.pktd at gmail.com  Tue Jan 13 13:31:04 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 13 Jan 2009 13:31:04 -0500
Subject: [SciPy-dev] cut-and-paste error in scipy\io\matlab\mio5.py
Message-ID: <1cd32cbb0901131031v5cf002b7x6df48508326dd5c@mail.gmail.com>

  File "C:\Programs\Python25\lib\site-packages\scipy\io\matlab\mio5.py", line 502, in get_raw_array
    result = super(Mat5ObjectMatrixGetter, self).get_raw_array()
TypeError: super(type, obj): obj must be an instance or subtype of type

class Mat5FunctionMatrixGetter(Mat5CellMatrixGetter):
    def get_raw_array(self):
        result = super(Mat5ObjectMatrixGetter, self).get_raw_array()
        return MatlabFunction(result)

super(Mat5ObjectMatrixGetter

should be:

super(Mat5FunctionMatrixGetter
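Spelled out, the corrected method would read (the snippet above with
only the class name inside super() changed):

    class Mat5FunctionMatrixGetter(Mat5CellMatrixGetter):
        def get_raw_array(self):
            # super() must be given this class itself, not its sibling
            result = super(Mat5FunctionMatrixGetter, self).get_raw_array()
            return MatlabFunction(result)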
I guess this is covered by a test

Josef

From matthew.brett at gmail.com  Tue Jan 13 15:37:59 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 13 Jan 2009 15:37:59 -0500
Subject: [SciPy-dev] cut-and-paste error in scipy\io\matlab\mio5.py
In-Reply-To: <1cd32cbb0901131031v5cf002b7x6df48508326dd5c@mail.gmail.com>
References: <1cd32cbb0901131031v5cf002b7x6df48508326dd5c@mail.gmail.com>
Message-ID: <1e2af89e0901131237m631a867s25f78da102f13c@mail.gmail.com>

Hi,

> super(Mat5ObjectMatrixGetter
>
> should be:
> super(Mat5FunctionMatrixGetter

Thank you, that's a good spot.

> I guess this is covered by a test

I think you mean 'I guess this _should_ be covered by a test'. But
no, there are no tests for function reading and writing. It's just
that I don't have easy access to matlab anymore.

I'll make the change in trunk, and in the rc branch. I'll try and add
a test for this exact problem.

Thanks again,

Matthew

From josef.pktd at gmail.com  Tue Jan 13 16:03:32 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 13 Jan 2009 16:03:32 -0500
Subject: [SciPy-dev] cut-and-paste error in scipy\io\matlab\mio5.py
In-Reply-To: <1e2af89e0901131237m631a867s25f78da102f13c@mail.gmail.com>
References: <1cd32cbb0901131031v5cf002b7x6df48508326dd5c@mail.gmail.com>
	<1e2af89e0901131237m631a867s25f78da102f13c@mail.gmail.com>
Message-ID: <1cd32cbb0901131303y43ab8099y6831d53dda3dc53@mail.gmail.com>

On Tue, Jan 13, 2009 at 3:37 PM, Matthew Brett wrote:
> Hi,
>
>> super(Mat5ObjectMatrixGetter
>>
>> should be:
>> super(Mat5FunctionMatrixGetter
>
> Thank you, that's a good spot.
>
>> I guess this is covered by a test
>
> I think you mean 'I guess this _should_ be covered by a test'. But
> no, there are no tests for function reading and writing. It's just
> that I don't have easy access to matlab anymore.
>
> I'll make the change in trunk, and in the rc branch. I'll try and add
> a test for this exact problem.
>

I tried some more after this change to my local scipy, and I soon got
stuck again.
From matlab I just dumped the entire workspace, with > different structs and function handles, and I ended up with > TypeErrors, which I guess are actually not implemented types. Thanks again. Several of the newer readers are not tested properly. They need examples. Can you make small examples of the things that will not load, preferably as single variable .mat files, and post them somehow? See the matlab/io/tests/gen_mat5files.m for some examples of generating .mat files. Thanks a lot - it's very useful to have good feedback, Matthew From millman at berkeley.edu Wed Jan 14 01:36:47 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 13 Jan 2009 22:36:47 -0800 Subject: [SciPy-dev] SciPy 0.7.x branch frozen Message-ID: The SciPy 0.7.x branch is now frozen. The trunk remains open for development. Please do *not* make any commits to the branch from now until 0.7.0 final is released. If you have a bug fix you want included in the 0.7.x series, please commit it to the trunk and send an email to the dev-list asking for it to be reviewed and backported. I will tag 0.7.0rc1 tomorrow and, unless something unexpected turns up, the 0.7.0 final release will be tagged a little over one week later. Here is the release schedule: 1/13 0.7.x branch frozen 1/14 0.7.0rc1 tagged 1/15 0.7.0rc1 announced, binaries posted to sourceforge 1/24 0.7.0 tagged 1/26 0.7.0 announced, binaries posted to sourceforge A 0.7.1 release is scheduled for sometime in February (the exact date will depend on exactly what issues are reported and fixed with 0.7.0). Thanks to everyone who has helped get this release ready. 0.7 is a very significant release in terms of features, bug-fixes, testing, documentation, license compliance, and release process. I am looking forward to the 0.8 release and will send a separate email to start discussing the 0.8 development roadmap. Best, Jarrod From tom.grydeland at gmail.com Wed Jan 14 07:45:51 2009 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Wed, 14 Jan 2009 13:45:51 +0100 Subject: [SciPy-dev] Inheritance from poly1d (orthogonal.py) Message-ID: Hello, developers. In reference to the ticket I just submitted: http://scipy.org/scipy/scipy/ticket/844 I think the problem is with the class 'poly1d' in numpy.lib.polynomial. Several of the methods of this class (such as __mul__, __rmul__, integ, deriv) refer explicitly to 'poly1d' instead of self's class. In the same vein, I expect that __repr__ should use self's class instead of 'poly1d'. -- Tom Grydeland From nwagner at iam.uni-stuttgart.de Wed Jan 14 11:52:05 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Jan 2009 17:52:05 +0100 Subject: [SciPy-dev] No module named latexwriter Message-ID: Hi all, I tried to build the documentation by make html in numpy/doc I am using OpenSuse11.1 on a x86_64 box. ~/svn/numpy/doc> make html mkdir -p build ./sphinxext/autosummary_generate.py source/reference/*.rst \ -p dump.xml -o source/reference/generated /home/nwagner/svn/numpy/doc/sphinxext/autosummary.py:59: DeprecationWarning: The sphinx.builder module is deprecated; please import builders from the respective sphinx.builders submodules. 
import sphinx.addnodes, sphinx.roles, sphinx.builder Failed to import 'numpy.__array_priority__': Failed to import 'numpy.distutils.misc_util.get_numarray_include_dirs': Failed to import 'numpy.generic.__squeeze__': touch build/generate-stamp mkdir -p build/html build/doctrees LANG=C sphinx-build -b html -d build/doctrees source build/html /home/nwagner/svn/numpy/doc/sphinxext/autosummary.py:59: DeprecationWarning: The sphinx.builder module is deprecated; please import builders from the respective sphinx.builders submodules. import sphinx.addnodes, sphinx.roles, sphinx.builder Extension error: Could not import extension only_directives (exception: No module named latexwriter) make: *** [html] Fehler 1 How can I fix the problem ? Nils From pav at iki.fi Wed Jan 14 12:18:10 2009 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 14 Jan 2009 17:18:10 +0000 (UTC) Subject: [SciPy-dev] No module named latexwriter References: Message-ID: Wed, 14 Jan 2009 17:52:05 +0100, Nils Wagner wrote: > I tried to build the documentation by make html in numpy/doc > > I am using OpenSuse11.1 on a x86_64 box. [clip] > How can I fix the problem ? Are you using Sphinx version 0.5? The extensions are only tested with it. -- Pauli Virtanen From nwagner at iam.uni-stuttgart.de Wed Jan 14 12:24:22 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Jan 2009 18:24:22 +0100 Subject: [SciPy-dev] No module named latexwriter In-Reply-To: References: Message-ID: On Wed, 14 Jan 2009 17:18:10 +0000 (UTC) Pauli Virtanen wrote: > Wed, 14 Jan 2009 17:52:05 +0100, Nils Wagner wrote: >> I tried to build the documentation by make html in >>numpy/doc >> >> I am using OpenSuse11.1 on a x86_64 box. > [clip] >> How can I fix the problem ? > > Are you using Sphinx version 0.5? The extensions are >only tested with it. > Hi Pauli, I am using v0.6. sphinx-build Sphinx v0.6 Usage: /home/nwagner/local/bin/sphinx-build [options] sourcedir outdir [filenames...] Options: -b -- builder to use; default is html -a -- write all files; default is to only write new and changed files -E -- don't use a saved environment, always read all files -d -- path for the cached environment and doctree files (default: outdir/.doctrees) -c -- path where configuration file (conf.py) is located (default: same as sourcedir) -C -- use no config file at all, only -D options -D -- override a setting in configuration -A -- pass a value into the templates, for HTML builder -N -- do not do colored output -q -- no output on stdout, just warnings on stderr -Q -- no output at all, not even warnings -P -- run Pdb on exception Modi: * without -a and without filenames, write new and changed files. * with -a, write all files. * with filenames, write these. Nils From pav at iki.fi Thu Jan 15 19:31:21 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 16 Jan 2009 00:31:21 +0000 (UTC) Subject: [SciPy-dev] SciPy 0.7.x branch frozen References: Message-ID: Tue, 13 Jan 2009 22:36:47 -0800, Jarrod Millman wrote: > The SciPy 0.7.x branch is now frozen. The trunk remains open for > development. Please do *not* make any commits to the branch from now > until 0.7.0 final is released. If you have a bug fix you want included > in the 0.7.x series, please commit it to the trunk and send an email to > the dev-list asking for it to be reviewed and backported. I wrote a fix for #679, invalid values returned by special.struve: http://scipy.org/scipy/scipy/ticket/679 It was a bug in specfun. 
The fix is here: http://scipy.org/scipy/scipy/changeset/5465 http://scipy.org/scipy/scipy/changeset/5466 Tests are included. Might be nice to have also this in 0.7.0 if someone is willing to review it. If not, 0.7.1 would be probably OK, too. -- Pauli Virtanen From thouis at broad.mit.edu Thu Jan 15 20:47:35 2009 From: thouis at broad.mit.edu (Thouis (Ray) Jones) Date: Thu, 15 Jan 2009 20:47:35 -0500 Subject: [SciPy-dev] cut-and-paste error in scipy\io\matlab\mio5.py In-Reply-To: <1cd32cbb0901131303y43ab8099y6831d53dda3dc53@mail.gmail.com> References: <1cd32cbb0901131031v5cf002b7x6df48508326dd5c@mail.gmail.com> <1e2af89e0901131237m631a867s25f78da102f13c@mail.gmail.com> <1cd32cbb0901131303y43ab8099y6831d53dda3dc53@mail.gmail.com> Message-ID: <6c17e6f50901151747h7326f657p53ad0c8bc7f58171@mail.gmail.com> This may be useful for someone trying to write a patch for the opaque objects (tag 17, the one Josef is dealing with): https://www-old.cae.wisc.edu/pipermail/octave-maintainers/2007-May/002824.html Apparently these are workspaces attached to anonymous and nested functions. I've looked at the Octave code a bit (src/ls-mat5.cc) as a reference, and made some initial stabs at handling them in scipy, but no luck so far. Josef suggested that matlab objects without readers might be loaded as an unparsed chunk of data, which could then be written back out directly (rather than just punting them). That seems a reasonable solution to me. Ray Jones From tom.grydeland at gmail.com Fri Jan 16 04:01:03 2009 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Fri, 16 Jan 2009 10:01:03 +0100 Subject: [SciPy-dev] Inheritance from poly1d (orthogonal.py) In-Reply-To: References: Message-ID: On Wed, Jan 14, 2009 at 1:45 PM, Tom Grydeland wrote: > Hello, developers. > > In reference to the ticket I just submitted: > http://scipy.org/scipy/scipy/ticket/844 > > I think the problem is with the class 'poly1d' in > numpy.lib.polynomial. Several of the methods of this class (such as > __mul__, __rmul__, integ, deriv) refer explicitly to 'poly1d' instead > of self's class. > I've looked into this, and while I have a few possible solutions, I am not entirely decided on which is the correct one, perhaps because of my lack of experience with orthogonal polynomials. I know how to rewrite the numpy.poly1d class so that objects of derived classes also produce derived-class objects when they are multiplied with a scalar, differentiated, integrated etc. I do not know that this is the right approach, however. Although the product of an orthogonal polynomial and a scalar is another orthogonal polynomial, the same is not true for polynomials resulting from differentiation, integration, polynomial division etc. In short, it should probably be up to a derived class to decide which, if any, of these operations also result in an object of the derived class. I also know how to rewrite particular methods of the orthogonal.orthopoly1d class to have them produce objects of the appropriate class, but I suspect that this also runs the risk of producing orthogonal polynomials in inappropriate ways. I suspect the correct way to deal with this is to explicitly create orthopoly1d objects instead of the poly1d objects that arise from statements like "base * factor" in orthogonal.py. 
I will be happy to implement this, but somebody who understands orthogonal polynomials (and their scipy implementation) better than me must tell me which of their properties (e.g., weight function, roots and weights) are preserved, and how they change when they are not. The answer to this question determines how the changes must be made. So, is somebody up to explaining this to me? > In the same vein, I expect that __repr__ should use self's class > instead of 'poly1d'. This probably still holds true. -- Tom Grydeland From pav at iki.fi Fri Jan 16 15:46:51 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 16 Jan 2009 20:46:51 +0000 (UTC) Subject: [SciPy-dev] scipy.org down Message-ID: (Sorry for the noise if you already know this.) Hi, Scipy.org web server seems to be down. The machine itself seems to be up, though (apache dossed?): $ nc -v scipy.org 80 scipy.org [216.62.213.231] 80 (www) open GET [never returns anything] Let's see if the mailing list works... -- Pauli Virtanen From arokem at berkeley.edu Fri Jan 16 15:52:12 2009 From: arokem at berkeley.edu (Ariel Rokem) Date: Fri, 16 Jan 2009 12:52:12 -0800 Subject: [SciPy-dev] scipy.org down In-Reply-To: References: Message-ID: <2E0AF248-72EA-4B61-9011-1EAC9E1568C7@berkeley.edu> The mailing list seems to work. I have also had a similar experience with the website, though. Ariel On Jan 16, 2009, at 12:46 PM, Pauli Virtanen wrote: > (Sorry for the noise if you already know this.) > > Hi, > > Scipy.org web server seems to be down. The machine itself seems to > be up, > though (apache dossed?): > > $ nc -v scipy.org 80 > scipy.org [216.62.213.231] 80 (www) open > GET > [never returns anything] > > Let's see if the mailing list works... > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From robert.kern at gmail.com Fri Jan 16 15:57:42 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Jan 2009 14:57:42 -0600 Subject: [SciPy-dev] scipy.org down In-Reply-To: References: Message-ID: <3d375d730901161257v2e3d3ff1l3f73bbe26296eeca@mail.gmail.com> On Fri, Jan 16, 2009 at 14:46, Pauli Virtanen wrote: > (Sorry for the noise if you already know this.) > > Hi, > > Scipy.org web server seems to be down. The machine itself seems to be up, > though (apache dossed?): We're working on it. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pwang at enthought.com Fri Jan 16 16:07:20 2009 From: pwang at enthought.com (Peter Wang) Date: Fri, 16 Jan 2009 15:07:20 -0600 Subject: [SciPy-dev] scipy.org down In-Reply-To: <3d375d730901161257v2e3d3ff1l3f73bbe26296eeca@mail.gmail.com> References: <3d375d730901161257v2e3d3ff1l3f73bbe26296eeca@mail.gmail.com> Message-ID: <372D68AA-7A90-436F-AEB1-5DF18D31CA00@enthought.com> On Jan 16, 2009, at 2:57 PM, Robert Kern wrote: > On Fri, Jan 16, 2009 at 14:46, Pauli Virtanen wrote: >> (Sorry for the noise if you already know this.) >> >> Hi, >> >> Scipy.org web server seems to be down. The machine itself seems to >> be up, >> though (apache dossed?): > > We're working on it. Thank you. It should be back up. I restarted both the web and mail server processes. Thanks for the heads up! Please let me know if there are any other problems. 
-Peter

From xavier.gnata at gmail.com  Sat Jan 17 11:02:39 2009
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Sat, 17 Jan 2009 17:02:39 +0100
Subject: [SciPy-dev] FAIL: test_pbdv (test_basic.TestCephes)
Message-ID: <4972011F.10800@gmail.com>

Hi,

On kubuntu 64 bits, scipy svn: this issue is still open on my box:

FAIL: test_pbdv (test_basic.TestCephes)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 361, in test_pbdv
    assert_equal(cephes.pbdv(1,0),(0.0,0.0))
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 176, in assert_equal
    assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose)
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
item=1

 ACTUAL: 1.0
 DESIRED: 0.0

Can you reproduce that? What else should I test to help kill this bug?

Xavier

From david at ar.media.kyoto-u.ac.jp  Sun Jan 18 06:18:30 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 18 Jan 2009 20:18:30 +0900
Subject: [SciPy-dev] DCT naming conventions ?
Message-ID: <49731006.3050503@ar.media.kyoto-u.ac.jp>

Hi,

    I needed DCT transforms, so I finished implementing them in scipy
(both single and double prec):

http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py

    I followed the definitions of wikipedia/fftw, using the fftpack
implementations. I am not sure about naming conventions:
    - dct2 is the type II DCT, not the 2-dimensional DCT
(multidimensional dcts are not implemented - I don't need them myself)
    - dct3 and dct2 are inverses of each other (compared to the
dct/idct notation of matlab: dct2(x, norm='ortho') is the same as
matlab dct(x), and dct3(x, norm='ortho') is the same as matlab
idct(x)); a quick numerical check is sketched below.
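A quick numerical check of that inverse relationship (assuming the
dct2/dct3 names and the norm keyword exported by the branch linked
above):

    import numpy as np
    from scipy.fftpack.realtransforms import dct2, dct3

    x = np.random.rand(128)
    N = len(x)

    # unnormalized transforms invert each other only up to a factor 2*N
    np.testing.assert_array_almost_equal(dct3(dct2(x)) / (2.0 * N), x)

    # the 'ortho' normalized transforms are exact inverses
    np.testing.assert_array_almost_equal(
        dct3(dct2(x, norm='ortho'), norm='ortho'), x)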
> > Unless someone has a better idea/suggestion, I will put those > definitions in scipy.fftpack namespace in a couple of days, > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sun Jan 18 10:25:42 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 18 Jan 2009 17:25:42 +0200 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <49731006.3050503@ar.media.kyoto-u.ac.jp> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com> 2009/1/18 David Cournapeau : > I needed DCT transforms, so I finished implementing them in scipy > (both single and double prec): > > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py > > - dct3 and dct2 are inverse from each other (compared to dct/idct > notation of matlab; dct2(x, norm=ortho) is the same as matlab dct(x), > and dct3(x, norm='ortho') is the same as matlab idct. While this naming is accurate, it is confusing in relation to the other FFT functions. I prefer "dct" and "idct". Cheers St?fan From cournape at gmail.com Sun Jan 18 11:30:37 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 19 Jan 2009 01:30:37 +0900 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com> Message-ID: <5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com> On Mon, Jan 19, 2009 at 12:25 AM, St?fan van der Walt wrote: > 2009/1/18 David Cournapeau : >> I needed DCT transforms, so I finished implementing them in scipy >> (both single and double prec): >> >> http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py >> >> - dct3 and dct2 are inverse from each other (compared to dct/idct >> notation of matlab; dct2(x, norm=ortho) is the same as matlab dct(x), >> and dct3(x, norm='ortho') is the same as matlab idct. > > While this naming is accurate, it is confusing in relation to the > other FFT functions. I prefer "dct" and "idct". Calling them dct/idct can be confusing as well: dct3(dct2(x)) != x (because of normalization). And what to do for dct1/dct4 - in theory, there are 8 dct possible, and 8 dst as well (I have no use for dst, so I did not implement them, but they should follow at some point). Maybe the type could be an argument: dct(..., type=number) ? David From william.ratcliff at gmail.com Sun Jan 18 12:04:42 2009 From: william.ratcliff at gmail.com (william ratcliff) Date: Sun, 18 Jan 2009 12:04:42 -0500 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com> <5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com> Message-ID: <827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com> I like your arguments idea, dct(...,type=number) idct(...,type=number). maybe the type could default to 2 for dct and idct (with the implementation of idct for type 2 being dct of type3). This would make life easier for users. 
William On Sun, Jan 18, 2009 at 11:30 AM, David Cournapeau wrote: > On Mon, Jan 19, 2009 at 12:25 AM, St?fan van der Walt > wrote: > > 2009/1/18 David Cournapeau : > >> I needed DCT transforms, so I finished implementing them in scipy > >> (both single and double prec): > >> > >> > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py > >> > >> - dct3 and dct2 are inverse from each other (compared to dct/idct > >> notation of matlab; dct2(x, norm=ortho) is the same as matlab dct(x), > >> and dct3(x, norm='ortho') is the same as matlab idct. > > > > While this naming is accurate, it is confusing in relation to the > > other FFT functions. I prefer "dct" and "idct". > > Calling them dct/idct can be confusing as well: dct3(dct2(x)) != x > (because of normalization). And what to do for dct1/dct4 - in theory, > there are 8 dct possible, and 8 dst as well (I have no use for dst, so > I did not implement them, but they should follow at some point). > > Maybe the type could be an argument: dct(..., type=number) ? > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.grydeland at gmail.com Sun Jan 18 14:46:25 2009 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Sun, 18 Jan 2009 20:46:25 +0100 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com> <5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com> <827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com> Message-ID: On Sun, Jan 18, 2009 at 6:04 PM, william ratcliff wrote: > I like your arguments idea, > dct(...,type=number) > idct(...,type=number). +1 > maybe the type could default to 2 for dct and idct (with the implementation > of idct for type 2 being dct of type3). This would make life easier for > users. +1 Also, this naming makes it possible to give the 2D and higher-dimensionality DCTs names analogously to the FFT functions. > William >> David -- Tom Grydeland From david at ar.media.kyoto-u.ac.jp Sun Jan 18 23:09:07 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 19 Jan 2009 13:09:07 +0900 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <827183970901180658t1de30972g9e240cb883d4124f@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <827183970901180658t1de30972g9e240cb883d4124f@mail.gmail.com> Message-ID: <4973FCE3.4060008@ar.media.kyoto-u.ac.jp> william ratcliff wrote: > If it's not too hard, could you also add the multidimensional transform? Actually, it is a bit hard to do it; not so much the multidimensional aspect itself, but the handling of arbitrary shape and axis (it took me a while to fix and test related bugs for multidimensional fft). Since I just need 1d DCT for myself, and not so much time ATM, I unfortunately can't spend too much time on it, cheers, David From cournape at gmail.com Mon Jan 19 03:39:25 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 19 Jan 2009 17:39:25 +0900 Subject: [SciPy-dev] DCT naming conventions ? 
In-Reply-To: 
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
Message-ID: <5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>

On Mon, Jan 19, 2009 at 4:46 AM, Tom Grydeland wrote:
> On Sun, Jan 18, 2009 at 6:04 PM, william ratcliff
> wrote:
>> I like your arguments idea,
>> dct(...,type=number)
>> idct(...,type=number).
>
> +1
>
>> maybe the type could default to 2 for dct and idct (with the implementation
>> of idct for type 2 being dct of type3). This would make life easier for
>> users.
>
> +1
>
> Also, this naming makes it possible to give the 2D and
> higher-dimensionality DCTs names analogously to the FFT functions.
>

Ok, I implemented the above scheme for DCT I/II/III, the docstring is:

def dct(x, type=2, n=None, axis=-1, norm=None):
    """
    Return the Discrete Cosine Transform of arbitrary type sequence x.

    Parameters
    ----------
    x : array-like
        input array.
    type : {1, 2, 3}
        type of the DCT (see Notes).
    n : int, optional
        Length of the transform.
    axis : int, optional
        axis over which to compute the transform.
    norm : {None, 'ortho'}
        normalization mode (see Notes).

    Returns
    -------
    y : real ndarray

    Notes
    -----
    For a single dimension array x, dct(x, norm='ortho') is equal to
    matlab dct(x).

    There are theoretically 8 types of the DCT; only the first 3 types
    are implemented in scipy. 'The' DCT generally refers to DCT type 2,
    and 'the' Inverse DCT generally refers to DCT type 3.

    type I
    ~~~~~~
    There are several definitions of the DCT-I; we use the following
    (for norm=None), for 0 <= k < N:

                                             N-2
      y[k] = x[0] + (-1)**k x[N-1] + 2 * sum x[n]*cos(pi*k*n/(N-1))
                                             n=1

    type II
    ~~~~~~~
    There are several definitions of the DCT-II; we use the following
    (for norm=None):

                     N-1
      y[k] = 2 * sum x[n]*cos(pi*k*(2n+1)/(2*N)),   0 <= k < N.
                     n=0

    If norm='ortho', y[k] is multiplied by a scaling factor f:

      f = sqrt(1/(4*N)) if k = 0
      f = sqrt(1/(2*N)) otherwise

    which makes the corresponding matrix of coefficients orthonormal
    (OO' = Id).

    type III
    ~~~~~~~~
    There are several definitions; we use the following (for norm=None):

                            N-1
      y[k] = x[0] + 2 * sum x[n]*cos(pi*(k+0.5)*n/N),   0 <= k < N.
                            n=1

    Or (for norm='ortho'), for 0 <= k < N:

                                              N-1
      y[k] = x[0]/sqrt(N) + sqrt(2/N) * sum x[n]*cos(pi*(k+0.5)*n/N)
                                              n=1

    The (unnormalized) DCT-III is the inverse of the (unnormalized)
    DCT-II, up to a factor 2*N. The orthonormalized DCT-III is exactly
    the inverse of the orthonormalized DCT-II.

    References
    ----------
    http://en.wikipedia.org/wiki/Discrete_cosine_transform

    'A Fast Cosine Transform in One and Two Dimensions', by J. Makhoul,
    in IEEE Transactions on Acoustics, Speech and Signal Processing.
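The companion inverse wrapper follows directly from the type
relationships in the Notes; as a minimal sketch (hypothetical, not
necessarily the code committed in the branch):

    def idct(x, type=2, n=None, axis=-1, norm=None):
        # The inverse of a type 2 DCT is a type 3 DCT and vice versa,
        # and the DCT-I is its own inverse. For norm=None the result
        # still carries the 2*N (or 2*(N-1) for type 1) scaling factor.
        _inverse_type = {1: 1, 2: 3, 3: 2}
        return dct(x, type=_inverse_type[type], n=n, axis=axis, norm=norm)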
From stefan at sun.ac.za  Mon Jan 19 03:57:58 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 19 Jan 2009 10:57:58 +0200
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: <5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
Message-ID: <9457e7c80901190057t3e4cef98u16c34bb07a2dacef@mail.gmail.com>

2009/1/19 David Cournapeau :
> Ok, I implemented the above scheme for DCT I/II/III, the docstring is:
>
> def dct(x, type=2, n=None, axis=-1, norm=None):

I think it would make sense to add a wrapper to idct too, since this
is such a common use case.

Cheers
St?fan

From david at ar.media.kyoto-u.ac.jp  Mon Jan 19 03:45:56 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 19 Jan 2009 17:45:56 +0900
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: <9457e7c80901190057t3e4cef98u16c34bb07a2dacef@mail.gmail.com>
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
	<9457e7c80901190057t3e4cef98u16c34bb07a2dacef@mail.gmail.com>
Message-ID: <49743DC4.70503@ar.media.kyoto-u.ac.jp>

St?fan van der Walt wrote:
> 2009/1/19 David Cournapeau :
>
>> Ok, I implemented the above scheme for DCT I/II/III, the docstring is:
>>
>> def dct(x, type=2, n=None, axis=-1, norm=None):
>>
>
> I think it would make sense to add a wrapper to idct too, since this
> is such a common use case.
>

Great minds think alike :) It is already implemented, if I understand
you correctly:

http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py

David

From stefan at sun.ac.za  Mon Jan 19 05:42:44 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 19 Jan 2009 12:42:44 +0200
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: <49743DC4.70503@ar.media.kyoto-u.ac.jp>
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
	<9457e7c80901190057t3e4cef98u16c34bb07a2dacef@mail.gmail.com>
	<49743DC4.70503@ar.media.kyoto-u.ac.jp>
Message-ID: <9457e7c80901190242g302a87b9i59111ae14222ad07@mail.gmail.com>

2009/1/19 David Cournapeau :
> St?fan van der Walt wrote:
>> 2009/1/19 David Cournapeau :
>>
>>> Ok, I implemented the above scheme for DCT I/II/III, the docstring is:
>>>
>>> def dct(x, type=2, n=None, axis=-1, norm=None):
>>>
>>
>> I think it would make sense to add a wrapper to idct too, since this
>> is such a common use case.
>>
>
> Great minds think alike :) It is already implemented, if I understand
> you correctly:
>
> http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py

Perfect, thanks!
St?fan

From pav at iki.fi  Mon Jan 19 05:55:10 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 19 Jan 2009 10:55:10 +0000 (UTC)
Subject: [SciPy-dev] DCT naming conventions ?
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
Message-ID: 

Mon, 19 Jan 2009 17:39:25 +0900, David Cournapeau wrote:
[clip]
>     (for norm=None), for 0 <= k < N:
>
>                                              N-2
>       y[k] = x[0] + (-1)**k x[N-1] + 2 * sum x[n]*cos(pi*k*n/(N-1))
>                                              n=1

This seems to be a use case for the math:: directive. Please use it, or
make the previous line end with :: so that the text renders as
preformatted text.

	Thanks,

--
Pauli Virtanen

From tom.grydeland at gmail.com  Mon Jan 19 06:13:43 2009
From: tom.grydeland at gmail.com (Tom Grydeland)
Date: Mon, 19 Jan 2009 12:13:43 +0100
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: 
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
Message-ID: 

On Mon, Jan 19, 2009 at 11:55 AM, Pauli Virtanen wrote:
> Mon, 19 Jan 2009 17:39:25 +0900, David Cournapeau wrote:
> [clip]
>>     (for norm=None), for 0 <= k < N:
>>
>>                                              N-2
>>       y[k] = x[0] + (-1)**k x[N-1] + 2 * sum x[n]*cos(pi*k*n/(N-1))
>>                                              n=1
>
> This seems to be a use case for the math:: directive. Please use it, or
> make the previous line end with :: so that the text renders as
> preformatted text.

I thought so also. I'll volunteer to do it if David is too busy.

> 	Thanks,
> Pauli Virtanen

--
Tom Grydeland

From Scott.Daniels at Acm.Org  Mon Jan 19 16:00:56 2009
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Mon, 19 Jan 2009 13:00:56 -0800
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: <5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
Message-ID: 

David Cournapeau wrote:
> ... def dct(x, type=2, n=None, axis=-1, norm=None):

maybe this is bike-shedding, but wouldn't

    def dct(x, type=2, n=None, axis=-1, normalize=False)

work, where any false value means no normalization and any true value
means ortho normalization? Or do you foresee non-ortho normalizations
being implemented?

--Scott David Daniels
Scott.Daniels at Acm.Org
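To illustrate the suggestion, the DCT-I definition quoted above would
read as follows under the math:: directive (a sketch of the reST markup
only, nothing committed yet):

    .. math::

       y_k = x_0 + (-1)^k x_{N-1}
             + 2 \sum_{n=1}^{N-2} x_n \cos\left( \frac{\pi k n}{N - 1} \right),
       \qquad 0 \le k < N.

Sphinx then renders the formula as an image (e.g. via the pngmath
extension) instead of relying on the ASCII layout surviving reflowing.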
From david at ar.media.kyoto-u.ac.jp  Mon Jan 19 22:55:35 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 20 Jan 2009 12:55:35 +0900
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: 
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
Message-ID: <49754B37.3060402@ar.media.kyoto-u.ac.jp>

Scott David Daniels wrote:
> David Cournapeau wrote:
>
>> ... def dct(x, type=2, n=None, axis=-1, norm=None):
>>
> maybe this is bike-shedding, but wouldn't
>
>     def dct(x, type=2, n=None, axis=-1, normalize=False)
>
> work, where any false value means no normalization and any true value
> means ortho normalization? Or do you foresee non-ortho normalizations
> being implemented?
>

Yes, several normalization schemes are possible - it is mostly a matter
of convention. I implemented one normalization scheme to get the same
results as matlab, which I needed for my case.

cheers,

David

From cournape at gmail.com  Tue Jan 20 04:17:41 2009
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 20 Jan 2009 18:17:41 +0900
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: 
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
	<9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com>
	<5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com>
	<827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com>
	<5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com>
Message-ID: <5b8d13220901200117s75a6033fgcd25d5e941aed016@mail.gmail.com>

On Mon, Jan 19, 2009 at 8:13 PM, Tom Grydeland wrote:
> On Mon, Jan 19, 2009 at 11:55 AM, Pauli Virtanen wrote:
>> Mon, 19 Jan 2009 17:39:25 +0900, David Cournapeau wrote:
>> [clip]
>>>     (for norm=None), for 0 <= k < N:
>>>
>>>                                              N-2
>>>       y[k] = x[0] + (-1)**k x[N-1] + 2 * sum x[n]*cos(pi*k*n/(N-1))
>>>                                              n=1
>>
>> This seems to be a use case for the math:: directive. Please use it, or
>> make the previous line end with :: so that the text renders as
>> preformatted text.
>
> I thought so also. I'll volunteer to do it if David is too busy.

Sure, go ahead. I wrote the formula mostly for reference because there
are so many variations in texts - I did not pay too much attention to
the actual formatting and spelling,

David

From tom.grydeland at gmail.com  Wed Jan 21 04:49:32 2009
From: tom.grydeland at gmail.com (Tom Grydeland)
Date: Wed, 21 Jan 2009 10:49:32 +0100
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: <49731006.3050503@ar.media.kyoto-u.ac.jp>
References: <49731006.3050503@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Sun, Jan 18, 2009 at 12:18 PM, David Cournapeau
wrote:
> Hi,
>
>     I needed DCT transforms, so I finished implementing them in scipy
> (both single and double prec):
>
> http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/fftpack/realtransforms.py

Any reason why these would not belong in numpy?

> David

--
Tom Grydeland

From cournape at gmail.com  Wed Jan 21 05:13:19 2009
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 21 Jan 2009 19:13:19 +0900
Subject: [SciPy-dev] DCT naming conventions ?
In-Reply-To: <5b8d13220901200117s75a6033fgcd25d5e941aed016@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <9457e7c80901180725l70e1fa8di1ed7b035cd6e00a7@mail.gmail.com> <5b8d13220901180830t62716f58o4b7d86d26eb6bb5@mail.gmail.com> <827183970901180904r36264d97v2fe01075bd26a6ac@mail.gmail.com> <5b8d13220901190039l5757e35fsb95ccb22dfe8a12@mail.gmail.com> <5b8d13220901200117s75a6033fgcd25d5e941aed016@mail.gmail.com> Message-ID: On Tue, Jan 20, 2009 at 10:17 AM, David Cournapeau wrote: > On Mon, Jan 19, 2009 at 8:13 PM, Tom Grydeland wrote: >> I thought so also. I'll volunteer to do it if David is too busy. > Sure, go ahead. I wrote the formula mostly for reference because there > are so many variations in texts - I did not put too much attention to > the actual formatting and spelling, > > David Okay, I've formatted the maths (with an eye towards keeping it readable also as text) and added a link to the online version of the reference cited. Are the limits in the sums all correct now? -- Tom Grydeland From tom.grydeland at gmail.com Wed Jan 21 09:32:48 2009 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Wed, 21 Jan 2009 15:32:48 +0100 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <5b8d13220901210213u163f02dbq7aeb7dc716e45eff@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <5b8d13220901210213u163f02dbq7aeb7dc716e45eff@mail.gmail.com> Message-ID: On Wed, Jan 21, 2009 at 11:13 AM, David Cournapeau wrote: > - the code depends on the fftpack library, which is not in numpy. http://docs.scipy.org/numpy/docs/numpy.fft.fftpack/ > David -- Tom Grydeland From cournape at gmail.com Wed Jan 21 10:18:43 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 22 Jan 2009 00:18:43 +0900 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <5b8d13220901210213u163f02dbq7aeb7dc716e45eff@mail.gmail.com> Message-ID: <5b8d13220901210718u2ea69e1chae6f26e6c934c43e@mail.gmail.com> On Wed, Jan 21, 2009 at 11:32 PM, Tom Grydeland wrote: > On Wed, Jan 21, 2009 at 11:13 AM, David Cournapeau wrote: > >> - the code depends on the fftpack library, which is not in numpy. > > http://docs.scipy.org/numpy/docs/numpy.fft.fftpack/ Numpy only includes a light version of fftpack, as indicated in the package. It does not contain the code for any DCT nor any single precision code. David From tom.grydeland at gmail.com Wed Jan 21 10:22:07 2009 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Wed, 21 Jan 2009 16:22:07 +0100 Subject: [SciPy-dev] DCT naming conventions ? In-Reply-To: <5b8d13220901210718u2ea69e1chae6f26e6c934c43e@mail.gmail.com> References: <49731006.3050503@ar.media.kyoto-u.ac.jp> <5b8d13220901210213u163f02dbq7aeb7dc716e45eff@mail.gmail.com> <5b8d13220901210718u2ea69e1chae6f26e6c934c43e@mail.gmail.com> Message-ID: On Wed, Jan 21, 2009 at 4:18 PM, David Cournapeau wrote: > Numpy only includes a light version of fftpack, as indicated in the > package. It does not contain the code for any DCT nor any single > precision code. Ah, ok, thanks! That explains my confusion. Should the docstrings I wrote for numpy.fft and friends be carried over to scipy.fft? > David -- Tom Grydeland From nwagner at iam.uni-stuttgart.de Wed Jan 21 12:41:41 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Jan 2009 18:41:41 +0100 Subject: [SciPy-dev] manifold_learning Message-ID: Hi all, The example regression.py in manifold_learning requires openopt. 
Is it recommended to install Openopt from its new location http://www.openopt.org/Install ? python -i regression.py Traceback (most recent call last): File "regression.py", line 21, in from scikits.learn.machine.manifold_learning import regression File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/__init__.py", line 9, in # See the README file for information on usage and redistribution. File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/__init__.py", line 10, in # File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/geodesic_mds.py", line 14, in ImportError: No module named openopt.solvers.optimizers Cheers, Nils From matthieu.brucher at gmail.com Wed Jan 21 12:48:38 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 21 Jan 2009 18:48:38 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: Hi, I didn't try the new location, but if Dmitrey didn't delete the generic framework, everything should be fine ;) Matthieu 2009/1/21 Nils Wagner : > Hi all, > > The example regression.py in manifold_learning requires > openopt. > > Is it recommended to install Openopt from its new location > http://www.openopt.org/Install ? > > > python -i regression.py > Traceback (most recent call last): > File "regression.py", line 21, in > from scikits.learn.machine.manifold_learning import > regression > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/__init__.py", > line 9, in > # See the README file for information on usage and > redistribution. > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/__init__.py", > line 10, in > # > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/geodesic_mds.py", > line 14, in > ImportError: No module named openopt.solvers.optimizers > > > Cheers, > Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From nwagner at iam.uni-stuttgart.de Wed Jan 21 13:17:52 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Jan 2009 19:17:52 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: On Wed, 21 Jan 2009 18:48:38 +0100 Matthieu Brucher wrote: > Hi, > > I didn't try the new location, but if Dmitrey didn't >delete the > generic framework, everything should be fine ;) > > Matthieu Hi, O.k. I have installed openopt from the old location. python -i regression.py Traceback (most recent call last): File "regression.py", line 21, in from scikits.learn.machine.manifold_learning import regression ImportError: No module named learn.machine.manifold_learning Nils From matthieu.brucher at gmail.com Wed Jan 21 13:19:49 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 21 Jan 2009 19:19:49 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: 2009/1/21 Nils Wagner : > On Wed, 21 Jan 2009 18:48:38 +0100 > Matthieu Brucher wrote: >> Hi, >> >> I didn't try the new location, but if Dmitrey didn't >>delete the >> generic framework, everything should be fine ;) >> >> Matthieu > > Hi, > > O.k. I have installed openopt from the old location. 
> > python -i regression.py > Traceback (most recent call last): > File "regression.py", line 21, in > from scikits.learn.machine.manifold_learning import > regression > ImportError: No module named > learn.machine.manifold_learning It seems the scikits namespace is not recognized as such. Does the rest of scikits.learn work ? Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From nwagner at iam.uni-stuttgart.de Wed Jan 21 13:28:59 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Jan 2009 19:28:59 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: On Wed, 21 Jan 2009 19:19:49 +0100 Matthieu Brucher wrote: > 2009/1/21 Nils Wagner : >> On Wed, 21 Jan 2009 18:48:38 +0100 >> Matthieu Brucher wrote: >>> Hi, >>> >>> I didn't try the new location, but if Dmitrey didn't >>>delete the >>> generic framework, everything should be fine ;) >>> >>> Matthieu >> >> Hi, >> >> O.k. I have installed openopt from the old location. >> >> python -i regression.py >> Traceback (most recent call last): >> File "regression.py", line 21, in >> from scikits.learn.machine.manifold_learning import >> regression >> ImportError: No module named >> learn.machine.manifold_learning > > It seems the scikits namespace is not recognized as >such. Does the > rest of scikits.learn work ? > > Matthieu > -- > Information System Engineer, Ph.D. > Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and >http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher Hi, Now I have installed learn via python setup.py install --prefix=$HOME/local --single-version-externally-managed --record=/dev/null If I run python -i regression.py Traceback (most recent call last): File "regression.py", line 21, in from scikits.learn.machine.manifold_learning import regression File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/__init__.py", line 9, in # See the README file for information on usage and redistribution. File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/__init__.py", line 10, in # File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/geodesic_mds.py", line 15, in File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/cost_function/__init__.py", line 8, in # File "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/cost_function/cost_function.py", line 15, in File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/ctypeslib.py", line 111, in load_library raise e OSError: /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6-linux-x86_64.egg/scikits/learn/machine/manifold_learning/compression/cost_function/_cost_function.pyd: Kann die Shared-Object-Datei nicht ?ffnen: Ist kein Verzeichnis Who can shed some light on that problem ? 
Nils From josef.pktd at gmail.com Wed Jan 21 13:32:26 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 21 Jan 2009 13:32:26 -0500 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: <1cd32cbb0901211032l336581adl3fa3c5d4433c888b@mail.gmail.com> On Wed, Jan 21, 2009 at 1:19 PM, Matthieu Brucher wrote: > 2009/1/21 Nils Wagner : >> On Wed, 21 Jan 2009 18:48:38 +0100 >> Matthieu Brucher wrote: >>> Hi, >>> >>> I didn't try the new location, but if Dmitrey didn't >>>delete the >>> generic framework, everything should be fine ;) >>> >>> Matthieu >> >> Hi, >> >> O.k. I have installed openopt from the old location. >> >> python -i regression.py >> Traceback (most recent call last): >> File "regression.py", line 21, in >> from scikits.learn.machine.manifold_learning import >> regression >> ImportError: No module named >> learn.machine.manifold_learning > > It seems the scikits namespace is not recognized as such. Does the > rest of scikits.learn work ? > > Matthieu After I build it, there was no __init__py in the top scikits directory. Adding an empty __init__.py file solved the import error. Josef From matthieu.brucher at gmail.com Wed Jan 21 13:52:40 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 21 Jan 2009 19:52:40 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: > python -i regression.py > Traceback (most recent call last): > File "regression.py", line 21, in > from scikits.learn.machine.manifold_learning import > regression > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/__init__.py", > line 9, in > # See the README file for information on usage and > redistribution. > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/__init__.py", > line 10, in > # > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/geodesic_mds.py", > line 15, in > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/cost_function/__init__.py", > line 8, in > # > File > "build/bdist.linux-x86_64/egg/scikits/learn/machine/manifold_learning/compression/cost_function/cost_function.py", > line 15, in > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/ctypeslib.py", > line 111, in load_library > raise e > OSError: > /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6-linux-x86_64.egg/scikits/learn/machine/manifold_learning/compression/cost_function/_cost_function.pyd: > Kann die Shared-Object-Datei nicht ?ffnen: Ist kein > Verzeichnis > > Who can shed some light on that problem ? This is strange. I must have made a mistake when changing the extension, but even with Windows, it shouldn't have been a pyd extension. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From matthieu.brucher at gmail.com Wed Jan 21 14:06:41 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 21 Jan 2009 20:06:41 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: >> Who can shed some light on that problem ? > > This is strange. I must have made a mistake when changing the > extension, but even with Windows, it shouldn't have been a pyd > extension. No idea. I'm using load_library, it should be correct. 
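[A note on the load_library call mentioned here: the usual pattern is the
sketch below. The module and directory names are taken from the traceback
earlier in the thread; everything else is assumed.

    import os
    from numpy.ctypeslib import load_library

    # load_library() takes a bare library name plus a directory and
    # appends the platform-appropriate extension itself (.so on Linux,
    # .dll/.pyd on Windows), so the caller never spells out ".pyd".
    _cost_function = load_library('_cost_function',
                                  os.path.dirname(__file__))

On Linux this call should end up loading _cost_function.so.]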
Is there a _cost_function.so in /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6-linux-x86_64.egg/scikits/learn/machine/manifold_learning/compression/cost_function/ ? -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From nwagner at iam.uni-stuttgart.de Wed Jan 21 14:25:17 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Jan 2009 20:25:17 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: On Wed, 21 Jan 2009 20:06:41 +0100 Matthieu Brucher wrote: >>> Who can shed some light on that problem ? >> >> This is strange. I must have made a mistake when >>changing the >> extension, but even with Windows, it shouldn't have been >>a pyd >> extension. > > No idea. I'm using load_library, it should be correct. > Is there a _cost_function.so in > /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6-linux-x86_64.egg/scikits/learn/machine/manifold_learning/compression/cost_function/ > ? > > > -- > Information System Engineer, Ph.D. > Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and >http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev No. /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6.egg-info nwagner at linux-mogv:~/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6.egg-info> ll insgesamt 68 -rw-r--r-- 1 nwagner users 1 21. Jan 19:25 dependency_links.txt -rw-r--r-- 1 nwagner users 8 21. Jan 19:25 namespace_packages.txt -rw-r--r-- 1 nwagner users 686 21. Jan 19:25 PKG-INFO -rw-r--r-- 1 nwagner users 5 21. Jan 19:25 requires.txt -rw-r--r-- 1 nwagner users 15201 21. Jan 19:25 SOURCES.txt -rw-r--r-- 1 nwagner users 8 21. Jan 19:25 top_level.txt -rw-r--r-- 1 nwagner users 1 21. Jan 18:34 zip-safe Nils From matthieu.brucher at gmail.com Wed Jan 21 14:32:24 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 21 Jan 2009 20:32:24 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: 2009/1/21 Nils Wagner : > On Wed, 21 Jan 2009 20:06:41 +0100 > Matthieu Brucher wrote: >>>> Who can shed some light on that problem ? >>> >>> This is strange. I must have made a mistake when >>>changing the >>> extension, but even with Windows, it shouldn't have been >>>a pyd >>> extension. >> >> No idea. I'm using load_library, it should be correct. >> Is there a _cost_function.so in >> /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6-linux-x86_64.egg/scikits/learn/machine/manifold_learning/compression/cost_function/ >> ? >> >> >> -- >> Information System Engineer, Ph.D. >> Website: http://matthieu-brucher.developpez.com/ >> Blogs: http://matt.eifelle.com and >>http://blog.developpez.com/?blog=92 >> LinkedIn: http://www.linkedin.com/in/matthieubrucher >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > No. 
> > /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6.egg-info > > nwagner at linux-mogv:~/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6.egg-info> > ll > insgesamt 68 > -rw-r--r-- 1 nwagner users 1 21. Jan 19:25 > dependency_links.txt > -rw-r--r-- 1 nwagner users 8 21. Jan 19:25 > namespace_packages.txt > -rw-r--r-- 1 nwagner users 686 21. Jan 19:25 PKG-INFO > -rw-r--r-- 1 nwagner users 5 21. Jan 19:25 > requires.txt > -rw-r--r-- 1 nwagner users 15201 21. Jan 19:25 SOURCES.txt > -rw-r--r-- 1 nwagner users 8 21. Jan 19:25 > top_level.txt > -rw-r--r-- 1 nwagner users 1 21. Jan 18:34 zip-safe > > Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > in the compression/cost_function folder -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From nwagner at iam.uni-stuttgart.de Wed Jan 21 14:45:45 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Jan 2009 20:45:45 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: On Wed, 21 Jan 2009 20:32:24 +0100 Matthieu Brucher wrote: > 2009/1/21 Nils Wagner : >> On Wed, 21 Jan 2009 20:06:41 +0100 >> Matthieu Brucher wrote: >>>>> Who can shed some light on that problem ? >>>> >>>> This is strange. I must have made a mistake when >>>>changing the >>>> extension, but even with Windows, it shouldn't have been >>>>a pyd >>>> extension. >>> >>> No idea. I'm using load_library, it should be correct. >>> Is there a _cost_function.so in >>> /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6-linux-x86_64.egg/scikits/learn/machine/manifold_learning/compression/cost_function/ >>> ? >>> >>> >>> -- >>> Information System Engineer, Ph.D. >>> Website: http://matthieu-brucher.developpez.com/ >>> Blogs: http://matt.eifelle.com and >>>http://blog.developpez.com/?blog=92 >>> LinkedIn: http://www.linkedin.com/in/matthieubrucher >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >> >> No. >> >> /home/nwagner/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6.egg-info >> >> nwagner at linux-mogv:~/local/lib64/python2.6/site-packages/scikits.learn-0.0.0-py2.6.egg-info> >> ll >> insgesamt 68 >> -rw-r--r-- 1 nwagner users 1 21. Jan 19:25 >> dependency_links.txt >> -rw-r--r-- 1 nwagner users 8 21. Jan 19:25 >> namespace_packages.txt >> -rw-r--r-- 1 nwagner users 686 21. Jan 19:25 PKG-INFO >> -rw-r--r-- 1 nwagner users 5 21. Jan 19:25 >> requires.txt >> -rw-r--r-- 1 nwagner users 15201 21. Jan 19:25 >>SOURCES.txt >> -rw-r--r-- 1 nwagner users 8 21. Jan 19:25 >> top_level.txt >> -rw-r--r-- 1 nwagner users 1 21. Jan 18:34 zip-safe >> >> Nils >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > in the compression/cost_function folder > > -- > Information System Engineer, Ph.D. 
> Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and >http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev O.k. I have removed everything under site-packages and reinstalled learn. Now python -i regression.py Importing dataset swissroll.pickled Importing compressed dataset swissroll.isomap.pickled Regression using MLPLMR /home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/stats.py:1471: DeprecationWarning: scipy.stats.cov is deprecated; please update your code to use numpy.cov. Please note that: - numpy.cov rowvar argument defaults to true, not false - numpy.cov bias argument defaults to false, not true """, DeprecationWarning) Traceback (most recent call last): File "regression.py", line 56, in model = regressionalgo(data, coords, neighbors = 9) File "/home/nwagner/local/lib64/python2.6/site-packages/scikits/learn/machine/manifold_learning/regression/MLPLMR.py", line 38, in __init__ super(MLPLMR, self).__init__(points, coords, **kwargs) File "/home/nwagner/local/lib64/python2.6/site-packages/scikits/learn/machine/manifold_learning/regression/PLMR.py", line 36, in __init__ self.coords_field = RBF_field(coords, weight = 1) File "/home/nwagner/local/lib64/python2.6/site-packages/scikits/learn/machine/manifold_learning/stats/radial_basis_functions_field.py", line 29, in __init__ gaussian_kde.__init__(self, dataset = samples.T) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/kde.py", line 81, in __init__ self._compute_covariance() File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/kde.py", line 333, in _compute_covariance self.factor * self.factor) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/stats.py", line 1483, in cov m = m - mean(m,axis=0) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/stats.py", line 418, in mean axis=0, ddof=1).""") DeprecationWarning: scipy.stats.mean is deprecated; please update your code to use numpy.mean. Please note that: - numpy.mean axis argument defaults to None, not 0 - numpy.mean has a ddof argument to replace bias in a more general manner. scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, axis=0, ddof=1). Nils From matthieu.brucher at gmail.com Wed Jan 21 15:17:33 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 21 Jan 2009 21:17:33 +0100 Subject: [SciPy-dev] manifold_learning In-Reply-To: References: Message-ID: > python -i regression.py > Importing dataset swissroll.pickled > Importing compressed dataset swissroll.isomap.pickled > Regression using MLPLMR > /home/nwagner/local/lib64/python2.6/site-packages/scipy/stats/stats.py:1471: > DeprecationWarning: scipy.stats.cov is deprecated; please > update your code to use numpy.cov. (...) > DeprecationWarning: scipy.stats.mean is deprecated; please > update your code to use numpy.mean. > Please note that: > - numpy.mean axis argument defaults to None, not 0 > - numpy.mean has a ddof argument to replace bias in a > more general manner. > scipy.stats.mean(a, bias=True) can be replaced by > numpy.mean(x, > axis=0, ddof=1). It should be working now. It may be long to compute, but it should work. The first warning should indeed be fixed. Matthieu -- Information System Engineer, Ph.D. 
Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From pav at iki.fi Fri Jan 23 04:04:25 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 23 Jan 2009 09:04:25 +0000 (UTC) Subject: [SciPy-dev] Scipy Trac broken? Message-ID: Hi, Scipy Trac complains about "database disk image is malformed", at http://scipy.org/scipy/scipy/query?status=new&status=assigned&status=reopened&milestone=0.7.0 -- Pauli Virtanen From cournape at gmail.com Fri Jan 23 05:35:36 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 23 Jan 2009 19:35:36 +0900 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: References: Message-ID: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> On Fri, Jan 23, 2009 at 6:04 PM, Pauli Virtanen wrote: > Hi, > > Scipy Trac complains about "database disk image is malformed", at > > http://scipy.org/scipy/scipy/query?status=new&status=assigned&status=reopened&milestone=0.7.0 Works for me, FWIW, David From pav at iki.fi Fri Jan 23 06:45:56 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 23 Jan 2009 11:45:56 +0000 (UTC) Subject: [SciPy-dev] Scipy Trac broken? References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> Message-ID: Fri, 23 Jan 2009 19:35:36 +0900, David Cournapeau wrote: > On Fri, Jan 23, 2009 at 6:04 PM, Pauli Virtanen wrote: >> Hi, >> >> Scipy Trac complains about "database disk image is malformed", at >> >> http://scipy.org/scipy/scipy/query?status=new&status=assigned&status=reopened&milestone=0.7.0 > > Works for me, FWIW, Works for me too, now. A temporary glitch, then. Pauli From warren.weckesser at gmail.com Fri Jan 23 11:49:53 2009 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 23 Jan 2009 10:49:53 -0600 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> Message-ID: <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> I'm currently getting "Network Timeout" errors when I try to get to scipy.org. On Fri, Jan 23, 2009 at 5:45 AM, Pauli Virtanen wrote: > Fri, 23 Jan 2009 19:35:36 +0900, David Cournapeau wrote: > > > On Fri, Jan 23, 2009 at 6:04 PM, Pauli Virtanen wrote: > >> Hi, > >> > >> Scipy Trac complains about "database disk image is malformed", at > >> > >> > http://scipy.org/scipy/scipy/query?status=new&status=assigned&status=reopened&milestone=0.7.0 > > > > Works for me, FWIW, > > Works for me too, now. A temporary glitch, then. > > Pauli > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Jan 23 11:58:31 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 23 Jan 2009 17:58:31 +0100 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> Message-ID: On Fri, 23 Jan 2009 10:49:53 -0600 Warren Weckesser wrote: > I'm currently getting "Network Timeout" errors when I >try to get to > scipy.org. > svn access is also broken. 
Nils From pav at iki.fi Fri Jan 23 13:03:30 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 23 Jan 2009 18:03:30 +0000 (UTC) Subject: [SciPy-dev] scipy.org down Message-ID: Hi, As mentioned in the other thread, scipy.org is down. It was down also last Friday. A coincidence? -- Pauli Virtanen From josef.pktd at gmail.com Fri Jan 23 13:50:06 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 23 Jan 2009 13:50:06 -0500 Subject: [SciPy-dev] is PySparse required for scipy.maxentropy Message-ID: <1cd32cbb0901231050j255fa495ped9a6f89968bd542@mail.gmail.com> historic artifact or not? I saw a reference to PySparse in scipy.maxentropy.maxentutils.sparsefeatures. Is this still required or covered by scipy.sparse? Josef def sparsefeatures(f, x, format='csc_matrix'): """ Returns an Mx1 sparse matrix of non-zero evaluations of the scalar functions f_1,...,f_m in the list f at the point x. If format='ll_mat', the PySparse module (or a symlink to it) must be available in the Python site-packages/ directory. A trimmed-down version, patched for NumPy compatibility, is available in the SciPy sandbox/pysparse directory. """ m = len(f) if format == 'll_mat': import spmatrix sparsef = spmatrix.ll_mat(m, 1) From eric at enthought.com Fri Jan 23 15:13:32 2009 From: eric at enthought.com (eric jones) Date: Fri, 23 Jan 2009 14:13:32 -0600 Subject: [SciPy-dev] scipy.org down In-Reply-To: References: Message-ID: <5542DA26-7244-45FE-AAA3-DAFDCFC5E094@enthought.com> Peter is checking into it. thanks, eric On Jan 23, 2009, at 12:03 PM, Pauli Virtanen wrote: > Hi, > > As mentioned in the other thread, scipy.org is down. > > It was down also last Friday. A coincidence? > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From pwang at enthought.com Fri Jan 23 15:29:30 2009 From: pwang at enthought.com (Peter Wang) Date: Fri, 23 Jan 2009 14:29:30 -0600 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> Message-ID: On Jan 23, 2009, at 10:58 AM, Nils Wagner wrote: > On Fri, 23 Jan 2009 10:49:53 -0600 > Warren Weckesser wrote: >> I'm currently getting "Network Timeout" errors when I >> try to get to >> scipy.org. >> > > svn access is also broken. > > Nils It should all be back up again. All the Trac instances really overload the server, and that causes the intermittent connectivity that folks are seeing. Although I can browse tickets just fine, and see the 0.7.0 tickets all listed, Pauli's milestone query still returns the "database disk image is malformed" error for me, and in fact any attempt to use the Custom Query feature in the Trac causes this same error. I've made a copy of the trac.db file on the server and am poking around in it. -Peter From pwang at enthought.com Fri Jan 23 15:39:16 2009 From: pwang at enthought.com (Peter Wang) Date: Fri, 23 Jan 2009 14:39:16 -0600 Subject: [SciPy-dev] Scipy Trac broken? 
In-Reply-To: References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> Message-ID: On Jan 23, 2009, at 2:29 PM, Peter Wang wrote: > Although I can browse tickets just fine, and see the 0.7.0 tickets all > listed, Pauli's milestone query still returns the "database disk image > is malformed" error for me, and in fact any attempt to use the Custom > Query feature in the Trac causes this same error. I've made a copy of > the trac.db file on the server and am poking around in it. > > -Peter OK, I rebuilt the trac database and it seems to have fixed the corruption problem. Custom Queries work again. If you have edited a ticket in the last few hours, please go back and double-check that the rebuild didn't blow away any of your changes. Thanks, Peter From nwagner at iam.uni-stuttgart.de Fri Jan 23 15:47:00 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 23 Jan 2009 21:47:00 +0100 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> Message-ID: On Fri, 23 Jan 2009 17:58:31 +0100 "Nils Wagner" wrote: > On Fri, 23 Jan 2009 10:49:53 -0600 > Warren Weckesser wrote: >> I'm currently getting "Network Timeout" errors when I >>try to get to >> scipy.org. >> I receive an internal server error (500) if I try to open http://projects.scipy.org/scipy/scipy/roadmap Any idea ? Nils From robert.kern at gmail.com Fri Jan 23 16:16:07 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jan 2009 15:16:07 -0600 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> Message-ID: <3d375d730901231316g180f5df4g3f63145f3419ec65@mail.gmail.com> On Fri, Jan 23, 2009 at 14:47, Nils Wagner wrote: > On Fri, 23 Jan 2009 17:58:31 +0100 > "Nils Wagner" wrote: >> On Fri, 23 Jan 2009 10:49:53 -0600 >> Warren Weckesser wrote: >>> I'm currently getting "Network Timeout" errors when I >>>try to get to >>> scipy.org. > > I receive an internal server error (500) if I try to open > > http://projects.scipy.org/scipy/scipy/roadmap > > Any idea ? Works for me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pwang at enthought.com Fri Jan 23 16:17:10 2009 From: pwang at enthought.com (Peter Wang) Date: Fri, 23 Jan 2009 15:17:10 -0600 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: <3d375d730901231316g180f5df4g3f63145f3419ec65@mail.gmail.com> References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> <3d375d730901231316g180f5df4g3f63145f3419ec65@mail.gmail.com> Message-ID: <0821D94E-4A05-41AD-AE32-B988E5675C96@enthought.com> On Jan 23, 2009, at 3:16 PM, Robert Kern wrote: > On Fri, Jan 23, 2009 at 14:47, Nils Wagner > wrote: >> On Fri, 23 Jan 2009 17:58:31 +0100 >> "Nils Wagner" wrote: >>> On Fri, 23 Jan 2009 10:49:53 -0600 >>> Warren Weckesser wrote: >>>> I'm currently getting "Network Timeout" errors when I >>>> try to get to >>>> scipy.org. >> >> I receive an internal server error (500) if I try to open >> >> http://projects.scipy.org/scipy/scipy/roadmap >> >> Any idea ? > > Works for me. 
I have restarted the apache process; there was a stale FastCGI process still lingering around. It should work now. -Peter From pav at iki.fi Fri Jan 23 16:22:30 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 23 Jan 2009 21:22:30 +0000 (UTC) Subject: [SciPy-dev] Scipy Trac broken? References: <5b8d13220901230235j4c7a371bq24a6028d3177ea73@mail.gmail.com> <114880320901230849u60c19e21v3363bcc67285b872@mail.gmail.com> Message-ID: Fri, 23 Jan 2009 14:29:30 -0600, Peter Wang wrote: > On Jan 23, 2009, at 10:58 AM, Nils Wagner wrote: > >> On Fri, 23 Jan 2009 10:49:53 -0600 >> Warren Weckesser wrote: >>> I'm currently getting "Network Timeout" errors when I try to get to >>> scipy.org. >>> >>> >> svn access is also broken. >> >> Nils > > It should all be back up again. Thanks! -- Pauli Virtanen From millman at berkeley.edu Fri Jan 23 20:58:24 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 23 Jan 2009 17:58:24 -0800 Subject: [SciPy-dev] ANN: SciPy 0.7.0rc2 (release candidate) Message-ID: I'm pleased to announce the second release candidate for SciPy 0.7.0. Due to an issue with the Window's build scripts, the first release candidate wasn't announced. SciPy is a package of tools for science and engineering for Python. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. This release candidate comes almost one year after the 0.6.0 release and contains many new features, numerous bug-fixes, improved test coverage, and better documentation. Please note that SciPy 0.7.0rc2 requires Python 2.4 or greater and NumPy 1.2.0 or greater. For information, please see the release notes: http://sourceforge.net/project/shownotes.php?group_id=27747&release_id=655674 You can download the release from here: http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=655674 Thank you to everybody who contributed to this release. Enjoy, Jarrod Millman From strawman at astraw.com Fri Jan 23 22:44:57 2009 From: strawman at astraw.com (Andrew Straw) Date: Fri, 23 Jan 2009 19:44:57 -0800 Subject: [SciPy-dev] ANN: SciPy 0.7.0rc2 (release candidate) In-Reply-To: References: Message-ID: <497A8EB9.30800@astraw.com> I just made a comment to issue #226 ("sparse.lil_matrix doesn't support inplace operations with fancy indexing") that might be nice to address before the final 0.7 release if it's not too difficult: http://scipy.org/scipy/scipy/ticket/226#comment:5 Jarrod Millman wrote: > I'm pleased to announce the second release candidate for SciPy 0.7.0. > Due to an issue with the Window's build scripts, the first release > candidate wasn't announced. > > SciPy is a package of tools for science and engineering for Python. > It includes modules for statistics, optimization, integration, linear > algebra, Fourier transforms, signal and image processing, ODE solvers, > and more. > > This release candidate comes almost one year after the 0.6.0 release > and contains many new features, numerous bug-fixes, improved test > coverage, and better documentation. Please note that SciPy 0.7.0rc2 > requires Python 2.4 or greater and NumPy 1.2.0 or greater. > > For information, please see the release notes: > http://sourceforge.net/project/shownotes.php?group_id=27747&release_id=655674 > > You can download the release from here: > http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=655674 > > Thank you to everybody who contributed to this release. 
> 
> Enjoy, 
> 
> Jarrod Millman
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

From david at ar.media.kyoto-u.ac.jp  Sat Jan 24 00:59:49 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 24 Jan 2009 14:59:49 +0900
Subject: [SciPy-dev] ANN: SciPy 0.7.0rc2 (release candidate)
In-Reply-To: <497A8EB9.30800@astraw.com>
References: <497A8EB9.30800@astraw.com>
Message-ID: <497AAE55.9060106@ar.media.kyoto-u.ac.jp>

Andrew Straw wrote:
> I just made a comment to issue #226 ("sparse.lil_matrix doesn't support
> inplace operations with fancy indexing") that might be nice to address
> before the final 0.7 release if it's not too difficult:
>
> http://scipy.org/scipy/scipy/ticket/226#comment:5
>

I am not a user of sparse matrices: is this a serious issue ? I would
prefer postponing any non-critical issues to a 7.1 release at this stage,

David

From strawman at astraw.com  Sat Jan 24 01:30:00 2009
From: strawman at astraw.com (Andrew Straw)
Date: Fri, 23 Jan 2009 22:30:00 -0800
Subject: [SciPy-dev] ANN: SciPy 0.7.0rc2 (release candidate)
In-Reply-To: <497AAE55.9060106@ar.media.kyoto-u.ac.jp>
References: <497A8EB9.30800@astraw.com> <497AAE55.9060106@ar.media.kyoto-u.ac.jp>
Message-ID: <497AB568.9@astraw.com>

David, it's not a regression, so I wouldn't say it's critical. I just
thought that, given the band-aid nature of the present "solution", it
could perhaps be slightly improved if the solution was low energy... (I
stumbled over this while testing the 0.7 release candidate and thought
I'd at least bring it up.) 0.7 as-is would be mighty fine for me.

-Andrew

David Cournapeau wrote:
> Andrew Straw wrote:
>> I just made a comment to issue #226 ("sparse.lil_matrix doesn't support
>> inplace operations with fancy indexing") that might be nice to address
>> before the final 0.7 release if it's not too difficult:
>>
>> http://scipy.org/scipy/scipy/ticket/226#comment:5
>>
>
> I am not a user of sparse matrices: is this a serious issue ? I would
> prefer postponing any non-critical issues to a 7.1 release at this stage,
>
> David
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

From david at ar.media.kyoto-u.ac.jp  Sat Jan 24 02:58:57 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 24 Jan 2009 16:58:57 +0900
Subject: [SciPy-dev] C++ in scipy: enforcing extension
Message-ID: <497ACA41.4080100@ar.media.kyoto-u.ac.jp>

Hi,

    I noticed that several scipy modules now use C++ (at least
scipy.sparse and scipy.interpolate). I think it would be better if we
could agree on one file extension, instead of something using cxx
(scipy.sparse.sparsetools) and sometimes cpp (scipy.interpolate). I
don't care which one - I thought .cc was the recommended one for some
time, but I am not following C++ anymore, so I am not the best person
to make the choice,

David

From sturla at molden.no  Sat Jan 24 13:45:18 2009
From: sturla at molden.no (Sturla Molden)
Date: Sat, 24 Jan 2009 19:45:18 +0100 (CET)
Subject: [SciPy-dev] Parallel cKDTree for SciPy
Message-ID: 

This is a follow-up to my post on scipy-user this morning. As shown,
using OpenMP with cKDTree works very well, but it still required
hand-editing the output from Cython. Therefore, I rewrote Anne
Archibald's cKDTree slightly, putting all the OpenMP calls in a
separate C file. Now it works without having to modify the Cython
output.
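[The "separate C file" pattern described here looks roughly like the
sketch below; all names are made up for illustration, this is not the
actual cKDTree code.

    /* parallel_query.c : compiled with -fopenmp, linked with -lgomp */

    /* stand-in for the real single-point kd-tree search */
    double single_query(double *q);

    void query_many(double *queries, double *results, int n)
    {
        int i;
    #pragma omp parallel for schedule(static)
        for (i = 0; i < n; i++)
            /* iterations are independent, so OpenMP can farm them out */
            results[i] = single_query(queries + i);
    }

On the Cython side only a plain prototype is needed, so the generated C
never has to be edited by hand:

    cdef extern from "parallel_query.h":
        void query_many(double *queries, double *results, int n) nogil

and the call can then be wrapped in "with nogil:" inside query().]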
I have also made sure that: - The GIL is released around the time-consuming loop in cKDTree.query. - If a single query is made, OpenMP threads is not used in cKDTree.query. - The number of OpenMP threads can be specified when calling cKDTree.query. For compilation I used GCC 4.4 with -O3 and -fopenmp, and linked -lgomp and -lpthread. I have attached the C and Cython code, and a speed test. In the benchmark, the black line is SciPy's 0.7.0rc2 cKDTree, green line is mine using one OpenMP thread, and the red is letting OpenMP pick the number of threads. I have a dual core laptop so OpenMP used two threads. The code is donated to SciPy if you want it. I think it is substantially better than the present version in SciPy SVN, as speed is the sole purpose of using kd-trees; otherwise, we could just as well do O(N**2) brute force search. Code and benchmark is available here: http://folk.uio.no/sturlamo/kdtree/ Windows binary compiled for Python 2.5 can be uploaded on request. Regards, Sturla Molden From wnbell at gmail.com Sat Jan 24 16:05:19 2009 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 24 Jan 2009 16:05:19 -0500 Subject: [SciPy-dev] is PySparse required for scipy.maxentropy In-Reply-To: <1cd32cbb0901231050j255fa495ped9a6f89968bd542@mail.gmail.com> References: <1cd32cbb0901231050j255fa495ped9a6f89968bd542@mail.gmail.com> Message-ID: On Fri, Jan 23, 2009 at 1:50 PM, wrote: > historic artifact or not? > > I saw a reference to PySparse in scipy.maxentropy.maxentutils.sparsefeatures. > Is this still required or covered by scipy.sparse? > > def sparsefeatures(f, x, format='csc_matrix'): > """ Returns an Mx1 sparse matrix of non-zero evaluations of the > scalar functions f_1,...,f_m in the list f at the point x. > > If format='ll_mat', the PySparse module (or a symlink to it) must be > available in the Python site-packages/ directory. A trimmed-down > version, patched for NumPy compatibility, is available in the SciPy > sandbox/pysparse directory. > """ > m = len(f) > if format == 'll_mat': > import spmatrix > sparsef = spmatrix.ll_mat(m, 1) scipy.sparse should have an equivalent format for everything PySparse provided. The ll_mat would correspond to sparse.lil_matrix. Ideally, the 'format' parameter usage should be made to agree with what we do in scipy.sparse. For instance: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/construct.py#L57 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From wnbell at gmail.com Sat Jan 24 16:07:49 2009 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 24 Jan 2009 16:07:49 -0500 Subject: [SciPy-dev] ANN: SciPy 0.7.0rc2 (release candidate) In-Reply-To: <497AAE55.9060106@ar.media.kyoto-u.ac.jp> References: <497A8EB9.30800@astraw.com> <497AAE55.9060106@ar.media.kyoto-u.ac.jp> Message-ID: On Sat, Jan 24, 2009 at 12:59 AM, David Cournapeau wrote: > Andrew Straw wrote: >> I just made a comment to issue #226 ("sparse.lil_matrix doesn't support >> inplace operations with fancy indexing") that might be nice to address >> before the final 0.7 release if it's not too difficult: >> >> http://scipy.org/scipy/scipy/ticket/226#comment:5 >> > > I am not a user of sparse matrices: is this a serious issue ? 
I would > prefer postponing any non-critical issues to a 7.1 release at this stage, > I'll fix it in 0.7.1 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From alan at ajackson.org Sat Jan 24 16:23:46 2009 From: alan at ajackson.org (Alan Jackson) Date: Sat, 24 Jan 2009 15:23:46 -0600 Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python. Message-ID: <20090124152346.78ff56f3@ajackson.org> A quite good online book, "Using R for Data Analysis and Graphics" by John Maindonald has been updated and has all the data and exercises posted on the net. Relevance to this group? I think it might be quite fun and instructive to work through the 'R' exercises with numpy/matplotlib/scipy instead. I'm considering doing it, partly just to broaden my numpy/matplotlib skillset, but I thought I would post this here in case anyone else might find it an interesting challenge. Exercises and datasets can be found from links here : http://blog.revolution-computing.com/2009/01/using-r-for-data-analysis-and-graphics.html -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. - Blake | ----------------------------------------------------------------------- From josef.pktd at gmail.com Sat Jan 24 16:56:59 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 24 Jan 2009 16:56:59 -0500 Subject: [SciPy-dev] is PySparse required for scipy.maxentropy In-Reply-To: References: <1cd32cbb0901231050j255fa495ped9a6f89968bd542@mail.gmail.com> Message-ID: <1cd32cbb0901241356o180a630eifa63652f4d77287a@mail.gmail.com> On Sat, Jan 24, 2009 at 4:05 PM, Nathan Bell wrote: > On Fri, Jan 23, 2009 at 1:50 PM, wrote: >> historic artifact or not? >> >> I saw a reference to PySparse in scipy.maxentropy.maxentutils.sparsefeatures. >> Is this still required or covered by scipy.sparse? >> >> def sparsefeatures(f, x, format='csc_matrix'): >> """ Returns an Mx1 sparse matrix of non-zero evaluations of the >> scalar functions f_1,...,f_m in the list f at the point x. >> >> If format='ll_mat', the PySparse module (or a symlink to it) must be >> available in the Python site-packages/ directory. A trimmed-down >> version, patched for NumPy compatibility, is available in the SciPy >> sandbox/pysparse directory. >> """ >> m = len(f) >> if format == 'll_mat': >> import spmatrix >> sparsef = spmatrix.ll_mat(m, 1) > > scipy.sparse should have an equivalent format for everything PySparse > provided. The ll_mat would correspond to sparse.lil_matrix. > > Ideally, the 'format' parameter usage should be made to agree with > what we do in scipy.sparse. 
For instance: > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/construct.py#L57 > > -- > Nathan Bell wnbell at gmail.com > http://graphics.cs.uiuc.edu/~wnbell/ > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Thanks, I added a ticket for this Josef From ramercer at gmail.com Sat Jan 24 17:10:02 2009 From: ramercer at gmail.com (Adam Mercer) Date: Sat, 24 Jan 2009 16:10:02 -0600 Subject: [SciPy-dev] ANN: SciPy 0.7.0rc2 (release candidate) In-Reply-To: References: Message-ID: <799406d60901241410g4a9baf3fy2c1cd1eaa4851a56@mail.gmail.com> On Fri, Jan 23, 2009 at 19:58, Jarrod Millman wrote: > I'm pleased to announce the second release candidate for SciPy 0.7.0. Installs without issue on Mac OS X 10.5: Ran 3393 tests in 93.490s OK (KNOWNFAIL=2, SKIP=21) Cheers Adam From prabhu at aero.iitb.ac.in Sun Jan 25 12:31:03 2009 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sun, 25 Jan 2009 23:01:03 +0530 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> Message-ID: <497CA1D7.3080703@aero.iitb.ac.in> Hi St?fan, St?fan van der Walt wrote: > 2009/1/8 Prabhu Ramachandran : >> St?fan van der Walt wrote: >> As the author of the VTK weave wrapper, I object. I can't revert since I >> do not have SVN access to scipy/numpy. What the VTK wrapper for weave >> does is let you embed c++ code which operates on a VTK Python object. >> This is not something that TVTK does or provides for at all. I think it >> is useful functionality that is not available anywhere else and should >> not be removed. > > Thanks for your input, I'll add it back. Would you please help us to > document and test it? Well, there is a simple example in weave/examples/vtk_example.py. That example should be fairly informative. :) Unfortunately, testing this requires the use of nasty external resources. In particular it requires that VTK be installed and that weave be told where to find the VTK headers and libraries. In any case, I modified the paths suitably in the example corresponding to my system's VTK setup and it seems to work fine -- note that I'm using version 0.6.0 that ships with Ubuntu rather than SVN. I hope this helps. cheers, prabhu From nwagner at iam.uni-stuttgart.de Sun Jan 25 14:03:26 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 25 Jan 2009 20:03:26 +0100 Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels Message-ID: Hi all, I do get three failures wrt. 
to special functions and python2.6 ====================================================================== FAIL: test_yn_zeros (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1598, in test_yn_zeros 488.98055964441374646], rtol=1e-19) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-19, atol=0 (mismatch 100.0%) x: array([ 450.136, 463.057, 472.807, 481.274, 488.981]) y: array([ 450.136, 463.057, 472.807, 481.274, 488.981]) ====================================================================== FAIL: test_ynp_zeros (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1604, in test_ynp_zeros assert_tol_equal(yvp(443, ao), 0, atol=1e-15) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-07, atol=1e-15 (mismatch 100.0%) x: array([ 1.239e-10, -8.119e-16, 3.608e-16, 5.898e-16, 1.226e-15]) y: array(0) ====================================================================== FAIL: Negative-order Bessels ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1665, in test_ticket_853 assert_tol_equal(iv(-0.5, 1 ), 1.231200214592967) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-07, atol=0 (mismatch 100.0%) x: array(0.0) y: array(1.231200214592967) Nils From nwagner at iam.uni-stuttgart.de Sun Jan 25 14:25:16 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 25 Jan 2009 20:25:16 +0100 Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels In-Reply-To: References: Message-ID: On Sun, 25 Jan 2009 20:03:26 +0100 "Nils Wagner" wrote: > Hi all, > > I do get three failures wrt. 
to special functions and > python2.6 > > ====================================================================== >FAIL: test_yn_zeros (test_basic.TestBessel) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 1598, in test_yn_zeros > 488.98055964441374646], rtol=1e-19) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 38, in assert_tol_equal > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-19, atol=0 > > (mismatch 100.0%) > x: array([ 450.136, 463.057, 472.807, 481.274, > 488.981]) > y: array([ 450.136, 463.057, 472.807, 481.274, > 488.981]) > > ====================================================================== >FAIL: test_ynp_zeros (test_basic.TestBessel) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 1604, in test_ynp_zeros > assert_tol_equal(yvp(443, ao), 0, atol=1e-15) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 38, in assert_tol_equal > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=1e-15 > > (mismatch 100.0%) > x: array([ 1.239e-10, -8.119e-16, 3.608e-16, > 5.898e-16, 1.226e-15]) > y: array(0) > > ====================================================================== >FAIL: Negative-order Bessels > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/case.py", > line 182, in runTest > self.test(*self.arg) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 1665, in test_ticket_853 > assert_tol_equal(iv(-0.5, 1 ), 1.231200214592967) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 38, in assert_tol_equal > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=0 > > (mismatch 100.0%) > x: array(0.0) > y: array(1.231200214592967) > > Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev >>> scipy.__version__ '0.8.0.dev5524' ====================================================================== FAIL: test_yv_cephes_vs_amos (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1650, in test_yv_cephes_vs_amos self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1640, in 
check_cephes_vs_amos assert c2.imag != 0, (v, z) AssertionError: (301, 1.0) Nils From nwagner at iam.uni-stuttgart.de Mon Jan 26 03:03:19 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 26 Jan 2009 09:03:19 +0100 Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels In-Reply-To: References: Message-ID: On Sun, 25 Jan 2009 20:03:26 +0100 "Nils Wagner" wrote: > Hi all, > > I do get three failures wrt. to special functions and > python2.6 > > ====================================================================== >FAIL: test_yn_zeros (test_basic.TestBessel) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 1598, in test_yn_zeros > 488.98055964441374646], rtol=1e-19) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 38, in assert_tol_equal > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-19, atol=0 > > (mismatch 100.0%) > x: array([ 450.136, 463.057, 472.807, 481.274, > 488.981]) > y: array([ 450.136, 463.057, 472.807, 481.274, > 488.981]) > > ====================================================================== >FAIL: test_ynp_zeros (test_basic.TestBessel) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 1604, in test_ynp_zeros > assert_tol_equal(yvp(443, ao), 0, atol=1e-15) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 38, in assert_tol_equal > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=1e-15 > > (mismatch 100.0%) > x: array([ 1.239e-10, -8.119e-16, 3.608e-16, > 5.898e-16, 1.226e-15]) > y: array(0) > > ====================================================================== >FAIL: Negative-order Bessels > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/case.py", > line 182, in runTest > self.test(*self.arg) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 1665, in test_ticket_853 > assert_tol_equal(iv(-0.5, 1 ), 1.231200214592967) > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 38, in assert_tol_equal > verbose=verbose, header=header) > File > "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=0 > > (mismatch 100.0%) > x: array(0.0) > y: array(1.231200214592967) > > Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev Another failure is present in >>> scipy.__version__ '0.8.0.dev5525' 
====================================================================== FAIL: test_iv_cephes_vs_amos (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1653, in test_iv_cephes_vs_amos self.check_cephes_vs_amos(iv, iv, rtol=1e-8, atol=1e-305) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1642, in check_cephes_vs_amos assert_tol_equal(c1, c2, err_msg=(v, z), rtol=rtol, atol=atol) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-08, atol=1e-305 (-120, -11) (mismatch 100.0%) x: array(1.3384173609003782e-110) y: array((1.3384173859242368e-110+0j))

From pav at iki.fi Mon Jan 26 04:23:37 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 26 Jan 2009 09:23:37 +0000 (UTC) Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels References: Message-ID:

Sun, 25 Jan 2009 20:03:26 +0100, Nils Wagner wrote:
> I do get three failures wrt. to special functions and python2.6

Too strict tolerances in the tests for the most part, I think. Will fix. I presume this is a 64-bit Linux platform.

I'm in the process of addressing several bugs and improving the tests for the Bessel-related special functions in Scipy trunk. If someone wants to review the changesets I've committed recently, please go ahead.

> ======================================================================
> FAIL: test_yn_zeros (test_basic.TestBessel)
> ----------------------------------------------------------------------
[clip]
> Not equal to tolerance rtol=1e-19, atol=0

Too strict tolerance, I think. Some errors ~eps probably arise, depending on the compiler.

> ======================================================================
> FAIL: test_ynp_zeros (test_basic.TestBessel)
> ----------------------------------------------------------------------
[clip]
> AssertionError:
> Not equal to tolerance rtol=1e-07, atol=1e-15
>
> (mismatch 100.0%)
> x: array([ 1.239e-10, -8.119e-16, 3.608e-16, 5.898e-16, 1.226e-15])
> y: array(0)

Ditto, except maybe for the first item. The question now is whether the problem is in the `yvp` routine or in `ynp_zeros`.

> ======================================================================
> FAIL: Negative-order Bessels
> ----------------------------------------------------------------------
[clip]
> assert_tol_equal(iv(-0.5, 1 ), 1.231200214592967)
[clip]

Negative half-integers are a known failure of cephes/iv (also in Scipy 0.6.0), since hyperg has a pole at the corresponding points. I think this test is commented out in the current SVN trunk, though. I have a fix for this, but it's not committed yet.
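For reference, half-integer orders have a closed form to check against, I_{-1/2}(z) = sqrt(2/(pi*z)) * cosh(z), so the expected value in that test can be verified independently of both cephes and amos. A minimal check, assuming only numpy:

    import numpy as np
    # closed form: I_{-1/2}(1) = sqrt(2/pi) * cosh(1)
    print np.sqrt(2/np.pi) * np.cosh(1.0)   # 1.2312002145929672

which agrees with the 1.231200214592967 asserted in test_ticket_853.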
-- Pauli Virtanen

From pav at iki.fi Mon Jan 26 04:28:47 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 26 Jan 2009 09:28:47 +0000 (UTC) Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels References: Message-ID:

Mon, 26 Jan 2009 09:03:19 +0100, Nils Wagner wrote:
[clip]
> ======================================================================
> FAIL: test_iv_cephes_vs_amos (test_basic.TestBessel)
> ----------------------------------------------------------------------
[clip]
> Not equal to tolerance rtol=1e-08, atol=1e-305 (-120, -11)
> (mismatch 100.0%)
> x: array(1.3384173609003782e-110)
> y: array((1.3384173859242368e-110+0j))

The way cephes/iv computes the value of Iv is not too accurate for large orders or large x. I have some code to compute the result in this range from proper asymptotic expansions, but it isn't committed yet either.

-- Pauli Virtanen

From pav at iki.fi Mon Jan 26 04:31:31 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 26 Jan 2009 09:31:31 +0000 (UTC) Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels References: Message-ID:

Sun, 25 Jan 2009 20:25:16 +0100, Nils Wagner wrote:
[clip]
>>>> scipy.__version__
> '0.8.0.dev5524'
> ======================================================================
> FAIL: test_yv_cephes_vs_amos (test_basic.TestBessel)
> ----------------------------------------------------------------------
[clip]
> Traceback (most recent call last):
> File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py",
> line 1650, in test_yv_cephes_vs_amos
> self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305)
> File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py",
> line 1640, in check_cephes_vs_amos
> assert c2.imag != 0, (v, z)
> AssertionError: (301, 1.0)

This should have been fixed in r5524. Note that you need to

    touch scipy/special/_cephesmodule.c

before rebuilding, to make sure that the scipy.special._cephes module is rebuilt. The build system apparently does not track dependencies properly here.

-- Pauli Virtanen

From alexh at psych.usyd.edu.au Mon Jan 26 03:55:52 2009 From: alexh at psych.usyd.edu.au (Alex Holcombe) Date: Mon, 26 Jan 2009 08:55:52 +0000 (UTC) Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python. References: <20090124152346.78ff56f3@ajackson.org> Message-ID:

Alan Jackson ajackson.org> writes:
>
> A quite good online book, "Using R for Data Analysis and Graphics" by
> John Maindonald has been updated and has all the data and exercises
> posted on the net. Relevance to this group? I think it might be quite
> fun and instructive to work through the 'R' exercises with
> numpy/matplotlib/scipy instead. I'm considering doing it, partly just
> to broaden my numpy/matplotlib skillset, but I thought I would post
> this here in case anyone else might find it an interesting challenge.
>
> Exercises and datasets can be found from links here :
> http://blog.revolution-computing.com/2009/01/using-r-for-data-analysis-and-graphics.html
>

A first step for me in having the R functionality I need within python is to take the data from an experiment and plot the dependent variable as a function of a subset of the independent variables in the experiment. For example, plotting vehicle fuel efficiency (mpg) as a function of vehicle weight.
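A minimal sketch of what I mean (the file name and the column names here are made up for illustration):

    import numpy as np
    import matplotlib.pyplot as plt
    # hypothetical whitespace-delimited data file, one named column per variable
    cars = np.loadtxt('cars.txt', dtype=[('mpg', float), ('weight', float)])
    plt.plot(cars['weight'], cars['mpg'], 'o')
    plt.xlabel('vehicle weight')
    plt.ylabel('fuel efficiency (mpg)')
    plt.show()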
The loadtxt function easily imports my data files in a structure called a recarray, similar to a data.frame in R, a lot like a flat spreadsheet with a name for each column. I would like to collapse over the other variables to determine the mean fuel efficiency for every combination of the variables I keep, e.g. vehicle weight. If done in Excel, I think this involves a "PivotTable". For python, I wrote a function where I pass a recarray, the dependent variable name, and the names of the variables (datafile columns) that I want to collapse by, and it passes back multi-dimensional arrays providing the mean, standard deviation, and number of data points for every combination of the variables.

    collapseBy(data,DV,*factors)

I am new to SciPy and don't know how to make my code follow its style, but I've posted the code here http://openwetware.org/images/7/7b/CollapseBy.txt and blogged on the subject here http://alexholcombe.wordpress.com/2009/01/26/summarizing-data-by-combinations-of-variables-with-python/ I would hope that something like this could be put into SciPy?

Alex Holcombe, http://www.psych.usyd.edu.au/staff/alexh/

From nwagner at iam.uni-stuttgart.de Mon Jan 26 05:27:30 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 26 Jan 2009 11:27:30 +0100 Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels In-Reply-To: References: Message-ID:

On Mon, 26 Jan 2009 09:31:31 +0000 (UTC) Pauli Virtanen wrote:
> Sun, 25 Jan 2009 20:25:16 +0100, Nils Wagner wrote:
> [clip]
>>>>> scipy.__version__
>> '0.8.0.dev5524'
>> ======================================================================
>> FAIL: test_yv_cephes_vs_amos (test_basic.TestBessel)
>> ----------------------------------------------------------------------
> [clip]
>> Traceback (most recent call last):
>> File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py",
>> line 1650, in test_yv_cephes_vs_amos
>> self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305)
>> File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/special/tests/test_basic.py",
>> line 1640, in check_cephes_vs_amos
>> assert c2.imag != 0, (v, z)
>> AssertionError: (301, 1.0)
>
> This should have been fixed in r5524. Note that you need to
>
> touch scipy/special/_cephesmodule.c
>
Done, but the failure persists.

> before rebuilding, to make sure that the scipy.special._cephes module is
> rebuilt. The build system apparently does not track dependencies properly
> here.
>
> --
> Pauli Virtanen

I am using 64-bit Linux, CentOS release 4.6 (Final).

Python 2.5.1 (r251:54863, Dec 21 2007, 09:21:07) [GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy.special
>>> scipy.special.test()
Running unit tests for scipy.special
NumPy version 1.3.0.dev6333
NumPy is installed in /data/home/nwagner/local/lib/python2.5/site-packages/numpy
SciPy version 0.8.0.dev5525
SciPy is installed in /data/home/nwagner/local/lib/python2.5/site-packages/scipy
Python version 2.5.1 (r251:54863, Dec 21 2007, 09:21:07) [GCC 3.4.6 20060404 (Red Hat 3.4.6-3)]
nose version 0.10.4
.................F........................................FF.F...........................................................................................................................................................................................................................................................................................................
====================================================================== FAIL: test_iv_cephes_vs_amos (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1653, in test_iv_cephes_vs_amos self.check_cephes_vs_amos(iv, iv, rtol=1e-8, atol=1e-305) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1642, in check_cephes_vs_amos assert_tol_equal(c1, c2, err_msg=(v, z), rtol=rtol, atol=atol) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-08, atol=1e-305 (-120, -11) (mismatch 100.0%) x: array(1.3384173609003782e-110) y: array((1.3384173859242368e-110+0j)) ====================================================================== FAIL: test_yn_zeros (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1598, in test_yn_zeros 488.98055964441374646], rtol=1e-19) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-19, atol=0 (mismatch 100.0%) x: array([ 450.13573091, 463.05692375, 472.80651543, 481.27353185, 488.9805596 ]) y: array([ 450.13573092, 463.05692377, 472.80651546, 481.27353185, 488.98055964]) ====================================================================== FAIL: test_ynp_zeros (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1604, in test_ynp_zeros assert_tol_equal(yvp(443, ao), 0, atol=1e-15) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 38, in assert_tol_equal verbose=verbose, header=header) File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-07, atol=1e-15 (mismatch 100.0%) x: array([ 1.23909633e-10, -8.11850587e-16, 3.60822483e-16, 5.89805982e-16, 1.22644950e-15]) y: array(0) ====================================================================== FAIL: test_yv_cephes_vs_amos (test_basic.TestBessel) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1650, in test_yv_cephes_vs_amos self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1640, in check_cephes_vs_amos assert c2.imag != 0, (v, z) AssertionError: (301, 1.0) ---------------------------------------------------------------------- 
Ran 361 tests in 0.782s FAILED (failures=4)
>>> scipy.__version__
'0.8.0.dev5525'

Nils

From pav at iki.fi Mon Jan 26 06:03:07 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 26 Jan 2009 11:03:07 +0000 (UTC) Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels References: Message-ID:

Mon, 26 Jan 2009 11:27:30 +0100, Nils Wagner wrote:
> ======================================================================
> FAIL: test_yv_cephes_vs_amos (test_basic.TestBessel)
> ----------------------------------------------------------------------
[clip]
> Traceback (most recent call last):
> File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py",
> line 1650, in test_yv_cephes_vs_amos
> self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305)
> File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py",
> line 1640, in check_cephes_vs_amos
> assert c2.imag != 0, (v, z)
> AssertionError: (301, 1.0)

Appears to be a bug in cephes/yn on 64-bit systems:

>>> import scipy
>>> import scipy.special as sc
>>> scipy.__version__
'0.7.0.dev5268'
>>> sc.yn(301,1)
nan

whereas on a 32-bit system it gives the correct answer:

>>> sc.yn(301,1)
-inf

From david at ar.media.kyoto-u.ac.jp Mon Jan 26 06:29:35 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 26 Jan 2009 20:29:35 +0900 Subject: [SciPy-dev] FAIL: test_yn_zeros, test_ynp_zeros, Negative-order Bessels In-Reply-To: References: Message-ID: <497D9E9F.7010401@ar.media.kyoto-u.ac.jp>

Pauli Virtanen wrote:
> Mon, 26 Jan 2009 11:27:30 +0100, Nils Wagner wrote:
>
>> ======================================================================
>> FAIL: test_yv_cephes_vs_amos (test_basic.TestBessel)
>> ----------------------------------------------------------------------
> [clip]
>> Traceback (most recent call last):
>> File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py",
>> line 1650, in test_yv_cephes_vs_amos
>> self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305)
>> File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_basic.py",
>> line 1640, in check_cephes_vs_amos
>> assert c2.imag != 0, (v, z)
>> AssertionError: (301, 1.0)
>
> Appears to be a bug in cephes/yn on 64-bit systems:
>
>>>> import scipy
>>>> import scipy.special as sc
>>>> scipy.__version__
> '0.7.0.dev5268'
>>>> sc.yn(301,1)
> nan
>
> whereas on a 32-bit system it gives the correct answer:

I wonder whether it would make sense to progressively use our own math function library, at least for some core routines - R core, for example, does not use cephes or any external library (some functions are "translations" from specfun though). Or is there a simpler solution toward robust math routines (robustness against different architectures and out-of-domain input, in particular)?

David

From josef.pktd at gmail.com Tue Jan 27 00:31:47 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 27 Jan 2009 00:31:47 -0500 Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python.
In-Reply-To: References: <20090124152346.78ff56f3@ajackson.org> Message-ID: <1cd32cbb0901262131j6f2378dcx101e078c23083f12@mail.gmail.com>

On Mon, Jan 26, 2009 at 3:55 AM, Alex Holcombe wrote:
> Alan Jackson ajackson.org> writes:
>
>> A quite good online book, "Using R for Data Analysis and Graphics" by
>> John Maindonald has been updated and has all the data and exercises
>> posted on the net. Relevance to this group? I think it might be quite
>> fun and instructive to work through the 'R' exercises with
>> numpy/matplotlib/scipy instead. I'm considering doing it, partly just
>> to broaden my numpy/matplotlib skillset, but I thought I would post
>> this here in case anyone else might find it an interesting challenge.
>>
>> Exercises and datasets can be found from links here :
>> http://blog.revolution-computing.com/2009/01/using-r-for-data-analysis-and-graphics.html
>
> A first step for me in having the R functionality I need within python is to
> take the data from an experiment and plot the dependent variable as a function
> of a subset of the independent variables in the experiment. For example,
> plotting vehicle fuel efficiency (mpg) as a function of vehicle weight. The loadtxt
> function easily imports my data files in a structure called a recarray, similar
> to a data.frame in R, a lot like a flat spreadsheet with a name for each column.
> I would like to collapse over the other variables to determine the mean fuel
> efficiency for every combination of the variables I keep, e.g. vehicle weight.
> If done in Excel, I think this involves a "PivotTable". For python, I wrote a
> function where I pass a recarray, the dependent variable name, and the names
> of the variables (datafile columns) that I want to collapse by, and it passes back
> multi-dimensional arrays providing the mean, standard deviation, and number of
> data points for every combination of the variables.
> collapseBy(data,DV,*factors)
> I am new to SciPy and don't know how to make my code follow its style, but I've
> posted the code here http://openwetware.org/images/7/7b/CollapseBy.txt
> and blogged on the subject here
> http://alexholcombe.wordpress.com/2009/01/26/summarizing-data-by-combinations-of-variables-with-python/
> I would hope that something like this could be put into SciPy?
>
> Alex Holcombe, http://www.psych.usyd.edu.au/staff/alexh/

Your code is stylistically pretty difficult to read for someone used to Python code. I recommend taking a look at http://www.python.org/dev/peps/pep-0008/. Also I had problems following all the (temporary) variables, so I tried to come up with something that is easier to read.

Since I'm not using record arrays, I wrote it for a standard array holding the data, with indices defining the dependent and explanatory variables. Below is the best of three versions I came up with; two other versions are also in the attached file. I didn't try any timing, and I don't have mdp installed, so I couldn't directly compare with your program.

I was also looking at some references for R, and I was pretty surprised by the number of tutorials and books that are available for R.
Josef

#----------------------------------
import numpy as np

data = np.random.randint(1,3, size=(10,5))

keep = [1, 4] # index in data of explanatory variable under consideration
dv = 0 # index in data of dependent variable

# version1b: using dictionary to store indices to data
#-----------------------------------------------------

# build dictionary with unique combination as keys
# and corresponding row indices as values
result1 = {}
for index, row in enumerate(data):
    result1.setdefault(tuple(row[[1, 4]]),[]).append(index)

# calculate statistics for each combination (key)
stat1 = []
for k,v in sorted(result1.iteritems()):
    m = data[v,dv].mean()
    s = data[v,dv].std()
    stat1.append(list(k) + [m, s, len(v)])

# convert result statistic to numpy arrays
stat1n = np.array(stat1)
print "combination mean var count"
print stat1n
#-----------------------------------------------

-------------- next part --------------
'''calculate basic statistics of a dependent variable conditional on some
explanatory variables while ignoring other explanatory variables.

works only for discrete data
'''

import numpy as np

data = np.random.randint(1,3, size=(10,5))

keep = [1, 4] # index in data of explanatory variable under consideration
dv = 0 # index in data of dependent variable

# version1: using dictionary to store data rows
#----------------------------------------------

# build dictionary with unique combination as keys
# and corresponding data as values
result = {}
for row in data:
    print row
    result.setdefault(tuple(row[ [1, 4]]),[]).append(row)

# calculate statistics for each combination (key)
stat = []
for k,v in sorted(result.iteritems()):
    y = np.asarray(v)[:,dv]
    stat.append(list(k) + [y.mean(), y.std(), y.shape[0]])

# convert result statistic to numpy arrays
statn = np.array(stat)
print "combination mean var count"
print statn

assert np.sum(statn[:,len(keep)]*statn[:,-1])/data.shape[0] \
       == data[:,dv].mean()

# version1b: using dictionary to store indices to data
#-----------------------------------------------------

# build dictionary with unique combination as keys
# and corresponding row indices as values
result1 = {}
for index, row in enumerate(data):
    result1.setdefault(tuple(row[[1, 4]]),[]).append(index)

# calculate statistics for each combination (key)
stat1 = []
for k,v in sorted(result1.iteritems()):
    m = data[v,dv].mean()
    s = data[v,dv].std()
    stat1.append(list(k) + [m, s, len(v)])

# convert result statistic to numpy arrays
stat1n = np.array(stat1)
print "combination mean var count"
print stat1n

assert np.all(stat1n == statn)

# version2: using itertools groupby
#----------------------------------
import itertools

#sort rows, can use numpy instead
datasorted = np.array(sorted(list(data), key=lambda(x):repr(x[keep])))
#use repr in sort key to avoid numpy array comparison

stat2 = []
for k, v in itertools.groupby(datasorted, lambda(x):repr(x[keep])):
    v2 = np.array(list(v))
    y = v2[:,dv]
    stat2.append(v2[0,keep].tolist() + [y.mean(), y.std(), y.shape[0]])

stat2n = np.array(stat2)
print "combination mean var count"
print stat2n

assert np.all(stat2n == statn)

From david at ar.media.kyoto-u.ac.jp Tue Jan 27 00:23:17 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 27 Jan 2009 14:23:17 +0900 Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python.
In-Reply-To: <1cd32cbb0901262131j6f2378dcx101e078c23083f12@mail.gmail.com> References: <20090124152346.78ff56f3@ajackson.org> <1cd32cbb0901262131j6f2378dcx101e078c23083f12@mail.gmail.com> Message-ID: <497E9A45.1000502@ar.media.kyoto-u.ac.jp>

josef.pktd at gmail.com wrote:
> I was also looking at some references for R, and I was pretty surprised
> by the number of tutorials and books that are available for R.
>

R is an open source version of S (which has existed for quite some time), and both are the standard numerical packages used in the statistical community, AFAIK. My impression is that R/S is to statistics what matlab is to signal processing in terms of usage: pervasive in both academic and private environments.

cheers,

David

From alexh at psych.usyd.edu.au Tue Jan 27 05:45:38 2009 From: alexh at psych.usyd.edu.au (Alex Holcombe) Date: Tue, 27 Jan 2009 10:45:38 +0000 (UTC) Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python. References: <20090124152346.78ff56f3@ajackson.org> <1cd32cbb0901262131j6f2378dcx101e078c23083f12@mail.gmail.com> Message-ID:

gmail.com> writes:
>
> Your code is stylistically pretty difficult to read for someone used
> to Python code. I recommend taking a look at
> http://www.python.org/dev/peps/pep-0008/. Also I had problems
> following all the (temporary) variables, so I tried to come up with
> something that is easier to read.
>

Wow, your versions are all much nicer than mine, thank you! If anyone wants to use these (as I do), note that in version 1 and 1b you'd need to substitute 'keep' for '[1,4]'. I would consider this code to be basic functionality for any data analysis package.

From sturla at molden.no Tue Jan 27 06:48:53 2009 From: sturla at molden.no (Sturla Molden) Date: Tue, 27 Jan 2009 12:48:53 +0100 Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python. In-Reply-To: <497E9A45.1000502@ar.media.kyoto-u.ac.jp> References: <20090124152346.78ff56f3@ajackson.org> <1cd32cbb0901262131j6f2378dcx101e078c23083f12@mail.gmail.com> <497E9A45.1000502@ar.media.kyoto-u.ac.jp> Message-ID: <497EF4A5.7090102@molden.no>

On 1/27/2009 6:23 AM, David Cournapeau wrote:
> R is an open source version of S (which has existed for quite some time), and
> both are the standard numerical packages used in the statistical
> community, AFAIK.

S is the language. R and S-PLUS are the two major implementations of S. S is the most pervasive language in statistics. When statisticians publish papers, they are more or less expected to put up their S code on a database called StatLib.

S.M.

From nwagner at iam.uni-stuttgart.de Tue Jan 27 10:39:38 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 27 Jan 2009 16:39:38 +0100 Subject: [SciPy-dev] scipy.org is down again Message-ID:

Hi all,

I cannot access http://www.scipy.org

Nils

From gael.varoquaux at normalesup.org Tue Jan 27 11:05:14 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 27 Jan 2009 17:05:14 +0100 Subject: [SciPy-dev] scipy.org is down again In-Reply-To: References: Message-ID: <20090127160514.GC13626@phare.normalesup.org>

On Tue, Jan 27, 2009 at 04:39:38PM +0100, Nils Wagner wrote:
> I cannot access http://www.scipy.org

It's back up.

Gaël

From josef.pktd at gmail.com Tue Jan 27 15:14:02 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 27 Jan 2009 15:14:02 -0500 Subject: [SciPy-dev] An interesting exercise - reproduce R analysis using python.
In-Reply-To: References: <20090124152346.78ff56f3@ajackson.org> <1cd32cbb0901262131j6f2378dcx101e078c23083f12@mail.gmail.com> Message-ID: <1cd32cbb0901271214n66a71d3agb786c37cd79e76b4@mail.gmail.com>

On Tue, Jan 27, 2009 at 5:45 AM, Alex Holcombe wrote:
> gmail.com> writes:
>
>> Your code is stylistically pretty difficult to read for someone used
>> to Python code. I recommend taking a look at
>> http://www.python.org/dev/peps/pep-0008/. Also I had problems
>> following all the (temporary) variables, so I tried to come up with
>> something that is easier to read.
>
> Wow, your versions are all much nicer than mine, thank you! If anyone wants to
> use these (as I do), note that in version 1 and 1b you'd need to substitute
> 'keep' for '[1,4]'.
> I would consider this code to be basic functionality for any data analysis
> package.

I looked up the explanation for the pivot table, and I rewrote my version 1b to optionally produce a table for each statistic instead of a flat table. I added a flat2nd function, a flat-to-multidimensional conversion function; see the attachment.

I don't know what all the uses for this are, but it could be extended to take a callback function that calculates arbitrary statistics on the data for the unique factors.

See if this version works as you would expect a pivot table to work.

Josef

-------------- next part --------------
# only tested for 2D, i.e. 2 explanatory variables
# no pretty print
import numpy as np
from numpy.testing import assert_array_equal, assert_equal

def ptable(data, dv, keep, outformat='flat'):
    '''calculate basic statistics for pivot table

    Mean, standard deviation and count for a dependent variable
    conditional on some explanatory variables while ignoring other
    explanatory variables. This works only for discrete values of the
    explanatory variables.

    Parameters
    ----------
    data : 2D array
        assumes variables are in columns and observations in rows
    dv : int
        column index of dependent variable
    keep : array_like int
        column indices of explanatory variables
    outformat : (optional)
        * 'flat' (default) : return 2D array with unique values of
          explanatory variables in first columns and statistics in
          later columns
        * 'table'

    Returns
    -------
    statarr : 2D array
        if outformat = 'flat'
    {uns, mmean, mstd, mcount}
        if outformat = 'table'
    '''
    # build dictionary with unique combination as keys
    # and corresponding row indices as values
    catdata = {}
    for index, row in enumerate(data):
        catdata.setdefault(tuple(row[keep]),[]).append(index)

    # calculate statistics for each combination (key)
    stat = []
    for k,v in sorted(catdata.iteritems()):
        m = data[v,dv].mean()
        s = data[v,dv].std()
        stat.append(list(k) + [m, s, len(v)])

    # convert result statistic to numpy arrays
    statarr = np.array(stat)

    if outformat == 'flat':
        return statarr
    elif outformat == 'table':
        # convert flat table to multidimensional
        K = len(keep)
        mmean, uns = flat2nd(statarr[:,range(K)+[K]])
        mstd, uns = flat2nd(statarr[:,range(K)+[K+1]])
        mcount, uns = flat2nd(statarr[:,range(K)+[K+2]])
        return uns, mmean, mstd, mcount
    else:
        raise ValueError, "outformat can only be 'flat' or 'table'"

def flat2nd(x):
    '''convert flat table to multidimensional table

    Assumes the rows are jointly unique on the first K columns. The flat
    table does not need to have complete, i.e. rectangular, values for
    the explanatory variables. Missing elements are filled with NaN.
    Parameters
    ----------
    x : array (N,K+1)
        flat table [x1, x2, y]

    Returns
    -------
    res : array
        the variable in the last column of the input, reformatted to
        have K dimensions, with rows and columns given by the unique
        values of the explanatory variables
    uns : list of K 1D arrays
        element i of uns is the 1D array of values of the explanatory
        variable for the ith axis of `res`

    Example
    -------
    >>> mex = np.array([[ 11., 1., 1.], [ 11., 2., 2.], [ 12., 1., 3.], [ 12., 2., 4.]])
    >>> res, uns = flat2nd(mex)
    >>> res
    array([[ 1.,  2.],
           [ 3.,  4.]])
    >>> uns
    [array([ 11.,  12.]), array([ 1.,  2.])]

    Example with unequal dimensions, not rectangular:

    >>> mex = np.array([[ 11., 1., 1.], [ 11., 2., 2.], [ 12., 1., 3.], [ 12., 2., 4.], [ 13., 2., 5.]])
    >>> res, uns = flat2nd(mex)
    >>> res
    array([[  1.,   2.],
           [  3.,   4.],
           [ NaN,   5.]])
    >>> uns
    [array([ 11.,  12.,  13.]), array([ 1.,  2.])]
    '''
    uns = []
    unirs = []
    dims = []
    for ii in range(x.shape[1]-1):
        un, unir = np.unique1d(x[:,ii], return_inverse=True)
        uns.append(un)
        unirs.append(unir)
        dims.append(len(un))
    res = np.nan * np.ones(dims)
    # fancy indexing with one integer index array per axis
    res[tuple(unirs)] = x[:,-1]
    return res, uns

def test_flat2nd():
    mex = np.array([[ 11., 1., 1.], [ 11., 2., 2.], [ 12., 1., 3.], [ 12., 2., 4.]])
    res, uns = flat2nd(mex)
    assert_array_equal(res, np.array([[ 1., 2.], [ 3., 4.]]))
    assert_equal(uns, [np.array([ 11., 12.]), np.array([ 1., 2.])])

if __name__ == '__main__':
    test_flat2nd()

    data = np.random.randint(1,3, size=(10,5))
    data[:,1] += 10
    keep = [1, 4] # indices in data of the explanatory variables under consideration
    dv = 0 # index in data of dependent variable

    statn = ptable(data, dv, keep, outformat='flat')
    print statn

    uns, mmean, mstd, mcount = ptable(data, dv, keep, outformat='table')
    print uns
    print mmean
    print mstd
    print mcount

    mex = np.array([[ 11., 1., 1.], [ 11., 2., 2.], [ 12., 1., 3.], [ 12., 2., 4.], [ 13., 2., 5.]])
    res, uns = flat2nd(mex)
    print uns
    print res

From nwagner at iam.uni-stuttgart.de Tue Jan 27 16:41:07 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 27 Jan 2009 22:41:07 +0100 Subject: [SciPy-dev] make latex in scipy/doc failed Message-ID:

Hi all,

A `make latex` in scipy/doc failed:

Intersphinx hit: PyCObject_FromVoidPtrAndDesc http://docs.python.org/dev/c-api/cobject.html Intersphinx hit: PyCObject_FromVoidPtr http://docs.python.org/dev/c-api/cobject.html WARNING: /home/nwagner/svn/scipy/doc/source/index.rst:: (WARNING/2) undefined label: numpy-reference.routines.io -- if you don't give a link caption the label must precede a section header. Intersphinx hit: numpy.poly1d http://docs.scipy.org/doc/numpy/reference/generated/numpy.poly1d.html writing... Exception occurred: File "/home/nwagner/local/lib64/python2.6/site-packages/Sphinx-0.5.1-py2.6.egg/sphinx/latexwriter.py", line 619, in visit_entry raise NotImplementedError('Column or row spanning cells are ' NotImplementedError: Column or row spanning cells are not implemented. The full traceback has been saved in /tmp/sphinx-err-GOKGyC.log, if you want to report the issue to the author. Please also report this if it was a user error, so that a better error message can be provided next time. Send reports to sphinx-dev at googlegroups.com. Thanks! make: *** [latex] Fehler 1

log file attached.

Nils

-------------- next part --------------
A non-text attachment was scrubbed...
Name: sphinx-err-GOKGyC.log Type: text/x-log Size: 3622 bytes Desc: not available URL:

From nwagner at iam.uni-stuttgart.de Wed Jan 28 14:54:28 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 28 Jan 2009 20:54:28 +0100 Subject: [SciPy-dev] Matrix sign function is missing in the docs Message-ID:

Hi all,

The matrix sign function (linalg.signm) is missing in the docs:

scipy/doc/build/html/tutorial/linalg.html#matrix-functions

Nils

From david at ar.media.kyoto-u.ac.jp Wed Jan 28 21:11:02 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 29 Jan 2009 11:11:02 +0900 Subject: [SciPy-dev] A buildbot farm with shell access - for free ? Message-ID: <49811036.5030701@ar.media.kyoto-u.ac.jp>

Hi,

Just saw this on one ML:

http://www.snakebite.org/
http://mail.python.org/pipermail/python-committers/2009-January/000331.html

Bottom line: it looks like there is a set of machines which were donated to the PSF for buildbot *with shell access*, so that people can fix problems appearing on some platforms. If you look at the email, there are some 'exotic' machines that mere mortals cannot have access to (like Tru64 on Itanium: to quote the email, "massive quad Itanium 2 RX-5670s, chock full of 73GB 15k disks and no less than 78GB of RAM between the two servers; 32GB in one and 46GB in the other"). There are also Windows machines available.

It is said in the email that this is reserved for the Python project and prominent Python projects like Twisted and Django. Would it be OK to try to qualify as a prominent Python project as well?

cheers,

David

From wnbell at gmail.com Wed Jan 28 22:25:23 2009 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 28 Jan 2009 22:25:23 -0500 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <497CA1D7.3080703@aero.iitb.ac.in> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> Message-ID:

On Sun, Jan 25, 2009 at 12:31 PM, Prabhu Ramachandran wrote:
>
> Unfortunately, testing this requires the use of nasty external
> resources. In particular it requires that VTK be installed and that
> weave be told where to find the VTK headers and libraries.
In any case,
>> I modified the paths suitably in the example corresponding to my
>> system's VTK setup and it seems to work fine -- note that I'm using
>> version 0.6.0 that ships with Ubuntu rather than SVN.
>

This speaks in favor of distributing the package separately from SciPy.

I know there has been a lot of disagreement with me on this before, but I am very much in favor of splitting weave into its own project entirely. It doesn't belong in scipy. Over the last year, Stefan, David, and I spent a fair amount of time on fixing issues with weave because we wanted to get scipy released. It slowed down the release and diverted effort that could have been spent on other parts of scipy, which also need attention!

If one of the people interested in keeping weave in scipy would step forward and maintain it, I would be more than happy to keep quiet about my reservations on keeping weave in scipy. If you want weave to stay in scipy, please step up and help fix bugs, add tests, improve the documentation, and refactor the code to make it easier to maintain and support!

From millman at berkeley.edu Wed Jan 28 22:55:08 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 28 Jan 2009 19:55:08 -0800 Subject: [SciPy-dev] [Numpy-discussion] A buildbot farm with shell access - for free ? In-Reply-To: <49811036.5030701@ar.media.kyoto-u.ac.jp> References: <49811036.5030701@ar.media.kyoto-u.ac.jp> Message-ID:

On Wed, Jan 28, 2009 at 6:11 PM, David Cournapeau wrote:
> It is said in the email that this is reserved for the Python project and
> prominent Python projects like Twisted and Django. Would it be OK to try
> to qualify as a prominent Python project as well?

That would be great.

From cournape at gmail.com Thu Jan 29 00:10:32 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 29 Jan 2009 14:10:32 +0900 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> Message-ID: <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com>

On Thu, Jan 29, 2009 at 12:54 PM, Jarrod Millman wrote:
> On Wed, Jan 28, 2009 at 7:25 PM, Nathan Bell wrote:
>> On Sun, Jan 25, 2009 at 12:31 PM, Prabhu Ramachandran wrote:
>>>
>>> Unfortunately, testing this requires the use of nasty external
>>> resources. In particular it requires that VTK be installed and that
>>> weave be told where to find the VTK headers and libraries. In any case,
>>> I modified the paths suitably in the example corresponding to my
>>> system's VTK setup and it seems to work fine -- note that I'm using
>>> version 0.6.0 that ships with Ubuntu rather than SVN.
>>
>> This speaks in favor of distributing the package separately from SciPy.
>
> I know there has been a lot of disagreement with me on this before,
> but I am very much in favor of splitting weave into its own project
> entirely. It doesn't belong in scipy. Over the last year, Stefan,
> David, and I spent a fair amount of time on fixing issues with weave
> because we wanted to get scipy released. It slowed down the release
> and diverted effort that could have been spent on other parts of
> scipy, which also need attention!

I think the main problem with weave in that particular case was that I am not a user of weave myself, especially on Windows.
Thus, the problem is not so much specific to weave as an example of the general dogfooding problem in open source: people mostly care about what they use.

I am not sure I understand the parts concerning VTK and weave interactions; on one hand, I think we should remove as much as possible anything with complicated dependencies in scipy, but removing this feature (as any other feature) means breaking scipy for some people.

cheers,

David

From scott.sinclair.za at gmail.com Thu Jan 29 01:29:01 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 29 Jan 2009 08:29:01 +0200 Subject: [SciPy-dev] Matrix sign function is missing in the docs In-Reply-To: References: Message-ID: <6a17e9ee0901282229q32f0f15fn3d248d20a8688ec1@mail.gmail.com>

> 2009/1/28 Nils Wagner :
> The matrix sign function (linalg.signm) is missing in the docs
>
> scipy/doc/build/html/tutorial/linalg.html#matrix-functions

Not all functions are automatically added to the documentation, because that would make the docs difficult to customize and structure in a sensible way. The signm function needs to be manually added to linalg.rst in the appropriate place. You could use the doc-editor here:

http://docs.scipy.org/scipy/docs/scipy-docs/linalg.rst

The docstring for linalg.signm can be edited here:

http://docs.scipy.org/scipy/docs/scipy.linalg.matfuncs.signm/

Cheers,
Scott

From millman at berkeley.edu Thu Jan 29 02:05:21 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 28 Jan 2009 23:05:21 -0800 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> Message-ID:

On Wed, Jan 28, 2009 at 9:10 PM, David Cournapeau wrote:
> I think the main problem with weave in that particular case was that I
> am not a user of weave myself, especially on Windows. Thus, the problem
> is not so much specific to weave as an example of the general
> dogfooding problem in open source: people mostly care about what they
> use.

Does anyone who is actively developing SciPy use weave? If not, this is going to be a recurring problem.

Jarrod

From gael.varoquaux at normalesup.org Thu Jan 29 02:10:04 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 29 Jan 2009 08:10:04 +0100 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> Message-ID: <20090129071004.GD5567@phare.normalesup.org>

On Wed, Jan 28, 2009 at 11:05:21PM -0800, Jarrod Millman wrote:
> On Wed, Jan 28, 2009 at 9:10 PM, David Cournapeau wrote:
> > I think the main problem with weave in that particular case was that I
> > am not a user of weave myself, especially on Windows. Thus, the problem
> > is not so much specific to weave as an example of the general
> > dogfooding problem in open source: people mostly care about what they
> > use.

> Does anyone who is actively developing SciPy use weave? If not, this
> is going to be a recurring problem.
I see a solution to this: wait a bit more for cython to stabilise and people to get used to it, then say that weave is deprecated in favor of cython, and move it out. I can see such a scheme working (it has worked for me: I no longer really favor weave, cython is nicer).

Gaël

From cournape at gmail.com Thu Jan 29 02:20:08 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 29 Jan 2009 16:20:08 +0900 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <20090129071004.GD5567@phare.normalesup.org> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> Message-ID: <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com>

On Thu, Jan 29, 2009 at 4:10 PM, Gael Varoquaux wrote:
>
> I see a solution to this: wait a bit more for cython to stabilise and
> people to get used to it, then say that weave is deprecated in
> favor of cython, and move it out. I can see such a scheme working (it has
> worked for me: I no longer really favor weave, cython is nicer).

But weave can be used to wrap C++ code, no? Which is not really possible with Cython, AFAIK,

David

From gael.varoquaux at normalesup.org Thu Jan 29 02:22:20 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 29 Jan 2009 08:22:20 +0100 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> Message-ID: <20090129072220.GE5567@phare.normalesup.org>

On Thu, Jan 29, 2009 at 04:20:08PM +0900, David Cournapeau wrote:
> But weave can be used to wrap C++ code, no? Which is not really
> possible with Cython, AFAIK,

Dang, you are right. I think I have used an impedance-matching C++ file which exported its symbols in a C-compatible way to do this kind of thing (is this possible? I haven't played with C++ in years).
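If it is, the pattern would be a C++ file whose exported functions are declared extern "C", loaded from the Python side with something like ctypes -- a hypothetical sketch, every name in it made up:

    import ctypes

    # libwrap.so: a hypothetical wrapper library built from C++ whose
    # exported functions are declared extern "C", e.g.
    #     extern "C" double my_norm(double *x, int n);
    lib = ctypes.CDLL('./libwrap.so')
    lib.my_norm.restype = ctypes.c_double
    lib.my_norm.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]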
Which is not really > possible with Cythin, AFAIK, There is some minimal support for C++ (hopefully it will improve): http://docs.cython.org/docs/wrapping_CPlusPlus.html From millman at berkeley.edu Thu Jan 29 02:29:21 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 28 Jan 2009 23:29:21 -0800 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <20090129071004.GD5567@phare.normalesup.org> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> Message-ID: On Wed, Jan 28, 2009 at 11:10 PM, Gael Varoquaux wrote: > I see a solution to this: wait a bit more for cython to stabilise and > people to get used to it, and then say that weave is depreciated in the > favor of cython, and move it out. I can see such a scheme working (it has > worked for me: I no longer really favor weave, cython is nicer). Since 0.7.0rc2 is out, weave will be in 0.7.x. We could deprecate weave (and recommend cython?) in 0.8.x. Simultaneous with the 0.8 release, we can move weave from the development trunk to its own project. Then weave can raise an error in 0.9.x and point users to the standalone project and/or cython. We could then remove all references to weave in the 0.10.x release. Jarrod From david at ar.media.kyoto-u.ac.jp Thu Jan 29 02:13:52 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 29 Jan 2009 16:13:52 +0900 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <20090129072220.GE5567@phare.normalesup.org> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> <20090129072220.GE5567@phare.normalesup.org> Message-ID: <49815730.9070802@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: > (is this possible? I haven't played with C++ in years). > Yes it is: it is the only way that I know of to call C++ code from other languages independently of the target language. cheers, David From michael.abshoff at googlemail.com Thu Jan 29 02:29:15 2009 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Wed, 28 Jan 2009 23:29:15 -0800 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> Message-ID: <49815ACB.3080109@gmail.com> David Cournapeau wrote: > On Thu, Jan 29, 2009 at 4:10 PM, Gael Varoquaux > wrote: > >> I see a solution to this: wait a bit more for cython to stabilise and >> people to get used to it, and then say that weave is depreciated in the >> favor of cython, and move it out. 
>> I can see such a scheme working (it has
>> worked for me: I no longer really favor weave, cython is nicer).
>
> But weave can be used to wrap C++ code, no? Which is not really
> possible with Cython, AFAIK,

Sage wraps copious amounts of C++ code, but I wouldn't call the current way to do it elegant.

> David

Cheers,

Michael

> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From david at ar.media.kyoto-u.ac.jp Thu Jan 29 02:19:13 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 29 Jan 2009 16:19:13 +0900 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> Message-ID: <49815871.8010103@ar.media.kyoto-u.ac.jp>

Jarrod Millman wrote:
> On Wed, Jan 28, 2009 at 11:10 PM, Gael Varoquaux wrote:
>
>> I see a solution to this: wait a bit more for cython to stabilise and
>> people to get used to it, then say that weave is deprecated in
>> favor of cython, and move it out. I can see such a scheme working (it has
>> worked for me: I no longer really favor weave, cython is nicer).
>
> Since 0.7.0rc2 is out, weave will be in 0.7.x. We could deprecate
> weave (and recommend cython?) in 0.8.x. Simultaneously with the 0.8
> release, we can move weave from the development trunk to its own
> project. Then weave can raise an error in 0.9.x and point users to
> the standalone project and/or cython. We could then remove all
> references to weave in the 0.10.x release.
>

I am afraid this will annoy many users: I don't know the numbers (I guess nobody knows), but there seem to be quite a few users of weave. And adding C++ support to the cython alternative - or to anything which needs some C++ understanding - is a huge task; I am not saying it would not be useful, but C++ is such a complicated thing that supporting even a useful subset of it is very hard, and it will certainly take time.

David

From matthieu.brucher at gmail.com Thu Jan 29 05:11:55 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 29 Jan 2009 11:11:55 +0100 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> References: <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> Message-ID:

2009/1/29 David Cournapeau :
> On Thu, Jan 29, 2009 at 4:10 PM, Gael Varoquaux wrote:
>
>> I see a solution to this: wait a bit more for cython to stabilise and
>> people to get used to it, then say that weave is deprecated in
>> favor of cython, and move it out. I can see such a scheme working (it has
>> worked for me: I no longer really favor weave, cython is nicer).
>
> But weave can be used to wrap C++ code, no? Which is not really
> possible with Cython, AFAIK,

It's more about using the blitz converters than wrapping C++. The converters turn the code into the corresponding C++ code.
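A typical inline call looks roughly like this (a sketch from memory, untested):

    from scipy import weave
    from scipy.weave import converters
    import numpy as np

    a = np.arange(10.)
    code = """
           double s = 0.0;
           for (int i = 0; i < Na[0]; i++)
               s += a(i);            // blitz-style element access
           return_val = s;
           """
    print weave.inline(code, ['a'], type_converters=converters.blitz)

The code string is compiled as C++, with `a` exposed as a blitz array and `Na` holding its shape, so the blitz converters need a C++ compiler even when no external C++ library is being wrapped.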
Using Cython instead of weave won't be much trouble if Cython supports numpy arrays.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From gael.varoquaux at normalesup.org Thu Jan 29 05:15:16 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 29 Jan 2009 11:15:16 +0100 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: References: <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> <20090129071004.GD5567@phare.normalesup.org> <5b8d13220901282320x580a33bfs4b6199a06b71589b@mail.gmail.com> Message-ID: <20090129101516.GI5567@phare.normalesup.org>

On Thu, Jan 29, 2009 at 11:11:55AM +0100, Matthieu Brucher wrote:
> Using Cython instead of weave won't be much trouble if Cython
> supports numpy arrays.

The one missing feature of Cython compared to weave is on-the-fly compilation of code, I would say.

Gaël

From prabhu at aero.iitb.ac.in Thu Jan 29 06:08:34 2009 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 29 Jan 2009 16:38:34 +0530 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> Message-ID: <49818E32.4060006@aero.iitb.ac.in>

On 01/29/09 08:55, Nathan Bell wrote:
> On Sun, Jan 25, 2009 at 12:31 PM, Prabhu Ramachandran wrote:
>> Unfortunately, testing this requires the use of nasty external
>> resources. In particular it requires that VTK be installed and that
>> weave be told where to find the VTK headers and libraries. In any case,
>> I modified the paths suitably in the example corresponding to my
>> system's VTK setup and it seems to work fine -- note that I'm using
>> version 0.6.0 that ships with Ubuntu rather than SVN.
>
> This speaks in favor of distributing the package separately from SciPy.

For one single file (vtk_spec.py) that adds optional support for VTK and does not affect anything else? Or are you saying all of weave should be separately packaged because of other reasons?

cheers,
prabhu

From prabhu at aero.iitb.ac.in Thu Jan 29 06:18:26 2009 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 29 Jan 2009 16:48:26 +0530 Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch) In-Reply-To: <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com> <496602B1.5050402@aero.iitb.ac.in> <9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com> <497CA1D7.3080703@aero.iitb.ac.in> <5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com> Message-ID: <49819082.1010203@aero.iitb.ac.in>

On 01/29/09 10:40, David Cournapeau wrote:
> I am not sure I understand the parts concerning VTK and weave
> interactions; on one hand, I think we should remove as much as
> possible anything with complicated dependencies in scipy, but removing
> this feature (as any other feature) means breaking scipy for some
> people.
While I understand the frustrations with weave and the lack of
developer interest, the VTK support is entirely optional and as such
should not affect your decision to retain or remove weave from scipy.
You can build and install weave without wx, VTK, SWIG, etc. installed.
You obviously can't have automated testing of any of those features,
though. However, once weave is built, and if VTK is available, you can
safely use the vtk_spec file to interface to VTK. That's a big enough
pro for me to think keeping it in makes sense. Besides, this is
weave-specific, and it really does not make sense to put it in VTK,
swig, or wx.

cheers,
prabhu

From gael.varoquaux at normalesup.org  Thu Jan 29 06:19:54 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 29 Jan 2009 12:19:54 +0100
Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch)
In-Reply-To: <49818E32.4060006@aero.iitb.ac.in>
References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>
	<496602B1.5050402@aero.iitb.ac.in>
	<9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com>
	<497CA1D7.3080703@aero.iitb.ac.in>
	<49818E32.4060006@aero.iitb.ac.in>
Message-ID: <20090129111954.GJ5567@phare.normalesup.org>

On Thu, Jan 29, 2009 at 04:38:34PM +0530, Prabhu Ramachandran wrote:
> On 01/29/09 08:55, Nathan Bell wrote:
> > On Sun, Jan 25, 2009 at 12:31 PM, Prabhu Ramachandran
> > wrote:
> >> Unfortunately, testing this requires the use of nasty external
> >> resources. In particular it requires that VTK be installed and that
> >> weave be told where to find the VTK headers and libraries. In any
> >> case, I modified the paths suitably in the example corresponding to
> >> my system's VTK setup and it seems to work fine -- note that I'm
> >> using version 0.6.0 that ships with Ubuntu rather than SVN.
>
> > This speaks in favor of distributing the package separately from SciPy.
>
> For one single file (vtk_spec.py) that adds optional support for VTK
> and does not affect anything else? Or are you saying that all of weave
> should be packaged separately for other reasons?

The discussion is mainly around the complete weave package, I believe.
This file is symptomatic, because it cannot be tested with a vanilla
install of scipy, but it is only a minor part of the argument.

Gaël

From lists at cheimes.de  Thu Jan 29 08:36:07 2009
From: lists at cheimes.de (Christian Heimes)
Date: Thu, 29 Jan 2009 14:36:07 +0100
Subject: [SciPy-dev] A buildbot farm with shell access - for free ?
In-Reply-To: <49811036.5030701@ar.media.kyoto-u.ac.jp>
References: <49811036.5030701@ar.media.kyoto-u.ac.jp>
Message-ID: 

David Cournapeau wrote:
> It is said in the email that this is reserved to the python project,
> and prominent python projects like Twisted and Django. Would it be ok
> to try to be qualified as a prominent python project as well ?

Give it some time. Nobody - not even the Python core devs - has access
to the machines. It's going to take at least several weeks to get the
infrastructure running and to establish a policy.

NumPy and Sage both count as prominent Python projects, from my
perspective. Heck, you are in competition with tools like Matlab and
you ain't looking bad! Furthermore, you could make better use of the
machines than Django because you use way more C extensions and esoteric
libraries.

I recommend you subscribe to the snakebite list and bring up your
interest. You've got my +1 already.
For now the list is snakebite-list at googlegroups.com, but it will
move to another server (probably Python.org) eventually.

Christian

From cournape at gmail.com  Thu Jan 29 09:40:43 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 29 Jan 2009 23:40:43 +0900
Subject: [SciPy-dev] removing wx-related weave code (was test errors blocking 0.7.x branch)
In-Reply-To: <49819082.1010203@aero.iitb.ac.in>
References: <9457e7c80901072344hacaedbrc7e2b5e08fdacd09@mail.gmail.com>
	<496602B1.5050402@aero.iitb.ac.in>
	<9457e7c80901080648q67146748p6f8181d88a4797c2@mail.gmail.com>
	<497CA1D7.3080703@aero.iitb.ac.in>
	<5b8d13220901282110n31696f31p3c96e41938ef0a82@mail.gmail.com>
	<49819082.1010203@aero.iitb.ac.in>
Message-ID: <5b8d13220901290640p27633b7fj37e27bc46f7da0a7@mail.gmail.com>

On Thu, Jan 29, 2009 at 8:18 PM, Prabhu Ramachandran
 wrote:
> On 01/29/09 10:40, David Cournapeau wrote:
>> I am not sure I understand the parts concerning VTK and weave
>> interactions; on one hand, I think we should remove as much as
>> possible anything with complicated dependencies in scipy, but removing
>> this feature (as any other feature) means breaking scipy for some
>> people.
>
> While I understand the frustrations with weave and the lack of
> developer interest, the VTK support is entirely optional and as such
> should not affect your decision to retain or remove weave from scipy.

It won't be my decision, but there is an inherent support problem every
time you add a new dependency to a project. It has a development cost,
and I don't think scipy has that many people working on it for the
amount of code already there. The fact that the feature is optional has
nothing to do with it - unless optional means you can break it at any
time, but I doubt that's what you meant :)

David
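As background for the discussion above, the weave usage being debated -
inline C++ with the blitz type converters - looks roughly like this.
This is a minimal sketch, assuming scipy.weave and a working C++
compiler; the function and variable names are made up for illustration:

    import numpy as np
    from scipy import weave
    from scipy.weave import converters

    def weighted_sum(a, w):
        """Sum a[i] * w[i] over i, with the loop written in C++."""
        code = """
        double total = 0.0;
        for (int i = 0; i < Na[0]; ++i)
            total += a(i) * w(i);
        return_val = total;
        """
        # inline() compiles the snippet on first use and caches the
        # resulting extension module; the blitz converters expose the
        # NumPy arrays as Blitz++ arrays, hence the a(i) indexing and
        # the Na shape variable that weave generates for the array 'a'.
        return weave.inline(code, ['a', 'w'],
                            type_converters=converters.blitz)

    print weighted_sum(np.arange(5.0), np.ones(5))  # -> 10.0

The on-the-fly compilation and caching shown here is the weave feature
Gael points out that Cython does not replace.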