From scipy-svn at scipy.org Mon Sep 1 10:40:48 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 1 Sep 2008 09:40:48 -0500 (CDT) Subject: [Scipy-svn] r4680 - trunk Message-ID: <20080901144048.7527139C09F@scipy.org> Author: cdavid Date: 2008-09-01 09:40:30 -0500 (Mon, 01 Sep 2008) New Revision: 4680 Modified: trunk/INSTALL.txt Log: Update INSTALL notes. Modified: trunk/INSTALL.txt =================================================================== --- trunk/INSTALL.txt 2008-08-29 23:27:17 UTC (rev 4679) +++ trunk/INSTALL.txt 2008-09-01 14:40:30 UTC (rev 4680) @@ -40,7 +40,7 @@ 3) Complete LAPACK__ library (see NOTES 1, 2, 3) - Debian packages: atlas3-headers atlas3-base atlas3-base-dev + Debian/Ubuntu packages (g77): atlas3-base atlas3-base-dev Various SciPy packages do linear algebra computations using the LAPACK routines. SciPy's setup.py scripts can use number of different LAPACK @@ -63,13 +63,10 @@ 1) C, C++, Fortran 77 compilers (see COMPILER NOTES) To build SciPy or any other extension modules for Python, you'll need - a C compiler. + a C compiler. Scipy also requires a C++ compiler. Various SciPy modules use Fortran 77 libraries, so you'll need also - at least a Fortran 77 compiler installed. Currently the SciPy build - process does not use a C++ compiler, but the SciPy module Weave uses - a C++ compiler at run time, so it is good to have C++ compiler around - as well. + at least a Fortran 77 compiler installed. gcc__ 3.x compilers are recommended. gcc 2.95 and 4.0.x also work on some platforms, but may be more problematic (see COMPILER NOTES). @@ -79,10 +76,9 @@ __ http://gcc.gnu.org/ -2) FFTW__ 2.1.x (see Lib/fftpack/NOTES.txt) +2) FFTW__ x (see Lib/fftpack/NOTES.txt) - FFTW 3.0.x may also work, but SciPy currently has better performance - with FFTW 2.1.x on complex input. + FFTW 2.1.x and 3.x work. 
Debian packages: fftw2 fftw-dev fftw3 fftw3-dev @@ -98,7 +94,11 @@ http://math-atlas.sourceforge.net/errata.html#completelp - for instructions. + for instructions. Please be aware that building your own ATLAS is + error-prone, and should be avoided as much as possible if you don't want to + spend time on build issues. Use the BLAS/LAPACK packaged by your + distribution on Linux; on Mac OS X, you should use the vecLib/Accelerate + framework, which is available when installing the Apple development tools. Below follows basic steps for building ATLAS+LAPACK from scratch. In case of trouble, consult the documentation of the corresponding @@ -255,13 +255,12 @@ other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Porland, Lahey, HP, IBM are supported in the form of community feedback. -gcc__ 3.x compilers are recommended. gcc 4.0.x also works on some -platforms (e.g. Linux x86). SciPy is not fully compatible with gcc -4.0.x on OS X. If building on OS X, we recommend you use gcc 3.3, by -typing: +The gcc__ compiler is recommended. gcc 3.x and 4.x are known to work. +If building on OS X, you should use the gcc provided by the Xcode tools, and the +gfortran compiler available here: - gcc_select 3.3 - +http://r.research.att.com/tools/ + You can specify which Fortran compiler to use by using the following install command:: @@ -271,9 +270,10 @@ python setup.py config_fc --help-fcompiler -IMPORTANT: It is highly recommended that all libraries that scipy uses -(e.g. blas and atlas libraries) are built with the same Fortran -compiler. +IMPORTANT: It is highly recommended that all libraries that scipy uses (e.g. +BLAS and ATLAS libraries) are built with the same Fortran compiler. In most +cases, if you mix compilers, at best you will not be able to import scipy, and +at worst you will get crashes and random results. 
__ http://gcc.gnu.org/ From scipy-svn at scipy.org Mon Sep 1 14:28:33 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 1 Sep 2008 13:28:33 -0500 (CDT) Subject: [Scipy-svn] r4681 - in trunk/scipy: . fftpack integrate ndimage sparse/linalg/dsolve sparse/linalg/eigen/lobpcg stats Message-ID: <20080901182833.629B539C140@scipy.org> Author: alan.mcintyre Date: 2008-09-01 13:28:08 -0500 (Mon, 01 Sep 2008) New Revision: 4681 Modified: trunk/scipy/__init__.py trunk/scipy/fftpack/info.py trunk/scipy/integrate/ode.py trunk/scipy/ndimage/filters.py trunk/scipy/sparse/linalg/dsolve/__init__.py trunk/scipy/sparse/linalg/eigen/lobpcg/__init__.py trunk/scipy/stats/distributions.py trunk/scipy/stats/mstats.py Log: Correct handling of __doc__ to allow for running under python -OO. Modified: trunk/scipy/__init__.py =================================================================== --- trunk/scipy/__init__.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/__init__.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -34,7 +34,8 @@ __all__ += ['randn', 'rand', 'fft', 'ifft'] -__doc__ += """ +if __doc__: + __doc__ += """ Contents -------- @@ -76,17 +77,20 @@ del name, subpackages -__doc__ += """ +if __doc__: + __doc__ += """ Available subpackages --------------------- """ -__doc__ += pkgload.get_pkgdocs() +if __doc__: + __doc__ += pkgload.get_pkgdocs() from numpy.testing import Tester test = Tester().test bench = Tester().bench -__doc__ += """ +if __doc__: + __doc__ += """ Utility tools ------------- Modified: trunk/scipy/fftpack/info.py =================================================================== --- trunk/scipy/fftpack/info.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/fftpack/info.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -51,7 +51,10 @@ 'rfftfreq' ] -__doc_title__ = __doc__.lstrip().split('\n',1)[0] +if __doc__: + __doc_title__ = __doc__.lstrip().split('\n',1)[0] +else: + __doc_title__ = None postpone_import = 1 Modified: trunk/scipy/integrate/ode.py 
=================================================================== --- trunk/scipy/integrate/ode.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/integrate/ode.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -99,7 +99,8 @@ """ -__doc__ += integrator_info +if __doc__: + __doc__ += integrator_info # XXX: Integrators must have: # =========================== @@ -188,7 +189,8 @@ """ - __doc__ += integrator_info + if __doc__: + __doc__ += integrator_info def __init__(self, f, jac=None): """ Modified: trunk/scipy/ndimage/filters.py =================================================================== --- trunk/scipy/ndimage/filters.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/ndimage/filters.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -44,8 +44,9 @@ def moredoc(*args): def decorate(f): - if not f.__doc__: f.__doc__ = "" - for a in args: f.__doc__ += a + if f.__doc__ is not None: + for a in args: + f.__doc__ += a return f return decorate Modified: trunk/scipy/sparse/linalg/dsolve/__init__.py =================================================================== --- trunk/scipy/sparse/linalg/dsolve/__init__.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/sparse/linalg/dsolve/__init__.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -2,9 +2,9 @@ from info import __doc__ -import umfpack +#import umfpack #__doc__ = '\n\n'.join( (__doc__, umfpack.__doc__) ) -del umfpack +#del umfpack from linsolve import * Modified: trunk/scipy/sparse/linalg/eigen/lobpcg/__init__.py =================================================================== --- trunk/scipy/sparse/linalg/eigen/lobpcg/__init__.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/sparse/linalg/eigen/lobpcg/__init__.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -2,7 +2,8 @@ from info import __doc__ import lobpcg -__doc__ = '\n\n'.join( (lobpcg.__doc__, __doc__) ) +if __doc__ and lobpcg.__doc__: + __doc__ = '\n\n'.join( (lobpcg.__doc__, __doc__) ) del lobpcg from lobpcg import * Modified: trunk/scipy/stats/distributions.py 
=================================================================== --- trunk/scipy/stats/distributions.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/stats/distributions.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -324,7 +324,7 @@ longname = hstr + name if self.__doc__ is None: self.__doc__ = rv_continuous.__doc__ - if self.__doc__ is not None: + else: self.__doc__ = self.__doc__.replace("A Generic",longname) if name is not None: self.__doc__ = self.__doc__.replace("generic",name) @@ -3428,7 +3428,7 @@ longname = hstr + name if self.__doc__ is None: self.__doc__ = rv_discrete.__doc__ - if self.__doc__ is not None: + else: self.__doc__ = self.__doc__.replace("A Generic",longname) if name is not None: self.__doc__ = self.__doc__.replace("generic",name) Modified: trunk/scipy/stats/mstats.py =================================================================== --- trunk/scipy/stats/mstats.py 2008-09-01 14:40:30 UTC (rev 4680) +++ trunk/scipy/stats/mstats.py 2008-09-01 18:28:08 UTC (rev 4681) @@ -591,9 +591,11 @@ t = rpb*ma.sqrt(df/(1.0-rpb**2)) prob = betai(0.5*df, 0.5, df/(df+t*t)) return rpb, prob -pointbiserialr.__doc__ = stats.pointbiserialr.__doc__ + genmissingvaldoc +if stats.pointbiserialr.__doc__: + pointbiserialr.__doc__ = stats.pointbiserialr.__doc__ + genmissingvaldoc + def linregress(*args): if len(args) == 1: # more than 1D array? args = ma.array(args[0], copy=True) @@ -630,9 +632,11 @@ intercept = ymean - slope*xmean sterrest = ma.sqrt(1.-r*r) * y.std() return slope, intercept, r, prob, sterrest, Syy/Sxx -linregress.__doc__ = stats.linregress.__doc__ + genmissingvaldoc +if stats.linregress.__doc__: + linregress.__doc__ = stats.linregress.__doc__ + genmissingvaldoc + def theilslopes(y, x=None, alpha=0.05): """Computes the Theil slope over the dataset (x,y), as the median of all slopes between paired values. 
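The distributions.py change above (replacing the second `if self.__doc__ is not None:` with `else:`) makes the template substitution and the None-fallback mutually exclusive. A minimal sketch of the fixed logic (a stand-in, not the scipy source; `Generic` and `specialize` are hypothetical names for illustration) — under `python -OO` docstrings are stripped to `None`, so `replace()` must only run when a docstring actually exists:

```python
class Generic:
    """A Generic distribution.

    generic.pdf(x) evaluates the density at x.
    """

def specialize(doc, longname, name):
    # doc plays the role of self.__doc__; Generic.__doc__ is the template.
    if doc is None:
        # Under -OO the template is also None, so this stays None.
        doc = Generic.__doc__
    else:
        doc = doc.replace("A Generic", longname)
        if name is not None:
            doc = doc.replace("generic", name)
    return doc

assert specialize(None, "A norm", "norm") == Generic.__doc__
assert "A norm" in specialize(Generic.__doc__, "A norm", "norm")
```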
@@ -1084,9 +1088,11 @@ return trimr(a, limits=limits, inclusive=inclusive, axis=axis) else: return trima(a, limits=limits, inclusive=inclusive) -trim.__doc__ = trim.__doc__ % trimdoc +if trim.__doc__: + trim.__doc__ = trim.__doc__ % trimdoc + def trimboth(data, proportiontocut=0.2, inclusive=(True,True), axis=None): """Trims the data by masking the int(proportiontocut*n) smallest and int(proportiontocut*n) largest values of data along the given axis, where n From scipy-svn at scipy.org Wed Sep 3 12:18:46 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Wed, 3 Sep 2008 11:18:46 -0500 (CDT) Subject: [Scipy-svn] r4682 - trunk/scipy/cluster/tests Message-ID: <20080903161846.1979F39C02A@scipy.org> Author: alan.mcintyre Date: 2008-09-03 11:18:43 -0500 (Wed, 03 Sep 2008) New Revision: 4682 Modified: trunk/scipy/cluster/tests/test_distance.py Log: Limit output based on 'verbose' parameter. Modified: trunk/scipy/cluster/tests/test_distance.py =================================================================== --- trunk/scipy/cluster/tests/test_distance.py 2008-09-01 18:28:08 UTC (rev 4681) +++ trunk/scipy/cluster/tests/test_distance.py 2008-09-03 16:18:43 UTC (rev 4682) @@ -150,7 +150,8 @@ Y_right = eo['pdist-euclidean-iris'] Y_test1 = pdist(X, 'euclidean') - print np.abs(Y_right - Y_test1).max() + if verbose > 2: + print np.abs(Y_right - Y_test1).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) def test_pdist_euclidean_iris_nonC(self): @@ -269,7 +270,8 @@ Y_right = eo['pdist-cosine-iris'] Y_test1 = pdist(X, 'cosine') - print np.abs(Y_test1 - Y_right).max() + if verbose > 2: + print np.abs(Y_test1 - Y_right).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) #print "cosine-iris", np.abs(Y_test1 - Y_right).max() @@ -331,7 +333,8 @@ Y_right = eo['pdist-cityblock-iris'] Y_test1 = pdist(X, 'cityblock') - print "cityblock-iris-float32", np.abs(Y_test1 - Y_right).max() + if verbose > 2: + print "cityblock-iris-float32", np.abs(Y_test1 - 
Y_right).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) def test_pdist_cityblock_iris_nonC(self): @@ -394,7 +397,8 @@ Y_right = np.float32(eo['pdist-correlation-iris']) Y_test1 = pdist(X, 'correlation') - print "correlation-iris", np.abs(Y_test1 - Y_right).max() + if verbose > 2: + print "correlation-iris", np.abs(Y_test1 - Y_right).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) def test_pdist_correlation_iris_nonC(self): @@ -487,7 +491,8 @@ Y_right = eo['pdist-minkowski-5.8-iris'] Y_test1 = pdist(X, 'minkowski', 5.8) - print "minkowski-iris-5.8", np.abs(Y_test1 - Y_right).max() + if verbose > 2: + print "minkowski-iris-5.8", np.abs(Y_test1 - Y_right).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) def test_pdist_minkowski_iris_nonC(self): @@ -649,7 +654,8 @@ Y_right = eo['pdist-chebychev'] Y_test1 = pdist(X, 'chebychev') - print "chebychev", np.abs(Y_test1 - Y_right).max() + if verbose > 2: + print "chebychev", np.abs(Y_test1 - Y_right).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) def test_pdist_chebychev_random_nonC(self): @@ -679,7 +685,8 @@ X = np.float32(eo['iris']) Y_right = eo['pdist-chebychev-iris'] Y_test1 = pdist(X, 'chebychev') - print "chebychev-iris", np.abs(Y_test1 - Y_right).max() + if verbose > 2: + print "chebychev-iris", np.abs(Y_test1 - Y_right).max() self.failUnless(within_tol(Y_test1, Y_right, eps)) def test_pdist_chebychev_iris_nonC(self): @@ -714,13 +721,15 @@ "Tests pdist(X, 'matching') to see if the two implementations match on random boolean input data." 
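The pattern applied throughout r4682 is the same everywhere: compute the maximum deviation, print it only when the test runner's verbosity is high, and assert against a tolerance. A self-contained sketch (here `verbose` is a stand-in for the module-level value the test file takes from the runner, and `within_tol` mirrors the helper defined at the bottom of the file):

```python
import numpy as np

verbose = 0  # stand-in for the runner-supplied verbosity level

def within_tol(a, b, tol):
    # Maximum elementwise deviation, compared against the tolerance.
    return np.abs(a - b).max() <= tol

Y_right = np.array([0.1, 0.2, 0.3])
Y_test1 = Y_right + 1e-12
if verbose > 2:
    # Diagnostic output, now gated on verbosity instead of unconditional.
    print(np.abs(Y_test1 - Y_right).max())
assert within_tol(Y_test1, Y_right, 1e-10)
```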
D = eo['random-bool-data'] B = np.bool_(D) - print B.shape, B.dtype + if verbose > 2: + print B.shape, B.dtype eps = 1e-10 y1 = pdist(B, "matching") y2 = pdist(B, "test_matching") y3 = pdist(D, "test_matching") - print np.abs(y1-y2).max() - print np.abs(y1-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y1-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) @@ -745,13 +754,15 @@ def test_pdist_jaccard_match(self): "Tests pdist(X, 'jaccard') to see if the two implementations match on random double input data." D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "jaccard") y2 = pdist(D, "test_jaccard") y3 = pdist(np.bool_(D), "test_jaccard") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) @@ -761,7 +772,8 @@ np.array([1, 1, 0, 1, 1])) m2 = yule(np.array([1, 0, 1, 1, 0], dtype=np.bool), np.array([1, 1, 0, 1, 1], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - 2.0) <= 1e-10) self.failUnless(np.abs(m2 - 2.0) <= 1e-10) @@ -771,20 +783,23 @@ np.array([1, 1, 0])) m2 = yule(np.array([1, 0, 1], dtype=np.bool), np.array([1, 1, 0], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - 2.0) <= 1e-10) self.failUnless(np.abs(m2 - 2.0) <= 1e-10) def test_pdist_yule_match(self): "Tests pdist(X, 'yule') to see if the two implementations match on random double input data." 
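The expected constant `2.0` in the yule tests can be checked by hand. This sketch uses one common formulation of the Yule dissimilarity over boolean vectors (not necessarily scipy's exact internal implementation, but it reproduces the tested values):

```python
import numpy as np

def yule(u, v):
    # 2 * c_tf * c_ft / (c_tt * c_ff + c_tf * c_ft), where c_xy counts
    # positions with u == x and v == y.
    u = np.asarray(u, dtype=bool)
    v = np.asarray(v, dtype=bool)
    ctt = int(np.sum(u & v))
    ctf = int(np.sum(u & ~v))
    cft = int(np.sum(~u & v))
    cff = int(np.sum(~u & ~v))
    return 2.0 * ctf * cft / (ctt * cff + ctf * cft)

# The value the tests above assert for this pair:
assert abs(yule([1, 0, 1, 1, 0], [1, 1, 0, 1, 1]) - 2.0) <= 1e-10
```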
D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "yule") y2 = pdist(D, "test_yule") y3 = pdist(np.bool_(D), "test_yule") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) @@ -794,7 +809,8 @@ np.array([1, 1, 0, 1, 1])) m2 = dice(np.array([1, 0, 1, 1, 0], dtype=np.bool), np.array([1, 1, 0, 1, 1], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (3.0/7.0)) <= 1e-10) self.failUnless(np.abs(m2 - (3.0/7.0)) <= 1e-10) @@ -804,20 +820,23 @@ np.array([1, 1, 0])) m2 = dice(np.array([1, 0, 1], dtype=np.bool), np.array([1, 1, 0], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - 0.5) <= 1e-10) self.failUnless(np.abs(m2 - 0.5) <= 1e-10) def test_pdist_dice_match(self): "Tests pdist(X, 'dice') to see if the two implementations match on random double input data." 
D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "dice") y2 = pdist(D, "test_dice") y3 = pdist(D, "test_dice") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) @@ -827,7 +846,8 @@ np.array([1, 1, 0, 1, 1])) m2 = sokalsneath(np.array([1, 0, 1, 1, 0], dtype=np.bool), np.array([1, 1, 0, 1, 1], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (3.0/4.0)) <= 1e-10) self.failUnless(np.abs(m2 - (3.0/4.0)) <= 1e-10) @@ -837,20 +857,23 @@ np.array([1, 1, 0])) m2 = sokalsneath(np.array([1, 0, 1], dtype=np.bool), np.array([1, 1, 0], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (4.0/5.0)) <= 1e-10) self.failUnless(np.abs(m2 - (4.0/5.0)) <= 1e-10) def test_pdist_sokalsneath_match(self): "Tests pdist(X, 'sokalsneath') to see if the two implementations match on random double input data." 
D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "sokalsneath") y2 = pdist(D, "test_sokalsneath") y3 = pdist(np.bool_(D), "test_sokalsneath") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) @@ -860,7 +883,8 @@ np.array([1, 1, 0, 1, 1])) m2 = rogerstanimoto(np.array([1, 0, 1, 1, 0], dtype=np.bool), np.array([1, 1, 0, 1, 1], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (3.0/4.0)) <= 1e-10) self.failUnless(np.abs(m2 - (3.0/4.0)) <= 1e-10) @@ -870,20 +894,23 @@ np.array([1, 1, 0])) m2 = rogerstanimoto(np.array([1, 0, 1], dtype=np.bool), np.array([1, 1, 0], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (4.0/5.0)) <= 1e-10) self.failUnless(np.abs(m2 - (4.0/5.0)) <= 1e-10) def test_pdist_rogerstanimoto_match(self): "Tests pdist(X, 'rogerstanimoto') to see if the two implementations match on random double input data." 
D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "rogerstanimoto") y2 = pdist(D, "test_rogerstanimoto") y3 = pdist(np.bool_(D), "test_rogerstanimoto") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) @@ -893,7 +920,8 @@ np.array([1, 1, 0, 1, 1])) m2 = russellrao(np.array([1, 0, 1, 1, 0], dtype=np.bool), np.array([1, 1, 0, 1, 1], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (3.0/5.0)) <= 1e-10) self.failUnless(np.abs(m2 - (3.0/5.0)) <= 1e-10) @@ -903,55 +931,64 @@ np.array([1, 1, 0])) m2 = russellrao(np.array([1, 0, 1], dtype=np.bool), np.array([1, 1, 0], dtype=np.bool)) - print m + if verbose > 2: + print m self.failUnless(np.abs(m - (2.0/3.0)) <= 1e-10) self.failUnless(np.abs(m2 - (2.0/3.0)) <= 1e-10) def test_pdist_russellrao_match(self): "Tests pdist(X, 'russellrao') to see if the two implementations match on random double input data." D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "russellrao") y2 = pdist(D, "test_russellrao") y3 = pdist(np.bool_(D), "test_russellrao") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) def test_pdist_sokalmichener_match(self): "Tests pdist(X, 'sokalmichener') to see if the two implementations match on random double input data." 
D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "sokalmichener") y2 = pdist(D, "test_sokalmichener") y3 = pdist(np.bool_(D), "test_sokalmichener") - print np.abs(y1-y2).max() - print np.abs(y2-y3).max() + if verbose > 2: + print np.abs(y1-y2).max() + print np.abs(y2-y3).max() self.failUnless(within_tol(y1, y2, eps)) self.failUnless(within_tol(y2, y3, eps)) def test_pdist_kulsinski_match(self): "Tests pdist(X, 'kulsinski') to see if the two implementations match on random double input data." D = eo['random-bool-data'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "kulsinski") y2 = pdist(D, "test_kulsinski") y3 = pdist(np.bool_(D), "test_kulsinski") - print np.abs(y1-y2).max() + if verbose > 2: + print np.abs(y1-y2).max() self.failUnless(within_tol(y1, y2, eps)) def test_pdist_canberra_match(self): "Tests pdist(X, 'canberra') to see if the two implementations match on the Iris data set." 
D = eo['iris'] - print D.shape, D.dtype + if verbose > 2: + print D.shape, D.dtype eps = 1e-10 y1 = pdist(D, "canberra") y2 = pdist(D, "test_canberra") - print np.abs(y1-y2).max() + if verbose > 2: + print np.abs(y1-y2).max() self.failUnless(within_tol(y1, y2, eps)) def test_pdist_canberra_ticket_711(self): @@ -959,7 +996,8 @@ eps = 1e-8 pdist_y = pdist(([3.3], [3.4]), "canberra") right_y = 0.01492537 - print np.abs(pdist_y-right_y).max() + if verbose > 2: + print np.abs(pdist_y-right_y).max() self.failUnless(within_tol(pdist_y, right_y, eps)) def within_tol(a, b, tol): From scipy-svn at scipy.org Wed Sep 3 12:58:55 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Wed, 3 Sep 2008 11:58:55 -0500 (CDT) Subject: [Scipy-svn] r4683 - in trunk/scipy: cluster cluster/tests io io/arff io/arff/tests io/matlab io/tests misc/tests signal/tests sparse/linalg/eigen/arpack/tests special special/tests stats stats/tests Message-ID: <20080903165855.3C67939C02A@scipy.org> Author: alan.mcintyre Date: 2008-09-03 11:58:28 -0500 (Wed, 03 Sep 2008) New Revision: 4683 Modified: trunk/scipy/cluster/tests/vq_test.py trunk/scipy/cluster/vq.py trunk/scipy/io/arff/arffread.py trunk/scipy/io/arff/tests/test_data.py trunk/scipy/io/matlab/mio4.py trunk/scipy/io/matlab/mio5.py trunk/scipy/io/matlab/miobase.py trunk/scipy/io/npfile.py trunk/scipy/io/tests/test_npfile.py trunk/scipy/io/tests/test_recaster.py trunk/scipy/misc/tests/test_pilutil.py trunk/scipy/signal/tests/test_wavelets.py trunk/scipy/sparse/linalg/eigen/arpack/tests/test_speigs.py trunk/scipy/special/spfun_stats.py trunk/scipy/special/tests/test_spfun_stats.py trunk/scipy/stats/_support.py trunk/scipy/stats/tests/test_morestats.py Log: Standardize NumPy import as "import numpy as np". 
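The ticket #711 regression value follows from the same definition: for the single pair `([3.3], [3.4])`, the Canberra distance is `|3.3 - 3.4| / (3.3 + 3.4) = 0.1 / 6.7 ≈ 0.01492537`, which is exactly the `right_y` the test asserts against. The r4683 import standardization is purely cosmetic, since both aliases bind the same module object:

```python
import numpy as N   # old style being replaced in r4683
import numpy as np  # project-wide convention after this commit

assert N is np
assert N.float32 is np.float32
```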
Modified: trunk/scipy/cluster/tests/vq_test.py =================================================================== --- trunk/scipy/cluster/tests/vq_test.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/cluster/tests/vq_test.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,6 +1,5 @@ -import numpy as N +import numpy as np from scipy.cluster import vq -#import vq_c as vq def python_vq(all_data,code_book): import time @@ -12,8 +11,8 @@ print ' first dist:', dist1[:5] print ' last codes:', codes1[-5:] print ' last dist:', dist1[-5:] - float_obs = all_data.astype(N.float32) - float_code = code_book.astype(N.float32) + float_obs = all_data.astype(np.float32) + float_code = code_book.astype(np.float32) t1 = time.time() codes1,dist1 = vq.vq(float_obs,float_code) t2 = time.time() @@ -34,12 +33,12 @@ return array(data) def main(): - N.random.seed((1000,1000)) + np.random.seed((1000,1000)) Ncodes = 40 Nfeatures = 16 Nobs = 4000 - code_book = N.random.normal(0,1,(Ncodes,Nfeatures)) - features = N.random.normal(0,1,(Nobs,Nfeatures)) + code_book = np.random.normal(0,1,(Ncodes,Nfeatures)) + features = np.random.normal(0,1,(Nobs,Nfeatures)) codes,dist = python_vq(features,code_book) if __name__ == '__main__': Modified: trunk/scipy/cluster/vq.py =================================================================== --- trunk/scipy/cluster/vq.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/cluster/vq.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -83,7 +83,7 @@ from numpy import shape, zeros, sqrt, argmin, minimum, array, \ newaxis, arange, compress, equal, common_type, single, double, take, \ std, mean -import numpy as N +import numpy as np class ClusterError(Exception): pass @@ -233,8 +233,8 @@ """ # n = number of observations # d = number of features - if N.ndim(obs) == 1: - if not N.ndim(obs) == N.ndim(code_book): + if np.ndim(obs) == 1: + if not np.ndim(obs) == np.ndim(code_book): raise ValueError( "Observation and code_book should have the same rank") else: @@ -244,7 +244,7 @@ # 
code books and observations should have same number of features and same # shape - if not N.ndim(obs) == N.ndim(code_book): + if not np.ndim(obs) == np.ndim(code_book): raise ValueError("Observation and code_book should have the same rank") elif not d == code_book.shape[1]: raise ValueError("Code book(%d) and obs(%d) should have the same " \ @@ -254,7 +254,7 @@ code = zeros(n, dtype=int) min_dist = zeros(n) for i in range(n): - dist = N.sum((obs[i] - code_book) ** 2, 1) + dist = np.sum((obs[i] - code_book) ** 2, 1) code[i] = argmin(dist) min_dist[i] = dist[code[i]] @@ -281,9 +281,9 @@ raise RuntimeError("_py_vq_1d buggy, do not use rank 1 arrays for now") n = obs.size nc = code_book.size - dist = N.zeros((n, nc)) + dist = np.zeros((n, nc)) for i in range(nc): - dist[:, i] = N.sum(obs - code_book[i]) + dist[:, i] = np.sum(obs - code_book[i]) print dist code = argmin(dist) min_dist = dist[code] @@ -327,7 +327,7 @@ number of features (eg columns)""" % (code_book.shape[1], d)) diff = obs[newaxis, :, :] - code_book[:,newaxis,:] - dist = sqrt(N.sum(diff * diff, -1)) + dist = sqrt(np.sum(diff * diff, -1)) code = argmin(dist, 0) min_dist = minimum.reduce(dist, 0) #the next line I think is equivalent # - and should be faster @@ -520,7 +520,7 @@ else: n = data.size - p = N.random.permutation(n) + p = np.random.permutation(n) x = data[p[:k], :].copy() return x @@ -541,23 +541,23 @@ """ def init_rank1(data): - mu = N.mean(data) - cov = N.cov(data) - x = N.random.randn(k) - x *= N.sqrt(cov) + mu = np.mean(data) + cov = np.cov(data) + x = np.random.randn(k) + x *= np.sqrt(cov) x += mu return x def init_rankn(data): - mu = N.mean(data, 0) - cov = N.atleast_2d(N.cov(data, rowvar = 0)) + mu = np.mean(data, 0) + cov = np.atleast_2d(np.cov(data, rowvar = 0)) # k rows, d cols (one row = one obs) # Generate k sample of a random variable ~ Gaussian(mu, cov) - x = N.random.randn(k, mu.size) - x = N.dot(x, N.linalg.cholesky(cov).T) + mu + x = np.random.randn(k, mu.size) + x = np.dot(x, 
np.linalg.cholesky(cov).T) + mu return x - nd = N.ndim(data) + nd = np.ndim(data) if nd == 1: return init_rank1(data) else: @@ -628,7 +628,7 @@ if missing not in _valid_miss_meth.keys(): raise ValueError("Unkown missing method: %s" % str(missing)) # If data is rank 1, then we have 1 dimension problem. - nd = N.ndim(data) + nd = np.ndim(data) if nd == 1: d = 1 #raise ValueError("Input of rank 1 not supported yet") @@ -637,13 +637,13 @@ else: raise ValueError("Input of rank > 2 not supported") - if N.size(data) < 1: + if np.size(data) < 1: raise ValueError("Input has 0 items.") # If k is not a single value, then it should be compatible with data's # shape - if N.size(k) > 1 or minit == 'matrix': - if not nd == N.ndim(k): + if np.size(k) > 1 or minit == 'matrix': + if not nd == np.ndim(k): raise ValueError("k is not an int and has not same rank than data") if d == 1: nc = len(k) @@ -683,9 +683,9 @@ label = vq(data, code)[0] # Update the code by computing centroids using the new code book for j in range(nc): - mbs = N.where(label==j) + mbs = np.where(label==j) if mbs[0].size > 0: - code[j] = N.mean(data[mbs], axis=0) + code[j] = np.mean(data[mbs], axis=0) else: missing() @@ -694,13 +694,13 @@ if __name__ == '__main__': pass #import _vq - #a = N.random.randn(4, 2) - #b = N.random.randn(2, 2) + #a = np.random.randn(4, 2) + #b = np.random.randn(2, 2) #print _vq.vq(a, b) - #print _vq.vq(N.array([[1], [2], [3], [4], [5], [6.]]), - # N.array([[2.], [5.]])) - #print _vq.vq(N.array([1, 2, 3, 4, 5, 6.]), N.array([2., 5.])) - #_vq.vq(a.astype(N.float32), b.astype(N.float32)) - #_vq.vq(a, b.astype(N.float32)) + #print _vq.vq(np.array([[1], [2], [3], [4], [5], [6.]]), + # np.array([[2.], [5.]])) + #print _vq.vq(np.array([1, 2, 3, 4, 5, 6.]), np.array([2., 5.])) + #_vq.vq(a.astype(np.float32), b.astype(np.float32)) + #_vq.vq(a, b.astype(np.float32)) #_vq.vq([0], b) Modified: trunk/scipy/io/arff/arffread.py =================================================================== --- 
trunk/scipy/io/arff/arffread.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/arff/arffread.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -4,7 +4,7 @@ import itertools import sys -import numpy as N +import numpy as np from scipy.io.arff.utils import partial @@ -271,9 +271,9 @@ """given a string x, convert it to a float. If the stripped string is a ?, return a Nan (missing value).""" if x.strip() == '?': - return N.nan + return np.nan else: - return N.float(x) + return np.float(x) def safe_nominal(value, pvalue): svalue = value.strip() @@ -409,7 +409,7 @@ # This can be used once we want to support integer as integer values and # not as numeric anymore (using masked arrays ?). - acls2dtype = {'real' : N.float, 'integer' : N.float, 'numeric' : N.float} + acls2dtype = {'real' : np.float, 'integer' : np.float, 'numeric' : np.float} acls2conv = {'real' : safe_float, 'integer' : safe_float, 'numeric' : safe_float} descr = [] convertors = [] @@ -489,7 +489,7 @@ a = generator(ofile, delim = delim) # No error should happen here: it is a bug otherwise - data = N.fromiter(a, descr) + data = np.fromiter(a, descr) return data, meta #----- @@ -497,7 +497,7 @@ #----- def basic_stats(data): nbfac = data.size * 1. 
/ (data.size - 1) - return N.nanmin(data), N.nanmax(data), N.mean(data), N.std(data) * nbfac + return np.nanmin(data), np.nanmax(data), np.mean(data), np.std(data) * nbfac def print_attribute(name, tp, data): type = tp[0] Modified: trunk/scipy/io/arff/tests/test_data.py =================================================================== --- trunk/scipy/io/arff/tests/test_data.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/arff/tests/test_data.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -16,8 +16,8 @@ (1, 2, 3, 4, 'class3')] missing = os.path.join(data_path, 'missing.arff') -expect_missing_raw = N.array([[1, 5], [2, 4], [N.nan, N.nan]]) -expect_missing = N.empty(3, [('yop', N.float), ('yap', N.float)]) +expect_missing_raw = np.array([[1, 5], [2, 4], [np.nan, np.nan]]) +expect_missing = np.empty(3, [('yop', np.float), ('yap', np.float)]) expect_missing['yop'] = expect_missing_raw[:, 0] expect_missing['yap'] = expect_missing_raw[:, 1] Modified: trunk/scipy/io/matlab/mio4.py =================================================================== --- trunk/scipy/io/matlab/mio4.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/matlab/mio4.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,7 +1,7 @@ ''' Classes for read / write of matlab (TM) 4 files ''' -import numpy as N +import numpy as np from miobase import * @@ -76,7 +76,7 @@ header['mclass'] = T header['dims'] = (data['mrows'], data['ncols']) header['is_complex'] = data['imagf'] == 1 - remaining_bytes = header['dtype'].itemsize * N.product(header['dims']) + remaining_bytes = header['dtype'].itemsize * np.product(header['dims']) if header['is_complex'] and not header['mclass'] == mxSPARSE_CLASS: remaining_bytes *= 2 next_pos = self.mat_stream.tell() + remaining_bytes @@ -109,10 +109,10 @@ num_bytes = dt.itemsize for d in dims: num_bytes *= d - arr = N.ndarray(shape=dims, - dtype=dt, - buffer=self.mat_stream.read(num_bytes), - order='F') + arr = np.ndarray(shape=dims, + dtype=dt, + 
buffer=self.mat_stream.read(num_bytes), + order='F') if copy: arr = arr.copy() return arr @@ -122,9 +122,9 @@ def __init__(self, array_reader, header): super(Mat4FullGetter, self).__init__(array_reader, header) if header['is_complex']: - self.mat_dtype = N.dtype(N.complex128) + self.mat_dtype = np.dtype(np.complex128) else: - self.mat_dtype = N.dtype(N.float64) + self.mat_dtype = np.dtype(np.float64) def get_raw_array(self): if self.header['is_complex']: @@ -137,12 +137,12 @@ class Mat4CharGetter(Mat4MatrixGetter): def get_raw_array(self): - arr = self.read_array().astype(N.uint8) + arr = self.read_array().astype(np.uint8) # ascii to unicode S = arr.tostring().decode('ascii') - return N.ndarray(shape=self.header['dims'], - dtype=N.dtype('U1'), - buffer = N.array(S)).copy() + return np.ndarray(shape=self.header['dims'], + dtype=np.dtype('U1'), + buffer = np.array(S)).copy() class Mat4SparseGetter(Mat4MatrixGetter): @@ -166,14 +166,14 @@ res = self.read_array() tmp = res[:-1,:] dims = res[-1,0:2] - I = N.ascontiguousarray(tmp[:,0],dtype='intc') #fixes byte order also - J = N.ascontiguousarray(tmp[:,1],dtype='intc') + I = np.ascontiguousarray(tmp[:,0],dtype='intc') #fixes byte order also + J = np.ascontiguousarray(tmp[:,1],dtype='intc') I -= 1 # for 1-based indexing J -= 1 if res.shape[1] == 3: - V = N.ascontiguousarray(tmp[:,2],dtype='float') + V = np.ascontiguousarray(tmp[:,2],dtype='float') else: - V = N.ascontiguousarray(tmp[:,2],dtype='complex') + V = np.ascontiguousarray(tmp[:,2],dtype='complex') V.imag = tmp[:,3] if have_sparse: return scipy.sparse.coo_matrix((V,(I,J)), dims) @@ -201,15 +201,15 @@ def format_looks_right(self): # Mat4 files have a zero somewhere in first 4 bytes self.mat_stream.seek(0) - mopt_bytes = N.ndarray(shape=(4,), - dtype=N.uint8, - buffer = self.mat_stream.read(4)) + mopt_bytes = np.ndarray(shape=(4,), + dtype=np.uint8, + buffer = self.mat_stream.read(4)) self.mat_stream.seek(0) return 0 in mopt_bytes def guess_byte_order(self): 
self.mat_stream.seek(0) - mopt = self.read_dtype(N.dtype('i4')) + mopt = self.read_dtype(np.dtype('i4')) self.mat_stream.seek(0) if mopt < 0 or mopt > 5000: return ByteOrder.swapped_code @@ -227,7 +227,7 @@ ''' if dims is None: dims = self.arr.shape - header = N.empty((), mdtypes_template['header']) + header = np.empty((), mdtypes_template['header']) M = not ByteOrder.little_endian O = 0 header['mopt'] = (M * 1000 + @@ -242,7 +242,7 @@ self.write_string(self.name + '\0') def arr_to_2d(self): - self.arr = N.atleast_2d(self.arr) + self.arr = np.atleast_2d(self.arr) dims = self.arr.shape if len(dims) > 2: self.arr = self.arr.reshape(-1,dims[-1]) @@ -284,12 +284,12 @@ T=mxCHAR_CLASS) if self.arr.dtype.kind == 'U': # Recode unicode to ascii - n_chars = N.product(dims) - st_arr = N.ndarray(shape=(), - dtype=self.arr_dtype_number(n_chars), - buffer=self.arr) + n_chars = np.product(dims) + st_arr = np.ndarray(shape=(), + dtype=self.arr_dtype_number(n_chars), + buffer=self.arr) st = st_arr.item().encode('ascii') - self.arr = N.ndarray(shape=dims, dtype='S1', buffer=st) + self.arr = np.ndarray(shape=dims, dtype='S1', buffer=st) self.write_bytes(self.arr) @@ -301,7 +301,7 @@ ''' A = self.arr.tocoo() #convert to sparse COO format (ijv) imagf = A.dtype.kind == 'c' - ijv = N.zeros((A.nnz + 1, 3+imagf), dtype='f8') + ijv = np.zeros((A.nnz + 1, 3+imagf), dtype='f8') ijv[:-1,0] = A.row ijv[:-1,1] = A.col ijv[:-1,0:2] += 1 # 1 based indexing @@ -326,13 +326,13 @@ if have_sparse: if scipy.sparse.issparse(arr): return Mat4SparseWriter(stream, arr, name) - arr = N.array(arr) + arr = np.array(arr) dtt = arr.dtype.type - if dtt is N.object_: + if dtt is np.object_: raise TypeError, 'Cannot save object arrays in Mat4' - elif dtt is N.void: + elif dtt is np.void: raise TypeError, 'Cannot save void type arrays' - elif dtt in (N.unicode_, N.string_): + elif dtt in (np.unicode_, np.string_): return Mat4CharWriter(stream, arr, name) else: return Mat4NumericWriter(stream, arr, name) Modified: 
trunk/scipy/io/matlab/mio5.py =================================================================== --- trunk/scipy/io/matlab/mio5.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/matlab/mio5.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -29,7 +29,7 @@ import zlib from copy import copy as pycopy from cStringIO import StringIO -import numpy as N +import numpy as np from miobase import * @@ -189,9 +189,9 @@ def read_element(self, copy=True): raw_tag = self.mat_stream.read(8) - tag = N.ndarray(shape=(), - dtype=self.dtypes['tag_full'], - buffer=raw_tag) + tag = np.ndarray(shape=(), + dtype=self.dtypes['tag_full'], + buffer=raw_tag) mdtype = tag['mdtype'].item() byte_count = mdtype >> 16 @@ -201,9 +201,9 @@ mdtype = mdtype & 0xFFFF dt = self.dtypes[mdtype] el_count = byte_count // dt.itemsize - return N.ndarray(shape=(el_count,), - dtype=dt, - buffer=raw_tag[4:]) + return np.ndarray(shape=(el_count,), + dtype=dt, + buffer=raw_tag[4:]) byte_count = tag['byte_count'].item() if mdtype == miMATRIX: @@ -217,9 +217,9 @@ else: # numeric data dt = self.dtypes[mdtype] el_count = byte_count // dt.itemsize - el = N.ndarray(shape=(el_count,), - dtype=dt, - buffer=self.mat_stream.read(byte_count)) + el = np.ndarray(shape=(el_count,), + dtype=dt, + buffer=self.mat_stream.read(byte_count)) if copy: el = el.copy() @@ -325,7 +325,7 @@ self.mat_dtype = 'f8' def get_raw_array(self): - return N.array([[]]) + return np.array([[]]) class Mat5NumericMatrixGetter(Mat5MatrixGetter): @@ -333,7 +333,7 @@ def __init__(self, array_reader, header): super(Mat5NumericMatrixGetter, self).__init__(array_reader, header) if header['is_logical']: - self.mat_dtype = N.dtype('bool') + self.mat_dtype = np.dtype('bool') else: self.mat_dtype = self.class_dtypes[header['mclass']] @@ -345,10 +345,10 @@ res = res + (res_j * 1j) else: res = self.read_element() - return N.ndarray(shape=self.header['dims'], - dtype=res.dtype, - buffer=res, - order='F') + return np.ndarray(shape=self.header['dims'], + 
dtype=res.dtype, + buffer=res, + order='F') class Mat5SparseMatrixGetter(Mat5MatrixGetter): @@ -390,28 +390,28 @@ def get_raw_array(self): res = self.read_element() # Convert non-string types to unicode - if isinstance(res, N.ndarray): - if res.dtype.type == N.uint16: + if isinstance(res, np.ndarray): + if res.dtype.type == np.uint16: codec = miUINT16_codec if self.codecs['uint16_len'] == 1: - res = res.astype(N.uint8) - elif res.dtype.type in (N.uint8, N.int8): + res = res.astype(np.uint8) + elif res.dtype.type in (np.uint8, np.int8): codec = 'ascii' else: raise TypeError, 'Did not expect type %s' % res.dtype res = res.tostring().decode(codec) - return N.ndarray(shape=self.header['dims'], - dtype=N.dtype('U1'), - buffer=N.array(res), - order='F').copy() + return np.ndarray(shape=self.header['dims'], + dtype=np.dtype('U1'), + buffer=np.array(res), + order='F').copy() class Mat5CellMatrixGetter(Mat5MatrixGetter): def get_raw_array(self): # Account for fortran indexing of cells tupdims = tuple(self.header['dims'][::-1]) - length = N.product(tupdims) - result = N.empty(length, dtype=object) + length = np.product(tupdims) + result = np.empty(length, dtype=object) for i in range(length): result[i] = self.get_item() return result.reshape(tupdims).T @@ -551,16 +551,16 @@ def format_looks_right(self): # Mat4 files have a zero somewhere in first 4 bytes self.mat_stream.seek(0) - mopt_bytes = N.ndarray(shape=(4,), - dtype=N.uint8, - buffer = self.mat_stream.read(4)) + mopt_bytes = np.ndarray(shape=(4,), + dtype=np.uint8, + buffer = self.mat_stream.read(4)) self.mat_stream.seek(0) return 0 not in mopt_bytes class Mat5MatrixWriter(MatStreamWriter): - mat_tag = N.zeros((), mdtypes_template['tag_full']) + mat_tag = np.zeros((), mdtypes_template['tag_full']) mat_tag['mdtype'] = miMATRIX def __init__(self, file_stream, arr, name, is_global=False): @@ -572,7 +572,7 @@ def write_element(self, arr, mdtype=None): # write tag, data - tag = N.zeros((), mdtypes_template['tag_full']) + 
tag = np.zeros((), mdtypes_template['tag_full']) if mdtype is None: tag['mdtype'] = np_to_mtypes[arr.dtype.str[1:]] else: @@ -585,7 +585,7 @@ self.write_bytes(arr) # pad to next 64-bit boundary - self.write_bytes(N.zeros((padding,),'u1')) + self.write_bytes(np.zeros((padding,),'u1')) def write_header(self, mclass, is_global=False, @@ -602,7 +602,7 @@ self._mat_tag_pos = self.file_stream.tell() self.write_dtype(self.mat_tag) # write array flags (complex, global, logical, class, nzmax) - af = N.zeros((), mdtypes_template['array_flags']) + af = np.zeros((), mdtypes_template['array_flags']) af['data_type'] = miUINT32 af['byte_count'] = 8 flags = is_complex << 3 | is_global << 2 | is_logical << 1 @@ -611,13 +611,13 @@ self.write_dtype(af) # write array shape if self.arr.ndim < 2: - new_arr = N.atleast_2d(self.arr) + new_arr = np.atleast_2d(self.arr) if type(new_arr) != type(self.arr): raise ValueError("Array should be 2-dimensional.") self.arr = new_arr - self.write_element(N.array(self.arr.shape, dtype='i4')) + self.write_element(np.array(self.arr.shape, dtype='i4')) # write name - self.write_element(N.array([ord(c) for c in self.name], 'i1')) + self.write_element(np.array([ord(c) for c in self.name], 'i1')) def update_matrix_tag(self): curr_pos = self.file_stream.tell() @@ -657,12 +657,12 @@ self.write_header(mclass=mxCHAR_CLASS) if self.arr.dtype.kind == 'U': # Recode unicode using self.codec - n_chars = N.product(self.arr.shape) - st_arr = N.ndarray(shape=(), - dtype=self.arr_dtype_number(n_chars), - buffer=self.arr) + n_chars = np.product(self.arr.shape) + st_arr = np.ndarray(shape=(), + dtype=self.arr_dtype_number(n_chars), + buffer=self.arr) st = st_arr.item().encode(self.codec) - self.arr = N.ndarray(shape=(len(st)), dtype='u1', buffer=st) + self.arr = np.ndarray(shape=(len(st)), dtype='u1', buffer=st) self.write_element(self.arr,mdtype=miUTF8) self.update_matrix_tag() @@ -709,7 +709,7 @@ if have_sparse: if scipy.sparse.issparse(arr): return 
Mat5SparseWriter(self.stream, arr, name, is_global) - arr = N.array(arr) + arr = np.array(arr) if arr.dtype.hasobject: types, arr_type = self.classify_mobjects(arr) if arr_type == 'c': @@ -740,13 +740,13 @@ o - object array ''' n = objarr.size - types = N.empty((n,), dtype='S1') + types = np.empty((n,), dtype='S1') types[:] = 'i' type_set = set() flato = objarr.flat for i in range(n): obj = flato[i] - if isinstance(obj, N.ndarray): + if isinstance(obj, np.ndarray): types[i] = 'a' continue try: @@ -784,11 +784,11 @@ unicode_strings) # write header import os, time - hdr = N.zeros((), mdtypes_template['file_header']) + hdr = np.zeros((), mdtypes_template['file_header']) hdr['description']='MATLAB 5.0 MAT-file Platform: %s, Created on: %s' % ( os.name,time.asctime()) hdr['version']= 0x0100 - hdr['endian_test']=N.ndarray(shape=(),dtype='S2',buffer=N.uint16(0x4d49)) + hdr['endian_test']=np.ndarray(shape=(),dtype='S2',buffer=np.uint16(0x4d49)) file_stream.write(hdr.tostring()) def get_unicode_strings(self): @@ -812,7 +812,7 @@ stream = self.writer_getter.stream if self.do_compression: str = zlib.compress(stream.getvalue(stream.tell())) - tag = N.empty((), mdtypes_template['tag_full']) + tag = np.empty((), mdtypes_template['tag_full']) tag['mdtype'] = miCOMPRESSED tag['byte_count'] = len(str) self.file_stream.write(tag.tostring() + str) Modified: trunk/scipy/io/matlab/miobase.py =================================================================== --- trunk/scipy/io/matlab/miobase.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/matlab/miobase.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -6,7 +6,7 @@ import sys -import numpy as N +import numpy as np try: import scipy.sparse @@ -71,10 +71,10 @@ a_dtype is assumed to be correct endianness ''' num_bytes = a_dtype.itemsize - arr = N.ndarray(shape=(), - dtype=a_dtype, - buffer=self.mat_stream.read(num_bytes), - order='F') + arr = np.ndarray(shape=(), + dtype=a_dtype, + buffer=self.mat_stream.read(num_bytes), + order='F') 
return arr def read_ztstring(self, num_bytes): @@ -182,8 +182,7 @@ def convert_dtypes(self, dtype_template): dtypes = dtype_template.copy() for k in dtypes: - dtypes[k] = N.dtype(dtypes[k]).newbyteorder( - self.order_code) + dtypes[k] = np.dtype(dtypes[k]).newbyteorder(self.order_code) return dtypes def matrix_getter_factory(self): @@ -228,7 +227,7 @@ str_arr = arr.reshape( (small_product(n_dims), dims[-1])) - arr = N.empty(n_dims, dtype=object) + arr = np.empty(n_dims, dtype=object) for i in range(0, n_dims[-1]): arr[...,i] = self.chars_to_str(str_arr[i]) else: # return string @@ -239,9 +238,9 @@ if getter.mat_dtype is not None: arr = arr.astype(getter.mat_dtype) if self.squeeze_me: - arr = N.squeeze(arr) + arr = np.squeeze(arr) if not arr.size: - arr = N.array([]) + arr = np.array([]) elif not arr.shape: # 0d coverted to scalar arr = arr.item() return arr @@ -249,10 +248,10 @@ def chars_to_str(self, str_arr): ''' Convert string array to string ''' - dt = N.dtype('U' + str(small_product(str_arr.shape))) - return N.ndarray(shape=(), - dtype = dt, - buffer = str_arr.copy()).item() + dt = np.dtype('U' + str(small_product(str_arr.shape))) + return np.ndarray(shape=(), + dtype = dt, + buffer = str_arr.copy()).item() def get_variables(self, variable_names=None): ''' get variables from stream as dictionary @@ -353,7 +352,7 @@ def arr_dtype_number(self, num): ''' Return dtype for given number of items per element''' - return N.dtype(self.arr.dtype.str[:2] + str(num)) + return np.dtype(self.arr.dtype.str[:2] + str(num)) def arr_to_chars(self): ''' Convert string array to char array ''' @@ -361,9 +360,9 @@ if not dims: dims = [1] dims.append(int(self.arr.dtype.str[2:])) - self.arr = N.ndarray(shape=dims, - dtype=self.arr_dtype_number(1), - buffer=self.arr) + self.arr = np.ndarray(shape=dims, + dtype=self.arr_dtype_number(1), + buffer=self.arr) def write_bytes(self, arr): self.file_stream.write(arr.tostring(order='F')) Modified: trunk/scipy/io/npfile.py 
=================================================================== --- trunk/scipy/io/npfile.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/npfile.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -6,7 +6,7 @@ import sys -import numpy as N +import numpy as np __all__ = ['sys_endian_code', 'npfile'] @@ -40,9 +40,9 @@ Example use: >>> from StringIO import StringIO - >>> import numpy as N + >>> import numpy as np >>> from scipy.io import npfile - >>> arr = N.arange(10).reshape(5,2) + >>> arr = np.arange(10).reshape(5,2) >>> # Make file-like object (could also be file name) >>> my_file = StringIO() >>> npf = npfile(my_file) @@ -167,7 +167,7 @@ (if None from self.order) ''' endian, order = self._endian_order(endian, order) - data = N.asarray(data) + data = np.asarray(data) dt_endian = self._endian_from_dtype(data.dtype) if not endian == 'dtype': if dt_endian != endian: @@ -194,7 +194,7 @@ arr - array from file with given dtype (dt) ''' endian, order = self._endian_order(endian, order) - dt = N.dtype(dt) + dt = np.dtype(dt) try: shape = list(shape) except TypeError: @@ -203,7 +203,7 @@ if minus_ones == 0: pass elif minus_ones == 1: - known_dimensions_size = -N.product(shape,axis=0) * dt.itemsize + known_dimensions_size = -np.product(shape,axis=0) * dt.itemsize unknown_dimension_size, illegal = divmod(self.remaining_bytes(), known_dimensions_size) if illegal: @@ -212,10 +212,10 @@ else: raise ValueError( "illegal -1 count; can only specify one unknown dimension") - sz = dt.itemsize * N.product(shape) + sz = dt.itemsize * np.product(shape) dt_endian = self._endian_from_dtype(dt) buf = self.file.read(sz) - arr = N.ndarray(shape=shape, + arr = np.ndarray(shape=shape, dtype=dt, buffer=buf, order=order) @@ -223,7 +223,7 @@ return arr.byteswap() return arr.copy() -npfile = N.deprecate_with_doc(""" +npfile = np.deprecate_with_doc(""" You can achieve the same effect as using npfile, using ndarray.tofile and numpy.fromfile. 
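The deprecation notice just above points users at `ndarray.tofile` and `numpy.fromfile` as the replacement for `npfile`. A minimal sketch of that round trip (the temporary file name is illustrative; note that `fromfile` returns a flat array, so shape and byte order must be restored by hand, which is exactly what `npfile` used to track for you):

```python
import os
import tempfile

import numpy as np

arr = np.arange(10).reshape(5, 2)

# Write the raw array bytes to disk, much as npfile.write_array did.
fd, fname = tempfile.mkstemp()
os.close(fd)
arr.tofile(fname)

# Read them back: fromfile yields a 1-D array with no metadata,
# so the caller supplies the dtype and reapplies the shape.
back = np.fromfile(fname, dtype=arr.dtype).reshape(arr.shape)
os.remove(fname)

assert (back == arr).all()
```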
Modified: trunk/scipy/io/tests/test_npfile.py =================================================================== --- trunk/scipy/io/tests/test_npfile.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/tests/test_npfile.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -2,7 +2,7 @@ from StringIO import StringIO from tempfile import mkstemp from numpy.testing import * -import numpy as N +import numpy as np from scipy.io.npfile import npfile, sys_endian_code @@ -12,7 +12,7 @@ fd, fname = mkstemp() os.close(fd) npf = npfile(fname) - arr = N.reshape(N.arange(10), (5,2)) + arr = np.reshape(np.arange(10), (5,2)) self.assertRaises(IOError, npf.write_array, arr) npf.close() npf = npfile(fname, 'w') @@ -58,7 +58,7 @@ def test_read_write_array(self): npf = npfile(StringIO()) - arr = N.reshape(N.arange(10), (5,2)) + arr = np.reshape(np.arange(10), (5,2)) # Arr as read in fortran order f_arr = arr.reshape((2,5)).T # Arr written in fortran order read in C order Modified: trunk/scipy/io/tests/test_recaster.py =================================================================== --- trunk/scipy/io/tests/test_recaster.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/io/tests/test_recaster.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,4 +1,4 @@ -import numpy as N +import numpy as np from numpy.testing import * from scipy.io.recaster import sctype_attributes, Recaster, RecastError @@ -15,14 +15,14 @@ R = Recaster() assert set(R.sctype_list) == set(sctype_attributes().keys()), \ 'Default recaster should include all system types' - T = N.float32 + T = np.float32 R = Recaster([T]) assert R.sctype_list == [T], 'Scalar type list not correctly set' # Setting tolerances R = Recaster() tols = R.default_sctype_tols() assert tols == R.sctype_tols, 'Unexpected tols dictionary' - F = N.finfo(T) + F = np.finfo(T) R = Recaster(sctype_tols={T: { 'rtol': F.eps*2, 'atol': F.tiny*2, @@ -31,8 +31,8 @@ 'Rtol not correctly set' assert R.sctype_tols[T]['atol'] == F.tiny*2, \ 'Atol not correctly set' - T = 
N.complex128 - F = N.finfo(T) + T = np.complex128 + F = np.finfo(T) assert R.sctype_tols[T]['rtol'] == F.eps, \ 'Rtol defaults not correctly set' assert R.sctype_tols[T]['atol'] == F.tiny, \ @@ -47,22 +47,22 @@ # Define expected type output from fp recast of value sta = sctype_attributes() inp_outp = ( - (1, N.complex128, 'c', sta[N.complex128]['size'], 0, N.complex128), - (1, N.complex128, 'c', sta[N.complex128]['size'], 1, N.complex64), - (1, N.complex128, 'c', sta[N.complex64]['size'], 0, N.complex64), - (1, N.complex128, 'f', sta[N.float64]['size'], 0, N.float64), - (1.0+1j, N.complex128, 'f', sta[N.complex128]['size'], 0, None), - (1, N.float64, 'f', sta[N.float64]['size'], 0, N.float64), - (1, N.float64, 'f', sta[N.float64]['size'], 1, N.float32), - (1, N.float64, 'f', sta[N.float32]['size'], 0, N.float32), - (1, N.float64, 'c', sta[N.complex128]['size'], 0, N.complex128), - (1, N.float64, 'c', sta[N.complex128]['size'], 1, N.complex64), - (1, N.int32, 'f', sta[N.float64]['size'], 0, N.float64), - (1, N.int32, 'f', sta[N.float64]['size'], 1, N.float32), - (1, N.float64, 'f', 0, 0, None), + (1, np.complex128, 'c', sta[np.complex128]['size'], 0, np.complex128), + (1, np.complex128, 'c', sta[np.complex128]['size'], 1, np.complex64), + (1, np.complex128, 'c', sta[np.complex64]['size'], 0, np.complex64), + (1, np.complex128, 'f', sta[np.float64]['size'], 0, np.float64), + (1.0+1j, np.complex128, 'f', sta[np.complex128]['size'], 0, None), + (1, np.float64, 'f', sta[np.float64]['size'], 0, np.float64), + (1, np.float64, 'f', sta[np.float64]['size'], 1, np.float32), + (1, np.float64, 'f', sta[np.float32]['size'], 0, np.float32), + (1, np.float64, 'c', sta[np.complex128]['size'], 0, np.complex128), + (1, np.float64, 'c', sta[np.complex128]['size'], 1, np.complex64), + (1, np.int32, 'f', sta[np.float64]['size'], 0, np.float64), + (1, np.int32, 'f', sta[np.float64]['size'], 1, np.float32), + (1, np.float64, 'f', 0, 0, None), ) for value, inp, kind, max_size, 
continue_down, outp in inp_outp: - arr = N.array(value, dtype=inp) + arr = np.array(value, dtype=inp) arr = R.cast_to_fp(arr, kind, max_size, continue_down) if outp is None: assert arr is None, \ @@ -79,29 +79,29 @@ # Smallest int sctype with full recaster params = sctype_attributes() RF = Recaster() - test_triples = [(N.uint8, 0, 255), - (N.int8, -128, 0), - (N.uint16, 0, params[N.uint16]['max']), - (N.int16, params[N.int16]['min'], 0), - (N.uint32, 0, params[N.uint32]['max']), - (N.int32, params[N.int32]['min'], 0), - (N.uint64, 0, params[N.uint64]['max']), - (N.int64, params[N.int64]['min'], 0)] + test_triples = [(np.uint8, 0, 255), + (np.int8, -128, 0), + (np.uint16, 0, params[np.uint16]['max']), + (np.int16, params[np.int16]['min'], 0), + (np.uint32, 0, params[np.uint32]['max']), + (np.int32, params[np.int32]['min'], 0), + (np.uint64, 0, params[np.uint64]['max']), + (np.int64, params[np.int64]['min'], 0)] for T, mn, mx in test_triples: rt = RF.smallest_int_sctype(mx, mn) - assert N.dtype(rt) == N.dtype(T), \ + assert np.dtype(rt) == np.dtype(T), \ 'Expected %s, got %s type' % (T, rt) # Smallest int sctype with restricted recaster - mmax = params[N.int32]['max'] - mmin = params[N.int32]['min'] - RR = Recaster([N.int32]) + mmax = params[np.int32]['max'] + mmin = params[np.int32]['min'] + RR = Recaster([np.int32]) for kind in ('int', 'uint'): - for T in N.sctypes[kind]: + for T in np.sctypes[kind]: mx = params[T]['max'] mn = params[T]['min'] rt = RR.smallest_int_sctype(mx, mn) if mx <= mmax and mn >= mmin: - assert rt == N.int32, \ + assert rt == np.int32, \ 'Expected int32 type, got %s' % rt else: assert rt is None, \ @@ -110,62 +110,62 @@ mx = 1000 mn = 0 rt = RF.smallest_int_sctype(mx, mn) - assert rt == N.int16, 'Expected int16, got %s' % rt + assert rt == np.int16, 'Expected int16, got %s' % rt rt = RF.smallest_int_sctype(mx, mn, 'i') - assert rt == N.int16, 'Expected int16, got %s' % rt + assert rt == np.int16, 'Expected int16, got %s' % rt rt = 
RF.smallest_int_sctype(mx, mn, prefer='u') - assert rt == N.uint16, 'Expected uint16, got %s' % rt + assert rt == np.uint16, 'Expected uint16, got %s' % rt def test_recasts(self): - valid_types = [N.int32, N.complex128, N.float64] + valid_types = [np.int32, np.complex128, np.float64] # Test smallest R = Recaster(valid_types, recast_options='smallest') inp_outp = ( - (1, N.complex128, N.int32), - (1, N.complex64, N.int32), - (1.0+1j, N.complex128, N.complex128), - (1.0+1j, N.complex64, N.complex128), - (1, N.float64, N.int32), - (1, N.float32, N.int32), - (1.1, N.float64, N.float64), - (-1e12, N.int64, N.float64), + (1, np.complex128, np.int32), + (1, np.complex64, np.int32), + (1.0+1j, np.complex128, np.complex128), + (1.0+1j, np.complex64, np.complex128), + (1, np.float64, np.int32), + (1, np.float32, np.int32), + (1.1, np.float64, np.float64), + (-1e12, np.int64, np.float64), ) self.run_io_recasts(R, inp_outp) # Test only_if_none R = Recaster(valid_types, recast_options='only_if_none') inp_outp = ( - (1, N.complex128, N.complex128), - (1, N.complex64, N.int32), - (1.0+1j, N.complex128, N.complex128), - (1.0+1j, N.complex64, N.complex128), - (1, N.float64, N.float64), - (1, N.float32, N.int32), - (1.1, N.float64, N.float64), - (-1e12, N.int64, N.float64), + (1, np.complex128, np.complex128), + (1, np.complex64, np.int32), + (1.0+1j, np.complex128, np.complex128), + (1.0+1j, np.complex64, np.complex128), + (1, np.float64, np.float64), + (1, np.float32, np.int32), + (1.1, np.float64, np.float64), + (-1e12, np.int64, np.float64), ) self.run_io_recasts(R, inp_outp) # Test preserve_precision R = Recaster(valid_types, recast_options='preserve_precision') inp_outp = ( - (1, N.complex128, N.complex128), - (1, N.complex64, N.complex128), - (1.0+1j, N.complex128, N.complex128), - (1.0+1j, N.complex64, N.complex128), - (1, N.float64, N.float64), - (1, N.float32, N.float64), - (1.1, N.float64, N.float64), - (-1e12, N.int64, None), + (1, np.complex128, np.complex128), + (1, 
np.complex64, np.complex128), + (1.0+1j, np.complex128, np.complex128), + (1.0+1j, np.complex64, np.complex128), + (1, np.float64, np.float64), + (1, np.float32, np.float64), + (1.1, np.float64, np.float64), + (-1e12, np.int64, None), ) self.run_io_recasts(R, inp_outp) def run_io_recasts(self, R, inp_outp): ''' Runs sets of value, input, output tests ''' for value, inp, outp in inp_outp: - arr = N.array(value, inp) + arr = np.array(value, inp) if outp is None: self.assertRaises(RecastError, R.recast, arr) continue - arr = R.recast(N.array(value, inp)) + arr = R.recast(np.array(value, inp)) assert arr is not None, \ 'Expected %s from %s, got None' % (outp, inp) dtt = arr.dtype.type Modified: trunk/scipy/misc/tests/test_pilutil.py =================================================================== --- trunk/scipy/misc/tests/test_pilutil.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/misc/tests/test_pilutil.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,6 +1,6 @@ import os.path import glob -import numpy as N +import numpy as np from numpy.testing import * @@ -19,14 +19,14 @@ class TestPILUtil(TestCase): def test_imresize(self): - im = N.random.random((10,20)) - for T in N.sctypes['float'] + [float]: + im = np.random.random((10,20)) + for T in np.sctypes['float'] + [float]: im1 = pilutil.imresize(im,T(1.1)) assert_equal(im1.shape,(11,22)) def test_bytescale(self): - x = N.array([0,1,2],N.uint8) - y = N.array([0,1,2]) + x = np.array([0,1,2],np.uint8) + y = np.array([0,1,2]) assert_equal(pilutil.bytescale(x),x) assert_equal(pilutil.bytescale(y),[0,127,255]) Modified: trunk/scipy/signal/tests/test_wavelets.py =================================================================== --- trunk/scipy/signal/tests/test_wavelets.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/signal/tests/test_wavelets.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,4 +1,4 @@ -import numpy as N +import numpy as np from numpy.testing import * from scipy.signal import wavelets @@ -36,15 +36,15 
@@ assert_equal(x,y) # miscellaneous tests: - x = N.array([1.73752399e-09 +9.84327394e-25j, - 6.49471756e-01 +0.00000000e+00j, - 1.73752399e-09 -9.84327394e-25j]) + x = np.array([1.73752399e-09 +9.84327394e-25j, + 6.49471756e-01 +0.00000000e+00j, + 1.73752399e-09 -9.84327394e-25j]) y = wavelets.morlet(3,w=2,complete=True) assert_array_almost_equal(x,y) - x = N.array([2.00947715e-09 +9.84327394e-25j, - 7.51125544e-01 +0.00000000e+00j, - 2.00947715e-09 -9.84327394e-25j]) + x = np.array([2.00947715e-09 +9.84327394e-25j, + 7.51125544e-01 +0.00000000e+00j, + 2.00947715e-09 -9.84327394e-25j]) y = wavelets.morlet(3,w=2,complete=False) assert_array_almost_equal(x,y,decimal=2) Modified: trunk/scipy/sparse/linalg/eigen/arpack/tests/test_speigs.py =================================================================== --- trunk/scipy/sparse/linalg/eigen/arpack/tests/test_speigs.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/sparse/linalg/eigen/arpack/tests/test_speigs.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -5,20 +5,19 @@ from scipy.sparse.linalg.interface import aslinearoperator from scipy.sparse.linalg.eigen.arpack.speigs import * +import numpy as np -import numpy as N - class TestEigs(TestCase): def test(self): maxn=15 # Dimension of square matrix to be solved # Use a PDP^-1 factorisation to construct matrix with known # eiegevalues/vectors. Used random eiegenvectors initially. 
- P = N.mat(N.random.random((maxn,)*2)) - P /= map(N.linalg.norm, P.T) # Normalise the eigenvectors - D = N.mat(N.zeros((maxn,)*2)) - D[range(maxn), range(maxn)] = (N.arange(maxn, dtype=float)+1)/N.sqrt(maxn) - A = P*D*N.linalg.inv(P) - vals = N.array(D.diagonal())[0] + P = np.mat(np.random.random((maxn,)*2)) + P /= map(np.linalg.norm, P.T) # Normalise the eigenvectors + D = np.mat(np.zeros((maxn,)*2)) + D[range(maxn), range(maxn)] = (np.arange(maxn, dtype=float)+1)/np.sqrt(maxn) + A = P*D*np.linalg.inv(P) + vals = np.array(D.diagonal())[0] vecs = P uv_sortind = vals.argsort() vals = vals[uv_sortind] @@ -26,14 +25,14 @@ A=aslinearoperator(A) matvec = A.matvec - #= lambda x: N.asarray(A*x)[0] + #= lambda x: np.asarray(A*x)[0] nev=4 eigvs = ARPACK_eigs(matvec, A.shape[0], nev=nev) calc_vals = eigvs[0] # Ensure the calculated eigenvectors have the same sign as the reference values - calc_vecs = eigvs[1] / [N.sign(x[0]) for x in eigvs[1].T] + calc_vecs = eigvs[1] / [np.sign(x[0]) for x in eigvs[1].T] assert_array_almost_equal(calc_vals, vals[0:nev], decimal=7) - assert_array_almost_equal(calc_vecs, N.array(vecs)[:,0:nev], decimal=7) + assert_array_almost_equal(calc_vecs, np.array(vecs)[:,0:nev], decimal=7) # class TestGeneigs(TestCase): Modified: trunk/scipy/special/spfun_stats.py =================================================================== --- trunk/scipy/special/spfun_stats.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/special/spfun_stats.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -33,7 +33,7 @@ """Some more special functions which may be useful for multivariate statistical analysis.""" -import numpy as N +import numpy as np from scipy.special import gammaln as loggam def multigammaln(a, d): @@ -71,17 +71,17 @@ R. J. Muirhead, Aspects of multivariate statistical theory (Wiley Series in probability and mathematical statistics). 
""" - a = N.asarray(a) - if not N.isscalar(d) or (N.floor(d) != d): + a = np.asarray(a) + if not np.isscalar(d) or (np.floor(d) != d): raise ValueError("d should be a positive integer (dimension)") - if N.any(a <= 0.5 * (d - 1)): + if np.any(a <= 0.5 * (d - 1)): raise ValueError("condition a (%f) > 0.5 * (d-1) (%f) not met" \ % (a, 0.5 * (d-1))) - res = (d * (d-1) * 0.25) * N.log(N.pi) + res = (d * (d-1) * 0.25) * np.log(np.pi) if a.size == 1: axis = -1 else: axis = 0 - res += N.sum(loggam([(a - (j - 1.)/2) for j in range(1, d+1)]), axis) + res += np.sum(loggam([(a - (j - 1.)/2) for j in range(1, d+1)]), axis) return res Modified: trunk/scipy/special/tests/test_spfun_stats.py =================================================================== --- trunk/scipy/special/tests/test_spfun_stats.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/special/tests/test_spfun_stats.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,18 +1,16 @@ -import numpy as N +import numpy as np from numpy.testing import * - from scipy.special import gammaln, multigammaln - class TestMultiGammaLn(TestCase): def test1(self): - a = N.abs(N.random.randn()) + a = np.abs(np.random.randn()) assert_array_equal(multigammaln(a, 1), gammaln(a)) def test_ararg(self): d = 5 - a = N.abs(N.random.randn(3, 2)) + d + a = np.abs(np.random.randn(3, 2)) + d tr = multigammaln(a, d) assert_array_equal(tr.shape, a.shape) @@ -20,7 +18,7 @@ assert_array_equal(tr.ravel()[i], multigammaln(a.ravel()[i], d)) d = 5 - a = N.abs(N.random.randn(1, 2)) + d + a = np.abs(np.random.randn(1, 2)) + d tr = multigammaln(a, d) assert_array_equal(tr.shape, a.shape) Modified: trunk/scipy/stats/_support.py =================================================================== --- trunk/scipy/stats/_support.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/stats/_support.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -1,6 +1,6 @@ from numpy import asarray import stats -import numpy as N +import numpy as np from types import ListType, TupleType, 
StringType import copy @@ -17,20 +17,20 @@ source = asarray(source) if len(source.shape)==1: width = 1 - source = N.resize(source,[source.shape[0],width]) + source = np.resize(source,[source.shape[0],width]) else: width = source.shape[1] for addon in args: if len(addon.shape)==1: width = 1 - addon = N.resize(addon,[source.shape[0],width]) + addon = np.resize(addon,[source.shape[0],width]) else: width = source.shape[1] if len(addon) < len(source): - addon = N.resize(addon,[source.shape[0],addon.shape[1]]) + addon = np.resize(addon,[source.shape[0],addon.shape[1]]) elif len(source) < len(addon): - source = N.resize(source,[addon.shape[0],source.shape[1]]) - source = N.concatenate((source,addon),1) + source = np.resize(source,[addon.shape[0],source.shape[1]]) + source = np.concatenate((source,addon),1) return source @@ -39,37 +39,37 @@ works on arrays NOT including string items (e.g., type 'O' or 'c'). """ inarray = asarray(inarray) - uniques = N.array([inarray[0]]) + uniques = np.array([inarray[0]]) if len(uniques.shape) == 1: # IF IT'S A 1D ARRAY for item in inarray[1:]: - if N.add.reduce(N.equal(uniques,item).flat) == 0: + if np.add.reduce(np.equal(uniques,item).flat) == 0: try: - uniques = N.concatenate([uniques,N.array[N.newaxis,:]]) + uniques = np.concatenate([uniques,np.array[np.newaxis,:]]) except TypeError: - uniques = N.concatenate([uniques,N.array([item])]) + uniques = np.concatenate([uniques,np.array([item])]) else: # IT MUST BE A 2+D ARRAY if inarray.typecode() != 'O': # not an Object array for item in inarray[1:]: - if not N.sum(N.alltrue(N.equal(uniques,item),1),axis=0): + if not np.sum(np.alltrue(np.equal(uniques,item),1),axis=0): try: - uniques = N.concatenate( [uniques,item[N.newaxis,:]] ) + uniques = np.concatenate( [uniques,item[np.newaxis,:]] ) except TypeError: # the item to add isn't a list - uniques = N.concatenate([uniques,N.array([item])]) + uniques = np.concatenate([uniques,np.array([item])]) else: pass # this item is already in the uniques 
array else: # must be an Object array, alltrue/equal functions don't work for item in inarray[1:]: newflag = 1 for unq in uniques: # NOTE: cmp --> 0=same, -1=<, 1=> - test = N.sum(abs(N.array(map(cmp,item,unq))),axis=0) + test = np.sum(abs(np.array(map(cmp,item,unq))),axis=0) if test == 0: # if item identical to any 1 row in uniques newflag = 0 # then not a novel item to add break if newflag == 1: try: - uniques = N.concatenate( [uniques,item[N.newaxis,:]] ) + uniques = np.concatenate( [uniques,item[np.newaxis,:]] ) except TypeError: # the item to add isn't a list - uniques = N.concatenate([uniques,N.array([item])]) + uniques = np.concatenate([uniques,np.array([item])]) return uniques def colex(a, indices, axis=1): @@ -79,12 +79,12 @@ Returns: the columns of a specified by indices\n""" - if type(indices) not in [ListType,TupleType,N.ndarray]: + if type(indices) not in [ListType,TupleType,np.ndarray]: indices = [indices] - if len(N.shape(a)) == 1: - cols = N.resize(a,[a.shape[0],1]) + if len(np.shape(a)) == 1: + cols = np.resize(a,[a.shape[0],1]) else: - cols = N.take(a,indices,axis) + cols = np.take(a,indices,axis) return cols def printcc(lst, extra=2): @@ -137,9 +137,9 @@ function = 'lines = filter(lambda x: '+criterion+',a)' exec(function) try: - lines = N.array(lines) + lines = np.array(lines) except: - lines = N.array(lines,'O') + lines = np.array(lines,'O') return lines @@ -150,9 +150,9 @@ Returns: the rows of a where columnlist[i]=valuelist[i] for ALL i\n""" a = asarray(a) - if type(columnlist) not in [ListType,TupleType,N.ndarray]: + if type(columnlist) not in [ListType,TupleType,np.ndarray]: columnlist = [columnlist] - if type(valuelist) not in [ListType,TupleType,N.ndarray]: + if type(valuelist) not in [ListType,TupleType,np.ndarray]: valuelist = [valuelist] criterion = '' for i in range(len(columnlist)): @@ -183,14 +183,14 @@ means = cfcn(avgcol) return means else: - if type(keepcols) not in [ListType,TupleType,N.ndarray]: + if type(keepcols) not in 
[ListType,TupleType,np.ndarray]: keepcols = [keepcols] values = colex(a,keepcols) # so that "item" can be appended (below) uniques = unique(values) # get a LIST, so .sort keeps rows intact uniques.sort() newlist = [] for item in uniques: - if type(item) not in [ListType,TupleType,N.ndarray]: + if type(item) not in [ListType,TupleType,np.ndarray]: item =[item] tmprows = linexand(a,keepcols,item) for col in collapsecols: @@ -205,9 +205,9 @@ item.append(len(avgcol)) newlist.append(item) try: - new_a = N.array(newlist) + new_a = np.array(newlist) except TypeError: - new_a = N.array(newlist,'O') + new_a = np.array(newlist,'O') return new_a Modified: trunk/scipy/stats/tests/test_morestats.py =================================================================== --- trunk/scipy/stats/tests/test_morestats.py 2008-09-03 16:18:43 UTC (rev 4682) +++ trunk/scipy/stats/tests/test_morestats.py 2008-09-03 16:58:28 UTC (rev 4683) @@ -6,7 +6,7 @@ import scipy.stats as stats -import numpy as N +import numpy as np from numpy.random import RandomState g1 = [1.006, 0.996, 0.998, 1.000, 0.992, 0.993, 1.002, 0.999, 0.994, 1.000] @@ -63,10 +63,10 @@ assert_almost_equal(pval,0.13499256881897437,11) def test_approx(self): - ramsay = N.array((111, 107, 100, 99, 102, 106, 109, 108, 104, 99, - 101, 96, 97, 102, 107, 113, 116, 113, 110, 98)) - parekh = N.array((107, 108, 106, 98, 105, 103, 110, 105, 104, - 100, 96, 108, 103, 104, 114, 114, 113, 108, 106, 99)) + ramsay = np.array((111, 107, 100, 99, 102, 106, 109, 108, 104, 99, + 101, 96, 97, 102, 107, 113, 116, 113, 110, 98)) + parekh = np.array((107, 108, 106, 98, 105, 103, 110, 105, 104, + 100, 96, 108, 103, 104, 114, 114, 113, 108, 106, 99)) W, pval = stats.ansari(ramsay, parekh) assert_almost_equal(W,185.5,11) assert_almost_equal(pval,0.18145819972867083,11) From scipy-svn at scipy.org Wed Sep 3 15:31:15 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Wed, 3 Sep 2008 14:31:15 -0500 (CDT) Subject: [Scipy-svn] r4684 - 
trunk/scipy/cluster Message-ID: <20080903193115.07E9039C10E@scipy.org> Author: rkern Date: 2008-09-03 14:31:15 -0500 (Wed, 03 Sep 2008) New Revision: 4684 Modified: trunk/scipy/cluster/distance.py Log: BUG: Remove debugging print statements. Modified: trunk/scipy/cluster/distance.py =================================================================== --- trunk/scipy/cluster/distance.py 2008-09-03 16:58:28 UTC (rev 4683) +++ trunk/scipy/cluster/distance.py 2008-09-03 19:31:15 UTC (rev 4684) @@ -596,7 +596,6 @@ u = np.asarray(u) v = np.asarray(v) (nff, nft, ntf, ntt) = _nbool_correspond_all(u, v) - print nff, nft, ntf, ntt return float(2.0 * ntf * nft) / float(ntt * nff + ntf * nft) def matching(u, v): From scipy-svn at scipy.org Thu Sep 4 12:47:21 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 4 Sep 2008 11:47:21 -0500 (CDT) Subject: [Scipy-svn] r4685 - branches/interpolate Message-ID: <20080904164721.729FD39C02A@scipy.org> Author: oliphant Date: 2008-09-04 11:47:08 -0500 (Thu, 04 Sep 2008) New Revision: 4685 Modified: branches/interpolate/info.py Log: Test Modified: branches/interpolate/info.py =================================================================== --- branches/interpolate/info.py 2008-09-03 19:31:15 UTC (rev 4684) +++ branches/interpolate/info.py 2008-09-04 16:47:08 UTC (rev 4685) @@ -1,6 +1,7 @@ # FIXME : better docstring. This needs updating as features change, # and it also discusses technical points as well as user-interface. 
+ __doc__ = \ """ This module provides several functions and classes for interpolation @@ -60,4 +61,4 @@ """ -postpone_import = 1 \ No newline at end of file +postpone_import = 1 From scipy-svn at scipy.org Thu Sep 4 12:47:55 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 4 Sep 2008 11:47:55 -0500 (CDT) Subject: [Scipy-svn] r4686 - branches Message-ID: <20080904164755.00F2939C02A@scipy.org> Author: oliphant Date: 2008-09-04 11:47:55 -0500 (Thu, 04 Sep 2008) New Revision: 4686 Removed: branches/Interpolate1D/ Log: Remove duplicate branch. From scipy-svn at scipy.org Thu Sep 4 15:41:11 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 4 Sep 2008 14:41:11 -0500 (CDT) Subject: [Scipy-svn] r4687 - in trunk/scipy/interpolate: . src Message-ID: <20080904194111.62D4F39C02A@scipy.org> Author: oliphant Date: 2008-09-04 14:41:11 -0500 (Thu, 04 Sep 2008) New Revision: 4687 Added: trunk/scipy/interpolate/src/ trunk/scipy/interpolate/src/__fitpack.h trunk/scipy/interpolate/src/_fitpackmodule.c trunk/scipy/interpolate/src/fitpack.pyf trunk/scipy/interpolate/src/multipack.h Removed: trunk/scipy/interpolate/__fitpack.h trunk/scipy/interpolate/_fitpackmodule.c trunk/scipy/interpolate/fitpack.pyf trunk/scipy/interpolate/multipack.h Modified: trunk/scipy/interpolate/setup.py Log: Move interpolate sources to sub-directory. Deleted: trunk/scipy/interpolate/__fitpack.h =================================================================== --- trunk/scipy/interpolate/__fitpack.h 2008-09-04 16:47:55 UTC (rev 4686) +++ trunk/scipy/interpolate/__fitpack.h 2008-09-04 19:41:11 UTC (rev 4687) @@ -1,1128 +0,0 @@ -/* - Python-C wrapper of FITPACK (by P. Dierckx) (in netlib known as dierckx) - Author: Pearu Peterson - June 1.-4., 1999 - June 7. 
1999 - $Revision$ - $Date$ - */ - -/* module_methods: - {"_curfit", fitpack_curfit, METH_VARARGS, doc_curfit}, - {"_spl_", fitpack_spl_, METH_VARARGS, doc_spl_}, - {"_splint", fitpack_splint, METH_VARARGS, doc_splint}, - {"_sproot", fitpack_sproot, METH_VARARGS, doc_sproot}, - {"_spalde", fitpack_spalde, METH_VARARGS, doc_spalde}, - {"_parcur", fitpack_parcur, METH_VARARGS, doc_parcur}, - {"_surfit", fitpack_surfit, METH_VARARGS, doc_surfit}, - {"_bispev", fitpack_bispev, METH_VARARGS, doc_bispev}, - {"_insert", fitpack_insert, METH_VARARGS, doc_insert}, - */ -/* link libraries: (one item per line) - ddierckx - */ -/* python files: (to be imported to Multipack.py) - fitpack.py - */ -#if defined(NO_APPEND_FORTRAN) -#define CURFIT curfit -#define PERCUR percur -#define SPALDE spalde -#define SPLDER splder -#define SPLEV splev -#define SPLINT splint -#define SPROOT sproot -#define PARCUR parcur -#define CLOCUR clocur -#define SURFIT surfit -#define BISPEV bispev -#define PARDER parder -#define INSERT insert -#else -#define CURFIT curfit_ -#define PERCUR percur_ -#define SPALDE spalde_ -#define SPLDER splder_ -#define SPLEV splev_ -#define SPLINT splint_ -#define SPROOT sproot_ -#define PARCUR parcur_ -#define CLOCUR clocur_ -#define SURFIT surfit_ -#define BISPEV bispev_ -#define PARDER parder_ -#define INSERT insert_ -#endif -void CURFIT(int*,int*,double*,double*,double*,double*,double*,int*,double*,int*,int*,double*,double*,double*,double*,int*,int*,int*); -void PERCUR(int*,int*,double*,double*,double*,int*,double*,int*,int*,double*,double*,double*,double*,int*,int*,int*); -void SPALDE(double*,int*,double*,int*,double*,double*,int*); -void SPLDER(double*,int*,double*,int*,int*,double*,double*,int*,double*,int*); -void SPLEV(double*,int*,double*,int*,double*,double*,int*,int*); -double SPLINT(double*,int*,double*,int*,double*,double*,double*); -void SPROOT(double*,int*,double*,double*,int*,int*,int*); -void 
PARCUR(int*,int*,int*,int*,double*,int*,double*,double*,double*,double*,int*,double*,int*,int*,double*,int*,double*,double*,double*,int*,int*,int*); -void CLOCUR(int*,int*,int*,int*,double*,int*,double*,double*,int*,double*,int*,int*,double*,int*,double*,double*,double*,int*,int*,int*); -void SURFIT(int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int*,double*,int*,int*,int*,double*,int*,double*,int*,double*,double*,double*,double*,int*,double*,int*,int*,int*,int*); -void BISPEV(double*,int*,double*,int*,double*,int*,int*,double*,int*,double*,int*,double*,double*,int*,int*,int*,int*); -void PARDER(double*,int*,double*,int*,double*,int*,int*,int*,int*,double*,int*,double*,int*,double*,double*,int*,int*,int*,int*); -void INSERT(int*,double*,int*,double*,int*,double*,double*,int*,double*,int*,int*); - -/* Note that curev, cualde need no interface. */ - -static char doc_bispev[] = " [z,ier] = _bispev(tx,ty,c,kx,ky,x,y,nux,nuy)"; -static PyObject *fitpack_bispev(PyObject *dummy, PyObject *args) { - int nx,ny,kx,ky,mx,my,lwrk,*iwrk,kwrk,ier,lwa,mxy,nux,nuy; - double *tx,*ty,*c,*x,*y,*z,*wrk,*wa = NULL; - PyArrayObject *ap_x = NULL,*ap_y = NULL,*ap_z = NULL,*ap_tx = NULL,\ - *ap_ty = NULL,*ap_c = NULL; - PyObject *x_py = NULL,*y_py = NULL,*c_py = NULL,*tx_py = NULL,*ty_py = NULL; - if (!PyArg_ParseTuple(args, "OOOiiOOii",&tx_py,&ty_py,&c_py,&kx,&ky, - &x_py,&y_py,&nux,&nuy)) - return NULL; - ap_x = (PyArrayObject *)PyArray_ContiguousFromObject(x_py, PyArray_DOUBLE, 0, 1); - ap_y = (PyArrayObject *)PyArray_ContiguousFromObject(y_py, PyArray_DOUBLE, 0, 1); - ap_c = (PyArrayObject *)PyArray_ContiguousFromObject(c_py, PyArray_DOUBLE, 0, 1); - ap_tx = (PyArrayObject *)PyArray_ContiguousFromObject(tx_py, PyArray_DOUBLE, 0, 1); - ap_ty = (PyArrayObject *)PyArray_ContiguousFromObject(ty_py, PyArray_DOUBLE, 0, 1); - if (ap_x == NULL || ap_y == NULL || ap_c == NULL || ap_tx == NULL \ - || ap_ty == NULL) goto fail; - x = (double *) ap_x->data; - y = 
(double *) ap_y->data; - c = (double *) ap_c->data; - tx = (double *) ap_tx->data; - ty = (double *) ap_ty->data; - nx = ap_tx->dimensions[0]; - ny = ap_ty->dimensions[0]; - mx = ap_x->dimensions[0]; - my = ap_y->dimensions[0]; - mxy = mx*my; - ap_z = (PyArrayObject *)PyArray_FromDims(1,&mxy,PyArray_DOUBLE); - z = (double *) ap_z->data; - if (nux || nuy) - lwrk = mx*(kx+1-nux)+my*(ky+1-nuy)+(nx-kx-1)*(ny-ky-1); - else - lwrk = mx*(kx+1)+my*(ky+1); - kwrk = mx+my; - lwa = lwrk+kwrk; - if ((wa = (double *)malloc(lwa*sizeof(double)))==NULL) { - PyErr_NoMemory(); - goto fail; - } - wrk = wa; - iwrk = (int *)(wrk+lwrk); - if (nux || nuy) - PARDER(tx,&nx,ty,&ny,c,&kx,&ky,&nux,&nuy,x,&mx,y,&my,z,wrk,&lwrk,iwrk,&kwrk,&ier); - else - BISPEV(tx,&nx,ty,&ny,c,&kx,&ky,x,&mx,y,&my,z,wrk,&lwrk,iwrk,&kwrk,&ier); - - if (wa) free(wa); - Py_DECREF(ap_x); - Py_DECREF(ap_y); - Py_DECREF(ap_c); - Py_DECREF(ap_tx); - Py_DECREF(ap_ty); - return Py_BuildValue("Ni",PyArray_Return(ap_z),ier); - fail: - if (wa) free(wa); - Py_XDECREF(ap_x); - Py_XDECREF(ap_y); - Py_XDECREF(ap_z); - Py_XDECREF(ap_c); - Py_XDECREF(ap_tx); - Py_XDECREF(ap_ty); - return NULL; -} - -static char doc_surfit[] = " [tx,ty,c,o] = _surfit(x,y,z,w,xb,xe,yb,ye,kx,ky,iopt,s,eps,tx,ty,nxest,nyest,wrk,lwrk1,lwrk2)"; -static PyObject *fitpack_surfit(PyObject *dummy, PyObject *args) { - int iopt,m,kx,ky,nxest,nyest,nx,ny,lwrk1,lwrk2,*iwrk,kwrk,ier,lwa,nxo,nyo,\ - i,lc,lcest,nmax; - double *x,*y,*z,*w,xb,xe,yb,ye,s,*tx,*ty,*c,fp,*wrk1,*wrk2,*wa = NULL,eps; - PyArrayObject *ap_x = NULL,*ap_y = NULL,*ap_z,*ap_w = NULL,\ - *ap_tx = NULL,*ap_ty = NULL,*ap_c = NULL; - PyArrayObject *ap_wrk = NULL; - PyObject *x_py = NULL,*y_py = NULL,*z_py = NULL,*w_py = NULL,\ - *tx_py = NULL,*ty_py = NULL; - PyObject *wrk_py=NULL; - nx=ny=ier=nxo=nyo=0; - if (!PyArg_ParseTuple(args, "OOOOddddiiiddOOiiOii",\ - &x_py,&y_py,&z_py,&w_py,&xb,&xe,\ - &yb,&ye,&kx,&ky,&iopt,&s,&eps,&tx_py,&ty_py,&nxest,&nyest,\ - &wrk_py,&lwrk1,&lwrk2)) return NULL; - 
ap_x = (PyArrayObject *)PyArray_ContiguousFromObject(x_py, PyArray_DOUBLE, 0, 1); - ap_y = (PyArrayObject *)PyArray_ContiguousFromObject(y_py, PyArray_DOUBLE, 0, 1); - ap_z = (PyArrayObject *)PyArray_ContiguousFromObject(z_py, PyArray_DOUBLE, 0, 1); - ap_w = (PyArrayObject *)PyArray_ContiguousFromObject(w_py, PyArray_DOUBLE, 0, 1); - ap_wrk=(PyArrayObject *)PyArray_ContiguousFromObject(wrk_py, PyArray_DOUBLE, 0, 1); - /*ap_iwrk=(PyArrayObject *)PyArray_ContiguousFromObject(iwrk_py, PyArray_INT, 0, 1);*/ - if (ap_x == NULL || ap_y == NULL || ap_z == NULL || ap_w == NULL \ - || ap_wrk == NULL) goto fail; - x = (double *) ap_x->data; - y = (double *) ap_y->data; - z = (double *) ap_z->data; - w = (double *) ap_w->data; - m = ap_x->dimensions[0]; - nmax=nxest; - if (nmaxdimensions[0]; - ny = nyo = ap_ty->dimensions[0]; - memcpy(tx,ap_tx->data,nx*sizeof(double)); - memcpy(ty,ap_ty->data,ny*sizeof(double)); - } - if (iopt==1) { - lc = (nx-kx-1)*(ny-ky-1); - memcpy(wrk1,ap_wrk->data,lc*sizeof(double)); - /*memcpy(iwrk,ap_iwrk->data,n*sizeof(int));*/ - } - SURFIT(&iopt,&m,x,y,z,w,&xb,&xe,&yb,&ye,&kx,&ky,&s,&nxest,&nyest,&nmax,&eps,&nx,tx,&ny,ty,c,&fp,wrk1,&lwrk1,wrk2,&lwrk2,iwrk,&kwrk,&ier); - i=0; - while ((ier>10) && (i++<5)) { - lwrk2=ier; - if ((wrk2 = (double *)malloc(lwrk2*sizeof(double)))==NULL) { - PyErr_NoMemory(); - goto fail; - } - SURFIT(&iopt,&m,x,y,z,w,&xb,&xe,&yb,&ye,&kx,&ky,&s,&nxest,&nyest,&nmax,&eps,&nx,tx,&ny,ty,c,&fp,wrk1,&lwrk1,wrk2,&lwrk2,iwrk,&kwrk,&ier); - if (wrk2) free(wrk2); - } - if (ier==10) { - PyErr_SetString(PyExc_ValueError, "Invalid inputs."); - goto fail; - } - lc = (nx-kx-1)*(ny-ky-1); - Py_XDECREF(ap_tx); - Py_XDECREF(ap_ty); - ap_tx = (PyArrayObject *)PyArray_FromDims(1,&nx,PyArray_DOUBLE); - ap_ty = (PyArrayObject *)PyArray_FromDims(1,&ny,PyArray_DOUBLE); - ap_c = (PyArrayObject *)PyArray_FromDims(1,&lc,PyArray_DOUBLE); - if (ap_tx == NULL || ap_ty == NULL || ap_c == NULL) goto fail; - if ((iopt==0)||(nx>nxo)||(ny>nyo)) { - 
Py_XDECREF(ap_wrk); - ap_wrk = (PyArrayObject *)PyArray_FromDims(1,&lc,PyArray_DOUBLE); - if (ap_wrk == NULL) goto fail; - /*ap_iwrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_INT);*/ - } - if(ap_wrk->dimensions[0]data,tx,nx*sizeof(double)); - memcpy(ap_ty->data,ty,ny*sizeof(double)); - memcpy(ap_c->data,c,lc*sizeof(double)); - memcpy(ap_wrk->data,wrk1,lc*sizeof(double)); - /*memcpy(ap_iwrk->data,iwrk,n*sizeof(int));*/ - if (wa) free(wa); - Py_DECREF(ap_x); - Py_DECREF(ap_y); - Py_DECREF(ap_z); - Py_DECREF(ap_w); - return Py_BuildValue("NNN{s:N,s:i,s:d}",PyArray_Return(ap_tx),\ - PyArray_Return(ap_ty),PyArray_Return(ap_c),\ - "wrk",PyArray_Return(ap_wrk),\ - "ier",ier,"fp",fp); - fail: - if (wa) free(wa); - Py_XDECREF(ap_x); - Py_XDECREF(ap_y); - Py_XDECREF(ap_z); - Py_XDECREF(ap_w); - Py_XDECREF(ap_tx); - Py_XDECREF(ap_ty); - Py_XDECREF(ap_wrk); - /*Py_XDECREF(ap_iwrk);*/ - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ValueError, "An error occurred."); - } - return NULL; -} - - -static char doc_parcur[] = " [t,c,o] = _parcur(x,w,u,ub,ue,k,iopt,ipar,s,t,nest,wrk,iwrk,per)"; -static PyObject *fitpack_parcur(PyObject *dummy, PyObject *args) { - int k,iopt,ipar,nest,*iwrk,idim,m,mx,n=0,no=0,nc,ier,lc,lwa,lwrk,i,per; - double *x,*w,*u,*c,*t,*wrk,*wa=NULL,ub,ue,fp,s; - PyObject *x_py = NULL,*u_py = NULL,*w_py = NULL,*t_py = NULL; - PyObject *wrk_py=NULL,*iwrk_py=NULL; - PyArrayObject *ap_x = NULL,*ap_u = NULL,*ap_w = NULL,*ap_t = NULL,*ap_c = NULL; - PyArrayObject *ap_wrk = NULL,*ap_iwrk = NULL; - if (!PyArg_ParseTuple(args, "OOOddiiidOiOOi",&x_py,&w_py,&u_py,&ub,&ue,\ - &k,&iopt,&ipar,&s,&t_py,&nest,&wrk_py,&iwrk_py,&per)) return NULL; - ap_x = (PyArrayObject *)PyArray_ContiguousFromObject(x_py, PyArray_DOUBLE, 0, 1); - ap_u = (PyArrayObject *)PyArray_ContiguousFromObject(u_py, PyArray_DOUBLE, 0, 1); - ap_w = (PyArrayObject *)PyArray_ContiguousFromObject(w_py, PyArray_DOUBLE, 0, 1); - ap_wrk=(PyArrayObject *)PyArray_ContiguousFromObject(wrk_py, 
PyArray_DOUBLE, 0, 1); - ap_iwrk=(PyArrayObject *)PyArray_ContiguousFromObject(iwrk_py, PyArray_INT, 0, 1); - if (ap_x == NULL || ap_u == NULL || ap_w == NULL || ap_wrk == NULL || ap_iwrk == NULL) goto fail; - x = (double *) ap_x->data; - u = (double *) ap_u->data; - w = (double *) ap_w->data; - m = ap_w->dimensions[0]; - mx = ap_x->dimensions[0]; - idim = mx/m; - if (per) - lwrk=m*(k+1)+nest*(7+idim+5*k); - else - lwrk=m*(k+1)+nest*(6+idim+3*k); - nc=idim*nest; - lwa = nc+2*nest+lwrk; - if ((wa = (double *)malloc(lwa*sizeof(double)))==NULL) { - PyErr_NoMemory(); - goto fail; - } - t = wa; - c = t + nest; - wrk = c + nc; - iwrk = (int *)(wrk + lwrk); - if (iopt) { - ap_t=(PyArrayObject *)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - if (ap_t == NULL) goto fail; - n = no = ap_t->dimensions[0]; - memcpy(t,ap_t->data,n*sizeof(double)); - } - if (iopt==1) { - memcpy(wrk,ap_wrk->data,n*sizeof(double)); - memcpy(iwrk,ap_iwrk->data,n*sizeof(int)); - } - if (per) - CLOCUR(&iopt,&ipar,&idim,&m,u,&mx,x,w,&k,&s,&nest,&n,t,&nc,\ - c,&fp,wrk,&lwrk,iwrk,&ier); - else - PARCUR(&iopt,&ipar,&idim,&m,u,&mx,x,w,&ub,&ue,&k,&s,&nest,&n,t,&nc,\ - c,&fp,wrk,&lwrk,iwrk,&ier); - if (ier==10) goto fail; - if (ier>0 && n==0) n=1; - lc = (n-k-1)*idim; - ap_t = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); - ap_c = (PyArrayObject *)PyArray_FromDims(1,&lc,PyArray_DOUBLE); - if (ap_t == NULL || ap_c == NULL) goto fail; - if ((iopt==0)||(n>no)) { - ap_wrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); - ap_iwrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_INT); - if (ap_wrk == NULL || ap_iwrk == NULL) goto fail; - } - memcpy(ap_t->data,t,n*sizeof(double)); - for (i=0;i<idim;i++) memcpy((double *)ap_c->data+i*(n-k-1),c+i*n,(n-k-1)*sizeof(double)); - memcpy(ap_wrk->data,wrk,n*sizeof(double)); - memcpy(ap_iwrk->data,iwrk,n*sizeof(int)); - if (wa) free(wa); - Py_DECREF(ap_x); - Py_DECREF(ap_w); - return
Py_BuildValue("NN{s:N,s:d,s:d,s:N,s:N,s:i,s:d}",PyArray_Return(ap_t),PyArray_Return(ap_c),"u",PyArray_Return(ap_u),"ub",ub,"ue",ue,"wrk",PyArray_Return(ap_wrk),"iwrk",PyArray_Return(ap_iwrk),"ier",ier,"fp",fp); - fail: - if (wa) free(wa); - Py_XDECREF(ap_x); - Py_XDECREF(ap_u); - Py_XDECREF(ap_w); - Py_XDECREF(ap_t); - Py_XDECREF(ap_wrk); - Py_XDECREF(ap_iwrk); - return NULL; -} - -static char doc_curfit[] = " [t,c,o] = _curfit(x,y,w,xb,xe,k,iopt,s,t,nest,wrk,iwrk,per)"; -static PyObject *fitpack_curfit(PyObject *dummy, PyObject *args) { - int iopt,m,k,nest,n,lwrk,*iwrk,ier,lwa,lc,no=0,per; - double *x,*y,*w,xb,xe,s,*t,*c,fp,*wrk,*wa = NULL; - PyArrayObject *ap_x = NULL,*ap_y = NULL,*ap_w = NULL,*ap_t = NULL,*ap_c = NULL; - PyArrayObject *ap_wrk = NULL,*ap_iwrk = NULL; - PyObject *x_py = NULL,*y_py = NULL,*w_py = NULL,*t_py = NULL; - PyObject *wrk_py=NULL,*iwrk_py=NULL; - if (!PyArg_ParseTuple(args, "OOOddiidOiOOi",&x_py,&y_py,&w_py,&xb,&xe,\ - &k,&iopt,&s,&t_py,&nest,&wrk_py,&iwrk_py,&per)) return NULL; - ap_x = (PyArrayObject *)PyArray_ContiguousFromObject(x_py, PyArray_DOUBLE, 0, 1); - ap_y = (PyArrayObject *)PyArray_ContiguousFromObject(y_py, PyArray_DOUBLE, 0, 1); - ap_w = (PyArrayObject *)PyArray_ContiguousFromObject(w_py, PyArray_DOUBLE, 0, 1); - ap_wrk=(PyArrayObject *)PyArray_ContiguousFromObject(wrk_py, PyArray_DOUBLE, 0, 1); - ap_iwrk=(PyArrayObject *)PyArray_ContiguousFromObject(iwrk_py, PyArray_INT, 0, 1); - if (ap_x == NULL || ap_y == NULL || ap_w == NULL || ap_wrk == NULL || ap_iwrk == NULL) goto fail; - x = (double *) ap_x->data; - y = (double *) ap_y->data; - w = (double *) ap_w->data; - m = ap_x->dimensions[0]; - if (per) lwrk = m*(k+1) + nest*(8+5*k); - else lwrk = m*(k+1) + nest*(7+3*k); - lwa = 3*nest+lwrk; - if ((wa = (double *)malloc(lwa*sizeof(double)))==NULL) { - PyErr_NoMemory(); - goto fail; - } - t = wa; - c = t + nest; - wrk = c + nest; - iwrk = (int *)(wrk + lwrk); - if (iopt) { - ap_t=(PyArrayObject 
*)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - if (ap_t == NULL) goto fail; - n = no = ap_t->dimensions[0]; - memcpy(t,ap_t->data,n*sizeof(double)); - } - if (iopt==1) { - memcpy(wrk,ap_wrk->data,n*sizeof(double)); - memcpy(iwrk,ap_iwrk->data,n*sizeof(int)); - } - if (per) - PERCUR(&iopt,&m,x,y,w,&k,&s,&nest,&n,t,c,&fp,wrk,&lwrk,iwrk,&ier); - else - CURFIT(&iopt,&m,x,y,w,&xb,&xe,&k,&s,&nest,&n,t,c,&fp,wrk,&lwrk,iwrk,&ier); - if (ier==10) { - PyErr_SetString(PyExc_ValueError, "Invalid inputs."); - goto fail; - } - lc = n-k-1; - if (!iopt) { - ap_t = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); - if (ap_t == NULL) goto fail; - } - ap_c = (PyArrayObject *)PyArray_FromDims(1,&lc,PyArray_DOUBLE); - if (ap_c == NULL) goto fail; - if ((iopt==0)||(n>no)) { - Py_XDECREF(ap_wrk); - Py_XDECREF(ap_iwrk); - ap_wrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); - ap_iwrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_INT); - if (ap_wrk == NULL || ap_iwrk == NULL) goto fail; - } - memcpy(ap_t->data,t,n*sizeof(double)); - memcpy(ap_c->data,c,lc*sizeof(double)); - memcpy(ap_wrk->data,wrk,n*sizeof(double)); - memcpy(ap_iwrk->data,iwrk,n*sizeof(int)); - if (wa) free(wa); - Py_DECREF(ap_x); - Py_DECREF(ap_y); - Py_DECREF(ap_w); - return Py_BuildValue("NN{s:N,s:N,s:i,s:d}",PyArray_Return(ap_t),PyArray_Return(ap_c),"wrk",PyArray_Return(ap_wrk),"iwrk",PyArray_Return(ap_iwrk),"ier",ier,"fp",fp); - fail: - if (wa) free(wa); - Py_XDECREF(ap_x); - Py_XDECREF(ap_y); - Py_XDECREF(ap_w); - Py_XDECREF(ap_t); - Py_XDECREF(ap_wrk); - Py_XDECREF(ap_iwrk); - return NULL; -} - -static char doc_spl_[] = " [y,ier] = _spl_(x,nu,t,c,k )"; -static PyObject *fitpack_spl_(PyObject *dummy, PyObject *args) { - int n,nu,m,ier,k; - double *x,*y,*t,*c,*wrk = NULL; - PyArrayObject *ap_x = NULL,*ap_y = NULL,*ap_t = NULL,*ap_c = NULL; - PyObject *x_py = NULL,*t_py = NULL,*c_py = NULL; - if (!PyArg_ParseTuple(args, "OiOOi",&x_py,&nu,&t_py,&c_py,&k)) return NULL; - ap_x = 
(PyArrayObject *)PyArray_ContiguousFromObject(x_py, PyArray_DOUBLE, 0, 1); - ap_t = (PyArrayObject *)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - ap_c = (PyArrayObject *)PyArray_ContiguousFromObject(c_py, PyArray_DOUBLE, 0, 1); - if ((ap_x == NULL || ap_t == NULL || ap_c == NULL)) goto fail; - x = (double *) ap_x->data; - m = ap_x->dimensions[0]; - t = (double *) ap_t->data; - c = (double *) ap_c->data; - n = ap_t->dimensions[0]; - ap_y = (PyArrayObject *)PyArray_FromDims(1,&m,PyArray_DOUBLE); - if (ap_y == NULL) goto fail; - y = (double *) ap_y->data; - if ((wrk = (double *)malloc(n*sizeof(double)))==NULL) { - PyErr_NoMemory(); - goto fail; - } - if (nu) - SPLDER(t,&n,c,&k,&nu,x,y,&m,wrk,&ier); - else - SPLEV(t,&n,c,&k,x,y,&m,&ier); - if (wrk) free(wrk); - Py_DECREF(ap_x); - Py_DECREF(ap_c); - Py_DECREF(ap_t); - return Py_BuildValue("Ni",PyArray_Return(ap_y),ier); - fail: - if (wrk) free(wrk); - Py_XDECREF(ap_x); - Py_XDECREF(ap_c); - Py_XDECREF(ap_t); - return NULL; -} - -static char doc_splint[] = " [aint,wrk] = _splint(t,c,k,a,b)"; -static PyObject *fitpack_splint(PyObject *dummy, PyObject *args) { - int n,k; - double *t,*c,*wrk = NULL,a,b,aint; - PyArrayObject *ap_t = NULL,*ap_c = NULL; - PyArrayObject *ap_wrk = NULL; - PyObject *t_py = NULL,*c_py = NULL; - if (!PyArg_ParseTuple(args, "OOidd",&t_py,&c_py,&k,&a,&b)) return NULL; - ap_t = (PyArrayObject *)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - ap_c = (PyArrayObject *)PyArray_ContiguousFromObject(c_py, PyArray_DOUBLE, 0, 1); - if ((ap_t == NULL || ap_c == NULL)) goto fail; - t = (double *) ap_t->data; - c = (double *) ap_c->data; - n = ap_t->dimensions[0]; - ap_wrk = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); - if (ap_wrk == NULL) goto fail; - wrk = (double *) ap_wrk->data; - aint = SPLINT(t,&n,c,&k,&a,&b,wrk); - Py_DECREF(ap_c); - Py_DECREF(ap_t); - return Py_BuildValue("dN",aint,PyArray_Return(ap_wrk)); - fail: - Py_XDECREF(ap_c); - Py_XDECREF(ap_t); - return 
NULL; -} - -static char doc_sproot[] = " [z,ier] = _sproot(t,c,k,mest)"; -static PyObject *fitpack_sproot(PyObject *dummy, PyObject *args) { - int n,k,mest,ier,m; - double *t,*c,*z=NULL; - PyArrayObject *ap_t = NULL,*ap_c = NULL; - PyArrayObject *ap_z = NULL; - PyObject *t_py = NULL,*c_py = NULL; - if (!PyArg_ParseTuple(args, "OOii",&t_py,&c_py,&k,&mest)) return NULL; - ap_t = (PyArrayObject *)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - ap_c = (PyArrayObject *)PyArray_ContiguousFromObject(c_py, PyArray_DOUBLE, 0, 1); - if ((ap_t == NULL || ap_c == NULL)) goto fail; - t = (double *) ap_t->data; - c = (double *) ap_c->data; - n = ap_t->dimensions[0]; - if ((z = (double *)malloc(mest*sizeof(double)))==NULL) { - PyErr_NoMemory(); - goto fail; - } - SPROOT(t,&n,c,z,&mest,&m,&ier); - if (ier==10) m=0; - ap_z = (PyArrayObject *)PyArray_FromDims(1,&m,PyArray_DOUBLE); - if (ap_z == NULL) goto fail; - memcpy(ap_z->data,z,m*sizeof(double)); - if (z) free(z); - Py_DECREF(ap_c); - Py_DECREF(ap_t); - return Py_BuildValue("Ni",PyArray_Return(ap_z),ier); - fail: - if (z) free(z); - Py_XDECREF(ap_c); - Py_XDECREF(ap_t); - return NULL; -} - -static char doc_spalde[] = " [d,ier] = _spalde(t,c,k,x)"; -static PyObject *fitpack_spalde(PyObject *dummy, PyObject *args) { - int n,k,k1,ier; - double *t,*c,*d=NULL,x; - PyArrayObject *ap_t = NULL,*ap_c = NULL,*ap_d = NULL; - PyObject *t_py = NULL,*c_py = NULL; - if (!PyArg_ParseTuple(args, "OOid",&t_py,&c_py,&k,&x)) return NULL; - ap_t = (PyArrayObject *)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - ap_c = (PyArrayObject *)PyArray_ContiguousFromObject(c_py, PyArray_DOUBLE, 0, 1); - if ((ap_t == NULL || ap_c == NULL)) goto fail; - t = (double *) ap_t->data; - c = (double *) ap_c->data; - n = ap_t->dimensions[0]; - k1=k+1; - ap_d = (PyArrayObject *)PyArray_FromDims(1,&k1,PyArray_DOUBLE); - if (ap_d == NULL) goto fail; - d = (double *) ap_d->data; - SPALDE(t,&n,c,&k1,&x,d,&ier); - Py_DECREF(ap_c); - Py_DECREF(ap_t); 
- return Py_BuildValue("Ni",PyArray_Return(ap_d),ier); - fail: - Py_XDECREF(ap_c); - Py_XDECREF(ap_t); - return NULL; -} - -static char doc_insert[] = " [tt,cc,ier] = _insert(iopt,t,c,k,x,m)"; -static PyObject *fitpack_insert(PyObject *dummy, PyObject*args) { - int iopt, n, nn, k, nest, ier, m; - double x; - double *t, *c, *tt, *cc; - PyArrayObject *ap_t = NULL, *ap_c = NULL, *ap_tt = NULL, *ap_cc = NULL; - PyObject *t_py = NULL, *c_py = NULL; - PyObject *ret = NULL; - if (!PyArg_ParseTuple(args, "iOOidi",&iopt,&t_py,&c_py,&k, &x, &m)) return NULL; - ap_t = (PyArrayObject *)PyArray_ContiguousFromObject(t_py, PyArray_DOUBLE, 0, 1); - ap_c = (PyArrayObject *)PyArray_ContiguousFromObject(c_py, PyArray_DOUBLE, 0, 1); - if (ap_t == NULL || ap_c == NULL) goto fail; - t = (double *) ap_t->data; - c = (double *) ap_c->data; - n = ap_t->dimensions[0]; - nest = n + m; - ap_tt = (PyArrayObject *)PyArray_FromDims(1,&nest,PyArray_DOUBLE); - ap_cc = (PyArrayObject *)PyArray_FromDims(1,&nest,PyArray_DOUBLE); - if (ap_tt == NULL || ap_cc == NULL) goto fail; - tt = (double *) ap_tt->data; - cc = (double *) ap_cc->data; - for ( ; n < nest; n++) { - INSERT(&iopt, t, &n, c, &k, &x, tt, &nn, cc, &nest, &ier); - if (ier) break; - t = tt; - c = cc; - } - Py_DECREF(ap_c); - Py_DECREF(ap_t); - ret = Py_BuildValue("NNi",PyArray_Return(ap_tt),PyArray_Return(ap_cc),ier); - return ret; - - fail: - Py_XDECREF(ap_c); - Py_XDECREF(ap_t); - return NULL; - } - - -static void -_deBoor_D(double *t, double x, int k, int ell, int m, double *result) { - /* On completion the result array stores - the k+1 non-zero values of beta^(m)_i,k(x): for i=ell, ell-1, ell-2, ell-k. - Where t[ell] <= x < t[ell+1]. - */ - /* Implements a recursive algorithm similar to the original algorithm of - deBoor. 
- */ - - double *hh = result + k + 1; - double *h = result; - double xb, xa, w; - int ind, j, n; - - /* Perform k-m "standard" deBoor iterations */ - /* so that h contains the k+1 non-zero values of beta_{ell,k-m}(x) */ - /* needed to calculate the remaining derivatives. */ - - result[0] = 1.0; - for (j=1; j<=k-m; j++) { - memcpy(hh, h, j*sizeof(double)); - h[0] = 0.0; - for (n=1; n<=j; n++) { - ind = ell + n; - xb = t[ind]; - xa = t[ind-j]; - if (xb == xa) { - h[n] = 0.0; - continue; - } - w = hh[n-1]/(xb-xa); - h[n-1] += w*(xb-x); - h[n] = w*(x-xa); - } - } - - /* Now do m "derivative" recursions */ - /* to convert the values of beta into the mth derivative */ - for (j=k-m+1; j<=k; j++) { - memcpy(hh, h, j*sizeof(double)); - h[0] = 0.0; - for (n=1; n<=j; n++) { - ind = ell + n; - xb = t[ind]; - xa = t[ind-j]; - if (xb == xa) { - h[m] = 0.0; - continue; - } - w = j*hh[n-1]/(xb-xa); - h[n-1] -= w; - h[n] = w; - } - } -} - - -/* Given a set of (N+1) samples: A default set of knots is constructed - using the samples xk plus 2*(K-1) additional knots where - K = max(order,1) and the knots are chosen so that distances - are symmetric around the first and last samples: x_0 and x_N. - - There should be a vector of N+K coefficients for the spline - curve in coef. These coefficients form the curve as - - s(x) = sum(c_j B_{j,K}(x), j=-K..N-1) - - The spline function is evaluated at all points xx. - The approximation interval is from xk[0] to xk[-1] - Any xx outside that interval is set automatically to 0.0 - */ -static char doc_bspleval[] = "y = _bspleval(xx,xk,coef,k,{deriv (0)})\n" - "\n" - "The spline is defined by the approximation interval xk[0] to xk[-1],\n" - "the length of xk (N+1), the order of the spline, k, and \n" - "the number of coeficients N+k. 
The coefficients range from xk_{-K}\n" - "to xk_{N-1} inclusive and are all the coefficients needed to define\n" - "an arbitrary spline of order k, on the given approximation interval\n" - "\n" - "Extra knot points are internally added using knot-point symmetry \n" - "around xk[0] and xk[-1]"; - -static PyObject *_bspleval(PyObject *dummy, PyObject *args) { - int k,kk,N,i,ell,dk,deriv=0; - PyObject *xx_py=NULL, *coef_py=NULL, *x_i_py=NULL; - PyArrayObject *xx=NULL, *coef=NULL, *x_i=NULL, *yy=NULL; - PyArrayIterObject *xx_iter; - double *t=NULL, *h=NULL, *ptr; - double x0, xN, xN1, arg, sp, cval; - if (!PyArg_ParseTuple(args, "OOOi|i", &xx_py, &x_i_py, &coef_py, &k, &deriv)) - return NULL; - if (k < 0) { - PyErr_Format(PyExc_ValueError, "order (%d) must be >=0", k); - return NULL; - } - if (deriv > k) { - PyErr_Format(PyExc_ValueError, "derivative (%d) must be <= order (%d)", - deriv, k); - return NULL; - } - kk = k; - if (k==0) kk = 1; - dk = (k == 0 ? 0 : 1); - x_i = (PyArrayObject *)PyArray_FROMANY(x_i_py, NPY_DOUBLE, 1, 1, NPY_ALIGNED); - coef = (PyArrayObject *)PyArray_FROMANY(coef_py, NPY_DOUBLE, 1, 1, NPY_ALIGNED); - xx = (PyArrayObject *)PyArray_FROMANY(xx_py, NPY_DOUBLE, 0, 0, NPY_ALIGNED); - if (x_i == NULL || coef == NULL || xx == NULL) goto fail; - - N = PyArray_DIM(x_i,0)-1; - - if (PyArray_DIM(coef,0) < (N+k)) { - PyErr_Format(PyExc_ValueError, "too few coefficients (have %d need at least %d)", - PyArray_DIM(coef,0), N+k); - goto fail; - } - - /* create output values */ - yy = (PyArrayObject *)PyArray_EMPTY(xx->nd, xx->dimensions, NPY_DOUBLE, 0); - if (yy == NULL) goto fail; - /* create dummy knot array with new knots inserted at the end - selected as mirror symmetric versions of the old knots - */ - t = (double *)malloc(sizeof(double)*(N+2*kk-1)); - if (t==NULL) { - PyErr_NoMemory(); - goto fail; - } - x0 = *((double *)PyArray_DATA(x_i)); - xN = *((double *)PyArray_DATA(x_i) + N); - for (i=0; i<kk-1; i++) { /* if kk > 1*/ - t[i] = 2*x0 - *((double
*)(PyArray_GETPTR1(x_i,kk-1-i))); - t[kk+N+i] = 2*xN - *((double *)(PyArray_GETPTR1(x_i,N-1-i))); - } - ptr = t + (kk-1); - for (i=0; i<=N; i++) { - *ptr++ = *((double *)(PyArray_GETPTR1(x_i, i))); - } - - /* Create work array to hold computed non-zero values for - the spline for a value of x. - */ - h = (double *)malloc(sizeof(double)*(2*kk+1)); - if (h==NULL) { - PyErr_NoMemory(); - goto fail; - } - - /* Determine the spline for each value of x */ - xx_iter = (PyArrayIterObject *)PyArray_IterNew((PyObject *)xx); - if (xx_iter == NULL) goto fail; - ptr = PyArray_DATA(yy); - - while(PyArray_ITER_NOTDONE(xx_iter)) { - arg = *((double *)PyArray_ITER_DATA(xx_iter)); - if ((arg < x0) || (arg > xN)) { - /* If we are outside the interpolation region, - fill with zeros - */ - *ptr++ = 0.0; - } - else { - /* Find the interval that arg lies between in the set of knots - t[ell] <= arg < t[ell+1] (last-knot use the previous interval) */ - xN1 = *((double *)PyArray_DATA(x_i) + N-1); - if (arg >= xN1) { - ell = N + kk - 2; - } - else { - ell = kk-1; - while ((arg > t[ell])) ell++; - if (arg != t[ell]) ell--; - } - - _deBoor_D(t, arg, k, ell, deriv, h); - - sp = 0.0; - for (i=0; i<=k; i++) { - cval = *((double *)(PyArray_GETPTR1(coef, ell-i+dk))); - sp += cval*h[k-i]; - } - *ptr++ = sp; - } - PyArray_ITER_NEXT(xx_iter); - } - Py_DECREF(xx_iter); - Py_DECREF(x_i); - Py_DECREF(coef); - Py_DECREF(xx); - free(t); - free(h); - return PyArray_Return(yy); - - fail: - Py_XDECREF(xx); - Py_XDECREF(coef); - Py_XDECREF(x_i); - Py_XDECREF(yy); - if (t != NULL) free(t); - if (h != NULL) free(h); - return NULL; -} - - -/* Given a set of (N+1) sample positions: - Construct the diagonals of the (N+1) x (N+K) matrix that is needed to find - the coefficients of a spline fit of order K. - Note that K>=2 because for K=0,1, the coefficients are just the - sample values themselves. 
- - The equation that expresses the constraints is - - s(x_i) = sum(c_j B_{j,K}(x_i), j=-K..N-1) = w_i for i=0..N - - This is equivalent to - - w = B*c where c.T = [c_{-K}, c{-K+1}, ..., c_{N-1}] and - w.T = [w_{0}, w_{1}, ..., w_{N}] - - Therefore B is an (N+1) times (N+K) matrix with entries - - B_{j,K}(x_i) for column j=-K..N-1 - and row i=0..N - - This routine takes the N+1 sample positions and the order k and - constructs the banded constraint matrix B (with k non-zero diagonals) - - The returned array is (N+1) times (N+K) ready to be either used - to compute a minimally Kth-order derivative discontinuous spline - or to be expanded with an additional K-1 constraints to be used in - an exact spline specification. - */ -static char doc_bsplmat[] = "B = _bsplmat(order,xk)\n" -"Construct the constraint matrix for spline fitting of order k\n" -"given sample positions in xk.\n" -"\n" -"If xk is an integer (N+1), then the result is equivalent to\n" -"xk=arange(N+1)+x0 for any value of x0. 
This produces the\n" -"integer-spaced, or cardinal spline matrix a bit faster."; -static PyObject *_bsplmat(PyObject *dummy, PyObject *args) { - int k,N,i,numbytes,j, equal; - npy_intp dims[2]; - PyObject *x_i_py=NULL; - PyArrayObject *x_i=NULL, *BB=NULL; - double *t=NULL, *h=NULL, *ptr; - double x0, xN, arg; - if (!PyArg_ParseTuple(args, "iO", &k, &x_i_py)) - return NULL; - if (k < 2) { - PyErr_Format(PyExc_ValueError, "order (%d) must be >=2", k); - return NULL; - } - - equal = 0; - N = PySequence_Length(x_i_py); - if (N == -1 && PyErr_Occurred()) { - PyErr_Clear(); - N = PyInt_AsLong(x_i_py); - if (N==-1 && PyErr_Occurred()) goto fail; - equal = 1; - } - N -= 1; - - /* create output matrix */ - dims[0] = N+1; - dims[1] = N+k; - BB = (PyArrayObject *)PyArray_ZEROS(2, dims, NPY_DOUBLE, 0); - if (BB == NULL) goto fail; - - t = (double *)malloc(sizeof(double)*(N+2*k-1)); - if (t==NULL) { - PyErr_NoMemory(); - goto fail; - } - - /* Create work array to hold computed non-zero values for - the spline for a value of x. - */ - h = (double *)malloc(sizeof(double)*(2*k+1)); - if (h==NULL) { - PyErr_NoMemory(); - goto fail; - } - - numbytes = k*sizeof(double); - - if (equal) { /* points equally spaced by 1 */ - /* we run deBoor's algorithm one time with artificially created knots - Then, we keep copying the result to every row */ - - /* Create knots at equally-spaced locations from -(K-1) to N+K-1 */ - ptr = t; - for (i=-k+1; i 1*/ - t[i] = 2*x0 - *((double *)(PyArray_GETPTR1(x_i,k-1-i))); - t[k+N+i] = 2*xN - *((double *)(PyArray_GETPTR1(x_i,N-1-i))); - } - ptr = t + (k-1); - for (i=0; i<=N; i++) { - *ptr++ = *((double *)(PyArray_GETPTR1(x_i, i))); - } - - - /* Determine the K+1 non-zero values of the spline and place them in the - correct location in the matrix for each row (along the diagonals). - In fact, the last member is always zero so only K non-zero values - are present. 
- */ - ptr = PyArray_DATA(BB); - for (i=0,j=k-1; i=2", k); - return NULL; - } - - equal = 0; - N = PySequence_Length(x_i_py); - if (N==2 || (N == -1 && PyErr_Occurred())) { - PyErr_Clear(); - if (PyTuple_Check(x_i_py)) { - /* x_i_py = (N+1, dx) */ - N = PyInt_AsLong(PyTuple_GET_ITEM(x_i_py, 0)); - dx = PyFloat_AsDouble(PyTuple_GET_ITEM(x_i_py, 1)); - } - else { - N = PyInt_AsLong(x_i_py); - if (N==-1 && PyErr_Occurred()) goto fail; - dx = 1.0; - } - equal = 1; - } - N -= 1; - - if (N < 2) { - PyErr_Format(PyExc_ValueError, "too few samples (%d)", N); - return NULL; - } - /* create output matrix */ - dims[0] = N-1; - dims[1] = N+k; - BB = (PyArrayObject *)PyArray_ZEROS(2, dims, NPY_DOUBLE, 0); - if (BB == NULL) goto fail; - - t = (double *)malloc(sizeof(double)*(N+2*k-1)); - if (t==NULL) { - PyErr_NoMemory(); - goto fail; - } - - /* Create work array to hold computed non-zero values for - the spline for a value of x. - */ - h = (double *)malloc(sizeof(double)*(2*k+1)); - if (h==NULL) { - PyErr_NoMemory(); - goto fail; - } - - if (equal) { /* points equally spaced by 1 */ - /* we run deBoor's full derivative algorithm twice, subtract the results - offset by one and then copy the result one time with artificially created knots - Then, we keep copying the result to every row */ - - /* Create knots at equally-spaced locations from -(K-1) to N+K-1 */ - double *tmp, factor; - int numbytes; - numbytes = (k+2)*sizeof(double); - tmp = malloc(numbytes); - if (tmp==NULL) { - PyErr_NoMemory(); - goto fail; - } - ptr = t; - for (i=-k+1; i 1*/ - t[i] = 2*x0 - *((double *)(PyArray_GETPTR1(x_i,k-1-i))); - t[k+N+i] = 2*xN - *((double *)(PyArray_GETPTR1(x_i,N-1-i))); - } - ptr = t + (k-1); - for (i=0; i<=N; i++) { - *ptr++ = *((double *)(PyArray_GETPTR1(x_i, i))); - } - - - /* Determine the K+1 non-zero values of the discontinuity jump matrix - and place them in the correct location in the matrix for each row - (along the diagonals). 
- - The matrix is - - J_{ij} = b^{(k)}_{j,k}(x^{+}_i) - b^{(k)}_{j,k}(x^{-}_i) - - */ - ptr = PyArray_DATA(BB); - dptr = ptr; - for (i=0,j=k-1; i0) { - for (m=0; m<=k; m++) *dptr++ += h[m]; - } - /* store location of last start position plus one.*/ - dptr = ptr - k; - ptr += N; /* advance to next row shifted over one */ - } - /* We need to finish the result for the last row. */ - _deBoor_D(t, 0, k, j, k, h); - for (m=0; m<=k; m++) *dptr++ += h[m]; - - finish: - Py_XDECREF(x_i); - free(t); - free(h); - return (PyObject *)BB; - - fail: - Py_XDECREF(x_i); - Py_XDECREF(BB); - if (t != NULL) free(t); - if (h != NULL) free(h); - return NULL; -} - - - Deleted: trunk/scipy/interpolate/_fitpackmodule.c =================================================================== --- trunk/scipy/interpolate/_fitpackmodule.c 2008-09-04 16:47:55 UTC (rev 4686) +++ trunk/scipy/interpolate/_fitpackmodule.c 2008-09-04 19:41:11 UTC (rev 4687) @@ -1,36 +0,0 @@ -/* - Multipack project. - This file is generated by setmodules.py. Do not modify it. 
- */ -#include "multipack.h" -static PyObject *fitpack_error; -#include "__fitpack.h" -static struct PyMethodDef fitpack_module_methods[] = { -{"_curfit", fitpack_curfit, METH_VARARGS, doc_curfit}, -{"_spl_", fitpack_spl_, METH_VARARGS, doc_spl_}, -{"_splint", fitpack_splint, METH_VARARGS, doc_splint}, -{"_sproot", fitpack_sproot, METH_VARARGS, doc_sproot}, -{"_spalde", fitpack_spalde, METH_VARARGS, doc_spalde}, -{"_parcur", fitpack_parcur, METH_VARARGS, doc_parcur}, -{"_surfit", fitpack_surfit, METH_VARARGS, doc_surfit}, -{"_bispev", fitpack_bispev, METH_VARARGS, doc_bispev}, -{"_insert", fitpack_insert, METH_VARARGS, doc_insert}, -{"_bspleval", _bspleval, METH_VARARGS, doc_bspleval}, -{"_bsplmat", _bsplmat, METH_VARARGS, doc_bsplmat}, -{"_bspldismat", _bspldismat, METH_VARARGS, doc_bspldismat}, -{NULL, NULL, 0, NULL} -}; -PyMODINIT_FUNC init_fitpack(void) { - PyObject *m, *d, *s; - m = Py_InitModule("_fitpack", fitpack_module_methods); - import_array(); - d = PyModule_GetDict(m); - - s = PyString_FromString(" 1.7 "); - PyDict_SetItemString(d, "__version__", s); - fitpack_error = PyErr_NewException ("fitpack.error", NULL, NULL); - Py_DECREF(s); - if (PyErr_Occurred()) - Py_FatalError("can't initialize module fitpack"); -} - Deleted: trunk/scipy/interpolate/fitpack.pyf =================================================================== --- trunk/scipy/interpolate/fitpack.pyf 2008-09-04 16:47:55 UTC (rev 4686) +++ trunk/scipy/interpolate/fitpack.pyf 2008-09-04 19:41:11 UTC (rev 4687) @@ -1,479 +0,0 @@ -! -*- f90 -*- -! Author: Pearu Peterson -! -python module dfitpack ! 
in - - usercode ''' - -static double dmax(double* seq,int len) { - double val; - int i; - if (len<1) - return -1e308; - val = seq[0]; - for(i=1;ival) val = seq[i]; - return val; -} -static double dmin(double* seq,int len) { - double val; - int i; - if (len<1) - return 1e308; - val = seq[0]; - for(i=1;ival1) return val1; - val1 = dmax(tx,nx); - return val2 - (val1-val2)/nx; -} -static double calc_e(double* x,int m,double* tx,int nx) { - double val1 = dmax(x,m); - double val2 = dmax(tx,nx); - if (val2=8) :: n=len(t) - real*8 dimension(n),depend(n),check(len(c)==n) :: c - real*8 dimension(mest),intent(out),depend(mest) :: zero - integer optional,intent(in),depend(n) :: mest=3*(n-7) - integer intent(out) :: m - integer intent(out) :: ier - end subroutine sproot - - subroutine spalde(t,n,c,k,x,d,ier) - ! d,ier = spalde(t,c,k,x) - - callprotoargument double*,int*,double*,int*,double*,double*,int* - callstatement {int k1=k+1; (*f2py_func)(t,&n,c,&k1,&x,d,&ier); } - - real*8 dimension(n) :: t - integer intent(hide),depend(t) :: n=len(t) - real*8 dimension(n),depend(n),check(len(c)==n) :: c - integer intent(in) :: k - real*8 intent(in) :: x - real*8 dimension(k+1),intent(out),depend(k) :: d - integer intent(out) :: ier - end subroutine spalde - - subroutine curfit(iopt,m,x,y,w,xb,xe,k,s,nest,n,t,c,fp,wrk,lwrk,iwrk,ier) - ! 
in curfit.f - integer :: iopt - integer intent(hide),depend(x),check(m>k),depend(k) :: m=len(x) - real*8 dimension(m) :: x - real*8 dimension(m),depend(m),check(len(y)==m) :: y - real*8 dimension(m),depend(m),check(len(w)==m) :: w - real*8 optional,depend(x),check(xb<=x[0]) :: xb = x[0] - real*8 optional,depend(x,m),check(xe>=x[m-1]) :: xe = x[m-1] - integer optional,check(1<=k && k <=5),intent(in) :: k=3 - real*8 optional,check(s>=0.0) :: s = 0.0 - integer intent(hide),depend(t) :: nest=len(t) - integer intent(out), depend(nest) :: n=nest - real*8 dimension(nest),intent(inout) :: t - real*8 dimension(n),intent(out) :: c - real*8 intent(out) :: fp - real*8 dimension(lwrk),intent(inout) :: wrk - integer intent(hide),depend(wrk) :: lwrk=len(wrk) - integer dimension(nest),intent(inout) :: iwrk - integer intent(out) :: ier - end subroutine curfit - - subroutine percur(iopt,m,x,y,w,k,s,nest,n,t,c,fp,wrk,lwrk,iwrk,ier) - ! in percur.f - integer :: iopt - integer intent(hide),depend(x),check(m>k),depend(k) :: m=len(x) - real*8 dimension(m) :: x - real*8 dimension(m),depend(m),check(len(y)==m) :: y - real*8 dimension(m),depend(m),check(len(w)==m) :: w - integer optional,check(1<=k && k <=5),intent(in) :: k=3 - real*8 optional,check(s>=0.0) :: s = 0.0 - integer intent(hide),depend(t) :: nest=len(t) - integer intent(out), depend(nest) :: n=nest - real*8 dimension(nest),intent(inout) :: t - real*8 dimension(n),intent(out) :: c - real*8 intent(out) :: fp - real*8 dimension(lwrk),intent(inout) :: wrk - integer intent(hide),depend(wrk) :: lwrk=len(wrk) - integer dimension(nest),intent(inout) :: iwrk - integer intent(out) :: ier - end subroutine percur - - - subroutine parcur(iopt,ipar,idim,m,u,mx,x,w,ub,ue,k,s,nest,n,t,nc,c,fp,wrk,lwrk,iwrk,ier) - ! 
in parcur.f - integer check(iopt>=-1 && iopt <= 1):: iopt - integer check(ipar == 1 || ipar == 0) :: ipar - integer check(idim > 0 && idim < 11) :: idim - integer intent(hide),depend(u,k),check(m>k) :: m=len(u) - real*8 dimension(m), intent(inout) :: u - integer intent(hide),depend(x,idim,m),check(mx>=idim*m) :: mx=len(x) - real*8 dimension(mx) :: x - real*8 dimension(m) :: w - real*8 :: ub - real*8 :: ue - integer optional, check(1<=k && k<=5) :: k=3.0 - real*8 optional, check(s>=0.0) :: s = 0.0 - integer intent(hide), depend(t) :: nest=len(t) - integer intent(out), depend(nest) :: n=nest - real*8 dimension(nest), intent(inout) :: t - integer intent(hide), depend(c,nest,idim), check(nc>=idim*nest) :: nc=len(c) - real*8 dimension(nc), intent(out) :: c - real*8 intent(out) :: fp - real*8 dimension(lwrk), intent(inout) :: wrk - integer intent(hide),depend(wrk) :: lwrk=len(wrk) - integer dimension(nest), intent(inout) :: iwrk - integer intent(out) :: ier - end subroutine parcur - - - subroutine fpcurf0(iopt,x,y,w,m,xb,xe,k,s,nest,tol,maxit,k1,k2,n,t,c,fp,fpint,wrk,nrdata,ier) - ! x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier = \ - ! 
fpcurf0(x,y,k,[w,xb,xe,s,nest]) - - fortranname fpcurf - callprotoargument int*,double*,double*,double*,int*,double*,double*,int*,double*,int*,double*,int*,int*,int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int* - callstatement (*f2py_func)(&iopt,x,y,w,&m,&xb,&xe,&k,&s,&nest,&tol,&maxit,&k1,&k2,&n,t,c,&fp,fpint,wrk,wrk+nest,wrk+nest*k2,wrk+nest*2*k2,wrk+nest*3*k2,nrdata,&ier) - - integer intent(hide) :: iopt = 0 - real*8 dimension(m),intent(in,out) :: x - real*8 dimension(m),depend(m),check(len(y)==m),intent(in,out) :: y - real*8 dimension(m),depend(m),check(len(w)==m),intent(in,out) :: w = 1.0 - integer intent(hide),depend(x),check(m>k),depend(k) :: m=len(x) - real*8 intent(in,out),depend(x),check(xb<=x[0]) :: xb = x[0] - real*8 intent(in,out),depend(x,m),check(xe>=x[m-1]) :: xe = x[m-1] - integer check(1<=k && k<=5),intent(in,out) :: k - real*8 check(s>=0.0),depend(m),intent(in,out) :: s = m - integer intent(in),depend(m,s,k,k1),check(nest>=2*k1) :: nest = (s==0.0?m+k+1:MAX(m/2,2*k1)) - real*8 intent(hide) :: tol = 0.001 - integer intent(hide) :: maxit = 20 - integer intent(hide),depend(k) :: k1=k+1 - integer intent(hide),depend(k) :: k2=k+2 - integer intent(out) :: n - real*8 dimension(nest),intent(out),depend(nest) :: t - real*8 dimension(nest),depend(nest),intent(out) :: c - real*8 intent(out) :: fp - real*8 dimension(nest),depend(nest),intent(out,cache) :: fpint - real*8 dimension(nest*3*k2+m*k1),intent(cache,hide),depend(nest,k1,k2,m) :: wrk - integer dimension(nest),depend(nest),intent(out,cache) :: nrdata - integer intent(out) :: ier - end subroutine fpcurf0 - - subroutine fpcurf1(iopt,x,y,w,m,xb,xe,k,s,nest,tol,maxit,k1,k2,n,t,c,fp,fpint,wrk,nrdata,ier) - ! x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier = \ - ! 
fpcurf1(x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier) - - fortranname fpcurf - callprotoargument int*,double*,double*,double*,int*,double*,double*,int*,double*,int*,double*,int*,int*,int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int* - callstatement (*f2py_func)(&iopt,x,y,w,&m,&xb,&xe,&k,&s,&nest,&tol,&maxit,&k1,&k2,&n,t,c,&fp,fpint,wrk,wrk+nest,wrk+nest*k2,wrk+nest*2*k2,wrk+nest*3*k2,nrdata,&ier) - - integer intent(hide) :: iopt = 1 - real*8 dimension(m),intent(in,out,overwrite) :: x - real*8 dimension(m),depend(m),check(len(y)==m),intent(in,out,overwrite) :: y - real*8 dimension(m),depend(m),check(len(w)==m),intent(in,out,overwrite) :: w - integer intent(hide),depend(x),check(m>k),depend(k) :: m=len(x) - real*8 intent(in,out) :: xb - real*8 intent(in,out) :: xe - integer check(1<=k && k<=5),intent(in,out) :: k - real*8 check(s>=0.0),intent(in,out) :: s - integer intent(hide),depend(t) :: nest = len(t) - real*8 intent(hide) :: tol = 0.001 - integer intent(hide) :: maxit = 20 - integer intent(hide),depend(k) :: k1=k+1 - integer intent(hide),depend(k) :: k2=k+2 - integer intent(in,out) :: n - real*8 dimension(nest),intent(in,out,overwrite) :: t - real*8 dimension(nest),depend(nest),check(len(c)==nest),intent(in,out,overwrite) :: c - real*8 intent(in,out) :: fp - real*8 dimension(nest),depend(nest),check(len(fpint)==nest),intent(in,out,cache,overwrite) :: fpint - real*8 dimension(nest*3*k2+m*k1),intent(cache,hide),depend(nest,k1,k2,m) :: wrk - integer dimension(nest),depend(nest),check(len(nrdata)==nest),intent(in,out,cache,overwrite) :: nrdata - integer intent(in,out) :: ier - end subroutine fpcurf1 - - subroutine fpcurfm1(iopt,x,y,w,m,xb,xe,k,s,nest,tol,maxit,k1,k2,n,t,c,fp,fpint,wrk,nrdata,ier) - ! x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier = \ - ! 
fpcurfm1(x,y,k,t,[w,xb,xe]) - - fortranname fpcurf - callprotoargument int*,double*,double*,double*,int*,double*,double*,int*,double*,int*,double*,int*,int*,int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int* - callstatement (*f2py_func)(&iopt,x,y,w,&m,&xb,&xe,&k,&s,&nest,&tol,&maxit,&k1,&k2,&n,t,c,&fp,fpint,wrk,wrk+nest,wrk+nest*k2,wrk+nest*2*k2,wrk+nest*3*k2,nrdata,&ier) - - integer intent(hide) :: iopt = -1 - real*8 dimension(m),intent(in,out) :: x - real*8 dimension(m),depend(m),check(len(y)==m),intent(in,out) :: y - real*8 dimension(m),depend(m),check(len(w)==m),intent(in,out) :: w = 1.0 - integer intent(hide),depend(x),check(m>k),depend(k) :: m=len(x) - real*8 intent(in,out),depend(x),check(xb<=x[0]) :: xb = x[0] - real*8 intent(in,out),depend(x,m),check(xe>=x[m-1]) :: xe = x[m-1] - integer check(1<=k && k<=5),intent(in,out) :: k - real*8 intent(out) :: s = -1 - integer intent(hide),depend(n) :: nest = n - real*8 intent(hide) :: tol = 0.001 - integer intent(hide) :: maxit = 20 - integer intent(hide),depend(k) :: k1=k+1 - integer intent(hide),depend(k) :: k2=k+2 - integer intent(out),depend(t) :: n = len(t) - real*8 dimension(n),intent(in,out,overwrite) :: t - real*8 dimension(nest),depend(nest),intent(out) :: c - real*8 intent(out) :: fp - real*8 dimension(nest),depend(nest),intent(out,cache) :: fpint - real*8 dimension(nest*3*k2+m*k1),intent(cache,hide),depend(nest,k1,k2,m) :: wrk - integer dimension(nest),depend(nest),intent(out,cache) :: nrdata - integer intent(out) :: ier - end subroutine fpcurfm1 - - !!!!!!!!!! Bivariate spline !!!!!!!!!!! - - subroutine bispev(tx,nx,ty,ny,c,kx,ky,x,mx,y,my,z,wrk,lwrk,iwrk,kwrk,ier) - ! 
z,ier = bispev(tx,ty,c,kx,ky,x,y) - real*8 dimension(nx),intent(in) :: tx - integer intent(hide),depend(tx) :: nx=len(tx) - real*8 dimension(ny),intent(in) :: ty - integer intent(hide),depend(ty) :: ny=len(ty) - real*8 intent(in),dimension((nx-kx-1)*(ny-ky-1)),depend(nx,ny,kx,ky),& - check(len(c)==(nx-kx-1)*(ny-ky-1)):: c - integer :: kx - integer :: ky - real*8 intent(in),dimension(mx) :: x - integer intent(hide),depend(x) :: mx=len(x) - real*8 intent(in),dimension(my) :: y - integer intent(hide),depend(y) :: my=len(y) - real*8 dimension(mx,my),depend(mx,my),intent(out,c) :: z - real*8 dimension(lwrk),depend(lwrk),intent(hide,cache) :: wrk - integer intent(hide),depend(mx,kx,my,ky) :: lwrk=mx*(kx+1)+my*(ky+1) - integer dimension(kwrk),depend(kwrk),intent(hide,cache) :: iwrk - integer intent(hide),depend(mx,my) :: kwrk=mx+my - integer intent(out) :: ier - end subroutine bispev - - subroutine surfit_smth(iopt,m,x,y,z,w,xb,xe,yb,ye,kx,ky,s,nxest,nyest,& - nmax,eps,nx,tx,ny,ty,c,fp,wrk1,lwrk1,wrk2,lwrk2,& - iwrk,kwrk,ier) - ! 
nx,tx,ny,ty,c,fp,ier = surfit_smth(x,y,z,[w,xb,xe,yb,ye,kx,ky,s,eps,lwrk2]) - - fortranname surfit - - integer intent(hide) :: iopt=0 - integer intent(hide),depend(x,kx,ky),check(m>=(kx+1)*(ky+1)) & - :: m=len(x) - real*8 dimension(m) :: x - real*8 dimension(m),depend(m),check(len(y)==m) :: y - real*8 dimension(m),depend(m),check(len(z)==m) :: z - real*8 optional,dimension(m),depend(m),check(len(w)==m) :: w = 1.0 - real*8 optional,depend(x,m) :: xb=dmin(x,m) - real*8 optional,depend(x,m) :: xe=dmax(x,m) - real*8 optional,depend(y,m) :: yb=dmin(y,m) - real*8 optional,depend(y,m) :: ye=dmax(y,m) - integer check(1<=kx && kx<=5) :: kx = 3 - integer check(1<=ky && ky<=5) :: ky = 3 - real*8 optional,check(0.0<=s) :: s = m - integer optional,depend(kx,m),check(nxest>=2*(kx+1)) & - :: nxest = imax(kx+1+sqrt(m/2),2*(kx+1)) - integer optional,depend(ky,m),check(nyest>=2*(ky+1)) & - :: nyest = imax(ky+1+sqrt(m/2),2*(ky+1)) - integer intent(hide),depend(nxest,nyest) :: nmax=MAX(nxest,nyest) - real*8 optional,check(0.0=(kx+1)*(ky+1)) & - :: m=len(x) - real*8 dimension(m) :: x - real*8 dimension(m),depend(m),check(len(y)==m) :: y - real*8 dimension(m),depend(m),check(len(z)==m) :: z - real*8 optional,dimension(m),depend(m),check(len(w)==m) :: w = 1.0 - real*8 optional,depend(x,tx,m,nx) :: xb=calc_b(x,m,tx,nx) - real*8 optional,depend(x,tx,m,nx) :: xe=calc_e(x,m,tx,nx) - real*8 optional,depend(y,ty,m,ny) :: yb=calc_b(y,m,ty,ny) - real*8 optional,depend(y,ty,m,ny) :: ye=calc_e(y,m,ty,ny) - integer check(1<=kx && kx<=5) :: kx = 3 - integer check(1<=ky && ky<=5) :: ky = 3 - real*8 intent(hide) :: s = 0.0 - integer intent(hide),depend(nx) :: nxest = nx - integer intent(hide),depend(ny) :: nyest = ny - integer intent(hide),depend(nx,ny) :: nmax=MAX(nx,ny) - real*8 optional,check(0.0kx) :: mx=len(x) - real*8 dimension(mx) :: x - integer intent(hide),depend(y,ky),check(my>ky) :: my=len(y) - real*8 dimension(my) :: y - real*8 dimension(mx*my),depend(mx,my),check(len(z)==mx*my) :: z - 
real*8 optional,depend(x,mx) :: xb=dmin(x,mx) - real*8 optional,depend(x,mx) :: xe=dmax(x,mx) - real*8 optional,depend(y,my) :: yb=dmin(y,my) - real*8 optional,depend(y,my) :: ye=dmax(y,my) - integer optional,check(1<=kx && kx<=5) :: kx = 3 - integer optional,check(1<=ky && ky<=5) :: ky = 3 - real*8 optional,check(0.0<=s) :: s = 0.0 - integer intent(hide),depend(kx,mx),check(nxest>=2*(kx+1)) & - :: nxest = mx+kx+1 - integer intent(hide),depend(ky,my),check(nyest>=2*(ky+1)) & - :: nyest = my+ky+1 - integer intent(out) :: nx - real*8 dimension(nxest),intent(out),depend(nxest) :: tx - integer intent(out) :: ny - real*8 dimension(nyest),intent(out),depend(nyest) :: ty - real*8 dimension((nxest-kx-1)*(nyest-ky-1)), & - depend(kx,ky,nxest,nyest),intent(out) :: c - real*8 intent(out) :: fp - real*8 dimension(lwrk),intent(cache,hide),depend(lwrk) :: wrk - integer intent(hide),depend(mx,my,kx,ky,nxest,nyest) & - :: lwrk=calc_regrid_lwrk(mx,my,kx,ky,nxest,nyest) - integer dimension(kwrk),depend(kwrk),intent(cache,hide) :: iwrk - integer intent(hide),depend(mx,my,nxest,nyest) & - :: kwrk=3+mx+my+nxest+nyest - integer intent(out) :: ier - end subroutine regrid_smth - - function dblint(tx,nx,ty,ny,c,kx,ky,xb,xe,yb,ye,wrk) - ! 
iy = dblint(tx,ty,c,kx,ky,xb,xe,yb,ye) - real*8 dimension(nx),intent(in) :: tx - integer intent(hide),depend(tx) :: nx=len(tx) - real*8 dimension(ny),intent(in) :: ty - integer intent(hide),depend(ty) :: ny=len(ty) - real*8 intent(in),dimension((nx-kx-1)*(ny-ky-1)),depend(nx,ny,kx,ky),& - check(len(c)==(nx-kx-1)*(ny-ky-1)):: c - integer :: kx - integer :: ky - real*8 intent(in) :: xb - real*8 intent(in) :: xe - real*8 intent(in) :: yb - real*8 intent(in) :: ye - real*8 dimension(nx+ny-kx-ky-2),depend(nx,ny,kx,ky),intent(cache,hide) :: wrk - real*8 :: dblint - end function dblint - end interface -end python module dfitpack - Deleted: trunk/scipy/interpolate/multipack.h =================================================================== --- trunk/scipy/interpolate/multipack.h 2008-09-04 16:47:55 UTC (rev 4686) +++ trunk/scipy/interpolate/multipack.h 2008-09-04 19:41:11 UTC (rev 4687) @@ -1,211 +0,0 @@ -/* MULTIPACK module by Travis Oliphant - -Copyright (c) 2002 Travis Oliphant all rights reserved -Oliphant.Travis at altavista.net -Permission to use, modify, and distribute this software is given under the -terms of the SciPy (BSD style) license. See LICENSE.txt that came with -this distribution for specifics. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -*/ - - -/* This extension module is a collection of wrapper functions around -common FORTRAN code in the packages MINPACK, ODEPACK, and QUADPACK plus -some differential algebraic equation solvers. - -The wrappers are meant to be nearly direct translations between the -FORTAN code and Python. Some parameters like sizes do not need to be -passed since they are available from the objects. - -It is anticipated that a pure Python module be written to call these lower -level routines and make a simpler user interface. All of the routines define -default values for little-used parameters so that even the raw routines are -quite useful without a separate wrapper. 
- -FORTRAN Outputs that are not either an error indicator or the sought-after -results are placed in a dictionary and returned as an optional member of -the result tuple when the full_output argument is non-zero. -*/ - -#include "Python.h" -#include "numpy/arrayobject.h" - -#define PYERR(errobj,message) {PyErr_SetString(errobj,message); goto fail;} -#define PYERR2(errobj,message) {PyErr_Print(); PyErr_SetString(errobj, message); goto fail;} -#define ISCONTIGUOUS(m) ((m)->flags & CONTIGUOUS) - -#define STORE_VARS() PyObject *store_multipack_globals[4]; int store_multipack_globals3; - -#define INIT_FUNC(fun,arg,errobj) { /* Get extra arguments or set to zero length tuple */ \ - store_multipack_globals[0] = multipack_python_function; \ - store_multipack_globals[1] = multipack_extra_arguments; \ - if (arg == NULL) { \ - if ((arg = PyTuple_New(0)) == NULL) goto fail; \ - } \ - else \ - Py_INCREF(arg); /* We decrement on exit. */ \ - if (!PyTuple_Check(arg)) \ - PYERR(errobj,"Extra Arguments must be in a tuple"); \ - /* Set up callback functions */ \ - if (!PyCallable_Check(fun)) \ - PYERR(errobj,"First argument must be a callable function."); \ - multipack_python_function = fun; \ - multipack_extra_arguments = arg; } - -#define INIT_JAC_FUNC(fun,Dfun,arg,col_deriv,errobj) { \ - store_multipack_globals[0] = multipack_python_function; \ - store_multipack_globals[1] = multipack_extra_arguments; \ - store_multipack_globals[2] = multipack_python_jacobian; \ - store_multipack_globals3 = multipack_jac_transpose; \ - if (arg == NULL) { \ - if ((arg = PyTuple_New(0)) == NULL) goto fail; \ - } \ - else \ - Py_INCREF(arg); /* We decrement on exit. 
*/ \ - if (!PyTuple_Check(arg)) \ - PYERR(errobj,"Extra Arguments must be in a tuple"); \ - /* Set up callback functions */ \ - if (!PyCallable_Check(fun) || (Dfun != Py_None && !PyCallable_Check(Dfun))) \ - PYERR(errobj,"The function and its Jacobian must be callable functions."); \ - multipack_python_function = fun; \ - multipack_extra_arguments = arg; \ - multipack_python_jacobian = Dfun; \ - multipack_jac_transpose = !(col_deriv);} - -#define RESTORE_JAC_FUNC() multipack_python_function = store_multipack_globals[0]; \ - multipack_extra_arguments = store_multipack_globals[1]; \ - multipack_python_jacobian = store_multipack_globals[2]; \ - multipack_jac_transpose = store_multipack_globals3; - -#define RESTORE_FUNC() multipack_python_function = store_multipack_globals[0]; \ - multipack_extra_arguments = store_multipack_globals[1]; - -#define SET_DIAG(ap_diag,o_diag,mode) { /* Set the diag vector from input */ \ - if (o_diag == NULL || o_diag == Py_None) { \ - ap_diag = (PyArrayObject *)PyArray_FromDims(1,&n,PyArray_DOUBLE); \ - if (ap_diag == NULL) goto fail; \ - diag = (double *)ap_diag -> data; \ - mode = 1; \ - } \ - else { \ - ap_diag = (PyArrayObject *)PyArray_ContiguousFromObject(o_diag, PyArray_DOUBLE, 1, 1); \ - if (ap_diag == NULL) goto fail; \ - diag = (double *)ap_diag -> data; \ - mode = 2; } } - -#define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, *p3=(double *)(data);\ -int i,j;\ -for (j=0;j<(m);p3++,j++) \ - for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \ - *p1 = *p2; } -/* -static PyObject *multipack_python_function=NULL; -static PyObject *multipack_python_jacobian=NULL; -static PyObject *multipack_extra_arguments=NULL; -static int multipack_jac_transpose=1; -*/ - -static PyArrayObject * my_make_numpy_array(PyObject *y0, int type, int mindim, int maxdim) - /* This is just like PyArray_ContiguousFromObject except it handles - * single numeric datatypes as 1-element, rank-1 arrays instead of as - * scalars. 
- */ -{ - PyArrayObject *new_array; - PyObject *tmpobj; - - Py_INCREF(y0); - - if (PyInt_Check(y0) || PyFloat_Check(y0)) { - tmpobj = PyList_New(1); - PyList_SET_ITEM(tmpobj, 0, y0); /* reference now belongs to tmpobj */ - } - else - tmpobj = y0; - - new_array = (PyArrayObject *)PyArray_ContiguousFromObject(tmpobj, type, mindim, maxdim); - - Py_DECREF(tmpobj); - return new_array; -} - -static PyObject *call_python_function(PyObject *func, int n, double *x, PyObject *args, int dim, PyObject *error_obj) -{ - /* - This is a generic function to call a python function that takes a 1-D - sequence as a first argument and optional extra_arguments (should be a - zero-length tuple if none desired). The result of the function is - returned in a multiarray object. - -- build sequence object from values in x. - -- add extra arguments (if any) to an argument list. - -- call Python callable object - -- check if error occurred: - if so return NULL - -- if no error, place result of Python code into multiarray object. - */ - - PyArrayObject *sequence = NULL; - PyObject *arglist = NULL, *tmpobj = NULL; - PyObject *arg1 = NULL, *str1 = NULL; - PyObject *result = NULL; - PyArrayObject *result_array = NULL; - - /* Build sequence argument from inputs */ - sequence = (PyArrayObject *)PyArray_FromDimsAndData(1, &n, PyArray_DOUBLE, (char *)x); - if (sequence == NULL) PYERR2(error_obj,"Internal failure to make an array of doubles out of first\n argument to function call."); - - /* Build argument list */ - if ((arg1 = PyTuple_New(1)) == NULL) { - Py_DECREF(sequence); - return NULL; - } - PyTuple_SET_ITEM(arg1, 0, (PyObject *)sequence); - /* arg1 now owns sequence reference */ - if ((arglist = PySequence_Concat( arg1, args)) == NULL) - PYERR2(error_obj,"Internal error constructing argument list."); - - Py_DECREF(arg1); /* arglist has a reference to sequence, now. */ - - - /* Call function object --- variable passed to routine. Extra - arguments are in another passed variable. 
- */ - if ((result = PyEval_CallObject(func, arglist))==NULL) { - PyErr_Print(); - tmpobj = PyObject_GetAttrString(func, "func_name"); - if (tmpobj == NULL) goto fail; - str1 = PyString_FromString("Error occured while calling the Python function named "); - if (str1 == NULL) { Py_DECREF(tmpobj); goto fail;} - PyString_ConcatAndDel(&str1, tmpobj); - PyErr_SetString(error_obj, PyString_AsString(str1)); - Py_DECREF(str1); - goto fail; - } - - if ((result_array = (PyArrayObject *)PyArray_ContiguousFromObject(result, PyArray_DOUBLE, dim-1, dim))==NULL) - PYERR2(error_obj,"Result from function call is not a proper array of floats."); - - Py_DECREF(result); - Py_DECREF(arglist); - return (PyObject *)result_array; - - fail: - Py_XDECREF(arglist); - Py_XDECREF(result); - Py_XDECREF(arg1); - return NULL; -} - - - - - - - - - - - - - Modified: trunk/scipy/interpolate/setup.py =================================================================== --- trunk/scipy/interpolate/setup.py 2008-09-04 16:47:55 UTC (rev 4686) +++ trunk/scipy/interpolate/setup.py 2008-09-04 19:41:11 UTC (rev 4687) @@ -12,12 +12,12 @@ ) config.add_extension('_fitpack', - sources=['_fitpackmodule.c'], + sources=['src/_fitpackmodule.c'], libraries=['fitpack'], ) config.add_extension('dfitpack', - sources=['fitpack.pyf'], + sources=['src/fitpack.pyf'], libraries=['fitpack'], ) Copied: trunk/scipy/interpolate/src/__fitpack.h (from rev 4684, trunk/scipy/interpolate/__fitpack.h) Copied: trunk/scipy/interpolate/src/_fitpackmodule.c (from rev 4684, trunk/scipy/interpolate/_fitpackmodule.c) Copied: trunk/scipy/interpolate/src/fitpack.pyf (from rev 4684, trunk/scipy/interpolate/fitpack.pyf) Copied: trunk/scipy/interpolate/src/multipack.h (from rev 4684, trunk/scipy/interpolate/multipack.h) From scipy-svn at scipy.org Thu Sep 4 16:22:29 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 4 Sep 2008 15:22:29 -0500 (CDT) Subject: [Scipy-svn] r4688 - trunk/scipy/weave/tests Message-ID: 
<20080904202229.8DD5939C02A@scipy.org> Author: alan.mcintyre Date: 2008-09-04 15:22:25 -0500 (Thu, 04 Sep 2008) New Revision: 4688 Modified: trunk/scipy/weave/tests/test_scxx_dict.py Log: Use numpy.testing.dec.knownfailureif instead of skipknownfailure (which was removed) . Modified: trunk/scipy/weave/tests/test_scxx_dict.py =================================================================== --- trunk/scipy/weave/tests/test_scxx_dict.py 2008-09-04 19:41:11 UTC (rev 4687) +++ trunk/scipy/weave/tests/test_scxx_dict.py 2008-09-04 20:22:25 UTC (rev 4688) @@ -9,7 +9,6 @@ from scipy.weave import inline_tools - class TestDictConstruct(TestCase): #------------------------------------------------------------------------ # Check that construction from basic types is allowed and have correct @@ -111,7 +110,7 @@ def test_char(self): self.generic_get('return_val = a["b"];') - @dec.skipknownfailure + @dec.knownfailureif(True) @dec.slow def test_char_fail(self): # We can't through a KeyError for dicts on RHS of @@ -134,7 +133,7 @@ """ self.generic_get(code,['a']) - @dec.skipknownfailure + @dec.knownfailureif(True) @dec.slow def test_obj_fail(self): # We can't through a KeyError for dicts on RHS of From scipy-svn at scipy.org Fri Sep 5 08:50:13 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Fri, 5 Sep 2008 07:50:13 -0500 (CDT) Subject: [Scipy-svn] r4689 - in trunk/scipy/interpolate: . src Message-ID: <20080905125013.7092239C11D@scipy.org> Author: oliphant Date: 2008-09-05 07:50:12 -0500 (Fri, 05 Sep 2008) New Revision: 4689 Added: trunk/scipy/interpolate/src/_interpolate.cpp trunk/scipy/interpolate/src/interpolate.h Modified: trunk/scipy/interpolate/setup.py Log: Add _interpolate functions. 
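The r4689 commit below adds a `linear` routine to the new `src/_interpolate.cpp`, taking sample arrays `x`, `y`, evaluation points `new_x`, and an output array `new_y`. As a rough pure-Python sketch of the same piecewise-linear evaluation (the function name `linear_interp` and the endpoint-clamping behavior are assumptions here, since the C++ diff is truncated):

```python
from bisect import bisect_right

def linear_interp(x, y, new_x):
    """Piecewise-linear interpolation of sorted samples (x, y) at new_x.

    Hypothetical sketch of what _interpolate's linear routine computes;
    values outside [x[0], x[-1]] are clamped to the endpoint values
    (an assumption -- the actual extrapolation behavior is not shown).
    """
    out = []
    for xi in new_x:
        if xi <= x[0]:
            out.append(y[0])
        elif xi >= x[-1]:
            out.append(y[-1])
        else:
            j = bisect_right(x, xi)  # first knot strictly to the right of xi
            t = (xi - x[j - 1]) / (x[j] - x[j - 1])
            out.append((1.0 - t) * y[j - 1] + t * y[j])
    return out

print(linear_interp([0.0, 1.0, 2.0], [0.0, 2.0, 4.0], [0.5, 1.5, 2.0]))
# [1.0, 3.0, 4.0]
```

The C++ version vectorizes this over contiguous double arrays (see the `PyArray_FROMANY(..., NPY_IN_ARRAY)` conversions in the diff) and writes into a caller-supplied `new_y` instead of building a list.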
Modified: trunk/scipy/interpolate/setup.py =================================================================== --- trunk/scipy/interpolate/setup.py 2008-09-04 20:22:25 UTC (rev 4688) +++ trunk/scipy/interpolate/setup.py 2008-09-05 12:50:12 UTC (rev 4689) @@ -21,6 +21,11 @@ libraries=['fitpack'], ) + config.add_extension('_interpolate', + sources=['src/_interpolate.cpp'], + include_dirs = ['src'], + depends = ['src/interpolate.h']) + config.add_data_dir('tests') return config Added: trunk/scipy/interpolate/src/_interpolate.cpp =================================================================== --- trunk/scipy/interpolate/src/_interpolate.cpp 2008-09-04 20:22:25 UTC (rev 4688) +++ trunk/scipy/interpolate/src/_interpolate.cpp 2008-09-05 12:50:12 UTC (rev 4689) @@ -0,0 +1,236 @@ +#include "Python.h" +#include + +#include "interpolate.h" +#include "numpy/arrayobject.h" + +using namespace std; + +extern "C" { + +static PyObject* linear_method(PyObject*self, PyObject* args, PyObject* kywds) +{ + static char *kwlist[] = {"x","y","new_x","new_y", NULL}; + PyObject *py_x, *py_y, *py_new_x, *py_new_y; + py_x = py_y = py_new_x = py_new_y = NULL; + PyObject *arr_x, *arr_y, *arr_new_x, *arr_new_y; + arr_x = arr_y = arr_new_x = arr_new_y = NULL; + + if(!PyArg_ParseTupleAndKeywords(args,kywds,"OOOO:linear_dddd",kwlist,&py_x, &py_y, &py_new_x, &py_new_y)) + return NULL; + arr_x = PyArray_FROMANY(py_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_x) { + PyErr_SetString(PyExc_ValueError, "x must be a 1-D array of floats"); + goto fail; + } + arr_y = PyArray_FROMANY(py_y, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_y) { + PyErr_SetString(PyExc_ValueError, "y must be a 1-D array of floats"); + goto fail; + } + arr_new_x = PyArray_FROMANY(py_new_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_new_x) { + PyErr_SetString(PyExc_ValueError, "new_x must be a 1-D array of floats"); + goto fail; + } + arr_new_y = PyArray_FROMANY(py_new_y, PyArray_DOUBLE, 1, 1, NPY_INOUT_ARRAY); + if 
(!arr_new_y) { + PyErr_SetString(PyExc_ValueError, "new_y must be a 1-D array of floats"); + goto fail; + } + + linear((double*)PyArray_DATA(arr_x), (double*)PyArray_DATA(arr_y), + PyArray_DIM(arr_x,0), (double*)PyArray_DATA(arr_new_x), + (double*)PyArray_DATA(arr_new_y), PyArray_DIM(arr_new_x,0)); + + Py_DECREF(arr_x); + Py_DECREF(arr_y); + Py_DECREF(arr_new_x); + Py_DECREF(arr_new_y); + + Py_RETURN_NONE; + +fail: + Py_XDECREF(arr_x); + Py_XDECREF(arr_y); + Py_XDECREF(arr_new_x); + Py_XDECREF(arr_new_y); + return NULL; +} + +static PyObject* loginterp_method(PyObject*self, PyObject* args, PyObject* kywds) +{ + static char *kwlist[] = {"x","y","new_x","new_y", NULL}; + PyObject *py_x, *py_y, *py_new_x, *py_new_y; + py_x = py_y = py_new_x = py_new_y = NULL; + PyObject *arr_x, *arr_y, *arr_new_x, *arr_new_y; + arr_x = arr_y = arr_new_x = arr_new_y = NULL; + + if(!PyArg_ParseTupleAndKeywords(args,kywds,"OOOO:loginterp_dddd",kwlist,&py_x, &py_y, &py_new_x, &py_new_y)) + return NULL; + arr_x = PyArray_FROMANY(py_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_x) { + PyErr_SetString(PyExc_ValueError, "x must be a 1-D array of floats"); + goto fail; + } + arr_y = PyArray_FROMANY(py_y, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_y) { + PyErr_SetString(PyExc_ValueError, "y must be a 1-D array of floats"); + goto fail; + } + arr_new_x = PyArray_FROMANY(py_new_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_new_x) { + PyErr_SetString(PyExc_ValueError, "new_x must be a 1-D array of floats"); + goto fail; + } + arr_new_y = PyArray_FROMANY(py_new_y, PyArray_DOUBLE, 1, 1, NPY_INOUT_ARRAY); + if (!arr_new_y) { + PyErr_SetString(PyExc_ValueError, "new_y must be a 1-D array of floats"); + goto fail; + } + + loginterp((double*)PyArray_DATA(arr_x), (double*)PyArray_DATA(arr_y), + PyArray_DIM(arr_x,0), (double*)PyArray_DATA(arr_new_x), + (double*)PyArray_DATA(arr_new_y), PyArray_DIM(arr_new_x,0)); + + Py_DECREF(arr_x); + Py_DECREF(arr_y); + Py_DECREF(arr_new_x); + 
Py_DECREF(arr_new_y); + + Py_RETURN_NONE; + +fail: + Py_XDECREF(arr_x); + Py_XDECREF(arr_y); + Py_XDECREF(arr_new_x); + Py_XDECREF(arr_new_y); + return NULL; +} + +static PyObject* window_average_method(PyObject*self, PyObject* args, PyObject* kywds) +{ + static char *kwlist[] = {"x","y","new_x","new_y", NULL}; + PyObject *py_x, *py_y, *py_new_x, *py_new_y; + py_x = py_y = py_new_x = py_new_y = NULL; + PyObject *arr_x, *arr_y, *arr_new_x, *arr_new_y; + arr_x = arr_y = arr_new_x = arr_new_y = NULL; + double width; + + if(!PyArg_ParseTupleAndKeywords(args,kywds,"OOOOd:loginterp_dddd",kwlist,&py_x, &py_y, &py_new_x, &py_new_y, &width)) + return NULL; + arr_x = PyArray_FROMANY(py_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_x) { + PyErr_SetString(PyExc_ValueError, "x must be a 1-D array of floats"); + goto fail; + } + arr_y = PyArray_FROMANY(py_y, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_y) { + PyErr_SetString(PyExc_ValueError, "y must be a 1-D array of floats"); + goto fail; + } + arr_new_x = PyArray_FROMANY(py_new_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_new_x) { + PyErr_SetString(PyExc_ValueError, "new_x must be a 1-D array of floats"); + goto fail; + } + arr_new_y = PyArray_FROMANY(py_new_y, PyArray_DOUBLE, 1, 1, NPY_INOUT_ARRAY); + if (!arr_new_y) { + PyErr_SetString(PyExc_ValueError, "new_y must be a 1-D array of floats"); + goto fail; + } + + window_average((double*)PyArray_DATA(arr_x), (double*)PyArray_DATA(arr_y), + PyArray_DIM(arr_x,0), (double*)PyArray_DATA(arr_new_x), + (double*)PyArray_DATA(arr_new_y), PyArray_DIM(arr_new_x,0), width); + + Py_DECREF(arr_x); + Py_DECREF(arr_y); + Py_DECREF(arr_new_x); + Py_DECREF(arr_new_y); + + Py_RETURN_NONE; + +fail: + Py_XDECREF(arr_x); + Py_XDECREF(arr_y); + Py_XDECREF(arr_new_x); + Py_XDECREF(arr_new_y); + return NULL; +} + +static PyObject* block_average_above_method(PyObject*self, PyObject* args, PyObject* kywds) +{ + static char *kwlist[] = {"x","y","new_x","new_y", NULL}; + PyObject *py_x, 
*py_y, *py_new_x, *py_new_y; + py_x = py_y = py_new_x = py_new_y = NULL; + PyObject *arr_x, *arr_y, *arr_new_x, *arr_new_y; + arr_x = arr_y = arr_new_x = arr_new_y = NULL; + + if(!PyArg_ParseTupleAndKeywords(args,kywds,"OOOO:loginterp_dddd",kwlist,&py_x, &py_y, &py_new_x, &py_new_y)) + return NULL; + arr_x = PyArray_FROMANY(py_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_x) { + PyErr_SetString(PyExc_ValueError, "x must be a 1-D array of floats"); + goto fail; + } + arr_y = PyArray_FROMANY(py_y, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_y) { + PyErr_SetString(PyExc_ValueError, "y must be a 1-D array of floats"); + goto fail; + } + arr_new_x = PyArray_FROMANY(py_new_x, PyArray_DOUBLE, 1, 1, NPY_IN_ARRAY); + if (!arr_new_x) { + PyErr_SetString(PyExc_ValueError, "new_x must be a 1-D array of floats"); + goto fail; + } + arr_new_y = PyArray_FROMANY(py_new_y, PyArray_DOUBLE, 1, 1, NPY_INOUT_ARRAY); + if (!arr_new_y) { + PyErr_SetString(PyExc_ValueError, "new_y must be a 1-D array of floats"); + goto fail; + } + + block_average_above((double*)PyArray_DATA(arr_x), (double*)PyArray_DATA(arr_y), + PyArray_DIM(arr_x,0), (double*)PyArray_DATA(arr_new_x), + (double*)PyArray_DATA(arr_new_y), PyArray_DIM(arr_new_x,0)); + + Py_DECREF(arr_x); + Py_DECREF(arr_y); + Py_DECREF(arr_new_x); + Py_DECREF(arr_new_y); + + Py_RETURN_NONE; + +fail: + Py_XDECREF(arr_x); + Py_XDECREF(arr_y); + Py_XDECREF(arr_new_x); + Py_XDECREF(arr_new_y); + return NULL; +} + +static PyMethodDef interpolate_methods[] = { + {"linear_dddd", (PyCFunction)linear_method, METH_VARARGS|METH_KEYWORDS, + ""}, + {"loginterp_dddd", (PyCFunction)loginterp_method, METH_VARARGS|METH_KEYWORDS, + ""}, + {"window_average_ddddd", (PyCFunction)window_average_method, METH_VARARGS|METH_KEYWORDS, + ""}, + {"block_average_above_dddd", (PyCFunction)block_average_above_method, METH_VARARGS|METH_KEYWORDS, + ""}, + {NULL, NULL, 0, NULL} +}; + + +PyMODINIT_FUNC init_interpolate(void) +{ + PyObject* m; + m = 
Py_InitModule3("_interpolate", interpolate_methods, + "A few interpolation routines.\n" + ); + if (m == NULL) + return; + import_array(); +} + +} // extern "C" Added: trunk/scipy/interpolate/src/interpolate.h =================================================================== --- trunk/scipy/interpolate/src/interpolate.h 2008-09-04 20:22:25 UTC (rev 4688) +++ trunk/scipy/interpolate/src/interpolate.h 2008-09-05 12:50:12 UTC (rev 4689) @@ -0,0 +1,205 @@ +#include +#include +#include +#include + +template +void linear(T* x_vec, T* y_vec, int len, + T* new_x_vec, T* new_y_vec, int new_len) +{ + for (int i=0;i=x_vec[len-1]) + index = len-2; + else + { + T* which = std::lower_bound(x_vec, x_vec+len, new_x); + index = which - x_vec-1; + } + + if(new_x == x_vec[index]) + { + // exact value + new_y_vec[i] = y_vec[index]; + } + else + { + //interpolate + double x_lo = x_vec[index]; + double x_hi = x_vec[index+1]; + double y_lo = y_vec[index]; + double y_hi = y_vec[index+1]; + double slope = (y_hi-y_lo)/(x_hi-x_lo); + new_y_vec[i] = slope * (new_x-x_lo) + y_lo; + } + } +} + +template +void loginterp(T* x_vec, T* y_vec, int len, + T* new_x_vec, T* new_y_vec, int new_len) +{ + for (int i=0;i=x_vec[len-1]) + index = len-2; + else + { + T* which = std::lower_bound(x_vec, x_vec+len, new_x); + index = which - x_vec-1; + } + + if(new_x == x_vec[index]) + { + // exact value + new_y_vec[i] = y_vec[index]; + } + else + { + //interpolate + double x_lo = x_vec[index]; + double x_hi = x_vec[index+1]; + double y_lo = log10(y_vec[index]); + double y_hi = log10(y_vec[index+1]); + double slope = (y_hi-y_lo)/(x_hi-x_lo); + new_y_vec[i] = pow(10.0, (slope * (new_x-x_lo) + y_lo)); + } + } +} + +template +int block_average_above(T* x_vec, T* y_vec, int len, + T* new_x_vec, T* new_y_vec, int new_len) +{ + int bad_index = -1; + int start_index = 0; + T last_y = 0.0; + T thickness = 0.0; + + for(int i=0;i x_vec[len-1])) + { + bad_index = i; + break; + } + else if (new_x == x_vec[0]) + { + // for 
the first sample, just return the cooresponding y value + new_y_vec[i] = y_vec[0]; + } + else + { + T* which = std::lower_bound(x_vec, x_vec+len, new_x); + int index = which - x_vec-1; + + // calculate weighted average + + // Start off with "residue" from last interval in case last x + // was between to samples. + T weighted_y_sum = last_y * thickness; + T thickness_sum = thickness; + for(int j=start_index; j<=index; j++) + { + if (x_vec[j+1] < new_x) + thickness = x_vec[j+1] - x_vec[j]; + else + thickness = new_x -x_vec[j]; + weighted_y_sum += y_vec[j] * thickness; + thickness_sum += thickness; + } + new_y_vec[i] = weighted_y_sum/thickness_sum; + + // Store the thickness between the x value and the next sample + // to add to the next weighted average. + last_y = y_vec[index]; + thickness = x_vec[index+1] - new_x; + + // start next weighted average at next sample + start_index =index+1; + } + } + return bad_index; +} + +template +int window_average(T* x_vec, T* y_vec, int len, + T* new_x_vec, T* new_y_vec, int new_len, + T width) +{ + for(int i=0;i= len) + { + //top = x_vec[len-1]; + top_index = len-1; + } + //std::cout << std::endl; + //std::cout << bottom_index << " " << top_index << std::endl; + //std::cout << bottom << " " << top << std::endl; + // calculate weighted average + T thickness =0.0; + T thickness_sum =0.0; + T weighted_y_sum =0.0; + for(int j=bottom_index; j < top_index; j++) + { + thickness = x_vec[j+1] - bottom; + weighted_y_sum += y_vec[j] * thickness; + thickness_sum += thickness; + bottom = x_vec[j+1]; + /* + std::cout << "iter: " << j - bottom_index << " " << + "index: " << j << " " << + "bottom: " << bottom << " " << + "x+1: " << x_vec[j+1] << " " << + "x: " << x_vec[j] << " " << + "y: " << y_vec[j] << " " << + "weighted_sum: " << weighted_y_sum << + "thickness: " << thickness << " " << + "thickness_sum: " << thickness_sum << std::endl; + */ + //std::cout << x_vec[j] << " "; + //std::cout << thickness << " "; + } + + // last element + 
thickness = top - bottom; + weighted_y_sum += y_vec[top_index] * thickness; + thickness_sum += thickness; + /* + std::cout << "iter: last" << " " << + "index: " << top_index << " " << + "x: " << x_vec[top_index] << " " << + "y: " << y_vec[top_index] << " " << + "weighted_sum: " << weighted_y_sum << + "thickness: " << thickness << " " << + "thickness_sum: " << thickness_sum << std::endl; + */ + //std::cout << x_vec[top_index] << " " << thickness_sum << std::endl; + new_y_vec[i] = weighted_y_sum/thickness_sum; + } + return -1; +} From scipy-svn at scipy.org Fri Sep 5 08:55:42 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Fri, 5 Sep 2008 07:55:42 -0500 (CDT) Subject: [Scipy-svn] r4690 - in trunk/scipy/interpolate: . tests Message-ID: <20080905125542.BD1AA39C11D@scipy.org> Author: oliphant Date: 2008-09-05 07:55:38 -0500 (Fri, 05 Sep 2008) New Revision: 4690 Added: trunk/scipy/interpolate/interpolate_wrapper.py trunk/scipy/interpolate/tests/test_interpolate_wrapper.py Log: Add interpolate wrapper and tests. Added: trunk/scipy/interpolate/interpolate_wrapper.py =================================================================== --- trunk/scipy/interpolate/interpolate_wrapper.py 2008-09-05 12:50:12 UTC (rev 4689) +++ trunk/scipy/interpolate/interpolate_wrapper.py 2008-09-05 12:55:38 UTC (rev 4690) @@ -0,0 +1,138 @@ +""" helper_funcs.py. + scavenged from enthought,interpolate +""" + +import numpy as np +import sys +import _interpolate # C extension. Does all the real work. + +def atleast_1d_and_contiguous(ary, dtype = np.float64): + return np.atleast_1d( np.ascontiguousarray(ary, dtype) ) + +def nearest(x, y, new_x): + """ Rounds each new_x[i] to the closest value in x + and returns corresponding y. 
+ """ + shifted_x = np.concatenate(( np.array([x[0]-1]) , x[0:-1] )) + + midpoints_of_x = atleast_1d_and_contiguous( .5*(x + shifted_x) ) + new_x = atleast_1d_and_contiguous(new_x) + + TINY = 1e-10 + indices = np.searchsorted(midpoints_of_x, new_x+TINY)-1 + indices = np.atleast_1d(np.clip(indices, 0, np.Inf).astype(np.int)) + new_y = np.take(y, indices, axis=-1) + + return new_y + + + +def linear(x, y, new_x): + """ Linearly interpolates values in new_x based on the values in x and y + + Parameters + ---------- + x + 1-D array + y + 1-D or 2-D array + new_x + 1-D array + """ + x = atleast_1d_and_contiguous(x, np.float64) + y = atleast_1d_and_contiguous(y, np.float64) + new_x = atleast_1d_and_contiguous(new_x, np.float64) + + assert len(y.shape) < 3, "function only works with 1D or 2D arrays" + if len(y.shape) == 2: + new_y = np.zeros((y.shape[0], len(new_x)), np.float64) + for i in range(len(new_y)): # for each row + _interpolate.linear_dddd(x, y[i], new_x, new_y[i]) + else: + new_y = np.zeros(len(new_x), np.float64) + _interpolate.linear_dddd(x, y, new_x, new_y) + + return new_y + +def logarithmic(x, y, new_x): + """ Linearly interpolates values in new_x based in the log space of y. 
+ + Parameters + ---------- + x + 1-D array + y + 1-D or 2-D array + new_x + 1-D array + """ + x = atleast_1d_and_contiguous(x, np.float64) + y = atleast_1d_and_contiguous(y, np.float64) + new_x = atleast_1d_and_contiguous(new_x, np.float64) + + assert len(y.shape) < 3, "function only works with 1D or 2D arrays" + if len(y.shape) == 2: + new_y = np.zeros((y.shape[0], len(new_x)), np.float64) + for i in range(len(new_y)): + _interpolate.loginterp_dddd(x, y[i], new_x, new_y[i]) + else: + new_y = np.zeros(len(new_x), np.float64) + _interpolate.loginterp_dddd(x, y, new_x, new_y) + + return new_y + +def block_average_above(x, y, new_x): + """ Linearly interpolates values in new_x based on the values in x and y + + Parameters + ---------- + x + 1-D array + y + 1-D or 2-D array + new_x + 1-D array + """ + bad_index = None + x = atleast_1d_and_contiguous(x, np.float64) + y = atleast_1d_and_contiguous(y, np.float64) + new_x = atleast_1d_and_contiguous(new_x, np.float64) + + assert len(y.shape) < 3, "function only works with 1D or 2D arrays" + if len(y.shape) == 2: + new_y = np.zeros((y.shape[0], len(new_x)), np.float64) + for i in range(len(new_y)): + bad_index = _interpolate.block_averave_above_dddd(x, y[i], + new_x, new_y[i]) + if bad_index is not None: + break + else: + new_y = np.zeros(len(new_x), np.float64) + bad_index = _interpolate.block_average_above_dddd(x, y, new_x, new_y) + + if bad_index is not None: + msg = "block_average_above cannot extrapolate and new_x[%d]=%f "\ + "is out of the x range (%f, %f)" % \ + (bad_index, new_x[bad_index], x[0], x[-1]) + raise ValueError, msg + + return new_y + +def block(x, y, new_x): + """ Essentially a step function. + + For each new_x[i], finds largest j such that + x[j] < new_x[j], and returns y[j]. 
+ """ + # find index of values in x that preceed values in x + # This code is a little strange -- we really want a routine that + # returns the index of values where x[j] < x[index] + TINY = 1e-10 + indices = np.searchsorted(x, new_x+TINY)-1 + + # If the value is at the front of the list, it'll have -1. + # In this case, we will use the first (0), element in the array. + # take requires the index array to be an Int + indices = np.atleast_1d(np.clip(indices, 0, np.Inf).astype(np.int)) + new_y = np.take(y, indices, axis=-1) + return new_y \ No newline at end of file Added: trunk/scipy/interpolate/tests/test_interpolate_wrapper.py =================================================================== --- trunk/scipy/interpolate/tests/test_interpolate_wrapper.py 2008-09-05 12:50:12 UTC (rev 4689) +++ trunk/scipy/interpolate/tests/test_interpolate_wrapper.py 2008-09-05 12:55:38 UTC (rev 4690) @@ -0,0 +1,86 @@ +""" module to test interpolate_wrapper.py +""" + +# Unit Test +import unittest +import time +from numpy import arange, allclose, ones, NaN, isnan +import numpy as np + +# functionality to be tested +from scipy.interpolate.interpolate_wrapper import atleast_1d_and_contiguous, \ + linear, logarithmic, block_average_above, block, nearest + +class Test(unittest.TestCase): + + def assertAllclose(self, x, y, rtol=1.0e-5): + for i, xi in enumerate(x): + self.assert_(allclose(xi, y[i], rtol) or (isnan(xi) and isnan(y[i]))) + + def test_nearest(self): + N = 5 + x = arange(N) + y = arange(N) + self.assertAllclose(y, nearest(x, y, x+.1)) + self.assertAllclose(y, nearest(x, y, x-.1)) + + def test_linear(self): + N = 3000. + x = arange(N) + y = arange(N) + new_x = arange(N)+0.5 + t1 = time.clock() + new_y = linear(x, y, new_x) + t2 = time.clock() + #print "time for linear interpolation with N = %i:" % N, t2 - t1 + + self.assertAllclose(new_y[:5], [0.5, 1.5, 2.5, 3.5, 4.5]) + + def test_block_average_above(self): + N = 3000. 
+ x = arange(N) + y = arange(N) + + new_x = arange(N/2)*2 + t1 = time.clock() + new_y = block_average_above(x, y, new_x) + t2 = time.clock() + #print "time for block_avg_above interpolation with N = %i:" % N, t2 - t1 + self.assertAllclose(new_y[:5], [0.0, 0.5, 2.5, 4.5, 6.5]) + + def test_linear2(self): + N = 3000. + x = arange(N) + y = ones((100,N)) * arange(N) + new_x = arange(N)+0.5 + t1 = time.clock() + new_y = linear(x, y, new_x) + t2 = time.clock() + #print "time for 2D linear interpolation with N = %i:" % N, t2 - t1 + self.assertAllclose(new_y[:5,:5], + [[ 0.5, 1.5, 2.5, 3.5, 4.5], + [ 0.5, 1.5, 2.5, 3.5, 4.5], + [ 0.5, 1.5, 2.5, 3.5, 4.5], + [ 0.5, 1.5, 2.5, 3.5, 4.5], + [ 0.5, 1.5, 2.5, 3.5, 4.5]]) + + def test_logarithmic(self): + N = 4000. + x = arange(N) + y = arange(N) + new_x = arange(N)+0.5 + t1 = time.clock() + new_y = logarithmic(x, y, new_x) + t2 = time.clock() + #print "time for logarithmic interpolation with N = %i:" % N, t2 - t1 + correct_y = [np.NaN, 1.41421356, 2.44948974, 3.46410162, 4.47213595] + self.assertAllclose(new_y[:5], correct_y) + + def runTest(self): + test_list = [name for name in dir(self) if name.find('test_')==0] + for test_name in test_list: + exec("self.%s()" % test_name) + +if __name__ == '__main__': + unittest.main() + From scipy-svn at scipy.org Fri Sep 5 10:18:43 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Fri, 5 Sep 2008 09:18:43 -0500 (CDT) Subject: [Scipy-svn] r4691 - in trunk/scipy/sparse: . 
tests Message-ID: <20080905141843.7B9AC39C11D@scipy.org> Author: wnbell Date: 2008-09-05 09:18:40 -0500 (Fri, 05 Sep 2008) New Revision: 4691 Modified: trunk/scipy/sparse/lil.py trunk/scipy/sparse/tests/test_base.py Log: fixed lil_matrix fancy indexing problem pointed out by RC Modified: trunk/scipy/sparse/lil.py =================================================================== --- trunk/scipy/sparse/lil.py 2008-09-05 12:55:38 UTC (rev 4690) +++ trunk/scipy/sparse/lil.py 2008-09-05 14:18:40 UTC (rev 4691) @@ -294,18 +294,20 @@ elif not isinstance(x, spmatrix): x = lil_matrix(x) - if isspmatrix(x) and index == (slice(None), slice(None)): - # self[:,:] = other_sparse - x = lil_matrix(x) - self.rows = x.rows - self.data = x.data - return - try: i, j = index except (ValueError, TypeError): raise IndexError, "invalid index" + if isspmatrix(x): + if (isinstance(i, slice) and (i == slice(None))) and \ + (isinstance(j, slice) and (j == slice(None))): + # self[:,:] = other_sparse + x = lil_matrix(x) + self.rows = x.rows + self.data = x.data + return + if isscalar(i): row = self.rows[i] data = self.data[i] Modified: trunk/scipy/sparse/tests/test_base.py =================================================================== --- trunk/scipy/sparse/tests/test_base.py 2008-09-05 12:55:38 UTC (rev 4690) +++ trunk/scipy/sparse/tests/test_base.py 2008-09-05 14:18:40 UTC (rev 4691) @@ -1260,6 +1260,12 @@ D = lil_matrix(C) assert_array_equal(C.A, D.A) + def test_fancy_indexing(self): + M = arange(25).reshape(5,5) + A = lil_matrix( M ) + + assert_equal(A[array([1,2,3]),2:3].todense(), M[array([1,2,3]),2:3]) + def test_point_wise_multiply(self): l = lil_matrix((4,3)) l[0,0] = 1 From scipy-svn at scipy.org Fri Sep 5 14:05:58 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Fri, 5 Sep 2008 13:05:58 -0500 (CDT) Subject: [Scipy-svn] r4692 - in trunk/scipy/cluster: . 
src Message-ID: <20080905180558.061C139C02D@scipy.org> Author: damian.eads Date: 2008-09-05 13:05:54 -0500 (Fri, 05 Sep 2008) New Revision: 4692 Modified: trunk/scipy/cluster/distance.py trunk/scipy/cluster/src/distance.c trunk/scipy/cluster/src/distance.h Log: Added some code for computing distances between two sets of vectors, instead of pairwise distances. Modified: trunk/scipy/cluster/distance.py =================================================================== --- trunk/scipy/cluster/distance.py 2008-09-05 14:18:40 UTC (rev 4691) +++ trunk/scipy/cluster/distance.py 2008-09-05 18:05:54 UTC (rev 4692) @@ -9,8 +9,12 @@ +------------------+-------------------------------------------------+ |*Function* | *Description* | +------------------+-------------------------------------------------+ -|pdist | computes distances between observation pairs. | +|pdist | pairwise distances between observation | +| | vectors. | +------------------+-------------------------------------------------+ +|cdist | distances between between two collections of | +| | observation vectors. | ++------------------+-------------------------------------------------+ |squareform | converts a square distance matrix to a | | | condensed one and vice versa. 
| +------------------+-------------------------------------------------+ Modified: trunk/scipy/cluster/src/distance.c =================================================================== --- trunk/scipy/cluster/src/distance.c 2008-09-05 14:18:40 UTC (rev 4691) +++ trunk/scipy/cluster/src/distance.c 2008-09-05 18:05:54 UTC (rev 4692) @@ -618,3 +618,301 @@ } } } + + +/** cdist */ + +void cdist_euclidean(const double *XA, + const double *XB, double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = euclidean_distance(u, v, n); + } + } +} + +void cdist_mahalanobis(const double *XA, + const double *XB, + const double *covinv, + double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + double *dimbuf1, *dimbuf2; + dimbuf1 = (double*)malloc(sizeof(double) * 2 * n); + dimbuf2 = dimbuf1 + n; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = mahalanobis_distance(u, v, covinv, dimbuf1, dimbuf2, n); + } + } + dimbuf2 = 0; + free(dimbuf1); +} + +void cdist_bray_curtis(const double *XA, const double *XB, + double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = bray_curtis_distance(u, v, n); + } + } +} + +void cdist_canberra(const double *XA, + const double *XB, double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = canberra_distance(u, v, n); + } + } +} + +void cdist_hamming(const double *XA, + const double *XB, double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; 
j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = hamming_distance(u, v, n); + } + } +} + +void cdist_hamming_bool(const char *XA, + const char *XB, const char *X, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = hamming_distance_bool(u, v, n); + } + } +} + +void cdist_jaccard(const double *XA, + const double *XB, double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = jaccard_distance(u, v, n); + } + } +} + +void cdist_jaccard_bool(const char *XA, + const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = jaccard_distance_bool(u, v, n); + } + } +} + + +void cdist_chebyshev(const double *XA, + const double *XB, double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = chebyshev_distance(u, v, n); + } + } +} + +void cdist_cosine(const double *XA, + const double *XB, double *dm, int mA, int mB, int n, + const double *normsA, const double *normsB) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = cosine_distance(u, v, n, normsA[i], normsB[j]); + } + } +} + +void cdist_seuclidean(const double *XA, + const double *XB, + const double *var, + double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = 
seuclidean_distance(var, u, v, n); + } + } +} + +void cdist_city_block(const double *XA, const double *XB, double *dm, int mA, int mB, int n) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = city_block_distance(u, v, n); + } + } +} + +void cdist_minkowski(const double *XA, const double *XB, double *dm, int mA, int mB, int n, double p) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = minkowski_distance(u, v, n, p); + } + } +} + +void cdist_yule_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = yule_distance_bool(u, v, n); + } + } +} + +void cdist_matching_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = matching_distance_bool(u, v, n); + } + } +} + +void cdist_dice_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = dice_distance_bool(u, v, n); + } + } +} + +void cdist_rogerstanimoto_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = rogerstanimoto_distance_bool(u, v, n); + } + } +} + +void cdist_russellrao_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + 
const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = russellrao_distance_bool(u, v, n); + } + } +} + +void cdist_kulsinski_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = kulsinski_distance_bool(u, v, n); + } + } +} + +void cdist_sokalsneath_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = sokalsneath_distance_bool(u, v, n); + } + } +} + +void cdist_sokalmichener_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { + int i, j; + const char *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = sokalmichener_distance_bool(u, v, n); + } + } +} Modified: trunk/scipy/cluster/src/distance.h =================================================================== --- trunk/scipy/cluster/src/distance.h 2008-09-05 14:18:40 UTC (rev 4691) +++ trunk/scipy/cluster/src/distance.h 2008-09-05 18:05:54 UTC (rev 4692) @@ -63,4 +63,51 @@ void pdist_sokalmichener_bool(const char *X, double *dm, int m, int n); void pdist_sokalsneath_bool(const char *X, double *dm, int m, int n); +void cdist_euclidean(const double *XA, const double *XB, double *dm, int mA, int mB, int n); +void cdist_mahalanobis(const double *XA, const double *XB, + const double *covinv, + double *dm, int mA, int mB, int n); +void cdist_bray_curtis(const double *XA, const double *XB, + double *dm, int mA, int mB, int n); +void cdist_canberra(const double *XA, + const double *XB, double *dm, int mA, int mB, int n); +void cdist_hamming(const double *XA, + 
const double *XB, double *dm, int mA, int mB, int n); +void cdist_hamming_bool(const char *XA, + const char *XB, const char *X, double *dm, + int mA, int mB, int n); +void cdist_jaccard(const double *XA, + const double *XB, double *dm, int mA, int mB, int n); +void cdist_jaccard_bool(const char *XA, + const char *XB, double *dm, int mA, int mB, int n); +void cdist_chebyshev(const double *XA, + const double *XB, double *dm, int mA, int mB, int n); +void cdist_cosine(const double *XA, + const double *XB, double *dm, int mA, int mB, int n, + const double *normsA, const double *normsB); +void cdist_seuclidean(const double *XA, + const double *XB, + const double *var, + double *dm, int mA, int mB, int n); +void cdist_city_block(const double *XA, const double *XB, double *dm, + int mA, int mB, int n); +void cdist_minkowski(const double *XA, const double *XB, double *dm, + int mA, int mB, int n, double p); +void cdist_yule_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_matching_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_dice_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_rogerstanimoto_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_russellrao_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_kulsinski_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_sokalsneath_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); +void cdist_sokalmichener_bool(const char *XA, const char *XB, double *dm, + int mA, int mB, int n); + #endif From scipy-svn at scipy.org Sun Sep 7 03:46:32 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Sun, 7 Sep 2008 02:46:32 -0500 (CDT) Subject: [Scipy-svn] r4693 - in trunk/scipy/io/matlab: . 
tests Message-ID: <20080907074632.177E639C27D@scipy.org> Author: matthew.brett at gmail.com Date: 2008-09-07 02:46:29 -0500 (Sun, 07 Sep 2008) New Revision: 4693 Modified: trunk/scipy/io/matlab/__init__.py trunk/scipy/io/matlab/mio5.py trunk/scipy/io/matlab/tests/test_mio.py Log: Add modified version of Zachary Pincus patch to support small element format saving Modified: trunk/scipy/io/matlab/__init__.py =================================================================== --- trunk/scipy/io/matlab/__init__.py 2008-09-05 18:05:54 UTC (rev 4692) +++ trunk/scipy/io/matlab/__init__.py 2008-09-07 07:46:29 UTC (rev 4693) @@ -0,0 +1,5 @@ +# Matlab file read and write utilities +from mio import loadmat, savemat + +from numpy.testing import Tester +test = Tester().test Modified: trunk/scipy/io/matlab/mio5.py =================================================================== --- trunk/scipy/io/matlab/mio5.py 2008-09-05 18:05:54 UTC (rev 4692) +++ trunk/scipy/io/matlab/mio5.py 2008-09-07 07:46:29 UTC (rev 4693) @@ -87,6 +87,7 @@ ('version', 'u2'), ('endian_test', 'S2')], 'tag_full': [('mdtype', 'u4'), ('byte_count', 'u4')], + 'tag_smalldata':[('byte_count_mdtype', 'u4'), ('data', 'S4')], 'array_flags': [('data_type', 'u4'), ('byte_count', 'u4'), ('flags_class','u4'), @@ -193,23 +194,28 @@ dtype=self.dtypes['tag_full'], buffer=raw_tag) mdtype = tag['mdtype'].item() - + # Byte count if this is small data element byte_count = mdtype >> 16 if byte_count: # small data element format if byte_count > 4: raise ValueError, 'Too many bytes for sde format' mdtype = mdtype & 0xFFFF - dt = self.dtypes[mdtype] - el_count = byte_count // dt.itemsize - return np.ndarray(shape=(el_count,), - dtype=dt, - buffer=raw_tag[4:]) - - byte_count = tag['byte_count'].item() - if mdtype == miMATRIX: - return self.current_getter(byte_count).get_array() - elif mdtype in self.codecs: # encoded char data + if mdtype == miMATRIX: + raise TypeError('Cannot have matrix in SDE format') + raw_str = 
raw_tag[4:byte_count+4] + else: # regular element + byte_count = tag['byte_count'].item() + # Deal with miMATRIX type (cannot pass byte string) + if mdtype == miMATRIX: + return self.current_getter(byte_count).get_array() + # All other types can be read from string raw_str = self.mat_stream.read(byte_count) + # Seek to next 64-bit boundary + mod8 = byte_count % 8 + if mod8: + self.mat_stream.seek(8 - mod8, 1) + + if mdtype in self.codecs: # encoded char data codec = self.codecs[mdtype] if not codec: raise TypeError, 'Do not support encoding %d' % mdtype @@ -219,15 +225,10 @@ el_count = byte_count // dt.itemsize el = np.ndarray(shape=(el_count,), dtype=dt, - buffer=self.mat_stream.read(byte_count)) + buffer=raw_str) if copy: el = el.copy() - # Seek to next 64-bit boundary - mod8 = byte_count % 8 - if mod8: - self.mat_stream.seek(8 - mod8, 1) - return el def matrix_getter_factory(self): @@ -572,18 +573,30 @@ def write_element(self, arr, mdtype=None): # write tag, data - tag = np.zeros((), mdtypes_template['tag_full']) if mdtype is None: - tag['mdtype'] = np_to_mtypes[arr.dtype.str[1:]] + mdtype = np_to_mtypes[arr.dtype.str[1:]] + byte_count = arr.size*arr.itemsize + if byte_count <= 4: + self.write_smalldata_element(arr, mdtype, byte_count) else: - tag['mdtype'] = mdtype + self.write_regular_element(arr, mdtype, byte_count) - tag['byte_count'] = arr.size*arr.itemsize + def write_smalldata_element(self, arr, mdtype, byte_count): + # write tag with embedded data + tag = np.zeros((), mdtypes_template['tag_smalldata']) + tag['byte_count_mdtype'] = (byte_count << 16) + mdtype + # if arr.tostring is < 4, the element will be zero-padded as needed. 
+ tag['data'] = arr.tostring(order='F') + self.write_dtype(tag) + + def write_regular_element(self, arr, mdtype, byte_count): + # write tag, data + tag = np.zeros((), mdtypes_template['tag_full']) + tag['mdtype'] = mdtype + tag['byte_count'] = byte_count padding = (8 - tag['byte_count']) % 8 - self.write_dtype(tag) self.write_bytes(arr) - # pad to next 64-bit boundary self.write_bytes(np.zeros((padding,),'u1')) @@ -595,7 +608,7 @@ ''' Write header for given data options mclass - mat5 matrix class is_global - True if matrix is global - is_complex - True is matrix is complex + is_complex - True if matrix is complex is_logical - True if matrix is logical nzmax - max non zero elements for sparse arrays ''' Modified: trunk/scipy/io/matlab/tests/test_mio.py =================================================================== --- trunk/scipy/io/matlab/tests/test_mio.py 2008-09-05 18:05:54 UTC (rev 4692) +++ trunk/scipy/io/matlab/tests/test_mio.py 2008-09-07 07:46:29 UTC (rev 4693) @@ -267,3 +267,5 @@ assert_array_almost_equal(actual['x'].todense(), expected['x'].todense()) + + From scipy-svn at scipy.org Sun Sep 7 15:28:15 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Sun, 7 Sep 2008 14:28:15 -0500 (CDT) Subject: [Scipy-svn] r4694 - in trunk/scipy/cluster: . src tests Message-ID: <20080907192815.8885039C153@scipy.org> Author: damian.eads Date: 2008-09-07 14:28:06 -0500 (Sun, 07 Sep 2008) New Revision: 4694 Added: trunk/scipy/cluster/tests/cdist-X1.txt trunk/scipy/cluster/tests/cdist-X2.txt Modified: trunk/scipy/cluster/distance.py trunk/scipy/cluster/src/distance.c trunk/scipy/cluster/src/distance.h trunk/scipy/cluster/src/distance_wrap.c trunk/scipy/cluster/tests/test_distance.py Log: Added cdist function for computing distances between two collections of vectors. Added tests for the cdist function. 
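The cdist_* kernels added in this commit all share one double-loop shape: for each row u of XA and each row v of XB, write a single distance into the mA-by-mB output matrix dm. A minimal numpy sketch of that pattern for the Euclidean case (pure Python, illustrative only — this is not the scipy.cluster.distance API, just the loop the C code implements):

```python
import numpy as np

def cdist_euclidean_sketch(XA, XB):
    # Mirrors the C loop: dm[i, j] = euclidean(XA[i], XB[j]).
    mA, n = XA.shape
    mB = XB.shape[0]
    dm = np.empty((mA, mB))
    for i in range(mA):
        for j in range(mB):
            diff = XA[i] - XB[j]
            dm[i, j] = np.sqrt(np.dot(diff, diff))
    return dm

XA = np.array([[0.0, 0.0], [1.0, 0.0]])
XB = np.array([[0.0, 3.0], [4.0, 0.0], [1.0, 0.0]])
print(cdist_euclidean_sketch(XA, XB))
```

Unlike pdist, which returns a condensed vector over one collection, the result here is a full rectangular matrix because XA[i] vs XB[j] and XB[j] vs XA[i] live in different matrices.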
Modified: trunk/scipy/cluster/distance.py =================================================================== --- trunk/scipy/cluster/distance.py 2008-09-07 07:46:29 UTC (rev 4693) +++ trunk/scipy/cluster/distance.py 2008-09-07 19:28:06 UTC (rev 4694) @@ -796,7 +796,7 @@ def pdist(X, metric='euclidean', p=2, V=None, VI=None): """ - Computes the distance between m original observations in + Computes the pairwise distances between m original observations in n-dimensional space. Returns a condensed distance matrix Y. For each :math:`$i$` and :math:`$j$` (where :math:`$i #include +extern PyObject *cdist_euclidean_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_euclidean(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_canberra_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_canberra(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_bray_curtis_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + 
XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_bray_curtis(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + + +extern PyObject *cdist_mahalanobis_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *covinv_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + const double *covinv; + if (!PyArg_ParseTuple(args, "O!O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &covinv_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + covinv = (const double*)covinv_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_mahalanobis(XA, XB, covinv, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + + +extern PyObject *cdist_chebyshev_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_chebyshev(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + + +extern PyObject *cdist_cosine_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_, *normsA_, *normsB_; + int mA, mB, n; + double *dm; + const double *XA, *XB, *normsA, *normsB; + if (!PyArg_ParseTuple(args, "O!O!O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_, + &PyArray_Type, &normsA_, + &PyArray_Type, &normsB_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + normsA = (const 
double*)normsA_->data; + normsB = (const double*)normsB_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_cosine(XA, XB, dm, mA, mB, n, normsA, normsB); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_seuclidean_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_, *var_; + int mA, mB, n; + double *dm; + const double *XA, *XB, *var; + if (!PyArg_ParseTuple(args, "O!O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &var_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + var = (double*)var_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_seuclidean(XA, XB, var, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_city_block_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_city_block(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_hamming_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_hamming(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern 
PyObject *cdist_hamming_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_hamming_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_jaccard_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_jaccard(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_jaccard_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_jaccard_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue("d", 0.0); +} + +extern PyObject *cdist_minkowski_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const double *XA, *XB; + double p; + if (!PyArg_ParseTuple(args, "O!O!O!d", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_, + &p)) { + return 0; + } + else { + XA = (const 
double*)XA_->data; + XB = (const double*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_minkowski(XA, XB, dm, mA, mB, n, p); + } + return Py_BuildValue("d", 0.0); +} + + +extern PyObject *cdist_yule_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_yule_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_matching_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_matching_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_dice_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_dice_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_rogerstanimoto_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject 
*XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_rogerstanimoto_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_russellrao_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_russellrao_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_kulsinski_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_kulsinski_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_sokalmichener_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = 
XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_sokalmichener_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +extern PyObject *cdist_sokalsneath_bool_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_; + int mA, mB, n; + double *dm; + const char *XA, *XB; + if (!PyArg_ParseTuple(args, "O!O!O!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_)) { + return 0; + } + else { + XA = (const char*)XA_->data; + XB = (const char*)XB_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + + cdist_sokalsneath_bool(XA, XB, dm, mA, mB, n); + } + return Py_BuildValue(""); +} + +/***************************** pdist ***/ + extern PyObject *pdist_euclidean_wrap(PyObject *self, PyObject *args) { PyArrayObject *X_, *dm_; int m, n; @@ -533,6 +1033,27 @@ static PyMethodDef _distanceWrapMethods[] = { + {"cdist_bray_curtis_wrap", cdist_bray_curtis_wrap, METH_VARARGS}, + {"cdist_canberra_wrap", cdist_canberra_wrap, METH_VARARGS}, + {"cdist_chebyshev_wrap", cdist_chebyshev_wrap, METH_VARARGS}, + {"cdist_city_block_wrap", cdist_city_block_wrap, METH_VARARGS}, + {"cdist_cosine_wrap", cdist_cosine_wrap, METH_VARARGS}, + {"cdist_dice_bool_wrap", cdist_dice_bool_wrap, METH_VARARGS}, + {"cdist_euclidean_wrap", cdist_euclidean_wrap, METH_VARARGS}, + {"cdist_hamming_wrap", cdist_hamming_wrap, METH_VARARGS}, + {"cdist_hamming_bool_wrap", cdist_hamming_bool_wrap, METH_VARARGS}, + {"cdist_jaccard_wrap", cdist_jaccard_wrap, METH_VARARGS}, + {"cdist_jaccard_bool_wrap", cdist_jaccard_bool_wrap, METH_VARARGS}, + {"cdist_kulsinski_bool_wrap", cdist_kulsinski_bool_wrap, METH_VARARGS}, + {"cdist_mahalanobis_wrap", cdist_mahalanobis_wrap, METH_VARARGS}, + {"cdist_matching_bool_wrap", cdist_matching_bool_wrap, METH_VARARGS}, + {"cdist_minkowski_wrap", cdist_minkowski_wrap, METH_VARARGS}, + {"cdist_rogerstanimoto_bool_wrap", cdist_rogerstanimoto_bool_wrap, 
METH_VARARGS}, + {"cdist_russellrao_bool_wrap", cdist_russellrao_bool_wrap, METH_VARARGS}, + {"cdist_seuclidean_wrap", cdist_seuclidean_wrap, METH_VARARGS}, + {"cdist_sokalmichener_bool_wrap", cdist_sokalmichener_bool_wrap, METH_VARARGS}, + {"cdist_sokalsneath_bool_wrap", cdist_sokalsneath_bool_wrap, METH_VARARGS}, + {"cdist_yule_bool_wrap", cdist_yule_bool_wrap, METH_VARARGS}, {"pdist_bray_curtis_wrap", pdist_bray_curtis_wrap, METH_VARARGS}, {"pdist_canberra_wrap", pdist_canberra_wrap, METH_VARARGS}, {"pdist_chebyshev_wrap", pdist_chebyshev_wrap, METH_VARARGS}, Added: trunk/scipy/cluster/tests/cdist-X1.txt =================================================================== --- trunk/scipy/cluster/tests/cdist-X1.txt 2008-09-07 07:46:29 UTC (rev 4693) +++ trunk/scipy/cluster/tests/cdist-X1.txt 2008-09-07 19:28:06 UTC (rev 4694) @@ -0,0 +1,10 @@ +1.147593763490969421e-01 8.926156143344999849e-01 1.437758624645746330e-02 1.803435962879929022e-02 5.533046214065578949e-01 5.554315640747428118e-01 4.497546637814608950e-02 4.438089247948049376e-01 7.984582810220538507e-01 2.752880789161644692e-01 1.344667112315823809e-01 9.230479561452992199e-01 6.040471462941819913e-01 3.797251652770228247e-01 4.316042735592399149e-01 5.312356915348823705e-01 4.348143005129563310e-01 3.111531488508799681e-01 9.531194313908697424e-04 8.212995023500069269e-02 6.689953269869852726e-01 9.914864535288493430e-01 8.037556036341153565e-01 +9.608925123801395074e-01 2.974451233678974127e-01 9.001110330654185088e-01 5.824163330415995654e-01 7.308574928293812834e-01 2.276154562412870952e-01 7.306791076039623745e-01 8.677244866905511333e-01 9.160806456176984192e-01 6.157216959991280714e-01 5.149053524695440531e-01 3.056427344890983999e-01 9.790557366933895223e-01 4.484995861076724877e-01 4.776550391081165747e-01 7.210436977670631187e-01 9.136399501661039979e-01 4.260275733550000776e-02 5.943900041968954717e-01 3.864571606342745991e-01 9.442027665110838131e-01 4.779949058608601309e-02 
6.107551944250865228e-01 +3.297286578103622023e-01 5.980207401936733502e-01 3.673301293561567205e-01 2.585830520887681949e-01 4.660558746104259686e-01 6.083795956610364986e-01 4.535206368070313632e-01 6.873989778785424276e-01 5.130152688495458468e-01 7.665877846542720198e-01 3.444402973525138023e-01 3.583658123644906102e-02 7.924818220986856732e-01 8.746685720522412444e-01 3.010105569182431884e-01 6.012239357385538163e-01 6.233737362204671006e-01 4.830438698668915176e-01 2.317286885842551047e-02 7.585989958123050547e-01 7.108257632278830451e-01 1.551024884178199281e-01 2.665485998155288083e-01 +2.456278068903017253e-02 4.148739837711815648e-01 1.986372227934196655e-01 6.920408530298168825e-01 1.003067576685774398e-01 7.421560456480125190e-01 1.808453980608998313e-01 4.251297882537475870e-01 6.773002683522370004e-01 4.084108792570182445e-01 7.462888013191590897e-01 8.069930220529277776e-01 9.211110587681808903e-01 4.141491046181076108e-01 7.486318689260342829e-01 9.515405507589296263e-01 4.634288892577109742e-03 8.027593488166355762e-01 3.010346805217798405e-01 8.663248877242523127e-01 2.479968181181605447e-01 5.619851096054278017e-01 3.903886764590250857e-01 +7.122019976035700584e-01 6.188878051047785878e-01 7.290897087051201320e-01 6.334802157757637442e-01 5.523084734954342156e-01 5.614937129563645213e-01 2.496741051791574462e-01 5.972227939599233926e-01 1.786590597761109622e-01 2.609525984850900038e-01 7.210438943286010538e-01 2.211429064605652250e-01 9.140497572472672250e-02 1.430242193668443962e-01 7.856446942916397447e-01 4.635256358156553125e-01 5.278744289813760426e-01 3.702808015407184072e-01 5.527073830480792038e-01 6.370732917599846168e-01 9.953487928925482953e-01 3.021789770611936765e-01 3.354901923998221402e-02 +6.509638560895427695e-01 8.387598220902757751e-01 7.761375971745763103e-01 1.481627639227802717e-01 3.529474982902305324e-01 4.883093646287851586e-01 9.652923033658690199e-01 9.500680513565308294e-01 3.061885005078281985e-01 
7.271902818906019750e-01 2.358962978196710303e-03 7.359889703223099211e-01 8.988893768074724955e-01 4.135279653937307121e-02 8.516441856688283796e-01 4.889597623270667270e-01 5.575909822114655245e-01 9.010853652261575641e-01 2.912844516556202246e-01 9.088759383368658629e-01 8.104351227460024898e-01 8.080695436776826890e-01 1.430530913253185155e-01 +8.048001196608134400e-01 3.066089444418462762e-02 9.021887554292090661e-01 6.154331491807940591e-02 1.378912575206647784e-02 5.775720193142440673e-01 1.219298963069791464e-01 1.883270243412101808e-01 5.569262398688379356e-02 8.964817777510125651e-02 7.977092785346929782e-01 4.878149375226197293e-01 4.511973131518809410e-02 1.858690046801604323e-01 6.947686471083162063e-01 5.884058794291086025e-01 8.638884676612634816e-01 3.855470871341656336e-01 3.495049047300468059e-01 2.767740932353948136e-01 4.731087031714035218e-01 6.679001673437914288e-01 7.502944200696660682e-01 +6.527328264244687261e-01 8.289483383553154505e-01 9.179741348282299818e-01 1.065639864466713105e-01 6.253616929058514184e-01 5.927750325266062381e-01 3.039157425463192563e-01 2.452766763359194302e-01 6.514027700704632107e-01 5.529218485487964463e-01 4.941158239308394151e-01 6.605306467722642516e-01 2.273688037050677346e-01 4.282616592244774534e-01 2.956128257930247250e-01 1.154803628237965896e-01 9.228220410235263849e-01 6.663525307676617659e-01 1.908852615936970087e-01 9.921383408926374159e-01 4.988716450388516188e-01 1.014900352736023414e-01 3.363930180244284474e-01 +2.914369076275757919e-01 5.196673601143533272e-01 7.420144907858341465e-01 1.768984185504740569e-01 5.296766993228564369e-01 5.922023566159900776e-01 5.965161262020234334e-01 3.810272333046110793e-01 8.368797246118340194e-01 7.896422363801189892e-01 9.655797561098209414e-01 4.430034032346981121e-01 2.780869795706976122e-01 3.047310845416009162e-01 8.051138863500326703e-01 6.731468634690835895e-01 4.743383036815584930e-01 9.530709614322225853e-01 7.753587619850917934e-01 
2.801137109357491051e-01 6.182543660889736614e-01 5.005218857766725593e-01 9.071447804755052857e-01 +2.075071644012620453e-01 4.834950086973934802e-01 3.037011473860764532e-01 6.476084284887700937e-01 8.107195771564194020e-01 7.869075869075803364e-01 6.851234019375299633e-01 3.544187468104398331e-02 4.847673235908021017e-01 5.690262846164507726e-01 1.663354142616256803e-01 9.692796809752548537e-01 4.133441725866372485e-01 6.729167604487583665e-01 3.998813427407297283e-01 8.272617414104491695e-01 2.129248316324727774e-01 6.517004761357130249e-01 7.363013506605019520e-01 4.072375306356985636e-01 4.463336683526665238e-01 5.485059309728204102e-01 1.981745754527846071e-01 Added: trunk/scipy/cluster/tests/cdist-X2.txt =================================================================== --- trunk/scipy/cluster/tests/cdist-X2.txt 2008-09-07 07:46:29 UTC (rev 4693) +++ trunk/scipy/cluster/tests/cdist-X2.txt 2008-09-07 19:28:06 UTC (rev 4694) @@ -0,0 +1,20 @@ +7.680465556300619667e-02 4.675022344069014180e-01 8.955498989131543963e-01 3.816236071436276411e-01 1.109030077070989329e-01 2.318928815459808668e-02 7.477394240984251983e-01 1.202289789304434864e-01 8.007290497575981769e-01 6.795195698871731027e-01 6.568225762396605605e-01 2.231475263228478445e-01 7.064624077661341151e-02 1.081656666815267176e-02 1.592069359090128033e-01 1.363392203645097389e-01 9.277020735447568667e-01 8.103136564528209407e-01 5.229467676276455812e-02 7.708020259874025504e-01 6.527954747473352359e-02 5.516397414886525796e-01 3.653371861367954443e-01 +8.144399106025798085e-01 7.731852525462976633e-01 6.909477620673205589e-01 9.696063817000286633e-01 4.297887511677249694e-01 6.989600553425188156e-01 7.310201335033380543e-01 3.135256147868910048e-01 5.715578037275241829e-01 3.935000744675094531e-01 2.057715781268398825e-01 5.892508589665171881e-01 8.512951599236765476e-01 9.569808799061578775e-01 6.164885878024699561e-01 4.714185430004367294e-01 6.128831737628155363e-01 6.641799309623502845e-01 
6.001985185338730711e-01 4.231922889723856995e-01 7.605249308075449077e-01 1.064530958018087281e-01 6.306470691957204444e-01 +4.265470127256254518e-01 5.933766716280767239e-01 3.698589270536845053e-02 2.173799740537294412e-01 3.032679325475639009e-01 4.271831790058847611e-01 1.828944535901013690e-01 4.772333422710156592e-01 2.564773455194128138e-01 7.120329875362141347e-01 8.952243430110462530e-01 1.808777012183288013e-01 3.612151871458374464e-01 3.960999167923041631e-01 1.821669970670747318e-02 8.835474857189200559e-01 1.353104648821573663e-01 3.457291739160937016e-01 1.126467375304566199e-01 4.107293162402323450e-01 4.051719311053743056e-01 4.007382985250427243e-01 1.286905671428811848e-01 +2.910657003883979632e-01 9.616259180685315933e-03 2.033032441536681834e-01 1.096599110293863255e-01 4.191101704605176836e-01 5.462131536027151624e-01 8.393047907010142694e-01 9.046805198676335369e-01 7.009863472176891541e-01 2.508215985039629059e-01 6.754410796667598138e-01 6.740895474032024826e-01 1.358993708621679675e-01 8.219861775211464439e-01 6.322220445623235596e-01 2.766813559002430090e-01 6.575983861590951607e-01 9.515869708336625044e-01 8.654526462353933081e-01 3.450245117834797037e-01 5.649032890631299209e-01 4.717687914789682191e-01 3.296483580510030098e-01 +9.172477457635394016e-01 3.057396583041891436e-01 7.335332344225760082e-01 8.370236206345178509e-01 3.765464253115927695e-01 5.089680319287778199e-01 1.202325719268168003e-01 9.717771065272349240e-01 5.907820104019682050e-01 9.809211614977710880e-01 9.064285003671219698e-01 8.848841466121748489e-01 2.043407730734815297e-01 9.157600394927275511e-01 4.532260315147775831e-01 4.241077335005828397e-01 1.751730149568804240e-01 4.090412146081819911e-01 3.632197861847064058e-02 5.832539334970230360e-01 4.041848151536805434e-01 3.603643989086504629e-01 1.838411383882069261e-01 +2.508806403290032572e-01 4.381403985282813496e-01 4.694787405018008286e-02 6.353900562024634713e-01 1.200813444244532846e-01 
[Contents of the new test-data files cdist-X1.txt and cdist-X2.txt — several hundred random floating-point values — omitted here.]
Modified: trunk/scipy/cluster/tests/test_distance.py =================================================================== --- trunk/scipy/cluster/tests/test_distance.py 2008-09-07 07:46:29 UTC (rev 4693) +++ trunk/scipy/cluster/tests/test_distance.py 2008-09-07 19:28:06 UTC (rev 4694) @@ -40,11 +40,13 @@ import numpy as np from numpy.testing import * from scipy.cluster.hierarchy import linkage, from_mlab_linkage, numobs_linkage -from scipy.cluster.distance import squareform, pdist, matching, jaccard, dice, sokalsneath, rogerstanimoto, russellrao, yule, numobs_dm, numobs_y +from scipy.cluster.distance import squareform, pdist, cdist, matching, jaccard, dice, sokalsneath, rogerstanimoto, russellrao, yule, numobs_dm, numobs_y #from scipy.cluster.hierarchy import pdist, euclidean _filenames = ["iris.txt", + "cdist-X1.txt", + "cdist-X2.txt", "pdist-hamming-ml.txt", "pdist-boolean-inp.txt", "pdist-jaccard-ml.txt", @@ -97,6 +99,298 @@ #print np.abs(Y_test2 - Y_right).max() #print np.abs(Y_test1 - Y_right).max() +class TestCdist(TestCase): + """ + Test suite for the cdist function. + """ + + def test_cdist_euclidean_random(self): + "Tests cdist(X, 'euclidean') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output.
+ X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'euclidean') + Y2 = cdist(X1, X2, 'test_euclidean') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_sqeuclidean_random(self): + "Tests cdist(X, 'sqeuclidean') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'sqeuclidean') + Y2 = cdist(X1, X2, 'test_sqeuclidean') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_cityblock_random(self): + "Tests cdist(X, 'cityblock') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'cityblock') + Y2 = cdist(X1, X2, 'test_cityblock') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_hamming_double_random(self): + "Tests cdist(X, 'hamming') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'hamming') + Y2 = cdist(X1, X2, 'test_hamming') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_hamming_bool_random(self): + "Tests cdist(X, 'hamming') on random boolean data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'hamming') + Y2 = cdist(X1, X2, 'test_hamming') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_jaccard_double_random(self): + "Tests cdist(X, 'jaccard') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output.
+ X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'jaccard') + Y2 = cdist(X1, X2, 'test_jaccard') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_jaccard_bool_random(self): + "Tests cdist(X, 'jaccard') on random boolean data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'jaccard') + Y2 = cdist(X1, X2, 'test_jaccard') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_chebychev_random(self): + "Tests cdist(X, 'chebychev') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'chebychev') + Y2 = cdist(X1, X2, 'test_chebychev') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_minkowski_random_p3d8(self): + "Tests cdist(X, 'minkowski') on random data. (p=3.8)" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'minkowski', p=3.8) + Y2 = cdist(X1, X2, 'test_minkowski', p=3.8) + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_minkowski_random_p4d6(self): + "Tests cdist(X, 'minkowski') on random data. (p=4.6)" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'minkowski', p=4.6) + Y2 = cdist(X1, X2, 'test_minkowski', p=4.6) + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_minkowski_random_p1d23(self): + "Tests cdist(X, 'minkowski') on random data. (p=1.23)" + eps = 1e-07 + # Get the data: the input matrix and the right output. 
+ X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'minkowski', p=1.23) + Y2 = cdist(X1, X2, 'test_minkowski', p=1.23) + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_seuclidean_random(self): + "Tests cdist(X, 'seuclidean') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'seuclidean') + Y2 = cdist(X1, X2, 'test_seuclidean') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_sqeuclidean_random(self): + "Tests cdist(X, 'sqeuclidean') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'sqeuclidean') + Y2 = cdist(X1, X2, 'test_sqeuclidean') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_cosine_random(self): + "Tests cdist(X, 'cosine') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'cosine') + Y2 = cdist(X1, X2, 'test_cosine') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_correlation_random(self): + "Tests cdist(X, 'correlation') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'correlation') + Y2 = cdist(X1, X2, 'test_correlation') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_mahalanobis_random(self): + "Tests cdist(X, 'mahalanobis') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. 
+ X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, 'mahalanobis') + Y2 = cdist(X1, X2, 'test_mahalanobis') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_canberra_random(self): + "Tests cdist(X, 'canberra') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'canberra') + Y2 = cdist(X1, X2, 'test_canberra') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_braycurtis_random(self): + "Tests cdist(X, 'braycurtis') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'braycurtis') + Y2 = cdist(X1, X2, 'test_braycurtis') + print Y1, Y2 + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_yule_random(self): + "Tests cdist(X, 'yule') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'yule') + Y2 = cdist(X1, X2, 'test_yule') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_matching_random(self): + "Tests cdist(X, 'matching') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'matching') + Y2 = cdist(X1, X2, 'test_matching') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_kulsinski_random(self): + "Tests cdist(X, 'kulsinski') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. 
+ X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'kulsinski') + Y2 = cdist(X1, X2, 'test_kulsinski') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_dice_random(self): + "Tests cdist(X, 'dice') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'dice') + Y2 = cdist(X1, X2, 'test_dice') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_rogerstanimoto_random(self): + "Tests cdist(X, 'rogerstanimoto') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'rogerstanimoto') + Y2 = cdist(X1, X2, 'test_rogerstanimoto') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_russellrao_random(self): + "Tests cdist(X, 'russellrao') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'russellrao') + Y2 = cdist(X1, X2, 'test_russellrao') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_sokalmichener_random(self): + "Tests cdist(X, 'sokalmichener') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'sokalmichener') + Y2 = cdist(X1, X2, 'test_sokalmichener') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_sokalsneath_random(self): + "Tests cdist(X, 'sokalsneath') on random data." + eps = 1e-07 + # Get the data: the input matrix and the right output. 
+ X1 = eo['cdist-X1'] < 0.5 + X2 = eo['cdist-X2'] < 0.5 + Y1 = cdist(X1, X2, 'sokalsneath') + Y2 = cdist(X1, X2, 'test_sokalsneath') + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + class TestPdist(TestCase): """ Test suite for the pdist function. From scipy-svn at scipy.org Mon Sep 8 01:01:09 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 00:01:09 -0500 (CDT) Subject: [Scipy-svn] r4696 - trunk/scipy/cluster Message-ID: <20080908050109.2FD2D39C107@scipy.org> Author: damian.eads Date: 2008-09-08 00:01:06 -0500 (Mon, 08 Sep 2008) New Revision: 4696 Modified: trunk/scipy/cluster/hierarchy.py Log: RSTified more hierarchy docs. Modified: trunk/scipy/cluster/hierarchy.py =================================================================== --- trunk/scipy/cluster/hierarchy.py 2008-09-08 03:50:19 UTC (rev 4695) +++ trunk/scipy/cluster/hierarchy.py 2008-09-08 05:01:06 UTC (rev 4696) @@ -866,39 +866,49 @@ def cophenet(*args, **kwargs): """ + Calculates the cophenetic distances between each observation in + the hierarchical clustering defined by the linkage ``Z``. + Suppose :math:`$p$` and :math:`$q$` are original observations in + disjoint clusters :math:`$s$` and :math:`$t$`, respectively and + :math:`$s$` and :math:`$t$` are joined by a direct parent cluster + :math:`$u$`. The cophenetic distance between observations + :math:`$i$` and :math:`$j$` is simply the distance between + clusters :math:`$s$` and :math:`$t$`. + :Parameters: + - Z : ndarray + The encoded linkage matrix on which to perform the calculation. + + - Y : ndarray (optional) + Calculates the cophenetic correlation coefficient ``c`` of a + hierarchical clustering defined by the linkage matrix ``Z`` + of a set of :math:`$n$` observations in :math:`$m$` + dimensions. ``Y`` is the condensed distance matrix from which + ``Z`` was generated. + + :Returns: + - c : ndarray + The cophentic correlation distance (if ``y`` is passed). 
+ + - d : ndarray + The cophenetic distance matrix in condensed form. The + :math:`$ij$`th entry is the cophenetic distance between + original observations :math:`$i$` and :math:`$j$`. + Calling Conventions ------------------- 1. ``d = cophenet(Z)`` + Returns just the cophenetic distance matrix. - Calculates the cophenetic distances between each observation in the - hierarchical clustering defined by the linkage ``Z``. - - Suppose :math:`$p$` and :math:`$q$` are original observations in - disjoint clusters :math:`$s$` and :math:`$t$`, respectively and - :math:`$s$` and :math:`$t$` are joined by a direct parent - cluster :math:`$u$`. The cophenetic distance between - observations :math:`$i$` and :math:`$j$` is simply the distance - between clusters :math:`$s$` and :math:`$t$`. - - ``d`` is cophenetic distance matrix in condensed form. The - :math:`$ij$`th entry is the cophenetic distance between original - observations :math:`$i$` and :math:`$j$`. - 2. ``c = cophenet(Z, Y)`` + Returns just the cophenetic correlation coefficient. - Calculates the cophenetic correlation coefficient ``c`` of a - hierarchical clustering defined by the linkage matrix ``Z`` of a - set of :math:`$n$` observations in :math:`$m$` dimensions. ``Y`` - is the condensed distance matrix from which ``Z`` was generated. - 3. ``(c, d) = cophenet(Z, Y, [])`` - - Returns a tuple instead, (c, d). The cophenetic distance matrix - ``d`` is included in condensed (upper triangular) form. - + Returns a tuple, ``(c, d)`` where ``c`` is the cophenetic + correlation coefficient and ``d`` is the condensed cophenetic + distance matrix (upper triangular form). """ Z = np.asarray(Z) @@ -943,21 +953,35 @@ def inconsistent(Z, d=2): """ - R = inconsistent(Z, d=2) + Calculates inconsistency statistics on a linkage. - Calculates statistics on links up to d levels below each - non-singleton cluster defined in the (n-1)x4 linkage matrix Z.
+ :Parameters: + - d : int + The number of links up to ``d`` levels below each + non-singleton cluster - R is a (n-1)x5 matrix where the i'th row contains the link - statistics for the non-singleton cluster i. The link statistics - are computed over the link heights for links d levels below the - cluster i. R[i,0] and R[i,1] are the mean and standard deviation of - the link heights, respectively; R[i,2] is the number of links - included in the calculation; and R[i,3] is the inconsistency - coefficient, (Z[i, 2]-R[i,0])/R[i,2]. + - Z : ndarray + The :math:`$(n-1)$` by 4 matrix encoding the linkage + (hierarchical clustering). See ``linkage`` documentation + for more information on its form. + - This function behaves similarly to the MATLAB(TM) inconsistent - function. + :Returns: + - R : ndarray + A :math:`$(n-1)$` by 5 matrix where the ``i``'th row + contains the link statistics for the non-singleton cluster + ``i``. The link statistics are computed over the link + heights for links :math:`$d$` levels below the cluster + ``i``. ``R[i,0]`` and ``R[i,1]`` are the mean and standard + deviation of the link heights, respectively; ``R[i,2]`` is + the number of links included in the calculation; and + ``R[i,3]`` is the inconsistency coefficient, + .. math:: + \frac{\mathtt{Z[i,2]}-\mathtt{R[i,0]}} + {R[i,2]}. + + This function behaves similarly to the MATLAB(TM) inconsistent + function. """ Z = np.asarray(Z) @@ -980,17 +1004,29 @@ def from_mlab_linkage(Z): """ - Z2 = from_mlab_linkage(Z) + Converts a linkage matrix generated by MATLAB(TM) to a new + linkage matrix compatible with this module. The conversion does + two things: - Converts a linkage matrix Z generated by MATLAB(TM) to a new linkage - matrix Z2 compatible with this module.
The conversion does two - things: + * the indices are converted from ``1..N`` to ``0..(N-1)`` form, and - * the indices are converted from 1..N to 0..(N-1) form, and + * a fourth column Z[:,3] is added where Z[i,3] represents the + number of original observations (leaves) in the non-singleton + cluster i. - * a fourth column Z[:,3] is added where Z[i,3] is equal to - the number of original observations (leaves) in the non-singleton - cluster i. + This function is useful when loading linkages from legacy data + files generated by MATLAB. + + :Arguments: + + - Z : ndarray + A linkage matrix generated by MATLAB(TM) + + :Returns: + + - ZS : ndarray + A linkage matrix compatible with this library. """ Z = np.asarray(Z) Zs = Z.shape @@ -1007,12 +1043,19 @@ def to_mlab_linkage(Z): """ - Z2 = to_mlab_linkage(Z) + Converts a linkage matrix ``Z`` generated by the linkage function + of this module to a MATLAB(TM) compatible one. The returned linkage + matrix has the last column removed and the cluster indices are + converted to ``1..N`` indexing. - Converts a linkage matrix Z generated by the linkage function of this - module to one compatible with MATLAB(TM). Z2 is the same as Z with the - last column removed and the cluster indices converted to use - 1..N indexing. + :Arguments: + - Z : ndarray + A linkage matrix generated by this library. + + :Returns: + - ZM : ndarray + A linkage matrix compatible with MATLAB(TM)'s hierarchical + clustering functions. """ Z = np.asarray(Z) is_valid_linkage(Z, throw=True, name='Z') @@ -1021,11 +1064,18 @@ def is_monotonic(Z): """ - is_monotonic(Z) + Returns ``True`` if the linkage passed is monotonic. The linkage + is monotonic if for every cluster :math:`$s$` and :math:`$t$` + joined, the distance between them is no less than the distance + between any previously joined clusters. - Returns True if the linkage Z is monotonic.
The linkage is monotonic - if for every cluster s and t joined, the distance between them is - no less than the distance between any previously joined clusters. + :Arguments: + - Z : ndarray + The linkage matrix to check for monotonicity. + + :Returns: + - b : bool + A boolean indicating whether the linkage is monotonic. """ Z = np.asarray(Z) is_valid_linkage(Z, throw=True, name='Z') @@ -1035,12 +1085,31 @@ def is_valid_im(R, warning=False, throw=False, name=None): """ - is_valid_im(R) - Returns True if the inconsistency matrix passed is valid. It must - be a n by 4 numpy array of doubles. The standard deviations R[:,1] - must be nonnegative. The link counts R[:,2] must be positive and - no greater than n-1. + Returns True if the inconsistency matrix passed is valid. It must + be a :math:`$n$` by 4 numpy array of doubles. The standard + deviations ``R[:,1]`` must be nonnegative. The link counts + ``R[:,2]`` must be positive and no greater than :math:`$n-1$`. + + :Arguments: + - R : ndarray + The inconsistency matrix to check for validity. + + - warning : bool + When ``True``, issues a Python warning if the inconsistency + matrix passed is invalid. + + - throw : bool + When ``True``, throws a Python exception if the inconsistency + matrix passed is invalid. + + - name : string + When passed this string is used to refer to the variable name + of the invalid inconsistency matrix. + + :Returns: + b : bool + True iff the inconsistency matrix is valid. """ R = np.asarray(R) valid = True From scipy-svn at scipy.org Mon Sep 8 01:01:55 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 00:01:55 -0500 (CDT) Subject: [Scipy-svn] r4697 - trunk/scipy/stats Message-ID: <20080908050155.C998739C107@scipy.org> Author: pierregm Date: 2008-09-08 00:01:53 -0500 (Mon, 08 Sep 2008) New Revision: 4697 Modified: trunk/scipy/stats/mstats.py Log: mstats.mode: make sure that mode returns a tuple. 
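The r4697 fix above concerns mode's return convention: like stats.mode, the masked version should return a (modes, counts) pair rather than a bare array. That convention can be sketched standalone with plain numpy (the function name below is illustrative, not scipy's private `_mode1D` helper):

```python
import numpy as np

def mode1d(a):
    """Return a (mode, count) tuple for a 1-D array, mirroring the
    tuple convention the r4697 change enforces. Illustrative sketch
    only; this is not scipy's implementation."""
    vals, counts = np.unique(np.asarray(a), return_counts=True)
    i = counts.argmax()            # first value with the highest count
    return (vals[i], counts[i])

# The result is always a 2-tuple, e.g. mode1d([1, 1, 2, 3]) -> (1, 2)
```

Ties resolve to the smallest value, since np.unique returns values sorted and argmax takes the first maximum.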
Modified: trunk/scipy/stats/mstats.py =================================================================== --- trunk/scipy/stats/mstats.py 2008-09-08 05:01:06 UTC (rev 4696) +++ trunk/scipy/stats/mstats.py 2008-09-08 05:01:53 UTC (rev 4697) @@ -265,7 +265,7 @@ output = _mode1D(ma.ravel(a)) else: output = ma.apply_along_axis(_mode1D, axis, a) - return output + return tuple(output) mode.__doc__ = stats.mode.__doc__ From scipy-svn at scipy.org Mon Sep 8 02:54:40 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 01:54:40 -0500 (CDT) Subject: [Scipy-svn] r4698 - branches/refactor_fft/scipy/fftpack/backends/common Message-ID: <20080908065440.5ADA839C107@scipy.org> Author: cdavid Date: 2008-09-08 01:54:35 -0500 (Mon, 08 Sep 2008) New Revision: 4698 Modified: branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h Log: Fix trailing spaces. Modified: branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h =================================================================== --- branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h 2008-09-08 05:01:53 UTC (rev 4697) +++ branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h 2008-09-08 06:54:35 UTC (rev 4698) @@ -20,7 +20,6 @@ public: int m_n; - }; template @@ -30,8 +29,8 @@ Cache(const T& id) : m_id(id) {}; virtual ~Cache() {}; - virtual bool operator==(const Cache& other) const - { + virtual bool operator==(const Cache& other) const + { return other.m_id == m_id; }; @@ -54,7 +53,7 @@ virtual ~CacheManager() { int i; - + for (i = 0; i < m_curn; ++i) { delete m_cache[i]; } @@ -89,7 +88,7 @@ }; private: - U** m_cache; + U** m_cache; int m_n; int m_curn; int m_last; From scipy-svn at scipy.org Mon Sep 8 03:24:35 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 02:24:35 -0500 (CDT) Subject: [Scipy-svn] r4699 - branches/refactor_fft/scipy/fftpack/backends/fftw3/src Message-ID: <20080908072435.2EB1039C107@scipy.org> Author: 
cdavid Date: 2008-09-08 02:24:31 -0500 (Mon, 08 Sep 2008) New Revision: 4699 Modified: branches/refactor_fft/scipy/fftpack/backends/fftw3/src/drfft.cxx Log: More trailing space fixes. Modified: branches/refactor_fft/scipy/fftpack/backends/fftw3/src/drfft.cxx =================================================================== --- branches/refactor_fft/scipy/fftpack/backends/fftw3/src/drfft.cxx 2008-09-08 06:54:35 UTC (rev 4698) +++ branches/refactor_fft/scipy/fftpack/backends/fftw3/src/drfft.cxx 2008-09-08 07:24:31 UTC (rev 4699) @@ -1,5 +1,5 @@ /* - * Last Change: Tue May 13 02:00 PM 2008 J + * Last Change: Mon Sep 08 03:00 PM 2008 J * * RFFTW3 implementation * @@ -17,13 +17,13 @@ using namespace fft; class RFFTW3Cache : public Cache { - public: + public: RFFTW3Cache(const FFTW3CacheId& id); virtual ~RFFTW3Cache(); int compute_forward(double* inout) const { - assert (m_id.m_isalign ? is_simd_aligned(inout) : + assert (m_id.m_isalign ? is_simd_aligned(inout) : true); fftw_execute_r2r(m_plan, inout, m_wrk); COPYRFFTW2STD(m_wrk, inout, m_id.m_n); @@ -32,7 +32,7 @@ int compute_backward(double* inout) const { - assert (m_id.m_isalign ? is_simd_aligned(inout) : + assert (m_id.m_isalign ? is_simd_aligned(inout) : true); COPYINVRFFTW2STD(inout, m_wrk, m_id.m_n); fftw_execute_r2r(m_plan, m_wrk, inout); @@ -40,9 +40,9 @@ }; protected: - fftw_plan m_plan; - double *m_wrk; - double *m_wrk2; + fftw_plan m_plan; + double *m_wrk; + double *m_wrk2; }; RFFTW3Cache::RFFTW3Cache(const FFTW3CacheId& id) @@ -62,10 +62,10 @@ if (!m_id.m_isalign) { flags |= FFTW_UNALIGNED; - } + } - m_plan = fftw_plan_r2r_1d(id.m_n, m_wrk, m_wrk2, - (id.m_dir > 0 ? FFTW_R2HC:FFTW_HC2R), + m_plan = fftw_plan_r2r_1d(id.m_n, m_wrk, m_wrk2, + (id.m_dir > 0 ? 
FFTW_R2HC:FFTW_HC2R), flags); if (m_plan == NULL) { @@ -100,13 +100,13 @@ RFFTW3Cache *cache; bool isaligned; - isaligned = is_simd_aligned(ptr); + isaligned = is_simd_aligned(ptr); if (howmany > 1) { - /* + /* * If executing for several consecutive buffers, we have to * check that the shifting one buffer does not make it - * unaligned + * unaligned */ isaligned = isaligned && is_simd_aligned(ptr + n); } From scipy-svn at scipy.org Mon Sep 8 04:05:25 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 03:05:25 -0500 (CDT) Subject: [Scipy-svn] r4700 - in trunk/scipy/stats: . tests Message-ID: <20080908080525.947AF39C107@scipy.org> Author: pierregm Date: 2008-09-08 03:05:22 -0500 (Mon, 08 Sep 2008) New Revision: 4700 Modified: trunk/scipy/stats/mstats.py trunk/scipy/stats/tests/test_mstats.py Log: * force compatibility between mstats.mode and stats.mode Modified: trunk/scipy/stats/mstats.py =================================================================== --- trunk/scipy/stats/mstats.py 2008-09-08 07:24:31 UTC (rev 4699) +++ trunk/scipy/stats/mstats.py 2008-09-08 08:05:22 UTC (rev 4700) @@ -256,16 +256,25 @@ def _mode1D(a): (rep,cnt) = find_repeats(a) if not cnt.ndim: - return (0,0) + return (0, 0) elif cnt.size: return (rep[cnt.argmax()], cnt.max()) return (a[0], 1) # if axis is None: output = _mode1D(ma.ravel(a)) + output = (ma.array(output[0]), ma.array(output[1])) else: output = ma.apply_along_axis(_mode1D, axis, a) - return tuple(output) + newshape = list(a.shape) + newshape[axis] = 1 + slices = [slice(None)] * output.ndim + slices[axis] = 0 + modes = output[tuple(slices)].reshape(newshape) + slices[axis] = 1 + counts = output[tuple(slices)].reshape(newshape) + output = (modes, counts) + return output mode.__doc__ = stats.mode.__doc__ Modified: trunk/scipy/stats/tests/test_mstats.py =================================================================== --- trunk/scipy/stats/tests/test_mstats.py 2008-09-08 07:24:31 UTC (rev 4699) +++ 
trunk/scipy/stats/tests/test_mstats.py 2008-09-08 08:05:22 UTC (rev 4700) @@ -348,10 +348,10 @@ assert_equal(mstats.mode(ma1, axis=None), (0,3)) assert_equal(mstats.mode(a2, axis=None), (3,4)) assert_equal(mstats.mode(ma2, axis=None), (0,3)) - assert_equal(mstats.mode(a2, axis=0), [[0,0,0,1,1],[1,1,1,1,1]]) - assert_equal(mstats.mode(ma2, axis=0), [[0,0,0,1,1],[1,1,1,1,1]]) - assert_equal(mstats.mode(a2, axis=-1), [[0,3],[3,3],[3,1]]) - assert_equal(mstats.mode(ma2, axis=-1), [[0,3],[1,1],[0,0]]) + assert_equal(mstats.mode(a2, axis=0), ([[0,0,0,1,1]],[[1,1,1,1,1]])) + assert_equal(mstats.mode(ma2, axis=0), ([[0,0,0,1,1]],[[1,1,1,1,1]])) + assert_equal(mstats.mode(a2, axis=-1), ([[0],[3],[3]], [[3],[3],[1]])) + assert_equal(mstats.mode(ma2, axis=-1), ([[0],[1],[0]], [[3],[1],[0]])) class TestPercentile(TestCase): From scipy-svn at scipy.org Mon Sep 8 04:53:04 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 03:53:04 -0500 (CDT) Subject: [Scipy-svn] r4701 - branches/refactor_fft/scipy/fftpack/backends/common Message-ID: <20080908085304.26D0D39C021@scipy.org> Author: cdavid Date: 2008-09-08 03:53:01 -0500 (Mon, 08 Sep 2008) New Revision: 4701 Modified: branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h Log: Use std::vector to keep caches. 
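The cycliccache.h changes in r4698/r4701 touch fftpack's fixed-capacity plan cache: look up an entry by id, and once the pool is full, overwrite slots cyclically instead of growing. A rough Python sketch of that eviction scheme (class and method names here are mine, not the C++ API):

```python
class CyclicCache:
    """Fixed-capacity cache with cyclic eviction, in the spirit of
    fftpack's CacheManager. Illustrative sketch, not the C++ code."""

    def __init__(self, capacity, factory):
        self.capacity = capacity
        self.factory = factory    # builds a cached value from its key
        self.slots = []           # (key, value) pairs, at most `capacity`
        self.last = 0             # next slot to overwrite when full

    def get(self, key):
        # Linear scan, as in the C++ version: these caches are tiny.
        for k, v in self.slots:
            if k == key:
                return v
        value = self.factory(key)
        if len(self.slots) < self.capacity:
            self.slots.append((key, value))
        else:
            self.slots[self.last] = (key, value)
            self.last = (self.last + 1) % self.capacity
        return value
```

Switching the C++ backing store from a raw `U**` array to `std::vector` (r4701) leaves this logic unchanged; it only removes the manual `new[]`/`delete[]` bookkeeping.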
Modified: branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h =================================================================== --- branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h 2008-09-08 08:05:22 UTC (rev 4700) +++ branches/refactor_fft/scipy/fftpack/backends/common/cycliccache.h 2008-09-08 08:53:01 UTC (rev 4701) @@ -1,6 +1,8 @@ #ifndef _CYCLIC_CACHE_H_ #define _CYCLIC_CACHE_H_ +#include <vector> + namespace fft { class CacheId { @@ -46,7 +48,7 @@ m_curn(0), m_last(0) { - m_cache = new U*[n]; + m_cache.resize(n); }; @@ -57,8 +59,6 @@ for (i = 0; i < m_curn; ++i) { delete m_cache[i]; } - - delete[] m_cache; } virtual U* get_cache(const T& id) @@ -88,7 +88,7 @@ }; private: - U** m_cache; + std::vector<U*> m_cache; int m_n; int m_curn; int m_last; From scipy-svn at scipy.org Mon Sep 8 05:04:31 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 04:04:31 -0500 (CDT) Subject: [Scipy-svn] r4702 - branches/refactor_fft/scipy/fftpack/backends/mkl/src Message-ID: <20080908090431.266AD39C107@scipy.org> Author: cdavid Date: 2008-09-08 04:04:26 -0500 (Mon, 08 Sep 2008) New Revision: 4702 Modified: branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfft.cxx branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfftnd.cxx Log: More trailing spaces removed.
Modified: branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfft.cxx =================================================================== --- branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfft.cxx 2008-09-08 08:53:01 UTC (rev 4701) +++ branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfft.cxx 2008-09-08 09:04:26 UTC (rev 4702) @@ -30,7 +30,7 @@ { int n = id.m_n; - DftiCreateDescriptor(&m_hdl, DFTI_DOUBLE, DFTI_COMPLEX, 1, (long)n); + DftiCreateDescriptor(&m_hdl, DFTI_DOUBLE, DFTI_COMPLEX, 1, (long)n); DftiCommitDescriptor(m_hdl); return; @@ -41,13 +41,13 @@ DftiFreeDescriptor(&m_hdl); } -int MKLCache::compute_forward(complex_double *inout) const +int MKLCache::compute_forward(complex_double *inout) const { DftiComputeForward(m_hdl, (double *) inout); return 0; } -int MKLCache::compute_backward(complex_double *inout) const +int MKLCache::compute_backward(complex_double *inout) const { DftiComputeBackward(m_hdl, (double *) inout); return 0; Modified: branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfftnd.cxx =================================================================== --- branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfftnd.cxx 2008-09-08 08:53:01 UTC (rev 4701) +++ branches/refactor_fft/scipy/fftpack/backends/mkl/src/zfftnd.cxx 2008-09-08 09:04:26 UTC (rev 4702) @@ -3,7 +3,7 @@ * * Original code by David M. Cooke * - * Last Change: Tue May 13 05:00 PM 2008 J + * Last Change: Mon Sep 08 05:00 PM 2008 J */ #include From scipy-svn at scipy.org Mon Sep 8 10:47:43 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 09:47:43 -0500 (CDT) Subject: [Scipy-svn] r4703 - trunk/scipy/cluster Message-ID: <20080908144743.5A07C39C107@scipy.org> Author: damian.eads Date: 2008-09-08 09:47:40 -0500 (Mon, 08 Sep 2008) New Revision: 4703 Modified: trunk/scipy/cluster/hierarchy.py Log: RSTified more hierarchy docs. 
Modified: trunk/scipy/cluster/hierarchy.py =================================================================== --- trunk/scipy/cluster/hierarchy.py 2008-09-08 09:04:26 UTC (rev 4702) +++ trunk/scipy/cluster/hierarchy.py 2008-09-08 14:47:40 UTC (rev 4703) @@ -1093,23 +1093,23 @@ :Arguments: - R : ndarray - The inconsistency matrix to check for validity. + The inconsistency matrix to check for validity. - warning : bool - When ``True``, issues a Python warning if the inconsistency - matrix passed is invalid. + When ``True``, issues a Python warning if the linkage + matrix passed is invalid. - throw : bool - When ``True``, throws a Python exception if the inconsistency - matrix passed is invalid. + When ``True``, throws a Python exception if the linkage + matrix passed is invalid. - name : string - When passed this string is used to refer to the variable name - of the invalid inconsistency matrix. + This string refers to the variable name of the invalid + linkage matrix. :Returns: - b : bool - True iff the inconsistency matrix is valid. + - b : bool + True iff the inconsistency matrix is valid. """ R = np.asarray(R) valid = True @@ -1149,26 +1149,32 @@ def is_valid_linkage(Z, warning=False, throw=False, name=None): """ - is_valid_linkage(Z, t) + Checks the validity of a linkage matrix. A linkage matrix is valid + if it is a two dimensional nd-array (type double) with :math:`$n$` + rows and 4 columns. The first two columns must contain indices + between 0 and :math:`$2n-1$`. For a given row ``i``, + :math:`$0 \leq \mathtt{Z[i,0]} \leq i+n-1$` and + :math:`$0 \leq Z[i,1] \leq i+n-1$` (i.e. a cluster + cannot join another cluster unless the cluster being joined has + been generated.) - Returns True if Z is a valid linkage matrix. The variable must - be a 2-dimensional double numpy array with n rows and 4 columns. - The first two columns must contain indices between 0 and 2n-1. For a - given row i, 0 <= Z[i,0] <= i+n-1 and 0 <= Z[i,1] <= i+n-1 (i.e. 
- a cluster cannot join another cluster unless the cluster being joined - has been generated.) + :Arguments: - is_valid_linkage(..., warning=True, name='V') + - warning : bool + When ``True``, issues a Python warning if the linkage + matrix passed is invalid. - Invokes a warning if the variable passed is not a valid linkage. The message - explains why the distance matrix is not valid. 'name' is used when referencing - the offending variable. + - throw : bool + When ``True``, throws a Python exception if the linkage + matrix passed is invalid. - is_valid_linkage(..., throw=True, name='V') + - name : string + This string refers to the variable name of the invalid + linkage matrix. - Throws an exception if the variable passed is not a valid linkage. The message - explains why variable is not valid. 'name' is used when referencing the offending - variable. + :Returns: + - b : bool + True iff the inconsistency matrix is valid. """ Z = np.asarray(Z) @@ -1212,8 +1218,16 @@ def numobs_linkage(Z): """ - Returns the number of original observations that correspond to a - linkage matrix Z. + Returns the number of original observations of the linkage matrix + passed. + + :Arguments: + - Z : ndarray + The linkage matrix on which to perform the operation. + + :Returns: + - n : int + The number of original observations in the linkage. """ Z = np.asarray(Z) is_valid_linkage(Z, throw=True, name='Z') @@ -1221,13 +1235,27 @@ def Z_y_correspond(Z, Y): """ - yesno = Z_y_correspond(Z, Y) + Checks if a linkage matrix Z and condensed distance matrix + Y could possibly correspond to one another. - Returns True if a linkage matrix Z and condensed distance matrix - Y could possibly correspond to one another. They must have the same - number of original observations. This function is useful as a sanity - check in algorithms that make extensive use of linkage and distance - matrices that must correspond to the same set of original observations. 
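The conditions spelled out in the rewritten `is_valid_linkage` docstring above, together with the observation-count check behind `Z_y_correspond`, can be sketched directly (hypothetical helpers, not the scipy implementations): a linkage over n observations is an (n-1)-by-4 double array whose row i may only reference cluster indices 0 through i+n-1, and a condensed distance matrix of length m = n(n-1)/2 corresponds to the same n.

```python
import numpy as np

def linkage_is_valid(Z):
    """Check the structural conditions on a linkage matrix."""
    Z = np.asarray(Z, dtype=np.double)
    if Z.ndim != 2 or Z.shape[1] != 4:
        return False
    n = Z.shape[0] + 1  # number of original observations
    for i in range(Z.shape[0]):
        # row i may only join clusters that already exist at step i
        if not (0 <= Z[i, 0] <= i + n - 1 and 0 <= Z[i, 1] <= i + n - 1):
            return False
    return True

def n_obs_from_condensed(y):
    """Recover n from a condensed distance vector of length n*(n-1)/2."""
    m = len(y)
    return int(round((1 + np.sqrt(1 + 8 * m)) / 2))
```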
+ They must have the same number of original observations for + the check to succeed. + + This function is useful as a sanity check in algorithms that make + extensive use of linkage and distance matrices that must + correspond to the same set of original observations. + + :Arguments: + - Z : ndarray + The linkage matrix to check for correspondance. + + - Y : ndarray + The condensed distance matrix to check for correspondance. + + :Returns: + - b : bool + A boolean indicating whether the linkage matrix and distance + matrix could possibly correspond to one another. """ Z = np.asarray(Z) Y = np.asarray(Y) From scipy-svn at scipy.org Mon Sep 8 14:57:45 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 8 Sep 2008 13:57:45 -0500 (CDT) Subject: [Scipy-svn] r4704 - trunk/scipy/special Message-ID: <20080908185745.1DD2B39C0B9@scipy.org> Author: ptvirtan Date: 2008-09-08 13:57:39 -0500 (Mon, 08 Sep 2008) New Revision: 4704 Modified: trunk/scipy/special/cephes_doc.h trunk/scipy/special/info.py Log: Fix Bessel K_nu function name in docstrings: 'modified Bessel function of the second kind' is the more common name, rather than the 'third kind' Modified: trunk/scipy/special/cephes_doc.h =================================================================== --- trunk/scipy/special/cephes_doc.h 2008-09-08 14:47:40 UTC (rev 4703) +++ trunk/scipy/special/cephes_doc.h 2008-09-08 18:57:39 UTC (rev 4704) @@ -80,20 +80,20 @@ #define jn_doc "y=jn(n,x) returns the Bessel function of integer order n at x." #define jv_doc "y=jv(v,z) returns the Bessel function of real order v at complex z." #define jve_doc "y=jve(v,z) returns the exponentially scaled Bessel function of real order\nv at complex z: jve(v,z) = jv(v,z) * exp(-abs(z.imag))" -#define k0_doc "y=i0(x) returns the modified Bessel function of the third kind of\norder 0 at x." -#define k0e_doc "y=k0e(x) returns the exponentially scaled modified Bessel function\nof the third kind of order 0 at x. 
k0e(x) = exp(x) * k0(x)." -#define k1_doc "y=i1(x) returns the modified Bessel function of the third kind of\norder 1 at x." -#define k1e_doc "y=k1e(x) returns the exponentially scaled modified Bessel function\nof the third kind of order 1 at x. k1e(x) = exp(x) * k1(x)" +#define k0_doc "y=k0(x) returns the modified Bessel function of the second kind (sometimes called the third kind) of\norder 0 at x." +#define k0e_doc "y=k0e(x) returns the exponentially scaled modified Bessel function\nof the second kind (sometimes called the third kind) of order 0 at x. k0e(x) = exp(x) * k0(x)." +#define k1_doc "y=i1(x) returns the modified Bessel function of the second kind (sometimes called the third kind) of\norder 1 at x." +#define k1e_doc "y=k1e(x) returns the exponentially scaled modified Bessel function\nof the second kind (sometimes called the third kind) of order 1 at x. k1e(x) = exp(x) * k1(x)" #define kei_doc "y=kei(x) returns the Kelvin function ker x" #define keip_doc "y=keip(x) returns the derivative of the Kelvin function kei x" #define kelvin_doc "(Be, Ke, Bep, Kep)=kelvin(x) returns the tuple (Be, Ke, Bep, Kep) which containes \ncomplex numbers representing the real and imaginary Kelvin functions \nand their derivatives evaluated at x. For example, \nkelvin(x)[0].real = ber x and kelvin(x)[0].imag = bei x with similar \nrelationships for ker and kei." #define ker_doc "y=ker(x) returns the Kelvin function ker x" #define kerp_doc "y=kerp(x) returns the derivative of the Kelvin function ker x" -#define kn_doc "y=kn(n,x) returns the modified Bessel function of the third kind for\ninteger order n at x." +#define kn_doc "y=kn(n,x) returns the modified Bessel function of the second kind (sometimes called the third kind) for\ninteger order n at x." 
#define kolmogi_doc "y=kolmogi(p) returns y such that kolmogorov(y) = p" #define kolmogorov_doc "p=kolmogorov(y) returns the complementary cumulative distribution \nfunction of Kolmogorov's limiting distribution (Kn* for large n) \nof a two-sided test for equality between an empirical and a theoretical \ndistribution. It is equal to the (limit as n->infinity of the) probability \nthat sqrt(n) * max absolute deviation > y." -#define kv_doc "y=kv(v,z) returns the modified Bessel function of the third kind for\nreal order v at complex z." -#define kve_doc "y=kve(v,z) returns the exponentially scaled, modified Bessel function\nof the third kind for real order v at complex z: kve(v,z) = kv(v,z) * exp(z)" +#define kv_doc "y=kv(v,z) returns the modified Bessel function of the second kind (sometimes called the third kind) for\nreal order v at complex z." +#define kve_doc "y=kve(v,z) returns the exponentially scaled, modified Bessel function\nof the second kind (sometimes called the third kind) for real order v at complex z: kve(v,z) = kv(v,z) * exp(z)" #define log1p_doc "y=log1p(x) calculates log(1+x) for use when x is near zero." #define lpmv_doc "y=lpmv(m,v,x) returns the associated legendre function of integer order\nm and nonnegative degree v: |x|<=1." #define mathieu_a_doc "lmbda=mathieu_a(m,q) returns the characteristic value for the even solution, \nce_m(z,q), of Mathieu's equation" Modified: trunk/scipy/special/info.py =================================================================== --- trunk/scipy/special/info.py 2008-09-08 14:47:40 UTC (rev 4703) +++ trunk/scipy/special/info.py 2008-09-08 18:57:39 UTC (rev 4704) @@ -25,9 +25,9 @@ * yn -- Bessel function of second kind (integer order). * yv -- Bessel function of the second kind (real-valued order). * yve -- Exponentially scaled Bessel function of the second kind. -* kn -- Modified Bessel function of the third kind (integer order). -* kv -- Modified Bessel function of the third kind (real order). 
-* kve -- Exponentially scaled modified Bessel function of the third kind. +* kn -- Modified Bessel function of the second kind (integer order). +* kv -- Modified Bessel function of the second kind (real order). +* kve -- Exponentially scaled modified Bessel function of the second kind. * iv -- Modified Bessel function. * ive -- Exponentially scaled modified Bessel function. * hankel1 -- Hankel function of the first kind. @@ -60,10 +60,10 @@ * i0e -- Exponentially scaled modified Bessel function of order 0. * i1 -- Modified Bessel function of order 1. * i1e -- Exponentially scaled modified Bessel function of order 1. -* k0 -- Modified Bessel function of the third kind of order 0. -* k0e -- Exponentially scaled modified Bessel function of the third kind of order 0. -* k1 -- Modified Bessel function of the third kind of order 1. -* k1e -- Exponentially scaled modified Bessel function of the third kind of order 1. +* k0 -- Modified Bessel function of the second kind of order 0. +* k0e -- Exponentially scaled modified Bessel function of the second kind of order 0. +* k1 -- Modified Bessel function of the second kind of order 1. +* k1e -- Exponentially scaled modified Bessel function of the second kind of order 1. Integrals of Bessel Functions ............................. From scipy-svn at scipy.org Tue Sep 9 09:47:36 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 08:47:36 -0500 (CDT) Subject: [Scipy-svn] r4705 - in trunk/scipy/stsci: convolve/lib image/lib Message-ID: <20080909134736.3973A39C23D@scipy.org> Author: alan.mcintyre Date: 2008-09-09 08:46:55 -0500 (Tue, 09 Sep 2008) New Revision: 4705 Modified: trunk/scipy/stsci/convolve/lib/Convolve.py trunk/scipy/stsci/convolve/lib/iraf_frame.py trunk/scipy/stsci/convolve/lib/lineshape.py trunk/scipy/stsci/image/lib/_image.py trunk/scipy/stsci/image/lib/combine.py Log: Standardize NumPy import as "import numpy as np". Removed unused numpy import. 
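The r4704 change above only renames "third kind" to "second kind" in the docstrings for kn/kv/k0/k1; the functions themselves are unchanged. For reference, K_0 can be evaluated from the integral representation K_0(x) = integral from 0 to infinity of exp(-x*cosh(t)) dt, which is easy to check numerically without scipy (a rough trapezoid-rule sketch; the cutoff T=20 is an arbitrary choice, safe because the integrand decays double-exponentially):

```python
import numpy as np

def k0_numeric(x, T=20.0, steps=20001):
    """Approximate K_0(x) by the trapezoid rule on its integral representation."""
    t = np.linspace(0.0, T, steps)
    y = np.exp(-x * np.cosh(t))
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

print(k0_numeric(1.0))  # close to K_0(1) ~= 0.421024
```

The exponentially scaled variant in the docstrings, k0e(x) = exp(x) * k0(x), follows directly from this.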
Clean up alignment of multiline statements. Modified: trunk/scipy/stsci/convolve/lib/Convolve.py =================================================================== --- trunk/scipy/stsci/convolve/lib/Convolve.py 2008-09-08 18:57:39 UTC (rev 4704) +++ trunk/scipy/stsci/convolve/lib/Convolve.py 2008-09-09 13:46:55 UTC (rev 4705) @@ -1,4 +1,4 @@ -import numpy as num +import numpy as np import _correlate import numpy.fft as dft import iraf_frame @@ -16,12 +16,12 @@ } def _condition_inputs(data, kernel): - data, kernel = num.asarray(data), num.asarray(kernel) - if num.rank(data) == 0: + data, kernel = np.asarray(data), np.asarray(kernel) + if np.rank(data) == 0: data.shape = (1,) - if num.rank(kernel) == 0: + if np.rank(kernel) == 0: kernel.shape = (1,) - if num.rank(data) > 1 or num.rank(kernel) > 1: + if np.rank(data) > 1 or np.rank(kernel) > 1: raise ValueError("arrays must be 1D") if len(data) < len(kernel): data, kernel = kernel, data @@ -30,25 +30,25 @@ def correlate(data, kernel, mode=FULL): """correlate(data, kernel, mode=FULL) - >>> correlate(num.arange(8), [1, 2], mode=VALID) + >>> correlate(np.arange(8), [1, 2], mode=VALID) array([ 2, 5, 8, 11, 14, 17, 20]) - >>> correlate(num.arange(8), [1, 2], mode=SAME) + >>> correlate(np.arange(8), [1, 2], mode=SAME) array([ 0, 2, 5, 8, 11, 14, 17, 20]) - >>> correlate(num.arange(8), [1, 2], mode=FULL) + >>> correlate(np.arange(8), [1, 2], mode=FULL) array([ 0, 2, 5, 8, 11, 14, 17, 20, 7]) - >>> correlate(num.arange(8), [1, 2, 3], mode=VALID) + >>> correlate(np.arange(8), [1, 2, 3], mode=VALID) array([ 8, 14, 20, 26, 32, 38]) - >>> correlate(num.arange(8), [1, 2, 3], mode=SAME) + >>> correlate(np.arange(8), [1, 2, 3], mode=SAME) array([ 3, 8, 14, 20, 26, 32, 38, 20]) - >>> correlate(num.arange(8), [1, 2, 3], mode=FULL) + >>> correlate(np.arange(8), [1, 2, 3], mode=FULL) array([ 0, 3, 8, 14, 20, 26, 32, 38, 20, 7]) - >>> correlate(num.arange(8), [1, 2, 3, 4, 5, 6], mode=VALID) + >>> correlate(np.arange(8), [1, 2, 3, 4, 5, 
6], mode=VALID) array([ 70, 91, 112]) - >>> correlate(num.arange(8), [1, 2, 3, 4, 5, 6], mode=SAME) + >>> correlate(np.arange(8), [1, 2, 3, 4, 5, 6], mode=SAME) array([ 17, 32, 50, 70, 91, 112, 85, 60]) - >>> correlate(num.arange(8), [1, 2, 3, 4, 5, 6], mode=FULL) + >>> correlate(np.arange(8), [1, 2, 3, 4, 5, 6], mode=FULL) array([ 0, 6, 17, 32, 50, 70, 91, 112, 85, 60, 38, 20, 7]) - >>> correlate(num.arange(8), 1+1j) + >>> correlate(np.arange(8), 1+1j) Traceback (most recent call last): ... TypeError: array cannot be safely cast to required type @@ -66,17 +66,17 @@ result_type = max(kernel.dtype.name, data.dtype.name) if mode == VALID: - wdata = num.concatenate((kdata, data, kdata)) + wdata = np.concatenate((kdata, data, kdata)) result = wdata.astype(result_type) _correlate.Correlate1d(kernel, wdata, result) return result[lenk+halfk:-lenk-halfk+even] elif mode == SAME: - wdata = num.concatenate((kdata, data, kdata)) + wdata = np.concatenate((kdata, data, kdata)) result = wdata.astype(result_type) _correlate.Correlate1d(kernel, wdata, result) return result[lenk:-lenk] elif mode == FULL: - wdata = num.concatenate((kdata, data, kdata)) + wdata = np.concatenate((kdata, data, kdata)) result = wdata.astype(result_type) _correlate.Correlate1d(kernel, wdata, result) return result[halfk+1:-halfk-1+even] @@ -102,25 +102,25 @@ sequences a and v; mode can be 0 (VALID), 1 (SAME), or 2 (FULL) to specify size of the resulting sequence. 
- >>> convolve(num.arange(8), [1, 2], mode=VALID) + >>> convolve(np.arange(8), [1, 2], mode=VALID) array([ 1, 4, 7, 10, 13, 16, 19]) - >>> convolve(num.arange(8), [1, 2], mode=SAME) + >>> convolve(np.arange(8), [1, 2], mode=SAME) array([ 0, 1, 4, 7, 10, 13, 16, 19]) - >>> convolve(num.arange(8), [1, 2], mode=FULL) + >>> convolve(np.arange(8), [1, 2], mode=FULL) array([ 0, 1, 4, 7, 10, 13, 16, 19, 14]) - >>> convolve(num.arange(8), [1, 2, 3], mode=VALID) + >>> convolve(np.arange(8), [1, 2, 3], mode=VALID) array([ 4, 10, 16, 22, 28, 34]) - >>> convolve(num.arange(8), [1, 2, 3], mode=SAME) + >>> convolve(np.arange(8), [1, 2, 3], mode=SAME) array([ 1, 4, 10, 16, 22, 28, 34, 32]) - >>> convolve(num.arange(8), [1, 2, 3], mode=FULL) + >>> convolve(np.arange(8), [1, 2, 3], mode=FULL) array([ 0, 1, 4, 10, 16, 22, 28, 34, 32, 21]) - >>> convolve(num.arange(8), [1, 2, 3, 4, 5, 6], mode=VALID) + >>> convolve(np.arange(8), [1, 2, 3, 4, 5, 6], mode=VALID) array([35, 56, 77]) - >>> convolve(num.arange(8), [1, 2, 3, 4, 5, 6], mode=SAME) + >>> convolve(np.arange(8), [1, 2, 3, 4, 5, 6], mode=SAME) array([ 4, 10, 20, 35, 56, 77, 90, 94]) - >>> convolve(num.arange(8), [1, 2, 3, 4, 5, 6], mode=FULL) + >>> convolve(np.arange(8), [1, 2, 3, 4, 5, 6], mode=FULL) array([ 0, 1, 4, 10, 20, 35, 56, 77, 90, 94, 88, 71, 42]) - >>> convolve([1.,2.], num.arange(10.)) + >>> convolve([1.,2.], np.arange(10.)) array([ 0., 1., 4., 7., 10., 13., 16., 19., 22., 25., 18.]) """ data, kernel = _condition_inputs(data, kernel) @@ -131,15 +131,15 @@ def _gaussian(sigma, mew, npoints, sigmas): - ox = num.arange(mew-sigmas*sigma, - mew+sigmas*sigma, - 2*sigmas*sigma/npoints, type=num.float64) + ox = np.arange(mew-sigmas*sigma, + mew+sigmas*sigma, + 2*sigmas*sigma/npoints, type=np.float64) x = ox-mew x /= sigma x = x * x x *= -1/2 - x = num.exp(x) - return ox, 1/(sigma * num.sqrt(2*num.pi)) * x + x = np.exp(x) + return ox, 1/(sigma * np.sqrt(2*np.pi)) * x def _correlate2d_fft(data0, kernel0, output=None, 
mode="nearest", cval=0.0): """_correlate2d_fft does 2d correlation of 'data' with 'kernel', storing @@ -153,17 +153,17 @@ """ shape = data0.shape kshape = kernel0.shape - oversized = (num.array(shape) + num.array(kshape)) + oversized = (np.array(shape) + np.array(kshape)) dy = kshape[0] // 2 dx = kshape[1] // 2 - kernel = num.zeros(oversized, dtype=num.float64) + kernel = np.zeros(oversized, dtype=np.float64) kernel[:kshape[0], :kshape[1]] = kernel0[::-1,::-1] # convolution <-> correlation data = iraf_frame.frame(data0, oversized, mode=mode, cval=cval) - complex_result = (isinstance(data, num.complexfloating) or - isinstance(kernel, num.complexfloating)) + complex_result = (isinstance(data, np.complexfloating) or + isinstance(kernel, np.complexfloating)) Fdata = dft.fft2(data) del data @@ -171,7 +171,7 @@ Fkernel = dft.fft2(kernel) del kernel - num.multiply(Fdata, Fkernel, Fdata) + np.multiply(Fdata, Fkernel, Fdata) del Fkernel if complex_result: @@ -196,14 +196,14 @@ commutative, _fix_data_kernel reverses kernel and data if necessary and panics if there's no good order. """ - data, kernel = map(num.asarray, [data, kernel]) - if num.rank(data) == 0: + data, kernel = map(np.asarray, [data, kernel]) + if np.rank(data) == 0: data.shape = (1,1) - elif num.rank(data) == 1: + elif np.rank(data) == 1: data.shape = (1,) + data.shape - if num.rank(kernel) == 0: + if np.rank(kernel) == 0: kernel.shape = (1,1) - elif num.rank(kernel) == 1: + elif np.rank(kernel) == 1: kernel.shape = (1,) + kernel.shape if (kernel.shape[0] > data.shape[0] and kernel.shape[1] > data.shape[1]): @@ -226,12 +226,12 @@ If fft is True, the correlation is performed using the FFT, else the correlation is performed using the naive approach. 
- >>> a = num.arange(20*20) + >>> a = np.arange(20*20) >>> a = a.reshape((20,20)) - >>> b = num.ones((5,5), dtype=num.float64) + >>> b = np.ones((5,5), dtype=np.float64) >>> rn = correlate2d(a, b, fft=0) >>> rf = correlate2d(a, b, fft=1) - >>> num.alltrue(num.ravel(rn-rf<1e-10)) + >>> np.alltrue(np.ravel(rn-rf<1e-10)) True """ data, kernel = _fix_data_kernel(data, kernel) @@ -252,12 +252,12 @@ 'reflect' elements beyond boundary come from reflection on same array edge. 'constant' elements beyond boundary are set to 'cval' - >>> a = num.arange(20*20) + >>> a = np.arange(20*20) >>> a = a.reshape((20,20)) - >>> b = num.ones((5,5), dtype=num.float64) + >>> b = np.ones((5,5), dtype=np.float64) >>> rn = convolve2d(a, b, fft=0) >>> rf = convolve2d(a, b, fft=1) - >>> num.alltrue(num.ravel(rn-rf<1e-10)) + >>> np.alltrue(np.ravel(rn-rf<1e-10)) True """ data, kernel = _fix_data_kernel(data, kernel) @@ -269,8 +269,8 @@ def _boxcar(data, output, boxshape, mode, cval): if len(boxshape) == 1: - _correlate.Boxcar2d(data[num.newaxis,...], 1, boxshape[0], - output[num.newaxis,...], mode, cval) + _correlate.Boxcar2d(data[np.newaxis,...], 1, boxshape[0], + output[np.newaxis,...], mode, cval) elif len(boxshape) == 2: _correlate.Boxcar2d(data, boxshape[0], boxshape[1], output, mode, cval) else: @@ -290,19 +290,19 @@ 'reflect' elements beyond boundary come from reflection on same array edge. 
'constant' elements beyond boundary are set to 'cval' - >>> boxcar(num.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="nearest").astype(num.longlong) + >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="nearest").astype(np.longlong) array([ 6, 3, 0, 0, 0, 333, 666], dtype=int64) - >>> boxcar(num.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="wrap").astype(num.longlong) + >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="wrap").astype(np.longlong) array([336, 3, 0, 0, 0, 333, 336], dtype=int64) - >>> boxcar(num.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="reflect").astype(num.longlong) + >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="reflect").astype(np.longlong) array([ 6, 3, 0, 0, 0, 333, 666], dtype=int64) - >>> boxcar(num.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="constant").astype(num.longlong) + >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="constant").astype(np.longlong) array([ 3, 3, 0, 0, 0, 333, 333], dtype=int64) - >>> a = num.zeros((10,10)) + >>> a = np.zeros((10,10)) >>> a[0,0] = 100 >>> a[5,5] = 1000 >>> a[9,9] = 10000 - >>> boxcar(a, (3,3)).astype(num.longlong) + >>> boxcar(a, (3,3)).astype(np.longlong) array([[ 44, 22, 0, 0, 0, 0, 0, 0, 0, 0], [ 22, 11, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], @@ -313,7 +313,7 @@ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 2222], [ 0, 0, 0, 0, 0, 0, 0, 0, 2222, 4444]], dtype=int64) - >>> boxcar(a, (3,3), mode="wrap").astype(num.longlong) + >>> boxcar(a, (3,3), mode="wrap").astype(np.longlong) array([[1122, 11, 0, 0, 0, 0, 0, 0, 1111, 1122], [ 11, 11, 0, 0, 0, 0, 0, 0, 0, 11], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], @@ -324,7 +324,7 @@ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1111, 0, 0, 0, 0, 0, 0, 0, 1111, 1111], [1122, 11, 0, 0, 0, 0, 0, 0, 1111, 1122]], dtype=int64) - >>> boxcar(a, (3,3), mode="reflect").astype(num.longlong) + >>> boxcar(a, (3,3), mode="reflect").astype(np.longlong) array([[ 44, 22, 0, 0, 0, 0, 0, 0, 0, 0], [ 22, 11, 0, 0, 0, 0, 
0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], @@ -335,7 +335,7 @@ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 2222], [ 0, 0, 0, 0, 0, 0, 0, 0, 2222, 4444]], dtype=int64) - >>> boxcar(a, (3,3), mode="constant").astype(num.longlong) + >>> boxcar(a, (3,3), mode="constant").astype(np.longlong) array([[ 11, 11, 0, 0, 0, 0, 0, 0, 0, 0], [ 11, 11, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], @@ -347,9 +347,9 @@ [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 1111], [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 1111]], dtype=int64) - >>> a = num.zeros((10,10)) + >>> a = np.zeros((10,10)) >>> a[3:6,3:6] = 111 - >>> boxcar(a, (3,3)).astype(num.longlong) + >>> boxcar(a, (3,3)).astype(np.longlong) array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 12, 24, 37, 24, 12, 0, 0, 0], @@ -363,11 +363,11 @@ """ mode = pix_modes[ mode ] if output is None: - woutput = data.astype(num.float64) + woutput = data.astype(np.float64) else: woutput = output _fbroadcast(_boxcar, len(boxshape), data.shape, - (data, woutput), (boxshape, mode, cval)) + (data, woutput), (boxshape, mode, cval)) if output is None: return woutput Modified: trunk/scipy/stsci/convolve/lib/iraf_frame.py =================================================================== --- trunk/scipy/stsci/convolve/lib/iraf_frame.py 2008-09-08 18:57:39 UTC (rev 4704) +++ trunk/scipy/stsci/convolve/lib/iraf_frame.py 2008-09-09 13:46:55 UTC (rev 4705) @@ -1,4 +1,4 @@ -import numpy as num +import numpy as np """This module defines the function frame() which creates a framed copy of an input array with the boundary pixels @@ -12,7 +12,7 @@ and the contents of 'a' in the center. The boundary pixels are copied from the nearest edge pixel in 'a'. 
- >>> a = num.arange(16) + >>> a = np.arange(16) >>> a.shape=(4,4) >>> frame_nearest(a, (8,8)) array([[ 0, 0, 0, 1, 2, 3, 3, 3], @@ -26,8 +26,8 @@ """ - b = num.zeros(shape, dtype=a.dtype) - delta = (num.array(b.shape) - num.array(a.shape)) + b = np.zeros(shape, dtype=a.dtype) + delta = (np.array(b.shape) - np.array(a.shape)) dy = delta[0] // 2 dx = delta[1] // 2 my = a.shape[0] + dy @@ -51,7 +51,7 @@ and the contents of 'a' in the center. The boundary pixels are reflected from the nearest edge pixels in 'a'. - >>> a = num.arange(16) + >>> a = np.arange(16) >>> a.shape = (4,4) >>> frame_reflect(a, (8,8)) array([[ 5, 4, 4, 5, 6, 7, 7, 6], @@ -64,8 +64,8 @@ [ 9, 8, 8, 9, 10, 11, 11, 10]]) """ - b = num.zeros(shape, dtype=a.dtype) - delta = (num.array(b.shape) - num.array(a.shape)) + b = np.zeros(shape, dtype=a.dtype) + delta = (np.array(b.shape) - np.array(a.shape)) dy = delta[0] // 2 dx = delta[1] // 2 my = a.shape[0] + dy @@ -89,7 +89,7 @@ and the contents of 'a' in the center. The boundary pixels are wrapped around to the opposite edge pixels in 'a'. - >>> a = num.arange(16) + >>> a = np.arange(16) >>> a.shape=(4,4) >>> frame_wrap(a, (8,8)) array([[10, 11, 8, 9, 10, 11, 8, 9], @@ -103,8 +103,8 @@ """ - b = num.zeros(shape, dtype=a.dtype) - delta = (num.array(b.shape) - num.array(a.shape)) + b = np.zeros(shape, dtype=a.dtype) + delta = (np.array(b.shape) - np.array(a.shape)) dy = delta[0] // 2 dx = delta[1] // 2 my = a.shape[0] + dy @@ -128,7 +128,7 @@ and the contents of 'a' in the center. The boundary pixels are copied from the nearest edge pixel in 'a'. 
- >>> a = num.arange(16) + >>> a = np.arange(16) >>> a.shape=(4,4) >>> frame_constant(a, (8,8), cval=42) array([[42, 42, 42, 42, 42, 42, 42, 42], @@ -142,8 +142,8 @@ """ - b = num.zeros(shape, dtype=a.dtype) - delta = (num.array(b.shape) - num.array(a.shape)) + b = np.zeros(shape, dtype=a.dtype) + delta = (np.array(b.shape) - np.array(a.shape)) dy = delta[0] // 2 dx = delta[1] // 2 my = a.shape[0] + dy @@ -183,7 +183,7 @@ """unframe extracts the center slice of framed array 'a' which had 'shape' prior to framing.""" - delta = num.array(a.shape) - num.array(shape) + delta = np.array(a.shape) - np.array(shape) dy = delta[0]//2 dx = delta[1]//2 my = shape[0] + dy Modified: trunk/scipy/stsci/convolve/lib/lineshape.py =================================================================== --- trunk/scipy/stsci/convolve/lib/lineshape.py 2008-09-08 18:57:39 UTC (rev 4704) +++ trunk/scipy/stsci/convolve/lib/lineshape.py 2008-09-09 13:46:55 UTC (rev 4705) @@ -40,11 +40,8 @@ __date__ = "$Date: 2007/03/14 16:35:57 $"[7:-11] __version__ = "$Revision: 1.1 $"[11:-2] - -import numpy as num from convolve._lineshape import * - class Profile(object): """An base object to provide a convolution kernel.""" Modified: trunk/scipy/stsci/image/lib/_image.py =================================================================== --- trunk/scipy/stsci/image/lib/_image.py 2008-09-08 18:57:39 UTC (rev 4704) +++ trunk/scipy/stsci/image/lib/_image.py 2008-09-09 13:46:55 UTC (rev 4705) @@ -1,7 +1,6 @@ -import numpy as num +import numpy as np import scipy.stsci.convolve import scipy.stsci.convolve._correlate as _correlate -MLab=num def _translate(a, dx, dy, output=None, mode="nearest", cval=0.0): """_translate does positive sub-pixel shifts using bilinear interpolation.""" @@ -14,10 +13,8 @@ y = (1-dx) * dy z = dx * dy - kernel = num.array([ - [ z, y ], - [ x, w ], - ]) + kernel = np.array([[ z, y ], + [ x, w ]]) return convolve.correlate2d(a, kernel, output, mode, cval) @@ -33,7 +30,7 @@ 'reflect' 
elements beyond boundary come from reflection on same array edge. 'constant' elements beyond boundary are set to 'cval' """ - a = num.asarray(a) + a = np.asarray(a) sdx, sdy = -sdx, -sdy # Flip sign to match IRAF sign convention @@ -51,11 +48,11 @@ rotation = 0 dx, dy = abs(sdx), abs(sdy) - b = MLab.rot90(a, rotation) + b = np.rot90(a, rotation) c = _correlate.Shift2d(b, int(dx), int(dy), mode=convolve.pix_modes[mode]) d = _translate(c, dx % 1, dy % 1, output, mode, cval) if output is not None: - output._copyFrom(MLab.rot90(output, -rotation%4)) + output._copyFrom(np.rot90(output, -rotation%4)) else: - return MLab.rot90(d, -rotation % 4).astype(a.type()) + return np.rot90(d, -rotation % 4).astype(a.type()) Modified: trunk/scipy/stsci/image/lib/combine.py =================================================================== --- trunk/scipy/stsci/image/lib/combine.py 2008-09-08 18:57:39 UTC (rev 4704) +++ trunk/scipy/stsci/image/lib/combine.py 2008-09-09 13:46:55 UTC (rev 4705) @@ -1,10 +1,10 @@ -import numpy as num +import numpy as np from _combine import combine as _comb import operator as _operator def _combine_f(funcstr, arrays, output=None, outtype=None, nlow=0, nhigh=0, badmasks=None): - arrays = [ num.asarray(a) for a in arrays ] + arrays = [ np.asarray(a) for a in arrays ] shape = arrays[0].shape if output is None: if outtype is not None: @@ -44,7 +44,7 @@ indicates that a particular pixel is not to be included in the median calculation. - >>> a = num.arange(4) + >>> a = np.arange(4) >>> a = a.reshape((2,2)) >>> arrays = [a*16, a*4, a*2, a*8] >>> median(arrays) @@ -56,10 +56,10 @@ >>> median(arrays, nlow=1) array([[ 0, 8], [16, 24]]) - >>> median(arrays, outtype=num.float32) + >>> median(arrays, outtype=np.float32) array([[ 0., 6.], [ 12., 18.]], dtype=float32) - >>> bm = num.zeros((4,2,2), dtype=num.bool8) + >>> bm = np.zeros((4,2,2), dtype=np.bool8) >>> bm[2,...] 
= 1 >>> median(arrays, badmasks=bm) array([[ 0, 8], @@ -94,7 +94,7 @@ indicates that a particular pixel is not to be included in the average calculation. - >>> a = num.arange(4) + >>> a = np.arange(4) >>> a = a.reshape((2,2)) >>> arrays = [a*16, a*4, a*2, a*8] >>> average(arrays) @@ -106,10 +106,10 @@ >>> average(arrays, nlow=1) array([[ 0, 9], [18, 28]]) - >>> average(arrays, outtype=num.float32) + >>> average(arrays, outtype=np.float32) array([[ 0. , 7.5], [ 15. , 22.5]], dtype=float32) - >>> bm = num.zeros((4,2,2), dtype=num.bool8) + >>> bm = np.zeros((4,2,2), dtype=np.bool8) >>> bm[2,...] = 1 >>> average(arrays, badmasks=bm) array([[ 0, 9], @@ -145,7 +145,7 @@ indicates that a particular pixel is not to be included in the minimum calculation. - >>> a = num.arange(4) + >>> a = np.arange(4) >>> a = a.reshape((2,2)) >>> arrays = [a*16, a*4, a*2, a*8] >>> minimum(arrays) @@ -157,10 +157,10 @@ >>> minimum(arrays, nlow=1) array([[ 0, 4], [ 8, 12]]) - >>> minimum(arrays, outtype=num.float32) + >>> minimum(arrays, outtype=np.float32) array([[ 0., 2.], [ 4., 6.]], dtype=float32) - >>> bm = num.zeros((4,2,2), dtype=num.bool8) + >>> bm = np.zeros((4,2,2), dtype=np.bool8) >>> bm[2,...] = 1 >>> minimum(arrays, badmasks=bm) array([[ 0, 4], @@ -178,9 +178,9 @@ boolean value is true where each of the arrays values is < the low or >= the high threshholds. 
- >>> a=num.arange(100) + >>> a=np.arange(100) >>> a=a.reshape((10,10)) - >>> (threshhold(a, 1, 50)).astype(num.int8) + >>> (threshhold(a, 1, 50)).astype(np.int8) array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], @@ -191,7 +191,7 @@ [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int8) - >>> (threshhold([ range(10)]*10, 3, 7)).astype(num.int8) + >>> (threshhold([ range(10)]*10, 3, 7)).astype(np.int8) array([[1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], @@ -202,7 +202,7 @@ [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]], dtype=int8) - >>> (threshhold(a, high=50)).astype(num.int8) + >>> (threshhold(a, high=50)).astype(np.int8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], @@ -213,7 +213,7 @@ [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int8) - >>> (threshhold(a, low=50)).astype(num.int8) + >>> (threshhold(a, low=50)).astype(np.int8) array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], @@ -227,12 +227,12 @@ """ - if not isinstance(arrays[0], num.ndarray): - return threshhold( num.asarray(arrays), low, high, outputs) + if not isinstance(arrays[0], np.ndarray): + return threshhold( np.asarray(arrays), low, high, outputs) if outputs is None: - outs = num.zeros(shape=(len(arrays),)+arrays[0].shape, - dtype=num.bool8) + outs = np.zeros(shape=(len(arrays),)+arrays[0].shape, + dtype=np.bool8) else: outs = outputs @@ -241,12 +241,12 @@ out[:] = 0 if high is not None: - num.greater_equal(a, high, out) + np.greater_equal(a, high, out) if low is not None: - num.logical_or(out, a < low, out) + np.logical_or(out, a < low, out) else: if low is not None: - num.less(a, low, out) + np.less(a, low, out) if outputs is 
None: return outs @@ -254,16 +254,16 @@ def _bench(): """time a 10**6 element median""" import time - a = num.arange(10**6) + a = np.arange(10**6) a = a.reshape((1000, 1000)) arrays = [a*2, a*64, a*16, a*8] t0 = time.clock() median(arrays) print "maskless:", time.clock()-t0 - a = num.arange(10**6) + a = np.arange(10**6) a = a.reshape((1000, 1000)) arrays = [a*2, a*64, a*16, a*8] t0 = time.clock() - median(arrays, badmasks=num.zeros((1000,1000), dtype=num.bool8)) + median(arrays, badmasks=np.zeros((1000,1000), dtype=np.bool8)) print "masked:", time.clock()-t0 From scipy-svn at scipy.org Tue Sep 9 09:55:15 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 08:55:15 -0500 (CDT) Subject: [Scipy-svn] r4706 - in trunk/scipy/sparse/linalg: dsolve/umfpack dsolve/umfpack/tests isolve Message-ID: <20080909135515.6F79939C23D@scipy.org> Author: alan.mcintyre Date: 2008-09-09 08:55:11 -0500 (Tue, 09 Sep 2008) New Revision: 4706 Modified: trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py trunk/scipy/sparse/linalg/isolve/iterative.py Log: Standardize NumPy import as "import numpy as np". 
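The `threshhold` routine patched in r4705 above reduces to a couple of NumPy ufunc calls. The following is not the scipy function itself, just a minimal sketch of the same masking logic (simplified: single input array, no preallocated `outputs`):

```python
import numpy as np

def threshold_mask(a, low=None, high=None):
    """Boolean mask, True where a < low or a >= high (as in combine.threshhold)."""
    a = np.asarray(a)
    out = np.zeros(a.shape, dtype=bool)
    if high is not None:
        np.greater_equal(a, high, out)   # mark values at or above the high cut
    if low is not None:
        np.logical_or(out, a < low, out) # also mark values below the low cut
    return out

a = np.arange(100).reshape((10, 10))
mask = threshold_mask(a, low=1, high=50)
```

Writing into a preallocated `out` with the ufunc `out` argument, as the real code does, avoids temporaries when thresholding many large arrays.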
Modified: trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py =================================================================== --- trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py 2008-09-09 13:46:55 UTC (rev 4705) +++ trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py 2008-09-09 13:55:11 UTC (rev 4706) @@ -18,7 +18,7 @@ warnings.simplefilter('ignore',SparseEfficiencyWarning) -import numpy as nm +import numpy as np try: import scipy.sparse.linalg.dsolve.umfpack as um except (ImportError, AttributeError): @@ -174,7 +174,7 @@ self.real_matrices = [csc_matrix(x).astype('d') for x \ in self.real_matrices] - self.complex_matrices = [x.astype(nm.complex128) + self.complex_matrices = [x.astype(np.complex128) for x in self.real_matrices] # Skip methods if umfpack not present Modified: trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py =================================================================== --- trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 2008-09-09 13:46:55 UTC (rev 4705) +++ trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 2008-09-09 13:55:11 UTC (rev 4706) @@ -7,7 +7,7 @@ #from base import Struct, pause -import numpy as nm +import numpy as np import scipy.sparse as sp import re, imp try: # Silence import error. @@ -275,8 +275,8 @@ raise TypeError, 'wrong family: %s' % family self.family = family - self.control = nm.zeros( (UMFPACK_CONTROL, ), dtype = nm.double ) - self.info = nm.zeros( (UMFPACK_INFO, ), dtype = nm.double ) + self.control = np.zeros( (UMFPACK_CONTROL, ), dtype = np.double ) + self.info = np.zeros( (UMFPACK_INFO, ), dtype = np.double ) self._symbolic = None self._numeric = None self.mtx = None @@ -328,19 +328,19 @@ ## # Should check types of indices to correspond to familyTypes. 
if self.family[1] == 'i': - if (indx.dtype != nm.dtype('i')) \ - or mtx.indptr.dtype != nm.dtype('i'): + if (indx.dtype != np.dtype('i')) \ + or mtx.indptr.dtype != np.dtype('i'): raise ValueError, 'matrix must have int indices' else: - if (indx.dtype != nm.dtype('l')) \ - or mtx.indptr.dtype != nm.dtype('l'): + if (indx.dtype != np.dtype('l')) \ + or mtx.indptr.dtype != np.dtype('l'): raise ValueError, 'matrix must have long indices' if self.isReal: - if mtx.data.dtype != nm.dtype(' Author: alan.mcintyre Date: 2008-09-09 09:16:57 -0500 (Tue, 09 Sep 2008) New Revision: 4707 Modified: trunk/scipy/io/arff/tests/test_data.py trunk/scipy/linalg/matfuncs.py trunk/scipy/signal/wavelets.py trunk/scipy/weave/accelerate_tools.py Log: Standardize NumPy import as "import numpy as np". Removed unused numpy import. Modified: trunk/scipy/io/arff/tests/test_data.py =================================================================== --- trunk/scipy/io/arff/tests/test_data.py 2008-09-09 13:55:11 UTC (rev 4706) +++ trunk/scipy/io/arff/tests/test_data.py 2008-09-09 14:16:57 UTC (rev 4707) @@ -2,7 +2,6 @@ """Tests for parsing full arff files.""" import os -import numpy as N from numpy.testing import * from scipy.io.arff.arffread import loadarff Modified: trunk/scipy/linalg/matfuncs.py =================================================================== --- trunk/scipy/linalg/matfuncs.py 2008-09-09 13:55:11 UTC (rev 4706) +++ trunk/scipy/linalg/matfuncs.py 2008-09-09 14:16:57 UTC (rev 4707) @@ -12,12 +12,12 @@ cast, log, ogrid, isfinite, imag, real, absolute, amax, sign, \ isfinite, sqrt, identity, single from numpy import matrix as mat -import numpy as sb +import numpy as np from basic import solve, inv, norm, triu, all_mat from decomp import eig, schur, rsf2csf, orth, svd -eps = sb.finfo(float).eps -feps = sb.finfo(single).eps +eps = np.finfo(float).eps +feps = np.finfo(single).eps def expm(A,q=7): """Compute the matrix exponential using Pade approximation. 
@@ -142,7 +142,7 @@ if tol is None: tol = {0:feps*1e3, 1:eps*1e6}[_array_precision[arr.dtype.char]] if (arr.dtype.char in ['F', 'D','G']) and \ - sb.allclose(arr.imag, 0.0, atol=tol): + np.allclose(arr.imag, 0.0, atol=tol): arr = arr.real return arr @@ -455,11 +455,11 @@ # Shifting to avoid zero eigenvalues. How to ensure that shifting does # not change the spectrum too much? vals = svd(a,compute_uv=0) - max_sv = sb.amax(vals) + max_sv = np.amax(vals) #min_nonzero_sv = vals[(vals>max_sv*errtol).tolist().count(1)-1] #c = 0.5/min_nonzero_sv c = 0.5/max_sv - S0 = a + c*sb.identity(a.shape[0]) + S0 = a + c*np.identity(a.shape[0]) prev_errest = errest for i in range(100): iS0 = inv(S0) @@ -508,7 +508,7 @@ T, Z = rsf2csf(T,Z) n,n = T.shape - R = sb.zeros((n,n),T.dtype.char) + R = np.zeros((n,n),T.dtype.char) for j in range(n): R[j,j] = sqrt(T[j,j]) for i in range(j-1,-1,-1): @@ -521,7 +521,7 @@ X = (Z * R * Z.H) if disp: - nzeig = sb.any(sb.diag(T)==0) + nzeig = np.any(diag(T)==0) if nzeig: print "Matrix is singular and may not have a square root." return X.A Modified: trunk/scipy/signal/wavelets.py =================================================================== --- trunk/scipy/signal/wavelets.py 2008-09-09 13:55:11 UTC (rev 4706) +++ trunk/scipy/signal/wavelets.py 2008-09-09 14:16:57 UTC (rev 4707) @@ -1,6 +1,6 @@ __all__ = ['daub','qmf','cascade','morlet'] -import numpy as sb +import numpy as np from numpy.dual import eig from scipy.misc import comb from scipy import linspace, pi, exp, zeros @@ -10,36 +10,36 @@ p>=1 gives the order of the zero at f=1/2. There are 2p filter coefficients. 
""" - sqrt = sb.sqrt + sqrt = np.sqrt assert(p>=1) if p==1: c = 1/sqrt(2) - return sb.array([c,c]) + return np.array([c,c]) elif p==2: f = sqrt(2)/8 c = sqrt(3) - return f*sb.array([1+c,3+c,3-c,1-c]) + return f*np.array([1+c,3+c,3-c,1-c]) elif p==3: tmp = 12*sqrt(10) z1 = 1.5 + sqrt(15+tmp)/6 - 1j*(sqrt(15)+sqrt(tmp-15))/6 - z1c = sb.conj(z1) + z1c = np.conj(z1) f = sqrt(2)/8 - d0 = sb.real((1-z1)*(1-z1c)) - a0 = sb.real(z1*z1c) - a1 = 2*sb.real(z1) - return f/d0*sb.array([a0, 3*a0-a1, 3*a0-3*a1+1, a0-3*a1+3, 3-a1, 1]) + d0 = np.real((1-z1)*(1-z1c)) + a0 = np.real(z1*z1c) + a1 = 2*np.real(z1) + return f/d0*np.array([a0, 3*a0-a1, 3*a0-3*a1+1, a0-3*a1+3, 3-a1, 1]) elif p<35: # construct polynomial and factor it if p<35: P = [comb(p-1+k,k,exact=1) for k in range(p)][::-1] - yj = sb.roots(P) + yj = np.roots(P) else: # try different polynomial --- needs work P = [comb(p-1+k,k,exact=1)/4.0**k for k in range(p)][::-1] - yj = sb.roots(P) / 4 + yj = np.roots(P) / 4 # for each root, compute two z roots, select the one with |z|>1 # Build up final polynomial - c = sb.poly1d([1,1])**p - q = sb.poly1d([1]) + c = np.poly1d([1,1])**p + q = np.poly1d([1]) for k in range(p-1): yval = yj[k] part = 2*sqrt(yval*(yval-1)) @@ -49,9 +49,9 @@ z1 = const - part q = q * [1,-z1] - q = c * sb.real(q) + q = c * np.real(q) # Normalize result - q = q / sb.sum(q) * sqrt(2) + q = q / np.sum(q) * sqrt(2) return q.c[::-1] else: raise ValueError, "Polynomial factorization does not work "\ @@ -62,7 +62,7 @@ """ N = len(hk)-1 asgn = [{0:1,1:-1}[k%2] for k in range(N+1)] - return hk[::-1]*sb.array(asgn) + return hk[::-1]*np.array(asgn) def wavedec(amn,hk): gk = qmf(hk) @@ -96,55 +96,55 @@ N = len(hk)-1 - if (J > 30 - sb.log2(N+1)): + if (J > 30 - np.log2(N+1)): raise ValueError, "Too many levels." if (J < 1): raise ValueError, "Too few levels." 
# construct matrices needed - nn,kk = sb.ogrid[:N,:N] - s2 = sb.sqrt(2) + nn,kk = np.ogrid[:N,:N] + s2 = np.sqrt(2) # append a zero so that take works - thk = sb.r_[hk,0] + thk = np.r_[hk,0] gk = qmf(hk) - tgk = sb.r_[gk,0] + tgk = np.r_[gk,0] - indx1 = sb.clip(2*nn-kk,-1,N+1) - indx2 = sb.clip(2*nn-kk+1,-1,N+1) - m = sb.zeros((2,2,N,N),'d') - m[0,0] = sb.take(thk,indx1,0) - m[0,1] = sb.take(thk,indx2,0) - m[1,0] = sb.take(tgk,indx1,0) - m[1,1] = sb.take(tgk,indx2,0) + indx1 = np.clip(2*nn-kk,-1,N+1) + indx2 = np.clip(2*nn-kk+1,-1,N+1) + m = np.zeros((2,2,N,N),'d') + m[0,0] = np.take(thk,indx1,0) + m[0,1] = np.take(thk,indx2,0) + m[1,0] = np.take(tgk,indx1,0) + m[1,1] = np.take(tgk,indx2,0) m *= s2 # construct the grid of points - x = sb.arange(0,N*(1< Author: cdavid Date: 2008-09-09 12:11:12 -0500 (Tue, 09 Sep 2008) New Revision: 4708 Added: trunk/tools/ trunk/tools/win32/ trunk/tools/win32/build_scripts/ trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Start the prepare bootstrap for scipy. 
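The `get_svn_version` helper added in r4708 scrapes the revision number out of `svn info` output with a regex. The parsing core can be exercised standalone; the sample output string below is illustrative, not real `svn info` output:

```python
import re

def parse_svn_revision(svn_info_output):
    # scan each line of `svn info` output for "Revision: NNNN"
    r = re.compile(r'Revision: ([0-9]+)')
    for line in svn_info_output.split('\n'):
        m = r.match(line)
        if m:
            return m.group(1)
    raise ValueError("could not parse svn revision")

sample = "Path: .\nURL: http://svn.scipy.org/svn/scipy/trunk\nRevision: 4708\nNode Kind: directory\n"
rev = parse_svn_revision(sample)  # "4708"
```

Note that `re.match` anchors at the start of the line, so a "Last Changed Rev:" line would not match; that is the behavior the committed code relies on.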
Added: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 14:16:57 UTC (rev 4707) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:11:12 UTC (rev 4708) @@ -0,0 +1,55 @@ +import os +import subprocess +from os.path import join as pjoin, split as psplit, dirname, exists as pexists +import re + +def get_svn_version(chdir): + out = subprocess.Popen(['svn', 'info'], + stdout = subprocess.PIPE, cwd = chdir).communicate()[0] + r = re.compile('Revision: ([0-9]+)') + svnver = None + for line in out.split('\n'): + m = r.match(line) + if m: + svnver = m.group(1) + + if not svnver: + raise ValueError("Error while parsing svn version ?") + + return svnver + +def get_scipy_version(chdir): + version_file = pjoin(chdir, "scipy", "version.py") + if not pexists(version_file): + raise IOError("file %s not found" % version_file) + + fid = open(version_file, "r") + vregex = re.compile("version\s*=\s*'(\d+)\.(\d+)\.(\d+)'") + isrelregex = re.compile("release\s*=\s*True") + isdevregex = re.compile("release\s*=\s*False") + isdev = None + version = None + for line in fid.readlines(): + m = vregex.match(line) + if m: + version = [int(i) for i in m.groups()] + if isrelregex.match(line): + if isdev is None: + isdev = False + else: + raise RuntimeError("isdev already set ?") + if isdevregex.match(line): + if isdev is None: + isdev = True + else: + raise RuntimeError("isdev already set ?") + + verstr = ".".join([str(i) for i in version]) + if isdev: + verstr += "dev" + return verstr + +if __name__ == '__main__': + ROOT = os.path.join("..", "..", "..") + print get_scipy_version(ROOT) + print get_svn_version(ROOT) From scipy-svn at scipy.org Tue Sep 9 13:12:34 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:12:34 -0500 (CDT) Subject: [Scipy-svn] r4709 - trunk/tools/win32/build_scripts Message-ID: 
<20080909171234.CF6FC39C4BD@scipy.org> Author: cdavid Date: 2008-09-09 12:12:17 -0500 (Tue, 09 Sep 2008) New Revision: 4709 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Add func to build scipy tarball. Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:11:12 UTC (rev 4708) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:12:17 UTC (rev 4709) @@ -3,6 +3,17 @@ from os.path import join as pjoin, split as psplit, dirname, exists as pexists import re +def build_sdist(chdir): + cwd = os.getcwd() + try: + os.chdir(chdir) + cmd = ["python", "setup.py", "sdist", "--format=zip"] + subprocess.call(cmd) + except Exception, e: + raise RuntimeError("Error while executing cmd (%s)" % e) + finally: + os.chdir(cwd) + def get_svn_version(chdir): out = subprocess.Popen(['svn', 'info'], stdout = subprocess.PIPE, cwd = chdir).communicate()[0] @@ -51,5 +62,4 @@ if __name__ == '__main__': ROOT = os.path.join("..", "..", "..") - print get_scipy_version(ROOT) - print get_svn_version(ROOT) + print build_sdist(ROOT) From scipy-svn at scipy.org Tue Sep 9 13:13:19 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:13:19 -0500 (CDT) Subject: [Scipy-svn] r4710 - trunk/tools/win32/build_scripts Message-ID: <20080909171319.1C8A639C4BD@scipy.org> Author: cdavid Date: 2008-09-09 12:12:59 -0500 (Tue, 09 Sep 2008) New Revision: 4710 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Fix missing dot between release version and dev.
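The version-string logic that r4710 and r4711 adjust can be captured in a small helper. `format_version` is a hypothetical name mirroring the inline `verstr` code in prepare_bootstrap.py, not a function from the script:

```python
def format_version(version, isdev, svnver=None):
    # version is e.g. [0, 7, 0]; isdev marks a development snapshot
    verstr = ".".join(str(i) for i in version)
    if isdev:
        verstr += ".dev"        # r4710: without the dot this read "0.7.0dev"
        if svnver is not None:
            verstr += svnver    # r4711: append the svn revision for dev builds
    return verstr

print(format_version([0, 7, 0], True, "4711"))  # "0.7.0.dev4711"
```

The committed code fetches the revision with `get_svn_version` rather than taking it as a parameter; the parameter here just keeps the sketch self-contained.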
Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:12:17 UTC (rev 4709) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:12:59 UTC (rev 4710) @@ -57,7 +57,7 @@ verstr = ".".join([str(i) for i in version]) if isdev: - verstr += "dev" + verstr += ".dev" return verstr if __name__ == '__main__': From scipy-svn at scipy.org Tue Sep 9 13:14:11 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:14:11 -0500 (CDT) Subject: [Scipy-svn] r4711 - trunk/tools/win32/build_scripts Message-ID: <20080909171411.900A939C4BD@scipy.org> Author: cdavid Date: 2008-09-09 12:13:49 -0500 (Tue, 09 Sep 2008) New Revision: 4711 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Full version string correctly generated. Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:12:59 UTC (rev 4710) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:13:49 UTC (rev 4711) @@ -58,6 +58,7 @@ verstr = ".".join([str(i) for i in version]) if isdev: verstr += ".dev" + verstr += get_svn_version(ROOT) return verstr if __name__ == '__main__': From scipy-svn at scipy.org Tue Sep 9 13:14:56 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:14:56 -0500 (CDT) Subject: [Scipy-svn] r4712 - trunk/tools/win32/build_scripts Message-ID: <20080909171456.0332E39C2ED@scipy.org> Author: cdavid Date: 2008-09-09 12:14:37 -0500 (Tue, 09 Sep 2008) New Revision: 4712 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Bootstrap script can now prepare scipy sources.
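The `prepare_scipy_sources` function introduced in r4712 unpacks the sdist zip while dropping the leading `scipy-VERSION/` path component. A self-contained sketch of that extraction step (the helper name and the in-memory toy zip are mine):

```python
import io
import os
import tempfile
from zipfile import ZipFile

def extract_stripping_root(zip_bytes, dest):
    """Extract every member, dropping the leading 'scipy-VERSION/' component."""
    zid = ZipFile(io.BytesIO(zip_bytes))
    for name in zid.namelist():
        if '/' not in name:
            continue                       # no root prefix to strip
        stripped = name.split('/', 1)[1]   # zip member paths always use '/'
        if not stripped:
            continue                       # skip the bare root-directory entry
        newname = os.path.join(dest, stripped)
        d = os.path.dirname(newname)
        if d and not os.path.exists(d):
            os.makedirs(d)
        with open(newname, 'wb') as fid:
            fid.write(zid.read(name))

# build a toy sdist-like zip in memory and unpack it
buf = io.BytesIO()
with ZipFile(buf, 'w') as z:
    z.writestr('scipy-0.7.0/scipy/version.py', "version = '0.7.0'\n")
dest = tempfile.mkdtemp()
extract_stripping_root(buf.getvalue(), dest)
```

As the committed code's XXX comment wonders, zip member names use `/` even on Windows, so splitting on `/` is safe here.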
Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:13:49 UTC (rev 4711) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:14:37 UTC (rev 4712) @@ -1,8 +1,17 @@ import os +import shutil import subprocess from os.path import join as pjoin, split as psplit, dirname, exists as pexists import re +from zipfile import ZipFile +def get_sdist_tarball(src_root): + """Return the name of the installer built by sdist command.""" + # Yeah, the name logic is harcoded in distutils. We have to reproduce it + # here + name = "scipy-%s.zip" % get_scipy_version(src_root) + return name + def build_sdist(chdir): cwd = os.getcwd() try: @@ -14,6 +23,24 @@ finally: os.chdir(cwd) +def prepare_scipy_sources(src_root, bootstrap = 'bootstrap'): + zid = ZipFile(pjoin(src_root, 'dist', get_sdist_tarball(src_root))) + root = 'scipy-%s' % get_scipy_version(src_root) + + # From the sdist-built tarball, extract all files into bootstrap directory, + # but removing the numpy-VERSION head path + for name in zid.namelist(): + cnt = zid.read(name) + if name.startswith(root): + # XXX: even on windows, the path sep in zip is '/' ? 
+ name = name.split('/', 1)[1] + newname = pjoin(bootstrap, name) + + if not os.path.exists(dirname(newname)): + os.makedirs(dirname(newname)) + fid = open(newname, 'wb') + fid.write(cnt) + def get_svn_version(chdir): out = subprocess.Popen(['svn', 'info'], stdout = subprocess.PIPE, cwd = chdir).communicate()[0] @@ -61,6 +88,15 @@ verstr += get_svn_version(ROOT) return verstr +def prepare_bootstrap(src_root, pyver): + bootstrap = "bootstrap-%s" % pyver + if os.path.exists(bootstrap): + shutil.rmtree(bootstrap) + os.makedirs(bootstrap) + + build_sdist(src_root) + prepare_scipy_sources(src_root, bootstrap) + if __name__ == '__main__': ROOT = os.path.join("..", "..", "..") - print build_sdist(ROOT) + prepare_bootstrap(ROOT, "2.5") From scipy-svn at scipy.org Tue Sep 9 13:15:43 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:15:43 -0500 (CDT) Subject: [Scipy-svn] r4713 - trunk/tools/win32/build_scripts Message-ID: <20080909171543.DCCE139C2ED@scipy.org> Author: cdavid Date: 2008-09-09 12:15:25 -0500 (Tue, 09 Sep 2008) New Revision: 4713 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Remove global var reference. 
Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:14:37 UTC (rev 4712) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:15:25 UTC (rev 4713) @@ -56,8 +56,8 @@ return svnver -def get_scipy_version(chdir): - version_file = pjoin(chdir, "scipy", "version.py") +def get_scipy_version(src_root): + version_file = pjoin(src_root, "scipy", "version.py") if not pexists(version_file): raise IOError("file %s not found" % version_file) @@ -85,7 +85,7 @@ verstr = ".".join([str(i) for i in version]) if isdev: verstr += ".dev" - verstr += get_svn_version(ROOT) + verstr += get_svn_version(src_root) return verstr def prepare_bootstrap(src_root, pyver): From scipy-svn at scipy.org Tue Sep 9 13:16:35 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:16:35 -0500 (CDT) Subject: [Scipy-svn] r4714 - trunk/tools/win32/build_scripts Message-ID: <20080909171635.C964A39C2ED@scipy.org> Author: cdavid Date: 2008-09-09 12:16:19 -0500 (Tue, 09 Sep 2008) New Revision: 4714 Added: trunk/tools/win32/build_scripts/build.py Log: Add build script for scipy. Added: trunk/tools/win32/build_scripts/build.py =================================================================== --- trunk/tools/win32/build_scripts/build.py 2008-09-09 17:15:25 UTC (rev 4713) +++ trunk/tools/win32/build_scripts/build.py 2008-09-09 17:16:19 UTC (rev 4714) @@ -0,0 +1,175 @@ +"""Python script to build windows binaries to be fed to the "superpack". 
+ +The script is pretty dumb: it assumes python executables are installed the +standard way, and the location for blas/lapack/atlas is harcoded.""" + +import sys +import subprocess +import os +import shutil +from os.path import join as pjoin, split as psplit, dirname + +PYEXECS = {"2.5" : "C:\python25\python.exe", + "2.4" : "C:\python24\python24.exe", + "2.3" : "C:\python23\python23.exe"} + +_SSE3_CFG = r"""[atlas] +library_dirs = C:\local\lib\yop\sse3""" +_SSE2_CFG = r"""[atlas] +library_dirs = C:\local\lib\yop\sse2""" +_NOSSE_CFG = r"""[DEFAULT] +library_dirs = C:\local\lib\yop\nosse""" + +SITECFG = {"sse2" : _SSE2_CFG, "sse3" : _SSE3_CFG, "nosse" : _NOSSE_CFG} + +def get_svn_version(chdir): + out = subprocess.Popen(['svn', 'info'], + stdout = subprocess.PIPE, cwd = chdir).communicate()[0] + r = re.compile('Revision: ([0-9]+)') + svnver = None + for line in out.split('\n'): + m = r.match(line) + if m: + svnver = m.group(1) + + if not svnver: + raise ValueError("Error while parsing svn version ?") + + return svnver + +def get_scipy_version(src_root): + version_file = pjoin(src_root, "scipy", "version.py") + if not pexists(version_file): + raise IOError("file %s not found" % version_file) + + fid = open(version_file, "r") + vregex = re.compile("version\s*=\s*'(\d+)\.(\d+)\.(\d+)'") + isrelregex = re.compile("release\s*=\s*True") + isdevregex = re.compile("release\s*=\s*False") + isdev = None + version = None + for line in fid.readlines(): + m = vregex.match(line) + if m: + version = [int(i) for i in m.groups()] + if isrelregex.match(line): + if isdev is None: + isdev = False + else: + raise RuntimeError("isdev already set ?") + if isdevregex.match(line): + if isdev is None: + isdev = True + else: + raise RuntimeError("isdev already set ?") + + verstr = ".".join([str(i) for i in version]) + if isdev: + verstr += ".dev" + verstr += get_svn_version(src_root) + return verstr + +def get_python_exec(ver): + """Return the executable of python for the given version.""" + # 
XXX Check that the file actually exists + try: + return PYEXECS[ver] + except KeyError: + raise ValueError("Version %s not supported/recognized" % ver) + +def get_clean(): + if os.path.exists("build"): + shutil.rmtree("build") + if os.path.exists("dist"): + shutil.rmtree("dist") + +def write_site_cfg(arch): + if os.path.exists("site.cfg"): + os.remove("site.cfg") + f = open("site.cfg", 'w') + f.writelines(SITECFG[arch]) + f.close() + +def build(arch, pyver): + print "Building numpy binary for python %s, arch is %s" % (get_python_exec(pyver), arch) + get_clean() + write_site_cfg(arch) + + if BUILD_MSI: + cmd = "%s setup.py build -c mingw32 bdist_msi" % get_python_exec(pyver) + else: + cmd = "%s setup.py build -c mingw32 bdist_wininst" % get_python_exec(pyver) + build_log = "build-%s-%s.log" % (arch, pyver) + f = open(build_log, 'w') + + try: + try: + subprocess.check_call(cmd, shell = True, stderr = subprocess.STDOUT, stdout = f) + finally: + f.close() + except subprocess.CalledProcessError, e: + msg = """ +There was an error while executing the following command: + + %s + +Error was : %s + +Look at the build log (%s).""" % (cmd, str(e), build_log) + raise Exception(msg) + + move_binary(arch, pyver) + +def move_binary(arch, pyver): + if not os.path.exists("binaries"): + os.makedirs("binaries") + + shutil.move(os.path.join('dist', get_windist_exec(pyver)), + os.path.join("binaries", get_binary_name(arch))) + +def get_binary_name(arch): + if BUILD_MSI: + ext = '.msi' + else: + ext = '.exe' + return "numpy-%s-%s%s" % (get_scipy_version(), arch, ext) + +def get_windist_exec(pyver): + """Return the name of the installer built by wininst command.""" + # Yeah, the name logic is harcoded in distutils. 
We have to reproduce it + # here + if BUILD_MSI: + ext = '.msi' + else: + ext = '.exe' + name = "numpy-%s.win32-py%s%s" % (get_numpy_version(ROOT), pyver, ext) + return name + +if __name__ == '__main__': + ROOT = pjoin("..", "..", "..", "..") + from optparse import OptionParser + parser = OptionParser() + parser.add_option("-a", "--arch", dest="arch", + help = "Architecture to build (sse2, sse3, nosse, etc...)") + parser.add_option("-p", "--pyver", dest="pyver", + help = "Python version (2.4, 2.5, etc...)") + parser.add_option("-m", "--build-msi", dest="msi", + help = "0 or 1. If 1, build a msi instead of an exe.") + + opts, args = parser.parse_args() + arch = opts.arch + pyver = opts.pyver + msi = opts.msi + + if not pyver: + pyver = "2.5" + if not msi: + BUILD_MSI = False + else: + BUILD_MSI = True + + if not arch: + for arch in SITECFG.keys(): + build(arch, pyver) + else: + build(arch, pyver) From scipy-svn at scipy.org Tue Sep 9 13:17:26 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:17:26 -0500 (CDT) Subject: [Scipy-svn] r4715 - trunk/tools/win32/build_scripts Message-ID: <20080909171726.EC01A39C2ED@scipy.org> Author: cdavid Date: 2008-09-09 12:17:02 -0500 (Tue, 09 Sep 2008) New Revision: 4715 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Copy build script in bootstrapped sources. 
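The build.py added in r4714 above picks the ATLAS build variant by rewriting `site.cfg` from a dictionary of config fragments before each build. The mechanism in isolation, with the library paths hard-coded exactly as in the script:

```python
import os
import tempfile

_SSE3_CFG = "[atlas]\nlibrary_dirs = C:\\local\\lib\\yop\\sse3"
_SSE2_CFG = "[atlas]\nlibrary_dirs = C:\\local\\lib\\yop\\sse2"
_NOSSE_CFG = "[DEFAULT]\nlibrary_dirs = C:\\local\\lib\\yop\\nosse"
SITECFG = {"sse2": _SSE2_CFG, "sse3": _SSE3_CFG, "nosse": _NOSSE_CFG}

def write_site_cfg(arch, path="site.cfg"):
    # overwrite any stale site.cfg with the fragment for the requested arch
    if os.path.exists(path):
        os.remove(path)
    with open(path, "w") as f:
        f.write(SITECFG[arch])

cfg_path = os.path.join(tempfile.mkdtemp(), "site.cfg")
write_site_cfg("sse2", cfg_path)
```

numpy.distutils reads this `site.cfg` at build time, which is why the script regenerates it between the per-arch builds rather than passing flags on the command line.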
Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:16:19 UTC (rev 4714) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:17:02 UTC (rev 4715) @@ -97,6 +97,8 @@ build_sdist(src_root) prepare_scipy_sources(src_root, bootstrap) + shutil.copy('build.py', bootstrap) + if __name__ == '__main__': ROOT = os.path.join("..", "..", "..") prepare_bootstrap(ROOT, "2.5") From scipy-svn at scipy.org Tue Sep 9 13:18:25 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:18:25 -0500 (CDT) Subject: [Scipy-svn] r4716 - trunk/tools/win32/build_scripts Message-ID: <20080909171825.67C9539C2ED@scipy.org> Author: cdavid Date: 2008-09-09 12:17:57 -0500 (Tue, 09 Sep 2008) New Revision: 4716 Modified: trunk/tools/win32/build_scripts/build.py Log: Remove numpy references in scipy build script. Modified: trunk/tools/win32/build_scripts/build.py =================================================================== --- trunk/tools/win32/build_scripts/build.py 2008-09-09 17:17:02 UTC (rev 4715) +++ trunk/tools/win32/build_scripts/build.py 2008-09-09 17:17:57 UTC (rev 4716) @@ -91,7 +91,7 @@ f.close() def build(arch, pyver): - print "Building numpy binary for python %s, arch is %s" % (get_python_exec(pyver), arch) + print "Building scipy binary for python %s, arch is %s" % (get_python_exec(pyver), arch) get_clean() write_site_cfg(arch) @@ -132,7 +132,7 @@ ext = '.msi' else: ext = '.exe' - return "numpy-%s-%s%s" % (get_scipy_version(), arch, ext) + return "scipy-%s-%s%s" % (get_scipy_version(), arch, ext) def get_windist_exec(pyver): """Return the name of the installer built by wininst command.""" @@ -142,7 +142,7 @@ ext = '.msi' else: ext = '.exe' - name = "numpy-%s.win32-py%s%s" % (get_numpy_version(ROOT), pyver, ext) + name = "scipy-%s.win32-py%s%s" % (get_scipy_version(ROOT), pyver, ext) return 
name if __name__ == '__main__': From scipy-svn at scipy.org Tue Sep 9 13:19:23 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:19:23 -0500 (CDT) Subject: [Scipy-svn] r4717 - trunk/tools/win32/build_scripts Message-ID: <20080909171923.42E7539C3D5@scipy.org> Author: cdavid Date: 2008-09-09 12:18:55 -0500 (Tue, 09 Sep 2008) New Revision: 4717 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Add nsis script to bootstrap. Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:17:57 UTC (rev 4716) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:18:55 UTC (rev 4717) @@ -88,6 +88,23 @@ verstr += get_svn_version(src_root) return verstr +def prepare_nsis_script(bootstrap, pyver, numver): + tpl = os.path.join('nsis_scripts', 'scipy-superinstaller.nsi.in') + source = open(tpl, 'r') + target = open(pjoin(bootstrap, 'scipy-superinstaller.nsi'), 'w') + + installer_name = 'scipy-%s-win32-superpack-python%s.exe' % (numver, pyver) + cnt = "".join(source.readlines()) + cnt = cnt.replace('@SCIPY_INSTALLER_NAME@', installer_name) + for arch in ['nosse', 'sse2', 'sse3']: + cnt = cnt.replace('@%s_BINARY@' % arch.upper(), + get_binary_name(arch)) + + target.write(cnt) + +def get_binary_name(arch): + return "scipy-%s-%s.exe" % (get_scipy_version(ROOT), arch) + def prepare_bootstrap(src_root, pyver): bootstrap = "bootstrap-%s" % pyver if os.path.exists(bootstrap): @@ -98,6 +115,7 @@ prepare_scipy_sources(src_root, bootstrap) shutil.copy('build.py', bootstrap) + prepare_nsis_script(bootstrap, pyver, get_numpy_version()) if __name__ == '__main__': ROOT = os.path.join("..", "..", "..") From scipy-svn at scipy.org Tue Sep 9 13:20:04 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:20:04 -0500 (CDT) Subject: [Scipy-svn] r4718 - in
trunk/tools/win32/build_scripts: . nsis_scripts Message-ID: <20080909172004.3A8B839C3D5@scipy.org> Author: cdavid Date: 2008-09-09 12:19:48 -0500 (Tue, 09 Sep 2008) New Revision: 4718 Added: trunk/tools/win32/build_scripts/nsis_scripts/ trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi Log: Add nsis script to build super installer. Added: trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi =================================================================== --- trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi 2008-09-09 17:18:55 UTC (rev 4717) +++ trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi 2008-09-09 17:19:48 UTC (rev 4718) @@ -0,0 +1,121 @@ +;-------------------------------- +;Include Modern UI + +!include "MUI2.nsh" + +;SetCompress off ; Useful to disable compression under development +SetCompressor /Solid LZMA ; Useful to disable compression under development + +;-------------------------------- +;General + +;Name and file +Name "Scipy super installer" +OutFile "@SCIPY_INSTALLER_NAME@" + +;Default installation folder +InstallDir "$TEMP" + +;-------------------------------- +;Interface Settings + +!define MUI_ABORTWARNING + +;-------------------------------- +;Pages + +;!insertmacro MUI_PAGE_LICENSE "${NSISDIR}\Docs\Modern UI\License.txt" +;!insertmacro MUI_PAGE_COMPONENTS +;!insertmacro MUI_PAGE_DIRECTORY +;!insertmacro MUI_PAGE_INSTFILES + +;!insertmacro MUI_UNPAGE_CONFIRM +;!insertmacro MUI_UNPAGE_INSTFILES + +;-------------------------------- +;Languages + +!insertmacro MUI_LANGUAGE "English" + +;-------------------------------- +;Component Sections + +!include 'Sections.nsh' +!include LogicLib.nsh + +Var HasSSE2 +Var HasSSE3 +Var CPUSSE + +Section "Core" SecCore + + ;SectionIn RO + SetOutPath "$INSTDIR" + + ;Create uninstaller + ;WriteUninstaller "$INSTDIR\Uninstall.exe" + + DetailPrint "Install dir for actual installers is $INSTDIR" + + StrCpy $CPUSSE "0" + CpuCaps::hasSSE2 
+ Pop $0 + StrCpy $HasSSE2 $0 + + CpuCaps::hasSSE3 + Pop $0 + StrCpy $HasSSE3 $0 + + ; Debug + StrCmp $HasSSE2 "Y" include_sse2 no_include_sse2 + include_sse2: + DetailPrint '"Target CPU handles SSE2"' + StrCpy $CPUSSE "2" + goto done_sse2 + no_include_sse2: + DetailPrint '"Target CPU does NOT handle SSE2"' + goto done_sse2 + done_sse2: + + StrCmp $HasSSE3 "Y" include_sse3 no_include_sse3 + include_sse3: + DetailPrint '"Target CPU handles SSE3"' + StrCpy $CPUSSE "3" + goto done_sse3 + no_include_sse3: + DetailPrint '"Target CPU does NOT handle SSE3"' + goto done_sse3 + done_sse3: + + ClearErrors + + ; Install files conditionaly on detected cpu + ${Switch} $CPUSSE + ${Case} "3" + DetailPrint '"Install SSE 3"' + File "binaries\@SSE3_BINARY@" + ExecWait '"$INSTDIR\@SSE3_BINARY@"' + ${Break} + ${Case} "2" + DetailPrint '"Install SSE 2"' + File "binaries\@SSE2_BINARY@" + ExecWait '"$INSTDIR\@SSE2_BINARY@"' + ${Break} + ${Default} + DetailPrint '"Install NO SSE"' + File "binaries\@NOSSE_BINARY@" + ExecWait '"$INSTDIR\@NOSSE_BINARY@"' + ${Break} + ${EndSwitch} + + ; Handle errors when executing installers + IfErrors error no_error + + error: + messageBox MB_OK "Executing scipy installer failed" + goto done + no_error: + goto done + done: + +SectionEnd From scipy-svn at scipy.org Tue Sep 9 13:20:51 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:20:51 -0500 (CDT) Subject: [Scipy-svn] r4719 - trunk/tools/win32/build_scripts Message-ID: <20080909172051.F1E9339C3D5@scipy.org> Author: cdavid Date: 2008-09-09 12:20:26 -0500 (Tue, 09 Sep 2008) New Revision: 4719 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Fix numpy script leftover. 
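The NSIS section added in r4718 probes the CPU and dispatches to one of three bundled installers: SSE3 if available, else SSE2, else a plain build. The same selection logic can be sketched in Python for illustration (the binary names and the boolean inputs are hypothetical stand-ins for the CpuCaps plugin results; this is not part of the actual build scripts):

```python
def pick_installer(has_sse2, has_sse3,
                   sse3_binary="scipy-sse3.exe",
                   sse2_binary="scipy-sse2.exe",
                   nosse_binary="scipy-nosse.exe"):
    """Mirror of the NSIS ${Switch}: prefer SSE3, then SSE2, else no SSE."""
    if has_sse3:
        return sse3_binary
    if has_sse2:
        return sse2_binary
    return nosse_binary

# A machine reporting both capabilities gets the SSE3 build.
binary = pick_installer(has_sse2=True, has_sse3=True)  # → "scipy-sse3.exe"
```

Note that, like the NSIS script (which overwrites $CPUSSE "2" with "3"), the highest detected level wins.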
Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:19:48 UTC (rev 4718) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:20:26 UTC (rev 4719) @@ -115,7 +115,7 @@ prepare_scipy_sources(src_root, bootstrap) shutil.copy('build.py', bootstrap) - prepare_nsis_script(bootstrap, pyver, get_numpy_version()) + prepare_nsis_script(bootstrap, pyver, get_scipy_version()) if __name__ == '__main__': ROOT = os.path.join("..", "..", "..") From scipy-svn at scipy.org Tue Sep 9 13:21:33 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:21:33 -0500 (CDT) Subject: [Scipy-svn] r4720 - trunk/tools/win32/build_scripts Message-ID: <20080909172133.B77AF39C3D5@scipy.org> Author: cdavid Date: 2008-09-09 12:21:11 -0500 (Tue, 09 Sep 2008) New Revision: 4720 Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py Log: Forgot to pass ROOT dir to scipy_version func. 
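r4719 swaps the leftover get_numpy_version() call for get_scipy_version(), and r4720 then threads the source root through to it. A minimal sketch of what such a version lookup might look like (hypothetical: the file layout `src_root/scipy/version.py` and the helper names are illustrative, not taken from the actual build scripts):

```python
import os
import re

def parse_version(text):
    """Extract the right-hand side of a version = "x.y.z" assignment."""
    match = re.search(r"^version\s*=\s*['\"]([^'\"]+)['\"]", text, re.MULTILINE)
    if match is None:
        raise ValueError("no version assignment found")
    return match.group(1)

def get_scipy_version(src_root):
    """Read the version string from the source tree rooted at src_root."""
    path = os.path.join(src_root, "scipy", "version.py")
    with open(path) as f:
        return parse_version(f.read())
```

Passing src_root explicitly, as r4720 does, keeps the helper usable regardless of the script's working directory.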
Modified: trunk/tools/win32/build_scripts/prepare_bootstrap.py =================================================================== --- trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:20:26 UTC (rev 4719) +++ trunk/tools/win32/build_scripts/prepare_bootstrap.py 2008-09-09 17:21:11 UTC (rev 4720) @@ -115,7 +115,7 @@ prepare_scipy_sources(src_root, bootstrap) shutil.copy('build.py', bootstrap) - prepare_nsis_script(bootstrap, pyver, get_scipy_version()) + prepare_nsis_script(bootstrap, pyver, get_scipy_version(src_root)) if __name__ == '__main__': ROOT = os.path.join("..", "..", "..") From scipy-svn at scipy.org Tue Sep 9 13:22:27 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 12:22:27 -0500 (CDT) Subject: [Scipy-svn] r4721 - trunk/tools/win32/build_scripts/nsis_scripts Message-ID: <20080909172227.D8D5139C3D5@scipy.org> Author: cdavid Date: 2008-09-09 12:22:00 -0500 (Tue, 09 Sep 2008) New Revision: 4721 Added: trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi.in Removed: trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi Log: Rename nsis script. 
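The rename to `.nsi.in` in r4721 marks the script as a template: placeholders such as @SCIPY_INSTALLER_NAME@ and @SSE3_BINARY@ are substituted at build time to produce the final `.nsi`. A hedged sketch of that substitution step (the token names come from the script above, but this generator function is hypothetical, not the one prepare_nsis_script uses):

```python
import re

def fill_template(text, values):
    """Replace each @NAME@ placeholder with values['NAME'].

    Unknown tokens raise, so a typo cannot slip into the generated
    installer script unnoticed.
    """
    def repl(match):
        name = match.group(1)
        if name not in values:
            raise KeyError("no value for template token @%s@" % name)
        return values[name]
    return re.sub(r"@([A-Z0-9_]+)@", repl, text)

nsi_line = fill_template(
    'OutFile "@SCIPY_INSTALLER_NAME@"',
    {"SCIPY_INSTALLER_NAME": "scipy-superinstaller.exe"})
```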
Deleted: trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi =================================================================== --- trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi 2008-09-09 17:21:11 UTC (rev 4720) +++ trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi 2008-09-09 17:22:00 UTC (rev 4721) @@ -1,121 +0,0 @@ -;-------------------------------- -;Include Modern UI - -!include "MUI2.nsh" - -;SetCompress off ; Useful to disable compression under development -SetCompressor /Solid LZMA ; Useful to disable compression under development - -;-------------------------------- -;General - -;Name and file -Name "Scipy super installer" -OutFile "@SCIPY_INSTALLER_NAME@" - -;Default installation folder -InstallDir "$TEMP" - -;-------------------------------- -;Interface Settings - -!define MUI_ABORTWARNING - -;-------------------------------- -;Pages - -;!insertmacro MUI_PAGE_LICENSE "${NSISDIR}\Docs\Modern UI\License.txt" -;!insertmacro MUI_PAGE_COMPONENTS -;!insertmacro MUI_PAGE_DIRECTORY -;!insertmacro MUI_PAGE_INSTFILES - -;!insertmacro MUI_UNPAGE_CONFIRM -;!insertmacro MUI_UNPAGE_INSTFILES - -;-------------------------------- -;Languages - -!insertmacro MUI_LANGUAGE "English" - -;-------------------------------- -;Component Sections - -!include 'Sections.nsh' -!include LogicLib.nsh - -Var HasSSE2 -Var HasSSE3 -Var CPUSSE - -Section "Core" SecCore - - ;SectionIn RO - SetOutPath "$INSTDIR" - - ;Create uninstaller - ;WriteUninstaller "$INSTDIR\Uninstall.exe" - - DetailPrint "Install dir for actual installers is $INSTDIR" - - StrCpy $CPUSSE "0" - CpuCaps::hasSSE2 - Pop $0 - StrCpy $HasSSE2 $0 - - CpuCaps::hasSSE3 - Pop $0 - StrCpy $HasSSE3 $0 - - ; Debug - StrCmp $HasSSE2 "Y" include_sse2 no_include_sse2 - include_sse2: - DetailPrint '"Target CPU handles SSE2"' - StrCpy $CPUSSE "2" - goto done_sse2 - no_include_sse2: - DetailPrint '"Target CPU does NOT handle SSE2"' - goto done_sse2 - done_sse2: - - StrCmp 
$HasSSE3 "Y" include_sse3 no_include_sse3 - include_sse3: - DetailPrint '"Target CPU handles SSE3"' - StrCpy $CPUSSE "3" - goto done_sse3 - no_include_sse3: - DetailPrint '"Target CPU does NOT handle SSE3"' - goto done_sse3 - done_sse3: - - ClearErrors - - ; Install files conditionaly on detected cpu - ${Switch} $CPUSSE - ${Case} "3" - DetailPrint '"Install SSE 3"' - File "binaries\@SSE3_BINARY@" - ExecWait '"$INSTDIR\@SSE3_BINARY@"' - ${Break} - ${Case} "2" - DetailPrint '"Install SSE 2"' - File "binaries\@SSE2_BINARY@" - ExecWait '"$INSTDIR\@SSE2_BINARY@"' - ${Break} - ${Default} - DetailPrint '"Install NO SSE"' - File "binaries\@NOSSE_BINARY@" - ExecWait '"$INSTDIR\@NOSSE_BINARY@"' - ${Break} - ${EndSwitch} - - ; Handle errors when executing installers - IfErrors error no_error - - error: - messageBox MB_OK "Executing scipy installer failed" - goto done - no_error: - goto done - done: - -SectionEnd Copied: trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi.in (from rev 4720, trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi) =================================================================== --- trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi 2008-09-09 17:21:11 UTC (rev 4720) +++ trunk/tools/win32/build_scripts/nsis_scripts/scipy-superinstaller.nsi.in 2008-09-09 17:22:00 UTC (rev 4721) @@ -0,0 +1,121 @@ +;-------------------------------- +;Include Modern UI + +!include "MUI2.nsh" + +;SetCompress off ; Useful to disable compression under development +SetCompressor /Solid LZMA ; Useful to disable compression under development + +;-------------------------------- +;General + +;Name and file +Name "Scipy super installer" +OutFile "@SCIPY_INSTALLER_NAME@" + +;Default installation folder +InstallDir "$TEMP" + +;-------------------------------- +;Interface Settings + +!define MUI_ABORTWARNING + +;-------------------------------- +;Pages + +;!insertmacro MUI_PAGE_LICENSE "${NSISDIR}\Docs\Modern 
UI\License.txt" +;!insertmacro MUI_PAGE_COMPONENTS +;!insertmacro MUI_PAGE_DIRECTORY +;!insertmacro MUI_PAGE_INSTFILES + +;!insertmacro MUI_UNPAGE_CONFIRM +;!insertmacro MUI_UNPAGE_INSTFILES + +;-------------------------------- +;Languages + +!insertmacro MUI_LANGUAGE "English" + +;-------------------------------- +;Component Sections + +!include 'Sections.nsh' +!include LogicLib.nsh + +Var HasSSE2 +Var HasSSE3 +Var CPUSSE + +Section "Core" SecCore + + ;SectionIn RO + SetOutPath "$INSTDIR" + + ;Create uninstaller + ;WriteUninstaller "$INSTDIR\Uninstall.exe" + + DetailPrint "Install dir for actual installers is $INSTDIR" + + StrCpy $CPUSSE "0" + CpuCaps::hasSSE2 + Pop $0 + StrCpy $HasSSE2 $0 + + CpuCaps::hasSSE3 + Pop $0 + StrCpy $HasSSE3 $0 + + ; Debug + StrCmp $HasSSE2 "Y" include_sse2 no_include_sse2 + include_sse2: + DetailPrint '"Target CPU handles SSE2"' + StrCpy $CPUSSE "2" + goto done_sse2 + no_include_sse2: + DetailPrint '"Target CPU does NOT handle SSE2"' + goto done_sse2 + done_sse2: + + StrCmp $HasSSE3 "Y" include_sse3 no_include_sse3 + include_sse3: + DetailPrint '"Target CPU handles SSE3"' + StrCpy $CPUSSE "3" + goto done_sse3 + no_include_sse3: + DetailPrint '"Target CPU does NOT handle SSE3"' + goto done_sse3 + done_sse3: + + ClearErrors + + ; Install files conditionaly on detected cpu + ${Switch} $CPUSSE + ${Case} "3" + DetailPrint '"Install SSE 3"' + File "binaries\@SSE3_BINARY@" + ExecWait '"$INSTDIR\@SSE3_BINARY@"' + ${Break} + ${Case} "2" + DetailPrint '"Install SSE 2"' + File "binaries\@SSE2_BINARY@" + ExecWait '"$INSTDIR\@SSE2_BINARY@"' + ${Break} + ${Default} + DetailPrint '"Install NO SSE"' + File "binaries\@NOSSE_BINARY@" + ExecWait '"$INSTDIR\@NOSSE_BINARY@"' + ${Break} + ${EndSwitch} + + ; Handle errors when executing installers + IfErrors error no_error + + error: + messageBox MB_OK "Executing scipy installer failed" + goto done + no_error: + goto done + done: + +SectionEnd From scipy-svn at scipy.org Tue Sep 9 14:11:45 2008 From: 
scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 9 Sep 2008 13:11:45 -0500 (CDT) Subject: [Scipy-svn] r4722 - trunk/tools/win32/build_scripts Message-ID: <20080909181145.A95EA39C2EA@scipy.org> Author: cdavid Date: 2008-09-09 13:11:27 -0500 (Tue, 09 Sep 2008) New Revision: 4722 Modified: trunk/tools/win32/build_scripts/build.py Log: Fix missing imports. Modified: trunk/tools/win32/build_scripts/build.py =================================================================== --- trunk/tools/win32/build_scripts/build.py 2008-09-09 17:22:00 UTC (rev 4721) +++ trunk/tools/win32/build_scripts/build.py 2008-09-09 18:11:27 UTC (rev 4722) @@ -7,7 +7,8 @@ import subprocess import os import shutil -from os.path import join as pjoin, split as psplit, dirname +from os.path import join as pjoin, split as psplit, dirname, exists as pexists +import re PYEXECS = {"2.5" : "C:\python25\python.exe", "2.4" : "C:\python24\python24.exe", From scipy-svn at scipy.org Thu Sep 18 15:13:21 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:13:21 -0500 (CDT) Subject: [Scipy-svn] r4723 - in trunk/scipy/io: . arff/tests matlab/tests Message-ID: <20080918191321.85F8B39C021@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:12:41 -0500 (Thu, 18 Sep 2008) New Revision: 4723 Modified: trunk/scipy/io/arff/tests/test_header.py trunk/scipy/io/fopen.py trunk/scipy/io/matlab/tests/test_mio.py trunk/scipy/io/mmio.py trunk/scipy/io/netcdf.py Log: Removed unused imports. PEP8 conformance (one import per line). 
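Cleanups like r4723's "Removed unused imports" can be spotted mechanically. A rough sketch using the stdlib `ast` module (deliberately limited: it only handles plain `import x` statements, not `from x import y`, and misses names referenced only in strings or `__all__`, so treat it as a starting point rather than a linter):

```python
import ast

def unused_top_level_imports(source):
    """Return names bound by plain `import` statements but never read."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import a.b` binds `a`; `import a as c` binds `c`.
                imported.add(alias.asname or alias.name.split(".")[0])
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return sorted(imported - used)

print(unused_top_level_imports("import os\nimport sys\nprint(sys.argv)"))  # → ['os']
```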
Modified: trunk/scipy/io/arff/tests/test_header.py =================================================================== --- trunk/scipy/io/arff/tests/test_header.py 2008-09-09 18:11:27 UTC (rev 4722) +++ trunk/scipy/io/arff/tests/test_header.py 2008-09-18 19:12:41 UTC (rev 4723) @@ -4,8 +4,7 @@ from numpy.testing import * -from scipy.io.arff.arffread import read_header, MetaData, parse_type, \ - ParseArffError +from scipy.io.arff.arffread import read_header, parse_type, ParseArffError data_path = os.path.join(os.path.dirname(__file__), 'data') Modified: trunk/scipy/io/fopen.py =================================================================== --- trunk/scipy/io/fopen.py 2008-09-09 18:11:27 UTC (rev 4722) +++ trunk/scipy/io/fopen.py 2008-09-18 19:12:41 UTC (rev 4723) @@ -2,7 +2,8 @@ # Author: Travis Oliphant -import struct, os, sys +import struct +import sys import types from numpy import * Modified: trunk/scipy/io/matlab/tests/test_mio.py =================================================================== --- trunk/scipy/io/matlab/tests/test_mio.py 2008-09-09 18:11:27 UTC (rev 4722) +++ trunk/scipy/io/matlab/tests/test_mio.py 2008-09-18 19:12:41 UTC (rev 4723) @@ -3,10 +3,10 @@ import os from glob import glob from cStringIO import StringIO -from tempfile import mkstemp, mkdtemp +from tempfile import mkdtemp from numpy.testing import * -from numpy import arange, array, eye, pi, cos, exp, sin, sqrt, ndarray, \ - zeros, reshape, transpose, empty +from numpy import arange, array, pi, cos, exp, sin, sqrt, ndarray, \ + zeros, reshape, transpose import scipy.sparse as SP from scipy.io.matlab.mio import loadmat, savemat @@ -15,11 +15,6 @@ import shutil import gzip -try: # Python 2.3 support - from sets import Set as set -except: - pass - test_data_path = os.path.join(os.path.dirname(__file__), 'data') def _check_level(self, label, expected, actual): Modified: trunk/scipy/io/mmio.py =================================================================== --- 
trunk/scipy/io/mmio.py 2008-09-09 18:11:27 UTC (rev 4722) +++ trunk/scipy/io/mmio.py 2008-09-18 19:12:41 UTC (rev 4723) @@ -10,9 +10,8 @@ # import os -from numpy import asarray, real, imag, conj, zeros, ndarray, \ - empty, concatenate, ones, ascontiguousarray, \ - vstack, savetxt, fromfile, fromstring +from numpy import asarray, real, imag, conj, zeros, ndarray, concatenate, \ + ones, ascontiguousarray, vstack, savetxt, fromfile, fromstring __all__ = ['mminfo','mmread','mmwrite', 'MMFile'] Modified: trunk/scipy/io/netcdf.py =================================================================== --- trunk/scipy/io/netcdf.py 2008-09-09 18:11:27 UTC (rev 4722) +++ trunk/scipy/io/netcdf.py 2008-09-18 19:12:41 UTC (rev 4723) @@ -15,7 +15,6 @@ __all__ = ['netcdf_file', 'netcdf_variable'] import struct -import itertools import mmap from numpy import ndarray, zeros, array From scipy-svn at scipy.org Thu Sep 18 15:15:50 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:15:50 -0500 (CDT) Subject: [Scipy-svn] r4724 - trunk/scipy/signal Message-ID: <20080918191550.3396839C021@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:15:47 -0500 (Thu, 18 Sep 2008) New Revision: 4724 Modified: trunk/scipy/signal/bsplines.py trunk/scipy/signal/signaltools.py trunk/scipy/signal/wavelets.py Log: Removed unused imports. Standardized NumPy import as "import numpy as np". 
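The convention r4724 adopts, shown next to the style it replaces (a minimal illustration, not code from the commit):

```python
# Discouraged: `from numpy import *` obscures where names come from
# and shadows builtins such as `sum` and `abs`.

# Preferred: one short, greppable alias for the whole package.
import numpy as np

x = np.arange(4)
total = np.sum(x * x)  # → 14
```

Module-level references like `np.sum` also make diffs such as the ones below (numpy.maximum → np.maximum, etc.) purely mechanical.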
Modified: trunk/scipy/signal/bsplines.py =================================================================== --- trunk/scipy/signal/bsplines.py 2008-09-18 19:12:41 UTC (rev 4723) +++ trunk/scipy/signal/bsplines.py 2008-09-18 19:15:47 UTC (rev 4724) @@ -3,7 +3,7 @@ import scipy.special from numpy import logical_and, asarray, pi, zeros_like, \ piecewise, array, arctan2, tan, zeros, arange, floor -from numpy.core.umath import sqrt, exp, greater, less, equal, cos, add, sin, \ +from numpy.core.umath import sqrt, exp, greater, less, cos, add, sin, \ less_equal, greater_equal from spline import * # C-modules from scipy.misc import comb Modified: trunk/scipy/signal/signaltools.py =================================================================== --- trunk/scipy/signal/signaltools.py 2008-09-18 19:12:41 UTC (rev 4723) +++ trunk/scipy/signal/signaltools.py 2008-09-18 19:15:47 UTC (rev 4724) @@ -4,17 +4,16 @@ import types import sigtools from scipy import special, linalg -from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2 +from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2, fftn, ifftn from numpy import polyadd, polymul, polydiv, polysub, \ roots, poly, polyval, polyder, cast, asarray, isscalar, atleast_1d, \ ones, sin, linspace, real, extract, real_if_close, zeros, array, arange, \ where, sqrt, rank, newaxis, argmax, product, cos, pi, exp, \ ravel, size, less_equal, sum, r_, iscomplexobj, take, \ - argsort, allclose, expand_dims, unique, prod, sort, reshape, c_, \ - transpose, dot, any, minimum, maximum, mean, cosh, arccosh, \ + argsort, allclose, expand_dims, unique, prod, sort, reshape, \ + transpose, dot, any, mean, cosh, arccosh, \ arccos, concatenate -import numpy -from scipy.fftpack import fftn, ifftn, fft +import numpy as np from scipy.misc import factorial _modedict = {'valid':0, 'same':1, 'full':2} @@ -94,8 +93,8 @@ """ s1 = array(in1.shape) s2 = array(in2.shape) - complex_result = (numpy.issubdtype(in1.dtype, numpy.complex) or - 
numpy.issubdtype(in2.dtype, numpy.complex)) + complex_result = (np.issubdtype(in1.dtype, np.complex) or + np.issubdtype(in2.dtype, np.complex)) size = s1+s2-1 IN1 = fftn(in1,size) IN1 *= fftn(in2,size) @@ -864,7 +863,7 @@ p = zeros(x.shape) p[x > 1] = cosh(order * arccosh(x[x > 1])) p[x < -1] = (1 - 2*(order%2)) * cosh(order * arccosh(-x[x < -1])) - p[numpy.abs(x) <=1 ] = cos(order * arccos(x[numpy.abs(x) <= 1])) + p[np.abs(x) <=1 ] = cos(order * arccos(x[np.abs(x) <= 1])) # Appropriate IDFT and filling up # depending on even/odd M @@ -1004,11 +1003,11 @@ mult -- The multiplicity of each root """ if rtype in ['max','maximum']: - comproot = numpy.maximum + comproot = np.maximum elif rtype in ['min','minimum']: - comproot = numpy.minimum + comproot = np.minimum elif rtype in ['avg','mean']: - comproot = numpy.mean + comproot = np.mean p = asarray(p)*1.0 tol = abs(tol) p, indx = cmplx_sort(p) @@ -1381,7 +1380,7 @@ sl = [slice(None)]*len(x.shape) newshape = list(x.shape) newshape[axis] = num - N = int(numpy.minimum(num,Nx)) + N = int(np.minimum(num,Nx)) Y = zeros(newshape,'D') sl[axis] = slice(0,(N+1)/2) Y[sl] = X[sl] Modified: trunk/scipy/signal/wavelets.py =================================================================== --- trunk/scipy/signal/wavelets.py 2008-09-18 19:12:41 UTC (rev 4723) +++ trunk/scipy/signal/wavelets.py 2008-09-18 19:15:47 UTC (rev 4724) @@ -3,7 +3,7 @@ import numpy as np from numpy.dual import eig from scipy.misc import comb -from scipy import linspace, pi, exp, zeros +from scipy import linspace, pi, exp def daub(p): """The coefficients for the FIR low-pass filter producing Daubechies wavelets. From scipy-svn at scipy.org Thu Sep 18 15:24:02 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:24:02 -0500 (CDT) Subject: [Scipy-svn] r4725 - in trunk/scipy/maxentropy: . 
examples tests Message-ID: <20080918192402.9EA6F39C1EC@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:23:58 -0500 (Thu, 18 Sep 2008) New Revision: 4725 Modified: trunk/scipy/maxentropy/examples/bergerexample.py trunk/scipy/maxentropy/examples/conditionalexample2.py trunk/scipy/maxentropy/maxentropy.py trunk/scipy/maxentropy/maxentutils.py trunk/scipy/maxentropy/tests/test_maxentropy.py Log: Removed unused imports. Standardized NumPy import as "import numpy as np". PEP8 conformance (one import per line). Modified: trunk/scipy/maxentropy/examples/bergerexample.py =================================================================== --- trunk/scipy/maxentropy/examples/bergerexample.py 2008-09-18 19:15:47 UTC (rev 4724) +++ trunk/scipy/maxentropy/examples/bergerexample.py 2008-09-18 19:23:58 UTC (rev 4725) @@ -20,8 +20,6 @@ __author__ = 'Ed Schofield' __version__= '2.1' - -import math from scipy import maxentropy a_grave = u'\u00e0' Modified: trunk/scipy/maxentropy/examples/conditionalexample2.py =================================================================== --- trunk/scipy/maxentropy/examples/conditionalexample2.py 2008-09-18 19:15:47 UTC (rev 4724) +++ trunk/scipy/maxentropy/examples/conditionalexample2.py 2008-09-18 19:23:58 UTC (rev 4725) @@ -18,9 +18,7 @@ __author__ = 'Ed Schofield' -import math from scipy import maxentropy, sparse -import numpy samplespace = ['dans', 'en', '?', 'au cours de', 'pendant'] # Occurrences of French words, and their 'next English word' contexts, in Modified: trunk/scipy/maxentropy/maxentropy.py =================================================================== --- trunk/scipy/maxentropy/maxentropy.py 2008-09-18 19:15:47 UTC (rev 4724) +++ trunk/scipy/maxentropy/maxentropy.py 2008-09-18 19:23:58 UTC (rev 4725) @@ -70,8 +70,8 @@ import math, types, cPickle -import numpy -from scipy import optimize, sparse +import numpy as np +from scipy import optimize from scipy.linalg import norm from scipy.maxentropy.maxentutils import * @@ 
-194,7 +194,7 @@ " using setfeaturesandsamplespace()" # First convert K to a numpy array if necessary - K = numpy.asarray(K, float) + K = np.asarray(K, float) # Store the desired feature expectations as a member variable self.K = K @@ -212,7 +212,7 @@ # self.gradevals = 0 # Make a copy of the parameters - oldparams = numpy.array(self.params) + oldparams = np.array(self.params) callback = self.log @@ -272,7 +272,7 @@ + "' is unsupported. Options are 'CG', 'LBFGSB', " \ "'Nelder-Mead', 'Powell', and 'BFGS'" - if numpy.any(self.params != newparams): + if np.any(self.params != newparams): self.setparams(newparams) self.func_calls = func_calls @@ -322,7 +322,7 @@ self.setparams(params) # Subsumes both small and large cases: - L = self.lognormconst() - numpy.dot(self.params, self.K) + L = self.lognormconst() - np.dot(self.params, self.K) if self.verbose and self.external is None: print " dual is ", L @@ -332,7 +332,7 @@ # Define 0 / 0 = 0 here; this allows a variance term of # sigma_i^2==0 to indicate that feature i should be ignored. if self.sigma2 is not None and ignorepenalty==False: - ratios = numpy.nan_to_num(self.params**2 / self.sigma2) + ratios = np.nan_to_num(self.params**2 / self.sigma2) # Why does the above convert inf to 1.79769e+308? L += 0.5 * ratios.sum() @@ -396,7 +396,7 @@ self.test() if not self.callingback and self.external is None: - if self.mindual > -numpy.inf and self.dual() < self.mindual: + if self.mindual > -np.inf and self.dual() < self.mindual: raise DivergenceError, "dual is below the threshold 'mindual'" \ " and may be diverging to -inf. Fix the constraints" \ " or lower the threshold!" 
@@ -428,7 +428,7 @@ if self.sigma2 is not None and ignorepenalty==False: penalty = self.params / self.sigma2 G += penalty - features_to_kill = numpy.where(numpy.isnan(penalty))[0] + features_to_kill = np.where(np.isnan(penalty))[0] G[features_to_kill] = 0.0 if self.verbose and self.external is None: normG = norm(G) @@ -449,7 +449,7 @@ return G - def crossentropy(self, fx, log_prior_x=None, base=numpy.e): + def crossentropy(self, fx, log_prior_x=None, base=np.e): """Returns the cross entropy H(q, p) of the empirical distribution q of the data (with the given feature matrix fx) with respect to the model p. For discrete distributions this is @@ -466,9 +466,9 @@ For continuous distributions this makes no sense! """ H = -self.logpdf(fx, log_prior_x).mean() - if base != numpy.e: + if base != np.e: # H' = H * log_{base} (e) - return H / numpy.log(base) + return H / np.log(base) else: return H @@ -483,7 +483,7 @@ Z = E_aux_dist [{exp (params.f(X))} / aux_dist(X)] using a sample from aux_dist. """ - return numpy.exp(self.lognormconst()) + return np.exp(self.lognormconst()) def setsmooth(sigma): @@ -507,7 +507,7 @@ length as the model's feature vector f. 
""" - self.params = numpy.array(params, float) # make a copy + self.params = np.array(params, float) # make a copy # Log the new params to disk self.logparams() @@ -546,7 +546,7 @@ raise ValueError, "specify the number of features / parameters" # Set parameters, clearing cache variables - self.setparams(numpy.zeros(m, float)) + self.setparams(np.zeros(m, float)) # These bounds on the param values are only effective for the # L-BFGS-B optimizer: @@ -595,7 +595,7 @@ return # Check whether the params are NaN - if not numpy.all(self.params == self.params): + if not np.all(self.params == self.params): raise FloatingPointError, "some of the parameters are NaN" if self.verbose: @@ -775,15 +775,15 @@ raise AttributeError, "prior probability mass function not set" def p(x): - f_x = numpy.array([f[i](x) for i in range(len(f))], float) + f_x = np.array([f[i](x) for i in range(len(f))], float) # Do we have a prior distribution p_0? if priorlogpmf is not None: priorlogprob_x = priorlogpmf(x) - return math.exp(numpy.dot(self.params, f_x) + priorlogprob_x \ + return math.exp(np.dot(self.params, f_x) + priorlogprob_x \ - logZ) else: - return math.exp(numpy.dot(self.params, f_x) - logZ) + return math.exp(np.dot(self.params, f_x) - logZ) return p @@ -893,7 +893,7 @@ # As an optimization, p_tilde need not be copied or stored at all, since # it is only used by this function. 
- self.p_tilde_context = numpy.empty(numcontexts, float) + self.p_tilde_context = np.empty(numcontexts, float) for w in xrange(numcontexts): self.p_tilde_context[w] = self.p_tilde[0, w*S : (w+1)*S].sum() @@ -932,7 +932,7 @@ if self.priorlogprobs is not None: log_p_dot += self.priorlogprobs - self.logZ = numpy.zeros(numcontexts, float) + self.logZ = np.zeros(numcontexts, float) for w in xrange(numcontexts): self.logZ[w] = logsumexp(log_p_dot[w*S: (w+1)*S]) return self.logZ @@ -972,8 +972,7 @@ logZs = self.lognormconst() - L = numpy.dot(self.p_tilde_context, logZs) - numpy.dot(self.params, \ - self.K) + L = np.dot(self.p_tilde_context, logZs) - np.dot(self.params, self.K) if self.verbose and self.external is None: print " dual is ", L @@ -1069,7 +1068,7 @@ log_p_dot += self.priorlogprobs if not hasattr(self, 'logZ'): # Compute the norm constant (quickly!) - self.logZ = numpy.zeros(numcontexts, float) + self.logZ = np.zeros(numcontexts, float) for w in xrange(numcontexts): self.logZ[w] = logsumexp(log_p_dot[w*S : (w+1)*S]) # Renormalize @@ -1366,8 +1365,8 @@ # -log(n-1) + logsumexp(2*log|Z_k - meanZ|) self.logZapprox = logsumexp(logZs) - math.log(ttrials) - stdevlogZ = numpy.array(logZs).std() - mus = numpy.array(mus) + stdevlogZ = np.array(logZs).std() + mus = np.array(mus) self.varE = columnvariances(mus) self.mu = columnmeans(mus) return @@ -1459,7 +1458,7 @@ log_Z_est = self.lognormconst() def p(fx): - return numpy.exp(innerprodtranspose(fx, self.params) - log_Z_est) + return np.exp(innerprodtranspose(fx, self.params) - log_Z_est) return p @@ -1486,7 +1485,7 @@ """ log_Z_est = self.lognormconst() if len(fx.shape) == 1: - logpdf = numpy.dot(self.params, fx) - log_Z_est + logpdf = np.dot(self.params, fx) - log_Z_est else: logpdf = innerprodtranspose(fx, self.params) - log_Z_est if log_prior_x is not None: @@ -1536,8 +1535,8 @@ # Use Kersten-Deylon accelerated SA, based on the rate of # changes of sign of the gradient. 
(If frequent swaps, the # stepsize is too large.) - #n += (numpy.dot(y_k, y_kminus1) < 0) # an indicator fn - if numpy.dot(y_k, y_kminus1) < 0: + #n += (np.dot(y_k, y_kminus1) < 0) # an indicator fn + if np.dot(y_k, y_kminus1) < 0: n += 1 else: # Store iterations of sign switches (for plotting @@ -1590,7 +1589,7 @@ if self.verbose: print "SA: after iteration " + str(k) print " approx dual fn is: " + str(self.logZapprox \ - - numpy.dot(self.params, K)) + - np.dot(self.params, K)) print " norm(mu_est - k) = " + str(norm_y_k) # Update params (after the convergence tests too ... don't waste the @@ -1682,7 +1681,7 @@ self.external = None self.clearcache() - meandual = numpy.average(dualapprox,axis=0) + meandual = np.average(dualapprox,axis=0) self.external_duals[self.iters] = dualapprox self.external_gradnorms[self.iters] = gradnorms @@ -1692,7 +1691,7 @@ (len(self.externalFs), meandual) print "** Mean mean square error of the (unregularized) feature" \ " expectation estimates from the external samples =" \ - " mean(|| \hat{\mu_e} - k ||,axis=0) =", numpy.average(gradnorms,axis=0) + " mean(|| \hat{\mu_e} - k ||,axis=0) =", np.average(gradnorms,axis=0) # Track the parameter vector params with the lowest mean dual estimate # so far: if meandual < self.bestdual: Modified: trunk/scipy/maxentropy/maxentutils.py =================================================================== --- trunk/scipy/maxentropy/maxentutils.py 2008-09-18 19:15:47 UTC (rev 4724) +++ trunk/scipy/maxentropy/maxentutils.py 2008-09-18 19:23:58 UTC (rev 4725) @@ -17,7 +17,9 @@ __author__ = "Ed Schofield" __version__ = '2.0' -import random, math, bisect, cmath +import random +import math +import cmath import numpy from numpy import log, exp, asarray, ndarray from scipy import sparse Modified: trunk/scipy/maxentropy/tests/test_maxentropy.py =================================================================== --- trunk/scipy/maxentropy/tests/test_maxentropy.py 2008-09-18 19:15:47 UTC (rev 4724) +++ 
trunk/scipy/maxentropy/tests/test_maxentropy.py 2008-09-18 19:23:58 UTC (rev 4725) @@ -6,9 +6,8 @@ Copyright: Ed Schofield, 2003-2005 """ -import sys from numpy.testing import * -from numpy import arange, add, array, dot, zeros, identity, log, exp, ones +from numpy import arange, log, exp, ones from scipy.maxentropy.maxentropy import * class TestMaxentropy(TestCase): From scipy-svn at scipy.org Thu Sep 18 15:27:04 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:27:04 -0500 (CDT) Subject: [Scipy-svn] r4726 - in trunk/scipy/fftpack: benchmarks tests Message-ID: <20080918192704.94C7639C153@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:27:00 -0500 (Thu, 18 Sep 2008) New Revision: 4726 Modified: trunk/scipy/fftpack/benchmarks/bench_basic.py trunk/scipy/fftpack/benchmarks/bench_pseudo_diffs.py trunk/scipy/fftpack/tests/test_basic.py trunk/scipy/fftpack/tests/test_helper.py trunk/scipy/fftpack/tests/test_pseudo_diffs.py Log: Removed unused imports. Modified: trunk/scipy/fftpack/benchmarks/bench_basic.py =================================================================== --- trunk/scipy/fftpack/benchmarks/bench_basic.py 2008-09-18 19:23:58 UTC (rev 4725) +++ trunk/scipy/fftpack/benchmarks/bench_basic.py 2008-09-18 19:27:00 UTC (rev 4726) @@ -2,11 +2,9 @@ """ import sys from numpy.testing import * -from scipy.fftpack import ifft,fft,fftn,ifftn,rfft,irfft -from scipy.fftpack import _fftpack as fftpack +from scipy.fftpack import ifft, fft, fftn, irfft -from numpy import arange, add, array, asarray, zeros, dot, exp, pi,\ - swapaxes, double, cdouble +from numpy import arange, asarray, zeros, dot, exp, pi, double, cdouble import numpy.fft from numpy.random import rand Modified: trunk/scipy/fftpack/benchmarks/bench_pseudo_diffs.py =================================================================== --- trunk/scipy/fftpack/benchmarks/bench_pseudo_diffs.py 2008-09-18 19:23:58 UTC (rev 4725) +++ 
trunk/scipy/fftpack/benchmarks/bench_pseudo_diffs.py 2008-09-18 19:27:00 UTC (rev 4726) @@ -2,12 +2,10 @@ """ import sys -from numpy import arange, add, array, sin, cos, pi,exp,tanh,sum,sign +from numpy import arange, sin, cos, pi, exp, tanh, sign from numpy.testing import * -from scipy.fftpack import diff,fft,ifft,tilbert,itilbert,hilbert,ihilbert,rfft -from scipy.fftpack import shift -from scipy.fftpack import fftfreq +from scipy.fftpack import diff, fft, ifft, tilbert, hilbert, shift, fftfreq def random(size): return rand(*size) Modified: trunk/scipy/fftpack/tests/test_basic.py =================================================================== --- trunk/scipy/fftpack/tests/test_basic.py 2008-09-18 19:23:58 UTC (rev 4725) +++ trunk/scipy/fftpack/tests/test_basic.py 2008-09-18 19:27:00 UTC (rev 4726) @@ -10,7 +10,7 @@ Run tests if fftpack is not installed: python tests/test_basic.py """ -import sys + from numpy.testing import * from scipy.fftpack import ifft,fft,fftn,ifftn,rfft,irfft from scipy.fftpack import _fftpack as fftpack Modified: trunk/scipy/fftpack/tests/test_helper.py =================================================================== --- trunk/scipy/fftpack/tests/test_helper.py 2008-09-18 19:23:58 UTC (rev 4725) +++ trunk/scipy/fftpack/tests/test_helper.py 2008-09-18 19:27:00 UTC (rev 4726) @@ -11,7 +11,6 @@ python tests/test_helper.py [] """ -import sys from numpy.testing import * from scipy.fftpack import fftshift,ifftshift,fftfreq,rfftfreq Modified: trunk/scipy/fftpack/tests/test_pseudo_diffs.py =================================================================== --- trunk/scipy/fftpack/tests/test_pseudo_diffs.py 2008-09-18 19:23:58 UTC (rev 4725) +++ trunk/scipy/fftpack/tests/test_pseudo_diffs.py 2008-09-18 19:27:00 UTC (rev 4726) @@ -10,13 +10,12 @@ Run tests if fftpack is not installed: python tests/test_pseudo_diffs.py [] """ -import sys + from numpy.testing import * -from scipy.fftpack import diff,fft,ifft,tilbert,itilbert,hilbert,ihilbert,rfft 
-from scipy.fftpack import shift -from scipy.fftpack import fftfreq +from scipy.fftpack import diff, fft, ifft, tilbert, itilbert, hilbert, \ + ihilbert, shift, fftfreq -from numpy import arange, add, array, sin, cos, pi,exp,tanh,sum,sign +from numpy import arange, sin, cos, pi, exp, tanh, sum, sign def random(size): return rand(*size) From scipy-svn at scipy.org Thu Sep 18 15:28:41 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:28:41 -0500 (CDT) Subject: [Scipy-svn] r4727 - trunk/scipy/ndimage Message-ID: <20080918192841.874A339C153@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:28:38 -0500 (Thu, 18 Sep 2008) New Revision: 4727 Modified: trunk/scipy/ndimage/fourier.py trunk/scipy/ndimage/interpolation.py trunk/scipy/ndimage/morphology.py Log: Removed unused imports. Modified: trunk/scipy/ndimage/fourier.py =================================================================== --- trunk/scipy/ndimage/fourier.py 2008-09-18 19:27:00 UTC (rev 4726) +++ trunk/scipy/ndimage/fourier.py 2008-09-18 19:28:38 UTC (rev 4727) @@ -29,7 +29,6 @@ # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. import types -import math import numpy import _ni_support import _nd_image Modified: trunk/scipy/ndimage/interpolation.py =================================================================== --- trunk/scipy/ndimage/interpolation.py 2008-09-18 19:27:00 UTC (rev 4726) +++ trunk/scipy/ndimage/interpolation.py 2008-09-18 19:28:38 UTC (rev 4727) @@ -28,9 +28,7 @@ # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-import types import math -import warnings import numpy import _ni_support import _nd_image Modified: trunk/scipy/ndimage/morphology.py =================================================================== --- trunk/scipy/ndimage/morphology.py 2008-09-18 19:27:00 UTC (rev 4726) +++ trunk/scipy/ndimage/morphology.py 2008-09-18 19:28:38 UTC (rev 4727) @@ -32,7 +32,6 @@ import _ni_support import _nd_image import filters -import types def _center_is_true(structure, origin): From scipy-svn at scipy.org Thu Sep 18 15:30:26 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:30:26 -0500 (CDT) Subject: [Scipy-svn] r4728 - in trunk/scipy/lib: blas/tests lapack/tests Message-ID: <20080918193026.4291839C021@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:30:13 -0500 (Thu, 18 Sep 2008) New Revision: 4728 Modified: trunk/scipy/lib/blas/tests/test_blas.py trunk/scipy/lib/blas/tests/test_fblas.py trunk/scipy/lib/lapack/tests/test_lapack.py Log: Remove unused imports. Modified: trunk/scipy/lib/blas/tests/test_blas.py =================================================================== --- trunk/scipy/lib/blas/tests/test_blas.py 2008-09-18 19:28:38 UTC (rev 4727) +++ trunk/scipy/lib/blas/tests/test_blas.py 2008-09-18 19:30:13 UTC (rev 4728) @@ -12,10 +12,9 @@ python tests/test_blas.py [] """ -import sys import math -from numpy import arange, add, array +from numpy import array from numpy.testing import * from scipy.lib.blas import fblas from scipy.lib.blas import cblas Modified: trunk/scipy/lib/blas/tests/test_fblas.py =================================================================== --- trunk/scipy/lib/blas/tests/test_fblas.py 2008-09-18 19:28:38 UTC (rev 4727) +++ trunk/scipy/lib/blas/tests/test_fblas.py 2008-09-18 19:30:13 UTC (rev 4728) @@ -6,10 +6,8 @@ # !! Complex calculations really aren't checked that carefully. # !! Only real valued complex numbers are used in tests. 
-import sys -from numpy import zeros, transpose, newaxis, shape, float32, \ - float64, complex64, complex128, arange, array, common_type, \ - conjugate +from numpy import zeros, transpose, newaxis, shape, float32, float64, \ + complex64, complex128, arange, array, common_type, conjugate from numpy.testing import * from scipy.lib.blas import fblas Modified: trunk/scipy/lib/lapack/tests/test_lapack.py =================================================================== --- trunk/scipy/lib/lapack/tests/test_lapack.py 2008-09-18 19:28:38 UTC (rev 4727) +++ trunk/scipy/lib/lapack/tests/test_lapack.py 2008-09-18 19:30:13 UTC (rev 4728) @@ -14,10 +14,8 @@ can be run (see the isrunnable method). ''' -import os -import sys from numpy.testing import * -from numpy import dot, ones, zeros +from numpy import ones from scipy.lib.lapack import flapack, clapack From scipy-svn at scipy.org Thu Sep 18 15:31:52 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:31:52 -0500 (CDT) Subject: [Scipy-svn] r4729 - in trunk/scipy/stats: . tests Message-ID: <20080918193152.B9C4539C021@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:31:49 -0500 (Thu, 18 Sep 2008) New Revision: 4729 Modified: trunk/scipy/stats/mmorestats.py trunk/scipy/stats/morestats.py trunk/scipy/stats/tests/test_mmorestats.py trunk/scipy/stats/tests/test_stats.py Log: Remove unused/redundant imports. 
Modified: trunk/scipy/stats/mmorestats.py =================================================================== --- trunk/scipy/stats/mmorestats.py 2008-09-18 19:30:13 UTC (rev 4728) +++ trunk/scipy/stats/mmorestats.py 2008-09-18 19:31:49 UTC (rev 4729) @@ -18,20 +18,16 @@ 'trimmed_mean_ci',] import numpy as np -from numpy import bool_, float_, int_, ndarray, array as narray +from numpy import float_, int_, ndarray import numpy.ma as ma -from numpy.ma import masked, nomask, MaskedArray -#from numpy.ma.extras import apply_along_axis, dot, median +from numpy.ma import MaskedArray import scipy.stats.mstats as mstats -#from numpy.ma.mstats import trim_both, trimmed_stde, mquantiles, stde_median from scipy.stats.distributions import norm, beta, t, binom -from scipy.stats.morestats import find_repeats - #####-------------------------------------------------------------------------- #---- --- Quantiles --- #####-------------------------------------------------------------------------- Modified: trunk/scipy/stats/morestats.py =================================================================== --- trunk/scipy/stats/morestats.py 2008-09-18 19:30:13 UTC (rev 4728) +++ trunk/scipy/stats/morestats.py 2008-09-18 19:31:49 UTC (rev 4729) @@ -5,10 +5,9 @@ import statlib import stats import distributions -import inspect from numpy import isscalar, r_, log, sum, around, unique, asarray from numpy import zeros, arange, sort, amin, amax, any, where, \ - array, atleast_1d, sqrt, ceil, floor, array, poly1d, compress, not_equal, \ + atleast_1d, sqrt, ceil, floor, array, poly1d, compress, not_equal, \ pi, exp, ravel, angle import scipy import numpy Modified: trunk/scipy/stats/tests/test_mmorestats.py =================================================================== --- trunk/scipy/stats/tests/test_mmorestats.py 2008-09-18 19:30:13 UTC (rev 4728) +++ trunk/scipy/stats/tests/test_mmorestats.py 2008-09-18 19:31:49 UTC (rev 4729) @@ -9,7 +9,6 @@ import numpy as np import numpy.ma as ma -from 
numpy.ma import masked import scipy.stats.mstats as ms import scipy.stats.mmorestats as mms Modified: trunk/scipy/stats/tests/test_stats.py =================================================================== --- trunk/scipy/stats/tests/test_stats.py 2008-09-18 19:30:13 UTC (rev 4728) +++ trunk/scipy/stats/tests/test_stats.py 2008-09-18 19:31:49 UTC (rev 4729) @@ -6,7 +6,6 @@ """ -import sys from numpy.testing import * from numpy import array, arange, zeros, ravel, float32, float64, power import numpy From scipy-svn at scipy.org Thu Sep 18 15:35:17 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:35:17 -0500 (CDT) Subject: [Scipy-svn] r4730 - in trunk/scipy/linalg: . benchmarks tests Message-ID: <20080918193517.16A3339C021@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:34:57 -0500 (Thu, 18 Sep 2008) New Revision: 4730 Modified: trunk/scipy/linalg/benchmarks/bench_basic.py trunk/scipy/linalg/benchmarks/bench_decom.py trunk/scipy/linalg/matfuncs.py trunk/scipy/linalg/tests/test_atlas_version.py trunk/scipy/linalg/tests/test_basic.py trunk/scipy/linalg/tests/test_blas.py trunk/scipy/linalg/tests/test_decomp.py trunk/scipy/linalg/tests/test_fblas.py trunk/scipy/linalg/tests/test_lapack.py trunk/scipy/linalg/tests/test_matfuncs.py Log: Removed unused/redundant imports. 
Modified: trunk/scipy/linalg/benchmarks/bench_basic.py =================================================================== --- trunk/scipy/linalg/benchmarks/bench_basic.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/benchmarks/bench_basic.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -1,19 +1,13 @@ import sys -import numpy -from numpy import arange, add, array, dot, zeros, identity, conjugate, transpose - from numpy.testing import * +import numpy.linalg as linalg -from scipy.linalg import solve,inv,det,lstsq, toeplitz, hankel, tri, triu, \ - tril, pinv, pinv2, solve_banded - def random(size): return rand(*size) class TestSolve(TestCase): def bench_random(self): - import numpy.linalg as linalg basic_solve = linalg.solve print print ' Solving system of linear equations' @@ -53,7 +47,6 @@ class TestInv(TestCase): def bench_random(self): - import numpy.linalg as linalg basic_inv = linalg.inv print print ' Finding matrix inverse' @@ -92,7 +85,6 @@ class TestDet(TestCase): def bench_random(self): - import numpy.linalg as linalg basic_det = linalg.det print print ' Finding matrix determinant' Modified: trunk/scipy/linalg/benchmarks/bench_decom.py =================================================================== --- trunk/scipy/linalg/benchmarks/bench_decom.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/benchmarks/bench_decom.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -3,10 +3,7 @@ """ import sys -import numpy from numpy import linalg -from scipy.linalg import eigvals - from numpy.testing import * def random(size): Modified: trunk/scipy/linalg/matfuncs.py =================================================================== --- trunk/scipy/linalg/matfuncs.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/matfuncs.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -9,7 +9,7 @@ from numpy import asarray, Inf, dot, floor, eye, diag, exp, \ product, logical_not, ravel, transpose, conjugate, \ - cast, log, ogrid, isfinite, imag, real, absolute, amax, sign, 
\ + cast, log, ogrid, imag, real, absolute, amax, sign, \ isfinite, sqrt, identity, single from numpy import matrix as mat import numpy as np Modified: trunk/scipy/linalg/tests/test_atlas_version.py =================================================================== --- trunk/scipy/linalg/tests/test_atlas_version.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_atlas_version.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -3,7 +3,6 @@ # Created by: Pearu Peterson, October 2003 # -import sys from numpy.testing import * import scipy.linalg.atlas_version Modified: trunk/scipy/linalg/tests/test_basic.py =================================================================== --- trunk/scipy/linalg/tests/test_basic.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_basic.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -19,10 +19,9 @@ python tests/test_basic.py """ -import numpy from numpy import arange, add, array, dot, zeros, identity, conjugate, transpose +import numpy.linalg as linalg -import sys from numpy.testing import * from scipy.linalg import solve,inv,det,lstsq, toeplitz, hankel, tri, triu, \ @@ -50,33 +49,33 @@ for b in ([[1,0,0,0],[0,0,0,1],[0,1,0,0],[0,1,0,0]], [[2,1],[-30,4],[2,3],[1,3]]): x = solve_banded((l,u),ab,b) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) class TestSolve(TestCase): def test_20Feb04_bug(self): a = [[1,1],[1.0,0]] # ok x0 = solve(a,[1,0j]) - assert_array_almost_equal(numpy.dot(a,x0),[1,0]) + assert_array_almost_equal(dot(a,x0),[1,0]) a = [[1,1],[1.2,0]] # gives failure with clapack.zgesv(..,rowmajor=0) b = [1,0j] x0 = solve(a,b) - assert_array_almost_equal(numpy.dot(a,x0),[1,0]) + assert_array_almost_equal(dot(a,x0),[1,0]) def test_simple(self): a = [[1,20],[-30,4]] for b in ([[1,0],[0,1]],[1,0], [[2,1],[-30,4]]): x = solve(a,b) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_simple_sym(self): a = [[2,3],[3,5]] for lower in 
[0,1]: for b in ([[1,0],[0,1]],[1,0]): x = solve(a,b,sym_pos=1,lower=lower) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_simple_sym_complex(self): a = [[5,2],[2,4]] @@ -85,7 +84,7 @@ [0,2]], ]: x = solve(a,b,sym_pos=1) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_simple_complex(self): a = array([[5,2],[2j,4]],'D') @@ -96,7 +95,7 @@ array([1,0],'D'), ]: x = solve(a,b) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_nils_20Feb04(self): n = 2 @@ -117,7 +116,7 @@ for i in range(4): b = random([n,3]) x = solve(a,b) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_random_complex(self): n = 20 @@ -126,7 +125,7 @@ for i in range(2): b = random([n,3]) x = solve(a,b) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_random_sym(self): n = 20 @@ -138,7 +137,7 @@ for i in range(4): b = random([n]) x = solve(a,b,sym_pos=1) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_random_sym_complex(self): n = 20 @@ -147,11 +146,11 @@ for i in range(n): a[i,i] = abs(20*(.1+a[i,i])) for j in range(i): - a[i,j] = numpy.conjugate(a[j,i]) + a[i,j] = conjugate(a[j,i]) b = random([n])+2j*random([n]) for i in range(2): x = solve(a,b,sym_pos=1) - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) class TestInv(TestCase): @@ -159,11 +158,11 @@ def test_simple(self): a = [[1,2],[3,4]] a_inv = inv(a) - assert_array_almost_equal(numpy.dot(a,a_inv), + assert_array_almost_equal(dot(a,a_inv), [[1,0],[0,1]]) a = [[1,2,3],[4,5,6],[7,8,10]] a_inv = inv(a) - assert_array_almost_equal(numpy.dot(a,a_inv), + assert_array_almost_equal(dot(a,a_inv), [[1,0,0],[0,1,0],[0,0,1]]) def test_random(self): @@ -172,12 +171,12 @@ a = random([n,n]) for i in range(n): a[i,i] = 20*(.1+a[i,i]) a_inv = inv(a) - 
assert_array_almost_equal(numpy.dot(a,a_inv), - numpy.identity(n)) + assert_array_almost_equal(dot(a,a_inv), + identity(n)) def test_simple_complex(self): a = [[1,2],[3,4j]] a_inv = inv(a) - assert_array_almost_equal(numpy.dot(a,a_inv), + assert_array_almost_equal(dot(a,a_inv), [[1,0],[0,1]]) def test_random_complex(self): @@ -186,8 +185,8 @@ a = random([n,n])+2j*random([n,n]) for i in range(n): a[i,i] = 20*(.1+a[i,i]) a_inv = inv(a) - assert_array_almost_equal(numpy.dot(a,a_inv), - numpy.identity(n)) + assert_array_almost_equal(dot(a,a_inv), + identity(n)) class TestDet(TestCase): @@ -203,7 +202,6 @@ assert_almost_equal(a_det,-6+4j) def test_random(self): - import numpy.linalg as linalg basic_det = linalg.det n = 20 for i in range(4): @@ -213,7 +211,6 @@ assert_almost_equal(d1,d2) def test_random_complex(self): - import numpy.linalg as linalg basic_det = linalg.det n = 20 for i in range(4): @@ -246,7 +243,7 @@ for b in ([[1,0],[0,1]],[1,0], [[2,1],[-30,4]]): x = lstsq(a,b)[0] - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_simple_overdet(self): a = [[1,2],[4,5],[3,4]] @@ -271,7 +268,7 @@ for i in range(4): b = random([n,3]) x = lstsq(a,b)[0] - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_random_complex_exact(self): n = 20 @@ -280,7 +277,7 @@ for i in range(2): b = random([n,3]) x = lstsq(a,b)[0] - assert_array_almost_equal(numpy.dot(a,x),b) + assert_array_almost_equal(dot(a,x),b) def test_random_overdet(self): n = 20 Modified: trunk/scipy/linalg/tests/test_blas.py =================================================================== --- trunk/scipy/linalg/tests/test_blas.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_blas.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -12,13 +12,9 @@ python tests/test_blas.py [] """ -import sys import math -from numpy import arange, add, array - from numpy.testing import * - from scipy.linalg import fblas, cblas 
Modified: trunk/scipy/linalg/tests/test_decomp.py =================================================================== --- trunk/scipy/linalg/tests/test_decomp.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_decomp.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -14,7 +14,6 @@ python tests/test_decomp.py """ -import sys from numpy.testing import * from scipy.linalg import eig,eigvals,lu,svd,svdvals,cholesky,qr, \ @@ -25,10 +24,9 @@ from numpy import array, transpose, sometrue, diag, ones, linalg, \ argsort, zeros, arange, float32, complex64, dot, conj, identity, \ - ravel, sqrt, iscomplex, shape, sort, sign, conjugate, sign, bmat, \ + ravel, sqrt, iscomplex, shape, sort, conjugate, bmat, sign, \ asarray, matrix, isfinite, all - from numpy.random import rand def random(size): Modified: trunk/scipy/linalg/tests/test_fblas.py =================================================================== --- trunk/scipy/linalg/tests/test_fblas.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_fblas.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -6,11 +6,8 @@ # !! Complex calculations really aren't checked that carefully. # !! Only real valued complex numbers are used in tests. 
-import sys - -from numpy import dot, float32, float64, complex64, complex128, \ - arange, array, zeros, shape, transpose, newaxis, \ - common_type, conjugate +from numpy import float32, float64, complex64, complex128, arange, array, \ + zeros, shape, transpose, newaxis, common_type, conjugate from scipy.linalg import fblas from numpy.testing import * Modified: trunk/scipy/linalg/tests/test_lapack.py =================================================================== --- trunk/scipy/linalg/tests/test_lapack.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_lapack.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -3,8 +3,6 @@ # Created by: Pearu Peterson, September 2002 # - -import sys from numpy.testing import * from numpy import ones Modified: trunk/scipy/linalg/tests/test_matfuncs.py =================================================================== --- trunk/scipy/linalg/tests/test_matfuncs.py 2008-09-18 19:31:49 UTC (rev 4729) +++ trunk/scipy/linalg/tests/test_matfuncs.py 2008-09-18 19:34:57 UTC (rev 4730) @@ -6,15 +6,11 @@ """ -import sys - -import numpy from numpy import array, identity, dot, sqrt - from numpy.testing import * import scipy.linalg -from scipy.linalg import signm,logm,funm, sqrtm, expm, expm2, expm3 +from scipy.linalg import signm, logm, sqrtm, expm, expm2, expm3 class TestSignM(TestCase): From scipy-svn at scipy.org Thu Sep 18 15:37:51 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:37:51 -0500 (CDT) Subject: [Scipy-svn] r4731 - in trunk/scipy/optimize: . 
tests Message-ID: <20080918193751.488B739C19A@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:37:37 -0500 (Thu, 18 Sep 2008) New Revision: 4731 Modified: trunk/scipy/optimize/linesearch.py trunk/scipy/optimize/minpack.py trunk/scipy/optimize/optimize.py trunk/scipy/optimize/slsqp.py trunk/scipy/optimize/tests/test_optimize.py trunk/scipy/optimize/tests/test_slsqp.py trunk/scipy/optimize/tests/test_zeros.py trunk/scipy/optimize/zeros.py Log: Removed unused imports. Modified: trunk/scipy/optimize/linesearch.py =================================================================== --- trunk/scipy/optimize/linesearch.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/linesearch.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -2,7 +2,6 @@ from scipy.optimize import minpack2 import numpy -import sys import __builtin__ pymin = __builtin__.min Modified: trunk/scipy/optimize/minpack.py =================================================================== --- trunk/scipy/optimize/minpack.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/minpack.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -2,7 +2,7 @@ from numpy import atleast_1d, dot, take, triu, shape, eye, \ transpose, zeros, product, greater, array, \ - any, all, where, isscalar, asarray, ndarray + all, where, isscalar, asarray error = _minpack.error Modified: trunk/scipy/optimize/optimize.py =================================================================== --- trunk/scipy/optimize/optimize.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/optimize.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -24,7 +24,7 @@ import numpy from numpy import atleast_1d, eye, mgrid, argmin, zeros, shape, empty, \ - squeeze, isscalar, vectorize, asarray, absolute, sqrt, Inf, asfarray, isinf + squeeze, vectorize, asarray, absolute, sqrt, Inf, asfarray, isinf import linesearch # These have been copied from Numeric's MLab.py Modified: trunk/scipy/optimize/slsqp.py 
=================================================================== --- trunk/scipy/optimize/slsqp.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/slsqp.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -8,8 +8,8 @@ __all__ = ['approx_jacobian','fmin_slsqp'] from _slsqp import slsqp -from numpy import zeros, array, identity, linalg, rank, squeeze, append, \ - asfarray,product, concatenate, finfo, sqrt, vstack, transpose +from numpy import zeros, array, linalg, append, asfarray, concatenate, finfo, \ + sqrt, vstack from optimize import approx_fprime, wrap_function __docformat__ = "restructuredtext en" Modified: trunk/scipy/optimize/tests/test_optimize.py =================================================================== --- trunk/scipy/optimize/tests/test_optimize.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/tests/test_optimize.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -13,8 +13,7 @@ from scipy import optimize from scipy.optimize import leastsq -from numpy import array, zeros, float64, dot, log, exp, inf, \ - pi, sin, cos +from numpy import array, zeros, float64, dot, log, exp, inf, sin, cos import numpy as np from scipy.optimize.tnc import RCSTRINGS, MSG_NONE import numpy.random @@ -264,14 +263,14 @@ return err def test_basic(self): - p0 = numpy.array([0,0,0]) + p0 = array([0,0,0]) params_fit, ier = leastsq(self.residuals, p0, args=(self.y_meas, self.x)) assert ier in (1,2,3,4), 'solution not found (ier=%d)'%ier assert_array_almost_equal( params_fit, self.abc, decimal=2) # low precision due to random def test_full_output(self): - p0 = numpy.array([0,0,0]) + p0 = array([0,0,0]) full_output = leastsq(self.residuals, p0, args=(self.y_meas, self.x), full_output=True) @@ -279,8 +278,8 @@ assert ier in (1,2,3,4), 'solution not found: %s'%mesg def test_input_untouched(self): - p0 = numpy.array([0,0,0],dtype=numpy.float64) - p0_copy = numpy.array(p0, copy=True) + p0 = array([0,0,0],dtype=float64) + p0_copy = array(p0, copy=True) full_output = 
leastsq(self.residuals, p0, args=(self.y_meas, self.x), full_output=True) Modified: trunk/scipy/optimize/tests/test_slsqp.py =================================================================== --- trunk/scipy/optimize/tests/test_slsqp.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/tests/test_slsqp.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -2,7 +2,6 @@ import numpy as np from scipy.optimize import fmin_slsqp -from numpy import matrix, diag class TestSLSQP(TestCase): Modified: trunk/scipy/optimize/tests/test_zeros.py =================================================================== --- trunk/scipy/optimize/tests/test_zeros.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/tests/test_zeros.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -7,8 +7,7 @@ from scipy.optimize import zeros as cc # Import testing parameters -from scipy.optimize._tstutils import methods, mstrings, functions, \ - fstrings, description +from scipy.optimize._tstutils import functions, fstrings class TestBasic(TestCase) : def run_check(self, method, name): Modified: trunk/scipy/optimize/zeros.py =================================================================== --- trunk/scipy/optimize/zeros.py 2008-09-18 19:34:57 UTC (rev 4730) +++ trunk/scipy/optimize/zeros.py 2008-09-18 19:37:37 UTC (rev 4731) @@ -1,7 +1,7 @@ ## Automatically adapted for scipy Oct 07, 2005 by convertcode.py import _zeros -from numpy import sqrt, sign, finfo +from numpy import finfo _iter = 100 _xtol = 1e-12 From scipy-svn at scipy.org Thu Sep 18 15:45:38 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:45:38 -0500 (CDT) Subject: [Scipy-svn] r4732 - in trunk/scipy/weave: examples tests Message-ID: <20080918194538.85AAC39C153@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:45:20 -0500 (Thu, 18 Sep 2008) New Revision: 4732 Removed: trunk/scipy/weave/tests/test_scxx.py Modified: trunk/scipy/weave/examples/vq.py trunk/scipy/weave/tests/test_blitz_tools.py 
trunk/scipy/weave/tests/test_c_spec.py trunk/scipy/weave/tests/test_ext_tools.py trunk/scipy/weave/tests/test_inline_tools.py trunk/scipy/weave/tests/test_numpy_scalar_spec.py trunk/scipy/weave/tests/test_scxx_dict.py trunk/scipy/weave/tests/test_scxx_object.py trunk/scipy/weave/tests/test_scxx_sequence.py trunk/scipy/weave/tests/test_size_check.py trunk/scipy/weave/tests/weave_test_utils.py Log: Removed unused imports. Removed test_scxx.py (the nose framework doesn't need to import the test_scxx* modules to find the tests). Modified: trunk/scipy/weave/examples/vq.py =================================================================== --- trunk/scipy/weave/examples/vq.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/examples/vq.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -14,7 +14,6 @@ # [25 29] [ 2.49147272 3.83021021] # speed up: 32.56 -import numpy from numpy import * import sys sys.path.insert(0,'..') Modified: trunk/scipy/weave/tests/test_blitz_tools.py =================================================================== --- trunk/scipy/weave/tests/test_blitz_tools.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_blitz_tools.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,7 +1,7 @@ import os import time -from numpy import dot, float32, float64, complex64, complex128, \ +from numpy import float32, float64, complex64, complex128, \ zeros, random, array, sum, abs, allclose from numpy.testing import * Modified: trunk/scipy/weave/tests/test_c_spec.py =================================================================== --- trunk/scipy/weave/tests/test_c_spec.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_c_spec.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,4 +1,3 @@ -import time import os import sys Modified: trunk/scipy/weave/tests/test_ext_tools.py =================================================================== --- trunk/scipy/weave/tests/test_ext_tools.py 2008-09-18 19:37:37 UTC (rev 4731) +++ 
trunk/scipy/weave/tests/test_ext_tools.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,5 +1,3 @@ -import time - from numpy.testing import * from scipy.weave import ext_tools, c_spec Modified: trunk/scipy/weave/tests/test_inline_tools.py =================================================================== --- trunk/scipy/weave/tests/test_inline_tools.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_inline_tools.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -2,7 +2,6 @@ from numpy.testing import * from scipy.weave import inline_tools -from test_scxx import * class TestInline(TestCase): """ These are long running tests... Modified: trunk/scipy/weave/tests/test_numpy_scalar_spec.py =================================================================== --- trunk/scipy/weave/tests/test_numpy_scalar_spec.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_numpy_scalar_spec.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,4 +1,3 @@ -import time import os import sys Deleted: trunk/scipy/weave/tests/test_scxx.py =================================================================== --- trunk/scipy/weave/tests/test_scxx.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_scxx.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,14 +0,0 @@ -""" Test refcounting and behavior of SCXX. -""" -import unittest -import time -import os,sys - -from numpy.testing import * -from test_scxx_object import * -from test_scxx_sequence import * -from test_scxx_dict import * - - -if __name__ == "__main__": - nose.run(argv=['', __file__]) Modified: trunk/scipy/weave/tests/test_scxx_dict.py =================================================================== --- trunk/scipy/weave/tests/test_scxx_dict.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_scxx_dict.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,7 +1,6 @@ """ Test refcounting and behavior of SCXX. 
""" -import time -import os + import sys from numpy.testing import * Modified: trunk/scipy/weave/tests/test_scxx_object.py =================================================================== --- trunk/scipy/weave/tests/test_scxx_object.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_scxx_object.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,7 +1,6 @@ """ Test refcounting and behavior of SCXX. """ -import time -import os + import sys from numpy.testing import * Modified: trunk/scipy/weave/tests/test_scxx_sequence.py =================================================================== --- trunk/scipy/weave/tests/test_scxx_sequence.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_scxx_sequence.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -2,7 +2,7 @@ """ import time -import os,sys +import sys from numpy.testing import * @@ -18,8 +18,6 @@ # operator[] (get) # operator[] (set) DONE -from UserList import UserList - class _TestSequenceBase(TestCase): seq_type = None Modified: trunk/scipy/weave/tests/test_size_check.py =================================================================== --- trunk/scipy/weave/tests/test_size_check.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/test_size_check.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,4 +1,3 @@ -import os import numpy as np from numpy.testing import * Modified: trunk/scipy/weave/tests/weave_test_utils.py =================================================================== --- trunk/scipy/weave/tests/weave_test_utils.py 2008-09-18 19:37:37 UTC (rev 4731) +++ trunk/scipy/weave/tests/weave_test_utils.py 2008-09-18 19:45:20 UTC (rev 4732) @@ -1,7 +1,4 @@ import os -import sys -import string -import pprint def remove_whitespace(in_str): out = in_str.replace(" ","") From scipy-svn at scipy.org Thu Sep 18 15:50:50 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:50:50 -0500 (CDT) Subject: [Scipy-svn] r4733 - in trunk/scipy/misc: . 
tests Message-ID: <20080918195050.7F49A39C19A@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:50:45 -0500 (Thu, 18 Sep 2008) New Revision: 4733 Modified: trunk/scipy/misc/common.py trunk/scipy/misc/pilutil.py trunk/scipy/misc/tests/test_pilutil.py Log: Removed unused imports. Modified: trunk/scipy/misc/common.py =================================================================== --- trunk/scipy/misc/common.py 2008-09-18 19:45:20 UTC (rev 4732) +++ trunk/scipy/misc/common.py 2008-09-18 19:50:45 UTC (rev 4733) @@ -3,16 +3,9 @@ (special, linalg) """ -import sys +from numpy import exp, asarray, arange, newaxis, hstack, product, array, \ + where, zeros, extract, place, pi, sqrt, eye, poly1d, dot, r_ -import numpy - -from numpy import exp, asarray, arange, \ - newaxis, hstack, product, array, where, \ - zeros, extract, place, pi, sqrt, eye, poly1d, dot, r_ - -from numpy import who - __all__ = ['factorial','factorial2','factorialk','comb', 'central_diff_weights', 'derivative', 'pade', 'lena'] Modified: trunk/scipy/misc/pilutil.py =================================================================== --- trunk/scipy/misc/pilutil.py 2008-09-18 19:45:20 UTC (rev 4732) +++ trunk/scipy/misc/pilutil.py 2008-09-18 19:50:45 UTC (rev 4733) @@ -1,6 +1,5 @@ # Functions which need the PIL -import types import numpy import tempfile Modified: trunk/scipy/misc/tests/test_pilutil.py =================================================================== --- trunk/scipy/misc/tests/test_pilutil.py 2008-09-18 19:45:20 UTC (rev 4732) +++ trunk/scipy/misc/tests/test_pilutil.py 2008-09-18 19:50:45 UTC (rev 4733) @@ -1,5 +1,4 @@ import os.path -import glob import numpy as np from numpy.testing import * From scipy-svn at scipy.org Thu Sep 18 15:55:12 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:55:12 -0500 (CDT) Subject: [Scipy-svn] r4734 - in trunk/scipy: interpolate interpolate/tests odr/tests stsci/image/lib Message-ID: 
<20080918195512.82F7339C153@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:55:02 -0500 (Thu, 18 Sep 2008) New Revision: 4734 Modified: trunk/scipy/interpolate/interpolate.py trunk/scipy/interpolate/polyint.py trunk/scipy/interpolate/tests/test_fitpack.py trunk/scipy/odr/tests/test_odr.py trunk/scipy/stsci/image/lib/combine.py Log: Removed unused imports. Modified: trunk/scipy/interpolate/interpolate.py =================================================================== --- trunk/scipy/interpolate/interpolate.py 2008-09-18 19:50:45 UTC (rev 4733) +++ trunk/scipy/interpolate/interpolate.py 2008-09-18 19:55:02 UTC (rev 4734) @@ -6,11 +6,10 @@ __all__ = ['interp1d', 'interp2d', 'spline', 'spleval', 'splmake', 'spltopp', 'ppform', 'lagrange'] -from numpy import shape, sometrue, rank, array, transpose, \ - swapaxes, searchsorted, clip, take, ones, putmask, less, greater, \ - logical_or, atleast_1d, atleast_2d, meshgrid, ravel, dot, poly1d +from numpy import shape, sometrue, rank, array, transpose, searchsorted, \ + ones, logical_or, atleast_1d, atleast_2d, meshgrid, ravel, \ + dot, poly1d import numpy as np -import scipy.linalg as slin import scipy.special as spec import math Modified: trunk/scipy/interpolate/polyint.py =================================================================== --- trunk/scipy/interpolate/polyint.py 2008-09-18 19:50:45 UTC (rev 4733) +++ trunk/scipy/interpolate/polyint.py 2008-09-18 19:55:02 UTC (rev 4734) @@ -1,6 +1,5 @@ import numpy as np from scipy import factorial -from numpy import poly1d __all__ = ["KroghInterpolator", "krogh_interpolate", "BarycentricInterpolator", "barycentric_interpolate", "PiecewisePolynomial", "piecewise_polynomial_interpolate","approximate_taylor_polynomial"] Modified: trunk/scipy/interpolate/tests/test_fitpack.py =================================================================== --- trunk/scipy/interpolate/tests/test_fitpack.py 2008-09-18 19:50:45 UTC (rev 4733) +++ 
trunk/scipy/interpolate/tests/test_fitpack.py 2008-09-18 19:55:02 UTC (rev 4734) @@ -12,13 +12,10 @@ """ #import libwadpy -import sys from numpy.testing import * from numpy import array, diff -from scipy.interpolate.fitpack2 import UnivariateSpline,LSQUnivariateSpline,\ - InterpolatedUnivariateSpline -from scipy.interpolate.fitpack2 import LSQBivariateSpline, \ - SmoothBivariateSpline, RectBivariateSpline +from scipy.interpolate.fitpack2 import UnivariateSpline, LSQBivariateSpline, \ + SmoothBivariateSpline, RectBivariateSpline class TestUnivariateSpline(TestCase): def test_linear_constant(self): Modified: trunk/scipy/odr/tests/test_odr.py =================================================================== --- trunk/scipy/odr/tests/test_odr.py 2008-09-18 19:50:45 UTC (rev 4733) +++ trunk/scipy/odr/tests/test_odr.py 2008-09-18 19:55:02 UTC (rev 4734) @@ -1,6 +1,3 @@ -# Standard library imports. -import cPickle - # Scipy imports. import numpy as np from numpy import pi Modified: trunk/scipy/stsci/image/lib/combine.py =================================================================== --- trunk/scipy/stsci/image/lib/combine.py 2008-09-18 19:50:45 UTC (rev 4733) +++ trunk/scipy/stsci/image/lib/combine.py 2008-09-18 19:55:02 UTC (rev 4734) @@ -1,8 +1,6 @@ import numpy as np from _combine import combine as _comb -import operator as _operator - def _combine_f(funcstr, arrays, output=None, outtype=None, nlow=0, nhigh=0, badmasks=None): arrays = [ np.asarray(a) for a in arrays ] shape = arrays[0].shape From scipy-svn at scipy.org Thu Sep 18 15:57:50 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 14:57:50 -0500 (CDT) Subject: [Scipy-svn] r4735 - in trunk/scipy: cluster/tests special/tests Message-ID: <20080918195750.86CCC39C192@scipy.org> Author: alan.mcintyre Date: 2008-09-18 14:57:42 -0500 (Thu, 18 Sep 2008) New Revision: 4735 Modified: trunk/scipy/cluster/tests/test_distance.py trunk/scipy/cluster/tests/test_hierarchy.py 
trunk/scipy/cluster/tests/test_vq.py trunk/scipy/special/tests/test_basic.py Log: Removed unused imports. Modified: trunk/scipy/cluster/tests/test_distance.py =================================================================== --- trunk/scipy/cluster/tests/test_distance.py 2008-09-18 19:55:02 UTC (rev 4734) +++ trunk/scipy/cluster/tests/test_distance.py 2008-09-18 19:57:42 UTC (rev 4735) @@ -34,16 +34,13 @@ # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -import sys import os.path import numpy as np from numpy.testing import * -from scipy.cluster.hierarchy import linkage, from_mlab_linkage, numobs_linkage -from scipy.cluster.distance import squareform, pdist, cdist, matching, jaccard, dice, sokalsneath, rogerstanimoto, russellrao, yule, numobs_dm, numobs_y +from scipy.cluster.distance import squareform, pdist, cdist, matching, \ + jaccard, dice, sokalsneath, rogerstanimoto, russellrao, yule -#from scipy.cluster.hierarchy import pdist, euclidean - _filenames = ["iris.txt", "cdist-X1.txt", "cdist-X2.txt", Modified: trunk/scipy/cluster/tests/test_hierarchy.py =================================================================== --- trunk/scipy/cluster/tests/test_hierarchy.py 2008-09-18 19:55:02 UTC (rev 4734) +++ trunk/scipy/cluster/tests/test_hierarchy.py 2008-09-18 19:57:42 UTC (rev 4735) @@ -33,14 +33,13 @@ # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-import sys import os.path import numpy as np from numpy.testing import * from scipy.cluster.hierarchy import linkage, from_mlab_linkage, numobs_linkage, inconsistent -from scipy.cluster.distance import squareform, pdist, matching, jaccard, dice, sokalsneath, rogerstanimoto, russellrao, yule, numobs_dm, numobs_y +from scipy.cluster.distance import squareform, pdist, numobs_dm, numobs_y _tdist = np.array([[0, 662, 877, 255, 412, 996], [662, 0, 295, 468, 268, 400], Modified: trunk/scipy/cluster/tests/test_vq.py =================================================================== --- trunk/scipy/cluster/tests/test_vq.py 2008-09-18 19:55:02 UTC (rev 4734) +++ trunk/scipy/cluster/tests/test_vq.py 2008-09-18 19:57:42 UTC (rev 4735) @@ -3,13 +3,12 @@ # David Cournapeau # Last Change: Tue Jun 24 04:00 PM 2008 J -import sys import os.path import numpy as np from numpy.testing import * -from scipy.cluster.vq import kmeans, kmeans2, py_vq, py_vq2, _py_vq_1d, vq, ClusterError +from scipy.cluster.vq import kmeans, kmeans2, py_vq, py_vq2, vq, ClusterError try: from scipy.cluster import _vq TESTC=True Modified: trunk/scipy/special/tests/test_basic.py =================================================================== --- trunk/scipy/special/tests/test_basic.py 2008-09-18 19:55:02 UTC (rev 4734) +++ trunk/scipy/special/tests/test_basic.py 2008-09-18 19:57:42 UTC (rev 4735) @@ -32,7 +32,7 @@ #8 test_sh_jacobi #8 test_sh_legendre -from numpy import dot, array +from numpy import array from numpy.testing import * From scipy-svn at scipy.org Thu Sep 18 16:00:28 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 15:00:28 -0500 (CDT) Subject: [Scipy-svn] r4736 - in trunk/scipy/integrate: . 
tests Message-ID: <20080918200028.B7E2139C232@scipy.org> Author: alan.mcintyre Date: 2008-09-18 15:00:22 -0500 (Thu, 18 Sep 2008) New Revision: 4736 Modified: trunk/scipy/integrate/ode.py trunk/scipy/integrate/quadpack.py trunk/scipy/integrate/quadrature.py trunk/scipy/integrate/tests/test_integrate.py trunk/scipy/integrate/tests/test_quadpack.py Log: Removed unused imports. Moved imports to the top of modules. Modified: trunk/scipy/integrate/ode.py =================================================================== --- trunk/scipy/integrate/ode.py 2008-09-18 19:57:42 UTC (rev 4735) +++ trunk/scipy/integrate/ode.py 2008-09-18 20:00:22 UTC (rev 4736) @@ -147,7 +147,7 @@ __version__ = "$Id$" __docformat__ = "restructuredtext en" -from numpy import asarray, array, zeros, sin, int32, isscalar +from numpy import asarray, array, zeros, int32, isscalar import re, sys #------------------------------------------------------------------------------ Modified: trunk/scipy/integrate/quadpack.py =================================================================== --- trunk/scipy/integrate/quadpack.py 2008-09-18 19:57:42 UTC (rev 4735) +++ trunk/scipy/integrate/quadpack.py 2008-09-18 20:00:22 UTC (rev 4736) @@ -6,10 +6,10 @@ import _quadpack import sys import numpy +from numpy import inf, Inf error = _quadpack.error - def quad_explain(output=sys.stdout): output.write(""" Extra information for quad() inputs and outputs: @@ -117,8 +117,6 @@ return -from numpy import inf, Inf - def quad(func, a, b, args=(), full_output=0, epsabs=1.49e-8, epsrel=1.49e-8, limit=50, points=None, weight=None, wvar=None, wopts=None, maxp1=50, limlst=50): Modified: trunk/scipy/integrate/quadrature.py =================================================================== --- trunk/scipy/integrate/quadrature.py 2008-09-18 19:57:42 UTC (rev 4735) +++ trunk/scipy/integrate/quadrature.py 2008-09-18 20:00:22 UTC (rev 4736) @@ -6,7 +6,6 @@ from scipy.special import gammaln from numpy import sum, ones, add, diff, 
isinf, isscalar, \ asarray, real, trapz, arange, empty -import scipy as sp import numpy as np def fixed_quad(func,a,b,args=(),n=5): Modified: trunk/scipy/integrate/tests/test_integrate.py =================================================================== --- trunk/scipy/integrate/tests/test_integrate.py 2008-09-18 19:57:42 UTC (rev 4735) +++ trunk/scipy/integrate/tests/test_integrate.py 2008-09-18 20:00:22 UTC (rev 4736) @@ -4,9 +4,8 @@ """ import numpy -from numpy import (arange, zeros, array, dot, sqrt, cos, sin, absolute, - eye, pi, exp, allclose) -from scipy.linalg import norm +from numpy import arange, zeros, array, dot, sqrt, cos, sin, eye, pi, exp, \ + allclose from numpy.testing import * from scipy.integrate import odeint, ode Modified: trunk/scipy/integrate/tests/test_quadpack.py =================================================================== --- trunk/scipy/integrate/tests/test_quadpack.py 2008-09-18 19:57:42 UTC (rev 4735) +++ trunk/scipy/integrate/tests/test_quadpack.py 2008-09-18 20:00:22 UTC (rev 4736) @@ -1,4 +1,3 @@ -import numpy from numpy import sqrt, cos, sin, arctan, exp, log, pi, Inf from numpy.testing import * from scipy.integrate import quad, dblquad, tplquad From scipy-svn at scipy.org Thu Sep 18 17:20:23 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 18 Sep 2008 16:20:23 -0500 (CDT) Subject: [Scipy-svn] r4737 - in trunk/scipy/sparse: . 
benchmarks linalg/dsolve/tests linalg/dsolve/umfpack linalg/dsolve/umfpack/tests linalg/eigen/arpack linalg/eigen/arpack/tests linalg/eigen/lobpcg linalg/eigen/lobpcg/tests linalg/isolve linalg/isolve/tests linalg/tests tests Message-ID: <20080918212023.1391B39C021@scipy.org> Author: alan.mcintyre Date: 2008-09-18 16:20:05 -0500 (Thu, 18 Sep 2008) New Revision: 4737 Modified: trunk/scipy/sparse/benchmarks/bench_sparse.py trunk/scipy/sparse/bsr.py trunk/scipy/sparse/compressed.py trunk/scipy/sparse/construct.py trunk/scipy/sparse/coo.py trunk/scipy/sparse/csc.py trunk/scipy/sparse/csr.py trunk/scipy/sparse/dia.py trunk/scipy/sparse/dok.py trunk/scipy/sparse/linalg/dsolve/tests/test_linsolve.py trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py trunk/scipy/sparse/linalg/eigen/arpack/speigs.py trunk/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py trunk/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py trunk/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py trunk/scipy/sparse/linalg/isolve/iterative.py trunk/scipy/sparse/linalg/isolve/minres.py trunk/scipy/sparse/linalg/isolve/tests/test_iterative.py trunk/scipy/sparse/linalg/tests/test_interface.py trunk/scipy/sparse/spfuncs.py trunk/scipy/sparse/tests/test_base.py trunk/scipy/sparse/tests/test_extract.py Log: Removed unused imports. Standardize NumPy import as "import numpy as np". 
Modified: trunk/scipy/sparse/benchmarks/bench_sparse.py =================================================================== --- trunk/scipy/sparse/benchmarks/bench_sparse.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/benchmarks/bench_sparse.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -8,8 +8,7 @@ from numpy.testing import * from scipy import sparse -from scipy.sparse import csc_matrix, csr_matrix, dok_matrix, \ - coo_matrix, lil_matrix, dia_matrix, spdiags +from scipy.sparse import csr_matrix, coo_matrix, dia_matrix def random_sparse(m,n,nnz_per_row): Modified: trunk/scipy/sparse/bsr.py =================================================================== --- trunk/scipy/sparse/bsr.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/bsr.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -7,16 +7,15 @@ from warnings import warn from numpy import zeros, intc, array, asarray, arange, diff, tile, rank, \ - prod, ravel, empty, matrix, asmatrix, empty_like, hstack + ravel, empty, empty_like from data import _data_matrix from compressed import _cs_matrix from base import isspmatrix, _formats -from sputils import isshape, getdtype, to_native, isscalarlike, isdense, \ - upcast +from sputils import isshape, getdtype, to_native, upcast import sparsetools -from sparsetools import bsr_matvec, bsr_matvecs, csr_matmat_pass1, csr_matmat_pass2, \ - bsr_matmat_pass2, bsr_transpose, bsr_sort_indices +from sparsetools import bsr_matvec, bsr_matvecs, csr_matmat_pass1, \ + bsr_matmat_pass2, bsr_transpose, bsr_sort_indices class bsr_matrix(_cs_matrix): """Block Sparse Row matrix Modified: trunk/scipy/sparse/compressed.py =================================================================== --- trunk/scipy/sparse/compressed.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/compressed.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -5,10 +5,8 @@ from warnings import warn -import numpy -from numpy import array, matrix, asarray, asmatrix, zeros, rank, intc, \ - empty, hstack, 
isscalar, ndarray, shape, searchsorted, empty_like, \ - where, concatenate, transpose, deprecate +from numpy import array, asarray, zeros, rank, intc, empty, isscalar, \ + empty_like, where, concatenate, deprecate, diff, multiply from base import spmatrix, isspmatrix, SparseEfficiencyWarning from data import _data_matrix @@ -166,7 +164,7 @@ if self.indices.min() < 0: raise ValueError, "%s index values must be >= 0" % \ minor_name - if numpy.diff(self.indptr).min() < 0: + if diff(self.indptr).min() < 0: raise ValueError,'index pointer values must form a " \ "non-decreasing sequence' @@ -260,7 +258,7 @@ raise ValueError('inconsistent shapes') if isdense(other): - return numpy.multiply(self.todense(),other) + return multiply(self.todense(),other) else: other = self.__class__(other) return self._binopt(other,'_elmul_') @@ -541,7 +539,7 @@ index = self.indices[indices] - start data = self.data[indices] - indptr = numpy.array([0, len(indices)]) + indptr = array([0, len(indices)]) return self.__class__((data, index, indptr), shape=shape, \ dtype=self.dtype) Modified: trunk/scipy/sparse/construct.py =================================================================== --- trunk/scipy/sparse/construct.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/construct.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -15,16 +15,13 @@ from sputils import upcast -from csr import csr_matrix, isspmatrix_csr -from csc import csc_matrix, isspmatrix_csc +from csr import csr_matrix +from csc import csc_matrix from bsr import bsr_matrix from coo import coo_matrix -from dok import dok_matrix from lil import lil_matrix from dia import dia_matrix -from base import isspmatrix - def spdiags(data, diags, m, n, format=None): """Return a sparse matrix from diagonals. 
Modified: trunk/scipy/sparse/coo.py =================================================================== --- trunk/scipy/sparse/coo.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/coo.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -7,14 +7,13 @@ from itertools import izip from warnings import warn -from numpy import array, asarray, empty, intc, zeros, \ - unique, searchsorted, atleast_2d, rank, deprecate, hstack +from numpy import array, asarray, empty, intc, zeros, unique, searchsorted,\ + atleast_2d, rank, deprecate, hstack -from sparsetools import coo_tocsr, coo_tocsc, coo_todense, coo_matvec +from sparsetools import coo_tocsr, coo_todense, coo_matvec from base import isspmatrix from data import _data_matrix from sputils import upcast, to_native, isshape, getdtype -from spfuncs import estimate_blocksize class coo_matrix(_data_matrix): """A sparse matrix in COOrdinate format. Modified: trunk/scipy/sparse/csc.py =================================================================== --- trunk/scipy/sparse/csc.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/csc.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -6,15 +6,9 @@ from warnings import warn -import numpy -from numpy import array, matrix, asarray, asmatrix, zeros, rank, intc, \ - empty, hstack, isscalar, ndarray, shape, searchsorted, where, \ - concatenate, deprecate, transpose, ravel - -from base import spmatrix, isspmatrix +from numpy import asarray, intc, empty, searchsorted, deprecate from sparsetools import csc_tocsr -from sputils import upcast, to_native, isdense, isshape, getdtype, \ - isscalarlike, isintlike +from sputils import upcast, isintlike from compressed import _cs_matrix Modified: trunk/scipy/sparse/csr.py =================================================================== --- trunk/scipy/sparse/csr.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/csr.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -7,16 +7,12 @@ from warnings import warn -import numpy -from numpy import array, 
matrix, asarray, asmatrix, zeros, rank, intc, \ - empty, hstack, isscalar, ndarray, shape, searchsorted, where, \ - concatenate, deprecate, arange, ones, ravel +from numpy import asarray, asmatrix, zeros, intc, empty, isscalar, array, \ + searchsorted, where, deprecate, arange, ones, ravel -from base import spmatrix, isspmatrix from sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \ get_csr_submatrix -from sputils import upcast, to_native, isdense, isshape, getdtype, \ - isscalarlike, isintlike +from sputils import upcast, isintlike from compressed import _cs_matrix @@ -319,7 +315,7 @@ index = self.indices[indices] - start data = self.data[indices] - indptr = numpy.array([0, len(indices)]) + indptr = array([0, len(indices)]) return csr_matrix( (data, index, indptr), shape=(1, stop-start) ) def _get_submatrix( self, row_slice, col_slice ): Modified: trunk/scipy/sparse/dia.py =================================================================== --- trunk/scipy/sparse/dia.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/dia.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -4,14 +4,12 @@ __all__ = ['dia_matrix','isspmatrix_dia'] -from numpy import asarray, asmatrix, matrix, zeros, arange, array, \ - empty_like, intc, atleast_1d, atleast_2d, add, multiply, \ - unique, hstack +from numpy import asarray, zeros, arange, array, intc, atleast_1d, \ + atleast_2d, unique, hstack from base import isspmatrix, _formats from data import _data_matrix -from sputils import isscalarlike, isshape, upcast, getdtype, isdense - +from sputils import isshape, upcast, getdtype from sparsetools import dia_matvec class dia_matrix(_data_matrix): Modified: trunk/scipy/sparse/dok.py =================================================================== --- trunk/scipy/sparse/dok.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/dok.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -7,7 +7,7 @@ import operator from itertools import izip -from numpy import asarray, asmatrix, intc, 
isscalar, array, matrix +from numpy import asarray, intc, isscalar from base import spmatrix,isspmatrix from sputils import isdense, getdtype, isshape, isintlike, isscalarlike Modified: trunk/scipy/sparse/linalg/dsolve/tests/test_linsolve.py =================================================================== --- trunk/scipy/sparse/linalg/dsolve/tests/test_linsolve.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/dsolve/tests/test_linsolve.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -4,7 +4,7 @@ from numpy.testing import * from scipy.linalg import norm, inv -from scipy.sparse import spdiags, csc_matrix, SparseEfficiencyWarning +from scipy.sparse import spdiags, SparseEfficiencyWarning from scipy.sparse.linalg.dsolve import spsolve, use_solver warnings.simplefilter('ignore',SparseEfficiencyWarning) Modified: trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py =================================================================== --- trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/dsolve/umfpack/tests/test_umfpack.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -6,14 +6,11 @@ """ import warnings - -from numpy import transpose, array, arange - import random from numpy.testing import * from scipy import rand, matrix, diag, eye -from scipy.sparse import csc_matrix, dok_matrix, spdiags, SparseEfficiencyWarning +from scipy.sparse import csc_matrix, spdiags, SparseEfficiencyWarning from scipy.sparse.linalg import linsolve warnings.simplefilter('ignore',SparseEfficiencyWarning) @@ -112,8 +109,8 @@ self.a = spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5) #print "The sparse matrix (constructed from diagonals):" #print self.a - self.b = array([1, 2, 3, 4, 5]) - self.b2 = array([5, 4, 3, 2, 1]) + self.b = np.array([1, 2, 3, 4, 5]) + self.b2 = np.array([5, 4, 3, 2, 1]) Modified: trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 
=================================================================== --- trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -9,7 +9,7 @@ #from base import Struct, pause import numpy as np import scipy.sparse as sp -import re, imp +import re try: # Silence import error. import _umfpack as _um except: Modified: trunk/scipy/sparse/linalg/eigen/arpack/speigs.py =================================================================== --- trunk/scipy/sparse/linalg/eigen/arpack/speigs.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/eigen/arpack/speigs.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -1,6 +1,5 @@ import numpy as np import _arpack -import warnings __all___=['ArpackException','ARPACK_eigs', 'ARPACK_gen_eigs'] Modified: trunk/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py =================================================================== --- trunk/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -7,9 +7,8 @@ from numpy.testing import * -from numpy import array,real,imag,finfo,concatenate,\ - column_stack,argsort,dot,round,conj,sort,random -from scipy.sparse.linalg.eigen.arpack import eigen_symmetric,eigen +from numpy import array, finfo, argsort, dot, round, conj, random +from scipy.sparse.linalg.eigen.arpack import eigen_symmetric, eigen def assert_almost_equal_cc(actual,desired,decimal=7,err_msg='',verbose=True): Modified: trunk/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py =================================================================== --- trunk/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -10,8 +10,6 @@ Examples in tests directory contributed by Nils 
Wagner. """ -from warnings import warn - import numpy as np import scipy as sp Modified: trunk/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py =================================================================== --- trunk/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -5,8 +5,7 @@ import numpy from numpy.testing import * -from scipy import array, arange, ones, sort, cos, pi, rand, \ - set_printoptions, r_, diag, linalg +from scipy import arange, ones, rand, set_printoptions, r_, diag, linalg from scipy.linalg import eig from scipy.sparse.linalg.eigen.lobpcg import lobpcg Modified: trunk/scipy/sparse/linalg/isolve/iterative.py =================================================================== --- trunk/scipy/sparse/linalg/isolve/iterative.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/isolve/iterative.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -13,7 +13,6 @@ import _iterative import numpy as np -import copy from scipy.sparse.linalg.interface import LinearOperator from utils import make_system Modified: trunk/scipy/sparse/linalg/isolve/minres.py =================================================================== --- trunk/scipy/sparse/linalg/isolve/minres.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/isolve/minres.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -1,4 +1,4 @@ -from numpy import ndarray, matrix, sqrt, inner, finfo, asarray, zeros +from numpy import sqrt, inner, finfo, zeros from numpy.linalg import norm from utils import make_system @@ -280,7 +280,6 @@ from scipy import ones, arange from scipy.linalg import norm from scipy.sparse import spdiags - from scipy.sparse.linalg import cg n = 10 Modified: trunk/scipy/sparse/linalg/isolve/tests/test_iterative.py =================================================================== --- 
trunk/scipy/sparse/linalg/isolve/tests/test_iterative.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/isolve/tests/test_iterative.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -4,8 +4,7 @@ from numpy.testing import * -from numpy import zeros, dot, diag, ones, arange, array, abs, max -from numpy.random import rand +from numpy import zeros, ones, arange, array, abs, max from scipy.linalg import norm from scipy.sparse import spdiags, csr_matrix Modified: trunk/scipy/sparse/linalg/tests/test_interface.py =================================================================== --- trunk/scipy/sparse/linalg/tests/test_interface.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/linalg/tests/test_interface.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -5,7 +5,7 @@ from numpy.testing import * import numpy -from numpy import array, matrix, ones, ravel +from numpy import array, matrix, dtype from scipy.sparse import csr_matrix from scipy.sparse.linalg.interface import * @@ -21,7 +21,7 @@ class matlike: def __init__(self): - self.dtype = numpy.dtype('int') + self.dtype = dtype('int') self.shape = (2,3) def matvec(self,x): y = array([ 1*x[0] + 2*x[1] + 3*x[2], Modified: trunk/scipy/sparse/spfuncs.py =================================================================== --- trunk/scipy/sparse/spfuncs.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/spfuncs.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -3,15 +3,8 @@ __all__ = ['count_blocks','estimate_blocksize'] -from numpy import empty, ravel - -from base import isspmatrix from csr import isspmatrix_csr, csr_matrix from csc import isspmatrix_csc -from bsr import isspmatrix_bsr -from sputils import upcast - -import sparsetools from sparsetools import csr_count_blocks def extract_diagonal(A): Modified: trunk/scipy/sparse/tests/test_base.py =================================================================== --- trunk/scipy/sparse/tests/test_base.py 2008-09-18 20:00:22 UTC (rev 4736) +++ 
trunk/scipy/sparse/tests/test_base.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -15,9 +15,10 @@ import warnings -import numpy -from numpy import arange, zeros, array, dot, ones, matrix, asmatrix, \ - asarray, vstack, ndarray, transpose, diag +import numpy as np +from numpy import arange, zeros, array, dot, matrix, asmatrix, asarray, \ + vstack, ndarray, transpose, diag, kron, inf, conjugate, \ + int8 import random from numpy.testing import * @@ -94,12 +95,12 @@ mats.append( [[0,1],[0,2],[0,3]] ) mats.append( [[0,0,1],[0,0,2],[0,3,0]] ) - mats.append( numpy.kron(mats[0],[[1,2]]) ) - mats.append( numpy.kron(mats[0],[[1],[2]]) ) - mats.append( numpy.kron(mats[1],[[1,2],[3,4]]) ) - mats.append( numpy.kron(mats[2],[[1,2],[3,4]]) ) - mats.append( numpy.kron(mats[3],[[1,2],[3,4]]) ) - mats.append( numpy.kron(mats[3],[[1,2,3,4]]) ) + mats.append( kron(mats[0],[[1,2]]) ) + mats.append( kron(mats[0],[[1],[2]]) ) + mats.append( kron(mats[1],[[1,2],[3,4]]) ) + mats.append( kron(mats[2],[[1,2],[3,4]]) ) + mats.append( kron(mats[3],[[1,2],[3,4]]) ) + mats.append( kron(mats[3],[[1,2,3,4]]) ) for m in mats: assert_equal(self.spmatrix(m).diagonal(),diag(m)) @@ -257,7 +258,7 @@ assert_array_equal((self.datsp / self.datsp).todense(),expected) denom = self.spmatrix(matrix([[1,0,0,4],[-1,0,0,0],[0,8,0,-5]],'d')) - res = matrix([[1,0,0,0.5],[-3,0,numpy.inf,0],[0,0.25,0,0]],'d') + res = matrix([[1,0,0,0.5],[-3,0,inf,0],[0,0.25,0,0]],'d') assert_array_equal((self.datsp / denom).todense(),res) # complex @@ -421,7 +422,7 @@ def test_tobsr(self): x = array([[1,0,2,0],[0,0,0,0],[0,0,4,5]]) y = array([[0,1,2],[3,0,5]]) - A = numpy.kron(x,y) + A = kron(x,y) Asp = self.spmatrix(A) for format in ['bsr']: fn = getattr(Asp, 'to' + format ) @@ -584,16 +585,16 @@ Wagner for a 64-bit machine, 02 March 2005 (EJS) """ n = 20 - numpy.random.seed(0) #make tests repeatable + np.random.seed(0) #make tests repeatable A = zeros((n,n), dtype=complex) - x = numpy.random.rand(n) - y = 
numpy.random.rand(n-1)+1j*numpy.random.rand(n-1) - r = numpy.random.rand(n) + x = np.random.rand(n) + y = np.random.rand(n-1)+1j*np.random.rand(n-1) + r = np.random.rand(n) for i in range(len(x)): A[i,i] = x[i] for i in range(len(y)): A[i,i+1] = y[i] - A[i+1,i] = numpy.conjugate(y[i]) + A[i+1,i] = conjugate(y[i]) A = self.spmatrix(A) x = splu(A).solve(r) assert_almost_equal(A*x,r) @@ -764,7 +765,7 @@ # Check bug reported by Robert Cimrman: # http://thread.gmane.org/gmane.comp.python.scientific.devel/7986 - s = slice(numpy.int8(2),numpy.int8(4),None) + s = slice(int8(2),int8(4),None) assert_equal(A[s,:].todense(), B[2:4,:]) assert_equal(A[:,s].todense(), B[:,2:4]) @@ -920,9 +921,9 @@ def test_constructor4(self): """using (data, ij) format""" - row = numpy.array([2, 3, 1, 3, 0, 1, 3, 0, 2, 1, 2]) - col = numpy.array([0, 1, 0, 0, 1, 1, 2, 2, 2, 2, 1]) - data = numpy.array([ 6., 10., 3., 9., 1., 4., + row = array([2, 3, 1, 3, 0, 1, 3, 0, 2, 1, 2]) + col = array([0, 1, 0, 0, 1, 1, 2, 2, 2, 2, 1]) + data = array([ 6., 10., 3., 9., 1., 4., 11., 2., 8., 5., 7.]) ij = vstack((row,col)) @@ -994,9 +995,9 @@ def test_constructor4(self): """using (data, ij) format""" - row = numpy.array([2, 3, 1, 3, 0, 1, 3, 0, 2, 1, 2]) - col = numpy.array([0, 1, 0, 0, 1, 1, 2, 2, 2, 2, 1]) - data = numpy.array([ 6., 10., 3., 9., 1., 4., + row = array([2, 3, 1, 3, 0, 1, 3, 0, 2, 1, 2]) + col = array([0, 1, 0, 0, 1, 1, 2, 2, 2, 2, 1]) + data = array([ 6., 10., 3., 9., 1., 4., 11., 2., 8., 5., 7.]) ij = vstack((row,col)) @@ -1295,9 +1296,9 @@ spmatrix = coo_matrix def test_constructor1(self): """unsorted triplet format""" - row = numpy.array([2, 3, 1, 3, 0, 1, 3, 0, 2, 1, 2]) - col = numpy.array([0, 1, 0, 0, 1, 1, 2, 2, 2, 2, 1]) - data = numpy.array([ 6., 10., 3., 9., 1., 4., + row = array([2, 3, 1, 3, 0, 1, 3, 0, 2, 1, 2]) + col = array([0, 1, 0, 0, 1, 1, 2, 2, 2, 2, 1]) + data = array([ 6., 10., 3., 9., 1., 4., 11., 2., 8., 5., 7.]) coo = coo_matrix((data,(row,col)),(4,3)) @@ -1306,9 +1307,9 
@@ def test_constructor2(self): """unsorted triplet format with duplicates (which are summed)""" - row = numpy.array([0,1,2,2,2,2,0,0,2,2]) - col = numpy.array([0,2,0,2,1,1,1,0,0,2]) - data = numpy.array([2,9,-4,5,7,0,-1,2,1,-5]) + row = array([0,1,2,2,2,2,0,0,2,2]) + col = array([0,2,0,2,1,1,1,0,0,2]) + data = array([2,9,-4,5,7,0,-1,2,1,-5]) coo = coo_matrix((data,(row,col)),(3,3)) mat = matrix([[4,-1,0],[0,0,9],[-3,7,0]]) @@ -1327,14 +1328,14 @@ def test_constructor4(self): """from dense matrix""" - mat = numpy.array([[0,1,0,0], + mat = array([[0,1,0,0], [7,0,3,0], [0,4,0,0]]) coo = coo_matrix(mat) assert_array_equal(coo.todense(),mat) #upgrade rank 1 arrays to row matrix - mat = numpy.array([0,1,0,0]) + mat = array([0,1,0,0]) coo = coo_matrix(mat) assert_array_equal(coo.todense(),mat.reshape(1,-1)) @@ -1366,7 +1367,7 @@ data[3] = array([[ 0, 5, 10], [15, 0, 25]]) - A = numpy.kron( [[1,0,2,0],[0,0,0,0],[0,0,4,5]], [[0,1,2],[3,0,5]] ) + A = kron( [[1,0,2,0],[0,0,0,0],[0,0,4,5]], [[0,1,2],[3,0,5]] ) Asp = bsr_matrix((data,indices,indptr),shape=(6,12)) assert_equal(Asp.todense(),A) @@ -1385,7 +1386,7 @@ assert_equal(bsr_matrix(A,blocksize=(2,2)).todense(),A) assert_equal(bsr_matrix(A,blocksize=(2,3)).todense(),A) - A = numpy.kron( [[1,0,2,0],[0,0,0,0],[0,0,4,5]], [[0,1,2],[3,0,5]] ) + A = kron( [[1,0,2,0],[0,0,0,0],[0,0,4,5]], [[0,1,2],[3,0,5]] ) assert_equal(bsr_matrix(A).todense(),A) assert_equal(bsr_matrix(A,shape=(6,12)).todense(),A) assert_equal(bsr_matrix(A,blocksize=(1,1)).todense(),A) @@ -1395,11 +1396,11 @@ assert_equal(bsr_matrix(A,blocksize=(3,12)).todense(),A) assert_equal(bsr_matrix(A,blocksize=(6,12)).todense(),A) - A = numpy.kron( [[1,0,2,0],[0,1,0,0],[0,0,0,0]], [[0,1,2],[3,0,5]] ) + A = kron( [[1,0,2,0],[0,1,0,0],[0,0,0,0]], [[0,1,2],[3,0,5]] ) assert_equal(bsr_matrix(A,blocksize=(2,3)).todense(),A) def test_eliminate_zeros(self): - data = numpy.kron([1, 0, 0, 0, 2, 0, 3, 0], [[1,1],[1,1]]).T + data = kron([1, 0, 0, 0, 2, 0, 3, 0], [[1,1],[1,1]]).T 
data = data.reshape(-1,2,2) indices = array( [1, 2, 3, 4, 5, 6, 7, 8] ) indptr = array( [0, 3, 8] ) Modified: trunk/scipy/sparse/tests/test_extract.py =================================================================== --- trunk/scipy/sparse/tests/test_extract.py 2008-09-18 20:00:22 UTC (rev 4736) +++ trunk/scipy/sparse/tests/test_extract.py 2008-09-18 21:20:05 UTC (rev 4737) @@ -1,9 +1,6 @@ """test sparse matrix construction functions""" -import numpy -from numpy import array, matrix from numpy.testing import * - from scipy.sparse import csr_matrix import numpy as np From scipy-svn at scipy.org Sat Sep 20 19:00:14 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Sat, 20 Sep 2008 18:00:14 -0500 (CDT) Subject: [Scipy-svn] r4738 - in trunk/scipy/io: arff tests Message-ID: <20080920230014.E3DFE39C088@scipy.org> Author: alan.mcintyre Date: 2008-09-20 18:00:11 -0500 (Sat, 20 Sep 2008) New Revision: 4738 Modified: trunk/scipy/io/arff/arffread.py trunk/scipy/io/tests/test_mmio.py Log: Remove unused imports. 
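The duplicate-summing behaviour exercised by `test_constructor2` above (triplets with repeated `(row, col)` pairs are accumulated when a COO matrix is densified) can be sketched in plain Python. This is a stand-in for `coo_matrix((data, (row, col)), (3, 3)).todense()`, not scipy's implementation; the function name is ours.

```python
# Pure-Python sketch of COO -> dense conversion: duplicate
# (row, col) entries are summed, mirroring test_constructor2.
def coo_to_dense(row, col, data, shape):
    dense = [[0] * shape[1] for _ in range(shape[0])]
    for r, c, v in zip(row, col, data):
        dense[r][c] += v  # duplicates accumulate here
    return dense

# The exact triplets from test_constructor2:
row  = [0, 1, 2, 2, 2, 2, 0, 0, 2, 2]
col  = [0, 2, 0, 2, 1, 1, 1, 0, 0, 2]
data = [2, 9, -4, 5, 7, 0, -1, 2, 1, -5]

dense = coo_to_dense(row, col, data, (3, 3))
assert dense == [[4, -1, 0], [0, 0, 9], [-3, 7, 0]]
```

The resulting matrix matches the `matrix([[4,-1,0],[0,0,9],[-3,7,0]])` the test compares against.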
Modified: trunk/scipy/io/arff/arffread.py =================================================================== --- trunk/scipy/io/arff/arffread.py 2008-09-18 21:20:05 UTC (rev 4737) +++ trunk/scipy/io/arff/arffread.py 2008-09-20 23:00:11 UTC (rev 4738) @@ -2,7 +2,6 @@ # Last Change: Mon Aug 20 08:00 PM 2007 J import re import itertools -import sys import numpy as np Modified: trunk/scipy/io/tests/test_mmio.py =================================================================== --- trunk/scipy/io/tests/test_mmio.py 2008-09-18 21:20:05 UTC (rev 4737) +++ trunk/scipy/io/tests/test_mmio.py 2008-09-20 23:00:11 UTC (rev 4738) @@ -4,7 +4,6 @@ from numpy import array,transpose from numpy.testing import * -import scipy import scipy.sparse from scipy.io.mmio import mminfo,mmread,mmwrite From scipy-svn at scipy.org Sat Sep 20 19:01:10 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Sat, 20 Sep 2008 18:01:10 -0500 (CDT) Subject: [Scipy-svn] r4739 - trunk/scipy/cluster/tests Message-ID: <20080920230110.8D44539C088@scipy.org> Author: alan.mcintyre Date: 2008-09-20 18:01:08 -0500 (Sat, 20 Sep 2008) New Revision: 4739 Modified: trunk/scipy/cluster/tests/test_distance.py Log: Silence debugging print statements when verbosity < 3. 
Modified: trunk/scipy/cluster/tests/test_distance.py =================================================================== --- trunk/scipy/cluster/tests/test_distance.py 2008-09-20 23:00:11 UTC (rev 4738) +++ trunk/scipy/cluster/tests/test_distance.py 2008-09-20 23:01:08 UTC (rev 4739) @@ -109,7 +109,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'euclidean') Y2 = cdist(X1, X2, 'test_euclidean') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_sqeuclidean_random(self): @@ -120,7 +121,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'sqeuclidean') Y2 = cdist(X1, X2, 'test_sqeuclidean') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_cityblock_random(self): @@ -131,7 +133,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'cityblock') Y2 = cdist(X1, X2, 'test_cityblock') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_hamming_double_random(self): @@ -142,7 +145,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'hamming') Y2 = cdist(X1, X2, 'test_hamming') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_hamming_bool_random(self): @@ -153,7 +157,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'hamming') Y2 = cdist(X1, X2, 'test_hamming') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_jaccard_double_random(self): @@ -164,7 +169,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'jaccard') Y2 = cdist(X1, X2, 'test_jaccard') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_jaccard_bool_random(self): @@ -175,7 +181,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'jaccard') Y2 = cdist(X1, X2, 'test_jaccard') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() 
self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_chebychev_random(self): @@ -186,7 +193,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'chebychev') Y2 = cdist(X1, X2, 'test_chebychev') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_minkowski_random_p3d8(self): @@ -197,7 +205,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'minkowski', p=3.8) Y2 = cdist(X1, X2, 'test_minkowski', p=3.8) - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_minkowski_random_p4d6(self): @@ -208,7 +217,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'minkowski', p=4.6) Y2 = cdist(X1, X2, 'test_minkowski', p=4.6) - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_minkowski_random_p1d23(self): @@ -219,7 +229,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'minkowski', p=1.23) Y2 = cdist(X1, X2, 'test_minkowski', p=1.23) - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_seuclidean_random(self): @@ -230,7 +241,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'seuclidean') Y2 = cdist(X1, X2, 'test_seuclidean') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_sqeuclidean_random(self): @@ -241,7 +253,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'sqeuclidean') Y2 = cdist(X1, X2, 'test_sqeuclidean') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_cosine_random(self): @@ -252,7 +265,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'cosine') Y2 = cdist(X1, X2, 'test_cosine') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_correlation_random(self): @@ -263,7 +277,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'correlation') Y2 = cdist(X1, X2, 
'test_correlation') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_mahalanobis_random(self): @@ -274,7 +289,8 @@ X2 = eo['cdist-X2'] Y1 = cdist(X1, X2, 'mahalanobis') Y2 = cdist(X1, X2, 'test_mahalanobis') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_canberra_random(self): @@ -285,7 +301,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'canberra') Y2 = cdist(X1, X2, 'test_canberra') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_braycurtis_random(self): @@ -296,8 +313,9 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'braycurtis') Y2 = cdist(X1, X2, 'test_braycurtis') - print Y1, Y2 - print (Y1-Y2).max() + if verbose > 2: + print Y1, Y2 + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_yule_random(self): @@ -308,7 +326,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'yule') Y2 = cdist(X1, X2, 'test_yule') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_matching_random(self): @@ -319,7 +338,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'matching') Y2 = cdist(X1, X2, 'test_matching') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_kulsinski_random(self): @@ -330,7 +350,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'kulsinski') Y2 = cdist(X1, X2, 'test_kulsinski') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_dice_random(self): @@ -341,7 +362,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'dice') Y2 = cdist(X1, X2, 'test_dice') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_rogerstanimoto_random(self): @@ -352,7 +374,8 @@ X2 = 
eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'rogerstanimoto') Y2 = cdist(X1, X2, 'test_rogerstanimoto') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_russellrao_random(self): @@ -363,7 +386,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'russellrao') Y2 = cdist(X1, X2, 'test_russellrao') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_sokalmichener_random(self): @@ -374,7 +398,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'sokalmichener') Y2 = cdist(X1, X2, 'test_sokalmichener') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_sokalsneath_random(self): @@ -385,7 +410,8 @@ X2 = eo['cdist-X2'] < 0.5 Y1 = cdist(X1, X2, 'sokalsneath') Y2 = cdist(X1, X2, 'test_sokalsneath') - print (Y1-Y2).max() + if verbose > 2: + print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) class TestPdist(TestCase): From scipy-svn at scipy.org Sat Sep 20 23:15:29 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Sat, 20 Sep 2008 22:15:29 -0500 (CDT) Subject: [Scipy-svn] r4740 - trunk Message-ID: <20080921031529.6C39539C1AE@scipy.org> Author: damian.eads Date: 2008-09-20 22:15:27 -0500 (Sat, 20 Sep 2008) New Revision: 4740 Modified: trunk/THANKS.txt Log: Corrected spelling of hierarchical. Modified: trunk/THANKS.txt =================================================================== --- trunk/THANKS.txt 2008-09-20 23:01:08 UTC (rev 4739) +++ trunk/THANKS.txt 2008-09-21 03:15:27 UTC (rev 4740) @@ -30,7 +30,8 @@ Travis Vaught -- initial work on stats module clean up Jeff Whitaker -- Mac OS X support David Cournapeau -- bug-fixes, refactor of fftpack and cluster, numscons build. 
-Damian Eads -- hiearchical clustering +Damian Eads -- hierarchical clustering, dendrogram plotting, + distance functions, vq documentation Testing: From scipy-svn at scipy.org Wed Sep 24 13:17:52 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Wed, 24 Sep 2008 12:17:52 -0500 (CDT) Subject: [Scipy-svn] r4741 - trunk/scipy/stats Message-ID: <20080924171752.B8A0539C260@scipy.org> Author: oliphant Date: 2008-09-24 12:17:51 -0500 (Wed, 24 Sep 2008) New Revision: 4741 Modified: trunk/scipy/stats/_support.py Log: Remove typecode from _support Modified: trunk/scipy/stats/_support.py =================================================================== --- trunk/scipy/stats/_support.py 2008-09-21 03:15:27 UTC (rev 4740) +++ trunk/scipy/stats/_support.py 2008-09-24 17:17:51 UTC (rev 4741) @@ -48,7 +48,7 @@ except TypeError: uniques = np.concatenate([uniques,np.array([item])]) else: # IT MUST BE A 2+D ARRAY - if inarray.typecode() != 'O': # not an Object array + if inarray.dtype.char != 'O': # not an Object array for item in inarray[1:]: if not np.sum(np.alltrue(np.equal(uniques,item),1),axis=0): try: From scipy-svn at scipy.org Fri Sep 26 00:00:47 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:00:47 -0500 (CDT) Subject: [Scipy-svn] r4742 - trunk/scipy/stats Message-ID: <20080926040047.1CEE539C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:00:25 -0500 (Thu, 25 Sep 2008) New Revision: 4742 Modified: trunk/scipy/stats/stats.py Log: Deprecate scipy.stats.var for numpy.var. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-24 17:17:51 UTC (rev 4741) +++ trunk/scipy/stats/stats.py 2008-09-26 04:00:25 UTC (rev 4742) @@ -1185,13 +1185,18 @@ sd = samplestd(instack,axis) return np.where(sd == 0, 0, m/sd) - def var(a, axis=0, bias=False): """ Returns the estimated population variance of the values in the passed array (i.e., N-1). 
Axis can equal None (ravel array first), or an integer (the axis over which to operate). """ + warnings.warn("""\ +scipy.stats.var is deprecated; please update your code to use numpy.var. +Please note that: + - numpy.var axis argument defaults to None, not 0 + - numpy.var has a ddof argument to replace bias in a more general manner. + scipy.stats.var(a, bias=True) can be replaced by scipy.stats.var(x, axis=0, ddof=1).""") a, axis = _chk_asarray(a, axis) mn = np.expand_dims(mean(a,axis),axis) deviations = a - mn From scipy-svn at scipy.org Fri Sep 26 00:01:40 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:01:40 -0500 (CDT) Subject: [Scipy-svn] r4743 - trunk/scipy/stats Message-ID: <20080926040140.58A7839C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:01:16 -0500 (Thu, 25 Sep 2008) New Revision: 4743 Modified: trunk/scipy/stats/stats.py Log: scipy.stats.warn raise a DeprecationWarning. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:00:25 UTC (rev 4742) +++ trunk/scipy/stats/stats.py 2008-09-26 04:01:16 UTC (rev 4743) @@ -1196,7 +1196,8 @@ Please note that: - numpy.var axis argument defaults to None, not 0 - numpy.var has a ddof argument to replace bias in a more general manner. 
- scipy.stats.var(a, bias=True) can be replaced by scipy.stats.var(x, axis=0, ddof=1).""") + scipy.stats.var(a, bias=True) can be replaced by scipy.stats.var(x, +axis=0, ddof=1).""", DeprecationWarning) a, axis = _chk_asarray(a, axis) mn = np.expand_dims(mean(a,axis),axis) deviations = a - mn From scipy-svn at scipy.org Fri Sep 26 00:02:27 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:02:27 -0500 (CDT) Subject: [Scipy-svn] r4744 - trunk/scipy/stats Message-ID: <20080926040227.35EFF39C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:02:08 -0500 (Thu, 25 Sep 2008) New Revision: 4744 Modified: trunk/scipy/stats/stats.py Log: Deprecate scipy.stats.std. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:01:16 UTC (rev 4743) +++ trunk/scipy/stats/stats.py 2008-09-26 04:02:08 UTC (rev 4744) @@ -1214,6 +1214,13 @@ the passed array (i.e., N-1). Axis can equal None (ravel array first), or an integer (the axis over which to operate). """ + warnings.warn("""\ +scipy.stats.std is deprecated; please update your code to use numpy.std. +Please note that: + - numpy.std axis argument defaults to None, not 0 + - numpy.std has a ddof argument to replace bias in a more general manner. + scipy.stats.std(a, bias=True) can be replaced by numpy.std(x, +axis=0, ddof=1).""", DeprecationWarning) return np.sqrt(var(a,axis,bias)) From scipy-svn at scipy.org Fri Sep 26 00:03:10 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:03:10 -0500 (CDT) Subject: [Scipy-svn] r4745 - trunk/scipy/stats Message-ID: <20080926040310.0142A39C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:02:51 -0500 (Thu, 25 Sep 2008) New Revision: 4745 Modified: trunk/scipy/stats/stats.py Log: Fix deprecation warning for stats.var. 
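The `ddof` ("delta degrees of freedom") distinction these deprecation messages keep referring to can be shown without numpy at all: divide the sum of squared deviations by `n - ddof`. `ddof=0` gives the biased population estimate (numpy's default), `ddof=1` the unbiased N-1 estimate. A minimal sketch; the helper name is ours, not scipy's or numpy's.

```python
# Variance with an explicit ddof parameter, as in numpy.var.
def var(xs, ddof=0):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - ddof)

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # mean = 5, sum sq dev = 32
assert var(xs, ddof=0) == 4.0                   # divide by n = 8
assert abs(var(xs, ddof=1) - 32.0 / 7.0) < 1e-12  # divide by n - 1 = 7
```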
Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:02:08 UTC (rev 4744) +++ trunk/scipy/stats/stats.py 2008-09-26 04:02:51 UTC (rev 4745) @@ -1196,7 +1196,7 @@ Please note that: - numpy.var axis argument defaults to None, not 0 - numpy.var has a ddof argument to replace bias in a more general manner. - scipy.stats.var(a, bias=True) can be replaced by scipy.stats.var(x, + scipy.stats.var(a, bias=True) can be replaced by numpy.var(x, axis=0, ddof=1).""", DeprecationWarning) a, axis = _chk_asarray(a, axis) mn = np.expand_dims(mean(a,axis),axis) From scipy-svn at scipy.org Fri Sep 26 00:03:55 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:03:55 -0500 (CDT) Subject: [Scipy-svn] r4746 - trunk/scipy/stats Message-ID: <20080926040355.945AF39C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:03:37 -0500 (Thu, 25 Sep 2008) New Revision: 4746 Modified: trunk/scipy/stats/stats.py Log: Deprecate stats.mean. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:02:51 UTC (rev 4745) +++ trunk/scipy/stats/stats.py 2008-09-26 04:03:37 UTC (rev 4746) @@ -404,6 +404,13 @@ all values in the array if axis=None. The return value will have a floating point dtype even if the input data are integers. """ + warnings.warn("""\ +scipy.stats.mean is deprecated; please update your code to use numpy.mean. +Please note that: + - numpy.mean axis argument defaults to None, not 0 + - numpy.mean has a ddof argument to replace bias in a more general manner. 
+ scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, +axis=0, ddof=1).""", DeprecationWarning) a, axis = _chk_asarray(a, axis) return a.mean(axis) From scipy-svn at scipy.org Fri Sep 26 00:04:40 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:04:40 -0500 (CDT) Subject: [Scipy-svn] r4747 - trunk/scipy/stats Message-ID: <20080926040440.E92FF39C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:04:19 -0500 (Thu, 25 Sep 2008) New Revision: 4747 Modified: trunk/scipy/stats/stats.py Log: Deprecate stats.median. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:03:37 UTC (rev 4746) +++ trunk/scipy/stats/stats.py 2008-09-26 04:04:19 UTC (rev 4747) @@ -482,6 +482,13 @@ The median of each remaining axis, or of all of the values in the array if axis is None. """ + warnings.warn("""\ +scipy.stats.median is deprecated; please update your code to use numpy.median. +Please note that: + - numpy.median axis argument defaults to None, not 0 + - numpy.median has a ddof argument to replace bias in a more general manner. + scipy.stats.median(a, bias=True) can be replaced by numpy.median(x, +axis=0, ddof=1).""", DeprecationWarning) a, axis = _chk_asarray(a, axis) if axis != 0: a = np.rollaxis(a, axis, 0) From scipy-svn at scipy.org Fri Sep 26 00:05:20 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:05:20 -0500 (CDT) Subject: [Scipy-svn] r4748 - trunk/scipy/stats Message-ID: <20080926040520.8B3E539C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:05:03 -0500 (Thu, 25 Sep 2008) New Revision: 4748 Modified: trunk/scipy/stats/stats.py Log: Deprecate stats.cov. 
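The deprecation pattern used throughout these commits — warn with `DeprecationWarning` but still compute the result — can be exercised with the stdlib `warnings` module. A minimal sketch with an invented function name (`old_mean` is illustrative, not scipy's):

```python
import warnings

def old_mean(xs):
    warnings.warn("old_mean is deprecated; use a numpy equivalent",
                  DeprecationWarning, stacklevel=2)
    return sum(xs) / len(xs)

# DeprecationWarning is filtered out by default in many contexts,
# so force it visible while recording.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_mean([1, 2, 3])

assert result == 2.0
assert caught and issubclass(caught[0].category, DeprecationWarning)
```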
Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:04:19 UTC (rev 4747) +++ trunk/scipy/stats/stats.py 2008-09-26 04:05:03 UTC (rev 4748) @@ -1387,6 +1387,11 @@ If rowvar is False, then each row is a variable with observations in the columns. """ + warnings.warn("""\ +scipy.stats.cov is deprecated; please update your code to use numpy.cov. +Please note that: + - numpy.cov axis argument defaults to None, not 0 +""", DeprecationWarning) m = asarray(m) if y is None: y = m From scipy-svn at scipy.org Fri Sep 26 00:06:00 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:06:00 -0500 (CDT) Subject: [Scipy-svn] r4749 - trunk/scipy/stats Message-ID: <20080926040600.6984839C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:05:42 -0500 (Thu, 25 Sep 2008) New Revision: 4749 Modified: trunk/scipy/stats/stats.py Log: Deprecate stats.corrcoef. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:05:03 UTC (rev 4748) +++ trunk/scipy/stats/stats.py 2008-09-26 04:05:42 UTC (rev 4749) @@ -1422,6 +1422,12 @@ If rowvar is True, then each row is a variables with observations in the columns. """ + warnings.warn("""\ +scipy.stats.corrcoef is deprecated; please update your code to use numpy.corrcoef. 
+Please note that: + - numpy.corrcoef rowvar argument defaults to true, not false + - numpy.corrcoef bias argument defaults to 1, not 0 +""", DeprecationWarning) if y is not None: x = np.transpose([x,y]) y = None From scipy-svn at scipy.org Fri Sep 26 00:06:44 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:06:44 -0500 (CDT) Subject: [Scipy-svn] r4750 - trunk/scipy/stats Message-ID: <20080926040644.3720A39C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:06:26 -0500 (Thu, 25 Sep 2008) New Revision: 4750 Modified: trunk/scipy/stats/stats.py Log: Fix cov/corrcoef deprecation. Modified: trunk/scipy/stats/stats.py =================================================================== --- trunk/scipy/stats/stats.py 2008-09-26 04:05:42 UTC (rev 4749) +++ trunk/scipy/stats/stats.py 2008-09-26 04:06:26 UTC (rev 4750) @@ -1390,7 +1390,8 @@ warnings.warn("""\ scipy.stats.cov is deprecated; please update your code to use numpy.cov. Please note that: - - numpy.cov axis argument defaults to None, not 0 + - numpy.cov rowvar argument defaults to true, not false + - numpy.cov bias argument defaults to false, not true """, DeprecationWarning) m = asarray(m) if y is None: @@ -1426,7 +1427,7 @@ scipy.stats.corrcoef is deprecated; please update your code to use numpy.corrcoef. Please note that: - numpy.corrcoef rowvar argument defaults to true, not false - - numpy.corrcoef bias argument defaults to 1, not 0 + - numpy.corrcoef bias argument defaults to false, not true """, DeprecationWarning) if y is not None: x = np.transpose([x,y]) From scipy-svn at scipy.org Fri Sep 26 00:25:35 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Thu, 25 Sep 2008 23:25:35 -0500 (CDT) Subject: [Scipy-svn] r4751 - trunk/scipy Message-ID: <20080926042535.BBC1E39C0F1@scipy.org> Author: cdavid Date: 2008-09-25 23:25:25 -0500 (Thu, 25 Sep 2008) New Revision: 4751 Modified: trunk/scipy/__init__.py Log: Do not modify error handling when importing scipy. 
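The rationale for removing `_num.seterr(all='ignore')` from `scipy/__init__.py` (r4751 below) is that state changed at import time leaks into every caller's process. Without numpy available, a stdlib analogue with the `warnings` machinery makes the same point; `import_side_effect` simulates a module that mutates global filters at top level and is purely illustrative.

```python
import warnings

def import_side_effect():
    # What a module running warnings.simplefilter("ignore") at top
    # level would impose on the whole process.
    warnings.simplefilter("ignore")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import_side_effect()              # simulates importing such a module
    warnings.warn("user-level warning")

# The user's own warning was silently swallowed by the import.
assert caught == []
```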
Modified: trunk/scipy/__init__.py =================================================================== --- trunk/scipy/__init__.py 2008-09-26 04:06:26 UTC (rev 4750) +++ trunk/scipy/__init__.py 2008-09-26 04:25:25 UTC (rev 4751) @@ -28,7 +28,6 @@ from numpy.random import rand, randn from numpy.fft import fft, ifft from numpy.lib.scimath import * -_num.seterr(all='ignore') __all__ += ['oldnumeric']+_num.__all__ From scipy-svn at scipy.org Fri Sep 26 08:54:00 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Fri, 26 Sep 2008 07:54:00 -0500 (CDT) Subject: [Scipy-svn] r4752 - trunk/scipy/stats Message-ID: <20080926125400.3AC1039C05F@scipy.org> Author: oliphant Date: 2008-09-26 07:53:59 -0500 (Fri, 26 Sep 2008) New Revision: 4752 Modified: trunk/scipy/stats/distributions.py Log: Fix distributions to return numpy scalars instead of 0-d arrays. Modified: trunk/scipy/stats/distributions.py =================================================================== --- trunk/scipy/stats/distributions.py 2008-09-26 04:25:25 UTC (rev 4751) +++ trunk/scipy/stats/distributions.py 2008-09-26 12:53:59 UTC (rev 4752) @@ -478,6 +478,8 @@ goodargs = argsreduce(cond, *((x,)+args+(scale,))) scale, goodargs = goodargs[-1], goodargs[:-1] place(output,cond,self._pdf(*goodargs) / scale) + if output.ndim == 0: + return output[()] return output def cdf(self,x,*args,**kwds): @@ -507,6 +509,8 @@ place(output,cond2,1.0) goodargs = argsreduce(cond, *((x,)+args)) place(output,cond,self._cdf(*goodargs)) + if output.ndim == 0: + return output[()] return output def sf(self,x,*args,**kwds): @@ -536,6 +540,8 @@ place(output,cond2,1.0) goodargs = argsreduce(cond, *((x,)+args)) place(output,cond,self._sf(*goodargs)) + if output.ndim == 0: + return output[()] return output def ppf(self,q,*args,**kwds): @@ -565,6 +571,8 @@ goodargs = argsreduce(cond, *((q,)+args+(scale,loc))) scale, loc, goodargs = goodargs[-2], goodargs[-1], goodargs[:-2] place(output,cond,self._ppf(*goodargs)*scale + loc) + if 
output.ndim == 0: + return output[()] return output def isf(self,q,*args,**kwds): @@ -594,6 +602,8 @@ goodargs = argsreduce(cond, *((1.0-q,)+args+(scale,loc))) scale, loc, goodargs = goodargs[-2], goodargs[-1], goodargs[:-2] place(output,cond,self._ppf(*goodargs)*scale + loc) + if output.ndim == 0: + return output[()] return output def stats(self,*args,**kwds): @@ -3536,6 +3546,8 @@ place(output,(1-cond0)*(cond1==cond1),self.badvalue) goodargs = argsreduce(cond, *((k,)+args)) place(output,cond,self._pmf(*goodargs)) + if output.ndim == 0: + return output[()] return output def cdf(self, k, *args, **kwds): @@ -3564,6 +3576,8 @@ place(output,cond2*(cond0==cond0), 1.0) goodargs = argsreduce(cond, *((k,)+args)) place(output,cond,self._cdf(*goodargs)) + if output.ndim == 0: + return output[()] return output def sf(self,k,*args,**kwds): @@ -3592,6 +3606,8 @@ place(output,cond2,1.0) goodargs = argsreduce(cond, *((k,)+args)) place(output,cond,self._sf(*goodargs)) + if output.ndim == 0: + return output[()] return output def ppf(self,q,*args,**kwds): @@ -3620,6 +3636,8 @@ goodargs = argsreduce(cond, *((q,)+args+(loc,))) loc, goodargs = goodargs[-1], goodargs[:-1] place(output,cond,self._ppf(*goodargs) + loc) + if output.ndim == 0: + return output[()] return output def isf(self,q,*args,**kwds): @@ -3649,6 +3667,8 @@ goodargs = argsreduce(cond, *((q,)+args+(loc,))) loc, goodargs = goodargs[-1], goodargs[:-1] place(output,cond,self._ppf(*goodargs) + loc) + if output.ndim == 0: + return output[()] return output def stats(self, *args, **kwds): From scipy-svn at scipy.org Sat Sep 27 17:46:04 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Sat, 27 Sep 2008 16:46:04 -0500 (CDT) Subject: [Scipy-svn] r4753 - trunk/scipy/stats Message-ID: <20080927214604.00B9039C088@scipy.org> Author: oliphant Date: 2008-09-27 16:46:03 -0500 (Sat, 27 Sep 2008) New Revision: 4753 Modified: trunk/scipy/stats/distributions.py Log: Fix error in est_loc_scale. 
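The `est_loc_scale` bug fixed in r4753 below, in miniature: with `moments='mv'` the `stats()` method returns only two values (mean and variance), so unpacking four names raises `ValueError`. `fake_stats` is a stand-in for the real method, built only to reproduce the shape of its return value.

```python
def fake_stats(moments='mv'):
    out = {'m': 1.0, 'v': 2.0, 's': 0.0, 'k': 3.0}
    return tuple(out[ch] for ch in moments)

# The old, broken line: four targets for a two-element tuple.
try:
    mu, mu2, g1, g2 = fake_stats(moments='mv')
except ValueError as exc:
    unpack_error = str(exc)

# The fix: unpack exactly what moments='mv' yields.
mu, mu2 = fake_stats(moments='mv')
assert (mu, mu2) == (1.0, 2.0)
assert 'unpack' in unpack_error
```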
Modified: trunk/scipy/stats/distributions.py =================================================================== --- trunk/scipy/stats/distributions.py 2008-09-26 12:53:59 UTC (rev 4752) +++ trunk/scipy/stats/distributions.py 2008-09-27 21:46:03 UTC (rev 4753) @@ -778,7 +778,7 @@ return optimize.fmin(self.nnlf,x0,args=(ravel(data),),disp=0) def est_loc_scale(self, data, *args): - mu, mu2, g1, g2 = self.stats(*args,**{'moments':'mv'}) + mu, mu2 = self.stats(*args,**{'moments':'mv'}) muhat = st.nanmean(data) mu2hat = st.nanstd(data) Shat = sqrt(mu2hat / mu2) From scipy-svn at scipy.org Mon Sep 29 16:15:37 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Mon, 29 Sep 2008 15:15:37 -0500 (CDT) Subject: [Scipy-svn] r4754 - in trunk/scipy/optimize: . nnls tests Message-ID: <20080929201537.B203C39C0EA@scipy.org> Author: uwe.schmitt Date: 2008-09-29 15:13:24 -0500 (Mon, 29 Sep 2008) New Revision: 4754 Added: trunk/scipy/optimize/nnls.py trunk/scipy/optimize/nnls/ trunk/scipy/optimize/nnls/NNLS.F trunk/scipy/optimize/nnls/nnls.pyf trunk/scipy/optimize/tests/test_nnls.py Modified: trunk/scipy/optimize/SConscript trunk/scipy/optimize/setup.py Log: added nnls, incl. 
test script Modified: trunk/scipy/optimize/SConscript =================================================================== --- trunk/scipy/optimize/SConscript 2008-09-27 21:46:03 UTC (rev 4753) +++ trunk/scipy/optimize/SConscript 2008-09-29 20:13:24 UTC (rev 4754) @@ -73,6 +73,10 @@ src = [pjoin('minpack2', i) for i in ['dcsrch.f', 'dcstep.f', 'minpack2.pyf']] env.NumpyPythonExtension('minpack2', source = src) +# _nnls pyextension +src = [pjoin('nnls', i) for i in ['NNLS.f', 'nnls.pyf']] +env.NumpyPythonExtension('_nnls', source = src) + # moduleTNC pyextension env.NumpyPythonExtension('moduleTNC', source = [pjoin('tnc', i) for i in \ Added: trunk/scipy/optimize/nnls/NNLS.F =================================================================== --- trunk/scipy/optimize/nnls/NNLS.F 2008-09-27 21:46:03 UTC (rev 4753) +++ trunk/scipy/optimize/nnls/NNLS.F 2008-09-29 20:13:24 UTC (rev 4754) @@ -0,0 +1,477 @@ +C SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) +C +C Algorithm NNLS: NONNEGATIVE LEAST SQUARES +C +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 15, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +c +C GIVEN AN M BY N MATRIX, A, AND AN M-VECTOR, B, COMPUTE AN +C N-VECTOR, X, THAT SOLVES THE LEAST SQUARES PROBLEM +C +C A * X = B SUBJECT TO X .GE. 0 +C ------------------------------------------------------------------ +c Subroutine Arguments +c +C A(),MDA,M,N MDA IS THE FIRST DIMENSIONING PARAMETER FOR THE +C ARRAY, A(). ON ENTRY A() CONTAINS THE M BY N +C MATRIX, A. ON EXIT A() CONTAINS +C THE PRODUCT MATRIX, Q*A , WHERE Q IS AN +C M BY M ORTHOGONAL MATRIX GENERATED IMPLICITLY BY +C THIS SUBROUTINE. +C B() ON ENTRY B() CONTAINS THE M-VECTOR, B. ON EXIT B() CON- +C TAINS Q*B. +C X() ON ENTRY X() NEED NOT BE INITIALIZED. ON EXIT X() WILL +C CONTAIN THE SOLUTION VECTOR. 
+C RNORM ON EXIT RNORM CONTAINS THE EUCLIDEAN NORM OF THE +C RESIDUAL VECTOR. +C W() AN N-ARRAY OF WORKING SPACE. ON EXIT W() WILL CONTAIN +C THE DUAL SOLUTION VECTOR. W WILL SATISFY W(I) = 0. +C FOR ALL I IN SET P AND W(I) .LE. 0. FOR ALL I IN SET Z +C ZZ() AN M-ARRAY OF WORKING SPACE. +C INDEX() AN INTEGER WORKING ARRAY OF LENGTH AT LEAST N. +C ON EXIT THE CONTENTS OF THIS ARRAY DEFINE THE SETS +C P AND Z AS FOLLOWS.. +C +C INDEX(1) THRU INDEX(NSETP) = SET P. +C INDEX(IZ1) THRU INDEX(IZ2) = SET Z. +C IZ1 = NSETP + 1 = NPP1 +C IZ2 = N +C MODE THIS IS A SUCCESS-FAILURE FLAG WITH THE FOLLOWING +C MEANINGS. +C 1 THE SOLUTION HAS BEEN COMPUTED SUCCESSFULLY. +C 2 THE DIMENSIONS OF THE PROBLEM ARE BAD. +C EITHER M .LE. 0 OR N .LE. 0. +C 3 ITERATION COUNT EXCEEDED. MORE THAN 3*N ITERATIONS. +C +C ------------------------------------------------------------------ + SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) +C ------------------------------------------------------------------ + integer I, II, IP, ITER, ITMAX, IZ, IZ1, IZ2, IZMAX, J, JJ, JZ, L + integer M, MDA, MODE,N, NPP1, NSETP, RTNKEY +c integer INDEX(N) +c double precision A(MDA,N), B(M), W(N), X(N), ZZ(M) + integer INDEX(*) + double precision A(MDA,*), B(*), W(*), X(*), ZZ(*) + double precision ALPHA, ASAVE, CC, DIFF, DUMMY, FACTOR, RNORM + double precision SM, SS, T, TEMP, TWO, UNORM, UP, WMAX + double precision ZERO, ZTEST + parameter(FACTOR = 0.01d0) + parameter(TWO = 2.0d0, ZERO = 0.0d0) +C ------------------------------------------------------------------ + MODE=1 + IF (M .le. 0 .or. N .le. 0) then + MODE=2 + RETURN + endif + ITER=0 + ITMAX=3*N +C +C INITIALIZE THE ARRAYS INDEX() AND X(). +C + DO 20 I=1,N + X(I)=ZERO + 20 INDEX(I)=I +C + IZ2=N + IZ1=1 + NSETP=0 + NPP1=1 +C ****** MAIN LOOP BEGINS HERE ****** + 30 CONTINUE +C QUIT IF ALL COEFFICIENTS ARE ALREADY IN THE SOLUTION. +C OR IF M COLS OF A HAVE BEEN TRIANGULARIZED. 
+C + IF (IZ1 .GT.IZ2.OR.NSETP.GE.M) GO TO 350 +C +C COMPUTE COMPONENTS OF THE DUAL (NEGATIVE GRADIENT) VECTOR W(). +C + DO 50 IZ=IZ1,IZ2 + J=INDEX(IZ) + SM=ZERO + DO 40 L=NPP1,M + 40 SM=SM+A(L,J)*B(L) + W(J)=SM + 50 continue +C FIND LARGEST POSITIVE W(J). + 60 continue + WMAX=ZERO + DO 70 IZ=IZ1,IZ2 + J=INDEX(IZ) + IF (W(J) .gt. WMAX) then + WMAX=W(J) + IZMAX=IZ + endif + 70 CONTINUE +C +C IF WMAX .LE. 0. GO TO TERMINATION. +C THIS INDICATES SATISFACTION OF THE KUHN-TUCKER CONDITIONS. +C + IF (WMAX .le. ZERO) go to 350 + IZ=IZMAX + J=INDEX(IZ) +C +C THE SIGN OF W(J) IS OK FOR J TO BE MOVED TO SET P. +C BEGIN THE TRANSFORMATION AND CHECK NEW DIAGONAL ELEMENT TO AVOID +C NEAR LINEAR DEPENDENCE. +C + ASAVE=A(NPP1,J) + CALL H12 (1,NPP1,NPP1+1,M,A(1,J),1,UP,DUMMY,1,1,0) + UNORM=ZERO + IF (NSETP .ne. 0) then + DO 90 L=1,NSETP + 90 UNORM=UNORM+A(L,J)**2 + endif + UNORM=sqrt(UNORM) + IF (DIFF(UNORM+ABS(A(NPP1,J))*FACTOR,UNORM) .gt. ZERO) then +C +C COL J IS SUFFICIENTLY INDEPENDENT. COPY B INTO ZZ, UPDATE ZZ +C AND SOLVE FOR ZTEST ( = PROPOSED NEW VALUE FOR X(J) ). +C + DO 120 L=1,M + 120 ZZ(L)=B(L) + CALL H12 (2,NPP1,NPP1+1,M,A(1,J),1,UP,ZZ,1,1,1) + ZTEST=ZZ(NPP1)/A(NPP1,J) +C +C SEE IF ZTEST IS POSITIVE +C + IF (ZTEST .gt. ZERO) go to 140 + endif +C +C REJECT J AS A CANDIDATE TO BE MOVED FROM SET Z TO SET P. +C RESTORE A(NPP1,J), SET W(J)=0., AND LOOP BACK TO TEST DUAL +C COEFFS AGAIN. +C + A(NPP1,J)=ASAVE + W(J)=ZERO + GO TO 60 +C +C THE INDEX J=INDEX(IZ) HAS BEEN SELECTED TO BE MOVED FROM +C SET Z TO SET P. UPDATE B, UPDATE INDICES, APPLY HOUSEHOLDER +C TRANSFORMATIONS TO COLS IN NEW SET Z, ZERO SUBDIAGONAL ELTS IN +C COL J, SET W(J)=0. +C + 140 continue + DO 150 L=1,M + 150 B(L)=ZZ(L) +C + INDEX(IZ)=INDEX(IZ1) + INDEX(IZ1)=J + IZ1=IZ1+1 + NSETP=NPP1 + NPP1=NPP1+1 +C + IF (IZ1 .le. IZ2) then + DO 160 JZ=IZ1,IZ2 + JJ=INDEX(JZ) + CALL H12 (2,NSETP,NPP1,M,A(1,J),1,UP,A(1,JJ),1,MDA,1) + 160 continue + endif +C + IF (NSETP .ne. 
M) then + DO 180 L=NPP1,M + 180 A(L,J)=ZERO + endif +C + W(J)=ZERO +C SOLVE THE TRIANGULAR SYSTEM. +C STORE THE SOLUTION TEMPORARILY IN ZZ(). + RTNKEY = 1 + GO TO 400 + 200 CONTINUE +C +C ****** SECONDARY LOOP BEGINS HERE ****** +C +C ITERATION COUNTER. +C + 210 continue + ITER=ITER+1 + IF (ITER .gt. ITMAX) then + MODE=3 + write (*,'(/a)') ' NNLS quitting on iteration count.' + GO TO 350 + endif +C +C SEE IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE. +C IF NOT COMPUTE ALPHA. +C + ALPHA=TWO + DO 240 IP=1,NSETP + L=INDEX(IP) + IF (ZZ(IP) .le. ZERO) then + T=-X(L)/(ZZ(IP)-X(L)) + IF (ALPHA .gt. T) then + ALPHA=T + JJ=IP + endif + endif + 240 CONTINUE +C +C IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE THEN ALPHA WILL +C STILL = 2. IF SO EXIT FROM SECONDARY LOOP TO MAIN LOOP. +C + IF (ALPHA.EQ.TWO) GO TO 330 +C +C OTHERWISE USE ALPHA WHICH WILL BE BETWEEN 0. AND 1. TO +C INTERPOLATE BETWEEN THE OLD X AND THE NEW ZZ. +C + DO 250 IP=1,NSETP + L=INDEX(IP) + X(L)=X(L)+ALPHA*(ZZ(IP)-X(L)) + 250 continue +C +C MODIFY A AND B AND THE INDEX ARRAYS TO MOVE COEFFICIENT I +C FROM SET P TO SET Z. +C + I=INDEX(JJ) + 260 continue + X(I)=ZERO +C + IF (JJ .ne. NSETP) then + JJ=JJ+1 + DO 280 J=JJ,NSETP + II=INDEX(J) + INDEX(J-1)=II + CALL G1 (A(J-1,II),A(J,II),CC,SS,A(J-1,II)) + A(J,II)=ZERO + DO 270 L=1,N + IF (L.NE.II) then +c +c Apply procedure G2 (CC,SS,A(J-1,L),A(J,L)) +c + TEMP = A(J-1,L) + A(J-1,L) = CC*TEMP + SS*A(J,L) + A(J,L) =-SS*TEMP + CC*A(J,L) + endif + 270 CONTINUE +c +c Apply procedure G2 (CC,SS,B(J-1),B(J)) +c + TEMP = B(J-1) + B(J-1) = CC*TEMP + SS*B(J) + B(J) =-SS*TEMP + CC*B(J) + 280 continue + endif +c + NPP1=NSETP + NSETP=NSETP-1 + IZ1=IZ1-1 + INDEX(IZ1)=I +C +C SEE IF THE REMAINING COEFFS IN SET P ARE FEASIBLE. THEY SHOULD +C BE BECAUSE OF THE WAY ALPHA WAS DETERMINED. +C IF ANY ARE INFEASIBLE IT IS DUE TO ROUND-OFF ERROR. ANY +C THAT ARE NONPOSITIVE WILL BE SET TO ZERO +C AND MOVED FROM SET P TO SET Z. +C + DO 300 JJ=1,NSETP + I=INDEX(JJ) + IF (X(I) .le. 
ZERO) go to 260 + 300 CONTINUE +C +C COPY B( ) INTO ZZ( ). THEN SOLVE AGAIN AND LOOP BACK. +C + DO 310 I=1,M + 310 ZZ(I)=B(I) + RTNKEY = 2 + GO TO 400 + 320 CONTINUE + GO TO 210 +C ****** END OF SECONDARY LOOP ****** +C + 330 continue + DO 340 IP=1,NSETP + I=INDEX(IP) + 340 X(I)=ZZ(IP) +C ALL NEW COEFFS ARE POSITIVE. LOOP BACK TO BEGINNING. + GO TO 30 +C +C ****** END OF MAIN LOOP ****** +C +C COME TO HERE FOR TERMINATION. +C COMPUTE THE NORM OF THE FINAL RESIDUAL VECTOR. +C + 350 continue + SM=ZERO + IF (NPP1 .le. M) then + DO 360 I=NPP1,M + 360 SM=SM+B(I)**2 + else + DO 380 J=1,N + 380 W(J)=ZERO + endif + RNORM=sqrt(SM) + RETURN +C +C THE FOLLOWING BLOCK OF CODE IS USED AS AN INTERNAL SUBROUTINE +C TO SOLVE THE TRIANGULAR SYSTEM, PUTTING THE SOLUTION IN ZZ(). +C + 400 continue + DO 430 L=1,NSETP + IP=NSETP+1-L + IF (L .ne. 1) then + DO 410 II=1,IP + ZZ(II)=ZZ(II)-A(II,JJ)*ZZ(IP+1) + 410 continue + endif + JJ=INDEX(IP) + ZZ(IP)=ZZ(IP)/A(IP,JJ) + 430 continue + go to (200, 320), RTNKEY + END + + + double precision FUNCTION DIFF(X,Y) +c +c Function used in tests that depend on machine precision. +c +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 7, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C + double precision X, Y + DIFF=X-Y + RETURN + END + + +C SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) +C +C CONSTRUCTION AND/OR APPLICATION OF A SINGLE +C HOUSEHOLDER TRANSFORMATION.. Q = I + U*(U**T)/B +C +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 12, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. 
+C ------------------------------------------------------------------ +c Subroutine Arguments +c +C MODE = 1 OR 2 Selects Algorithm H1 to construct and apply a +c Householder transformation, or Algorithm H2 to apply a +c previously constructed transformation. +C LPIVOT IS THE INDEX OF THE PIVOT ELEMENT. +C L1,M IF L1 .LE. M THE TRANSFORMATION WILL BE CONSTRUCTED TO +C ZERO ELEMENTS INDEXED FROM L1 THROUGH M. IF L1 GT. M +C THE SUBROUTINE DOES AN IDENTITY TRANSFORMATION. +C U(),IUE,UP On entry with MODE = 1, U() contains the pivot +c vector. IUE is the storage increment between elements. +c On exit when MODE = 1, U() and UP contain quantities +c defining the vector U of the Householder transformation. +c on entry with MODE = 2, U() and UP should contain +c quantities previously computed with MODE = 1. These will +c not be modified during the entry with MODE = 2. +C C() ON ENTRY with MODE = 1 or 2, C() CONTAINS A MATRIX WHICH +c WILL BE REGARDED AS A SET OF VECTORS TO WHICH THE +c HOUSEHOLDER TRANSFORMATION IS TO BE APPLIED. +c ON EXIT C() CONTAINS THE SET OF TRANSFORMED VECTORS. +C ICE STORAGE INCREMENT BETWEEN ELEMENTS OF VECTORS IN C(). +C ICV STORAGE INCREMENT BETWEEN VECTORS IN C(). +C NCV NUMBER OF VECTORS IN C() TO BE TRANSFORMED. IF NCV .LE. 0 +C NO OPERATIONS WILL BE DONE ON C(). +C ------------------------------------------------------------------ + SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) +C ------------------------------------------------------------------ + integer I, I2, I3, I4, ICE, ICV, INCR, IUE, J + integer L1, LPIVOT, M, MODE, NCV + double precision B, C(*), CL, CLINV, ONE, SM +c double precision U(IUE,M) + double precision U(IUE,*) + double precision UP + parameter(ONE = 1.0d0) +C ------------------------------------------------------------------ + IF (0.GE.LPIVOT.OR.LPIVOT.GE.L1.OR.L1.GT.M) RETURN + CL=abs(U(1,LPIVOT)) + IF (MODE.EQ.2) GO TO 60 +C ****** CONSTRUCT THE TRANSFORMATION. 
****** + DO 10 J=L1,M + 10 CL=MAX(abs(U(1,J)),CL) + IF (CL) 130,130,20 + 20 CLINV=ONE/CL + SM=(U(1,LPIVOT)*CLINV)**2 + DO 30 J=L1,M + 30 SM=SM+(U(1,J)*CLINV)**2 + CL=CL*SQRT(SM) + IF (U(1,LPIVOT)) 50,50,40 + 40 CL=-CL + 50 UP=U(1,LPIVOT)-CL + U(1,LPIVOT)=CL + GO TO 70 +C ****** APPLY THE TRANSFORMATION I+U*(U**T)/B TO C. ****** +C + 60 IF (CL) 130,130,70 + 70 IF (NCV.LE.0) RETURN + B= UP*U(1,LPIVOT) +C B MUST BE NONPOSITIVE HERE. IF B = 0., RETURN. +C + IF (B) 80,130,130 + 80 B=ONE/B + I2=1-ICV+ICE*(LPIVOT-1) + INCR=ICE*(L1-LPIVOT) + DO 120 J=1,NCV + I2=I2+ICV + I3=I2+INCR + I4=I3 + SM=C(I2)*UP + DO 90 I=L1,M + SM=SM+C(I3)*U(1,I) + 90 I3=I3+ICE + IF (SM) 100,120,100 + 100 SM=SM*B + C(I2)=C(I2)+SM*UP + DO 110 I=L1,M + C(I4)=C(I4)+SM*U(1,I) + 110 I4=I4+ICE + 120 CONTINUE + 130 RETURN + END + + + + SUBROUTINE G1 (A,B,CTERM,STERM,SIG) +c +C COMPUTE ORTHOGONAL ROTATION MATRIX.. +c +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 12, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C +C COMPUTE.. MATRIX (C, S) SO THAT (C, S)(A) = (SQRT(A**2+B**2)) +C (-S,C) (-S,C)(B) ( 0 ) +C COMPUTE SIG = SQRT(A**2+B**2) +C SIG IS COMPUTED LAST TO ALLOW FOR THE POSSIBILITY THAT +C SIG MAY BE IN THE SAME LOCATION AS A OR B . +C ------------------------------------------------------------------ + double precision A, B, CTERM, ONE, SIG, STERM, XR, YR, ZERO + parameter(ONE = 1.0d0, ZERO = 0.0d0) +C ------------------------------------------------------------------ + if (abs(A) .gt. abs(B)) then + XR=B/A + YR=sqrt(ONE+XR**2) + CTERM=sign(ONE/YR,A) + STERM=CTERM*XR + SIG=abs(A)*YR + RETURN + endif + + if (B .ne. 
ZERO) then + XR=A/B + YR=sqrt(ONE+XR**2) + STERM=sign(ONE/YR,B) + CTERM=STERM*XR + SIG=abs(B)*YR + RETURN + endif + + SIG=ZERO + CTERM=ZERO + STERM=ONE + RETURN + END Added: trunk/scipy/optimize/nnls/nnls.pyf =================================================================== --- trunk/scipy/optimize/nnls/nnls.pyf 2008-09-27 21:46:03 UTC (rev 4753) +++ trunk/scipy/optimize/nnls/nnls.pyf 2008-09-29 20:13:24 UTC (rev 4754) @@ -0,0 +1,22 @@ +! -*- f90 -*- +! Note: the context of this file is case sensitive. + +python module _nnls ! in + interface ! in :_nnls + subroutine nnls(a,mda,m,n,b,x,rnorm,w,zz,index_bn,mode) ! in :nnls:NNLS.F + double precision dimension(mda,*), intent(copy) :: a + integer optional,check(shape(a,0)==mda),depend(a) :: mda=shape(a,0) + integer :: m + integer :: n + double precision dimension(*), intent(copy) :: b + double precision dimension(n), intent(out) :: x + double precision, intent(out) :: rnorm + double precision dimension(*) :: w + double precision dimension(*) :: zz + integer dimension(*) :: index_bn + integer , intent(out):: mode + end subroutine nnls +end python module _nnls + +! This file was auto-generated with f2py (version:2_5878). +! 
See http://cens.ioc.ee/projects/f2py2e/ Added: trunk/scipy/optimize/nnls.py =================================================================== --- trunk/scipy/optimize/nnls.py 2008-09-27 21:46:03 UTC (rev 4753) +++ trunk/scipy/optimize/nnls.py 2008-09-29 20:13:24 UTC (rev 4754) @@ -0,0 +1,41 @@ +import _nnls +from numpy import asarray_chkfinite, zeros, double + +def nnls(A,b, overwrite_a=0, overwrite_b=0): + """ + Solve || Ax - b ||_2 -> min with x>=0 + + Inputs: + A -- matrix as above + b -- vector as above + + Outputs: + x -- solution vector + rnorm -- residual || Ax-b ||_2 + + + wrapper around NNLS.F code below nnls/ directory + + """ + + A,b = map(asarray_chkfinite, (A,b)) + + if len(A.shape)!=2: + raise ValueError, "expected matrix" + if len(b.shape)!=1: + raise ValueError, "expected vector" + + m,n = A.shape + + if m != b.shape[0]: + raise ValueError, "incompatible dimensions" + + w = zeros((n,), dtype=double) + zz = zeros((m,), dtype=double) + index=zeros((n,), dtype=int) + + x,rnorm,mode = _nnls.nnls(A,m,n,b,w,zz,index) + if mode != 1: raise RuntimeError, "too many iterations" + + return x, rnorm + Modified: trunk/scipy/optimize/setup.py =================================================================== --- trunk/scipy/optimize/setup.py 2008-09-27 21:46:03 UTC (rev 4753) +++ trunk/scipy/optimize/setup.py 2008-09-29 20:13:24 UTC (rev 4754) @@ -41,6 +41,9 @@ sources = ['slsqp.pyf', 'slsqp_optmz.f'] config.add_extension('_slsqp', sources=[join('slsqp', x) for x in sources]) + config.add_extension('_nnls', sources=[join('nnls', x) \ + for x in ["NNLS.f","nnls.pyf"]]) + config.add_data_dir('tests') config.add_data_dir('benchmarks') return config Added: trunk/scipy/optimize/tests/test_nnls.py =================================================================== --- trunk/scipy/optimize/tests/test_nnls.py 2008-09-27 21:46:03 UTC (rev 4753) +++ trunk/scipy/optimize/tests/test_nnls.py 2008-09-29 20:13:24 UTC (rev 4754) @@ -0,0 +1,31 @@ +""" Unit tests for nonlinear 
solvers +Author: Ondrej Certik +May 2007 +""" + +from numpy.testing import * + +from scipy.optimize import nnls +from numpy import arange, dot +from numpy.linalg import norm + + +class TestNNLS(TestCase): + """ Test case for a simple constrained entropy maximization problem + (the machine translation example of Berger et al in + Computational Linguistics, vol 22, num 1, pp 39--72, 1996.) + """ + + def test_nnls(self): + a=arange(25.0).reshape(-1,5) + x=arange(5.0) + y=dot(a,x) + x, res= nnls.nnls(a,y) + assert res<1e-7 + assert norm(dot(a,x)-y)<1e-7 + +if __name__ == "__main__": + run_module_suite() + + + From scipy-svn at scipy.org Tue Sep 30 02:45:55 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 30 Sep 2008 01:45:55 -0500 (CDT) Subject: [Scipy-svn] r4755 - trunk/scipy/optimize Message-ID: <20080930064555.ADC3539C628@scipy.org> Author: uwe.schmitt Date: 2008-09-30 01:45:52 -0500 (Tue, 30 Sep 2008) New Revision: 4755 Modified: trunk/scipy/optimize/nnls.py Log: removed obsolete default parameters from nnls() Modified: trunk/scipy/optimize/nnls.py =================================================================== --- trunk/scipy/optimize/nnls.py 2008-09-29 20:13:24 UTC (rev 4754) +++ trunk/scipy/optimize/nnls.py 2008-09-30 06:45:52 UTC (rev 4755) @@ -1,7 +1,7 @@ import _nnls from numpy import asarray_chkfinite, zeros, double -def nnls(A,b, overwrite_a=0, overwrite_b=0): +def nnls(A,b): """ Solve || Ax - b ||_2 -> min with x>=0 From scipy-svn at scipy.org Tue Sep 30 02:48:20 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 30 Sep 2008 01:48:20 -0500 (CDT) Subject: [Scipy-svn] r4756 - trunk/scipy/optimize Message-ID: <20080930064820.7E12739C594@scipy.org> Author: uwe.schmitt Date: 2008-09-30 01:48:15 -0500 (Tue, 30 Sep 2008) New Revision: 4756 Modified: trunk/scipy/optimize/SConscript trunk/scipy/optimize/setup.py Log: removed NNLS type from setup.py and Sconscript Modified: trunk/scipy/optimize/SConscript 
=================================================================== --- trunk/scipy/optimize/SConscript 2008-09-30 06:45:52 UTC (rev 4755) +++ trunk/scipy/optimize/SConscript 2008-09-30 06:48:15 UTC (rev 4756) @@ -74,7 +74,7 @@ env.NumpyPythonExtension('minpack2', source = src) # _nnls pyextension -src = [pjoin('nnls', i) for i in ['NNLS.f', 'nnls.pyf']] +src = [pjoin('nnls', i) for i in ['NNLS.F', 'nnls.pyf']] env.NumpyPythonExtension('_nnls', source = src) # moduleTNC pyextension Modified: trunk/scipy/optimize/setup.py =================================================================== --- trunk/scipy/optimize/setup.py 2008-09-30 06:45:52 UTC (rev 4755) +++ trunk/scipy/optimize/setup.py 2008-09-30 06:48:15 UTC (rev 4756) @@ -42,7 +42,7 @@ config.add_extension('_slsqp', sources=[join('slsqp', x) for x in sources]) config.add_extension('_nnls', sources=[join('nnls', x) \ - for x in ["NNLS.f","nnls.pyf"]]) + for x in ["NNLS.F","nnls.pyf"]]) config.add_data_dir('tests') config.add_data_dir('benchmarks') From scipy-svn at scipy.org Tue Sep 30 03:00:13 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 30 Sep 2008 02:00:13 -0500 (CDT) Subject: [Scipy-svn] r4757 - in trunk/scipy/optimize: . 
nnls Message-ID: <20080930070013.4E53839C594@scipy.org> Author: uwe.schmitt Date: 2008-09-30 02:00:07 -0500 (Tue, 30 Sep 2008) New Revision: 4757 Added: trunk/scipy/optimize/nnls/nnls.f Removed: trunk/scipy/optimize/nnls/NNLS.F Modified: trunk/scipy/optimize/SConscript trunk/scipy/optimize/setup.py Log: renamed NNLS.F to nnls.f Modified: trunk/scipy/optimize/SConscript =================================================================== --- trunk/scipy/optimize/SConscript 2008-09-30 06:48:15 UTC (rev 4756) +++ trunk/scipy/optimize/SConscript 2008-09-30 07:00:07 UTC (rev 4757) @@ -74,7 +74,7 @@ env.NumpyPythonExtension('minpack2', source = src) # _nnls pyextension -src = [pjoin('nnls', i) for i in ['NNLS.F', 'nnls.pyf']] +src = [pjoin('nnls', i) for i in ['nnls.f', 'nnls.pyf']] env.NumpyPythonExtension('_nnls', source = src) # moduleTNC pyextension Deleted: trunk/scipy/optimize/nnls/NNLS.F =================================================================== --- trunk/scipy/optimize/nnls/NNLS.F 2008-09-30 06:48:15 UTC (rev 4756) +++ trunk/scipy/optimize/nnls/NNLS.F 2008-09-30 07:00:07 UTC (rev 4757) @@ -1,477 +0,0 @@ -C SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) -C -C Algorithm NNLS: NONNEGATIVE LEAST SQUARES -C -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 15, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -c -C GIVEN AN M BY N MATRIX, A, AND AN M-VECTOR, B, COMPUTE AN -C N-VECTOR, X, THAT SOLVES THE LEAST SQUARES PROBLEM -C -C A * X = B SUBJECT TO X .GE. 0 -C ------------------------------------------------------------------ -c Subroutine Arguments -c -C A(),MDA,M,N MDA IS THE FIRST DIMENSIONING PARAMETER FOR THE -C ARRAY, A(). ON ENTRY A() CONTAINS THE M BY N -C MATRIX, A. 
ON EXIT A() CONTAINS -C THE PRODUCT MATRIX, Q*A , WHERE Q IS AN -C M BY M ORTHOGONAL MATRIX GENERATED IMPLICITLY BY -C THIS SUBROUTINE. -C B() ON ENTRY B() CONTAINS THE M-VECTOR, B. ON EXIT B() CON- -C TAINS Q*B. -C X() ON ENTRY X() NEED NOT BE INITIALIZED. ON EXIT X() WILL -C CONTAIN THE SOLUTION VECTOR. -C RNORM ON EXIT RNORM CONTAINS THE EUCLIDEAN NORM OF THE -C RESIDUAL VECTOR. -C W() AN N-ARRAY OF WORKING SPACE. ON EXIT W() WILL CONTAIN -C THE DUAL SOLUTION VECTOR. W WILL SATISFY W(I) = 0. -C FOR ALL I IN SET P AND W(I) .LE. 0. FOR ALL I IN SET Z -C ZZ() AN M-ARRAY OF WORKING SPACE. -C INDEX() AN INTEGER WORKING ARRAY OF LENGTH AT LEAST N. -C ON EXIT THE CONTENTS OF THIS ARRAY DEFINE THE SETS -C P AND Z AS FOLLOWS.. -C -C INDEX(1) THRU INDEX(NSETP) = SET P. -C INDEX(IZ1) THRU INDEX(IZ2) = SET Z. -C IZ1 = NSETP + 1 = NPP1 -C IZ2 = N -C MODE THIS IS A SUCCESS-FAILURE FLAG WITH THE FOLLOWING -C MEANINGS. -C 1 THE SOLUTION HAS BEEN COMPUTED SUCCESSFULLY. -C 2 THE DIMENSIONS OF THE PROBLEM ARE BAD. -C EITHER M .LE. 0 OR N .LE. 0. -C 3 ITERATION COUNT EXCEEDED. MORE THAN 3*N ITERATIONS. -C -C ------------------------------------------------------------------ - SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) -C ------------------------------------------------------------------ - integer I, II, IP, ITER, ITMAX, IZ, IZ1, IZ2, IZMAX, J, JJ, JZ, L - integer M, MDA, MODE,N, NPP1, NSETP, RTNKEY -c integer INDEX(N) -c double precision A(MDA,N), B(M), W(N), X(N), ZZ(M) - integer INDEX(*) - double precision A(MDA,*), B(*), W(*), X(*), ZZ(*) - double precision ALPHA, ASAVE, CC, DIFF, DUMMY, FACTOR, RNORM - double precision SM, SS, T, TEMP, TWO, UNORM, UP, WMAX - double precision ZERO, ZTEST - parameter(FACTOR = 0.01d0) - parameter(TWO = 2.0d0, ZERO = 0.0d0) -C ------------------------------------------------------------------ - MODE=1 - IF (M .le. 0 .or. N .le. 0) then - MODE=2 - RETURN - endif - ITER=0 - ITMAX=3*N -C -C INITIALIZE THE ARRAYS INDEX() AND X(). 
-C - DO 20 I=1,N - X(I)=ZERO - 20 INDEX(I)=I -C - IZ2=N - IZ1=1 - NSETP=0 - NPP1=1 -C ****** MAIN LOOP BEGINS HERE ****** - 30 CONTINUE -C QUIT IF ALL COEFFICIENTS ARE ALREADY IN THE SOLUTION. -C OR IF M COLS OF A HAVE BEEN TRIANGULARIZED. -C - IF (IZ1 .GT.IZ2.OR.NSETP.GE.M) GO TO 350 -C -C COMPUTE COMPONENTS OF THE DUAL (NEGATIVE GRADIENT) VECTOR W(). -C - DO 50 IZ=IZ1,IZ2 - J=INDEX(IZ) - SM=ZERO - DO 40 L=NPP1,M - 40 SM=SM+A(L,J)*B(L) - W(J)=SM - 50 continue -C FIND LARGEST POSITIVE W(J). - 60 continue - WMAX=ZERO - DO 70 IZ=IZ1,IZ2 - J=INDEX(IZ) - IF (W(J) .gt. WMAX) then - WMAX=W(J) - IZMAX=IZ - endif - 70 CONTINUE -C -C IF WMAX .LE. 0. GO TO TERMINATION. -C THIS INDICATES SATISFACTION OF THE KUHN-TUCKER CONDITIONS. -C - IF (WMAX .le. ZERO) go to 350 - IZ=IZMAX - J=INDEX(IZ) -C -C THE SIGN OF W(J) IS OK FOR J TO BE MOVED TO SET P. -C BEGIN THE TRANSFORMATION AND CHECK NEW DIAGONAL ELEMENT TO AVOID -C NEAR LINEAR DEPENDENCE. -C - ASAVE=A(NPP1,J) - CALL H12 (1,NPP1,NPP1+1,M,A(1,J),1,UP,DUMMY,1,1,0) - UNORM=ZERO - IF (NSETP .ne. 0) then - DO 90 L=1,NSETP - 90 UNORM=UNORM+A(L,J)**2 - endif - UNORM=sqrt(UNORM) - IF (DIFF(UNORM+ABS(A(NPP1,J))*FACTOR,UNORM) .gt. ZERO) then -C -C COL J IS SUFFICIENTLY INDEPENDENT. COPY B INTO ZZ, UPDATE ZZ -C AND SOLVE FOR ZTEST ( = PROPOSED NEW VALUE FOR X(J) ). -C - DO 120 L=1,M - 120 ZZ(L)=B(L) - CALL H12 (2,NPP1,NPP1+1,M,A(1,J),1,UP,ZZ,1,1,1) - ZTEST=ZZ(NPP1)/A(NPP1,J) -C -C SEE IF ZTEST IS POSITIVE -C - IF (ZTEST .gt. ZERO) go to 140 - endif -C -C REJECT J AS A CANDIDATE TO BE MOVED FROM SET Z TO SET P. -C RESTORE A(NPP1,J), SET W(J)=0., AND LOOP BACK TO TEST DUAL -C COEFFS AGAIN. -C - A(NPP1,J)=ASAVE - W(J)=ZERO - GO TO 60 -C -C THE INDEX J=INDEX(IZ) HAS BEEN SELECTED TO BE MOVED FROM -C SET Z TO SET P. UPDATE B, UPDATE INDICES, APPLY HOUSEHOLDER -C TRANSFORMATIONS TO COLS IN NEW SET Z, ZERO SUBDIAGONAL ELTS IN -C COL J, SET W(J)=0. 
-C - 140 continue - DO 150 L=1,M - 150 B(L)=ZZ(L) -C - INDEX(IZ)=INDEX(IZ1) - INDEX(IZ1)=J - IZ1=IZ1+1 - NSETP=NPP1 - NPP1=NPP1+1 -C - IF (IZ1 .le. IZ2) then - DO 160 JZ=IZ1,IZ2 - JJ=INDEX(JZ) - CALL H12 (2,NSETP,NPP1,M,A(1,J),1,UP,A(1,JJ),1,MDA,1) - 160 continue - endif -C - IF (NSETP .ne. M) then - DO 180 L=NPP1,M - 180 A(L,J)=ZERO - endif -C - W(J)=ZERO -C SOLVE THE TRIANGULAR SYSTEM. -C STORE THE SOLUTION TEMPORARILY IN ZZ(). - RTNKEY = 1 - GO TO 400 - 200 CONTINUE -C -C ****** SECONDARY LOOP BEGINS HERE ****** -C -C ITERATION COUNTER. -C - 210 continue - ITER=ITER+1 - IF (ITER .gt. ITMAX) then - MODE=3 - write (*,'(/a)') ' NNLS quitting on iteration count.' - GO TO 350 - endif -C -C SEE IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE. -C IF NOT COMPUTE ALPHA. -C - ALPHA=TWO - DO 240 IP=1,NSETP - L=INDEX(IP) - IF (ZZ(IP) .le. ZERO) then - T=-X(L)/(ZZ(IP)-X(L)) - IF (ALPHA .gt. T) then - ALPHA=T - JJ=IP - endif - endif - 240 CONTINUE -C -C IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE THEN ALPHA WILL -C STILL = 2. IF SO EXIT FROM SECONDARY LOOP TO MAIN LOOP. -C - IF (ALPHA.EQ.TWO) GO TO 330 -C -C OTHERWISE USE ALPHA WHICH WILL BE BETWEEN 0. AND 1. TO -C INTERPOLATE BETWEEN THE OLD X AND THE NEW ZZ. -C - DO 250 IP=1,NSETP - L=INDEX(IP) - X(L)=X(L)+ALPHA*(ZZ(IP)-X(L)) - 250 continue -C -C MODIFY A AND B AND THE INDEX ARRAYS TO MOVE COEFFICIENT I -C FROM SET P TO SET Z. -C - I=INDEX(JJ) - 260 continue - X(I)=ZERO -C - IF (JJ .ne. 
NSETP) then - JJ=JJ+1 - DO 280 J=JJ,NSETP - II=INDEX(J) - INDEX(J-1)=II - CALL G1 (A(J-1,II),A(J,II),CC,SS,A(J-1,II)) - A(J,II)=ZERO - DO 270 L=1,N - IF (L.NE.II) then -c -c Apply procedure G2 (CC,SS,A(J-1,L),A(J,L)) -c - TEMP = A(J-1,L) - A(J-1,L) = CC*TEMP + SS*A(J,L) - A(J,L) =-SS*TEMP + CC*A(J,L) - endif - 270 CONTINUE -c -c Apply procedure G2 (CC,SS,B(J-1),B(J)) -c - TEMP = B(J-1) - B(J-1) = CC*TEMP + SS*B(J) - B(J) =-SS*TEMP + CC*B(J) - 280 continue - endif -c - NPP1=NSETP - NSETP=NSETP-1 - IZ1=IZ1-1 - INDEX(IZ1)=I -C -C SEE IF THE REMAINING COEFFS IN SET P ARE FEASIBLE. THEY SHOULD -C BE BECAUSE OF THE WAY ALPHA WAS DETERMINED. -C IF ANY ARE INFEASIBLE IT IS DUE TO ROUND-OFF ERROR. ANY -C THAT ARE NONPOSITIVE WILL BE SET TO ZERO -C AND MOVED FROM SET P TO SET Z. -C - DO 300 JJ=1,NSETP - I=INDEX(JJ) - IF (X(I) .le. ZERO) go to 260 - 300 CONTINUE -C -C COPY B( ) INTO ZZ( ). THEN SOLVE AGAIN AND LOOP BACK. -C - DO 310 I=1,M - 310 ZZ(I)=B(I) - RTNKEY = 2 - GO TO 400 - 320 CONTINUE - GO TO 210 -C ****** END OF SECONDARY LOOP ****** -C - 330 continue - DO 340 IP=1,NSETP - I=INDEX(IP) - 340 X(I)=ZZ(IP) -C ALL NEW COEFFS ARE POSITIVE. LOOP BACK TO BEGINNING. - GO TO 30 -C -C ****** END OF MAIN LOOP ****** -C -C COME TO HERE FOR TERMINATION. -C COMPUTE THE NORM OF THE FINAL RESIDUAL VECTOR. -C - 350 continue - SM=ZERO - IF (NPP1 .le. M) then - DO 360 I=NPP1,M - 360 SM=SM+B(I)**2 - else - DO 380 J=1,N - 380 W(J)=ZERO - endif - RNORM=sqrt(SM) - RETURN -C -C THE FOLLOWING BLOCK OF CODE IS USED AS AN INTERNAL SUBROUTINE -C TO SOLVE THE TRIANGULAR SYSTEM, PUTTING THE SOLUTION IN ZZ(). -C - 400 continue - DO 430 L=1,NSETP - IP=NSETP+1-L - IF (L .ne. 1) then - DO 410 II=1,IP - ZZ(II)=ZZ(II)-A(II,JJ)*ZZ(IP+1) - 410 continue - endif - JJ=INDEX(IP) - ZZ(IP)=ZZ(IP)/A(IP,JJ) - 430 continue - go to (200, 320), RTNKEY - END - - - double precision FUNCTION DIFF(X,Y) -c -c Function used in tests that depend on machine precision. 
-c -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 7, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -C - double precision X, Y - DIFF=X-Y - RETURN - END - - -C SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) -C -C CONSTRUCTION AND/OR APPLICATION OF A SINGLE -C HOUSEHOLDER TRANSFORMATION.. Q = I + U*(U**T)/B -C -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 12, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -C ------------------------------------------------------------------ -c Subroutine Arguments -c -C MODE = 1 OR 2 Selects Algorithm H1 to construct and apply a -c Householder transformation, or Algorithm H2 to apply a -c previously constructed transformation. -C LPIVOT IS THE INDEX OF THE PIVOT ELEMENT. -C L1,M IF L1 .LE. M THE TRANSFORMATION WILL BE CONSTRUCTED TO -C ZERO ELEMENTS INDEXED FROM L1 THROUGH M. IF L1 GT. M -C THE SUBROUTINE DOES AN IDENTITY TRANSFORMATION. -C U(),IUE,UP On entry with MODE = 1, U() contains the pivot -c vector. IUE is the storage increment between elements. -c On exit when MODE = 1, U() and UP contain quantities -c defining the vector U of the Householder transformation. -c on entry with MODE = 2, U() and UP should contain -c quantities previously computed with MODE = 1. These will -c not be modified during the entry with MODE = 2. -C C() ON ENTRY with MODE = 1 or 2, C() CONTAINS A MATRIX WHICH -c WILL BE REGARDED AS A SET OF VECTORS TO WHICH THE -c HOUSEHOLDER TRANSFORMATION IS TO BE APPLIED. -c ON EXIT C() CONTAINS THE SET OF TRANSFORMED VECTORS. -C ICE STORAGE INCREMENT BETWEEN ELEMENTS OF VECTORS IN C(). 
-C ICV STORAGE INCREMENT BETWEEN VECTORS IN C(). -C NCV NUMBER OF VECTORS IN C() TO BE TRANSFORMED. IF NCV .LE. 0 -C NO OPERATIONS WILL BE DONE ON C(). -C ------------------------------------------------------------------ - SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) -C ------------------------------------------------------------------ - integer I, I2, I3, I4, ICE, ICV, INCR, IUE, J - integer L1, LPIVOT, M, MODE, NCV - double precision B, C(*), CL, CLINV, ONE, SM -c double precision U(IUE,M) - double precision U(IUE,*) - double precision UP - parameter(ONE = 1.0d0) -C ------------------------------------------------------------------ - IF (0.GE.LPIVOT.OR.LPIVOT.GE.L1.OR.L1.GT.M) RETURN - CL=abs(U(1,LPIVOT)) - IF (MODE.EQ.2) GO TO 60 -C ****** CONSTRUCT THE TRANSFORMATION. ****** - DO 10 J=L1,M - 10 CL=MAX(abs(U(1,J)),CL) - IF (CL) 130,130,20 - 20 CLINV=ONE/CL - SM=(U(1,LPIVOT)*CLINV)**2 - DO 30 J=L1,M - 30 SM=SM+(U(1,J)*CLINV)**2 - CL=CL*SQRT(SM) - IF (U(1,LPIVOT)) 50,50,40 - 40 CL=-CL - 50 UP=U(1,LPIVOT)-CL - U(1,LPIVOT)=CL - GO TO 70 -C ****** APPLY THE TRANSFORMATION I+U*(U**T)/B TO C. ****** -C - 60 IF (CL) 130,130,70 - 70 IF (NCV.LE.0) RETURN - B= UP*U(1,LPIVOT) -C B MUST BE NONPOSITIVE HERE. IF B = 0., RETURN. -C - IF (B) 80,130,130 - 80 B=ONE/B - I2=1-ICV+ICE*(LPIVOT-1) - INCR=ICE*(L1-LPIVOT) - DO 120 J=1,NCV - I2=I2+ICV - I3=I2+INCR - I4=I3 - SM=C(I2)*UP - DO 90 I=L1,M - SM=SM+C(I3)*U(1,I) - 90 I3=I3+ICE - IF (SM) 100,120,100 - 100 SM=SM*B - C(I2)=C(I2)+SM*UP - DO 110 I=L1,M - C(I4)=C(I4)+SM*U(1,I) - 110 I4=I4+ICE - 120 CONTINUE - 130 RETURN - END - - - - SUBROUTINE G1 (A,B,CTERM,STERM,SIG) -c -C COMPUTE ORTHOGONAL ROTATION MATRIX.. -c -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 12, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -C -C COMPUTE.. 
MATRIX (C, S) SO THAT (C, S)(A) = (SQRT(A**2+B**2)) -C (-S,C) (-S,C)(B) ( 0 ) -C COMPUTE SIG = SQRT(A**2+B**2) -C SIG IS COMPUTED LAST TO ALLOW FOR THE POSSIBILITY THAT -C SIG MAY BE IN THE SAME LOCATION AS A OR B . -C ------------------------------------------------------------------ - double precision A, B, CTERM, ONE, SIG, STERM, XR, YR, ZERO - parameter(ONE = 1.0d0, ZERO = 0.0d0) -C ------------------------------------------------------------------ - if (abs(A) .gt. abs(B)) then - XR=B/A - YR=sqrt(ONE+XR**2) - CTERM=sign(ONE/YR,A) - STERM=CTERM*XR - SIG=abs(A)*YR - RETURN - endif - - if (B .ne. ZERO) then - XR=A/B - YR=sqrt(ONE+XR**2) - STERM=sign(ONE/YR,B) - CTERM=STERM*XR - SIG=abs(B)*YR - RETURN - endif - - SIG=ZERO - CTERM=ZERO - STERM=ONE - RETURN - END Added: trunk/scipy/optimize/nnls/nnls.f =================================================================== --- trunk/scipy/optimize/nnls/nnls.f 2008-09-30 06:48:15 UTC (rev 4756) +++ trunk/scipy/optimize/nnls/nnls.f 2008-09-30 07:00:07 UTC (rev 4757) @@ -0,0 +1,477 @@ +C SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) +C +C Algorithm NNLS: NONNEGATIVE LEAST SQUARES +C +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 15, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +c +C GIVEN AN M BY N MATRIX, A, AND AN M-VECTOR, B, COMPUTE AN +C N-VECTOR, X, THAT SOLVES THE LEAST SQUARES PROBLEM +C +C A * X = B SUBJECT TO X .GE. 0 +C ------------------------------------------------------------------ +c Subroutine Arguments +c +C A(),MDA,M,N MDA IS THE FIRST DIMENSIONING PARAMETER FOR THE +C ARRAY, A(). ON ENTRY A() CONTAINS THE M BY N +C MATRIX, A. ON EXIT A() CONTAINS +C THE PRODUCT MATRIX, Q*A , WHERE Q IS AN +C M BY M ORTHOGONAL MATRIX GENERATED IMPLICITLY BY +C THIS SUBROUTINE. +C B() ON ENTRY B() CONTAINS THE M-VECTOR, B. 
ON EXIT B() CON- +C TAINS Q*B. +C X() ON ENTRY X() NEED NOT BE INITIALIZED. ON EXIT X() WILL +C CONTAIN THE SOLUTION VECTOR. +C RNORM ON EXIT RNORM CONTAINS THE EUCLIDEAN NORM OF THE +C RESIDUAL VECTOR. +C W() AN N-ARRAY OF WORKING SPACE. ON EXIT W() WILL CONTAIN +C THE DUAL SOLUTION VECTOR. W WILL SATISFY W(I) = 0. +C FOR ALL I IN SET P AND W(I) .LE. 0. FOR ALL I IN SET Z +C ZZ() AN M-ARRAY OF WORKING SPACE. +C INDEX() AN INTEGER WORKING ARRAY OF LENGTH AT LEAST N. +C ON EXIT THE CONTENTS OF THIS ARRAY DEFINE THE SETS +C P AND Z AS FOLLOWS.. +C +C INDEX(1) THRU INDEX(NSETP) = SET P. +C INDEX(IZ1) THRU INDEX(IZ2) = SET Z. +C IZ1 = NSETP + 1 = NPP1 +C IZ2 = N +C MODE THIS IS A SUCCESS-FAILURE FLAG WITH THE FOLLOWING +C MEANINGS. +C 1 THE SOLUTION HAS BEEN COMPUTED SUCCESSFULLY. +C 2 THE DIMENSIONS OF THE PROBLEM ARE BAD. +C EITHER M .LE. 0 OR N .LE. 0. +C 3 ITERATION COUNT EXCEEDED. MORE THAN 3*N ITERATIONS. +C +C ------------------------------------------------------------------ + SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) +C ------------------------------------------------------------------ + integer I, II, IP, ITER, ITMAX, IZ, IZ1, IZ2, IZMAX, J, JJ, JZ, L + integer M, MDA, MODE,N, NPP1, NSETP, RTNKEY +c integer INDEX(N) +c double precision A(MDA,N), B(M), W(N), X(N), ZZ(M) + integer INDEX(*) + double precision A(MDA,*), B(*), W(*), X(*), ZZ(*) + double precision ALPHA, ASAVE, CC, DIFF, DUMMY, FACTOR, RNORM + double precision SM, SS, T, TEMP, TWO, UNORM, UP, WMAX + double precision ZERO, ZTEST + parameter(FACTOR = 0.01d0) + parameter(TWO = 2.0d0, ZERO = 0.0d0) +C ------------------------------------------------------------------ + MODE=1 + IF (M .le. 0 .or. N .le. 0) then + MODE=2 + RETURN + endif + ITER=0 + ITMAX=3*N +C +C INITIALIZE THE ARRAYS INDEX() AND X(). +C + DO 20 I=1,N + X(I)=ZERO + 20 INDEX(I)=I +C + IZ2=N + IZ1=1 + NSETP=0 + NPP1=1 +C ****** MAIN LOOP BEGINS HERE ****** + 30 CONTINUE +C QUIT IF ALL COEFFICIENTS ARE ALREADY IN THE SOLUTION. 
+C OR IF M COLS OF A HAVE BEEN TRIANGULARIZED. +C + IF (IZ1 .GT.IZ2.OR.NSETP.GE.M) GO TO 350 +C +C COMPUTE COMPONENTS OF THE DUAL (NEGATIVE GRADIENT) VECTOR W(). +C + DO 50 IZ=IZ1,IZ2 + J=INDEX(IZ) + SM=ZERO + DO 40 L=NPP1,M + 40 SM=SM+A(L,J)*B(L) + W(J)=SM + 50 continue +C FIND LARGEST POSITIVE W(J). + 60 continue + WMAX=ZERO + DO 70 IZ=IZ1,IZ2 + J=INDEX(IZ) + IF (W(J) .gt. WMAX) then + WMAX=W(J) + IZMAX=IZ + endif + 70 CONTINUE +C +C IF WMAX .LE. 0. GO TO TERMINATION. +C THIS INDICATES SATISFACTION OF THE KUHN-TUCKER CONDITIONS. +C + IF (WMAX .le. ZERO) go to 350 + IZ=IZMAX + J=INDEX(IZ) +C +C THE SIGN OF W(J) IS OK FOR J TO BE MOVED TO SET P. +C BEGIN THE TRANSFORMATION AND CHECK NEW DIAGONAL ELEMENT TO AVOID +C NEAR LINEAR DEPENDENCE. +C + ASAVE=A(NPP1,J) + CALL H12 (1,NPP1,NPP1+1,M,A(1,J),1,UP,DUMMY,1,1,0) + UNORM=ZERO + IF (NSETP .ne. 0) then + DO 90 L=1,NSETP + 90 UNORM=UNORM+A(L,J)**2 + endif + UNORM=sqrt(UNORM) + IF (DIFF(UNORM+ABS(A(NPP1,J))*FACTOR,UNORM) .gt. ZERO) then +C +C COL J IS SUFFICIENTLY INDEPENDENT. COPY B INTO ZZ, UPDATE ZZ +C AND SOLVE FOR ZTEST ( = PROPOSED NEW VALUE FOR X(J) ). +C + DO 120 L=1,M + 120 ZZ(L)=B(L) + CALL H12 (2,NPP1,NPP1+1,M,A(1,J),1,UP,ZZ,1,1,1) + ZTEST=ZZ(NPP1)/A(NPP1,J) +C +C SEE IF ZTEST IS POSITIVE +C + IF (ZTEST .gt. ZERO) go to 140 + endif +C +C REJECT J AS A CANDIDATE TO BE MOVED FROM SET Z TO SET P. +C RESTORE A(NPP1,J), SET W(J)=0., AND LOOP BACK TO TEST DUAL +C COEFFS AGAIN. +C + A(NPP1,J)=ASAVE + W(J)=ZERO + GO TO 60 +C +C THE INDEX J=INDEX(IZ) HAS BEEN SELECTED TO BE MOVED FROM +C SET Z TO SET P. UPDATE B, UPDATE INDICES, APPLY HOUSEHOLDER +C TRANSFORMATIONS TO COLS IN NEW SET Z, ZERO SUBDIAGONAL ELTS IN +C COL J, SET W(J)=0. +C + 140 continue + DO 150 L=1,M + 150 B(L)=ZZ(L) +C + INDEX(IZ)=INDEX(IZ1) + INDEX(IZ1)=J + IZ1=IZ1+1 + NSETP=NPP1 + NPP1=NPP1+1 +C + IF (IZ1 .le. IZ2) then + DO 160 JZ=IZ1,IZ2 + JJ=INDEX(JZ) + CALL H12 (2,NSETP,NPP1,M,A(1,J),1,UP,A(1,JJ),1,MDA,1) + 160 continue + endif +C + IF (NSETP .ne. 
M) then + DO 180 L=NPP1,M + 180 A(L,J)=ZERO + endif +C + W(J)=ZERO +C SOLVE THE TRIANGULAR SYSTEM. +C STORE THE SOLUTION TEMPORARILY IN ZZ(). + RTNKEY = 1 + GO TO 400 + 200 CONTINUE +C +C ****** SECONDARY LOOP BEGINS HERE ****** +C +C ITERATION COUNTER. +C + 210 continue + ITER=ITER+1 + IF (ITER .gt. ITMAX) then + MODE=3 + write (*,'(/a)') ' NNLS quitting on iteration count.' + GO TO 350 + endif +C +C SEE IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE. +C IF NOT COMPUTE ALPHA. +C + ALPHA=TWO + DO 240 IP=1,NSETP + L=INDEX(IP) + IF (ZZ(IP) .le. ZERO) then + T=-X(L)/(ZZ(IP)-X(L)) + IF (ALPHA .gt. T) then + ALPHA=T + JJ=IP + endif + endif + 240 CONTINUE +C +C IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE THEN ALPHA WILL +C STILL = 2. IF SO EXIT FROM SECONDARY LOOP TO MAIN LOOP. +C + IF (ALPHA.EQ.TWO) GO TO 330 +C +C OTHERWISE USE ALPHA WHICH WILL BE BETWEEN 0. AND 1. TO +C INTERPOLATE BETWEEN THE OLD X AND THE NEW ZZ. +C + DO 250 IP=1,NSETP + L=INDEX(IP) + X(L)=X(L)+ALPHA*(ZZ(IP)-X(L)) + 250 continue +C +C MODIFY A AND B AND THE INDEX ARRAYS TO MOVE COEFFICIENT I +C FROM SET P TO SET Z. +C + I=INDEX(JJ) + 260 continue + X(I)=ZERO +C + IF (JJ .ne. NSETP) then + JJ=JJ+1 + DO 280 J=JJ,NSETP + II=INDEX(J) + INDEX(J-1)=II + CALL G1 (A(J-1,II),A(J,II),CC,SS,A(J-1,II)) + A(J,II)=ZERO + DO 270 L=1,N + IF (L.NE.II) then +c +c Apply procedure G2 (CC,SS,A(J-1,L),A(J,L)) +c + TEMP = A(J-1,L) + A(J-1,L) = CC*TEMP + SS*A(J,L) + A(J,L) =-SS*TEMP + CC*A(J,L) + endif + 270 CONTINUE +c +c Apply procedure G2 (CC,SS,B(J-1),B(J)) +c + TEMP = B(J-1) + B(J-1) = CC*TEMP + SS*B(J) + B(J) =-SS*TEMP + CC*B(J) + 280 continue + endif +c + NPP1=NSETP + NSETP=NSETP-1 + IZ1=IZ1-1 + INDEX(IZ1)=I +C +C SEE IF THE REMAINING COEFFS IN SET P ARE FEASIBLE. THEY SHOULD +C BE BECAUSE OF THE WAY ALPHA WAS DETERMINED. +C IF ANY ARE INFEASIBLE IT IS DUE TO ROUND-OFF ERROR. ANY +C THAT ARE NONPOSITIVE WILL BE SET TO ZERO +C AND MOVED FROM SET P TO SET Z. +C + DO 300 JJ=1,NSETP + I=INDEX(JJ) + IF (X(I) .le. 
ZERO) go to 260 + 300 CONTINUE +C +C COPY B( ) INTO ZZ( ). THEN SOLVE AGAIN AND LOOP BACK. +C + DO 310 I=1,M + 310 ZZ(I)=B(I) + RTNKEY = 2 + GO TO 400 + 320 CONTINUE + GO TO 210 +C ****** END OF SECONDARY LOOP ****** +C + 330 continue + DO 340 IP=1,NSETP + I=INDEX(IP) + 340 X(I)=ZZ(IP) +C ALL NEW COEFFS ARE POSITIVE. LOOP BACK TO BEGINNING. + GO TO 30 +C +C ****** END OF MAIN LOOP ****** +C +C COME TO HERE FOR TERMINATION. +C COMPUTE THE NORM OF THE FINAL RESIDUAL VECTOR. +C + 350 continue + SM=ZERO + IF (NPP1 .le. M) then + DO 360 I=NPP1,M + 360 SM=SM+B(I)**2 + else + DO 380 J=1,N + 380 W(J)=ZERO + endif + RNORM=sqrt(SM) + RETURN +C +C THE FOLLOWING BLOCK OF CODE IS USED AS AN INTERNAL SUBROUTINE +C TO SOLVE THE TRIANGULAR SYSTEM, PUTTING THE SOLUTION IN ZZ(). +C + 400 continue + DO 430 L=1,NSETP + IP=NSETP+1-L + IF (L .ne. 1) then + DO 410 II=1,IP + ZZ(II)=ZZ(II)-A(II,JJ)*ZZ(IP+1) + 410 continue + endif + JJ=INDEX(IP) + ZZ(IP)=ZZ(IP)/A(IP,JJ) + 430 continue + go to (200, 320), RTNKEY + END + + + double precision FUNCTION DIFF(X,Y) +c +c Function used in tests that depend on machine precision. +c +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 7, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C + double precision X, Y + DIFF=X-Y + RETURN + END + + +C SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) +C +C CONSTRUCTION AND/OR APPLICATION OF A SINGLE +C HOUSEHOLDER TRANSFORMATION.. Q = I + U*(U**T)/B +C +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 12, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. 
+C ------------------------------------------------------------------ +c Subroutine Arguments +c +C MODE = 1 OR 2 Selects Algorithm H1 to construct and apply a +c Householder transformation, or Algorithm H2 to apply a +c previously constructed transformation. +C LPIVOT IS THE INDEX OF THE PIVOT ELEMENT. +C L1,M IF L1 .LE. M THE TRANSFORMATION WILL BE CONSTRUCTED TO +C ZERO ELEMENTS INDEXED FROM L1 THROUGH M. IF L1 GT. M +C THE SUBROUTINE DOES AN IDENTITY TRANSFORMATION. +C U(),IUE,UP On entry with MODE = 1, U() contains the pivot +c vector. IUE is the storage increment between elements. +c On exit when MODE = 1, U() and UP contain quantities +c defining the vector U of the Householder transformation. +c on entry with MODE = 2, U() and UP should contain +c quantities previously computed with MODE = 1. These will +c not be modified during the entry with MODE = 2. +C C() ON ENTRY with MODE = 1 or 2, C() CONTAINS A MATRIX WHICH +c WILL BE REGARDED AS A SET OF VECTORS TO WHICH THE +c HOUSEHOLDER TRANSFORMATION IS TO BE APPLIED. +c ON EXIT C() CONTAINS THE SET OF TRANSFORMED VECTORS. +C ICE STORAGE INCREMENT BETWEEN ELEMENTS OF VECTORS IN C(). +C ICV STORAGE INCREMENT BETWEEN VECTORS IN C(). +C NCV NUMBER OF VECTORS IN C() TO BE TRANSFORMED. IF NCV .LE. 0 +C NO OPERATIONS WILL BE DONE ON C(). +C ------------------------------------------------------------------ + SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) +C ------------------------------------------------------------------ + integer I, I2, I3, I4, ICE, ICV, INCR, IUE, J + integer L1, LPIVOT, M, MODE, NCV + double precision B, C(*), CL, CLINV, ONE, SM +c double precision U(IUE,M) + double precision U(IUE,*) + double precision UP + parameter(ONE = 1.0d0) +C ------------------------------------------------------------------ + IF (0.GE.LPIVOT.OR.LPIVOT.GE.L1.OR.L1.GT.M) RETURN + CL=abs(U(1,LPIVOT)) + IF (MODE.EQ.2) GO TO 60 +C ****** CONSTRUCT THE TRANSFORMATION. 
****** + DO 10 J=L1,M + 10 CL=MAX(abs(U(1,J)),CL) + IF (CL) 130,130,20 + 20 CLINV=ONE/CL + SM=(U(1,LPIVOT)*CLINV)**2 + DO 30 J=L1,M + 30 SM=SM+(U(1,J)*CLINV)**2 + CL=CL*SQRT(SM) + IF (U(1,LPIVOT)) 50,50,40 + 40 CL=-CL + 50 UP=U(1,LPIVOT)-CL + U(1,LPIVOT)=CL + GO TO 70 +C ****** APPLY THE TRANSFORMATION I+U*(U**T)/B TO C. ****** +C + 60 IF (CL) 130,130,70 + 70 IF (NCV.LE.0) RETURN + B= UP*U(1,LPIVOT) +C B MUST BE NONPOSITIVE HERE. IF B = 0., RETURN. +C + IF (B) 80,130,130 + 80 B=ONE/B + I2=1-ICV+ICE*(LPIVOT-1) + INCR=ICE*(L1-LPIVOT) + DO 120 J=1,NCV + I2=I2+ICV + I3=I2+INCR + I4=I3 + SM=C(I2)*UP + DO 90 I=L1,M + SM=SM+C(I3)*U(1,I) + 90 I3=I3+ICE + IF (SM) 100,120,100 + 100 SM=SM*B + C(I2)=C(I2)+SM*UP + DO 110 I=L1,M + C(I4)=C(I4)+SM*U(1,I) + 110 I4=I4+ICE + 120 CONTINUE + 130 RETURN + END + + + + SUBROUTINE G1 (A,B,CTERM,STERM,SIG) +c +C COMPUTE ORTHOGONAL ROTATION MATRIX.. +c +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 12, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C +C COMPUTE.. MATRIX (C, S) SO THAT (C, S)(A) = (SQRT(A**2+B**2)) +C (-S,C) (-S,C)(B) ( 0 ) +C COMPUTE SIG = SQRT(A**2+B**2) +C SIG IS COMPUTED LAST TO ALLOW FOR THE POSSIBILITY THAT +C SIG MAY BE IN THE SAME LOCATION AS A OR B . +C ------------------------------------------------------------------ + double precision A, B, CTERM, ONE, SIG, STERM, XR, YR, ZERO + parameter(ONE = 1.0d0, ZERO = 0.0d0) +C ------------------------------------------------------------------ + if (abs(A) .gt. abs(B)) then + XR=B/A + YR=sqrt(ONE+XR**2) + CTERM=sign(ONE/YR,A) + STERM=CTERM*XR + SIG=abs(A)*YR + RETURN + endif + + if (B .ne. 
ZERO) then + XR=A/B + YR=sqrt(ONE+XR**2) + STERM=sign(ONE/YR,B) + CTERM=STERM*XR + SIG=abs(B)*YR + RETURN + endif + + SIG=ZERO + CTERM=ZERO + STERM=ONE + RETURN + END Modified: trunk/scipy/optimize/setup.py =================================================================== --- trunk/scipy/optimize/setup.py 2008-09-30 06:48:15 UTC (rev 4756) +++ trunk/scipy/optimize/setup.py 2008-09-30 07:00:07 UTC (rev 4757) @@ -42,7 +42,7 @@ config.add_extension('_slsqp', sources=[join('slsqp', x) for x in sources]) config.add_extension('_nnls', sources=[join('nnls', x) \ - for x in ["NNLS.F","nnls.pyf"]]) + for x in ["nnls.f","nnls.pyf"]]) config.add_data_dir('tests') config.add_data_dir('benchmarks') From scipy-svn at scipy.org Tue Sep 30 15:15:04 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 30 Sep 2008 14:15:04 -0500 (CDT) Subject: [Scipy-svn] r4758 - in trunk/scipy/cluster: . src tests Message-ID: <20080930191504.322FA39C05F@scipy.org> Author: damian.eads Date: 2008-09-30 14:14:59 -0500 (Tue, 30 Sep 2008) New Revision: 4758 Modified: trunk/scipy/cluster/distance.py trunk/scipy/cluster/hierarchy.py trunk/scipy/cluster/src/distance.c trunk/scipy/cluster/src/distance.h trunk/scipy/cluster/src/distance_wrap.c trunk/scipy/cluster/tests/test_distance.py Log: Fixed minor bug in cophenet. Modified: trunk/scipy/cluster/distance.py =================================================================== --- trunk/scipy/cluster/distance.py 2008-09-30 07:00:07 UTC (rev 4757) +++ trunk/scipy/cluster/distance.py 2008-09-30 19:14:59 UTC (rev 4758) @@ -199,6 +199,35 @@ raise ValueError("p must be at least 1") return (abs(u-v)**p).sum() ** (1.0 / p) +def wminkowski(u, v, p, w): + """ + Computes the weighted Minkowski distance between two vectors ``u`` + and ``v``, defined as + + .. math:: + + \left(\sum_i {(w_i |u_i - v_i|)^p}\right)^{1/p}. + + :Parameters: + u : ndarray + An :math:`n`-dimensional vector. + v : ndarray + An :math:`n`-dimensional vector.
+ p : double + The p-norm to apply. + w : ndarray + The weight vector. + + :Returns: + d : double + The weighted Minkowski distance between vectors ``u`` and ``v``. + """ + u = np.asarray(u) + v = np.asarray(v) + if p < 1: + raise ValueError("p must be at least 1") + return ((w * abs(u-v))**p).sum() ** (1.0 / p) + def euclidean(u, v): """ Computes the Euclidean distance between two n-vectors ``u`` and ``v``, @@ -817,6 +846,14 @@ 'jaccard', 'kulsinski', 'mahalanobis', 'matching', 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule'. + w : ndarray + The weight vector (for weighted Minkowski). + p : double + The p-norm to apply (for Minkowski, weighted and unweighted). + V : ndarray + The variance vector (for standardized Euclidean). + VI : ndarray + The inverse of the covariance matrix (for Mahalanobis). :Returns: Y : ndarray @@ -978,6 +1015,11 @@ Computes the Sokal-Sneath distance between each pair of boolean vectors. (see sokalsneath function documentation) + 22. ``Y = pdist(X, 'wminkowski')`` + + Computes the weighted Minkowski distance between each pair of + vectors. (see wminkowski function documentation) + + 23.
``Y = pdist(X, f)`` Computes the distance between all pairs of vectors in X @@ -1036,6 +1078,11 @@ for j in xrange(i+1, m): dm[k] = minkowski(X[i, :], X[j, :], p) k = k + 1 + elif metric == wminkowski: + for i in xrange(0, m - 1): + for j in xrange(i+1, m): + dm[k] = wminkowski(X[i, :], X[j, :], p, w) + k = k + 1 elif metric == seuclidean: for i in xrange(0, m - 1): for j in xrange(i+1, m): @@ -1079,6 +1126,9 @@ _distance_wrap.pdist_chebyshev_wrap(_convert_to_double(X), dm) elif mstr in set(['minkowski', 'mi', 'm']): _distance_wrap.pdist_minkowski_wrap(_convert_to_double(X), dm, p) + elif mstr in set(['wminkowski', 'wmi', 'wm', 'wpnorm']): + _distance_wrap.pdist_weighted_minkowski_wrap(_convert_to_double(X), + dm, p, _convert_to_double(w)) elif mstr in set(['seuclidean', 'se', 's']): if V is not None: V = np.asarray(V) @@ -1174,7 +1224,9 @@ elif metric == 'test_cityblock': dm = pdist(X, cityblock) elif metric == 'test_minkowski': - dm = pdist(X, minkowski, p) + dm = pdist(X, minkowski, p=p) + elif metric == 'test_wminkowski': + dm = pdist(X, wminkowski, p=p, w=w) elif metric == 'test_cosine': dm = pdist(X, cosine) elif metric == 'test_correlation': @@ -1502,7 +1554,7 @@ return d -def cdist(XA, XB, metric='euclidean', p=2, V=None, VI=None): +def cdist(XA, XB, metric='euclidean', p=2, V=None, VI=None, w=None): """ Computes the distance between each pair of observations from two collections of vectors. ``XA`` is a :math:`$m_A$` by :math:`$n$` @@ -1529,8 +1581,18 @@ 'correlation', 'cosine', 'dice', 'euclidean', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis', 'matching', 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', - 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule'. + 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'wminkowski', + 'yule'. + w : ndarray + The weight vector (for weighted Minkowski). + p : double + The p-norm to apply (for Minkowski, weighted and unweighted). + V : ndarray + The variance vector (for standardized Euclidean).
+ VI : ndarray + The inverse of the covariance matrix (for Mahalanobis). + :Returns: Y : ndarray A :math:`$m_A$` by :math:`$m_B$` distance matrix. @@ -1688,11 +1750,17 @@ 21. ``Y = cdist(X, 'sokalsneath')`` - Computes the Sokal-Sneath distance between each pair of - boolean vectors. (see sokalsneath function documentation) + Computes the Sokal-Sneath distance between the vectors. (see + sokalsneath function documentation) - 22. ``Y = cdist(X, f)`` + 22. ``Y = cdist(X, 'wminkowski')`` + + Computes the weighted Minkowski distance between the + vectors. (see wminkowski function documentation) + + 23. ``Y = cdist(X, f)`` + Computes the distance between all pairs of vectors in X using the user supplied 2-arity function f. For example, Euclidean distance between the vectors could be computed @@ -1755,6 +1823,10 @@ for i in xrange(0, mA): for j in xrange(0, mB): dm[i, j] = minkowski(XA[i, :], XB[j, :], p) + elif metric == wminkowski: + for i in xrange(0, mA): + for j in xrange(0, mB): + dm[i, j] = wminkowski(XA[i, :], XB[j, :], p, w) + elif metric == seuclidean: for i in xrange(0, mA): for j in xrange(0, mB): @@ -1800,9 +1872,12 @@ elif mstr in set(['chebychev', 'chebyshev', 'cheby', 'cheb', 'ch']): _distance_wrap.cdist_chebyshev_wrap(_convert_to_double(XA), _convert_to_double(XB), dm) - elif mstr in set(['minkowski', 'mi', 'm']): + elif mstr in set(['minkowski', 'mi', 'm', 'pnorm']): _distance_wrap.cdist_minkowski_wrap(_convert_to_double(XA), _convert_to_double(XB), dm, p) + elif mstr in set(['wminkowski', 'wmi', 'wm', 'wpnorm']): + _distance_wrap.cdist_weighted_minkowski_wrap(_convert_to_double(XA), + _convert_to_double(XB), dm, p, _convert_to_double(w)) elif mstr in set(['seuclidean', 'se', 's']): if V is not None: V = np.asarray(V) @@ -1921,7 +1996,9 @@ elif metric == 'test_cityblock': dm = cdist(XA, XB, cityblock) elif metric == 'test_minkowski': - dm = cdist(XA, XB, minkowski, p) + dm = cdist(XA, XB, minkowski, p=p) + elif metric == 'test_wminkowski': + dm =
cdist(XA, XB, wminkowski, p=p, w=w) elif metric == 'test_cosine': dm = cdist(XA, XB, cosine) elif metric == 'test_correlation': Modified: trunk/scipy/cluster/hierarchy.py =================================================================== --- trunk/scipy/cluster/hierarchy.py 2008-09-30 07:00:07 UTC (rev 4757) +++ trunk/scipy/cluster/hierarchy.py 2008-09-30 19:14:59 UTC (rev 4758) @@ -910,14 +910,13 @@ correlation coefficient and ``d`` is the condensed cophentic distance matrix (upper triangular form). """ - Z = np.asarray(Z) - nargs = len(args) if nargs < 1: raise ValueError('At least one argument must be passed to cophenet.') Z = args[0] + Z = np.asarray(Z) is_valid_linkage(Z, throw=True, name='Z') Zs = Z.shape n = Zs[0] + 1 @@ -932,6 +931,7 @@ return zz Y = args[1] + Y = np.asarray(Y) Ys = Y.shape distance.is_valid_y(Y, throw=True, name='Y') Modified: trunk/scipy/cluster/src/distance.c =================================================================== --- trunk/scipy/cluster/src/distance.c 2008-09-30 07:00:07 UTC (rev 4757) +++ trunk/scipy/cluster/src/distance.c 2008-09-30 19:14:59 UTC (rev 4758) @@ -294,6 +294,16 @@ return pow(s, 1.0 / p); } +double weighted_minkowski_distance(const double *u, const double *v, int n, double p, const double *w) { + int i = 0; + double s = 0.0, d; + for (i = 0; i < n; i++) { + d = fabs(u[i] - v[i]) * w[i]; + s = s + pow(d, p); + } + return pow(s, 1.0 / p); +} + void compute_mean_vector(double *res, const double *X, int m, int n) { int i, j; const double *v; @@ -489,6 +499,19 @@ } } +void pdist_weighted_minkowski(const double *X, double *dm, int m, int n, double p, const double *w) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < m; i++) { + for (j = i + 1; j < m; j++, it++) { + u = X + (n * i); + v = X + (n * j); + *it = weighted_minkowski_distance(u, v, n, p, w); + } + } +} + void pdist_yule_bool(const char *X, double *dm, int m, int n) { int i, j; const char *u, *v; @@ -813,6 +836,19 @@ } } +void 
cdist_weighted_minkowski(const double *XA, const double *XB, double *dm, int mA, int mB, int n, double p, const double *w) { + int i, j; + const double *u, *v; + double *it = dm; + for (i = 0; i < mA; i++) { + for (j = 0; j < mB; j++, it++) { + u = XA + (n * i); + v = XB + (n * j); + *it = weighted_minkowski_distance(u, v, n, p, w); + } + } +} + void cdist_yule_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n) { int i, j; const char *u, *v; Modified: trunk/scipy/cluster/src/distance.h =================================================================== --- trunk/scipy/cluster/src/distance.h 2008-09-30 07:00:07 UTC (rev 4757) +++ trunk/scipy/cluster/src/distance.h 2008-09-30 19:14:59 UTC (rev 4758) @@ -55,6 +55,7 @@ void pdist_jaccard_bool(const char *X, double *dm, int m, int n); void pdist_kulsinski_bool(const char *X, double *dm, int m, int n); void pdist_minkowski(const double *X, double *dm, int m, int n, double p); +void pdist_weighted_minkowski(const double *X, double *dm, int m, int n, double p, const double *w); void pdist_yule_bool(const char *X, double *dm, int m, int n); void pdist_matching_bool(const char *X, double *dm, int m, int n); void pdist_dice_bool(const char *X, double *dm, int m, int n); @@ -93,6 +94,8 @@ int mA, int mB, int n); void cdist_minkowski(const double *XA, const double *XB, double *dm, int mA, int mB, int n, double p); +void cdist_weighted_minkowski(const double *XA, const double *XB, double *dm, + int mA, int mB, int n, double p, const double *w); void cdist_yule_bool(const char *XA, const char *XB, double *dm, int mA, int mB, int n); void cdist_matching_bool(const char *XA, const char *XB, double *dm, Modified: trunk/scipy/cluster/src/distance_wrap.c =================================================================== --- trunk/scipy/cluster/src/distance_wrap.c 2008-09-30 07:00:07 UTC (rev 4757) +++ trunk/scipy/cluster/src/distance_wrap.c 2008-09-30 19:14:59 UTC (rev 4758) @@ -347,12 +347,36 @@ mA = 
XA_->dimensions[0]; mB = XB_->dimensions[0]; n = XA_->dimensions[1]; - cdist_minkowski(XA, XB, dm, mA, mB, n, p); } return Py_BuildValue("d", 0.0); } +extern PyObject *cdist_weighted_minkowski_wrap(PyObject *self, PyObject *args) { + PyArrayObject *XA_, *XB_, *dm_, *w_; + int mA, mB, n; + double *dm; + const double *XA, *XB, *w; + double p; + if (!PyArg_ParseTuple(args, "O!O!O!dO!", + &PyArray_Type, &XA_, &PyArray_Type, &XB_, + &PyArray_Type, &dm_, + &p, + &PyArray_Type, &w_)) { + return 0; + } + else { + XA = (const double*)XA_->data; + XB = (const double*)XB_->data; + w = (const double*)w_->data; + dm = (double*)dm_->data; + mA = XA_->dimensions[0]; + mB = XB_->dimensions[0]; + n = XA_->dimensions[1]; + cdist_weighted_minkowski(XA, XB, dm, mA, mB, n, p, w); + } + return Py_BuildValue("d", 0.0); +} extern PyObject *cdist_yule_bool_wrap(PyObject *self, PyObject *args) { PyArrayObject *XA_, *XB_, *dm_; @@ -824,7 +848,31 @@ return Py_BuildValue("d", 0.0); } +extern PyObject *pdist_weighted_minkowski_wrap(PyObject *self, PyObject *args) { + PyArrayObject *X_, *dm_, *w_; + int m, n; + double *dm, *X, *w; + double p; + if (!PyArg_ParseTuple(args, "O!O!dO!", + &PyArray_Type, &X_, + &PyArray_Type, &dm_, + &p, + &PyArray_Type, &w_)) { + return 0; + } + else { + X = (double*)X_->data; + dm = (double*)dm_->data; + w = (const double*)w_->data; + m = X_->dimensions[0]; + n = X_->dimensions[1]; + pdist_weighted_minkowski(X, dm, m, n, p, w); + } + return Py_BuildValue("d", 0.0); +} + + extern PyObject *pdist_yule_bool_wrap(PyObject *self, PyObject *args) { PyArrayObject *X_, *dm_; int m, n; @@ -1048,6 +1096,7 @@ {"cdist_mahalanobis_wrap", cdist_mahalanobis_wrap, METH_VARARGS}, {"cdist_matching_bool_wrap", cdist_matching_bool_wrap, METH_VARARGS}, {"cdist_minkowski_wrap", cdist_minkowski_wrap, METH_VARARGS}, + {"cdist_weighted_minkowski_wrap", cdist_weighted_minkowski_wrap, METH_VARARGS}, {"cdist_rogerstanimoto_bool_wrap", cdist_rogerstanimoto_bool_wrap, METH_VARARGS}, 
{"cdist_russellrao_bool_wrap", cdist_russellrao_bool_wrap, METH_VARARGS}, {"cdist_seuclidean_wrap", cdist_seuclidean_wrap, METH_VARARGS}, @@ -1069,6 +1118,7 @@ {"pdist_mahalanobis_wrap", pdist_mahalanobis_wrap, METH_VARARGS}, {"pdist_matching_bool_wrap", pdist_matching_bool_wrap, METH_VARARGS}, {"pdist_minkowski_wrap", pdist_minkowski_wrap, METH_VARARGS}, + {"pdist_weighted_minkowski_wrap", pdist_weighted_minkowski_wrap, METH_VARARGS}, {"pdist_rogerstanimoto_bool_wrap", pdist_rogerstanimoto_bool_wrap, METH_VARARGS}, {"pdist_russellrao_bool_wrap", pdist_russellrao_bool_wrap, METH_VARARGS}, {"pdist_seuclidean_wrap", pdist_seuclidean_wrap, METH_VARARGS}, Modified: trunk/scipy/cluster/tests/test_distance.py =================================================================== --- trunk/scipy/cluster/tests/test_distance.py 2008-09-30 07:00:07 UTC (rev 4757) +++ trunk/scipy/cluster/tests/test_distance.py 2008-09-30 19:14:59 UTC (rev 4758) @@ -233,6 +233,44 @@ print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_wminkowski_random_p3d8(self): + "Tests cdist(X, 'wminkowski') on random data. (p=3.8)" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + w = 1.0 / X1.std(axis=0) + Y1 = cdist(X1, X2, 'wminkowski', p=3.8, w=w) + Y2 = cdist(X1, X2, 'test_wminkowski', p=3.8, w=w) + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_wminkowski_random_p4d6(self): + "Tests cdist(X, 'wminkowski') on random data. (p=4.6)" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + w = 1.0 / X1.std(axis=0) + Y1 = cdist(X1, X2, 'wminkowski', p=4.6, w=w) + Y2 = cdist(X1, X2, 'test_wminkowski', p=4.6, w=w) + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_wminkowski_random_p1d23(self): + "Tests cdist(X, 'wminkowski') on random data. 
(p=1.23)" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + w = 1.0 / X1.std(axis=0) + Y1 = cdist(X1, X2, 'wminkowski', p=1.23, w=w) + Y2 = cdist(X1, X2, 'test_wminkowski', p=1.23, w=w) + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + + def test_cdist_seuclidean_random(self): "Tests cdist(X, 'seuclidean') on random data." eps = 1e-07 From scipy-svn at scipy.org Tue Sep 30 19:55:26 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 30 Sep 2008 18:55:26 -0500 (CDT) Subject: [Scipy-svn] r4759 - in trunk/scipy: . spatial spatial/tests Message-ID: <20080930235526.6B3EF39C05F@scipy.org> Author: peridot Date: 2008-09-30 18:55:21 -0500 (Tue, 30 Sep 2008) New Revision: 4759 Added: trunk/scipy/spatial/ trunk/scipy/spatial/__init__.py trunk/scipy/spatial/info.py trunk/scipy/spatial/kdtree.py trunk/scipy/spatial/setup.py trunk/scipy/spatial/tests/ trunk/scipy/spatial/tests/test_kdtree.py Modified: trunk/scipy/__init__.py trunk/scipy/setup.py Log: New module for spatial data structure. Currently contains one pure-python implementation of a kd-tree. Modified: trunk/scipy/__init__.py =================================================================== --- trunk/scipy/__init__.py 2008-09-30 19:14:59 UTC (rev 4758) +++ trunk/scipy/__init__.py 2008-09-30 23:55:21 UTC (rev 4759) @@ -66,7 +66,7 @@ # Remove subpackage names from __all__ such that they are not imported via # "from scipy import *". This works around a numpy bug present in < 1.2. 
subpackages = """cluster constants fftpack integrate interpolate io lib linalg -linsolve maxentropy misc ndimage odr optimize signal sparse special +linsolve maxentropy misc ndimage odr optimize signal sparse special spatial splinalg stats stsci weave""".split() for name in subpackages: try: Modified: trunk/scipy/setup.py =================================================================== --- trunk/scipy/setup.py 2008-09-30 19:14:59 UTC (rev 4758) +++ trunk/scipy/setup.py 2008-09-30 23:55:21 UTC (rev 4759) @@ -18,6 +18,7 @@ config.add_subpackage('signal') config.add_subpackage('sparse') config.add_subpackage('special') + config.add_subpackage('spatial') config.add_subpackage('stats') config.add_subpackage('ndimage') config.add_subpackage('stsci') Added: trunk/scipy/spatial/__init__.py =================================================================== --- trunk/scipy/spatial/__init__.py 2008-09-30 19:14:59 UTC (rev 4758) +++ trunk/scipy/spatial/__init__.py 2008-09-30 23:55:21 UTC (rev 4759) @@ -0,0 +1,13 @@ +# +# spatial - spatial data structures and algorithms +# + +from info import __doc__ + +from kdtree import * + + +__all__ = filter(lambda s:not s.startswith('_'),dir()) +from numpy.testing import Tester +test = Tester().test + Added: trunk/scipy/spatial/info.py =================================================================== --- trunk/scipy/spatial/info.py 2008-09-30 19:14:59 UTC (rev 4758) +++ trunk/scipy/spatial/info.py 2008-09-30 23:55:21 UTC (rev 4759) @@ -0,0 +1,12 @@ +""" +Spatial data structures and algorithms +====================================== + +Nearest-neighbor queries: + + KDTree -- class for efficient nearest-neighbor queries + distance -- function for computing Minkowski p-norms + +""" + +postpone_import = 1 Added: trunk/scipy/spatial/kdtree.py =================================================================== --- trunk/scipy/spatial/kdtree.py 2008-09-30 19:14:59 UTC (rev 4758) +++ trunk/scipy/spatial/kdtree.py 2008-09-30 23:55:21 UTC (rev 
4759) @@ -0,0 +1,305 @@ +# Copyright Anne M. Archibald 2008 +# Released under the scipy license +import numpy as np +from heapq import heappush, heappop + +def distance_p(x,y,p=2): + if p==np.inf: + return np.amax(np.abs(y-x),axis=-1) + elif p==1: + return np.sum(np.abs(y-x),axis=-1) + else: + return np.sum(np.abs(y-x)**p,axis=-1) +def distance(x,y,p=2): + if p==np.inf or p==1: + return distance_p(x,y,p) + else: + return distance_p(x,y,p)**(1./p) + + +class KDTree(object): + """kd-tree for quick nearest-neighbor lookup + + This class provides an index into a set of k-dimensional points + which can be used to rapidly look up the nearest neighbors of any + point. + + The algorithm used is described in Maneewongvatana and Mount 1999. + The general idea is that the kd-tree is a binary trie, each of whose + nodes represents an axis-aligned hyperrectangle. Each node specifies + an axis and splits the set of points based on whether their coordinate + along that axis is greater than or less than a particular value. + + During construction, the axis and splitting point are chosen by the + "sliding midpoint" rule, which ensures that the cells do not all + become long and thin. + + The tree can be queried for the r closest neighbors of any given point + (optionally returning only those within some maximum distance of the + point). It can also be queried, with a substantial gain in efficiency, + for the r approximate closest neighbors. + + For large dimensions (20 is already large) do not expect this to run + significantly faster than brute force. High-dimensional nearest-neighbor + queries are a substantial open problem in computer science. + """ + + def __init__(self, data, leafsize=10): + """Construct a kd-tree. + + Parameters: + =========== + + data : array-like, shape (n,k) + The data points to be indexed. This array is not copied, and + so modifying this data will result in bogus results. 
+ leafsize : positive integer + The number of points at which the algorithm switches over to + brute-force. + """ + self.data = np.asarray(data) + self.n, self.k = np.shape(self.data) + self.leafsize = int(leafsize) + if self.leafsize<1: + raise ValueError("leafsize must be at least 1") + self.maxes = np.amax(self.data,axis=0) + self.mins = np.amin(self.data,axis=0) + + self.tree = self.__build(np.arange(self.n), self.maxes, self.mins) + + class node(object): + pass + class leafnode(node): + def __init__(self, idx): + self.idx = idx + class innernode(node): + def __init__(self, split_dim, split, less, greater): + self.split_dim = split_dim + self.split = split + self.less = less + self.greater = greater + + def __build(self, idx, maxes, mins): + if len(idx)<=self.leafsize: + return KDTree.leafnode(idx) + else: + data = self.data[idx] + #maxes = np.amax(data,axis=0) + #mins = np.amin(data,axis=0) + d = np.argmax(maxes-mins) + maxval = maxes[d] + minval = mins[d] + if maxval==minval: + # all points are identical; warn user? + return KDTree.leafnode(idx) + data = data[:,d] + + # sliding midpoint rule; see Maneewongvatana and Mount 1999 + # for arguments that this is a good idea. + split = (maxval+minval)/2 + less_idx = np.nonzero(data<=split)[0] + greater_idx = np.nonzero(data>split)[0] + if len(less_idx)==0: + split = np.amin(data) + less_idx = np.nonzero(data<=split)[0] + greater_idx = np.nonzero(data>split)[0] + if len(greater_idx)==0: + split = np.amax(data) + less_idx = np.nonzero(data<split)[0] + greater_idx = np.nonzero(data>=split)[0] + if len(less_idx)==0: + # _still_ zero?
all must have the same value + assert np.all(data==data[0]), "Troublesome data array: %s" % data + split = data[0] + less_idx = np.arange(len(data)-1) + greater_idx = np.array([len(data)-1]) + + lessmaxes = np.copy(maxes) + lessmaxes[d] = split + greatermins = np.copy(mins) + greatermins[d] = split + return KDTree.innernode(d, split, + self.__build(idx[less_idx],lessmaxes,mins), + self.__build(idx[greater_idx],maxes,greatermins)) + + def __query(self, x, k=1, eps=0, p=2, distance_upper_bound=np.inf): + + side_distances = [max(0,x[i]-self.maxes[i],self.mins[i]-x[i]) for i in range(self.k)] + # priority queue for chasing nodes + # entries are: + # minimum distance between the cell and the target + # distances between the nearest side of the cell and the target + # the head node of the cell + q = [(distance_p(np.array(side_distances),0), + tuple(side_distances), + self.tree)] + # priority queue for the nearest neighbors + # furthest known neighbor first + # entries are (-distance**p, i) + neighbors = [] + + if eps==0: + epsfac=1 + elif p==np.inf: + epsfac = 1/(1+eps) + else: + epsfac = 1/(1+eps)**p + + if p!=np.inf and distance_upper_bound!=np.inf: + distance_upper_bound = distance_upper_bound**p + + while q: + min_distance, side_distances, node = heappop(q) + if isinstance(node, KDTree.leafnode): + # brute-force + data = self.data[node.idx] + a = np.abs(data-x[np.newaxis,:]) + if p==np.inf: + ds = np.amax(a,axis=1) + elif p==1: + ds = np.sum(a,axis=1) + else: + ds = np.sum(a**p,axis=1) + for i in range(len(ds)): + if ds[i]<distance_upper_bound: + if len(neighbors)==k: + heappop(neighbors) + heappush(neighbors, (-ds[i], node.idx[i])) + if len(neighbors)==k: + distance_upper_bound = -neighbors[0][0] + else: + # we don't push cells that are too far onto the queue at all, + # but since the distance_upper_bound decreases, we might get + # here even if the cell's too far + if min_distance>distance_upper_bound*epsfac: + # since this is the nearest cell, we're done, bail out + break + # compute minimum distances to the children and push them on + if x[node.split_dim]<node.split: + near, far = node.less, node.greater + else: + near, far = node.greater, node.less + + # near child is at the same distance as the current node + heappush(q,(min_distance, side_distances, near)) + + # far child is further by an amount depending only + # on the split value + sd = list(side_distances) + if p == np.inf: + min_distance = max(min_distance, abs(node.split-x[node.split_dim])) + elif p == 1: + sd[node.split_dim] = np.abs(node.split-x[node.split_dim]) + min_distance = min_distance - side_distances[node.split_dim] + sd[node.split_dim] + else: + sd[node.split_dim] = np.abs(node.split-x[node.split_dim])**p + min_distance = min_distance - side_distances[node.split_dim] + sd[node.split_dim] + + # far child might be too far, if so, don't bother pushing it + if min_distance<=distance_upper_bound*epsfac: + heappush(q,(min_distance, tuple(sd), far)) + + if p==np.inf: + return sorted([(-d,i) for (d,i) in neighbors]) + else: + return sorted([((-d)**(1./p),i) for (d,i) in neighbors]) + + def query(self, x, k=1, eps=0, p=2, distance_upper_bound=np.inf): + """query the kd-tree for nearest neighbors + + Parameters: + =========== + + x : array-like, last dimension self.k + An array of points to query. + k : integer + The number of nearest neighbors to return. + eps : nonnegative float + Return approximate nearest neighbors; the kth returned value + is guaranteed to be no further than (1+eps) times the + distance to the real kth nearest neighbor. + p : float, 1<=p<=infinity + Which Minkowski p-norm to use. + distance_upper_bound : nonnegative float + Return only neighbors within this distance. This is used to + prune tree searches. + + Returns: + ======== + + d : array of floats + The distances to the nearest neighbors. + i : array of integers + The locations of the neighbors in self.data. + """ + x = np.asarray(x) + if np.shape(x)[-1] != self.k: + raise ValueError("x must consist of vectors of length %d but has shape %s" % (self.k, np.shape(x))) + if p<1: + raise ValueError("Only p-norms with 1<=p<=infinity permitted") + retshape = np.shape(x)[:-1] + if retshape!=(): + if k>1: + dd = np.empty(retshape+(k,),dtype=np.float) + dd.fill(np.inf) + ii = np.empty(retshape+(k,),dtype=np.int) + ii.fill(self.n) + elif k==1: + dd = np.empty(retshape,dtype=np.float) + dd.fill(np.inf) + ii = np.empty(retshape,dtype=np.int) + ii.fill(self.n) + elif k is
None: + dd = np.empty(retshape,dtype=np.object) + ii = np.empty(retshape,dtype=np.object) + else: + raise ValueError("Requested %s nearest neighbors; acceptable numbers are integers greater than or equal to one, or None") + for c in np.ndindex(retshape): + hits = self.__query(x[c], k=k, p=p, distance_upper_bound=distance_upper_bound) + if k>1: + for j in range(len(hits)): + dd[c+(j,)], ii[c+(j,)] = hits[j] + elif k==1: + if len(hits)>0: + dd[c], ii[c] = hits[0] + else: + dd[c] = np.inf + ii[c] = self.n + elif k is None: + dd[c] = [d for (d,i) in hits] + ii[c] = [i for (d,i) in hits] + return dd, ii + else: + hits = self.__query(x, k=k, p=p, distance_upper_bound=distance_upper_bound) + if k==1: + if len(hits)>0: + return hits[0] + else: + return np.inf, self.n + elif k>1: + dd = np.empty(k,dtype=np.float) + dd.fill(np.inf) + ii = np.empty(k,dtype=np.int) + ii.fill(self.n) + for j in range(len(hits)): + dd[j], ii[j] = hits[j] + return dd, ii + elif k is None: + return [d for (d,i) in hits], [i for (d,i) in hits] + else: + raise ValueError("Requested %s nearest neighbors; acceptable numbers are integers greater than or equal to one, or None") + + + Copied: trunk/scipy/spatial/setup.py (from rev 4758, trunk/scipy/interpolate/setup.py) =================================================================== --- trunk/scipy/interpolate/setup.py 2008-09-30 19:14:59 UTC (rev 4758) +++ trunk/scipy/spatial/setup.py 2008-09-30 23:55:21 UTC (rev 4759) @@ -0,0 +1,16 @@ +#!/usr/bin/env python + +from os.path import join + +def configuration(parent_package='',top_path=None): + from numpy.distutils.misc_util import Configuration + + config = Configuration('spatial', parent_package, top_path) + + config.add_data_dir('tests') + + return config + +if __name__ == '__main__': + from numpy.distutils.core import setup + setup(**configuration(top_path='').todict()) Property changes on: trunk/scipy/spatial/setup.py ___________________________________________________________________ Name: 
svn:executable
   + *
Name: svn:keywords
   + Author Date Id Revision
Name: svn:mergeinfo
   +
Name: svn:eol-style
   + native

Added: trunk/scipy/spatial/tests/test_kdtree.py
===================================================================
--- trunk/scipy/spatial/tests/test_kdtree.py 2008-09-30 19:14:59 UTC (rev 4758)
+++ trunk/scipy/spatial/tests/test_kdtree.py 2008-09-30 23:55:21 UTC (rev 4759)
@@ -0,0 +1,179 @@
+# Copyright Anne M. Archibald 2008
+# Released under the scipy license
+from numpy.testing import *
+
+import numpy as np
+from scipy.spatial import KDTree, distance
+
+class CheckSmall(NumpyTestCase):
+    def setUp(self):
+        self.data = np.array([[0,0,0],
+                              [0,0,1],
+                              [0,1,0],
+                              [0,1,1],
+                              [1,0,0],
+                              [1,0,1],
+                              [1,1,0],
+                              [1,1,1]])
+        self.kdtree = KDTree(self.data)
+
+    def test_nearest(self):
+        assert_array_equal(
+            self.kdtree.query((0,0,0.1), 1),
+            (0.1,0))
+    def test_nearest_two(self):
+        assert_array_equal(
+            self.kdtree.query((0,0,0.1), 2),
+            ([0.1,0.9],[0,1]))
+class CheckSmallNonLeaf(NumpyTestCase):
+    def setUp(self):
+        self.data = np.array([[0,0,0],
+                              [0,0,1],
+                              [0,1,0],
+                              [0,1,1],
+                              [1,0,0],
+                              [1,0,1],
+                              [1,1,0],
+                              [1,1,1]])
+        self.kdtree = KDTree(self.data,leafsize=1)
+
+    def test_nearest(self):
+        assert_array_equal(
+            self.kdtree.query((0,0,0.1), 1),
+            (0.1,0))
+    def test_nearest_two(self):
+        assert_array_equal(
+            self.kdtree.query((0,0,0.1), 2),
+            ([0.1,0.9],[0,1]))
+
+class CheckRandom(NumpyTestCase):
+    def setUp(self):
+        self.n = 1000
+        self.k = 4
+        self.data = np.random.randn(self.n, self.k)
+        self.kdtree = KDTree(self.data)
+
+    def test_nearest(self):
+        x = np.random.randn(self.k)
+        d, i = self.kdtree.query(x, 1)
+        assert_almost_equal(d**2,np.sum((x-self.data[i])**2))
+        eps = 1e-8
+        assert np.all(np.sum((self.data-x[np.newaxis,:])**2,axis=1)>d**2-eps)
+
+    def test_m_nearest(self):
+        x = np.random.randn(self.k)
+        m = 10
+        dd, ii = self.kdtree.query(x, m)
+        d = np.amax(dd)
+        i = ii[np.argmax(dd)]
+        assert_almost_equal(d**2,np.sum((x-self.data[i])**2))
+ eps = 1e-8 + assert_equal(np.sum(np.sum((self.data-x[np.newaxis,:])**2,axis=1) Author: damian.eads Date: 2008-09-30 19:10:16 -0500 (Tue, 30 Sep 2008) New Revision: 4760 Modified: trunk/scipy/cluster/distance.py Log: Added order keyword in asarray statements to ensure contiguity of data prior to passing to C functions. Modified: trunk/scipy/cluster/distance.py =================================================================== --- trunk/scipy/cluster/distance.py 2008-09-30 23:55:21 UTC (rev 4759) +++ trunk/scipy/cluster/distance.py 2008-10-01 00:10:16 UTC (rev 4760) @@ -193,8 +193,8 @@ d : double The Minkowski distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') if p < 1: raise ValueError("p must be at least 1") return (abs(u-v)**p).sum() ** (1.0 / p) @@ -222,8 +222,8 @@ d : double The Minkowski distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') if p < 1: raise ValueError("p must be at least 1") return ((w * abs(u-v))**p).sum() ** (1.0 / p) @@ -247,8 +247,8 @@ d : double The Euclidean distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') q=np.matrix(u-v) return np.sqrt((q*q.T).sum()) @@ -272,8 +272,8 @@ d : double The squared Euclidean distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return ((u-v)*(u-v).T).sum() def cosine(u, v): @@ -295,8 +295,8 @@ d : double The Cosine distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return (1.0 - (np.dot(u, v.T) / \ (np.sqrt(np.dot(u, u.T)) * np.sqrt(np.dot(v, v.T))))) @@ -356,8 +356,8 @@ d : double The Hamming distance between vectors ``u`` and ``v``. 
""" - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return (u != v).mean() def jaccard(u, v): @@ -384,8 +384,8 @@ d : double The Jaccard distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return (np.double(np.bitwise_and((u != v), np.bitwise_or(u != 0, v != 0)).sum()) / np.double(np.bitwise_or(u != 0, v != 0).sum())) @@ -414,8 +414,8 @@ d : double The Kulsinski distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') n = len(u) (nff, nft, ntf, ntt) = _nbool_correspond_all(u, v) @@ -438,9 +438,9 @@ d : double The standardized Euclidean distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) - V = np.asarray(V) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') + V = np.asarray(V, order='c') if len(V.shape) != 1 or V.shape[0] != u.shape[0] or u.shape[0] != v.shape[0]: raise TypeError('V must be a 1-D array of the same dimension as u and v.') return np.sqrt(((u-v)**2 / V).sum()) @@ -464,8 +464,8 @@ d : double The City Block distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return abs(u-v).sum() def mahalanobis(u, v, VI): @@ -488,9 +488,9 @@ d : double The Mahalanobis distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) - VI = np.asarray(VI) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') + VI = np.asarray(VI, order='c') return np.sqrt(np.dot(np.dot((u-v),VI),(u-v).T).sum()) def chebyshev(u, v): @@ -511,8 +511,8 @@ d : double The Chebyshev distance between vectors ``u`` and ``v``. 
""" - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return max(abs(u-v)) def braycurtis(u, v): @@ -534,8 +534,8 @@ d : double The Bray-Curtis distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return abs(u-v).sum() / abs(u+v).sum() def canberra(u, v): @@ -559,8 +559,8 @@ d : double The Canberra distance between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') return abs(u-v).sum() / (abs(u).sum() + abs(v).sum()) def _nbool_correspond_all(u, v): @@ -626,8 +626,8 @@ d : double The Yule dissimilarity between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') (nff, nft, ntf, ntt) = _nbool_correspond_all(u, v) return float(2.0 * ntf * nft) / float(ntt * nff + ntf * nft) @@ -654,8 +654,8 @@ d : double The Matching dissimilarity between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') (nft, ntf) = _nbool_correspond_ft_tf(u, v) return float(nft + ntf) / float(len(u)) @@ -683,8 +683,8 @@ d : double The Dice dissimilarity between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') if u.dtype == np.bool: ntt = (u & v).sum() else: @@ -716,8 +716,8 @@ The Rogers-Tanimoto dissimilarity between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') (nff, nft, ntf, ntt) = _nbool_correspond_all(u, v) return float(2.0 * (ntf + nft)) / float(ntt + nff + (2.0 * (ntf + nft))) @@ -745,8 +745,8 @@ d : double The Russell-Rao dissimilarity between vectors ``u`` and ``v``. 
""" - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') if u.dtype == np.bool: ntt = (u & v).sum() else: @@ -778,8 +778,8 @@ d : double The Sokal-Michener dissimilarity between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') if u.dtype == np.bool: ntt = (u & v).sum() nff = (~u & ~v).sum() @@ -813,8 +813,8 @@ d : double The Sokal-Sneath dissimilarity between vectors ``u`` and ``v``. """ - u = np.asarray(u) - v = np.asarray(v) + u = np.asarray(u, order='c') + v = np.asarray(v, order='c') if u.dtype == np.bool: ntt = (u & v).sum() else: @@ -1052,7 +1052,7 @@ # verifiable, but less efficient implementation. - X = np.asarray(X) + X = np.asarray(X, order='c') #if np.issubsctype(X, np.floating) and not np.issubsctype(X, np.double): # raise TypeError('Floating point arrays must be 64-bit (got %r).' % @@ -1131,7 +1131,7 @@ dm, p, w) elif mstr in set(['seuclidean', 'se', 's']): if V is not None: - V = np.asarray(V) + V = np.asarray(V, order='c') if type(V) != np.ndarray: raise TypeError('Variance vector V must be a numpy array') if V.dtype != np.double: @@ -1169,7 +1169,7 @@ _distance_wrap.pdist_cosine_wrap(_convert_to_double(X2), _convert_to_double(dm), _convert_to_double(norms)) elif mstr in set(['mahalanobis', 'mahal', 'mah']): if VI is not None: - VI = _convert_to_double(np.asarray(VI)) + VI = _convert_to_double(np.asarray(VI, order='c')) if type(VI) != np.ndarray: raise TypeError('VI must be a numpy array.') if VI.dtype != np.double: @@ -1206,7 +1206,7 @@ if V is None: V = np.var(X, axis=0, ddof=1) else: - V = np.asarray(V) + V = np.asarray(V, order='c') dm = pdist(X, lambda u, v: seuclidean(u, v, V)) elif metric == 'test_braycurtis': dm = pdist(X, braycurtis) @@ -1215,7 +1215,7 @@ V = np.cov(X.T) VI = np.linalg.inv(V) else: - VI = np.asarray(VI) + VI = np.asarray(VI, order='c') [VI] = _copy_arrays_if_base_present([VI]) # 
(u-v)V^(-1)(u-v)^T dm = pdist(X, (lambda u, v: mahalanobis(u, v, VI))) @@ -1310,7 +1310,7 @@ """ - X = _convert_to_double(np.asarray(X)) + X = _convert_to_double(np.asarray(X, order='c')) if not np.issubsctype(X, np.double): raise TypeError('A double array must be passed.') @@ -1408,7 +1408,7 @@ ``D.T`` and non-zeroness of the diagonal are ignored if they are within the tolerance specified by ``tol``. """ - D = np.asarray(D) + D = np.asarray(D, order='c') valid = True try: if type(D) != np.ndarray: @@ -1484,7 +1484,7 @@ warning or exception message. """ - y = np.asarray(y) + y = np.asarray(y, order='c') valid = True try: if type(y) != np.ndarray: @@ -1529,7 +1529,7 @@ :Returns: The number of observations in the redundant distance matrix. """ - D = np.asarray(D) + D = np.asarray(D, order='c') is_valid_dm(D, tol=np.inf, throw=True, name='D') return D.shape[0] @@ -1548,7 +1548,7 @@ The number of observations in the condensed distance matrix passed. """ - Y = np.asarray(Y) + Y = np.asarray(Y, order='c') is_valid_y(Y, throw=True, name='Y') d = int(np.ceil(np.sqrt(Y.shape[0] * 2))) return d @@ -1791,8 +1791,8 @@ # verifiable, but less efficient implementation. - XA = np.asarray(XA) - XB = np.asarray(XB) + XA = np.asarray(XA, order='c') + XB = np.asarray(XB, order='c') #if np.issubsctype(X, np.floating) and not np.issubsctype(X, np.double): # raise TypeError('Floating point arrays must be 64-bit (got %r).' 
% @@ -1880,7 +1880,7 @@ _convert_to_double(XB), dm, p, _convert_to_double(w)) elif mstr in set(['seuclidean', 'se', 's']): if V is not None: - V = np.asarray(V) + V = np.asarray(V, order='c') if type(V) != np.ndarray: raise TypeError('Variance vector V must be a numpy array') if V.dtype != np.double: @@ -1922,7 +1922,7 @@ _convert_to_double(normsB)) elif mstr in set(['mahalanobis', 'mahal', 'mah']): if VI is not None: - VI = _convert_to_double(np.asarray(VI)) + VI = _convert_to_double(np.asarray(VI, order='c')) if type(VI) != np.ndarray: raise TypeError('VI must be a numpy array.') if VI.dtype != np.double: @@ -1973,7 +1973,7 @@ if V is None: V = np.var(np.vstack([XA, XB]), axis=0, ddof=1) else: - V = np.asarray(V) + V = np.asarray(V, order='c') dm = cdist(XA, XB, lambda u, v: seuclidean(u, v, V)) elif metric == 'test_sqeuclidean': dm = cdist(XA, XB, lambda u, v: sqeuclidean(u, v)) @@ -1987,7 +1987,7 @@ X = None del X else: - VI = np.asarray(VI) + VI = np.asarray(VI, order='c') [VI] = _copy_arrays_if_base_present([VI]) # (u-v)V^(-1)(u-v)^T dm = cdist(XA, XB, (lambda u, v: mahalanobis(u, v, VI))) From scipy-svn at scipy.org Tue Sep 30 20:14:22 2008 From: scipy-svn at scipy.org (scipy-svn at scipy.org) Date: Tue, 30 Sep 2008 19:14:22 -0500 (CDT) Subject: [Scipy-svn] r4761 - trunk/scipy/cluster Message-ID: <20081001001422.6958739C05F@scipy.org> Author: damian.eads Date: 2008-09-30 19:14:20 -0500 (Tue, 30 Sep 2008) New Revision: 4761 Modified: trunk/scipy/cluster/hierarchy.py Log: Added order keyword in asarray statements to ensure contiguity of data prior to passing to C functions. 
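The motivation for the `order='c'` change is easy to demonstrate: a transposed (or otherwise strided) array is a view with Fortran-order strides, and the C wrappers assume row-major storage. A minimal sketch in plain NumPy, independent of this revision:

```python
import numpy as np

# A transpose is a view with Fortran-order strides, not a copy,
# so it is not C-contiguous in memory.
a = np.arange(6.0).reshape(2, 3).T
print(a.flags['C_CONTIGUOUS'])  # False

# asarray with order='c' copies only when necessary, so the C
# distance routines always see a contiguous row-major buffer.
b = np.asarray(a, order='c')
print(b.flags['C_CONTIGUOUS'])  # True

# Values are unchanged; only the memory layout differs.
print(np.array_equal(a, b))  # True
```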
Modified: trunk/scipy/cluster/hierarchy.py =================================================================== --- trunk/scipy/cluster/hierarchy.py 2008-10-01 00:10:16 UTC (rev 4760) +++ trunk/scipy/cluster/hierarchy.py 2008-10-01 00:14:20 UTC (rev 4761) @@ -577,7 +577,7 @@ if not isinstance(method, str): raise TypeError("Argument 'method' must be a string.") - y = _convert_to_double(np.asarray(y)) + y = _convert_to_double(np.asarray(y, order='c')) s = y.shape if len(s) == 1: @@ -800,7 +800,7 @@ library. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') @@ -916,7 +916,7 @@ raise ValueError('At least one argument must be passed to cophenet.') Z = args[0] - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') Zs = Z.shape n = Zs[0] + 1 @@ -931,7 +931,7 @@ return zz Y = args[1] - Y = np.asarray(Y) + Y = np.asarray(Y, order='c') Ys = Y.shape distance.is_valid_y(Y, throw=True, name='Y') @@ -983,7 +983,7 @@ This function behaves similarly to the MATLAB(TM) inconsistent function. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') Zs = Z.shape is_valid_linkage(Z, throw=True, name='Z') @@ -1028,7 +1028,7 @@ - ZS : ndarray A linkage matrix compatible with this library. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') Zs = Z.shape Zpart = Z[:,0:2] Zd = Z[:,2].reshape(Zs[0], 1) @@ -1057,7 +1057,7 @@ A linkage matrix compatible with MATLAB(TM)'s hierarchical clustering functions. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') return np.hstack([Z[:,0:2] + 1, Z[:,2]]) @@ -1077,7 +1077,7 @@ - b : bool A boolean indicating whether the linkage is monotonic. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') # We expect the i'th value to be greater than its successor. @@ -1111,7 +1111,7 @@ - b : bool True iff the inconsistency matrix is valid. 
""" - R = np.asarray(R) + R = np.asarray(R, order='c') valid = True try: if type(R) != np.ndarray: @@ -1177,7 +1177,7 @@ True iff the inconsistency matrix is valid. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') valid = True try: if type(Z) != np.ndarray: @@ -1229,7 +1229,7 @@ - n : int The number of original observations in the linkage. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') return (Z.shape[0] + 1) @@ -1257,8 +1257,8 @@ A boolean indicating whether the linkage matrix and distance matrix could possibly correspond to one another. """ - Z = np.asarray(Z) - Y = np.asarray(Y) + Z = np.asarray(Z, order='c') + Y = np.asarray(Y, order='c') return numobs_y(Y) == numobs_linkage(Z) def fcluster(Z, t, criterion='inconsistent', depth=2, R=None, monocrit=None): @@ -1318,7 +1318,7 @@ cluster(Z, t=3, criterion='maxclust_monocrit', monocrit=MI) """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') n = Z.shape[0] + 1 @@ -1332,7 +1332,7 @@ if R is None: R = inconsistent(Z, depth) else: - R = np.asarray(R) + R = np.asarray(R, order='c') is_valid_im(R, throw=True, name='R') # Since the C code does not support striding using strides. # The dimensions are used instead. @@ -1402,7 +1402,7 @@ This function is similar to MATLAB(TM) clusterdata function. """ - X = np.asarray(X) + X = np.asarray(X, order='c') if type(X) != np.ndarray or len(X.shape) != 2: raise TypeError('The observation matrix X must be an n by m numpy array.') @@ -1412,7 +1412,7 @@ if R is None: R = inconsistent(Z, d=depth) else: - R = np.asarray(R) + R = np.asarray(R, order='c') T = fcluster(Z, criterion=criterion, depth=depth, R=R, t=t) return T @@ -1423,7 +1423,7 @@ Returns a list of leaf node ids as they appear in the tree from left to right. Z is a linkage matrix. 
""" - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') n = Z.shape[0] + 1 ML = np.zeros((n,), dtype=np.int) @@ -1862,7 +1862,7 @@ # or results in a crossing, an exception will be thrown. Passing # None orders leaf nodes based on the order they appear in the # pre-order traversal. - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') Zs = Z.shape @@ -2226,8 +2226,8 @@ Returns True iff two different cluster assignments T1 and T2 are equivalent. T1 and T2 must be arrays of the same size. """ - T1 = np.asarray(T1) - T2 = np.asarray(T2) + T1 = np.asarray(T1, order='c') + T2 = np.asarray(T2, order='c') if type(T1) != np.ndarray: raise TypeError('T1 must be a numpy array.') @@ -2266,7 +2266,7 @@ Note that when Z[:,2] is monotonic, Z[:,2] and MD should not differ. See linkage for more information on this issue. """ - Z = np.asarray(Z) + Z = np.asarray(Z, order='c') is_valid_linkage(Z, throw=True, name='Z') n = Z.shape[0] + 1 @@ -2284,8 +2284,8 @@ inconsistency matrix. MI is a monotonic (n-1)-sized numpy array of doubles. """ - Z = np.asarray(Z) - R = np.asarray(R) + Z = np.asarray(Z, order='c') + R = np.asarray(R, order='c') is_valid_linkage(Z, throw=True, name='Z') is_valid_im(R, throw=True, name='R') @@ -2304,8 +2304,8 @@ is the maximum over R[Q(j)-n, i] where Q(j) the set of all node ids corresponding to nodes below and including j. """ - Z = np.asarray(Z) - R = np.asarray(R) + Z = np.asarray(Z, order='c') + R = np.asarray(R, order='c') is_valid_linkage(Z, throw=True, name='Z') is_valid_im(R, throw=True, name='R') if type(i) is not types.IntType: @@ -2341,8 +2341,8 @@ i < n, i corresponds to an original observation, otherwise it corresponds to a non-singleton cluster. 
""" - Z = np.asarray(Z) - T = np.asarray(T) + Z = np.asarray(Z, order='c') + T = np.asarray(T, order='c') if type(T) != np.ndarray or T.dtype != np.int: raise TypeError('T must be a one-dimensional numpy array of integers.') is_valid_linkage(Z, throw=True, name='Z')