From eric.bruning at gmail.com Sun Sep 5 00:40:52 2010 From: eric.bruning at gmail.com (Eric Bruning) Date: Sat, 4 Sep 2010 23:40:52 -0500 Subject: [SciPy-Dev] RFR: N-dimensional interpolation Message-ID: I made a trial run with some 3D data from a weather radar this evening. I'm interpolating to a horizontal slice through a (distorted) spherical grid, as shown here: http://bitbucket.org/deeplycloudy/pyrsl/src/tip/example.py Both linear and nearest-neighbor work great. Image of linear interpolation at http://yfrog.com/msspop Many thanks for cooking this up. I've been hoping for something like this for a long time. -Eric From pav at iki.fi Sun Sep 5 10:15:25 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 5 Sep 2010 14:15:25 +0000 (UTC) Subject: [SciPy-Dev] Nonlinear large-scale root finding Message-ID: Hi, I finally committed the scipy.optimize.nonlin rewrite that was pending for a long time. What is now there for large-scale root finding is: * Newton-Krylov method (``scipy.optimize.newton_krylov``) * Generalized secant methods: - Limited-memory Broyden (``scipy.optimize.broyden1``, ``scipy.optimize.broyden2``) - Anderson (``scipy.optimize.anderson``) * Simple iterations (``scipy.optimize.diagbroyden``, ``scipy.optimize.excitingmixing``, ``scipy.optimize.linearmixing``) Here's a simple demo: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#large-problems -- Pauli Virtanen From asmund.hjulstad at gmail.com Mon Sep 6 05:22:08 2010 From: asmund.hjulstad at gmail.com (=?ISO-8859-1?Q?=C5smund_Hjulstad?=) Date: Mon, 6 Sep 2010 12:22:08 +0300 Subject: [SciPy-Dev] Unexpected behaviour from timeseries.filled (in Python 2.7) Message-ID: It seems something in Python 2.7 / Numpy 1.5 (or my Python 2.7 installation) is breaking scikits.timeseries.filled import scikits.timeseries from numpy import array t3 = scikits.timeseries.time_series(array([0.]), mask=[True],dates=['2010-Aug'],freq='M') t3.fill_value= 0. 
print(t3) print(t3.data) print(t3.fill_value) print(t3.filled()) # It seems it ignores the fill_value parameter print(t3.filled(0.)) In Python 2.6 this outputs c:\python26\python scikits_bug.py [--] [ 0.] 0.0 [ 0.] [ 0.] , but in Python 2.7 I get: c:\python27\python scikits_bug.py [--] [ 0.] 0.0 [ 1.00000000e+20] # This must be wrong! [ 0.] Python 2.6: scikits.timeseries version 0.91.3, numpy 1.3.0 Python 2.7: scikits.timeseries version 0.91.3, numpy 1.5.0 -- Åsmund Hjulstad, asmund at hjulstad.com From stefan at sun.ac.za Mon Sep 6 06:15:12 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 6 Sep 2010 12:15:12 +0200 Subject: [SciPy-Dev] RFR: N-dimensional interpolation In-Reply-To: References: <17D79004-0D89-41E4-92FA-19819025A342@cs.toronto.edu> Message-ID: On Wed, Sep 1, 2010 at 12:28 AM, Pauli Virtanen wrote: >> I don't really have a suggestion of another name >> though. "ndgriddata"? Then griddata looks a bit more separate, and it'd >> match scipy.ndimage. > > That might be a slightly better name, yes. I saw another checkin today on "grid-datand"; I support David's idea to rename the module to avoid future confusion. Aside from nitpicking about the name, thanks very much for implementing this! We've had so many questions on the mailing list on this topic, and it's great to have a clean, built-in solution to offer. Regards Stéfan From asmund.hjulstad at gmail.com Mon Sep 6 11:32:35 2010 From: asmund.hjulstad at gmail.com (=?ISO-8859-1?Q?=C5smund_Hjulstad?=) Date: Mon, 6 Sep 2010 18:32:35 +0300 Subject: [SciPy-Dev] Unexpected behaviour from timeseries.filled (in Python 2.7) In-Reply-To: References: Message-ID: 2010/9/6 Åsmund Hjulstad : > It seems something in Python 2.7 / Numpy 1.5 (or my Python 2.7 > installation) is breaking scikits.timeseries.filled > > import scikits.timeseries > from numpy import array > t3 = scikits.timeseries.time_series(array([0.]), > mask=[True],dates=['2010-Aug'],freq='M') > t3.fill_value= 0.
> print(t3) > print(t3.data) > print(t3.fill_value) > print(t3.filled()) # It seems it ignores the fill_value parameter > print(t3.filled(0.)) > , but in Python 2.7 I get: > c:\python27\python scikits_bug.py > [--] > [ 0.] > 0.0 > [ 1.00000000e+20] # This must be wrong! > [ 0.] After some debugging, I found out the following: the TimeSeries.filled() method does this: result = self._series.filled(fill_value=fill_value).view(type(self)) Somehow, the _series view does not retain the fill value. Replacing this with (around line 1217 in tseries.py) result = MaskedArray.filled(self,fill_value=fill_value).view(type(self)) things work better (for me, at least). Regards, Åsmund Hjulstad From pgmdevlist at gmail.com Mon Sep 6 12:35:09 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 6 Sep 2010 18:35:09 +0200 Subject: [SciPy-Dev] Unexpected behaviour from timeseries.filled (in Python 2.7) In-Reply-To: References: Message-ID: <53975EA0-2487-4E51-9BF4-11AC3B5FCDFF@gmail.com> On Sep 6, 2010, at 5:32 PM, Åsmund Hjulstad wrote: > > After some debugging, I found out the following: > > the TimeSeries.filled() method does this: > > result = self._series.filled(fill_value=fill_value).view(type(self)) > > somehow, the _series view does not retain the fill value. > > replacing this with (around line 1217 in tseries.py) > > result = MaskedArray.filled(self,fill_value=fill_value).view(type(self)) > > things work better (for me, at least) Åsmund, Glad you were able to figure it out by yourself (and showing me where the problem was). Nevertheless, do you mind opening a ticket? I have quite a few to process first, and I don't want this one to fall between the cracks. Thanks a lot in advance. P.
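The semantics at issue in this thread can be sketched with plain NumPy masked arrays, leaving scikits.timeseries out entirely (a minimal illustration, not the reporter's actual code): filled() with no argument is supposed to use the array's stored fill_value, and the reported bug was that it fell back to the 1e+20 default instead.

```python
import numpy as np

# A one-element masked series with an explicit fill_value, mirroring the
# report (plain numpy.ma instead of scikits.timeseries).
t = np.ma.array([0.0], mask=[True], fill_value=0.0)

print(t.fill_value)    # the stored fill value: 0.0
print(t.filled())      # no argument: uses the stored fill_value
print(t.filled(7.0))   # explicit argument: overrides the stored value
```

On an affected scikits.timeseries/NumPy combination, the analogous TimeSeries call returned the 1e+20 default instead of the stored 0.0.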
From asmund.hjulstad at gmail.com Mon Sep 6 13:37:20 2010 From: asmund.hjulstad at gmail.com (=?ISO-8859-1?Q?=C5smund_Hjulstad?=) Date: Mon, 6 Sep 2010 20:37:20 +0300 Subject: [SciPy-Dev] Unexpected behaviour from timeseries.filled (in Python 2.7) In-Reply-To: <53975EA0-2487-4E51-9BF4-11AC3B5FCDFF@gmail.com> References: <53975EA0-2487-4E51-9BF4-11AC3B5FCDFF@gmail.com> Message-ID: 2010/9/6 Pierre GM : > > On Sep 6, 2010, at 5:32 PM, Åsmund Hjulstad wrote: >> >> After some debugging, I found out the following: >> the TimeSeries.filled() method does this: >> result = self._series.filled(fill_value=fill_value).view(type(self)) >> somehow, the _series view does not retain the fill value. >> replacing this with (around line 1217 in tseries.py) >> result = MaskedArray.filled(self,fill_value=fill_value).view(type(self)) >> things work better (for me, at least) > > Åsmund, > Glad you were able to figure it out by yourself (and showing me where the problem was). > Nevertheless, do you mind opening a ticket? I have quite a few to process first, and I don't want this one to fall between the cracks. > Thanks a lot in advance. I will, one question though: Why does MaskedArray reset the fill_value when creating a view? This, apparently, is what happens when the self._series property is accessed. Isn't it more reasonable to preserve it? I haven't been able to pinpoint what exactly caused the behaviour to change, whether it was the Numpy 1.3 -> 1.5 change or the Python 2.6 -> 2.7 (I would guess the former, though.) Ticket #105 created.
http://projects.scipy.org/scikits/ticket/105 From pgmdevlist at gmail.com Mon Sep 6 14:45:56 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 6 Sep 2010 20:45:56 +0200 Subject: [SciPy-Dev] Unexpected behaviour from timeseries.filled (in Python 2.7) In-Reply-To: References: <53975EA0-2487-4E51-9BF4-11AC3B5FCDFF@gmail.com> Message-ID: <38D75F32-12F9-4F90-ACE7-5FB536B0A9C2@gmail.com> On Sep 6, 2010, at 7:37 PM, Åsmund Hjulstad wrote: > 2010/9/6 Pierre GM : >> >> On Sep 6, 2010, at 5:32 PM, Åsmund Hjulstad wrote: >>> >>> After some debugging, I found out the following: >>> the TimeSeries.filled() method does this: >>> result = self._series.filled(fill_value=fill_value).view(type(self)) >>> somehow, the _series view does not retain the fill value. >>> replacing this with (around line 1217 in tseries.py) >>> result = MaskedArray.filled(self,fill_value=fill_value).view(type(self)) >>> things work better (for me, at least) >> >> Åsmund, >> Glad you were able to figure it out by yourself (and showing me where the problem was). >> Nevertheless, do you mind opening a ticket? I have quite a few to process first, and I don't want this one to fall between the cracks. >> Thanks a lot in advance. > > I will, one question though: Why does MaskedArray reset the fill_value > when creating a view? It does, but it shouldn't. > This, apparently, is what happens when the > self._series property is accessed. Isn't it more reasonable to > preserve it? I haven't been able to pinpoint what exactly caused the > behaviour to change, whether it was the Numpy 1.3 -> 1.5 change or the > Python 2.6 -> 2.7 (I would guess the former, though.) Probably the former indeed. OK, I'll be on that. Thanks for the report!
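The fix discussed above can be sketched with a minimal MaskedArray subclass. The `Series` class and both method names below are hypothetical stand-ins (this is not the actual tseries.py code), and whether the intermediate view really drops the fill_value depends on the NumPy version in use:

```python
import numpy as np
from numpy.ma import MaskedArray

class Series(MaskedArray):
    """Hypothetical stand-in for a MaskedArray subclass like TimeSeries."""

    def filled_via_view(self, fill_value=None):
        # The failing pattern: go through a plain MaskedArray view first.
        # Depending on the NumPy version, the view may not inherit
        # self.fill_value, so filled() can fall back to the 1e+20 default.
        return self.view(MaskedArray).filled(fill_value)

    def filled_direct(self, fill_value=None):
        # The suggested fix: call the base-class method on self, so the
        # stored fill_value is consulted directly.
        return MaskedArray.filled(self, fill_value)

s = Series([0.0], mask=[True])
s.fill_value = 0.0

print(s.filled_direct())     # uses the stored fill_value
print(s.filled_direct(7.0))  # an explicit value still overrides it
```

Calling the unbound base-class method sidesteps the view entirely, which is why the one-line change in tseries.py works regardless of whether the view preserves the fill value.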
From hofsaess at ifb.uni-stuttgart.de Thu Sep 9 12:00:57 2010 From: hofsaess at ifb.uni-stuttgart.de (=?UTF-8?B?TWFydGluIEhvZnPDpMOf?=) Date: Thu, 09 Sep 2010 18:00:57 +0200 Subject: [SciPy-Dev] problem building scipy with qhull Message-ID: <4C8904B9.1030505@ifb.uni-stuttgart.de> Hi all, I have the newest scipy version 0.9 from the svn trunk and I get an error when building scipy under Windows with the Intel MKL and the MSVC & IFORT compilers. Who can help me? Executing scons command (pkg is scipy.spatial): C:\Python26\python.exe "C:\Python26\lib\site-packages\numscons\scons-local\scons.py" -f scipy\spatial\SConstruct -I. scons_tool_path="" src_dir="scipy\spatial" pkg_path="scipy\spatial" pkg_name="scipy.spatial" log_level=0 distutils_libdir="..\..\..\..\build\lib.win32-2.6" distutils_clibdir="..\..\..\..\build\temp.win32-2.6" distutils_install_prefix="C:\Python26\Lib\site-packages\scipy\spatial" cc_opt=msvc cc_opt=msvc debug=0 f77_opt=ifort cxx_opt=msvc include_bootstrap=C:\Python26\lib\site-packages\numpy\core\include bypass=1 import_env=0 silent=0 bootstrapping=0 scons: Reading SConscript files ... Mkdir("build\scons\scipy\spatial") C:\Programme\Intel\Compiler\11.0\074\fortran\bin\ifortvars.bat scons: done reading SConscript files. scons: Building targets ... cl /Fobuild\scons\scipy\spatial\ckdtree.obj /c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo /IC:\Python26\lib\site-packages\numpy\core\include /IC:\Python26\include build\scons\scipy\spatial\ckdtree.c ckdtree.c link /nologo /dll /OUT:build\scons\scipy\spatial\ckdtree.pyd /LIBPATH:C:\Python26\libs build\scons\scipy\spatial\ckdtree.obj Creating library "build\scons\scipy\spatial\ckdtree.lib" and object "build\scons\scipy\spatial\ckdtree.exp".
Install file: "build\scons\scipy\spatial\ckdtree.pyd" as "build\lib.win32-2.6\scipy\spatial\ckdtree.pyd" cl /Fobuild\scons\scipy\spatial\qhull.obj /c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo /IC:\Python26\lib\site-packages\numpy\core\include /IC:\Python26\include build\scons\scipy\spatial\qhull.c qhull.c build\scons\scipy\spatial\qhull.c(1143) : warning C4133: '=': incompatible types - from 'PyObject *' to 'PyArrayObject *' build\scons\scipy\spatial\qhull.c(1768) : warning C4018: '>=': signed/unsigned mismatch build\scons\scipy\spatial\qhull.c(2057) : warning C4018: '>=': signed/unsigned mismatch build\scons\scipy\spatial\qhull.c(2777) : warning C4013: 'dgesv_' undefined; assuming extern returning int cl /Fobuild\scons\scipy\spatial\qhull\src\geom.obj /c build\scons\scipy\spatial\qhull\src\geom.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo geom.c cl /Fobuild\scons\scipy\spatial\qhull\src\geom2.obj /c build\scons\scipy\spatial\qhull\src\geom2.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo geom2.c cl /Fobuild\scons\scipy\spatial\qhull\src\global.obj /c build\scons\scipy\spatial\qhull\src\global.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo global.c cl /Fobuild\scons\scipy\spatial\qhull\src\io.obj /c build\scons\scipy\spatial\qhull\src\io.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo io.c cl /Fobuild\scons\scipy\spatial\qhull\src\mem.obj /c build\scons\scipy\spatial\qhull\src\mem.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo mem.c cl /Fobuild\scons\scipy\spatial\qhull\src\merge.obj /c build\scons\scipy\spatial\qhull\src\merge.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo merge.c cl /Fobuild\scons\scipy\spatial\qhull\src\poly.obj /c build\scons\scipy\spatial\qhull\src\poly.c /MD /nologo /GS-
/D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo poly.c cl /Fobuild\scons\scipy\spatial\qhull\src\poly2.obj /c build\scons\scipy\spatial\qhull\src\poly2.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo poly2.c cl /Fobuild\scons\scipy\spatial\qhull\src\qset.obj /c build\scons\scipy\spatial\qhull\src\qset.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo qset.c cl /Fobuild\scons\scipy\spatial\qhull\src\user.obj /c build\scons\scipy\spatial\qhull\src\user.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo user.c cl /Fobuild\scons\scipy\spatial\qhull\src\stat.obj /c build\scons\scipy\spatial\qhull\src\stat.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo stat.c cl /Fobuild\scons\scipy\spatial\qhull\src\qhull.obj /c build\scons\scipy\spatial\qhull\src\qhull.c /MD /nologo /GS- /D_CRT_SECURE_NO_WARNINGS /openmp /Ox /DNDEBUG /W3 /nologo qhull.c lib /nologo /OUT:build\scons\scipy\spatial\qhullsrc.lib build\scons\scipy\spatial\qhull\src\geom.obj build\scons\scipy\spatial\qhull\src\geom2.obj build\scons\scipy\spatial\qhull\src\global.obj build\scons\scipy\spatial\qhull\src\io.obj build\scons\scipy\spatial\qhull\src\mem.obj build\scons\scipy\spatial\qhull\src\merge.obj build\scons\scipy\spatial\qhull\src\poly.obj build\scons\scipy\spatial\qhull\src\poly2.obj build\scons\scipy\spatial\qhull\src\qset.obj build\scons\scipy\spatial\qhull\src\user.obj build\scons\scipy\spatial\qhull\src\stat.obj build\scons\scipy\spatial\qhull\src\qhull.obj Install file: "build\scons\scipy\spatial\qhullsrc.lib" as "build\temp.win32-2.6\qhullsrc.lib" link /nologo /dll /OUT:build\scons\scipy\spatial\qhull.pyd /LIBPATH:C:\Python26\libs build\scons\scipy\spatial\qhullsrc.lib build\temp.win32-2.6\qhullsrc.lib build\scons\scipy\spatial\qhull.obj Creating library "build\scons\scipy\spatial\qhull.lib" and object "build\scons\scipy\spatial\qhull.exp".
qhull.obj : error LNK2019: unresolved external symbol "_dgesv_" referenced in function "___pyx_pf_5scipy_7spatial_5qhull__get_barycentric_transforms". build\scons\scipy\spatial\qhull.pyd : fatal error LNK1120: 1 unresolved externals. scons: building terminated because of errors. From pav at iki.fi Thu Sep 9 13:33:16 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 9 Sep 2010 17:33:16 +0000 (UTC) Subject: [SciPy-Dev] problem building scipy with qhull References: <4C8904B9.1030505@ifb.uni-stuttgart.de> Message-ID: Thu, 09 Sep 2010 18:00:57 +0200, Martin Hofsäß wrote: [clip] > I have the newest scipy version 0.9 from the svn trunk and I get an > error when building scipy under Windows with the Intel MKL and the MSVC & > IFORT compilers. > > Who can help me? > > Executing scons command (pkg is scipy.spatial): C:\Python26\python.exe [clip] The SCons build configuration was broken, and should now work again. Try updating from SVN now. -- Pauli Virtanen From matthieu.brucher at gmail.com Fri Sep 10 13:18:04 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 10 Sep 2010 19:18:04 +0200 Subject: [SciPy-Dev] Error in scipy.ndimage documentation Message-ID: Hi, The following function is referenced on the scipy.ndimage page (http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#ndimage-ccallbacks) : static int _shift_function(int *output_coordinates, double* input_coordinates, int output_rank, int input_rank, void *callback_data) { int ii; /* get the shift from the callback data pointer: */ double shift = *(double*)callback_data; /* calculate the coordinates: */ for(ii = 0; ii < irank; ii++) icoor[ii] = ocoor[ii] - shift; /* return OK status: */ return 1; } It obviously cannot be compiled (irank -> input_rank, icoor -> input_coordinates, ocoor -> output_coordinates) Matthieu -- Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From eric.moscardi at sophia.inria.fr Fri Sep 10 14:24:40 2010 From: eric.moscardi at sophia.inria.fr (moscardi) Date: Fri, 10 Sep 2010 20:24:40 +0200 Subject: [SciPy-Dev] Error in scipy.ndimage documentation In-Reply-To: References: Message-ID: <07B95D5A-88D7-44ED-AF73-DFD0FE676507@sophia.inria.fr> Hi Matthieu, I changed these argument names and compiled it, but the result is not the same as in the example; the result matrix is: >>> a = np.arange(12.).reshape((4, 3)) >>> fnc= example.shift_function(0.5) >>> print ndimage.geometric_transform(a, fnc) array([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]) instead of: [[ 0. 0. 0. ] [ 0. 1.3625 2.7375] [ 0. 4.8125 6.1875] [ 0. 8.2625 9.6375]] If I try with: >>> fnc= example.shift_function(0) The result matrix is: array([[ -8.32667268e-16, -8.32667268e-16, -8.32667268e-16], [ 3.00000000e+00, 3.00000000e+00, 3.00000000e+00], [ 6.00000000e+00, 6.00000000e+00, 6.00000000e+00], [ 9.00000000e+00, 9.00000000e+00, 9.00000000e+00]]) So I think that something is wrong in the _shift_function from the tutorial. I think that the output_coordinates argument is not correct...? But maybe I forgot something which is not specified in the tutorial... Any ideas?
Thanks, Eric On Sep 10, 2010, at 7:18 PM, Matthieu Brucher wrote: > Hi, > > The following function is referenced on the scipy.ndimage page > (http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#ndimage-ccallbacks) > : > > static int > _shift_function(int *output_coordinates, double* input_coordinates, > int output_rank, int input_rank, void *callback_data) > { > int ii; > /* get the shift from the callback data pointer: */ > double shift = *(double*)callback_data; > /* calculate the coordinates: */ > for(ii = 0; ii < irank; ii++) > icoor[ii] = ocoor[ii] - shift; > /* return OK status: */ > return 1; > } > > It obviously cannot be compiled (irank -> input_rank, icoor -> > input_coordinates, ocoor -> output_coordinates) > > Matthieu > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Eric MOSCARDI INRIA - Virtual Plants CIRAD, Avenue Agropolis 34398 Montpellier Cedex 5, France 04 67 61 58 00 (ask number 60 09) email : eric.moscardi at sophia.inria.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Sep 11 21:10:43 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 12 Sep 2010 01:10:43 +0000 (UTC) Subject: [SciPy-Dev] Scipy on Python 3 Message-ID: Hi, I flushed the Python 3 branch containing work from me and David to SVN trunk. Scipy now builds with Python 3, and all tests pass, except for scipy.weave which still needs to be ported. More testing is welcome. I suspect Scipy's test suite does not cover all of the code, so there might be some work left to do. You'll probably need the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's distutils that are not in 1.5.0. 
Pauli From warren.weckesser at enthought.com Sat Sep 11 21:38:07 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 11 Sep 2010 20:38:07 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: Message-ID: <4C8C2EFF.1080008@enthought.com> Pauli Virtanen wrote: > Hi, > > I flushed the Python 3 branch containing work from me and David to SVN > trunk. Scipy now builds with Python 3, and all tests pass, except for > scipy.weave which still needs to be ported. > > More testing is welcome. I suspect Scipy's test suite does not cover all > of the code, so there might be some work left to do. You'll probably need > the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's > distutils that are not in 1.5.0. > > Hey Pauli, I built numpy 1.5.x using "python setup.py build", and added the built lib directory to PYTHONPATH. Is this sufficient to build scipy trunk? When I try to build scipy, I get the following: ----- $ python setup.py build non-existing path in 'scipy/cluster': '/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/core/include' non-existing path in 'scipy/cluster': '/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/core/include' Warning: No configuration returned, assuming unavailable.blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3'] umfpack_info: libraries umfpack not found in /Library/Frameworks/Python.framework/Versions/6.2/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/system_info.py:459: UserWarning: UMFPACK sparse solver 
(http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE non-existing path in 'scipy/spatial': '/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/core/include' non-existing path in 'scipy/special': '/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/core/include' non-existing path non-existing path non-existing path in 'scipy/special': '/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/core/include' Traceback (most recent call last): File "setup.py", line 177, in setup_package() File "setup.py", line 169, in setup_package configuration=configuration ) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/core.py", line 152, in setup config = configuration() File "setup.py", line 118, in configuration config.add_subpackage('scipy') File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 972, in add_subpackage caller_level = 2) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 941, in get_subpackage caller_level = caller_level + 1) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 878, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 20, in configuration config.add_subpackage('special') File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 972, in add_subpackage caller_level = 2) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 941, in get_subpackage caller_level = caller_level + 1) File 
"/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 878, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/special/setup.py", line 54, in configuration extra_info=get_info("npymath") File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 2092, in get_info pkg_info = get_pkg_info(pkgname, dirs) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/misc_util.py", line 2044, in get_pkg_info return read_config(pkgname, dirs) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/npy_pkg_config.py", line 384, in read_config v = _read_config_imp(pkg_to_filename(pkgname), dirs) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/npy_pkg_config.py", line 320, in _read_config_imp meta, vars, sections, reqs = _read_config(filenames) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/npy_pkg_config.py", line 304, in _read_config meta, vars, sections, reqs = parse_config(f, dirs) File "/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/distutils/npy_pkg_config.py", line 276, in parse_config raise PkgNotFound("Could not find file(s) %s" % str(filenames)) numpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s) ['/Users/warren/numpy_src_1.5.x/build/lib.macosx-10.5-i386-2.6/numpy/core/lib/npy-pkg-config/npymath.ini'] $ ----- Warren From pav at iki.fi Sat Sep 11 21:49:14 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 12 Sep 2010 01:49:14 +0000 (UTC) Subject: [SciPy-Dev] Scipy on Python 3 References: <4C8C2EFF.1080008@enthought.com> Message-ID: Sat, 11 Sep 2010 20:38:07 -0500, Warren Weckesser wrote: > I built numpy 1.5.x using "python setup.py build", and added the built > lib directory to PYTHONPATH. Is this sufficient to build scipy trunk? 
> When I try to build scipy, I get the following: You probably also need to "setup.py install" it somewhere, I guess not everything necessary goes to the lib directory under build/ Pauli From cournape at gmail.com Sat Sep 11 22:08:17 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 12 Sep 2010 11:08:17 +0900 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: <4C8C2EFF.1080008@enthought.com> References: <4C8C2EFF.1080008@enthought.com> Message-ID: On Sun, Sep 12, 2010 at 10:38 AM, Warren Weckesser wrote: > Pauli Virtanen wrote: >> Hi, >> >> I flushed the Python 3 branch containing work from me and David to SVN >> trunk. Scipy now builds with Python 3, and all tests pass, except for >> scipy.weave which still needs to be ported. >> >> More testing is welcome. I suspect Scipy's test suite does not cover all >> of the code, so there might be some work left to do. You'll probably need >> the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's >> distutils that are not in 1.5.0. >> >> > > Hey Pauli, > > I built numpy 1.5.x using "python setup.py build", and added the built > lib directory to PYTHONPATH. Is this sufficient to build scipy trunk? No, you should never do that (numpy or otherwise). If you want a usable package without installing it, you need the inplace build option: python setup.py build -i And point your PYTHONPATH to the *source* directory, not the build directory. This should work, and if it does not, it is a bug. cheers, David From warren.weckesser at enthought.com Sat Sep 11 22:08:39 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 11 Sep 2010 21:08:39 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8C2EFF.1080008@enthought.com> Message-ID: <4C8C3627.7060509@enthought.com> Pauli Virtanen wrote: > Sat, 11 Sep 2010 20:38:07 -0500, Warren Weckesser wrote: >> I built numpy 1.5.x using "python setup.py build", and added the built >> lib directory to PYTHONPATH.
Is this sufficient to build scipy trunk? >> When I try to build scipy, I get the following: >> > > You probably need also to "setup.py install" it somewhere, I guess not > everything necessary goes to the lib directory under build/ > > That worked--thanks. And thanks for all your work on the move to Python 3! Warren From warren.weckesser at enthought.com Sat Sep 11 22:13:19 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 11 Sep 2010 21:13:19 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8C2EFF.1080008@enthought.com> Message-ID: <4C8C373F.2080200@enthought.com> David Cournapeau wrote: > On Sun, Sep 12, 2010 at 10:38 AM, Warren Weckesser > wrote: > >> Pauli Virtanen wrote: >> >>> Hi, >>> >>> I flushed the Python 3 branch containing work from me and David to SVN >>> trunk. Scipy now builds with Python 3, and all tests pass, except for >>> scipy.weave which still needs to be ported. >>> >>> More testing is welcome. I suspect Scipy's test suite does not cover all >>> of the code, so there might be some work left to do. You'll probably need >>> the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's >>> distutils that are not in 1.5.0. >>> >>> >>> >> Hey Pauli, >> >> I built numpy 1.5.x using "python setup.py build", and added the built >> lib directory to PYTHONPATH. Is this sufficient to build scipy trunk? >> > > No, you should never do that (numpy or otherwise). If you want an > usable package without installing it, you need the inplace build > option: > > python setup.py build -i > > And point your PYTHONPATH to the *source* directory, not the build > directory. This should work, and if it does not, it is a bug. > > Thanks David, that's good to know. 
Warren From nwagner at iam.uni-stuttgart.de Mon Sep 13 14:53:42 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 13 Sep 2010 20:53:42 +0200 Subject: [SciPy-Dev] spatial.test() failure Message-ID: >>> from scipy import spatial >>> spatial.test() Running unit tests for scipy.spatial NumPy version 2.0.0.dev8714 NumPy is installed in /home/nwagner/local/lib64/python2.6/site-packages/numpy SciPy version 0.9.0.dev6803 SciPy is installed in /home/nwagner/local/lib64/python2.6/site-packages/scipy Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] nose version 0.11.2 ............................................................................................................................F.................................................................................................................................................. ====================================================================== FAIL: Tests pdist(X, 'minkowski') on iris data. (float32) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", line 837, in test_pdist_minkowski_3_2_iris_float32 self.failUnless(within_tol(Y_test1, Y_right, eps)) AssertionError ---------------------------------------------------------------------- Ran 271 tests in 38.005s FAILED (failures=1) From bsouthey at gmail.com Mon Sep 13 15:32:35 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 13 Sep 2010 14:32:35 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: Message-ID: <4C8E7C53.7080503@gmail.com> On 09/11/2010 08:10 PM, Pauli Virtanen wrote: > Hi, > > I flushed the Python 3 branch containing work from me and David to SVN > trunk. Scipy now builds with Python 3, and all tests pass, except for > scipy.weave which still needs to be ported. > > More testing is welcome. 
I suspect Scipy's test suite does not cover all > of the code, so there might be some work left to do. You'll probably need > the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's > distutils that are not in 1.5.0. > > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Hi, I checked out revision 6803 and the setup.py is missing a line (based on numpy's) in 'def svn_version()' that is needed before line 82: out = out.decode() I'll let you know if or when I get any other non-weave errors. Bruce $ python3 setup.py build Traceback (most recent call last): File "setup.py", line 82, in <module> FULLVERSION += svn_version() File "setup.py", line 68, in svn_version for line in out.split('\n'): TypeError: Type str doesn't support the buffer API From warren.weckesser at enthought.com Mon Sep 13 15:47:40 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 13 Sep 2010 14:47:40 -0500 Subject: [SciPy-Dev] spatial.test() failure In-Reply-To: References: Message-ID: <4C8E7FDC.10705@enthought.com> Nils Wagner wrote: >>>> from scipy import spatial >>>> spatial.test() >>>> > Running unit tests for scipy.spatial > NumPy version 2.0.0.dev8714 > NumPy is installed in > /home/nwagner/local/lib64/python2.6/site-packages/numpy > SciPy version 0.9.0.dev6803 > SciPy is installed in > /home/nwagner/local/lib64/python2.6/site-packages/scipy > Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) > [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] > nose version 0.11.2 > ..............................................................................................................................................................................................................................................................................
> ====================================================================== > FAIL: Tests pdist(X, 'minkowski') on iris data. (float32) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", > line 837, in test_pdist_minkowski_3_2_iris_float32 > self.failUnless(within_tol(Y_test1, Y_right, eps)) > AssertionError > > ---------------------------------------------------------------------- > Ran 271 tests in 38.005s > > FAILED (failures=1) > > In r6799, the names of some tests were changed so there are no longer duplicated test function names. This means tests are now being run that had previously been shadowed by the duplicate names. I also get the failure of the test test_pdist_minkowski_3_2_iris_float32(). I created a ticket for the problem: http://projects.scipy.org/scipy/ticket/1278 Warren From bsouthey at gmail.com Mon Sep 13 16:01:27 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 13 Sep 2010 15:01:27 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: <4C8E7C53.7080503@gmail.com> References: <4C8E7C53.7080503@gmail.com> Message-ID: On Mon, Sep 13, 2010 at 2:32 PM, Bruce Southey wrote: > ?On 09/11/2010 08:10 PM, Pauli Virtanen wrote: >> >> Hi, >> >> I flushed the Python 3 branch containing work from me and David to SVN >> trunk. Scipy now builds with Python 3, and all tests pass, except for >> scipy.weave which still needs to be ported. >> >> More testing is welcome. I suspect Scipy's test suite does not cover all >> of the code, so there might be some work left to do. You'll probably need >> the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's >> distutils that are not in 1.5.0. >> >> ? ? ? 
Pauli >> >> _______________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > Hi, > I checked out revision 6803 and the setup.py is missing the line (based on > numpy's) for the 'def svn_version()' that is needed before line 82: > > out = out.decode() > > Let you know if or when I get any other non-weave errors. > > Bruce > > $ python3 setup.py build > Traceback (most recent call last): > File "setup.py", line 82, in > FULLVERSION += svn_version() > File "setup.py", line 68, in svn_version > for line in out.split('\n'): > TypeError: Type str doesn't support the buffer API > After adding that line, I got scipy installed with a weave error and the failures reported by Nils and Warren. There are a lot of 'PendingDeprecationWarning: Please use assertTrue instead.' messages resulting from the usage of failUnless which is deprecated since version 2.7. Really impressive work! Bruce ====================================================================== ERROR: Failure: ImportError (No module named c_spec) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.1/site-packages/nose/failure.py", line 37, in runTest reraise(self.exc_class, self.exc_val, self.tb) File "/usr/lib/python3.1/site-packages/nose/_3.py", line 7, in reraise raise exc_class(exc_val).with_traceback(tb) File "/usr/lib/python3.1/site-packages/nose/loader.py", line 389, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python3.1/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python3.1/site-packages/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib64/python3.1/site-packages/scipy/weave/__init__.py", line 13, in from .inline_tools import inline File
"/usr/lib64/python3.1/site-packages/scipy/weave/inline_tools.py", line 5, in from . import ext_tools File "/usr/lib64/python3.1/site-packages/scipy/weave/ext_tools.py", line 7, in from . import converters File "/usr/lib64/python3.1/site-packages/scipy/weave/converters.py", line 5, in from . import c_spec File "/usr/lib64/python3.1/site-packages/scipy/weave/c_spec.py", line 380, in import os, c_spec # yes, I import myself to find out my __file__ location. ImportError: No module named c_spec ====================================================================== FAIL: test_ncg (test_optimize.TestOptimize) line-search Newton conjugate gradient optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python3.1/site-packages/scipy/optimize/tests/test_optimize.py", line 177, in test_ncg assert_(self.gradcalls == 18, self.gradcalls) # 0.8.0 File "/usr/lib64/python3.1/site-packages/numpy/testing/utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: 16 ====================================================================== FAIL: test_pdist_minkowski_3_2_iris_float32 (test_distance.TestPdist) Tests pdist(X, 'minkowski') on iris data. 
(float32) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python3.1/site-packages/scipy/spatial/tests/test_distance.py", line 837, in test_pdist_minkowski_3_2_iris_float32 self.failUnless(within_tol(Y_test1, Y_right, eps)) AssertionError: False is not True ---------------------------------------------------------------------- Ran 4602 tests in 43.328s FAILED (KNOWNFAIL=12, SKIP=40, errors=1, failures=2) From pav at iki.fi Mon Sep 13 16:06:37 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 13 Sep 2010 20:06:37 +0000 (UTC) Subject: [SciPy-Dev] Scipy on Python 3 References: <4C8E7C53.7080503@gmail.com> Message-ID: Mon, 13 Sep 2010 15:01:27 -0500, Bruce Southey wrote: [clip] > There are a lot of 'PendingDeprecationWarning: Please use assertTrue > instead.' messages resulting from the usage of failUnless which is > deprecated since version 2.7. Unfortunately, we can't use assertTrue since it's not in Python 2.4. Also these could be changed to use assert_(), similarly to how Warren recently changed bare asserts. -- Pauli Virtanen From warren.weckesser at enthought.com Mon Sep 13 16:12:38 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 13 Sep 2010 15:12:38 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8E7C53.7080503@gmail.com> Message-ID: <4C8E85B6.5000308@enthought.com> Pauli Virtanen wrote: > Mon, 13 Sep 2010 15:01:27 -0500, Bruce Southey wrote: > [clip] > >> There are a lot of 'PendingDeprecationWarning: Please use assertTrue >> instead.' messages resulting from the usage of failUnless which is >> deprecated since version 2.7. >> > > Unfortunately, we can't use assertTrue since it's not in Python 2.4. Also > these could be changed to use assert_(), similarly to how Warren recently > changed bare asserts. > > I didn't get to the tests in sparse and weave--ran out of time and steam. (See http://projects.scipy.org/scipy/ticket/1197.)
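For context on why a helper like assert_ is preferred over a bare ``assert`` statement: bare asserts are compiled away when Python runs with -O, so an optimized test run silently checks nothing, whereas an ordinary function call always executes. A minimal sketch of the mechanism (an illustration only, not the actual numpy.testing source):

```python
def assert_(val, msg=''):
    # A plain function call is always executed; a bare ``assert``
    # statement would be stripped entirely under ``python -O``.
    if not val:
        raise AssertionError(msg)

assert_(1 + 1 == 2)                              # passes quietly
try:
    assert_(2 + 2 == 5, "arithmetic is broken")
except AssertionError as exc:
    print(exc)                                   # prints: arithmetic is broken
```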
Warren From charlesr.harris at gmail.com Mon Sep 13 16:23:15 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 Sep 2010 14:23:15 -0600 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8E7C53.7080503@gmail.com> Message-ID: On Mon, Sep 13, 2010 at 2:06 PM, Pauli Virtanen wrote: > Mon, 13 Sep 2010 15:01:27 -0500, Bruce Southey wrote: > [clip] > > There are a lot of 'PendingDeprecationWarning: Please use assertTrue > > instead.' messages resulting from the usage of failUnless which is > > deprecated since version 2.7. > > Unfortunately, we can't use assertTrue since it's not in Python 2.4. Also > these could be changed to user assert_(), similarly how Warren changed > recently bare asserts. > > Numpy uses assertTrue, it is present in python2.4 but undocumented. I had a sed script to make all the needed replacements... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Sep 13 16:33:47 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 Sep 2010 14:33:47 -0600 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8E7C53.7080503@gmail.com> Message-ID: On Mon, Sep 13, 2010 at 2:23 PM, Charles R Harris wrote: > > > On Mon, Sep 13, 2010 at 2:06 PM, Pauli Virtanen wrote: > >> Mon, 13 Sep 2010 15:01:27 -0500, Bruce Southey wrote: >> [clip] >> > There are a lot of 'PendingDeprecationWarning: Please use assertTrue >> > instead.' messages resulting from the usage of failUnless which is >> > deprecated since version 2.7. >> >> Unfortunately, we can't use assertTrue since it's not in Python 2.4. Also >> these could be changed to user assert_(), similarly how Warren changed >> recently bare asserts. >> >> > Numpy uses assertTrue, it is present in python2.4 but undocumented. I had a > sed script to make all the needed replacements... 
> > Here is the script s/\<failUnless\>/assertTrue/g s/\<failIf\>/assertFalse/g s/\<failUnlessEqual\>/assertEqual/g s/\<failUnlessRaises\>/assertRaises/g Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Sep 13 17:25:43 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 Sep 2010 15:25:43 -0600 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8E7C53.7080503@gmail.com> Message-ID: On Mon, Sep 13, 2010 at 2:33 PM, Charles R Harris wrote: > > > On Mon, Sep 13, 2010 at 2:23 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Mon, Sep 13, 2010 at 2:06 PM, Pauli Virtanen wrote: >> >>> Mon, 13 Sep 2010 15:01:27 -0500, Bruce Southey wrote: >>> [clip] >>> > There are a lot of 'PendingDeprecationWarning: Please use assertTrue >>> > instead.' messages resulting from the usage of failUnless which is >>> > deprecated since version 2.7. >>> >>> Unfortunately, we can't use assertTrue since it's not in Python 2.4. Also >>> these could be changed to use assert_(), similarly to how Warren recently >>> changed bare asserts. >>> >>> >> Numpy uses assertTrue, it is present in python2.4 but undocumented. I had >> a sed script to make all the needed replacements... >> >> > Here is the script > > s/\<failUnless\>/assertTrue/g > s/\<failIf\>/assertFalse/g > s/\<failUnlessEqual\>/assertEqual/g > s/\<failUnlessRaises\>/assertRaises/g > > I made the changes in r6805. For future reference on mass changes like that: find scipy -name "test*.py" -exec sed --in-place=bck --file=sedfixdeprecated {} + Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bsouthey at gmail.com Mon Sep 13 17:49:26 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 13 Sep 2010 16:49:26 -0500 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: <4C8E7C53.7080503@gmail.com> Message-ID: <4C8E9C66.6040409@gmail.com> On 09/13/2010 04:25 PM, Charles R Harris wrote: > > > On Mon, Sep 13, 2010 at 2:33 PM, Charles R Harris > > wrote: > > > > On Mon, Sep 13, 2010 at 2:23 PM, Charles R Harris > > wrote: > > > > On Mon, Sep 13, 2010 at 2:06 PM, Pauli Virtanen > wrote: > > Mon, 13 Sep 2010 15:01:27 -0500, Bruce Southey wrote: > [clip] > > There are a lot of 'PendingDeprecationWarning: Please > use assertTrue > > instead.' messages resulting from the usage of > failUnless which is > > deprecated since version 2.7. > > Unfortunately, we can't use assertTrue since it's not in > Python 2.4. Also > these could be changed to use assert_(), similarly to how > Warren recently changed > bare asserts. > > > Numpy uses assertTrue, it is present in python2.4 but > undocumented. I had a sed script to make all the needed > replacements... > > > Here is the script > > s/\<failUnless\>/assertTrue/g > s/\<failIf\>/assertFalse/g > s/\<failUnlessEqual\>/assertEqual/g > s/\<failUnlessRaises\>/assertRaises/g > > > I made the changes in r6805. For future reference on mass changes like > that: > > find scipy -name "test*.py" -exec sed --in-place=bck > --file=sedfixdeprecated {} + > > Chuck > > > _______________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev That was fast! No warnings with Python3 and just the same weave error and reported test failures as before. There are errors for Python2.4 that are not related to the deprecation changes (output below). I can file tickets if needed. I noticed that numpy svn version under Python2.4 also failed some tests but that's another list (and day). (But, for completeness these are also below since they involve polynomial.)
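The same mass rename can be sketched without sed, using Python's re module. Note the caveats: the angle-bracket word-boundary patterns in the quoted sed script were mangled by the HTML-to-text archiver, so the left-hand names below are inferred from the deprecation warnings discussed in this thread (the standard unittest 2.7 aliases), and the sample input line is illustrative only:

```python
import re

# Deprecated unittest.TestCase aliases mapped to their 2.7 names
# (inferred mapping; see caveat in the text above).
REPLACEMENTS = {
    "failUnless": "assertTrue",
    "failIf": "assertFalse",
    "failUnlessEqual": "assertEqual",
    "failUnlessRaises": "assertRaises",
}

# Try longer names first so failUnlessEqual is not half-matched as
# failUnless; \b plays the role of sed's \< and \> word boundaries.
_PATTERN = re.compile(
    r"\b(%s)\b" % "|".join(sorted(REPLACEMENTS, key=len, reverse=True))
)

def modernize(source):
    """Rewrite deprecated TestCase method names in a source string."""
    return _PATTERN.sub(lambda m: REPLACEMENTS[m.group(1)], source)

print(modernize("self.failUnless(within_tol(Y_test1, Y_right, eps))"))
# -> self.assertTrue(within_tol(Y_test1, Y_right, eps))
```

Applied per file (read, modernize, write back), this behaves like the word-boundary sed substitution driven by the find command above.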
Thanks Bruce $ python2.4 Python 2.4.6 (#1, Sep 13 2010, 15:54:12) [GCC 4.4.4 20100630 (Red Hat 4.4.4-10)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 2.0.0.dev8714 NumPy is installed in /usr/local/lib/python2.4/site-packages/numpy SciPy version 0.9.0.dev6805 SciPy is installed in /usr/local/lib/python2.4/site-packages/scipy Python version 2.4.6 (#1, Sep 13 2010, 15:54:12) [GCC 4.4.4 20100630 (Red Hat 4.4.4-10)] nose version 0.11.2 [snip]test_polynomial.py ====================================================================== ERROR: test_idl.TestArrayDimensions.test_2d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 133, in test_2d s = readsav(path.join(DATA_PATH, 'array_float32_2d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_3d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 137, in test_3d s = readsav(path.join(DATA_PATH, 'array_float32_3d.sav'), verbose=False) File 
"/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_4d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 141, in test_4d s = readsav(path.join(DATA_PATH, 'array_float32_4d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_5d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 145, in test_5d s = readsav(path.join(DATA_PATH, 'array_float32_5d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record 
rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_6d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 149, in test_6d s = readsav(path.join(DATA_PATH, 'array_float32_6d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_7d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 153, in test_7d s = readsav(path.join(DATA_PATH, 'array_float32_7d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers 
or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_8d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 157, in test_8d s = readsav(path.join(DATA_PATH, 'array_float32_8d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestCompressed.test_compressed ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 114, in test_compressed s = readsav(path.join(DATA_PATH, 'various_compressed.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== FAIL: line-search Newton conjugate gradient optimization routine 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 177, in test_ncg assert_(self.gradcalls == 18, self.gradcalls) # 0.8.0 File "/usr/local/lib/python2.4/site-packages/numpy/testing/utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: 16 ====================================================================== FAIL: Tests pdist(X, 'minkowski') on iris data. (float32) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/spatial/tests/test_distance.py", line 837, in test_pdist_minkowski_3_2_iris_float32 self.assertTrue(within_tol(Y_test1, Y_right, eps)) AssertionError ---------------------------------------------------------------------- Ran 4736 tests in 33.837s FAILED (KNOWNFAIL=12, SKIP=40, errors=8, failures=2) >>> numpy.test() Running unit tests for numpy NumPy version 2.0.0.dev8714 NumPy is installed in /usr/local/lib/python2.4/site-packages/numpy Python version 2.4.6 (#1, Sep 13 2010, 15:54:12) [GCC 4.4.4 20100630 (Red Hat 4.4.4-10)] nose version 0.11.2 
....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................K...................................................................................................................................................................................................................................K............................................................................................K......................K............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................F......................................................................................................................
.................................................................................................................................................................................................................................................................E..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Warning: divide by zero encountered in log .........................................................................................................................................................................................................................................................................SS............ 
====================================================================== ERROR: test_type_check.TestDateTimeData.test_basic ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/numpy/lib/tests/test_type_check.py", line 383, in test_basic assert_equal(datetime_data(a.dtype), (asbytes('us'), 1, 1, 1)) File "/usr/local/lib/python2.4/site-packages/numpy/lib/type_check.py", line 610, in datetime_data raise RuntimeError, "Cannot access date-time internals without ctypes installed" RuntimeError: Cannot access date-time internals without ctypes installed ====================================================================== FAIL: test_doctests (test_polynomial.TestDocs) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/numpy/lib/tests/test_polynomial.py", line 90, in test_doctests return rundocs() File "/usr/local/lib/python2.4/site-packages/numpy/testing/utils.py", line 962, in rundocs raise AssertionError("Some doctests failed:\n%s" % "\n".join(msg)) AssertionError: Some doctests failed: ********************************************************************** File "/usr/local/lib/python2.4/site-packages/numpy/lib/tests/test_polynomial.py", line 38, in test_polynomial Failed example: p / q Expected: (poly1d([ 0.33333333]), poly1d([ 1.33333333, 2.66666667])) Got: (poly1d([ 0.333]), poly1d([ 1.333, 2.667])) ********************************************************************** File "/usr/local/lib/python2.4/site-packages/numpy/lib/tests/test_polynomial.py", line 60, in test_polynomial Failed example: p.integ() Expected: poly1d([ 0.33333333, 1. , 3. , 0. ]) Got: poly1d([ 0.333, 1. , 3. , 0. 
]) ********************************************************************** File "/usr/local/lib/python2.4/site-packages/numpy/lib/tests/test_polynomial.py", line 62, in test_polynomial Failed example: p.integ(1) Expected: poly1d([ 0.33333333, 1. , 3. , 0. ]) Got: poly1d([ 0.333, 1. , 3. , 0. ]) ********************************************************************** File "/usr/local/lib/python2.4/site-packages/numpy/lib/tests/test_polynomial.py", line 64, in test_polynomial Failed example: p.integ(5) Expected: poly1d([ 0.00039683, 0.00277778, 0.025 , 0. , 0. , 0. , 0. , 0. ]) Got: poly1d([ 0. , 0.003, 0.025, 0. , 0. , 0. , 0. , 0. ]) ---------------------------------------------------------------------- Ran 3011 tests in 18.576s FAILED (KNOWNFAIL=4, SKIP=2, errors=1, failures=1) -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Tue Sep 14 00:23:58 2010 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 13 Sep 2010 21:23:58 -0700 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: Message-ID: <4C8EF8DE.20308@uci.edu> On 9/11/2010 6:10 PM, Pauli Virtanen wrote: > Hi, > > I flushed the Python 3 branch containing work from me and David to SVN > trunk. Scipy now builds with Python 3, and all tests pass, except for > scipy.weave which still needs to be ported. > > More testing is welcome. I suspect Scipy's test suite does not cover all > of the code, so there might be some work left to do. You'll probably need > the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's > distutils that are not in 1.5.0. > > Pauli > Thank you and David for making this available. Will numscons eventually be ported to Python 3? So far numscons has been the most reliable way to build scipy with Microsoft C and Intel Fortran compilers. Anyway, I succeeded building scipy 0.9 rev 6805 on Python 3.1 for Windows (64 bit) using the distutils method after applying some bold fixes. 
Most problems encountered are not Python 3 specific. The attached patch gives a hint what needs to be addressed. I am using this site.cfg with numpy 1.5.0 svn rev 8714: [mkl] include_dirs = C:/Program Files (x86)/Intel/Compiler/11.1/067/mkl/include library_dirs = C:/Program Files (x86)/Intel/Compiler/11.1/067/mkl/em64t/lib;C:/Program Files (x86)/Intel/Compiler/11.1/067/lib/intel64 mkl_libs = mkl_lapack95_lp64,mkl_blas95_lp64,mkl_intel_lp64,mkl_intel_thread,mkl_core,libiomp5md,libifportmd lapack_libs = mkl_lapack95_lp64,mkl_blas95_lp64,mkl_intel_lp64,mkl_intel_thread,mkl_core,libiomp5md,libifportmd The problem with this configuration is that numpy.distutils expects all the libraries to be found in a single directory, which is not the case. Hence it claims that MKL is not available. A quick solution is to copy libiomp5md.lib and libifportmd.lib from '.../lib/intel64' to '.../mkl/em64t/lib'. In order for numpy.distutils to find the ifort compiler, call "ifortvars.bat intel64 vs2008" before "python setup.py build". The Visual C compiler reproducibly crashes while compiling scipy\special\cephes\polevl.c. Running the build command twice works. Scipy\interpolate\interpnd.c uses the C99 fmax() function, which is unknown to msvc9. Using max() instead of fmax(). The atlas_version.c files in scipy/lib/lapack/ and scipy/lib/blas/ are not Python 3 compatible. Using the atlas_version.c file from scipy/linalg, which has been ported. Atlas_version.c needs 'NO_ATLAS_INFO' defined when using MKL. Using "define_macros = [('NO_ATLAS_INFO', 1)]" when defining the extension in setup.py. Removing 'lsame.c' from the SuperLU source files in scipy/sparse/linalg/dsolve/setup.py. lsame is provided by MKL. Scipy/optimize/lbfgsb/routines.f and scipy/sparse/linalg/eigen/arpack/ARPACK/UTIL/second.f use the etime function, which can not be resolved even though libifportmd is linked. Probably some underscore/case issue. Boldly replacing the function call with a constant for now. 
Scipy.test() crashes during sparse matrix slicing. See . Deleting scipy\sparse\tests\test_base.py for now. Below is the the test output. Looks good. The TestODR and TestNdimage failures are known. See and . -- Christoph Running unit tests for scipy NumPy version 1.5.0 NumPy is installed in X:\Python31\lib\site-packages\numpy SciPy version 0.9.0.dev6805 SciPy is installed in X:\Python31\lib\site-packages\scipy Python version 3.1.2 (r312:79149, Mar 20 2010, 22:55:39) [MSC v.1500 64 bit (AMD64)] nose version 0.11.0 ====================================================================== ERROR: test suite for ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\nose\suite.py", line 173, in run self.tearDown() File "X:\Python31\lib\site-packages\nose\suite.py", line 295, in tearDown self.teardownContext(ancestor) File "X:\Python31\lib\site-packages\nose\suite.py", line 311, in teardownContext try_run(context, names) File "X:\Python31\lib\site-packages\nose\util.py", line 453, in try_run return func() File "X:\Python31\lib\site-packages\scipy\io\matlab\tests\test_streams.py", line 47, in teardown os.unlink(fname) WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\users\\gohlke\\appdata\\local\\temp\\tmppbkhkm' ====================================================================== ERROR: Failure: ImportError (No module named c_spec) ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\nose\failure.py", line 27, in runTest reraise(self.exc_class, self.exc_val, self.tb) File "X:\Python31\lib\site-packages\nose\_3.py", line 7, in reraise raise exc_class(exc_val).with_traceback(tb) File "X:\Python31\lib\site-packages\nose\loader.py", line 372, in loadTestsFromName addr.filename, addr.module) File "X:\Python31\lib\site-packages\nose\importer.py", 
line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "X:\Python31\lib\site-packages\nose\importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "X:\Python31\lib\site-packages\scipy\weave\__init__.py", line 13, in from .inline_tools import inline File "X:\Python31\lib\site-packages\scipy\weave\inline_tools.py", line 5, in from . import ext_tools File "X:\Python31\lib\site-packages\scipy\weave\ext_tools.py", line 7, in from . import converters File "X:\Python31\lib\site-packages\scipy\weave\converters.py", line 5, in from . import c_spec File "X:\Python31\lib\site-packages\scipy\weave\c_spec.py", line 380, in import os, c_spec # yes, I import myself to find out my __file__ location. ImportError: No module named c_spec ====================================================================== FAIL: test_extrema03 (test_ndimage.TestNdimage) extrema 3 ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\scipy\ndimage\tests\test_ndimage.py", line 3153, in test_extrema03 self.assertTrue(numpy.all(output1[2] == output4)) AssertionError: False is not True ====================================================================== FAIL: test_lorentz (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\scipy\odr\tests\test_odr.py", line 292, in test_lorentz 3.7798193600109009e+00]), File "X:\Python31\lib\site-packages\numpy\testing\utils.py", line 774, in assert_array_almost_equal header='Arrays are not almost equal') File "X:\Python31\lib\site-packages\numpy\testing\utils.py", line 618, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.00000000e+03, 1.00000000e-01, 3.80000000e+00]) y: array([ 1.43067808e+03, 1.33905090e-01, 3.77981936e+00]) 
====================================================================== FAIL: test_multi (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\scipy\odr\tests\test_odr.py", line 188, in test_multi 0.5101147161764654, 0.5173902330489161]), File "X:\Python31\lib\site-packages\numpy\testing\utils.py", line 774, in assert_array_almost_equal header='Arrays are not almost equal') File "X:\Python31\lib\site-packages\numpy\testing\utils.py", line 618, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4. , 2. , 7. , 0.4, 0.5]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ====================================================================== FAIL: test_pearson (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\scipy\odr\tests\test_odr.py", line 235, in test_pearson np.array([ 5.4767400299231674, -0.4796082367610305]), File "X:\Python31\lib\site-packages\numpy\testing\utils.py", line 774, in assert_array_almost_equal header='Arrays are not almost equal') File "X:\Python31\lib\site-packages\numpy\testing\utils.py", line 618, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1., 1.]) y: array([ 5.47674003, -0.47960824]) ====================================================================== FAIL: test_pdist_minkowski_3_2_iris_float32 (test_distance.TestPdist) Tests pdist(X, 'minkowski') on iris data. 
(float32) ---------------------------------------------------------------------- Traceback (most recent call last): File "X:\Python31\lib\site-packages\scipy\spatial\tests\test_distance.py", line 837, in test_pdist_minkowski_3_2_iris_float32 self.assertTrue(within_tol(Y_test1, Y_right, eps)) AssertionError: False is not True ---------------------------------------------------------------------- Ran 4203 tests in 39.045s FAILED (KNOWNFAIL=8, SKIP=41, errors=2, failures=5) -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy_py3_msvc9.patch URL: From fperez.net at gmail.com Tue Sep 14 19:40:31 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 14 Sep 2010 16:40:31 -0700 Subject: [SciPy-Dev] Scipy.org down, again Message-ID: f From david at silveregg.co.jp Tue Sep 14 22:39:08 2010 From: david at silveregg.co.jp (David) Date: Wed, 15 Sep 2010 11:39:08 +0900 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: <4C8EF8DE.20308@uci.edu> References: <4C8EF8DE.20308@uci.edu> Message-ID: <4C9031CC.9000408@silveregg.co.jp> On 09/14/2010 01:23 PM, Christoph Gohlke wrote: > > > On 9/11/2010 6:10 PM, Pauli Virtanen wrote: >> Hi, >> >> I flushed the Python 3 branch containing work from me and David to SVN >> trunk. Scipy now builds with Python 3, and all tests pass, except for >> scipy.weave which still needs to be ported. >> >> More testing is welcome. I suspect Scipy's test suite does not cover all >> of the code, so there might be some work left to do. You'll probably need >> the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's >> distutils that are not in 1.5.0. >> >> Pauli >> > > > Thank you and David for making this available. I should note that Pauli really did most of the work. > Will numscons eventually be ported to Python 3? Not by me at least. The first obvious issue is that scons is not python 3 compatible, and I don't know the timeline there. 
Also, I am spending most (all) of my "free" time on bento, my distutils/setuptools/etc... alternative, which should make something like numscons much less relevant (with the same advantages, though). I don't have time to work on numscons at the same time, except for the obvious bugs, cheers, David From jjstickel at vcn.com Wed Sep 15 15:34:14 2010 From: jjstickel at vcn.com (Jonathan Stickel) Date: Wed, 15 Sep 2010 13:34:14 -0600 Subject: [SciPy-Dev] Scipy.org down, again In-Reply-To: References: Message-ID: <4C911FB6.5050903@vcn.com> I notice this happens a lot, too. Interestingly, and fortunately for me, the docs seem to be much less affected: http://docs.scipy.org/ So I can at least reach the documentation. Jonathan On 9/15/10 11:00 , scipy-dev-request at scipy.org wrote: > Date: Tue, 14 Sep 2010 16:40:31 -0700 > From: Fernando Perez > Subject: [SciPy-Dev] Scipy.org down, again > To: SciPy Developers List From ralf.gommers at googlemail.com Thu Sep 16 09:05:36 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 16 Sep 2010 21:05:36 +0800 Subject: [SciPy-Dev] Scipy on Python 3 In-Reply-To: References: Message-ID: On Sun, Sep 12, 2010 at 9:10 AM, Pauli Virtanen wrote: > Hi, > > I flushed the Python 3 branch containing work from me and David to SVN > trunk. Scipy now builds with Python 3, and all tests pass, except for > scipy.weave which still needs to be ported. > Thanks a lot to both of you for all the work! > > More testing is welcome. I suspect Scipy's test suite does not cover all > of the code, so there might be some work left to do. You'll probably need > the latest 1.5.x branch Numpy to build, due to some fixes in Numpy's > distutils that are not in 1.5.0. > > Tested on OS X 10.6, looks good.
One test error from weave, and the other one is known as well: ====================================================================== ERROR: Failure: ImportError (cannot import name c_spec) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/nose-3.0.0.dev-py3.1.egg/nose/failure.py", line 37, in runTest reraise(self.exc_class, self.exc_val, self.tb) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/nose-3.0.0.dev-py3.1.egg/nose/_3.py", line 7, in reraise raise exc_class(exc_val).with_traceback(tb) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/nose-3.0.0.dev-py3.1.egg/nose/loader.py", line 389, in loadTestsFromName addr.filename, addr.module) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/nose-3.0.0.dev-py3.1.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/nose-3.0.0.dev-py3.1.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/scipy/weave/__init__.py", line 13, in from .inline_tools import inline File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/scipy/weave/inline_tools.py", line 5, in from . import ext_tools File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/scipy/weave/ext_tools.py", line 7, in from . import converters File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/scipy/weave/converters.py", line 5, in from . import c_spec File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/scipy/weave/c_spec.py", line 381, in from .. 
import c_spec # yes, I import myself to find out my __file__ location. ImportError: cannot import name c_spec ====================================================================== FAIL: test_pdist_minkowski_3_2_iris_float32 (test_distance.TestPdist) Tests pdist(X, 'minkowski') on iris data. (float32) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/scipy/spatial/tests/test_distance.py", line 837, in test_pdist_minkowski_3_2_iris_float32 self.assertTrue(within_tol(Y_test1, Y_right, eps)) AssertionError: False is not True ---------------------------------------------------------------------- Ran 4614 tests in 88.827s FAILED (KNOWNFAIL=12, SKIP=41, errors=1, failures=1) One other warning from 2to3 besides the ones from weave: RefactoringTool: ### In file /Users/rgommers/Code/scipy/build/py3k/scipy/interpolate/fitpack.py ### RefactoringTool: Line 1087: cannot convert map(None, ...) with multiple arguments because map() now truncates to the shortest sequence Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From martinez at isg.cs.uni-magdeburg.de Fri Sep 17 08:40:01 2010 From: martinez at isg.cs.uni-magdeburg.de (Janick Martinez Esturo) Date: Fri, 17 Sep 2010 14:40:01 +0200 Subject: [SciPy-Dev] dopri ODE integrator solout integration constraints Message-ID: <4C9361A1.10809@isg.cs.uni-magdeburg.de> Hi, I need to be able to observe and possibly stop an ODE integration using the scipy.integrate.ode dopri5 and dopri853 integrators. However, the supporting call of the _solout method is completely disabled as the fortran iout parameter is set to zero inside the wrapper code. 
I therefore attached a patch that does not break the existing interface and enables the usage of a custom _solout method by overwriting the method in a derived class as follows: from scipy.integrate.ode import ode, dopri5, IntegratorBase class mydopri5(dopri5): name = 'mydopri5' def _solout(self, nr,xold,x,y,nd,icomp,con): print("Custom solout with variable access ", x, y) return(1) #Note: return -1 to break def reset(self,n,has_jac): super(mydopri5, self).reset(n, has_jac) self._iout = 1 self.call_args = [self.rtol,self.atol,self._solout,self._iout,self.work,self.iwork] if mydopri5.runner: IntegratorBase.integrator_classes.append(mydopri5) y0, t0 = [1.0, 2.0], 0 def f(t, y): return [y[0] + y[1], -y[1]**2] r = ode(f).set_integrator('mydopri5') r.set_initial_value(y0, t0) r.integrate(10) print(r.t, r.y) Here I derived from dopri5, overwrote the reset method to be called with iout=1, and implemented a simple _solout callback. I'm using this code in a production environment and just wanted to know if there is a chance that the patch might get included in a future release of scipy, or what would be necessary to do so? I'm not experienced with Fortran libs, but it might be interesting to make the Fortran CONTD5 function also available to be called from the _solout method in order to interpolate intermediate values correctly, but that would require a little bit more effort, I guess. Any comments about this would be nice. Greets Janick -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: scipy-dopri_solout.patch URL: From jtravs at gmail.com Fri Sep 17 10:56:00 2010 From: jtravs at gmail.com (John Travers) Date: Fri, 17 Sep 2010 15:56:00 +0100 Subject: [SciPy-Dev] dopri ODE integrator solout integration constraints In-Reply-To: <4C9361A1.10809@isg.cs.uni-magdeburg.de> References: <4C9361A1.10809@isg.cs.uni-magdeburg.de> Message-ID: On Fri, Sep 17, 2010 at 1:40 PM, Janick Martinez Esturo wrote: > Hi, > > I need to be able to observe and possibly stop an ODE integration using > the scipy.integrate.ode dopri5 and dopri853 integrators. However, the > supporting call of the _solout method is completely disabled as the > fortran iout parameter is set to zero inside the wrapper code. I therefore > attached a patch that does not break the existing interface and enables the > usage of a costum _solout method by overwriting the method in a derived class > as follows: Excellent. I wrote the dopri5 wrappers and had adding this stuff on my todo list. Unfortunately, the time I get to work on scipy is currently vanishingly small, and will be for the next month or so. After that, if no one else beats me, then I will look at your patch and try to get this integrated. > I'm using this code in a productive environment and just wanted to know if > there is a chance that the patch might get included in a future release of > scipy or what would be necessary to to so? I'm not experienced with Fortran > libs, but it might be interesting to make the Fortran CONTD5 function > also avaible to be called from the _solout method in order to interpolate > intermediate values correctly, but that would require a little bit more efford > i guess. Yes, I use this a lot in my fortran code and it would be a nice addition. Thanks for the patch. 
John From cimrman3 at ntc.zcu.cz Mon Sep 20 04:26:58 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 20 Sep 2010 10:26:58 +0200 (CEST) Subject: [SciPy-Dev] utility for composing sparse matrices Message-ID: FYI: I have created a function that creates a sparse matrix from sparse (or dense) blocks, somewhat in the spirit of np.r_ and friends, see the example below. Shall we add it to scipy.sparse? It may help to go around some of the fancy-indexing limitations. r. In [1]: from compose_sparse import compose_sparse In [2]: %doctest_mode *** Pasting of code with ">>>" or "..." has been enabled. Exception reporting mode: Plain Doctest mode is: ON >>> >>> import scipy.sparse as sps >>> >>> A = sps.csr_matrix([[1, 0], [0, 1]]) >>> >>> B = sps.coo_matrix([[1, 1]]) >>> >>> K = compose_sparse([[A, B.T], [B, 0]]) >>> >>> print K.todense() --> print(K.todense()) [[1 0 1] [0 1 1] [1 1 0]] -------------- next part -------------- A non-text attachment was scrubbed... Name: compose_sparse.py Type: text/x-python Size: 2835 bytes Desc: URL: From pav at iki.fi Mon Sep 20 05:25:25 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 20 Sep 2010 09:25:25 +0000 (UTC) Subject: [SciPy-Dev] utility for composing sparse matrices References: Message-ID: Mon, 20 Sep 2010 10:26:58 +0200, Robert Cimrman wrote: > FYI: I have created a function that creates a sparse matrix from sparse > (or dense) blocks, somewhat in the spirit of np.r_ and friends, see the > example below. > > Shall we add it to scipy.sparse? It may help to go around some of the > fancy-indexing limitations. Add some tests, and I see no problem in putting it in. Also, I'd check that cases such as compose_sparse([A, B]) compose_sparse(A) are handled sensibly. Also, the equivalent function in `Numpy` is called `bmat`. Should the name here reflect that? Also, we would need to think if it is possible to get the sparse matrices in Scipy to play along with functions in Numpy. (E.g.
np.bmat() and np.linalg.dot() doing the correct stuff for sparse matrices.) Pauli From cimrman3 at ntc.zcu.cz Mon Sep 20 06:25:41 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 20 Sep 2010 12:25:41 +0200 (CEST) Subject: [SciPy-Dev] utility for composing sparse matrices In-Reply-To: References: Message-ID: On Mon, 20 Sep 2010, Pauli Virtanen wrote: > Mon, 20 Sep 2010 10:26:58 +0200, Robert Cimrman wrote: >> FYI: I have created a function that creates a sparse matrix from sparse >> (or dense) blocks, somewhat in the spirit of np.r_ and friends, see the >> example below. >> >> Shall we add it to scipy.sparse? It may help to go around some of the >> fancy-indexing limitations. > > Add some tests, and I see no problem in putting it in. Also, I'd check > that cases such as > > compose_sparse([A, B]) > compose_sparse(A) > > are handled sensibly. > > Also, the equivalent function in `Numpy` is called `bmat`. Should the > name here reflect that. I have been using numpy for so long, unaware of bmat - call me an exclusive array user :). It's sensible to make it work with 0D and 1D arrays, yes. > Also, we would need to think if it is possible to get the sparse matrices > in Scipy to play along with functions in Numpy. (E.g. np.bmat() and > np.linalg.dot() doing the correct stuff for sparse matrices.) I would love to see this. I guess some basic sparse matrix support would have to be in Numpy directly(?). A good candidate is IMHO the COO format - it is simple, can be easily converted and has the 'data' array. I will create an enhancement ticket with the function renamed to bmat, and a test, then. r. From eric.moscardi at sophia.inria.fr Mon Sep 20 07:35:48 2010 From: eric.moscardi at sophia.inria.fr (moscardi) Date: Mon, 20 Sep 2010 13:35:48 +0200 Subject: [SciPy-Dev] Error in scipy.ndimage documentation In-Reply-To: References: Message-ID: <7578D643-CD3C-4498-ADE1-96BD98A31623@sophia.inria.fr> Nobody has tried to extend ndimage with C...?
On Sep 10, 2010, at 7:18 PM, Matthieu Brucher wrote: > Hi, > > The following function is referenced on the scipy.ndimage page > (http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#ndimage-ccallbacks) > : > > static int > _shift_function(int *output_coordinates, double* input_coordinates, > int output_rank, int input_rank, void *callback_data) > { > int ii; > /* get the shift from the callback data pointer: */ > double shift = *(double*)callback_data; > /* calculate the coordinates: */ > for(ii = 0; ii < irank; ii++) > icoor[ii] = ocoor[ii] - shift; > /* return OK status: */ > return 1; > } > > It obviously cannot be compiled (irank -> input_rank, icoor -> > input_coordinates, ocoor -> output_coordinates) > > Matthieu > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev Eric MOSCARDI INRIA - Virtual Plants CIRAD, Avenue Agropolis 34398 Montpellier Cedex 5, France 04 67 61 58 00 (ask number 60 09) email : eric.moscardi at sophia.inria.fr From josef.pktd at gmail.com Mon Sep 20 09:26:16 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 20 Sep 2010 09:26:16 -0400 Subject: [SciPy-Dev] utility for composing sparse matrices In-Reply-To: References: Message-ID: On Mon, Sep 20, 2010 at 6:25 AM, Robert Cimrman wrote: > On Mon, 20 Sep 2010, Pauli Virtanen wrote: > >> Mon, 20 Sep 2010 10:26:58 +0200, Robert Cimrman wrote: >>> FYI: I have created a function that creates a sparse matrix from sparse >>> (or dense) blocks, somewhat in the spirit of np.r_ and friends, see the >>> example below. >>> >>> Shall we add it to scipy.sparse? It may help to go around some of the >>> fancy-indexing limitations. >> >> Add some tests, and I see no problem in putting it in. Also, I'd check >> that cases such as >> >> ? ? ? 
compose_sparse([A, B]) >> ? ? ? compose_sparse(A) >> >> are handled sensibly. >> >> Also, the equivalent function in `Numpy` is called `bmat`. Should the >> name here reflect that. > > I have been using numpy so long, unaware of bmat - call me exlusive array > user :). It's sensible to make it work with 0D and 1D arrays, yes. I never heard of it either, but it looks useful and needs some advertising. The equivalent of scipy.linalg.block_diag would also be very useful for building sparse matrices (in stats). Josef > >> Also, we would need to think if it is possible to get the sparse matrices >> in Scipy to play along with functions in Numpy. (E.g. np.bmat() and >> np.linalg.dot() doing the correct stuff for sparse matrices.) > > I would love to see this. I guess some basic sparse matrix support would > have to be in Numpy directly(?). A good candidate is IMHO the COO format - > it is simple, can be easily converted and has the 'data' array. > > I will create an enhancement ticket with the function renamed to bmat, and > a test, then. > > r. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From eavventi at yahoo.it Mon Sep 20 12:47:47 2010 From: eavventi at yahoo.it (Enrico Avventi) Date: Mon, 20 Sep 2010 18:47:47 +0200 Subject: [SciPy-Dev] PATCH: enhancing fmin_bfgs to handle functions that are not defined everywhere Message-ID: Hello, I would like to propose a patch for enhancing the optimization module. The svn diff output is attached. Let me know what you think about it. *Rationale:* Convergence theory for a large class of unconstrained optimization algorithms (descent methods) does not require the function to be defined everywhere. In fact, you just need a function that is defined in an open convex set and whose sublevel set {x : f(x) <= f(x0)} is compact, where x0 is the starting point.
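The compactness argument above can be sketched in a few lines of code. The sketch below is not the attached patch, just a minimal toy illustration (all names in it are made up for this example): a 1-D steepest-descent loop with a backtracking Armijo line search that treats points outside the domain as +inf, applied to the phi1 = -100 x - log(1-x) test function used later in this message.

```python
import numpy as np

def phi(x):
    # Test function from this message: -100*x - log(1 - x), defined only
    # for x < 1; treated as +inf outside its domain, per the convention above.
    if x >= 1.0:
        return np.inf
    return -100.0 * x - np.log(1.0 - x)

def dphi(x):
    # Derivative where defined; None outside the domain (the patch's convention).
    if x >= 1.0:
        return None
    return -100.0 + 1.0 / (1.0 - x)

def minimize_1d(f, df, x0, rho=0.5, c=1e-4, gtol=1e-8, maxit=1000):
    """Steepest descent with a backtracking (Armijo) line search that
    simply shrinks the step whenever the trial point is infeasible."""
    x = x0
    for _ in range(maxit):
        g = df(x)
        if abs(g) < gtol:
            break
        d, a = -g, 1.0
        # Reject non-finite trial points (outside the domain, or nan)
        # and otherwise enforce the Armijo sufficient-decrease condition.
        while (not np.isfinite(f(x + a * d))
               or f(x + a * d) > f(x) + c * a * g * d):
            a *= rho
        x = x + a * d
    return x

xmin = minimize_1d(phi, dphi, 0.0)
# Analytically, dphi(x) = 0 gives 1/(1 - x) = 100, i.e. x = 0.99.
```

Because the sublevel set {x : phi(x) <= phi(0)} is a compact subset of (-inf, 1), every backtracking step stays feasible and the iteration converges even though phi is undefined for x >= 1.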
It would be helpful to let the optimization routines handle the most general case for which they converge. *Assumptions:* I assumed that the function tends to (plus) infinity as it approaches its domain boundary. This is reasonable in the sense that if not true the function either tends to minus infinity or can be extended to a larger set. The function is assumed to be infinite outside its domain, with derivative, if any, not defined (None). *Implementation:* The line search method is the only block of the optimization routine(s) that needs to be changed significantly in order to handle such functions. In particular, I modified the line search method (line_search_wolfe2) to call a new zoom algorithm (_barrier_zoom) whenever one of the bracketing points leads to an infeasible point. _barrier_zoom uses linear-logarithmic and quadratic-logarithmic interpolants instead of quadratic and cubic ones in order to generate new candidate steps. Note that the interpolation strategy may not be ideal and could certainly be improved. All descent algorithms can be easily adapted to use such a line search; here I started with fmin_bfgs. To adapt it, I added a new exception (DomainError) and modified phi and derphi in line_search_wolfe1 to raise it when they are evaluated outside the domain. Whenever that happens, fmin_bfgs catches the exception and calls line_search_wolfe2. Additionally, line_search_wolfe2 will call the original _zoom method if the bracketing phase leads to finite-valued points. This ensures that for problems that are defined everywhere the old code-path is always taken. *Testing:* I added two functions for testing scalar_search_wolfe2: phi1(x) = -100 x - log(1-x) and phi2(x) = -100 x + (1-x)^-k. All old and new tests pass. regards, Enrico -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: svn_diff Type: application/octet-stream Size: 12636 bytes Desc: not available URL: From pav at iki.fi Mon Sep 20 18:34:29 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 20 Sep 2010 22:34:29 +0000 (UTC) Subject: [SciPy-Dev] PATCH: enhancing fmin_bfgs to handle functions that are not defined everywhere References: Message-ID: Hi, Mon, 20 Sep 2010 18:47:47 +0200, Enrico Avventi wrote: > I would like to propose a patch for enhancing the optimization module. > the svn diff output is attached. let me know what do you think about it. Thanks, the patch looks pretty good to me. I'll give it a whirl and push it to SVN after testing it a bit more. Pauli From cimrman3 at ntc.zcu.cz Tue Sep 21 02:45:36 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 21 Sep 2010 08:45:36 +0200 (CEST) Subject: [SciPy-Dev] utility for composing sparse matrices In-Reply-To: References: Message-ID: On Mon, 20 Sep 2010, josef.pktd at gmail.com wrote: > On Mon, Sep 20, 2010 at 6:25 AM, Robert Cimrman wrote: >> On Mon, 20 Sep 2010, Pauli Virtanen wrote: >>> Mon, 20 Sep 2010 10:26:58 +0200, Robert Cimrman wrote: >>>> FYI: I have created a function that creates a sparse matrix from sparse >>>> (or dense) blocks, somewhat in the spirit of np.r_ and friends, see the >>>> example below. >>>> >>>> Shall we add it to scipy.sparse? It may help to go around some of the >>>> fancy-indexing limitations. >>> >>> Add some tests, and I see no problem in putting it in. Also, I'd check >>> that cases such as >>> >>> ? ? ? compose_sparse([A, B]) >>> ? ? ? compose_sparse(A) >>> >>> are handled sensibly. >>> >>> Also, the equivalent function in `Numpy` is called `bmat`. Should the >>> name here reflect that. >> >> I have been using numpy so long, unaware of bmat - call me exlusive array >> user :). It's sensible to make it work with 0D and 1D arrays, yes. > > I never heard of it either, but it looks useful and needs some advertising. 
There is even bmat() in scipy.sparse, which I found just after writing compose_sparse() - the two functions use a slightly different strategy, so it might be interesting to compare them performance-wise. As a side note, I did perform a google search first, and bmat() did not show, so better advertising is needed. Or I just cannot use google :}. > The equivalent of scipy.linalg.block_diag would also be very useful > for building sparse matrices (in stats). At least I have created [1] for constructing a sparse block diagonal matrix (do not tell me it's already there :)). [1] http://projects.scipy.org/scipy/ticket/1284 r. >>> Also, we would need to think if it is possible to get the sparse matrices >>> in Scipy to play along with functions in Numpy. (E.g. np.bmat() and >>> np.linalg.dot() doing the correct stuff for sparse matrices.) >> >> I would love to see this. I guess some basic sparse matrix support would >> have to be in Numpy directly(?). A good candidate is IMHO the COO format - >> it is simple, can be easily converted and has the 'data' array. >> >> I will create an enhancement ticket with the function renamed to bmat, and >> a test, then. >> >> r. >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From millman at berkeley.edu Tue Sep 21 03:14:21 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 21 Sep 2010 00:14:21 -0700 Subject: [SciPy-Dev] [ANN] SciPy India 2010 Call for Presentations Message-ID: ========================== SciPy 2010 Call for Papers ========================== The second `SciPy India Conference `_ will be held from December 13th to 18th, 2010 at `IIIT-Hyderabad `_.
At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development. The conference is followed by two days of tutorials and a code sprint, during which community experts provide training on several scientific Python packages. We invite you to take part by submitting a talk abstract on the conference website at: http://scipy.in Talk/Paper Submission ========================== We solicit talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics regarding scientific computing using Python, including applications, teaching, development and research. Papers are included in the peer-reviewed conference proceedings, published online. Please note that submissions primarily aimed at the promotion of a commercial product or service will not be considered. Important Dates ========================== Monday, Oct. 11: Abstracts Due Saturday, Oct. 30: Schedule announced Tuesday, Nov. 30: Proceedings paper submission due Monday-Tuesday, Dec. 13-14: Conference Wednesday-Friday, Dec. 15-17: Tutorials/Sprints Saturday, Dec. 18: Sprints Organizers ========================== * Jarrod Millman, Neuroscience Institute, UC Berkeley, USA (Conference Co-Chair) * Prabhu Ramachandran, Department of Aerospace Engineering, IIT Bombay, India (Conference Co-Chair) * FOSSEE Team From amachnik at gmail.com Tue Sep 21 07:54:45 2010 From: amachnik at gmail.com (Adam Machnik) Date: Tue, 21 Sep 2010 13:54:45 +0200 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 Message-ID: Hello, I have downloaded from SVN the latest `griddata` implementation for working in N-D with irregular grids (scipy v0.9.0dev6812). It works great for 2D problems.
However for 3D (and more) I have a problem: the NaN values always appear in the same position, and it does not depend on the scalar field being interpolated. Here is a small example with a simple 3x3x3 regular grid and a constant scalar field of 1.0 values that is interpolated at the input data points. Example: ****************************** from numpy import * from scipy.interpolate import * grid_x, grid_y, grid_z = mgrid[0:1:3j, 0:1:3j, 0:1:3j] pts = [[ 0.0 , 0.0 , 0.0 ], [ 0.0 , 0.0 , 0.5 ], [ 0.0 , 0.0 , 1.0 ], [ 0.0 , 0.5 , 0.0 ], [ 0.0 , 0.5 , 0.5 ], [ 0.0 , 0.5 , 1.0 ], [ 0.0 , 1.0 , 0.0 ], [ 0.0 , 1.0 , 0.5 ], [ 0.0 , 1.0 , 1.0 ], [ 0.5 , 0.0 , 0.0 ], [ 0.5 , 0.0 , 0.5 ], [ 0.5 , 0.0 , 1.0 ], [ 0.5 , 0.5 , 0.0 ], [ 0.5 , 0.5 , 0.5 ], [ 0.5 , 0.5 , 1.0 ], [ 0.5 , 1.0 , 0.0 ], [ 0.5 , 1.0 , 0.5 ], [ 0.5 , 1.0 , 1.0 ], [ 1.0 , 0.0 , 0.0 ], [ 1.0 , 0.0 , 0.5 ], [ 1.0 , 0.0 , 1.0 ], [ 1.0 , 0.5 , 0.0 ], [ 1.0 , 0.5 , 0.5 ], [ 1.0 , 0.5 , 1.0 ], [ 1.0 , 1.0 , 0.0 ], [ 1.0 , 1.0 , 0.5 ], [ 1.0 , 1.0 , 1.0 ]] vls = ones(27) points = array(pts).astype('float64') values = array(vls).astype('float64') grid_z1 = griddata(points, values, (grid_x, grid_y, grid_z), method='linear', fill_value=0) print 'grid_z1=', grid_z1 *********************** I obtain the following result: grid_z1= [[[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] [[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] [[ 1. 1. 1.] [ 1. 1. nan] [ 1. 1. 1.]]] Thanks, Adam From pav at iki.fi Tue Sep 21 08:03:46 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 21 Sep 2010 12:03:46 +0000 (UTC) Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 References: Message-ID: Tue, 21 Sep 2010 13:54:45 +0200, Adam Machnik wrote: > I have downloaded from SVN the latest `griddata` implementation for > working in N-D > with irregular grids (scipy v0.9.0dev6812). It works great for 2D > problems.
However for 3D (and more) I have a problem: the NaN values > appears always in the same position, and it does not depend on the > scalar field being interpolated. Here is a small example of simple 3x3x3 > regular grid and a constant scalar field of 1.0 values, that is > interpolated at the input data points. Works for me. grid_z1= [[[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] [[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] [[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]]] Do you get the same triangulation? >>> from scipy.spatial import Delaunay >>> Delaunay(points).vertices array([[ 1, 12, 10, 13], [ 1, 9, 12, 10], [ 1, 4, 10, 13], [ 1, 4, 12, 13], [ 1, 3, 4, 12], [ 1, 9, 10, 0], [ 1, 9, 12, 0], [ 1, 3, 12, 0], [ 1, 3, 4, 0], [21, 12, 10, 13], [21, 22, 12, 13], [21, 9, 12, 10], [21, 22, 10, 13], [21, 19, 22, 10], [21, 9, 10, 18], [21, 9, 12, 18], [21, 19, 10, 18], [21, 19, 22, 18], [15, 4, 12, 13], [15, 16, 12, 13], [15, 3, 4, 6], [15, 16, 4, 13], [15, 7, 16, 4], [15, 3, 4, 12], [15, 7, 4, 6], [15, 7, 16, 6], [25, 22, 12, 13], [25, 16, 12, 13], [25, 21, 22, 24], [25, 16, 22, 13], [25, 15, 16, 24], [25, 21, 12, 24], [25, 21, 22, 12], [25, 15, 12, 24], [25, 15, 16, 12], [23, 16, 22, 13], [23, 25, 16, 26], [23, 14, 16, 13], [23, 17, 14, 16], [23, 25, 16, 22], [23, 17, 16, 26], [23, 17, 14, 26], [23, 22, 10, 13], [23, 14, 10, 13], [23, 11, 14, 20], [23, 19, 10, 20], [23, 19, 22, 10], [23, 11, 10, 20], [23, 11, 14, 10], [ 5, 16, 4, 13], [ 5, 14, 16, 13], [ 5, 7, 16, 8], [ 5, 7, 16, 4], [ 5, 17, 16, 8], [ 5, 17, 14, 16], [ 5, 4, 10, 13], [ 5, 11, 14, 10], [ 5, 14, 10, 13], [ 5, 1, 10, 2], [ 5, 1, 4, 10], [ 5, 11, 10, 2]]) From amachnik at gmail.com Tue Sep 21 08:21:09 2010 From: amachnik at gmail.com (Adam Machnik) Date: Tue, 21 Sep 2010 14:21:09 +0200 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: The triangulation seems not exactly the same: [[23 16 22 13] [23 25 16 22] [23 14 22 13] [23 14 16 13] [23 17 14 
16] [23 25 22 26] [23 25 16 26] [23 17 16 26] [23 17 14 26] [19 14 22 13] [19 10 22 13] [19 23 14 20] [19 10 14 13] [19 11 10 14] [19 23 22 20] [19 23 14 22] [19 11 14 20] [19 11 10 20] [21 10 22 13] [21 19 10 18] [21 12 10 13] [21 9 12 10] [21 19 10 22] [21 9 10 18] [21 9 12 18] [21 16 22 13] [21 15 12 16] [21 12 16 13] [21 25 16 22] [21 25 16 24] [21 15 16 24] [21 15 12 24] [ 1 12 10 13] [ 1 4 10 13] [ 1 9 12 0] [ 1 4 12 13] [ 1 3 4 12] [ 1 9 12 10] [ 1 3 12 0] [ 1 3 4 0] [ 5 10 14 13] [ 5 4 10 13] [ 5 11 10 2] [ 5 4 14 13] [ 5 1 4 2] [ 5 11 10 14] [ 5 1 10 2] [ 5 1 4 10] [ 7 12 16 13] [ 7 4 12 13] [ 7 15 12 6] [ 7 15 12 16] [ 7 3 12 6] [ 7 3 4 12] [ 7 14 16 13] [ 7 4 14 13] [ 7 17 14 8] [ 7 17 14 16] [ 7 5 14 8] [ 7 5 4 14]] On Tue, Sep 21, 2010 at 2:03 PM, Pauli Virtanen wrote: > Tue, 21 Sep 2010 13:54:45 +0200, Adam Machnik wrote: >> I have downloaded from SVN the latest `griddata` implementation for >> working in N-D >> with irregular grids (scipy v0.9.0dev6812). It works great for 2D >> problems. However for 3D (and more) I have a problem: the NaN values >> appears always in the same position, and it does not depend on the >> scalar field being interpolated. Here is a small example of simple 3x3x3 >> regular grid and a constant scalar field of 1.0 values, that is >> interpolated at the input data points. > > Works for me. > > grid_z1= [[[ 1. ?1. ?1.] > ?[ 1. ?1. ?1.] > ?[ 1. ?1. ?1.]] > ?[[ 1. ?1. ?1.] > ?[ 1. ?1. ?1.] > ?[ 1. ?1. ?1.]] > ?[[ 1. ?1. ?1.] > ?[ 1. ?1. ?1.] > ?[ 1. ?1. ?1.]]] > > Do you get the same triangulation? > >>>> from scipy.spatial import Delaunay >>>> Delaunay(points).vertices > array([[ 1, 12, 10, 13], > ? ? ? [ 1, ?9, 12, 10], > ? ? ? [ 1, ?4, 10, 13], > ? ? ? [ 1, ?4, 12, 13], > ? ? ? [ 1, ?3, ?4, 12], > ? ? ? [ 1, ?9, 10, ?0], > ? ? ? [ 1, ?9, 12, ?0], > ? ? ? [ 1, ?3, 12, ?0], > ? ? ? [ 1, ?3, ?4, ?0], > ? ? ? [21, 12, 10, 13], > ? ? ? [21, 22, 12, 13], > ? ? ? [21, ?9, 12, 10], > ? ? ? [21, 22, 10, 13], > ? ? ? 
[21, 19, 22, 10], > ? ? ? [21, ?9, 10, 18], > ? ? ? [21, ?9, 12, 18], > ? ? ? [21, 19, 10, 18], > ? ? ? [21, 19, 22, 18], > ? ? ? [15, ?4, 12, 13], > ? ? ? [15, 16, 12, 13], > ? ? ? [15, ?3, ?4, ?6], > ? ? ? [15, 16, ?4, 13], > ? ? ? [15, ?7, 16, ?4], > ? ? ? [15, ?3, ?4, 12], > ? ? ? [15, ?7, ?4, ?6], > ? ? ? [15, ?7, 16, ?6], > ? ? ? [25, 22, 12, 13], > ? ? ? [25, 16, 12, 13], > ? ? ? [25, 21, 22, 24], > ? ? ? [25, 16, 22, 13], > ? ? ? [25, 15, 16, 24], > ? ? ? [25, 21, 12, 24], > ? ? ? [25, 21, 22, 12], > ? ? ? [25, 15, 12, 24], > ? ? ? [25, 15, 16, 12], > ? ? ? [23, 16, 22, 13], > ? ? ? [23, 25, 16, 26], > ? ? ? [23, 14, 16, 13], > ? ? ? [23, 17, 14, 16], > ? ? ? [23, 25, 16, 22], > ? ? ? [23, 17, 16, 26], > ? ? ? [23, 17, 14, 26], > ? ? ? [23, 22, 10, 13], > ? ? ? [23, 14, 10, 13], > ? ? ? [23, 11, 14, 20], > ? ? ? [23, 19, 10, 20], > ? ? ? [23, 19, 22, 10], > ? ? ? [23, 11, 10, 20], > ? ? ? [23, 11, 14, 10], > ? ? ? [ 5, 16, ?4, 13], > ? ? ? [ 5, 14, 16, 13], > ? ? ? [ 5, ?7, 16, ?8], > ? ? ? [ 5, ?7, 16, ?4], > ? ? ? [ 5, 17, 16, ?8], > ? ? ? [ 5, 17, 14, 16], > ? ? ? [ 5, ?4, 10, 13], > ? ? ? [ 5, 11, 14, 10], > ? ? ? [ 5, 14, 10, 13], > ? ? ? [ 5, ?1, 10, ?2], > ? ? ? [ 5, ?1, ?4, 10], > ? ? ? [ 5, 11, 10, ?2]]) > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From amachnik at gmail.com Tue Sep 21 10:17:55 2010 From: amachnik at gmail.com (Adam Machnik) Date: Tue, 21 Sep 2010 16:17:55 +0200 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: Hello, I am concerned in the difference in triangulation. I think that on such a simple grid there should be no difference from qhull results. FYI: I have tested the scipy.spatial module on my workstation. There is one test that fails. However it was already reported in the mailinglist, and it does not seem to be the problem. 
What can I do else ? Adam >>> import scipy >>> scipy.spatial.test() Running unit tests for scipy.spatial NumPy version 1.5.0 NumPy is installed in /UTIL/PYTHON26/lib/python2.6/site-packages/numpy SciPy version 0.9.0.dev6812 SciPy is installed in /UTIL/PYTHON26/lib/python2.6/site-packages/scipy Python version 2.6.4 (r264:75706, Nov 9 2009, 15:41:41) [GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] nose version 0.11.1 ............................................................................................................................F.................................................................................................................................................. ====================================================================== FAIL: Tests pdist(X, 'minkowski') on iris data. (float32) ---------------------------------------------------------------------- Traceback (most recent call last): File "/UTIL/PYTHON26/lib/python2.6/site-packages/scipy/spatial/tests/test_distance.py", line 838, in test_pdist_minkowski_3_2_iris_float32 self.assertTrue(within_tol(Y_test1, Y_right, eps)) AssertionError: """Fail the test unless the expression is true.""" >> if not False: raise self.failureException, None ---------------------------------------------------------------------- Ran 271 tests in 15.020s FAILED (failures=1) >>> On Tue, Sep 21, 2010 at 2:21 PM, Adam Machnik wrote: > The triangulation seems not exactly the same: > > [[23 16 22 13] > ?[23 25 16 22] > ?[23 14 22 13] > ?[23 14 16 13] > ?[23 17 14 16] > ?[23 25 22 26] > ?[23 25 16 26] > ?[23 17 16 26] > ?[23 17 14 26] > ?[19 14 22 13] > ?[19 10 22 13] > ?[19 23 14 20] > ?[19 10 14 13] > ?[19 11 10 14] > ?[19 23 22 20] > ?[19 23 14 22] > ?[19 11 14 20] > ?[19 11 10 20] > ?[21 10 22 13] > ?[21 19 10 18] > ?[21 12 10 13] > ?[21 ?9 12 10] > ?[21 19 10 22] > ?[21 ?9 10 18] > ?[21 ?9 12 18] > ?[21 16 22 13] > ?[21 15 12 16] > ?[21 12 16 13] > ?[21 25 16 22] > ?[21 25 16 24] > ?[21 15 16 24] > ?[21 15 12 24] > ?[ 1 
12 10 13] > ?[ 1 ?4 10 13] > ?[ 1 ?9 12 ?0] > ?[ 1 ?4 12 13] > ?[ 1 ?3 ?4 12] > ?[ 1 ?9 12 10] > ?[ 1 ?3 12 ?0] > ?[ 1 ?3 ?4 ?0] > ?[ 5 10 14 13] > ?[ 5 ?4 10 13] > ?[ 5 11 10 ?2] > ?[ 5 ?4 14 13] > ?[ 5 ?1 ?4 ?2] > ?[ 5 11 10 14] > ?[ 5 ?1 10 ?2] > ?[ 5 ?1 ?4 10] > ?[ 7 12 16 13] > ?[ 7 ?4 12 13] > ?[ 7 15 12 ?6] > ?[ 7 15 12 16] > ?[ 7 ?3 12 ?6] > ?[ 7 ?3 ?4 12] > ?[ 7 14 16 13] > ?[ 7 ?4 14 13] > ?[ 7 17 14 ?8] > ?[ 7 17 14 16] > ?[ 7 ?5 14 ?8] > ?[ 7 ?5 ?4 14]] > > > On Tue, Sep 21, 2010 at 2:03 PM, Pauli Virtanen wrote: >> Tue, 21 Sep 2010 13:54:45 +0200, Adam Machnik wrote: >>> I have downloaded from SVN the latest `griddata` implementation for >>> working in N-D >>> with irregular grids (scipy v0.9.0dev6812). It works great for 2D >>> problems. However for 3D (and more) I have a problem: the NaN values >>> appears always in the same position, and it does not depend on the >>> scalar field being interpolated. Here is a small example of simple 3x3x3 >>> regular grid and a constant scalar field of 1.0 values, that is >>> interpolated at the input data points. >> >> Works for me. >> >> grid_z1= [[[ 1. ?1. ?1.] >> ?[ 1. ?1. ?1.] >> ?[ 1. ?1. ?1.]] >> ?[[ 1. ?1. ?1.] >> ?[ 1. ?1. ?1.] >> ?[ 1. ?1. ?1.]] >> ?[[ 1. ?1. ?1.] >> ?[ 1. ?1. ?1.] >> ?[ 1. ?1. ?1.]]] >> >> Do you get the same triangulation? >> >>>>> from scipy.spatial import Delaunay >>>>> Delaunay(points).vertices >> array([[ 1, 12, 10, 13], >> ? ? ? [ 1, ?9, 12, 10], >> ? ? ? [ 1, ?4, 10, 13], >> ? ? ? [ 1, ?4, 12, 13], >> ? ? ? [ 1, ?3, ?4, 12], >> ? ? ? [ 1, ?9, 10, ?0], >> ? ? ? [ 1, ?9, 12, ?0], >> ? ? ? [ 1, ?3, 12, ?0], >> ? ? ? [ 1, ?3, ?4, ?0], >> ? ? ? [21, 12, 10, 13], >> ? ? ? [21, 22, 12, 13], >> ? ? ? [21, ?9, 12, 10], >> ? ? ? [21, 22, 10, 13], >> ? ? ? [21, 19, 22, 10], >> ? ? ? [21, ?9, 10, 18], >> ? ? ? [21, ?9, 12, 18], >> ? ? ? [21, 19, 10, 18], >> ? ? ? [21, 19, 22, 18], >> ? ? ? [15, ?4, 12, 13], >> ? ? ? [15, 16, 12, 13], >> ? ? ? [15, ?3, ?4, ?6], >> ? ? ? [15, 16, ?4, 13], >> ? ? 
? [15, ?7, 16, ?4], >> ? ? ? [15, ?3, ?4, 12], >> ? ? ? [15, ?7, ?4, ?6], >> ? ? ? [15, ?7, 16, ?6], >> ? ? ? [25, 22, 12, 13], >> ? ? ? [25, 16, 12, 13], >> ? ? ? [25, 21, 22, 24], >> ? ? ? [25, 16, 22, 13], >> ? ? ? [25, 15, 16, 24], >> ? ? ? [25, 21, 12, 24], >> ? ? ? [25, 21, 22, 12], >> ? ? ? [25, 15, 12, 24], >> ? ? ? [25, 15, 16, 12], >> ? ? ? [23, 16, 22, 13], >> ? ? ? [23, 25, 16, 26], >> ? ? ? [23, 14, 16, 13], >> ? ? ? [23, 17, 14, 16], >> ? ? ? [23, 25, 16, 22], >> ? ? ? [23, 17, 16, 26], >> ? ? ? [23, 17, 14, 26], >> ? ? ? [23, 22, 10, 13], >> ? ? ? [23, 14, 10, 13], >> ? ? ? [23, 11, 14, 20], >> ? ? ? [23, 19, 10, 20], >> ? ? ? [23, 19, 22, 10], >> ? ? ? [23, 11, 10, 20], >> ? ? ? [23, 11, 14, 10], >> ? ? ? [ 5, 16, ?4, 13], >> ? ? ? [ 5, 14, 16, 13], >> ? ? ? [ 5, ?7, 16, ?8], >> ? ? ? [ 5, ?7, 16, ?4], >> ? ? ? [ 5, 17, 16, ?8], >> ? ? ? [ 5, 17, 14, 16], >> ? ? ? [ 5, ?4, 10, 13], >> ? ? ? [ 5, 11, 14, 10], >> ? ? ? [ 5, 14, 10, 13], >> ? ? ? [ 5, ?1, 10, ?2], >> ? ? ? [ 5, ?1, ?4, 10], >> ? ? ? [ 5, 11, 10, ?2]]) >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > From charlesr.harris at gmail.com Tue Sep 21 10:44:33 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 21 Sep 2010 08:44:33 -0600 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: On Tue, Sep 21, 2010 at 8:17 AM, Adam Machnik wrote: > Hello, > I am concerned in the difference in triangulation. I think that on > such a simple grid there should be no difference from qhull results. > FYI: I have tested the scipy.spatial module on my workstation. There > is one test that fails. > However it was already reported in the mailinglist, and it does not > seem to be the problem. > What can I do else ? 
> > Make sure you have a clean install, i.e., remove the old one and the build directory and try again. Sometimes odd things can happen. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Sep 21 10:57:03 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Sep 2010 09:57:03 -0500 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: On Tue, Sep 21, 2010 at 09:17, Adam Machnik wrote: > Hello, > I am concerned in the difference in triangulation. I think that on > such a simple grid there should be no difference from qhull results. What you call a "simple grid" is actually a degenerate set of points for Delaunay triangulation. There are a a large number of valid triangulations that satisfy the Delaunay condition. The way that qhull deals with degeneracy is to randomly perturb the points. While perhaps qhull should pick a consistent random seed in order to reproduce whichever of the arbitrarily chosen triangulations to return in your degenerate case, please be aware that interpolation using triangulation is not going to work very well for your case. Trilinear interpolation would be better. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From pav at iki.fi Tue Sep 21 11:17:16 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 21 Sep 2010 15:17:16 +0000 (UTC) Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 References: Message-ID: Tue, 21 Sep 2010 09:57:03 -0500, Robert Kern wrote: > On Tue, Sep 21, 2010 at 09:17, Adam Machnik wrote: >> Hello, >> I am concerned in the difference in triangulation. I think that on such >> a simple grid there should be no difference from qhull results. 
> > What you call a "simple grid" is actually a degenerate set of points for > Delaunay triangulation. There are a a large number of valid > triangulations that satisfy the Delaunay condition. The way that qhull > deals with degeneracy is to randomly perturb the points. That's the "joggle" mode of Qhull. The 2003.1 version included in Scipy can also work so that it first produces non-simplical facets, and in a second step splits them into simplices in an arbitrary way. See http://qhull.org/html/qh-optq.htm#Qt This is still sensitive to rounding error, and so it seems that results obtained with different optimization levels in the compiler are different. The problem here seems to be that degenerate facets with zero volume appear in the result, which the interpolation routines do not at the moment handle sensibly. This should be fixable. -- Pauli Virtanen From amachnik at gmail.com Tue Sep 21 14:35:06 2010 From: amachnik at gmail.com (Adam Machnik) Date: Tue, 21 Sep 2010 20:35:06 +0200 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: Thank you very much for your explanations. Indeed, I would like to use the n-linear interpolation.The problem is that I cannot find any python package that can do this on N-D dimension on the incomplete and irregular set of points. For regular grids I use "scipy.ndimage.interpolation.map_coordinates" and it works perfectly. I use also "scipy.interpolate.Rbf " to perform interpolation on irregular set of points using radial basis functions and it works also great. But when using Rbf the interpolation is nonlinear, and often inconsistent. The only package that I have found up to now is this new griddata() function. I hope that it will work for me when fixed these issues with the handling of degenerated zero volume cases on interpolation level. 
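Pauli's point about degenerate facets can be made concrete: a simplex with zero volume has a singular barycentric-coordinate transform, so linear interpolation inside it is undefined, which is exactly where the NaNs come from. A minimal pure-Python sketch of the volume test (illustrative only, not the Qhull/scipy internals):

```python
def tet_volume(p0, p1, p2, p3):
    """Volume of a tetrahedron: |(p1-p0) . ((p2-p0) x (p3-p0))| / 6."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    cross = (b[1] * c[2] - b[2] * c[1],
             b[2] * c[0] - b[0] * c[2],
             b[0] * c[1] - b[1] * c[0])
    triple = a[0] * cross[0] + a[1] * cross[1] + a[2] * cross[2]
    return abs(triple) / 6.0

# A proper tetrahedron has positive volume ...
print(tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1/6
# ... but four coplanar grid points give a zero-volume simplex, and the
# barycentric transform (inverting the edge matrix) breaks down on it.
print(tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))  # 0.0
```

On a regular 3x3x3 grid many point quadruples are coplanar like this, which is why the arbitrary simplex splitting can emit degenerate facets.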
Pauli Virtanen, who wrote this function, says that it should be possible, so I wait with impatience to test it when available. Thanks, Adam On Tue, Sep 21, 2010 at 4:57 PM, Robert Kern wrote: > On Tue, Sep 21, 2010 at 09:17, Adam Machnik wrote: >> Hello, >> I am concerned in the difference in triangulation. I think that on >> such a simple grid there should be no difference from qhull results. > > What you call a "simple grid" is actually a degenerate set of points > for Delaunay triangulation. There are a large number of valid > triangulations that satisfy the Delaunay condition. The way that qhull > deals with degeneracy is to randomly perturb the points. While perhaps > qhull should pick a consistent random seed in order to reproduce > whichever of the arbitrarily chosen triangulations to return in your > degenerate case, please be aware that interpolation using > triangulation is not going to work very well for your case. Trilinear > interpolation would be better. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Tue Sep 21 14:52:13 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Sep 2010 13:52:13 -0500 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: On Tue, Sep 21, 2010 at 13:35, Adam Machnik wrote: > Thank you very much for your explanations. Indeed, I would like to use > the n-linear > interpolation. The problem is that I cannot find any python package that > can do this on N-D dimension on the incomplete and irregular set of points.
> For regular grids I use "scipy.ndimage.interpolation.map_coordinates" and it > works perfectly. "Trilinear interpolation" is a very specific kind of interpolation. It is only defined on regular grids. map_coordinates() is a generalization, I believe. Trilinear interpolation is not a general term for linear interpolation methods on 3D points. > I use also "scipy.interpolate.Rbf " to perform interpolation > on irregular set of points using radial basis functions and it works > also great. > But when using Rbf the interpolation is nonlinear, and often inconsistent. > The only package that I have found up to now is this new griddata() function. > I hope that it will work for me when fixed these issues with the > handling of degenerated > zero volume cases on interpolation level. > Pauli Vitranen, who wrote this function, says that it should be > possible, so I wait with impatience > to test it when available. Regardless of whether it is possible to fix the Delaunay triangularization for these cases, I am trying to convey that the interpolation scheme is still not appropriate for rectilinear grids. If I have a point roughly in the middle of a grid cell, it will only get information from the simplex it happens to fall in rather than from all of the surrounding grid points. Which simplex it happens to fall in will be arbitrary because of the degeneracy. This will create inconsistencies across your grid as some grid cells will be broken up one way and other grid cells will be broken up others. There is no single interpolation method that works well for all inputs. The properties of the data are intimately associated with the interpolation technique that is best for it. For rectilinear grids, map_coordinates is the better choice. For scattered point clouds, griddata will work fine. Natural neighbors would be better, but we'll have to wait until someone implements that for 3D. 
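Robert's contrast can be made concrete: trilinear interpolation blends all eight corners of the enclosing grid cell, while simplex-based linear interpolation uses only the four vertices of whichever tetrahedron the point happens to land in. A pure-Python sketch of trilinear interpolation on a unit-spaced regular grid (illustrative only; for real work, map_coordinates is the optimized route):

```python
def trilinear(grid, x, y, z):
    """Trilinear interpolation at an interior point of a unit-spaced grid;
    grid[i][j][k] is the value at the integer lattice point (i, j, k)."""
    i, j, k = int(x), int(y), int(z)       # cell containing the point
    dx, dy, dz = x - i, y - j, z - k       # fractional offsets inside the cell
    value = 0.0
    for di in (0, 1):                      # blend all 8 corners of the cell
        for dj in (0, 1):
            for dk in (0, 1):
                weight = ((dx if di else 1 - dx) *
                          (dy if dj else 1 - dy) *
                          (dz if dk else 1 - dz))
                value += weight * grid[i + di][j + dj][k + dk]
    return value

# Reproduces a linear field exactly, with no dependence on any triangulation:
field = [[[float(i + j + k) for k in range(3)] for j in range(3)] for i in range(3)]
print(trilinear(field, 0.5, 0.5, 0.5))  # 1.5
```

Because every corner of the cell contributes, the result is continuous across cell boundaries, with none of the arbitrary-simplex inconsistencies described above.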
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From amachnik at gmail.com Tue Sep 21 16:18:14 2010 From: amachnik at gmail.com (Adam Machnik) Date: Tue, 21 Sep 2010 22:18:14 +0200 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: On Tue, Sep 21, 2010 at 8:52 PM, Robert Kern wrote: > On Tue, Sep 21, 2010 at 13:35, Adam Machnik wrote: >> Thank you very much for your explanations. Indeed, I would like to use >> the n-linear >> interpolation.The problem is that I cannot find any python package that >> can do this on N-D dimension on the incomplete and irregular set of points. >> For regular grids I use "scipy.ndimage.interpolation.map_coordinates" and it >> works perfectly. > > "Trilinear interpolation" is a very specific kind of interpolation. It > is only defined on regular grids. map_coordinates() is a > generalization, I believe. Trilinear interpolation is not a general > term for linear interpolation methods on 3D points. > >> I use also "scipy.interpolate.Rbf " to perform interpolation >> on irregular set of points using radial basis functions and it works >> also great. >> But when using Rbf the interpolation is nonlinear, and often inconsistent. >> The only package that I have found up to now is this new griddata() function. >> I hope that it will work for me when fixed these issues with the >> handling of degenerated >> zero volume cases on interpolation level. >> Pauli Vitranen, who wrote this function, says that it should be >> possible, so I wait with impatience >> to test it when available. > > Regardless of whether it is possible to fix the Delaunay > triangularization for these cases, I am trying to convey that the > interpolation scheme is still not appropriate for rectilinear grids. 
> If I have a point roughly in the middle of a grid cell, it will only > get information from the simplex it happens to fall in rather than > from all of the surrounding grid points. Which simplex it happens to > fall in will be arbitrary because of the degeneracy. This will create > inconsistencies across your grid as some grid cells will be broken up > one way and other grid cells will be broken up others. > > There is no single interpolation method that works well for all > inputs. The properties of the data are intimately associated with the > interpolation technique that is best for it. For rectilinear grids, > map_coordinates is the better choice. For scattered point clouds, > griddata will work fine. Natural neighbors would be better, but we'll > have to wait until someone implements that for 3D. Thanks again for the explanations. I think that I understand now the problem of using Delaunay triangularization for my rectilinear grids. Unfortunately it looks like I have a problem: my grids are rectilinear, but non-uniform (not equally spaced) and often incomplete (some points are missing). And to make things worse: they are often in 6D or more... And the facts are: Delaunay seems inappropriate, map_coordinates requires equally spaced grids, the Rbf method produces smooth but non-linear interpolation, and nobody has implemented natural neighbors in N dimensions... I think that I will stay with Rbf interpolation... I wonder if it is possible to tune the Rbf basis functions to perform linear interpolation on my weird N-D grids... ? Thanks, Adam From pav at iki.fi Tue Sep 21 17:46:26 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 21 Sep 2010 21:46:26 +0000 (UTC) Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 References: Message-ID: Tue, 21 Sep 2010 20:35:06 +0200, Adam Machnik wrote: [clip] > griddata() function.
I hope that it will work for me when fixed these > issues with the handling of degenerated > zero volume cases on interpolation level. [clip] I believe it is fixed here: http://github.com/pv/scipy-work/tree/bug/griddata-nans Click the "download source" link to get a fixed tarball. If you can test this, that would be useful, since I cannot reproduce the issue you are seeing. If you want to help further, please also do >>> from scipy.spatial import Delaunay >>> tri = Delaunay(points) >>> numpy.savez('delny.npz', tri.__dict__) and send the 'delny.npz' file to me -- this way it should be possible to get a reproducible test case. *** If it is indeed fixed now, the problem was that the directed search for a simplex could stop at a degenerate one, in which barycentric coordinates and linear interpolation is not defined -- which then would produce nans. -- Pauli Virtanen From gharras at ethz.ch Wed Sep 22 08:06:47 2010 From: gharras at ethz.ch (gharras) Date: Wed, 22 Sep 2010 14:06:47 +0200 Subject: [SciPy-Dev] Problem with manipulating dates in the time series object Message-ID: <4C99F157.9070507@ethz.ch> Hi there, I really like the time series object from scipy but I have a problem with manipulating dates. Here is a copy from my ipython session: ----------------------------------------------------------------------------- In [748]: myts=ts.time_series(data=n.arange(20).reshape(2,10).T, dates=n.arange(10)+720000, freq='D') In [749]: myts Out[749]: timeseries( [[ 0 10] [ 1 11] [ 2 12] [ 3 13] [ 4 14] [ 5 15] [ 6 16] [ 7 17] [ 8 18] [ 9 19]], dates = [17-Apr-1972 ... 
26-Apr-1972], freq = D) In [750]: myts.dates[myts.weekday==1]+=1 In [751]: myts.dates Out[751]: DateArray([17-Apr-1972, 19-Apr-1972, 19-Apr-1972, 20-Apr-1972, 21-Apr-1972, 22-Apr-1972, 23-Apr-1972, 24-Apr-1972, 26-Apr-1972, 26-Apr-1972], freq='D') In [752]: myts.has_duplicated_dates() Out[752]: False ----------------------------------------------------------------------------- In words: I can change the dates by hand (in line 750), but the object is not aware of it, i.e. it still thinks that it does not contain any duplicates but the dates were however affected by the manipulation in line 750, cf. line 751. Is there a work-around? all the best. Cheers, Georges -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Sep 22 08:18:41 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 22 Sep 2010 14:18:41 +0200 Subject: [SciPy-Dev] Problem with manipulating dates in the time series object In-Reply-To: <4C99F157.9070507@ethz.ch> References: <4C99F157.9070507@ethz.ch> Message-ID: On Sep 22, 2010, at 2:06 PM, gharras wrote: > > Hi there, > > I really like the time series object from scipy but I have a problem with manipulating dates. > Here is a copy from my ipython session: > > ----------------------------------------------------------------------------- > In [748]: myts=ts.time_series(data=n.arange(20).reshape(2,10).T, dates=n.arange(10)+720000, freq='D') > > In [749]: myts > Out[749]: > timeseries( > [[ 0 10] > [ 1 11] > [ 2 12] > [ 3 13] > [ 4 14] > [ 5 15] > [ 6 16] > [ 7 17] > [ 8 18] > [ 9 19]], > dates = > [17-Apr-1972 ... 26-Apr-1972], > freq = D) > > > In [750]: myts.dates[myts.weekday==1]+=1 Shouldn't it be >>> myts.dates[myts.dates.weekday==1] += 1 >>> myts.dates DateArray([17-Apr-1972, 19-Apr-1972, 19-Apr-1972, 20-Apr-1972, 21-Apr-1972, 22-Apr-1972, 23-Apr-1972, 24-Apr-1972, 26-Apr-1972, 26-Apr-1972], freq='D') Now, there's still a pb with has_duplicated_dates which is not properly updated. 
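The behaviour in the session above is a stale-cache pattern: a derived flag is computed once, lazily, and is not recomputed after the underlying dates are mutated in place. A toy model of the pattern and of what the fix amounts to (names are illustrative only, not the scikits.timeseries internals):

```python
class CachedDates:
    """Minimal sketch of a date container that caches a derived flag."""

    def __init__(self, dates):
        self.dates = list(dates)
        self._cache = {}                      # lazily computed properties

    def has_duplicated_dates(self):
        if 'dups' not in self._cache:
            self._cache['dups'] = len(set(self.dates)) != len(self.dates)
        return self._cache['dups']

    def shift(self, index, days):
        # Any in-place mutation must invalidate the cache; omitting this
        # reset reproduces the stale answer seen in the ipython session.
        self.dates[index] += days
        self._cache.clear()                   # the "_reset_cachedinfo" step

d = CachedDates([720000, 720001, 720002])
print(d.has_duplicated_dates())   # False
d.shift(0, 1)                     # 720001 now appears twice
print(d.has_duplicated_dates())   # True
```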
The reason is that we store the status in a cache to avoid having to recalculate it. In most cases, it works, but here, the cache should be deleted... Use myts.dates._reset_cachedinfo() to reset the cache to its default as a workaround. Please file a ticket as well... Thx From amachnik at gmail.com Wed Sep 22 10:53:53 2010 From: amachnik at gmail.com (Adam Machnik) Date: Wed, 22 Sep 2010 16:53:53 +0200 Subject: [SciPy-Dev] Problem with N-dimensional interpolation using a new griddata function for N>=3 In-Reply-To: References: Message-ID: Hello Pauli, I have downloaded and installed a new development version and.... it works !!! No NaN on simple examples, and the interpolation seems OK in 3D. I will test it tomorrow on my more complex irregular grids. Thank you for your help ! Adam On Tue, Sep 21, 2010 at 11:46 PM, Pauli Virtanen wrote: > Tue, 21 Sep 2010 20:35:06 +0200, Adam Machnik wrote: > [clip] >> griddata() function. I hope that it will work for me when fixed these >> issues with the handling of degenerated >> zero volume cases on interpolation level. > [clip] > > I believe it is fixed here: > > http://github.com/pv/scipy-work/tree/bug/griddata-nans > [clip] From josef.pktd at gmail.com Fri Sep 24 00:41:22 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 24 Sep 2010 00:41:22 -0400 Subject: [SciPy-Dev] script for trailing whitespace cleanup Message-ID: Is the script for trailing whitespace cleanup that is used for numpy, scipy available somewhere? Thanks, Josef From charlesr.harris at gmail.com Fri Sep 24 09:02:35 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 24 Sep 2010 07:02:35 -0600 Subject: [SciPy-Dev] script for trailing whitespace cleanup In-Reply-To: References: Message-ID: On Thu, Sep 23, 2010 at 10:41 PM, wrote: > Is the script for trailing whitespace cleanup that is used for numpy, > scipy available somewhere? > > I've attached the perl script I use.
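The perl attachment itself was scrubbed from the archive, so here is a rough Python stand-in (the actual behaviour of Chuck's script is a guess). The lookahead strips spaces and tabs before each newline while leaving a CR-LF pair intact as CR-LF, which addresses Chuck's Windows line-ending concern:

```python
import re
import sys

TRAILING = re.compile(r'[ \t]+(?=\r?\n)')

def strip_trailing_whitespace(text):
    """Drop spaces/tabs immediately before each newline; the lookahead
    means '\r\n' endings survive unchanged rather than being rewritten."""
    return TRAILING.sub('', text)

if __name__ == '__main__':
    for name in sys.argv[1:]:                 # shell-expanded wildcards land here
        with open(name, 'rb') as f:
            data = f.read().decode('utf-8', errors='surrogateescape')
        cleaned = strip_trailing_whitespace(data)
        if cleaned != data:                   # rewrite only files that changed
            with open(name, 'wb') as f:
                f.write(cleaned.encode('utf-8', errors='surrogateescape'))
```

Reading and writing in binary mode keeps whatever line endings the file already has.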
You can use wildcards for the file names in the usual way. Hmmm, I wonder how it treats the cr-lf endings that sometimes turn up on windows. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cleanfile Type: application/octet-stream Size: 1122 bytes Desc: not available URL: From josef.pktd at gmail.com Fri Sep 24 09:45:04 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 24 Sep 2010 09:45:04 -0400 Subject: [SciPy-Dev] script for trailing whitespace cleanup In-Reply-To: References: Message-ID: On Fri, Sep 24, 2010 at 9:02 AM, Charles R Harris wrote: > > > On Thu, Sep 23, 2010 at 10:41 PM, wrote: >> >> Is the script for trailing whitespace cleanup that is used for numpy, >> scipy available somewhere? >> > > I've attached the perl script I use.
You can use it wildcards for the > file > > names in the usual way. Hmmm, I wonder how it treats the cr-lf endings > that > > sometimes turn up on windows. > > Thank you, > I will ask Skipper to run it on Linux, then I don't have to worry > about line-endings (nor do I have to figure out how to use perl on my > machine, on second thought, maybe I should). > > Josef > > On just about any linux machine I have used, I have found the program 'dos2unix' to be very helpful. Note, it converts line endings for more than just dos->unix. Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun Sep 26 01:05:27 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 26 Sep 2010 00:05:27 -0500 Subject: [SciPy-Dev] spatial.test() failure In-Reply-To: References: <4C8E7FDC.10705@enthought.com> Message-ID: <4C9ED497.7020307@enthought.com> > > Nils Wagner wrote: > >>>> from scipy import spatial > >>>> spatial.test() > >>>> > > Running unit tests for scipy.spatial > > NumPy version 2.0.0.dev8714 > > NumPy is installed in > > /home/nwagner/local/lib64/python2.6/site-packages/numpy > > SciPy version 0.9.0.dev6803 > > SciPy is installed in > > /home/nwagner/local/lib64/python2.6/site-packages/scipy > > Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) > > [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] > > nose version 0.11.2 > > > ............................................................................................................................F.................................................................................................................................................. > > ====================================================================== > > FAIL: Tests pdist(X, 'minkowski') on iris data. 
(float32) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > > "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", > > line 837, in test_pdist_minkowski_3_2_iris_float32 > > self.failUnless(within_tol(Y_test1, Y_right, eps)) > > AssertionError > > > > ---------------------------------------------------------------------- > > Ran 271 tests in 38.005s > > > > FAILED (failures=1) > > > > > > > In r6799, the names of some tests were changed so there are no longer > duplicated test function names. This means tests are now being run that > had previously been shadowed by the duplicate names. I also get the > failure of the test test_pdist_minkowski_3_2_iris_float32(). I created > a ticket for the problem: http://projects.scipy.org/scipy/ticket/1278 > > Warren > If the tolerance in the test is changed from eps=1e-7 to 1e-6, the test passes on my computer. Several other float32 tests using the iris data also use 1e-6. I added a note about the tolerance of similar tests in the ticket. It does not seem unreasonable to simply loosen the tolerance a bit. Any objections? Warren -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Sun Sep 26 01:13:22 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 26 Sep 2010 01:13:22 -0400 Subject: [SciPy-Dev] spatial.test() failure In-Reply-To: <4C9ED497.7020307@enthought.com> References: <4C8E7FDC.10705@enthought.com> <4C9ED497.7020307@enthought.com> Message-ID: On Sun, Sep 26, 2010 at 1:05 AM, Warren Weckesser wrote: > > > Nils Wagner wrote: >>>>> from scipy import spatial >>>>> spatial.test() >>>>> >> Running unit tests for scipy.spatial >> NumPy version 2.0.0.dev8714 >> NumPy is installed in >> /home/nwagner/local/lib64/python2.6/site-packages/numpy >> SciPy version 0.9.0.dev6803 >> SciPy is installed in >> /home/nwagner/local/lib64/python2.6/site-packages/scipy >> Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) >> [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] >> nose version 0.11.2 >> >> ............................................................................................................................F.................................................................................................................................................. >> ====================================================================== >> FAIL: Tests pdist(X, 'minkowski') on iris data. (float32) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> >> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", >> line 837, in test_pdist_minkowski_3_2_iris_float32 >> self.failUnless(within_tol(Y_test1, Y_right, eps)) >> AssertionError >> >> ---------------------------------------------------------------------- >> Ran 271 tests in 38.005s >> >> FAILED (failures=1) >> >> > > > In r6799, the names of some tests were changed so there are no longer > duplicated test function names. This means tests are now being run that > had previously been shadowed by the duplicate names.
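[Editor's note on the shadowing mentioned here: duplicate ``def`` statements in one scope are not an error in Python; the later definition silently rebinds the name, so a shadowed test never runs. A minimal illustration with hypothetical test names, not the actual scipy.spatial tests:]

```python
# A later def with the same name silently rebinds it -- no warning, no
# error -- so a test runner collecting functions by name sees only one.
def test_pdist_minkowski_iris():
    return "float64 variant"

def test_pdist_minkowski_iris():  # shadows the first definition
    return "float32 variant"

surviving = test_pdist_minkowski_iris()
print(surviving)  # only the second variant is ever collected and run
```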
I also get the > failure of the test test_pdist_minkowski_3_2_iris_float32(). I created > a ticket for the problem: http://projects.scipy.org/scipy/ticket/1278 > > Warren > > > If the tolerance in the test is changed from eps=1e-7 to 1e-6, the test > passes on my computer. Several other float32 tests using the iris data also > use 1e-6. I added a note about the tolerance of similar tests in the > ticket. It does not seem unreasonable to simply loosen the tolerance a bit. > Any objections? No, sounds reasonable. Josef > > Warren > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From adrian.prw at gmail.com Mon Sep 27 17:19:51 2010 From: adrian.prw at gmail.com (Adrian) Date: Mon, 27 Sep 2010 21:19:51 +0000 (UTC) Subject: [SciPy-Dev] scipy error compiling csr_wrap under Python 2.7 References: <4C339FAD.6010505@gmail.com> Message-ID: > I've got precisely the same problem when attempting to compile Scipy under > centos-5-x86_64 I'm having the same issue on CentOS-4.4 with Python 2.7, was a resolution ever found? From bsouthey at gmail.com Mon Sep 27 17:45:46 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 27 Sep 2010 16:45:46 -0500 Subject: [SciPy-Dev] scipy error compiling csr_wrap under Python 2.7 In-Reply-To: References: <4C339FAD.6010505@gmail.com> Message-ID: <4CA1108A.2060603@gmail.com> On 09/27/2010 04:19 PM, Adrian wrote: >> I've got precisely the same problem when attempting to compile Scipy under >> centos-5-x86_64 > I'm having the same issue on CentOS-4.4 with Python 2.7, was a resolution ever found?
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev This should have been fixed about two months ago (r6645, r6646), see ticket 1180: http://projects.scipy.org/scipy/ticket/1180 Bruce From warren.weckesser at enthought.com Mon Sep 27 22:13:21 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 27 Sep 2010 21:13:21 -0500 Subject: [SciPy-Dev] spatial.test() failure In-Reply-To: References: <4C8E7FDC.10705@enthought.com> <4C9ED497.7020307@enthought.com> Message-ID: <4CA14F41.2000500@enthought.com> On 9/26/10 12:13 AM, josef.pktd at gmail.com wrote: > On Sun, Sep 26, 2010 at 1:05 AM, Warren Weckesser > wrote: >> >> Nils Wagner wrote: >>>>>> from scipy import spatial >>>>>> spatial.test() >>>>>> >>> Running unit tests for scipy.spatial >>> NumPy version 2.0.0.dev8714 >>> NumPy is installed in >>> /home/nwagner/local/lib64/python2.6/site-packages/numpy >>> SciPy version 0.9.0.dev6803 >>> SciPy is installed in >>> /home/nwagner/local/lib64/python2.6/site-packages/scipy >>> Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) >>> [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] >>> nose version 0.11.2 >>> >>> ............................................................................................................................F.................................................................................................................................................. >>> ====================================================================== >>> FAIL: Tests pdist(X, 'minkowski') on iris data.
(float32) >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>> >>> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", >>> line 837, in test_pdist_minkowski_3_2_iris_float32 >>> self.failUnless(within_tol(Y_test1, Y_right, eps)) >>> AssertionError >>> >>> ---------------------------------------------------------------------- >>> Ran 271 tests in 38.005s >>> >>> FAILED (failures=1) >>> >>> >> >> In r6799, the names of some tests were changed so there are no longer >> duplicated test function names. This means tests are now being run that >> had previously been shadowed by the duplicate names. I also get the >> failure of the test test_pdist_minkowski_3_2_iris_float32(). I created >> a ticket for the problem: http://projects.scipy.org/scipy/ticket/1278 >> >> Warren >> >> >> If the tolerance in the test is changed from eps=1e-7 to 1e-6, the test >> passes on my computer. Several other float32 tests using the iris data also >> use 1e-6. I added a note about the tolerance of similar tests in the >> ticket. It does not seem unreasonable to simply loosen the tolerance a bit. >> Any objections? > No, sounds reasonable. > > Josef > Done, r6827. For anyone using trunk, could you check whether or not test_pdist_minkowski_3_2_iris_float32 now passes? 
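[Editor's note for context on why eps=1e-7 was too tight: float32 resolves only about seven decimal digits, and its machine epsilon already exceeds 1e-7, so even a correctly rounded float32 Minkowski accumulation can miss the float64 reference by more than the old tolerance. A rough sketch of the effect, not the actual test-suite check:]

```python
import numpy as np

# float32 machine epsilon (~1.19e-7) is already larger than eps=1e-7.
eps32 = float(np.finfo(np.float32).eps)

# Accumulate |x|**3.2 the way a Minkowski p=3.2 distance would.
x64 = np.linspace(0.0, 1.0, 1000)
x32 = x64.astype(np.float32)
s64 = np.sum(np.abs(x64) ** 3.2)
s32 = np.sum(np.abs(x32) ** np.float32(3.2))

# The float32 result can legitimately differ from the float64 reference
# by more than 1e-7 in relative terms, which is why 1e-6 is a safer bound.
rel_err = abs(float(s32) - float(s64)) / float(s64)
```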
Warren From mpan at cesga.es Tue Sep 28 03:08:45 2010 From: mpan at cesga.es (Miguel Pan Fidalgo) Date: Tue, 28 Sep 2010 09:08:45 +0200 Subject: [SciPy-Dev] problems configuring scipy (0.8.0) from numpy (1.5.0) in a itanium cluster Message-ID: <201009280908.45347.mpan@cesga.es> Technical information ++++++++++++++++++++++ $ python -c 'import sys;print sys.version' 2.4.2 (#1, Apr 13 2007, 16:14:04) [GCC 4.1.0 (SUSE Linux)] $ module load numpy/1.5.0 $ module list Currently Loaded Modulefiles: 1) icc/11.1.056 3) mkl/10.2.2 5) nose/0.11.2 7) numpy/1.5.0 2) ifort/11.1.056 4) fftw/3.2.2 6) SuiteSparse/3.4.0 $ python -c 'import numpy;print numpy.__version__' 1.5.0 $ uname -a Linux cn002 2.6.16.53-0.8_SFS2.3_0-default #1 SMP Mon Nov 5 14:15:47 CET 2007 ia64 ia64 ia64 GNU/Linux numpy-1.5.0 +++++++++++++++++ I have installed numpy-1.5.0 in our Itanium cluster. This is my site.cfg file: $ cat /opt/cesga/numpy-1.5.0/src_numpy-1.5.0/site.cfg [DEFAULT] library_dirs = /opt/cesga/fftw-3.2.2/lib:/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64:/opt/cesga/SuiteSparse-3.4.0/lib include_dirs = /opt/cesga/fftw-3.2.2/include:/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include:/opt/cesga/SuiteSparse-3.4.0/include # UMFPACK # ------- [amd] amd_libs = amd [umfpack] umfpack_libs = umfpack # FFT libraries # ------------- [fftw] libraries = fftw3 # MKL # ---- [mkl] library_dirs = /opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64 lapack_libs = mkl_lapack mkl_libs = mkl_solver_lp64, mkl_intel_lp64, mkl_intel_thread, mkl_i2p, mkl_core These are the hacks: $ diff system_info.py.original system_info.py 818c818 < _lib_mkl = ['mkl','vml','guide'] --- > _lib_mkl = ['mkl','mkl_lapack','guide'] $ diff intelccompiler.py.original intelccompiler.py 28a29 > cc_exe=cc_exe + ' -liomp5 -lpthread -fPIC' scipy-0.8.0 +++++++++++++++ When I try to install scipy-0.8.0, the configuration process fails: $ tar zxvf scipy-0.8.0.tar.gz $ mv scipy-0.8.0 src $ cp -va run.sh src/ $ cd src/ $ cat run.sh 
#!/bin/bash module load numpy/1.5.0 python setup.py config_fc --fcompiler=intele #install --prefix=/sfs/home/cesga/mpan/scipy-0.8.0 $ ./run.sh Warning: No configuration returned, assuming unavailable.blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'] library_dirs = ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include'] FOUND: libraries = ['mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'] library_dirs = ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include'] lapack_opt_info: lapack_mkl_info: mkl_info: FOUND: libraries = ['mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'] library_dirs = ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include'] FOUND: libraries = ['mkl_lapack', 'mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'] library_dirs = ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include'] FOUND: libraries = ['mkl_lapack', 'mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'] library_dirs = ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'] 
define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include'] umfpack_info: libraries umfpack not found in /opt/cesga/fftw-3.2.2/lib libraries umfpack not found in /opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64 amd_info: libraries amd not found in /opt/cesga/fftw-3.2.2/lib libraries amd not found in /opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64 FOUND: libraries = ['amd'] library_dirs = ['/opt/cesga/SuiteSparse-3.4.0/lib'] swig_opts = ['-I/opt/cesga/SuiteSparse-3.4.0/include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/opt/cesga/SuiteSparse-3.4.0/include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/opt/cesga/SuiteSparse-3.4.0/lib'] swig_opts = ['-I/opt/cesga/SuiteSparse-3.4.0/include', '- I/opt/cesga/SuiteSparse-3.4.0/include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/opt/cesga/SuiteSparse-3.4.0/include'] running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options the warning (first line) implies that BLAS primitives have to be recompiled. At the end the installation ends properly... this is not the point, but scipy doesn't use MKL BLAS primitives. BUT according to numpy... blas_mkl and blas_opt should be the same (like lapack_mkl and lapack_opt): $ module load numpy/1.5.0 $ python Python 2.4.2 (#1, Apr 13 2007, 16:14:04) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> from numpy.distutils.system_info import get_info >>> get_info('blas_mkl') {'libraries': ['mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'], 'library_dirs': ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'], 'define_macros': [('SCIPY_MKL_H', None)], 'include_dirs': ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include']} >>> get_info('blas_opt') {'libraries': ['mkl_solver_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_i2p', 'mkl_core', 'pthread'], 'library_dirs': ['/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/lib/64'], 'define_macros': [('SCIPY_MKL_H', None)], 'include_dirs': ['/opt/cesga/fftw-3.2.2/include', '/opt/cesga/intel/intel11.0/Compiler/11.1/056/mkl/include', '/opt/cesga/SuiteSparse-3.4.0/include']} >>> There is no need to recompile BLAS! Any idea or suggestion? What am I doing wrong? -- Sincerely, Miguel Pan Fidalgo (Applications Technician) mail: mpan at cesga.es web: http://www.cesga.es Avda. de Vigo s/n 15705, Santiago de Compostela Telf.: +34 981 569810 - Fax: 981 594616 ------------------------------------------------------------------------- From daniele at grinta.net Tue Sep 28 06:16:40 2010 From: daniele at grinta.net (Daniele Nicolodi) Date: Tue, 28 Sep 2010 12:16:40 +0200 Subject: [SciPy-Dev] scipy.optimize.curve_fit does not work with single parameter function Message-ID: <4CA1C088.5080905@grinta.net> Hello, I copied the code of scipy.optimize.curve_fit into my data analysis code to use it on the not-so-recent versions of scipy available on my systems.
However it does not work when I try to fit a simple function that has only one parameter, for example: def func(x, p1): return x * p1 The problem is that scipy.optimize.leastsq() returns a tuple of parameter values when the function has more than one parameter, while in the case of a single parameter it returns a numpy scalar value. This causes an error in the curve_fit _general_function() and _weighted_general_function() where the target function is called: function(xdata, *params) This works only if params is a python list or tuple. Also the return value suffers from the same problem. If the target function has more than one parameter you get a parameter list, otherwise you get a scalar. It would be more consistent to always return a list, also because the covariance is always returned as a matrix. Please ignore this if it has been resolved in a more recent version of scipy; I have access to 0.7.2 (last available on debian sid). Cheers, -- Daniele From warren.weckesser at enthought.com Tue Sep 28 08:12:33 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 28 Sep 2010 07:12:33 -0500 Subject: [SciPy-Dev] scipy.optimize.curve_fit does not work with single parameter function In-Reply-To: <4CA1C088.5080905@grinta.net> References: <4CA1C088.5080905@grinta.net> Message-ID: <4CA1DBB1.10402@enthought.com> On 9/28/10 5:16 AM, Daniele Nicolodi wrote: > Hello, > > I copied the code of scipy.optimize.curve_fit into my data analysis code > to use it on a not so recent versions of scipy available on my systems.
> However it does not work when I try to fit a simple function that has > only one parameter, for example: > > def func(x, p1): > return x * p1 > > The problem is that scipy.optimize.leastsq() returns a list of > parameters values for cases where the function has more than one > parameter it returns a tuple of parameters values, while in the case of > one single parameters it returns a numpy scalar value. > > This causes an error in the curve_fit _general_function() and > _weighted_general_function() where the target function is called: > > function(xdata, *params) > > This works only if params is a python list or tuple. > > Also the return value suffers from the same problem. If the target > function has more than one parameter you get a parameters list, > otherwise you get a scalar. It would be more consistent to return always > a list, also because the covariance is always returned as a matrix. > > Please ignore this if that has been resolved in more recent version of > scipy, I have acces to 0.7.2 (last available on debian sid). > This was fixed in 0.8; see http://projects.scipy.org/scipy/ticket/1204 Warren From zunzun at zunzun.com Tue Sep 28 13:12:21 2010 From: zunzun at zunzun.com (James Phillips) Date: Tue, 28 Sep 2010 07:12:21 -1000 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example Message-ID: Since I observed the following behavior in the SVN repository version of SciPy, it seemed to me proper to post to the dev mailing list. I'm using Ubuntu Lucid Lynx 32 bit and a fresh GIT of Numpy. I am not sure if a Trac bug report needs to be entered. Below is some example code for fitting two statistical distributions. Sometimes the numpy-generated data is fit, as I can see the estimated and fitted parameters are different.
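[Editor's note on the curve_fit exchange just above: the breakage comes from the optimizer handing back a 0-d result for a single parameter, which cannot be star-unpacked. A small sketch of the symptom and of a workaround for 0.7.x, coercing with numpy.atleast_1d; this is just the same idea in miniature, not the actual 0.8 fix from ticket 1204:]

```python
import numpy as np

def func(x, p1):                 # the single-parameter model from the report
    return x * p1

xdata = np.arange(5.0)

# For a one-parameter problem, leastsq-style code can hand back a bare
# scalar instead of a length-1 sequence...
popt_scalar = np.float64(2.0)

# ...and star-unpacking a scalar is exactly what fails inside curve_fit:
try:
    func(xdata, *popt_scalar)
    unpack_failed = False
except TypeError:                # a 0-d numpy scalar is not iterable
    unpack_failed = True

# Coercing to a 1-d array restores the list/tuple behavior curve_fit expects.
popt = np.atleast_1d(popt_scalar)
y = func(xdata, *popt)           # [0., 2., 4., 6., 8.]
```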
Sometimes I receive many messages repeated on the command line like: Warning: invalid value encountered in absolute Warning: invalid value encountered in subtract and the estimated parameters equal the fitted parameter values, indicating no fitting took place. Sometimes I receive on the command line: Traceback (most recent call last): File "/home/zunzun/local/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1987, in func sk = 2*(b-a)*math.sqrt(a + b + 1) / (a + b + 2) / math.sqrt(a*b) ValueError: math domain error Traceback (most recent call last): File "example.py", line 10, in fitStart_beta = scipy.stats.beta._fitstart(data) File "/home/zunzun/local/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1992, in _fitstart a, b = optimize.fsolve(func, (1.0, 1.0)) File "/home/zunzun/local/lib/python2.6/site-packages/scipy/optimize/minpack.py", line 125, in fsolve maxfev, ml, mu, epsfcn, factor, diag) minpack.error: Error occured while calling the Python function named func and program flow is stopped. In summary, three behaviors: (1) Fits OK (2) Many exceptions with no fitting (3) minpack error. Running the program 10 times or so will reproduce these behaviors without fail from the "bleeding-edge" repository code. 
James Phillips ######################################################## import numpy, scipy, scipy.stats # test uniform distribution fitting data = numpy.random.uniform(2.0, 3.0, size=100) fitStart_uniform = scipy.stats.uniform._fitstart(data) fittedParameters_uniform = scipy.stats.uniform.fit(data) # test beta distribution fitting data = numpy.random.beta(2.0, 3.0, size=100) fitStart_beta = scipy.stats.beta._fitstart(data) fittedParameters_beta = scipy.stats.beta.fit(data) print print 'uniform._fitstart returns', fitStart_uniform print 'fitted parameters for uniform =', fittedParameters_uniform print print 'beta._fitstart returns', fitStart_beta print 'fitted parameters for beta =', fittedParameters_beta print From jsseabold at gmail.com Tue Sep 28 13:18:13 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 28 Sep 2010 13:18:13 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: Message-ID: On Tue, Sep 28, 2010 at 1:12 PM, James Phillips wrote: > Since I observed the following behavior in the SVN respository version > of SciPy, it seemed to me proper to post to the dev mailing list. I'm > using Ubuntu Lucid Lynx 32 bit and a fresh GIT of Numpy. I am not > sure if a Trac bug report needs to be entered. > > > Below is some example code for fitting two statistical distributions. > Sometimes the numpy-generated data is fit, as I can see the estimated > and fitted parameters are different. Sometimes I receive many > messages repeated on the command line like: > > Warning: invalid value encountered in absolute > Warning: invalid value encountered in subtract > > and the estimated parameters equal the fitted parameter values, > indicating no fitting took place. Sometimes I receive on the command > line: > > Traceback (most recent call last): > File "/home/zunzun/local/lib/python2.6/site-packages/scipy/stats/distributions.py", > line 1987, in func >
sk = 2*(b-a)*math.sqrt(a + b + 1) / (a + b + 2) / math.sqrt(a*b) > ValueError: math domain error > Traceback (most recent call last): > File "example.py", line 10, in > fitStart_beta = scipy.stats.beta._fitstart(data) > File "/home/zunzun/local/lib/python2.6/site-packages/scipy/stats/distributions.py", > line 1992, in _fitstart > a, b = optimize.fsolve(func, (1.0, 1.0)) > File "/home/zunzun/local/lib/python2.6/site-packages/scipy/optimize/minpack.py", > line 125, in fsolve > maxfev, ml, mu, epsfcn, factor, diag) > minpack.error: Error occured while calling the Python function named func > > and program flow is stopped. > > > In summary, three behaviors: (1) Fits OK (2) Many exceptions with no > fitting (3) minpack error. Running the program 10 times or so will > reproduce these behaviors without fail from the "bleeding-edge" > repository code. > > James Phillips > > > ######################################################## > > import numpy, scipy, scipy.stats > > # test uniform distribution fitting > data = numpy.random.uniform(2.0, 3.0, size=100) > fitStart_uniform = scipy.stats.uniform._fitstart(data) > fittedParameters_uniform = scipy.stats.uniform.fit(data) > > # test beta distribution fitting > data = numpy.random.beta(2.0, 3.0, size=100) > fitStart_beta = scipy.stats.beta._fitstart(data) > fittedParameters_beta = scipy.stats.beta.fit(data) > > print > print 'uniform._fitstart returns', fitStart_uniform > print 'fitted parameters for uniform =', fittedParameters_uniform > print > print 'beta._fitstart returns', fitStart_beta > print 'fitted parameters for beta =', fittedParameters_beta > print > _______________________________________________ Is there an existing bug ticket for this? If not there probably should be... I think the fitting code should be looked at as experimental. It's good that you caught that no fitting is actually done in these cases.
The problem stems (for the most part) from bad starting values (outside the support of the distribution for those with bounded support). I've tried to go through and fix this, giving very naive (but correct) starting values to fit methods, but I haven't gotten much further than that. I don't know if Travis or Josef have gone back to look at this. Hopefully one of these days I will find some more time to look at this and try to give a systematic fix. Skipper From zunzun at zunzun.com Tue Sep 28 13:31:28 2010 From: zunzun at zunzun.com (James Phillips) Date: Tue, 28 Sep 2010 07:31:28 -1000 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: Message-ID: On Tue, Sep 28, 2010 at 7:18 AM, Skipper Seabold wrote: > I think the fitting code should be looked at as experimental. Oops. Not knowing this, I naively fit all continuous distributions to a user-entered data set and return the fitted results. Link is: http://zunzun.com/StatisticalDistributions/1/ and push the "Submit" version to fit the example data to all continuous distributions. Takes about 15-20 seconds and shows sorted results with graphical output - this is with the SciPy version in Ubuntu Lucid Lynx, 0.7.0. I was actually trying the repository versions of Numpy and SciPy to see if their fits would work better than in 0.7.0. James From jsseabold at gmail.com Tue Sep 28 13:42:16 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 28 Sep 2010 13:42:16 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: Message-ID: On Tue, Sep 28, 2010 at 1:31 PM, James Phillips wrote: > On Tue, Sep 28, 2010 at 7:18 AM, Skipper Seabold wrote: >> I think the fitting code should be looked at as experimental. > Just my opinion, so take it for what you will. 
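[Editor's note: the bounded-support failure Skipper describes above is easy to reproduce with a hand-rolled log-likelihood: one data point outside the assumed support makes the objective non-finite, so a generic optimizer has no gradient to follow and hands back the start values untouched. A toy sketch with a beta-shaped density; the numbers are illustrative, and this is not scipy's internal code:]

```python
import numpy as np

def neg_loglike_beta(params, data):
    """Negative log-likelihood (up to a constant) of a Beta(a, b)
    shifted to the interval (loc, loc + scale)."""
    a, b, loc, scale = params
    z = (data - loc) / scale
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = (a - 1.0) * np.log(z) + (b - 1.0) * np.log(1.0 - z)
    return -np.sum(ll)

rng = np.random.RandomState(42)
data = rng.beta(2.0, 3.0, size=100) + 5.0        # true support is [5, 6]

# A naive default start (loc=0, scale=1) puts every observation outside
# (0, 1), so log() goes non-finite and the optimizer cannot move:
bad_obj = neg_loglike_beta((2.0, 2.0, 0.0, 1.0), data)

# A data-driven start in the spirit of the discussion: anchor the support
# just outside the sample range before optimizing a and b.
loc0 = data.min() - 1e-3
scale0 = (data.max() - loc0) + 1e-3
good_obj = neg_loglike_beta((2.0, 2.0, loc0, scale0), data)
```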
It should be okay for those distributions that do not have a bounded support, but as you indicated, some do no fitting and just return the start values. > Oops. Not knowing this, I naively fit all continuous distributions to > a user-entered data set and return the fitted results. Link is: > > http://zunzun.com/StatisticalDistributions/1/ > > and push the "Submit" version to fit the example data to all > continuous distributions. Takes about 15-20 seconds and shows sorted > results with graphical output - this is with the SciPy version in > Ubuntu Lucid Lynx, 0.7.0. > Very cool. Are the test data sets you use in the public domain? > I was actually trying the repository versions of Numpy and SciPy to > see if their fits would work better than in 0.7.0. > > James > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From zunzun at zunzun.com Tue Sep 28 13:46:50 2010 From: zunzun at zunzun.com (James Phillips) Date: Tue, 28 Sep 2010 12:46:50 -0500 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: Message-ID: On Tue, Sep 28, 2010 at 12:42 PM, Skipper Seabold wrote: > > Very cool. Are the test data sets you use in the public domain? BSD licensed source for all fitting code is at http://code.google.com/p/pythonequations/downloads/list, the made-up fictional example data is surely public domain.
James From alan.isaac at gmail.com Tue Sep 28 13:52:26 2010 From: alan.isaac at gmail.com (Alan G Isaac) Date: Tue, 28 Sep 2010 13:52:26 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: Message-ID: <4CA22B5A.3020601@gmail.com> On 9/28/2010 1:46 PM, James Phillips wrote: > BSD licensed source for all fitting code is at > http://code.google.com/p/pythonequations/downloads/list Is this something that might appropriately be moved into the statsmodels scikit? Alan Isaac From zunzun at zunzun.com Tue Sep 28 14:01:04 2010 From: zunzun at zunzun.com (James Phillips) Date: Tue, 28 Sep 2010 13:01:04 -0500 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: <4CA22B5A.3020601@gmail.com> References: <4CA22B5A.3020601@gmail.com> Message-ID: You might consider that the rate of change might be too high for that (?) - see http://code.google.com/p/pythonequations/source/list - the source code repository is quite active on updates and commits. James On Tue, Sep 28, 2010 at 12:52 PM, Alan G Isaac wrote: > On 9/28/2010 1:46 PM, James Phillips wrote: >> BSD licensed source for all fitting code is at >> http://code.google.com/p/pythonequations/downloads/list > > Is this something that might appropriately be moved > into the statsmodels scikit? > > Alan Isaac > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From jsseabold at gmail.com Tue Sep 28 14:04:32 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 28 Sep 2010 14:04:32 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> Message-ID: On Tue, Sep 28, 2010 at 2:01 PM, James Phillips wrote: > You might consider that the rate of change might be too high for that (?)
- > see http://code.google.com/p/pythonequations/source/list - the source > code repository is quite active on updates and commits. > If interested, we might be able to mirror your repo(?). Skipper From zunzun at zunzun.com Tue Sep 28 14:06:37 2010 From: zunzun at zunzun.com (James Phillips) Date: Tue, 28 Sep 2010 08:06:37 -1000 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> Message-ID: On Tue, Sep 28, 2010 at 8:04 AM, Skipper Seabold wrote: > On Tue, Sep 28, 2010 at 2:01 PM, James Phillips wrote: > > If interested, we might be able to mirror your repo(?). I would be honored, sir. James From nwagner at iam.uni-stuttgart.de Tue Sep 28 14:08:18 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Sep 2010 20:08:18 +0200 Subject: [SciPy-Dev] spatial.test() failure In-Reply-To: <4CA14F41.2000500@enthought.com> References: <4C8E7FDC.10705@enthought.com> <4C9ED497.7020307@enthought.com> <4CA14F41.2000500@enthought.com> Message-ID: On Mon, 27 Sep 2010 21:13:21 -0500 Warren Weckesser wrote: > On 9/26/10 12:13 AM, josef.pktd at gmail.com wrote: >> On Sun, Sep 26, 2010 at 1:05 AM, Warren Weckesser >> wrote: >>> >>> Nils Wagner wrote: >>>>>>> from scipy import spatial >>>>>>> spatial.test() >>>>>>> >>>> Running unit tests for scipy.spatial >>>> NumPy version 2.0.0.dev8714 >>>> NumPy is installed in >>>> /home/nwagner/local/lib64/python2.6/site-packages/numpy >>>> SciPy version 0.9.0.dev6803 >>>> SciPy is installed in >>>> /home/nwagner/local/lib64/python2.6/site-packages/scipy >>>> Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) >>>> [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] >>>> nose version 0.11.2 >>>> >>>> 
............................................................................................................................F.................................................................................................................................................. >>>> ====================================================================== >>>> FAIL: Tests pdist(X, 'minkowski') on iris data. >>>>(float32) >>>> ---------------------------------------------------------------------- >>>> Traceback (most recent call last): >>>> File >>>> >>>> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", >>>> line 837, in test_pdist_minkowski_3_2_iris_float32 >>>> self.failUnless(within_tol(Y_test1, Y_right, eps)) >>>> AssertionError >>>> >>>> ---------------------------------------------------------------------- >>>> Ran 271 tests in 38.005s >>>> >>>> FAILED (failures=1) >>>> >>>> >>> >>> In r6799, the names of some tests were changed so there >>>are no longer >>> duplicated test function names. This means tests are >>>now being run that >>> had previously been shadowed by the duplicate names. I >>>also get the >>> failure of the test >>>test_pdist_minkowski_3_2_iris_float32(). I created >>> a ticket for the problem: >>>http://projects.scipy.org/scipy/ticket/1278 >>> >>> Warren >>> >>> >>> If the tolerance in the test is changed from eps=1e-7 to >>>1e-6, the test >>> passes on my computer. Several other float32 tests >>>using the iris data also >>> use 1e-6. I added a note about the tolerance of similar >>>tests in the >>> ticket. It does not seem unreasonable to simply loosen >>>the tolerance a bit. >>> Any objections? >> No, sounds reasonable. >> >> Josef >> > > Done, r6827. > >For anyone using trunk, could you check whether or not > test_pdist_minkowski_3_2_iris_float32 now passes? > > Warren > > Warren, It works fine for me. Thank you very much ! 
NumPy version 2.0.0.dev8715 NumPy is installed in /home/nwagner/local/lib64/python2.6/site-packages/numpy SciPy version 0.9.0.dev6827 SciPy is installed in /home/nwagner/local/lib64/python2.6/site-packages/scipy Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] nose version 0.11.2 ............................................................................................................................................................................................................................................................................... ---------------------------------------------------------------------- Ran 271 tests in 33.979s OK Cheers, Nils From alan.isaac at gmail.com Tue Sep 28 14:10:43 2010 From: alan.isaac at gmail.com (Alan G Isaac) Date: Tue, 28 Sep 2010 14:10:43 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> Message-ID: <4CA22FA3.5080402@gmail.com> On 9/28/2010 2:01 PM, James Phillips wrote: > You might consider the rate of change might be too high As long as you can provide unit tests, I don't see a problem. But you and Skipper shd work out the details. Cheers, Alan Isaac From josef.pktd at gmail.com Tue Sep 28 15:35:04 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 28 Sep 2010 15:35:04 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: <4CA22FA3.5080402@gmail.com> References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Tue, Sep 28, 2010 at 2:10 PM, Alan G Isaac wrote: > On 9/28/2010 2:01 PM, James Phillips wrote: >> You might consider the rate of change might be too high > > As long as you can provide unit tests, > I don't see a problem. > > But you and Skipper shd work out the details. 
The part that fits the distributions, from what I can see, is LGPL. License is GNU LGPL v3, see https://launchpad.net/twolumps more in a short time Josef > > Cheers, > Alan Isaac > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Tue Sep 28 16:05:23 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 28 Sep 2010 16:05:23 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Tue, Sep 28, 2010 at 3:35 PM, wrote: > On Tue, Sep 28, 2010 at 2:10 PM, Alan G Isaac wrote: >> On 9/28/2010 2:01 PM, James Phillips wrote: >>> You might consider the rate of change might be too high >> >> As long as you can provide unit tests, >> I don't see a problem. >> >> But you and Skipper shd work out the details. > > The part that fits the distributions from what I can see is LGPL > > License is GNU LGPL v3, see https://launchpad.net/twolumps > > more in a short time James, I didn't see where in pythonequations you calculate the result statistics, e.g. aic, in the base class? The rankings of distributions that I have seen are based on kolmogorov-smirnov, chisquare-test or Anderson-Darling. I have a test script where I use kstest for the ranking, and starting values dependent on the support, similar to twolump (but not as clean.) As Skipper said, the fit method works pretty well for the distributions with open support; with bound support, setting the starting parameters based on the data also works ok for many of them, e.g. like the min in twolump. For several of the bound support distributions an option to fix the support (e.g. loc) would make more sense. (generalized extreme value and generalized pareto are supposed to have some problems with mle for a large parameter range.)
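The "option to fix the support" mentioned above did later land in SciPy itself: `rv_continuous.fit` accepts `floc` and `fscale` keyword arguments to hold `loc` or `scale` fixed during maximum likelihood estimation. A minimal sketch of the idea (the gamma data and the pinned `loc=0` are illustrative choices, not from the thread):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=1000)

# Free fit: shape, loc and scale are all estimated by MLE.
a_free, loc_free, scale_free = stats.gamma.fit(data)

# Fixed support: pin loc at 0 so only shape and scale are estimated.
a_fixed, loc_fixed, scale_fixed = stats.gamma.fit(data, floc=0)
```

Fixing `loc` for bound-support distributions avoids exactly the pathology discussed later in the thread, where the free `loc` chases the sample minimum.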
The current scipy trunk is still "experimental", in the sense that it doesn't work for all distributions or might raise some random exceptions. I haven't looked much at the new fit version yet. (I'm still experimenting in statsmodels.) Any bugs with the current code should go into trac tickets. What I did in my test script was to split up the distributions into open, one-sided and bound distributions, and select starting values depending on this. I would restrict to the subset of distributions for which MLE works or put it inside a try except (I think pymvpa did it this way.) Josef > > Josef > >> >> Cheers, >> Alan Isaac >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > From josef.pktd at gmail.com Tue Sep 28 16:19:46 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 28 Sep 2010 16:19:46 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: Message-ID: On Tue, Sep 28, 2010 at 1:12 PM, James Phillips wrote: > Since I observed the following behavior in the SVN repository version > of SciPy, it seemed to me proper to post to the dev mailing list. I'm > using Ubuntu Lucid Lynx 32 bit and a fresh GIT of Numpy. I am not > sure if a Trac bug report needs to be entered. > > > Below is some example code for fitting two statistical distributions. > Sometimes the numpy-generated data is fit, as I can see the estimated > and fitted parameters are different. Sometimes I receive many > messages repeated on the command line like: > > Warning: invalid value encountered in absolute > Warning: invalid value encountered in subtract > > and the estimated parameters equal the fitted parameter values, > indicating no fitting took place.
> Sometimes I receive on the command
> line:
>
> Traceback (most recent call last):
>   File "/home/zunzun/local/lib/python2.6/site-packages/scipy/stats/distributions.py",
>     line 1987, in func
>     sk = 2*(b-a)*math.sqrt(a + b + 1) / (a + b + 2) / math.sqrt(a*b)
> ValueError: math domain error
> Traceback (most recent call last):
>   File "example.py", line 10, in
>     fitStart_beta = scipy.stats.beta._fitstart(data)
>   File "/home/zunzun/local/lib/python2.6/site-packages/scipy/stats/distributions.py",
>     line 1992, in _fitstart
>     a, b = optimize.fsolve(func, (1.0, 1.0))
>   File "/home/zunzun/local/lib/python2.6/site-packages/scipy/optimize/minpack.py",
>     line 125, in fsolve
>     maxfev, ml, mu, epsfcn, factor, diag)
> minpack.error: Error occured while calling the Python function named func

same here when I was running my old scripts from scipy 0.7. I opened a
ticket a while ago http://projects.scipy.org/scipy/ticket/1276

I'm monkey patching while I figure out what to do

stats.distributions.beta_gen._fitstart = lambda self, data : (5,5,0,1)

Josef

> and program flow is stopped.
>
> In summary, three behaviors: (1) Fits OK (2) Many exceptions with no
> fitting (3) minpack error. Running the program 10 times or so will
> reproduce these behaviors without fail from the "bleeding-edge"
> repository code.
>
James Phillips
>
> ########################################################
>
> import numpy, scipy, scipy.stats
>
> # test uniform distribution fitting
> data = numpy.random.uniform(2.0, 3.0, size=100)
> fitStart_uniform = scipy.stats.uniform._fitstart(data)
> fittedParameters_uniform = scipy.stats.uniform.fit(data)
>
> # test beta distribution fitting
> data = numpy.random.beta(2.0, 3.0, size=100)
> fitStart_beta = scipy.stats.beta._fitstart(data)
> fittedParameters_beta = scipy.stats.beta.fit(data)
>
> print
> print 'uniform._fitstart returns', fitStart_uniform
> print 'fitted parameters for uniform =', fittedParameters_uniform
> print
> print 'beta._fitstart returns', fitStart_beta
> print 'fitted parameters for beta =', fittedParameters_beta
> print
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

From zunzun at zunzun.com Tue Sep 28 16:28:52 2010
From: zunzun at zunzun.com (James Phillips)
Date: Tue, 28 Sep 2010 10:28:52 -1000
Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example
In-Reply-To:
References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com>
Message-ID:

I am in the process of eliminating the LGPL code; all I used it for was
to iterate over the distributions and guess initial parameters. It was
the investigation of why that code would not work correctly that led me
to the problem I originally discussed.

The "Simple" examples have a StatisticalDistribution.py file; as soon
as I dump the LGPL code I'll make a "Complex" example that iterates
over all of the continuous distributions.

Perhaps we should wait until the LGPL file is gone from my source code
distribution before considering the code for mirroring. I do not use
the Greedy Programmer's License (GPL) myself.
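Such a loop over the continuous distributions might be sketched like this: each fit wrapped in a try/except (as josef suggested earlier in the thread) and the survivors ranked by the Kolmogorov-Smirnov statistic via scipy.stats.kstest. The distribution list and the sample data here are illustrative, not from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=500)

results = []
for name in ['norm', 'gamma', 'lognorm', 'uniform']:
    dist = getattr(stats, name)
    try:
        params = dist.fit(data)      # MLE; may fail for some distributions
    except Exception:
        continue                     # skip distributions whose fit blows up
    ks_stat, _ = stats.kstest(data, name, args=params)
    results.append((ks_stat, name, params))

# A smaller KS statistic means better agreement between the data and the
# fitted cdf, so the first entry after sorting is the best candidate.
results.sort()
best_stat, best_name, best_params = results[0]
print(best_name, round(best_stat, 4))
```

This is the "restrict to the subset of distributions for which MLE works" idea in its simplest form; a chi-square or Anderson-Darling statistic could be used for the ranking instead.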
James On Tue, Sep 28, 2010 at 9:35 AM, wrote: > On Tue, Sep 28, 2010 at 2:10 PM, Alan G Isaac wrote: >> On 9/28/2010 2:01 PM, James Phillips wrote: >>> You might consider the rate of change might be too high >> >> As long as you can provide unit tests, >> I don't see a problem. >> >> But you and Skipper shd work out the details. > > The part that fits the distributions from what I can see is LGPL > > License is GNU LGPL v3, see https://launchpad.net/twolumps > > more in a short time > > Josef > >> >> Cheers, >> Alan Isaac >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Tue Sep 28 16:46:50 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 28 Sep 2010 16:46:50 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Tue, Sep 28, 2010 at 4:28 PM, James Phillips wrote: > I am in the process of eliminating the LGPL code, all I sed it for was > to iterate over the distributions and guess initial parameters. ?It > was the investigation of why thiat code would not work correctly that > led me to the problem I originally discussed. > > The "Simple" examples have a StatisticalDistribution.py file, as soon > as I dump the LGPL code I'll make a "Complex" example that iterates > over all of the continuous distributions. > > Perhaps we should wait until the LGPL file is gone from my source code > distribution before considering the code for mirroring. ?I do not use > the Greedy Programmer's License (GPL) myself. 
I just added the beta patch to my old script http://bazaar.launchpad.net/~josef-pktd/statsmodels/statsmodels-josef-experimental-gsoc/annotate/head%3A/scikits/statsmodels/sandbox/stats/examples/matchdist.py which runs with scipy trunk from maybe a month ago. It saves 79 histograms in a sub-directory. I haven't looked at it in a while, and I didn't update it yet with what Skipper found out when he worked on this. I have been working more on getting standard errors for the estimates, using GenericLikelihoodModel, which is in development in statsmodels and should give asymptotic or bootstrap and eventually profile likelihood standard errors and confidence intervals for the parameter estimates, and allows parameters to depend on some explanatory variables. (When I tried to do this this summer, I got stuck in the literature on pareto distributions and power-laws.) One of the main pieces of work for fitting is getting good starting values for individual distributions, and I think Skipper went further than what I have. It would be good if we can get this to work (with BSD). Josef > > James > > On Tue, Sep 28, 2010 at 9:35 AM, wrote: >> On Tue, Sep 28, 2010 at 2:10 PM, Alan G Isaac wrote: >>> On 9/28/2010 2:01 PM, James Phillips wrote: >>>> You might consider the rate of change might be too high >>> >>> As long as you can provide unit tests, >>> I don't see a problem. >>> >>> But you and Skipper shd work out the details.
>> >> The part that fits the distributions from what I can see is LGPL >> >> License is GNU LGPL v3, see https://launchpad.net/twolumps >> >> more in a short time >> >> Josef >> >>> >>> Cheers, >>> Alan Isaac >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From warren.weckesser at enthought.com Tue Sep 28 19:24:07 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 28 Sep 2010 18:24:07 -0500 Subject: [SciPy-Dev] spatial.test() failure In-Reply-To: References: <4C8E7FDC.10705@enthought.com> <4C9ED497.7020307@enthought.com> <4CA14F41.2000500@enthought.com> Message-ID: <4CA27917.7050909@enthought.com> On 9/28/10 1:08 PM, Nils Wagner wrote: > On Mon, 27 Sep 2010 21:13:21 -0500 > Warren Weckesser wrote: >> On 9/26/10 12:13 AM, josef.pktd at gmail.com wrote: >>> On Sun, Sep 26, 2010 at 1:05 AM, Warren Weckesser >>> wrote: >>>> Nils Wagner wrote: >>>>>>>> from scipy import spatial >>>>>>>> spatial.test() >>>>>>>> >>>>> Running unit tests for scipy.spatial >>>>> NumPy version 2.0.0.dev8714 >>>>> NumPy is installed in >>>>> /home/nwagner/local/lib64/python2.6/site-packages/numpy >>>>> SciPy version 0.9.0.dev6803 >>>>> SciPy is installed in >>>>> /home/nwagner/local/lib64/python2.6/site-packages/scipy >>>>> Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) >>>>> [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] >>>>> nose version 0.11.2 >>>>> >>>>> 
............................................................................................................................F.................................................................................................................................................. >>>>> ====================================================================== >>>>> FAIL: Tests pdist(X, 'minkowski') on iris data. >>>>> (float32) >>>>> ---------------------------------------------------------------------- >>>>> Traceback (most recent call last): >>>>> File >>>>> >>>>> "/home/nwagner/local/lib64/python2.6/site-packages/scipy/spatial/tests/test_distance.py", >>>>> line 837, in test_pdist_minkowski_3_2_iris_float32 >>>>> self.failUnless(within_tol(Y_test1, Y_right, eps)) >>>>> AssertionError >>>>> >>>>> ---------------------------------------------------------------------- >>>>> Ran 271 tests in 38.005s >>>>> >>>>> FAILED (failures=1) >>>>> >>>>> >>>> In r6799, the names of some tests were changed so there >>>> are no longer >>>> duplicated test function names. This means tests are >>>> now being run that >>>> had previously been shadowed by the duplicate names. I >>>> also get the >>>> failure of the test >>>> test_pdist_minkowski_3_2_iris_float32(). I created >>>> a ticket for the problem: >>>> http://projects.scipy.org/scipy/ticket/1278 >>>> >>>> Warren >>>> >>>> >>>> If the tolerance in the test is changed from eps=1e-7 to >>>> 1e-6, the test >>>> passes on my computer. Several other float32 tests >>>> using the iris data also >>>> use 1e-6. I added a note about the tolerance of similar >>>> tests in the >>>> ticket. It does not seem unreasonable to simply loosen >>>> the tolerance a bit. >>>> Any objections? >>> No, sounds reasonable. >>> >>> Josef >>> >> Done, r6827. >> >> For anyone using trunk, could you check whether or not >> test_pdist_minkowski_3_2_iris_float32 now passes? >> >> Warren >> >> > Warren, > > It works fine for me. > > Thank you very much ! 
> That's good to hear. Thanks for reporting back (and for the original report of the failure). Warren > NumPy version 2.0.0.dev8715 > NumPy is installed in > /home/nwagner/local/lib64/python2.6/site-packages/numpy > SciPy version 0.9.0.dev6827 > SciPy is installed in > /home/nwagner/local/lib64/python2.6/site-packages/scipy > Python version 2.6.5 (r265:79063, Jul 5 2010, 11:46:13) > [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] > nose version 0.11.2 > ............................................................................................................................................................................................................................................................................... > ---------------------------------------------------------------------- > Ran 271 tests in 33.979s > > OK > > > Cheers, > Nils > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From ariver at enthought.com Wed Sep 29 13:20:56 2010 From: ariver at enthought.com (Aaron River) Date: Wed, 29 Sep 2010 12:20:56 -0500 Subject: [SciPy-Dev] Improvements to SciPy.org Server Message-ID: Greetings Gentle Folk, I'm happy to announce that the server hosting SciPy.org, as well as other OSS projects, has been migrated into a cluster assigned to our community support efforts. The base instance itself remains relatively unchanged, but is now running atop a more capable set of servers. (Improvements to the services themselves are next in line.) Yesterday afternoon, I allowed the system to run with the configurations carried over from the old hardware, while watching the services for warning signs of trouble. Then, that evening, I loosened things up and adjusted configurations to better utilize the available resources. Though I always expect there to be "gotchas", I've thus far observed much improved response times and throughput. 
If you do experience any problems, please do not hesitate to let us know. We hope you enjoy the improvements to SciPy.org. Thanks, -- Aaron River Sr IT Administrator Enthought, Inc. -- From zunzun at zunzun.com Thu Sep 30 14:28:11 2010 From: zunzun at zunzun.com (James Phillips) Date: Thu, 30 Sep 2010 08:28:11 -1000 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Tue, Sep 28, 2010 at 10:46 AM, wrote: > > One of the main work for fitting is getting good starting values for > individual distributions... Attached is source code that uses Robert Kern's sandbox version of the Differential Evolution genetic algorithm (diffev.py) to find starting parameters of loc and scale, it works well in my tests. I plan to use this approach on the zunzun.com web site, having no better general options than this one at the moment. Note that I have slightly modified Robert's code so that I can pass in a stopping criteria, that file is also attached. James -------------- next part -------------- A non-text attachment was scrubbed... Name: diffev.py Type: text/x-python Size: 12834 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: example.py Type: text/x-python Size: 2343 bytes Desc: not available URL: From josef.pktd at gmail.com Thu Sep 30 16:05:52 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 30 Sep 2010 16:05:52 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Thu, Sep 30, 2010 at 2:28 PM, James Phillips wrote: > On Tue, Sep 28, 2010 at 10:46 AM, ? wrote: >> >> One of the main work for fitting is getting good starting values for >> individual distributions... 
> > Attached is source code that uses Robert Kern's sandbox version of the > Differential Evolution genetic algorithm (diffev.py) to find starting > parameters of loc and scale, it works well in my tests. I plan to use > this approach on the zunzun.com web site, having no better general > options than this one at the moment. Note that I have slightly > modified Robert's code so that I can pass in a stopping criteria, that > file is also attached. interesting, Why did you choose to minimize the squared difference of quantiles, instead of the negative log-likelihood in the diffev ? Josef > > James > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From zunzun at zunzun.com Thu Sep 30 16:27:42 2010 From: zunzun at zunzun.com (James Phillips) Date: Thu, 30 Sep 2010 10:27:42 -1000 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Thu, Sep 30, 2010 at 10:05 AM, wrote: > > Why did you choose to minimize the squared difference of quantiles, > instead of the negative log-likelihood in the diffev ? For my use in estimating initial parameters, the genetic algorithm needs a cutoff point at which a sufficient solution has been reached and I have useful starting parameters. While I would not know in advance what log likelihood value to use as a stopping parameter, I do know in advance that a residual sum-of-squares near zero should have good initial parameter estimates. That means I can pass a small value to the genetic algorithm and have it stop if it finds such parameters; there would be no need to continue past that point. I did notice that it really drags on for the beta distribution, I would need to tune my GA parameters; for example, to give up after fewer than the 500 generations I used in the initial test code. James From josef.pktd at gmail.com Thu Sep 30 20:20:46 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 30 Sep 2010 20:20:46 -0400 Subject: [SciPy-Dev] Subversion scipy.stats irregular problem with source code example In-Reply-To: References: <4CA22B5A.3020601@gmail.com> <4CA22FA3.5080402@gmail.com> Message-ID: On Thu, Sep 30, 2010 at 4:27 PM, James Phillips wrote: > On Thu, Sep 30, 2010 at 10:05 AM, wrote: >> >> Why did you choose to minimize the squared difference of quantiles, >> instead of the negative log-likelihood in the diffev ?
> > For my use in estimating initial parameters, the genetic algorithm > needs a cutoff point at which a sufficient solution has been reached > and I have useful starting parameters. While I would not know in > advance what log likelihood value to use as a stopping parameter, I do > know in advance that a residual sum-of-squares near zero should have > good initial parameter estimates. That means I can pass a small value > to the genetic algorithm and have it stop if it finds such parameters; > there would be no need to continue past that point. ppf is (very) expensive to calculate for some distributions. I tried something similar, but then switched to matching the cdf instead of the pdf. The differential evolution seems pretty slow in this case. What I also did in larger samples is to match only a few quantiles instead of each observation. (I haven't quite figured out yet how to get standard errors when matching quantiles this way.) I tried a few different starting values for your powerlaw example and my impression is that it doesn't converge to a unique solution, i.e. different starting values end up with different maximum likelihood fit results. I haven't tried powerlaw specifically before, but for pareto there are some problems with mle if loc and scale are also estimated. It could be that powerlaw is also a distribution that requires special fitting methods. I was using gamma as one of the distributions to play with for fitting.
But I think using a global optimizer will be quite a bit more robust for several distributions where the likelihood doesn't have a well-behaved shape. Josef > > James > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev >
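The global-optimizer idea discussed above later landed in SciPy itself: scipy.optimize.differential_evolution (added in SciPy 0.15, well after this thread; Robert Kern's sandbox diffev.py predates it). A sketch of using it to minimize the negative log-likelihood over a bounded parameter box and then hand the result to fit as starting values; the gamma example and the chosen bounds are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
data = rng.gamma(shape=2.0, scale=3.0, size=500)

def neg_loglike(params):
    a, loc, scale = params
    ll = stats.gamma.logpdf(data, a, loc=loc, scale=scale)
    # Penalize parameter sets that put observations outside the support.
    if not np.all(np.isfinite(ll)):
        return 1e10
    return -ll.sum()

# Shape is bounded below at 1 to keep the likelihood bounded: for a < 1
# the density diverges as loc approaches the sample minimum, which is the
# kind of pathology mentioned for pareto/powerlaw above.
bounds = [(1.0, 20.0),                      # shape a
          (data.min() - 1.0, data.min()),   # loc must sit below the data
          (0.01, 10.0 * data.std())]        # scale
result = differential_evolution(neg_loglike, bounds, seed=0, tol=1e-6,
                                maxiter=300)

# Use the global search result as starting values for the local MLE in fit.
a0, loc0, scale0 = result.x
params = stats.gamma.fit(data, a0, loc=loc0, scale=scale0)
```

Minimizing the negative log-likelihood directly (rather than a squared quantile difference) keeps the global search and the local fit optimizing the same objective.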