From answer at tnoo.net Wed Aug 2 08:10:56 2006 From: answer at tnoo.net (Martin =?iso-8859-1?Q?L=FCthi?=) Date: Wed, 02 Aug 2006 14:10:56 +0200 Subject: [SciPy-dev] [BUG] SVN scipy installation error Message-ID: <874pwvjq27.fsf@tnoo.net> Hi Sorry to bug the list with bug reports. If someone can point me at a bug tracking system on scipy.org I would certainly do that. The Scipy homepage is silent about it, and also searching reveals nothing. The newest SVN version of scipy gives me an error with markoutercomma. numpy revision 2942. scipy revision 2142. -- Python 2.4.3 (#2, Apr 27 2006, 14:43:58) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 ================== ...../numeric/scipy$ python setup.py install building extension "scipy.fftpack._fftpack" sources f2py options: [] f2py: Lib/fftpack/fftpack.pyf Traceback (most recent call last): File "setup.py", line 55, in ? setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/install.py", li ne 16, in run r = old_install.run(self) File "/usr/lib/python2.4/distutils/command/install.py", line 506, in run self.run_command('build') File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 87, in run self.build_sources() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 106, in build_sources self.build_extension_sources(ext) File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 218, in build_extension_sources sources = self.f2py_sources(sources, ext) File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 450, in f2py_sources import numpy.f2py as f2py2e File "/usr/lib/python2.4/site-packages/numpy/f2py/__init__.py", line 11, in ? import f2py2e File "/usr/lib/python2.4/site-packages/numpy/f2py/f2py2e.py", line 27, in ? import rules File "/usr/lib/python2.4/site-packages/numpy/f2py/rules.py", line 66, in ? import capi_maps File "/usr/lib/python2.4/site-packages/numpy/f2py/capi_maps.py", line 21, in ? from crackfortran import markoutercomma ImportError: cannot import name markoutercomma ================== Thanks! Martin L?thi answer at tnoo.net From nwagner at iam.uni-stuttgart.de Wed Aug 2 08:17:58 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 Aug 2006 14:17:58 +0200 Subject: [SciPy-dev] [BUG] SVN scipy installation error In-Reply-To: <874pwvjq27.fsf@tnoo.net> References: <874pwvjq27.fsf@tnoo.net> Message-ID: <44D097F6.1080201@iam.uni-stuttgart.de> Martin L?thi wrote: > Hi > > Sorry to bug the list with bug reports. 
If someone can point me at a bug > tracking system on scipy.org I would certainly do that. The Scipy homepage is > silent about it, and also searching reveals nothing. > > http://www.scipy.org/Developer_Zone http://projects.scipy.org/scipy/numpy http://projects.scipy.org/scipy/scipy is a starting point. You need an account to submit bug reports. Nils > The newest SVN version of scipy gives me an error with markoutercomma. > > > numpy revision 2942. > scipy revision 2142. > > -- Python 2.4.3 (#2, Apr 27 2006, 14:43:58) > [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 > > ================== > > ...../numeric/scipy$ python setup.py install > > building extension "scipy.fftpack._fftpack" sources > f2py options: [] > f2py: Lib/fftpack/fftpack.pyf > Traceback (most recent call last): > File "setup.py", line 55, in ? > setup_package() > File "setup.py", line 47, in setup_package > configuration=configuration ) > File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 174, in > setup > return old_setup(**new_attr) > File "/usr/lib/python2.4/distutils/core.py", line 149, in setup > dist.run_commands() > File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands > self.run_command(cmd) > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/install.py", li > ne 16, in run > r = old_install.run(self) > File "/usr/lib/python2.4/distutils/command/install.py", line 506, in run > self.run_command('build') > File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run > self.run_command(cmd_name) > File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", > line 87, in run > self.build_sources() > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", > line 106, in build_sources > self.build_extension_sources(ext) > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", > line 218, in build_extension_sources > sources = self.f2py_sources(sources, ext) > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", > line 450, in f2py_sources > import numpy.f2py as f2py2e > File "/usr/lib/python2.4/site-packages/numpy/f2py/__init__.py", line 11, in ? > import f2py2e > File "/usr/lib/python2.4/site-packages/numpy/f2py/f2py2e.py", line 27, in ? > import rules > File "/usr/lib/python2.4/site-packages/numpy/f2py/rules.py", line 66, in ? > import capi_maps > File "/usr/lib/python2.4/site-packages/numpy/f2py/capi_maps.py", line 21, in ? > from crackfortran import markoutercomma > ImportError: cannot import name markoutercomma > ================== > > Thanks! 
> > > Martin L?thi answer at tnoo.net > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From answer at tnoo.net Wed Aug 2 08:41:40 2006 From: answer at tnoo.net (Martin =?iso-8859-1?Q?L=FCthi?=) Date: Wed, 02 Aug 2006 14:41:40 +0200 Subject: [SciPy-dev] [BUG] scipy.interpolate.splprep Message-ID: <87zmenia2j.fsf@tnoo.net> Hi The function scipy.interpolate.splprep only returns a tuple of length 2 instead of three. Therefore the splines cannot be evaluated with splev: --------------- import scipy as S import scipy.interpolate x = S.array([[0.,0.,0.], [1.,1.,0.], [1.5,0.9,0.5], [2.,0.,1.],] ) sp = scipy.interpolate.splprep(x.transpose()) print scipy.interpolate.splev([0.], sp) --------------- yields --------------- exceptions.ValueError Traceback (most recent call last) /home/tinu/projects/lbie/python/ /tmp/python-5096nEG.py 7 [2.,0.,1.],] 8 ) 9 10 sp = scipy.interpolate.splprep(x.transpose()) ---> 11 print scipy.interpolate.splev([0.], sp) /usr/lib/python2.4/site-packages/scipy/interpolate/fitpack.py in splev(x, tck, der) 406 representing the curve in N-dimensional space. 407 """ --> 408 t,c,k=tck 409 try: 410 c[0][0] ValueError: need more than 2 values to unpack --------------- Thanks, Martin -- Martin L?thi answer at tnoo.net From nwagner at iam.uni-stuttgart.de Wed Aug 2 13:34:58 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 Aug 2006 19:34:58 +0200 Subject: [SciPy-dev] [SciPy-user] Call for testing In-Reply-To: <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> Message-ID: On Wed, 2 Aug 2006 07:20:30 -0400 Steve Lianoglou wrote: > Hi Nils et. al, > >> is someone able to confirm the output (32bit.png) of >>test_det.py on a >> 32-bit machine ? >> The script is available via >> >> http://projects.scipy.org/scipy/scipy/ticket/223 > > I just ran it on the newest scipy and almost newest >numpy from svn and my output is the correct one (the >64bit.png) and I'm running on a 32bit machine (Intel >Core Duo (MacBook Pro)). > > > No atlas as such, tho I was under the impression I did >install atlas ... there's even a libatlas.a in my >/usr/local/lib ... hmm .. > > Anyway, scipy/numpy info is pasted below. 
> > -steve > > > In [5]: scipy.__version__ > Out[5]: '0.5.0.2142' > > In [6]: scipy.show_config() > umfpack_info: > NOT AVAILABLE > > dfftw_info: > libraries = ['drfftw', 'dfftw'] > library_dirs = ['/opt/local/lib'] > define_macros = [('SCIPY_DFFTW_H', None)] > include_dirs = ['/opt/local/include'] > > fft_opt_info: > libraries = ['drfftw', 'dfftw'] > library_dirs = ['/opt/local/lib'] > define_macros = [('SCIPY_DFFTW_H', None)] > include_dirs = ['/opt/local/include'] > > djbfft_info: > NOT AVAILABLE > > lapack_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3'] > define_macros = [('NO_ATLAS_INFO', 3)] > > fftw2_info: > NOT AVAILABLE > > fftw3_info: > NOT AVAILABLE > > blas_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3', >'-I/System/Library/Frameworks/ > vecLib.framework/Headers'] > define_macros = [('NO_ATLAS_INFO', 3)] > > ============================= > > > In [9]: numpy.__version__ > Out[9]: '1.0b2.dev2940' > > In [10]: numpy.show_config() > lapack_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3'] > define_macros = [('NO_ATLAS_INFO', 3)] > > blas_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3', >'-I/System/Library/Frameworks/ > vecLib.framework/Headers'] > define_macros = [('NO_ATLAS_INFO', 3)] > Thank you all for running the test ! Finally I have disabled ATLAS by export ATLAS=None Good news is that I get the correct result. So I guess it's definitely an ATLAS issue. Bad news is that scipy.test(1) results in FAILED (failures=17, errors=4) More details below. Can someone reproduce these failures (No ATLAS) ? >>> numpy.__version__ '1.0b2.dev2943' >>> scipy.__version__ '0.5.0.2142' >>> ====================================================================== ERROR: Compare eigenvalues of eigvals_banded with those of linalg.eig. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 270, in check_eigvals_banded select='i', select_range=(ind1, ind2) ) File "/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 381, in eigvals_banded select_range=select_range) File "/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 362, in eig_banded if info>0: raise LinAlgError,"eig algorithm did not converge" LinAlgError: eig algorithm did not converge ====================================================================== ERROR: check_zero (scipy.linalg.tests.test_matfuncs.test_expm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 105, in check_zero assert_array_almost_equal(expm2(a),[[1,0],[0,1]]) File "/usr/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 71, in expm2 vri = inv(vr) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 180, in inv a1 = asarray_chkfinite(a) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 181, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== ERROR: check_defective1 (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 46, in check_defective1 r = signm(a) File "/usr/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 274, in signm iS0 = inv(S0) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 180, in inv a1 = asarray_chkfinite(a) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 181, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== ERROR: check_defective3 (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 67, in check_defective3 r = signm(a) File "/usr/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 274, in signm iS0 = inv(S0) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 180, in inv a1 = asarray_chkfinite(a) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 181, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: Compare eigenvalues and eigenvectors of eig_banded ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 314, in check_eig_banded self.w_sym_lin[ind1:ind2+1]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 199, in assert_array_compare assert cond, msg AssertionError: 
Arrays are not almost equal (shapes (0,), (5,) mismatch) x: array([], dtype=float64) y: array([-0.52994106, -0.43023792, 0.86217766, 1.66090423, 2.84350424]) ====================================================================== FAIL: check_simple (scipy.linalg.tests.test_decomp.test_svdvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 508, in check_simple assert s[0]>=s[1]>=s[2] AssertionError ====================================================================== FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_svdvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 526, in check_simple_complex assert s[0]>=s[1]>=s[2] AssertionError ====================================================================== FAIL: check_syevr_irange_high (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 74, in check_syevr_irange_high def check_syevr_irange_high(self): self.check_syevr_irange(irange=[1,2]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.]) y: array([ 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange_low (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 70, in check_syevr_irange_low def check_syevr_irange_low(self): self.check_syevr_irange(irange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.]) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_irange_mid (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 72, in check_syevr_irange_mid def check_syevr_irange_mid(self): self.check_syevr_irange(irange=[1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in 
assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.]) y: array([ 0.48769389]) ====================================================================== FAIL: check_syevr_vrange (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.5, 4.5, 4.5]) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_high (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 100, in check_syevr_vrange_high def check_syevr_vrange_high(self): self.check_syevr_vrange(vrange=[1,10]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 5.5]) y: array([ 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_low (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 96, in check_syevr_vrange_low def check_syevr_vrange_low(self): self.check_syevr_vrange(vrange=[-1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.]) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_vrange_mid (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 98, in check_syevr_vrange_mid def check_syevr_vrange_mid(self): self.check_syevr_vrange(vrange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File 
"/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.5]) y: array([ 0.48769389]) ====================================================================== FAIL: check_syevr_irange_high (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 74, in check_syevr_irange_high def check_syevr_irange_high(self): self.check_syevr_irange(irange=[1,2]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.], dtype=float32) y: array([ 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange_low (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 70, in check_syevr_irange_low def check_syevr_irange_low(self): self.check_syevr_irange(irange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.], dtype=float32) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_irange_mid (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 72, in check_syevr_irange_mid def check_syevr_irange_mid(self): self.check_syevr_irange(irange=[1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.], dtype=float32) y: array([ 0.48769389]) ====================================================================== FAIL: check_syevr_vrange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.5, 4.5, 4.5], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_high (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 100, in check_syevr_vrange_high def check_syevr_vrange_high(self): self.check_syevr_vrange(vrange=[1,10]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 5.5], dtype=float32) y: array([ 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_low (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 96, in check_syevr_vrange_low def check_syevr_vrange_low(self): self.check_syevr_vrange(vrange=[-1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.], dtype=float32) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_vrange_mid (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 98, in check_syevr_vrange_mid def check_syevr_vrange_mid(self): self.check_syevr_vrange(vrange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.5], dtype=float32) y: array([ 0.48769389]) ---------------------------------------------------------------------- Ran 1552 tests in 3.979s From bhendrix at enthought.com Wed 
Aug 2 13:46:12 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 02 Aug 2006 12:46:12 -0500 Subject: [SciPy-dev] ANN: Python Enthought Edition 1.0.0 Released Message-ID: <44D0E4E4.4020304@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0 (http://code.enthought.com/enthon/) -- a python distribution for Windows. About Python Enthought Edition: ------------------------------- Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numpy SciPy IPython Enthought Tool Suite wxPython PIL mingw MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com 1.0.0 Release Notes ------------------------- A lot of work has gone into testing this release, and it is our most stable release to date, but there are a couple of caveats: * The generated documentation index entries are missing. The full text search does work and the table of contents is complete, so this feature will be pushed to version 1.1.0. * IPython may cause problems when starting the first time if a previous version of IPython was ran. If you see "WARNING: could not import user config", either follow the directions which follow the warning. * Some users are reporting that older matplotlibrc files are not compatible with the version of matplotlib installed with this release. Please refer to the matplotlib mailing list (http://sourceforge.net/mail/?group_id=80706) for further help. We are grateful to everyone who has helped test this release. If you'd like to contribute or report a bug, you can do so at https://svn.enthought.com/enthought. From matthew.brett at gmail.com Thu Aug 3 07:40:43 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 3 Aug 2006 12:40:43 +0100 Subject: [SciPy-dev] Where to put data for tests? In-Reply-To: <1e2af89e0608030437v77289434y9ba8623be745f9fb@mail.gmail.com> References: <1e2af89e0608030437v77289434y9ba8623be745f9fb@mail.gmail.com> Message-ID: <1e2af89e0608030440l63c547c5s43cc7614717a6394@mail.gmail.com> Dear sci-pies, I am sorry if I have missed the obvious answer to this question, but... Some of us are working on unit tests for the scipy.io.loadmat routine. We would like to provide the tests with simple real examples of matlab format .mat files, generated by various versions of matlab. The questions are, where is the best place to put such files, and how should they best be plumbed into setup.py? Should they for example be package data? We had thought that a subdirectory of tests, called 'data' would be a good place for them. Does this seem sensible? Thanks a lot, Matthew From jstrunk at enthought.com Thu Aug 3 10:21:38 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 3 Aug 2006 09:21:38 -0500 Subject: [SciPy-dev] New ISP Message-ID: <200608030921.38594.jstrunk@enthought.com> Good Morning, I am pleased to announce that Enthought is now using a new ISP. We have a 10 megabit internet connection from our in building provider now. That is 6.6 times faster than our old T1. Downloads of Enthon and Scipy should be much faster now. If you find that you are connecting at slow speeds, it may be that your ISP's DNS cache has not recognized our DNS changes yet. Most should have changed within 20 minutes of the switchover. Some ISPs may take a day, and very few will take a few weeks. Thanks, Jeff Strunk IT Administrator Enthought, Inc. 
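On the test-data question above: one common way to plumb a tests/data directory into a numpy.distutils-based setup.py is Configuration.add_data_dir, which installs the files as package data next to the test modules. A minimal sketch, assuming a subpackage named 'io' with its .mat files under io/tests/data (the names here are illustrative, not the actual scipy.io configuration):
---------------
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('io', parent_package, top_path)
    # install everything under io/tests/data alongside the test modules,
    # so the tests can find the .mat files relative to their own __file__
    config.add_data_dir('tests/data')
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)
---------------
At run time a test can then build a path with os.path.join(os.path.dirname(__file__), 'data', 'some_file.mat').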
From nwagner at iam.uni-stuttgart.de Thu Aug 3 11:43:55 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Aug 2006 17:43:55 +0200 Subject: [SciPy-dev] Flop counter for numpy/scipy Message-ID: Hi all, Is there a way to count flops in numpy/scipy ? I mean, how can I compare two algorithms ? Nils From cookedm at physics.mcmaster.ca Thu Aug 3 17:21:49 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 3 Aug 2006 17:21:49 -0400 Subject: [SciPy-dev] Flop counter for numpy/scipy In-Reply-To: References: Message-ID: <20060803172149.647723e6@arbutus.physics.mcmaster.ca> On Thu, 03 Aug 2006 17:43:55 +0200 "Nils Wagner" wrote: > Hi all, > > Is there a way to count flops in numpy/scipy ? > I mean, how can I compare two algorithms ? Easiest is to just time them. Realistically, that's what you're worried about, not how many flops they take. If you need the O(.) behaviour, plot it for several sizes of inputs, and see what it fits to. If you really want to count flops, you've got two options: make a number type that keeps track of operations done on it, then use object arrays, or make an array class that does the same; you'll likely need __getitem__ to return a number type like in the first case. Or something like that. You'll run into trouble if you can't pass arbitrary arrays, or object arrays. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jstrunk at enthought.com Fri Aug 4 13:00:14 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Fri, 4 Aug 2006 12:00:14 -0500 Subject: [SciPy-dev] scipy site will be back up real soon now Message-ID: <200608041200.14489.jstrunk@enthought.com> Hello, The automatic update for apache had different behavior than the previous version. I am compiling a fixed version, so it should work again soon. Sorry for the inconvenience. Jeff Strunk IT Administrator Enthought, Inc. From oliphant.travis at ieee.org Fri Aug 4 22:06:18 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 04 Aug 2006 20:06:18 -0600 Subject: [SciPy-dev] SciPy SVN does not work with NumPy SVN for a bit Message-ID: <44D3FD1A.8060802@ieee.org> I'm aware that SciPy SVN does not work with NumPy SVN.
This will be fixed by improving the use of oldnumeric in SciPy in a few hours unless somebody beats me to it. -Travis From bhendrix at enthought.com Mon Aug 7 12:11:24 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 07 Aug 2006 11:11:24 -0500 Subject: [SciPy-dev] SciPy 2006 LiveCD update Message-ID: <44D7662C.90508@enthought.com> In order to prepare the LiveCD's, I need to build them from source tomorrow. Is anyone planning on demo'ing anything that has been added since the SciPy 0.5.0 and NumPy beta 1 releases? If all goes as expected, the LiveCD ISO will also be available Wednesday via bittorrent. Bryce From pebarrett at gmail.com Tue Aug 8 09:59:55 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Tue, 8 Aug 2006 09:59:55 -0400 Subject: [SciPy-dev] Fwd: 206 AstroPy moderator request(s) waiting In-Reply-To: References: Message-ID: <40e64fa20608080659i4bd6ee00vdd7dbeeb86b62538@mail.gmail.com> Can anyone on this list explain to me what is going on here? I am the primary adminstrator for this mailing list. I keep getting these messages sent to me. However, when I logon to the administrator's web page, none of these messages are there. -- Paul ---------- Forwarded message ---------- From: astropy-bounces at scipy.net Date: Aug 8, 2006 9:00 AM Subject: 206 AstroPy moderator request(s) waiting To: astropy-owner at scipy.net The AstroPy at scipy.net mailing list has 206 request(s) waiting for your consideration at: http://www.scipy.net/mailman/admindb/astropy Please attend to this at your earliest convenience. This notice of pending requests, if any, will be sent out daily. Pending posts: From: qfoua at egm.lib.rochester.edu on Tue May 30 12:07:10 2006 Subject: Amazing Opportunity for Americans Cause: Post by non-member to a members-only list From: prugmi at dc12.petsmart.com on Tue May 30 12:08:24 2006 Subject: Quality Home Loans simplified Cause: Post by non-member to a members-only list [etc.] From fperez.net at gmail.com Tue Aug 8 14:55:15 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 8 Aug 2006 12:55:15 -0600 Subject: [SciPy-dev] Fwd: 206 AstroPy moderator request(s) waiting In-Reply-To: <40e64fa20608080659i4bd6ee00vdd7dbeeb86b62538@mail.gmail.com> References: <40e64fa20608080659i4bd6ee00vdd7dbeeb86b62538@mail.gmail.com> Message-ID: On 8/8/06, Paul Barrett wrote: > Can anyone on this list explain to me what is going on here? I am the > primary adminstrator for this mailing list. I keep getting these > messages sent to me. However, when I logon to the administrator's web > page, none of these messages are there. > > -- Paul > > ---------- Forwarded message ---------- > From: astropy-bounces at scipy.net > Date: Aug 8, 2006 9:00 AM > Subject: 206 AstroPy moderator request(s) waiting > To: astropy-owner at scipy.net > > > The AstroPy at scipy.net mailing list has 206 request(s) waiting for your > consideration at: > > http://www.scipy.net/mailman/admindb/astropy > > Please attend to this at your earliest convenience. This notice of > pending requests, if any, will be sent out daily. [...] Something changed in the scipy mailing list setup a few weeks ago, I also started receiving these for the two ipython lists. I'm just ignoring them for now, but it would be nice if the old behavior could be restored (I never had any problems with it). 
Cheers, f From nwagner at iam.uni-stuttgart.de Sun Aug 13 06:08:14 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 13 Aug 2006 12:08:14 +0200 Subject: [SciPy-dev] Trouble with check loadmat on 64 bit machines Message-ID: <44DEFA0E.7080908@iam.uni-stuttgart.de> Hi all, I get 16 errors with >>> numpy.__version__ '1.0b2.dev3005' >>> scipy.__version__ '0.5.0.2158' Three typical messages are below Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in cc self._check_case(name, expected) File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 69, in _check_case matdict = loadmat(f) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 801, in loadmat thisdict = _loadv5(fid,basename) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 739, in _loadv5 el, varname, unused = _get_element(fid, return_name_dtype=True) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 724, in _get_element el, name = _parse_mimatrix(fid,numbytes) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 605, in _parse_mimatrix result = zeros(length, object) MemoryError Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in cc self._check_case(name, expected) File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 69, in _check_case matdict = loadmat(f) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 801, in loadmat thisdict = _loadv5(fid,basename) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 739, in _loadv5 el, varname, unused = _get_element(fid, return_name_dtype=True) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 724, in _get_element el, name = _parse_mimatrix(fid,numbytes) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 575, in _parse_mimatrix result = squeeze(transpose(reshape(result,tupdims))) File "/usr/lib64/python2.4/site-packages/numpy/core/fromnumeric.py", line 62, in reshape return reshape(newshape, order=order) ValueError: total size of new array must
be unchanged Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in cc self._check_case(name, expected) File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 69, in _check_case matdict = loadmat(f) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 801, in loadmat thisdict = _loadv5(fid,basename) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 739, in _loadv5 el, varname, unused = _get_element(fid, return_name_dtype=True) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 724, in _get_element el, name = _parse_mimatrix(fid,numbytes) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 643, in _parse_mimatrix rowind = _get_element(fid) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 718, in _get_element el = fid.read(numbytes,miDataTypes[dtype][2],c_is_b=1) File "/usr/lib64/python2.4/site-packages/scipy/io/mio.py", line 209, in read raise ValueError, "When c_is_b is non-zero then " \ ValueError: When c_is_b is non-zero then count is bytes and must be multiple of basic size. Nils From matthew.brett at gmail.com Sun Aug 13 06:17:55 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 13 Aug 2006 11:17:55 +0100 Subject: [SciPy-dev] Trouble with check loadmat on 64 bit machines In-Reply-To: <44DEFA0E.7080908@iam.uni-stuttgart.de> References: <44DEFA0E.7080908@iam.uni-stuttgart.de> Message-ID: <1e2af89e0608130317j682f0e98x764aa4a681a7d665@mail.gmail.com> Hi, On 8/13/06, Nils Wagner wrote: > Hi all, > > I get 16 errors with > > >>> numpy.__version__ > '1.0b2.dev3005' > >>> scipy.__version__ > '0.5.0.2158' Thanks for the report - some of us have been working on loadmat recently, and have added unit tests, which expose this problem. I have a 64 bit machine and will look into it tomorrow. Best, Matthew From nwagner at iam.uni-stuttgart.de Sun Aug 13 06:28:15 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 13 Aug 2006 12:28:15 +0200 Subject: [SciPy-dev] [SciPy-user] Call for testing In-Reply-To: References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> <9ADBF898-D18F-4957-BA41-9CDC09D597EC@arachnedesign.net> <44DE7A0C.1060401@csun.edu> Message-ID: <44DEFEBF.7000505@iam.uni-stuttgart.de> Nils Wagner wrote: > On Sat, 12 Aug 2006 18:02:04 -0700 > Stephen Walton wrote: > >> Steve Lianoglou wrote: >> >> >>>> Thank you all for running the test ! >>>> >>>> Finally I have disabled ATLAS by >>>> export ATLAS=None >>>> >>>> Good news is that I get the correct result. >>>> So I guess it's definitely an ATLAS issue. >>>> >>>> >>>> >> Yes, but I think the real difficulty is ATLAS 3.6.0 vs. >> 3.7.11.
I would >> like to see some kind of official statement, I suppose, >> about which >> version of ATLAS scipy builds best with. 3.6.0 is >> stable but very old; >> Clint Whaley is recommending use of the 3.7 series. The >> good news is >> that he has a grant to help pay him for the time it will >> take to get a >> new stable release out. Nils, it might be worth trying >> ATLAS 3.7.13, >> which has a completely new (and much improved) >> configuration, but READ >> THE DOCS first. Building 3.7.13 is very different from >> building any >> previous versions. >> >> Frankly, I need stability more than the last iota of >> speed right now, so >> I'm using the Fedora Extras 3.6.0 ATLAS binaries with >> Scipy, which gives >> adequate performance and scipy.test(1) finishes with no >> errors. >> >> Steve >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > Steve, > > I have used 3.7.11 and the problem is still open. > > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi all, The different results disappeared misteriously ! Something must happened to numpy/scipy between ATLAS 3.6.0 >>> scipy.__version__ '0.5.0.2147' >>> numpy.__version__ '1.0b2.dev2951' >>> and ATLAS 3.6.0 >>> numpy.__version__ '1.0b2.dev3005' >>> scipy.__version__ '0.5.0.2158' >>> Note that ATLAS has been retained unchanged !!! Nils From matthew.brett at gmail.com Mon Aug 14 13:12:07 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 Aug 2006 18:12:07 +0100 Subject: [SciPy-dev] Trouble with check loadmat on 64 bit machines In-Reply-To: <1e2af89e0608130317j682f0e98x764aa4a681a7d665@mail.gmail.com> References: <44DEFA0E.7080908@iam.uni-stuttgart.de> <1e2af89e0608130317j682f0e98x764aa4a681a7d665@mail.gmail.com> Message-ID: <1e2af89e0608141012q5632239arae79cfe20dd6c090@mail.gmail.com> Hi, > > I get 16 errors with > > > > >>> numpy.__version__ > > '1.0b2.dev3005' > > >>> scipy.__version__ > > '0.5.0.2158' > > Thanks for the report - some of us have been working on loadmat > recently, and have added unit tests, which expose this problem. I > have a 64 bit machine and will look into it tomorrow. It proved an easy fix; patch attached to: http://projects.scipy.org/scipy/scipy/ticket/249 Best, Matthew From christopher.e.kees at erdc.usace.army.mil Wed Aug 16 16:34:49 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Wed, 16 Aug 2006 15:34:49 -0500 Subject: [SciPy-dev] Mac OS X and gcc 4.2 Message-ID: Good Afternoon, I'm trying to port a set of finite element tools and linear/ nonlinear multilevel solvers to the new numpy and scipy. I'm having trouble even building the trunk of numpy and scipy on Mac OS X using a recent build of gcc (version 4.2.0 20060710) . Is it pointless for me to even try using something besides apple's gcc 3.3? I'm just getting simple unrecognized option errors, but I don't know how to modify the configuration to get rid of them or ignore them. Could anyone give me some pointers here? Thanks, Chris errors with gcc 4.2: ... 
gcc: _configtest.c gcc: unrecognized option '-no-cpp-precomp' cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-Wno-long-double" gcc: unrecognized option '-no-cpp-precomp' cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-Wno-long-double" failure. removing: _configtest.c _configtest.o ... From cookedm at physics.mcmaster.ca Wed Aug 16 20:37:17 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 16 Aug 2006 20:37:17 -0400 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: References: Message-ID: <20060817003717.GA9112@arbutus.physics.mcmaster.ca> On Wed, Aug 16, 2006 at 03:34:49PM -0500, Chris Kees wrote: > Good Afternoon, > > I'm trying to port a set of finite element tools and linear/ > nonlinear multilevel solvers to the new numpy and scipy. I'm > having trouble even building the trunk of numpy and scipy on Mac > OS X using a recent build of gcc (version 4.2.0 20060710) . Is > it pointless for me to even try using something besides apple's > gcc 3.3? I'm just getting simple unrecognized option errors, but I > don't know how to modify the configuration to get rid of them or > ignore them. Could anyone give me some pointers here? > > Thanks, > Chris > > errors with gcc 4.2: > > ... > gcc: _configtest.c > gcc: unrecognized option '-no-cpp-precomp' > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-Wno-long-double" > gcc: unrecognized option '-no-cpp-precomp' > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-Wno-long-double" > failure. > removing: _configtest.c _configtest.o Annoying, eh? The *easiest* way I've found to use a different compiler that takes a different set of flags is to make a compiler wrapper script, that deletes the flags your compiler won't recognize, and add others you think you need. I've attached the one I use on my Mac. Just set CC to it before building: $ CC=gcc-wrapper python setup.py build Otherwise, you'll find that you're running into the problem that Python adds flags from its Makefile (kept at lib/python2.4/Config/Makefile) that there is no way to override (BASECFLAGS and OPT -- see distutils.sysconfig). Setting CFLAGS only adds flags to the compiler command. Alternatively, look in numpy/distutils/ccompiler.py at the top. Edit _new_init_posix() to override the 'OPT' and 'BASECFLAGS' config_var (the one for 'OPT' is already there), and uncomment the line below that function that sets distutils.sysconfig._init_posix to this new routine. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed Aug 16 20:57:34 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 16 Aug 2006 20:57:34 -0400 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: <20060817003717.GA9112@arbutus.physics.mcmaster.ca> References: <20060817003717.GA9112@arbutus.physics.mcmaster.ca> Message-ID: <20060817005734.GA9312@arbutus.physics.mcmaster.ca> On Wed, Aug 16, 2006 at 08:37:17PM -0400, David M. Cooke wrote: > Annoying, eh? 
The *easiest* way I've found to use a different compiler > that takes a different set of flags is to make a compiler wrapper > script, that deletes the flags your compiler won't recognize, and add > others you think you need. I've attached the one I use on my Mac. probably help if I attached it ... -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -------------- next part --------------
#!/usr/bin/env python
import sys
import os

realprog = "/opt/local/bin/gcc-dp-4.2"

options_to_remove = set(['-no-cpp-precomp',
                         '-Wno-long-double',
                         '-mno-fused-madd',
                         '-faltivec',
                         ])

args = [a for a in sys.argv[1:] if a not in options_to_remove]
try:
    while 1:
        i = args.index('-arch')
        del args[i:i+2]
except ValueError:
    pass
args.insert(0, '-fno-strict-aliasing')
print args
os.execvp(realprog, [realprog] + args)

From christopher.e.kees at erdc.usace.army.mil Thu Aug 17 11:58:55 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Thu, 17 Aug 2006 10:58:55 -0500 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: <20060817005734.GA9312@arbutus.physics.mcmaster.ca> References: <20060817003717.GA9112@arbutus.physics.mcmaster.ca> <20060817005734.GA9312@arbutus.physics.mcmaster.ca> Message-ID: <1F0FCE76-5C4B-4EFB-80C9-251982EF1F2C@erdc.usace.army.mil> David, Thanks. That got me part of the way through along with similar g++ and gfortran wrappers like what you gave me. That's nuts that those options are hardwired in. Now I'm getting more link errors because it's trying to build shared libraries without any of the options needed to build them from python extension modules: gfortran-wrapper -m32 -mpowerpc -maltivec -framework Accelerate build/temp.macosx-10.4-fat-2.4/Lib/integrate/_quadpackmodule.o -L/usr/local/lib/gcc/powerpc-apple-darwin8.7.0/4.2.0 -Lbuild/temp.macosx-10.4-fat-2.4 -lquadpack -llinpack_lite -lmach -lgfortran -o build/lib.macosx-10.4-fat-2.4/scipy/integrate/_quadpack.so /usr/bin/ld: Undefined symbols: _PyArg_ParseTuple _PyCObject_AsVoidPtr _PyCObject_Type Anyway, all the tests passed on numpy so I may have to put scipy on the back burner for a while. I just need pysparse and my code ported to the new numpy. FYI, apple's latest default compiler (4.0.1) builds numpy without any trouble on both my intel and power pc macs and all the tests pass, contrary to what I was seeing using gcc 3.3. Chris On Aug 16, 2006, at 7:57 PM, David M. Cooke wrote: > On Wed, Aug 16, 2006 at 08:37:17PM -0400, David M. Cooke wrote: >> Annoying, eh? The *easiest* way I've found to use a different >> compiler >> that takes a different set of flags is to make a compiler wrapper >> script, that deletes the flags your compiler won't recognize, and add >> others you think you need. I've attached the one I use on my Mac. > > probably help if I attached it ... > > -- > |>|\/|< > /--------------------------------------------------------------------- > -----\ > |David M. Cooke http:// > arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From cookedm at physics.mcmaster.ca Thu Aug 17 18:08:00 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Thu, 17 Aug 2006 18:08:00 -0400 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: <1F0FCE76-5C4B-4EFB-80C9-251982EF1F2C@erdc.usace.army.mil> References: <20060817003717.GA9112@arbutus.physics.mcmaster.ca> <20060817005734.GA9312@arbutus.physics.mcmaster.ca> <1F0FCE76-5C4B-4EFB-80C9-251982EF1F2C@erdc.usace.army.mil> Message-ID: <20060817220800.GA22471@arbutus.physics.mcmaster.ca> On Thu, Aug 17, 2006 at 10:58:55AM -0500, Chris Kees wrote: > David, > > Thanks. That got me part of the way through along with similar g+ > + and gfortran wrappers like what you gave me. That's nuts that > those options are hardwired in. Now I'm getting more link errors > because it's trying to build shared libraries without any of the > options needed to build them from python extension modules: > > gfortran-wrapper -m32 -mpowerpc -maltivec -framework Accelerate build/ > temp.macosx-10.4-fat-2.4/Lib/integrate/_quadpackmodule.o -L/usr/local/ > lib/gcc/powerpc-apple-darwin8.7.0/4.2.0 -Lbuild/temp.macosx-10.4- > fat-2.4 -lquadpack -llinpack_lite -lmach -lgfortran -o build/ > lib.macosx-10.4-fat-2.4/scipy/integrate/_quadpack.so > /usr/bin/ld: Undefined symbols: > _PyArg_ParseTuple > _PyCObject_AsVoidPtr > _PyCObject_Type Hmm, that looks like it's trying to make a program. You shouldn't need a gfortran wrapper, as everything for the Fortran compiler you can override (b/c we wrote that as part of numpy.distutils; the C part comes from Python distutils). I have this in my ~/.pydistutils.cfg so I don't have to remember to set environmet variables or add command-line switches: [config_fc] fcompiler=gnu95 f77exec=gfortran-dp-4.2 f90exec=gfortran-dp-4.2 opt = -g -Wall -O3 If you're on PPC, I'd suggest still using g77 3.4 (you can grab a copy from http://hpc.sourceforge.net/). > FYI, apple's latest default compiler (4.0.1) builds numpy without > any trouble on both my intel and power pc macs and all the tests > pass, contrary to what I was seeing using gcc 3.3. I build scipy with Apple's gcc 4.0.1, but with gfortran 4.2. And 3.3 is *only* for PPC, anyways. I'm on an Intel Mac, btw. Haven't tried building scipy on my iBook for a while. I do have an idea on how to make a Universal build of Scipy for Tiger that wouldn't require you having any other libraries installed. I'll try putting it together sometime soon :) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From w.northcott at unsw.edu.au Thu Aug 17 21:46:24 2006 From: w.northcott at unsw.edu.au (Bill Northcott) Date: Fri, 18 Aug 2006 11:46:24 +1000 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: References: Message-ID: <6AC79249-D115-4A3E-90C9-50D49BC31340@unsw.edu.au> On 18/08/2006, at 3:00 AM, Chris Kees wrote: > having trouble even building the trunk of numpy and scipy on Mac > OS X using a recent build of gcc (version 4.2.0 20060710) . Why use such a bleeding edge compiler? The FSF compilers don't even work very well on Darwin, which is only of secondary interest to the FSF developers. OTOH Apple incorporate a number of Darwin optimisations into their code which does not reach the FSF sources until later if at all. Even worse current Apple compilers are based on gcc-4.0.x so a lot of Apple stuff will be in FSF 4.0.3 but none of it in 4.2 which is very different. You can use the standard Apple compilers with a Fortran from hpc.sourceforge.net. 
Or just get the current R 2.3.1 binary package which includes gcc-4.0.3 with gfortran. Although this appears to be built from FSF sources it is universal and can target i386, ppc and ppc64. > errors with gcc 4.2: > > ... > gcc: _configtest.c > gcc: unrecognized option '-no-cpp-precomp' > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-Wno-long-double" As others have observed the problem is the configure script. As the autoconf macro guidelines make clear one should test for features not software versions. The script clearly tests for Darwin and assumes an Apple compiler. The options causing the errors are all Apple specific. I think the -no-cpp-precomp option is now redundant, -arch is only necessary if you are trying to target a different architecture to the build host. I am surprised it is used at all. While -Wno-long- double is sort of important. The size of long doubles has changed recently on Darwin so it is useful to now if you have them in your code. Bill Northcott From cookedm at physics.mcmaster.ca Thu Aug 17 22:56:46 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 17 Aug 2006 22:56:46 -0400 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: <6AC79249-D115-4A3E-90C9-50D49BC31340@unsw.edu.au> References: <6AC79249-D115-4A3E-90C9-50D49BC31340@unsw.edu.au> Message-ID: <20060818025646.GA29110@arbutus.physics.mcmaster.ca> On Fri, Aug 18, 2006 at 11:46:24AM +1000, Bill Northcott wrote: > On 18/08/2006, at 3:00 AM, Chris Kees wrote: > > having trouble even building the trunk of numpy and scipy on Mac > > OS X using a recent build of gcc (version 4.2.0 20060710) . > > Why use such a bleeding edge compiler? On Intel Macs, you *need* the latest for gfortran (within in the last several months, at least). > The FSF compilers don't even work very well on Darwin, which is only > of secondary interest to the FSF developers. OTOH Apple incorporate > a number of Darwin optimisations into their code which does not reach > the FSF sources until later if at all. Even worse current Apple > compilers are based on gcc-4.0.x so a lot of Apple stuff will be in > FSF 4.0.3 but none of it in 4.2 which is very different. > > You can use the standard Apple compilers with a Fortran from > hpc.sourceforge.net. Or just get the current R 2.3.1 binary package > which includes gcc-4.0.3 with gfortran. Although this appears to be > built from FSF sources it is universal and can target i386, ppc and > ppc64. Note that the gfortran from hpc.sf.net was built August 2006 -- it is a bleeding edge compiler :) > > > errors with gcc 4.2: > > > > ... > > gcc: _configtest.c > > gcc: unrecognized option '-no-cpp-precomp' > > cc1: error: unrecognized command line option "-arch" > > cc1: error: unrecognized command line option "-arch" > > cc1: error: unrecognized command line option "-Wno-long-double" > > As others have observed the problem is the configure script. As the > autoconf macro guidelines make clear one should test for features not > software versions. The script clearly tests for Darwin and assumes > an Apple compiler. The options causing the errors are all Apple > specific. > > I think the -no-cpp-precomp option is now redundant, -arch is only > necessary if you are trying to target a different architecture to the > build host. I am surprised it is used at all. While -Wno-long- > double is sort of important. 
The size of long doubles has changed > recently on Darwin so it is useful to now if you have them in your code. The problem is these options come from the Python distutils, which is expecting that extensions are built with the same compiler that it was built with. I've run into the same problems with using gcc and Compaq cc on an Alpha machine :) For numpy.distutils, we could add something for this. It's on my list :D -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From david at ar.media.kyoto-u.ac.jp Fri Aug 18 04:32:14 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 17:32:14 +0900 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 Message-ID: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> Hi there, I noticed recently that when using the fft module of scipy, it is much slower (5-10 folds) than numpy for complex inputs (only in the 1d case) when linking to fftw3. This problem is reported on the ticket #1 for scipy : http://projects.scipy.org/scipy/scipy/ticket/1 I am not sure, because the code is a bit difficult to read, but it looks like in the case of complex input + fftw3, the plan is always recomputed for each call to zfft (file:zfft.c), whereas in the real case or in the complexe case + fftw2, the function drfft(file:drfft.c), called from zrfft (file:zrfft.c) is calling a plan which is cached. I am trying to see how the caching is done, but I am not sure I will have the time to make it work for fftw3. David P.S: I am wondering if there is a reason why the code is written with all those #ifdef ? Because it makes the hacking of the module quite difficult. Why not implementing each function for each fft library, and wraps them around in the header files ? Is is just a time constraint, or is there another reason ? From cookedm at physics.mcmaster.ca Fri Aug 18 05:03:53 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 18 Aug 2006 05:03:53 -0400 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> Message-ID: <20060818090353.GA29917@arbutus.physics.mcmaster.ca> On Fri, Aug 18, 2006 at 05:32:14PM +0900, David Cournapeau wrote: > Hi there, > > I noticed recently that when using the fft module of scipy, it is > much slower (5-10 folds) than numpy for complex inputs (only in the 1d > case) when linking to fftw3. This problem is reported on the ticket #1 > for scipy : http://projects.scipy.org/scipy/scipy/ticket/1 > > I am not sure, because the code is a bit difficult to read, but it looks > like in the case of complex input + fftw3, the plan is always recomputed > for each call to zfft (file:zfft.c), whereas in the real case or in the > complexe case + fftw2, the function drfft(file:drfft.c), called from > zrfft (file:zrfft.c) is calling a plan which is cached. I am trying to > see how the caching is done, but I am not sure I will have the time to > make it work for fftw3. Well, for fftw3 it uses FFTW_ESTIMATE for the plan. So it does a cheap estimate of what it needs. I tried changing it to FFTW_MEASURE, but it went slower. I'd have to dig into fftw3's source to see how it does the caching of wisdom. Basically, after playing around, I still have no idea why it's slow. 
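For reference, fftw3's "wisdom" is its own persistent planner knowledge, separate from any plan caching done on the scipy side: it can be exported to a file and imported on the next run, so the expensive part of FFTW_MEASURE planning is only paid once per transform size. Below is a minimal standalone sketch of that round trip (not scipy code; the wisdom file name is arbitrary):

#include <stdio.h>
#include <string.h>
#include <fftw3.h>

int main(void)
{
    int n = 1024;
    FILE *f;
    fftw_complex *buf;
    fftw_plan p;

    /* load any wisdom accumulated by previous runs */
    f = fopen("fftw_wisdom.dat", "r");
    if (f) {
        fftw_import_wisdom_from_file(f);
        fclose(f);
    }

    buf = fftw_malloc(sizeof(fftw_complex) * n);
    memset(buf, 0, sizeof(fftw_complex) * n);

    /* with wisdom already available for this size, FFTW_MEASURE planning
       is nearly free; without it, the full measurement happens here */
    p = fftw_plan_dft_1d(n, buf, buf, FFTW_FORWARD, FFTW_MEASURE);
    fftw_execute(p);

    /* save what was learned for the next run */
    f = fopen("fftw_wisdom.dat", "w");
    if (f) {
        fftw_export_wisdom_to_file(f);
        fclose(f);
    }

    fftw_destroy_plan(p);
    fftw_free(buf);
    return 0;
}

(fftw_import_wisdom_from_file returns zero if the file could not be parsed as wisdom; the sketch ignores that for brevity.)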
> P.S: I am wondering if there is a reason why the code is written with > all those #ifdef ? Because it makes the hacking of the module quite > difficult. Why not implementing each function for each fft library, and > wraps them around in the header files ? Is is just a time constraint, or > is there another reason ? I was looking at that code and was wondering the same thing :-) I'm thinking of writing separate wrappers for each library, so you can get at each one specifically (scipy.fftpack.impl.fftw3, for instance). This would be a big win, I think, for fftw3, where it would then be easier to handle the wisdom. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From david at ar.media.kyoto-u.ac.jp Fri Aug 18 06:01:17 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 19:01:17 +0900 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: <20060818090353.GA29917@arbutus.physics.mcmaster.ca> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <20060818090353.GA29917@arbutus.physics.mcmaster.ca> Message-ID: <44E58FED.6060805@ar.media.kyoto-u.ac.jp> David M. Cooke wrote: > On Fri, Aug 18, 2006 at 05:32:14PM +0900, David Cournapeau wrote: > >> Hi there, >> >> I noticed recently that when using the fft module of scipy, it is >> much slower (5-10 folds) than numpy for complex inputs (only in the 1d >> case) when linking to fftw3. This problem is reported on the ticket #1 >> for scipy : http://projects.scipy.org/scipy/scipy/ticket/1 >> >> I am not sure, because the code is a bit difficult to read, but it looks >> like in the case of complex input + fftw3, the plan is always recomputed >> for each call to zfft (file:zfft.c), whereas in the real case or in the >> complexe case + fftw2, the function drfft(file:drfft.c), called from >> zrfft (file:zrfft.c) is calling a plan which is cached. I am trying to >> see how the caching is done, but I am not sure I will have the time to >> make it work for fftw3. >> > > Well, for fftw3 it uses FFTW_ESTIMATE for the plan. So it does a cheap > estimate of what it needs. Well, it depends what you mean by cheap. If compared to FFTW_MEASURE, yes. But compared to pre-computing the plan, and then doing multiple fftw, then it is not cheap. Computing the fftw is negligeable compared to computing the plan ! I paste a c file which shows the difference, and which emulates (the function test_noncached) the scipy.fft module (emulates as I understand it): it gives the computation for a a complex array of 1024 elements, iterated 1000 times. First, the number of cycles for all iteration, then per fft on average, and min of all iteration. 
The only difference between cached and non cached is that the plan is
computed again for each iteration in the non cached case (as done by
scipy.fft now in the case of being linked with fftw3):

output:

testing cached
	 cycle is 69973016, 69973 per execution, min is 66416
testing non cached
	 cycle is 946186540, 946186 per execution, min is 918036

Code: (cycle.h is available here: http://www.fftw.org/cycle.h, and is the
code used for the estimation of best code in plans by fftw3)

#include <stdio.h>
#include <stdlib.h>
#include <fftw3.h>
#include "cycle.h"

#define MALLOC(size) fftw_malloc((size))
#define FREE(ptr) fftw_free((ptr))

typedef struct {
    double total;
    double min;
} result;

result test_cached(size_t size, size_t niter);
result test_noncached(size_t size, size_t niter);

int main(void)
{
    size_t niter = 1e3;
    size_t size = 1024;
    result res;

    printf("testing cached\n");
    res = test_cached(size, niter);
    printf("\t cycle is %d, %d per execution, min is %d\n",
           (size_t)res.total, (size_t)res.total/niter, (size_t)res.min);

    printf("testing non cached\n");
    res = test_noncached(size, niter);
    printf("\t cycle is %d, %d per execution, min is %d\n",
           (size_t)res.total, (size_t)res.total/niter, (size_t)res.min);

    fftw_cleanup();
    return 0;
}

result test_cached(size_t size, size_t niter)
{
    fftw_complex *in, *out;
    fftw_plan plan;
    ticks t0, t1;
    double res, min, acc;
    size_t i, j;
    result r;

    in = MALLOC(sizeof(*in)*size);
    out = MALLOC(sizeof(*out)*size);

    plan = fftw_plan_dft_1d(size, in, out, 1, FFTW_ESTIMATE);

    acc = 0;
    min = 1e100;
    for(i = 0; i < niter; ++i) {
        for(j = 0; j < size; ++j) {
            in[j][0] = (0.5 - (double)rand() / RAND_MAX);
            in[j][1] = 0.1*j;
        }
        t0 = getticks();
        fftw_execute(plan);
        t1 = getticks();
        res = elapsed(t1, t0);
        if (res < min) {
            min = res;
        }
        acc += res;
    }
    r.total = acc;
    r.min = min;

    fftw_destroy_plan(plan);
    FREE(in);
    FREE(out);

    return r;
}

result test_noncached(size_t size, size_t niter)
{
    fftw_complex *in, *out;
    fftw_plan plan;
    ticks t0, t1;
    double res, min, acc;
    size_t i, j;
    result r;

    in = MALLOC(sizeof(*in)*size);
    out = MALLOC(sizeof(*out)*size);

    acc = 0;
    min = 1e100;
    for(i = 0; i < niter; ++i) {
        for(j = 0; j < size; ++j) {
            in[j][0] = (0.5 - (double)rand() / RAND_MAX);
            in[j][1] = 0.1*j;
        }
        t0 = getticks();
        plan = fftw_plan_dft_1d(size, in, out, 1, FFTW_ESTIMATE);
        fftw_execute(plan);
        fftw_destroy_plan(plan);
        t1 = getticks();
        res = elapsed(t1, t0);
        if (res < min) {
            min = res;
        }
        acc += res;
    }
    r.total = acc;
    r.min = min;

    FREE(in);
    FREE(out);

    return r;
}

Compile by:

gcc -c test.c -o test.o -W -Wall
gcc -lfftw3 -lm -o test test.o

From cookedm at physics.mcmaster.ca  Fri Aug 18 06:47:41 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Fri, 18 Aug 2006 06:47:41 -0400
Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3
In-Reply-To: <44E58FED.6060805@ar.media.kyoto-u.ac.jp>
References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp>
	<20060818090353.GA29917@arbutus.physics.mcmaster.ca>
	<44E58FED.6060805@ar.media.kyoto-u.ac.jp>
Message-ID: <20060818104740.GA30858@arbutus.physics.mcmaster.ca>

On Fri, Aug 18, 2006 at 07:01:17PM +0900, David Cournapeau wrote:
> David M. Cooke wrote:
> > On Fri, Aug 18, 2006 at 05:32:14PM +0900, David Cournapeau wrote:
> >
> >> Hi there,
> >>
> >> I noticed recently that when using the fft module of scipy, it is
> >> much slower (5-10 folds) than numpy for complex inputs (only in the 1d
> >> case) when linking to fftw3.
This problem is reported on the ticket #1 > >> for scipy : http://projects.scipy.org/scipy/scipy/ticket/1 > >> > >> I am not sure, because the code is a bit difficult to read, but it looks > >> like in the case of complex input + fftw3, the plan is always recomputed > >> for each call to zfft (file:zfft.c), whereas in the real case or in the > >> complexe case + fftw2, the function drfft(file:drfft.c), called from > >> zrfft (file:zrfft.c) is calling a plan which is cached. I am trying to > >> see how the caching is done, but I am not sure I will have the time to > >> make it work for fftw3. > >> > > > > Well, for fftw3 it uses FFTW_ESTIMATE for the plan. So it does a cheap > > estimate of what it needs. > Well, it depends what you mean by cheap. If compared to FFTW_MEASURE, > yes. But compared to pre-computing the plan, and then doing multiple > fftw, then it is not cheap. Computing the fftw is negligeable compared > to computing the plan ! > > The only difference between cached and non cached is that the plan is > computed again for each iteration in the non cached case (as done by > scipy.fft now in the case of begin linked with fftw3): The problem is that the plan depends on the input arrays! Caching it won't help with Python, unless you can guarantee that the same arrays are passed to successive calls. Getting around that will mean digging into the guru interface, I think (ugh). I'll have a clearer idea of what we can and can not do once I dig into fftw3. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From david at ar.media.kyoto-u.ac.jp Fri Aug 18 07:29:00 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 20:29:00 +0900 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: <20060818104740.GA30858@arbutus.physics.mcmaster.ca> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <20060818090353.GA29917@arbutus.physics.mcmaster.ca> <44E58FED.6060805@ar.media.kyoto-u.ac.jp> <20060818104740.GA30858@arbutus.physics.mcmaster.ca> Message-ID: <44E5A47C.8060107@ar.media.kyoto-u.ac.jp> David M. Cooke wrote: > > > The problem is that the plan depends on the input arrays! Caching it > won't help with Python, unless you can guarantee that the same arrays > are passed to successive calls. > I know that, but it really make no sense to use fftw3 as it is used now... For moderate sizes, it is more than 10 times slower ! > Getting around that will mean digging into the guru interface, I think (ugh). > I tried a dirty hack using the function fftw_execute_dft, which executes a given plan for different arrays, given they have the same properties. The problem is that because of the obfuscated way fftpack is coded right now, it is difficult to track what is going on; I have a small test program which calls zfft directly from the module _fftpack.so, running it under valgrind shows no problem... So there is something going on in fftpack I don't understand. The other obvious thing is to copy the content of the array into a cached buffer, computing the fft on it, and recopying the result. This is stupid, but it is better than the current situation I think. I implemented this solution, the speed is much better, and tests succeed. Do you know how I am supposed to build a patch (I am not familiar with patch...) 
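As an aside on the fftw_execute_dft route mentioned above, here is a minimal standalone sketch (not the fftpack code; the cache structure and names are made up for illustration): create the plan once on a scratch buffer, then re-apply it to other arrays of the same length with the new-array execute call. The constraint that probably bites is that arrays passed to fftw_execute_dft must have the same in-place layout and alignment as the ones the plan was created with; allocating everything with fftw_malloc satisfies that.

#include <fftw3.h>

/* one cached in-place plan per transform size (illustrative only) */
typedef struct {
    int n;
    fftw_plan plan;
    fftw_complex *scratch;   /* buffer the plan was created on */
} plan_cache_entry;

plan_cache_entry new_entry(int n, int direction)
{
    plan_cache_entry e;
    e.n = n;
    e.scratch = fftw_malloc(sizeof(fftw_complex) * n);
    e.plan = fftw_plan_dft_1d(n, e.scratch, e.scratch,
                              direction > 0 ? FFTW_FORWARD : FFTW_BACKWARD,
                              FFTW_ESTIMATE);
    return e;
}

int main(void)
{
    int i, n = 1024;
    plan_cache_entry e = new_entry(n, 1);
    /* fftw_malloc gives the same alignment the plan was made with */
    fftw_complex *x = fftw_malloc(sizeof(fftw_complex) * n);

    for (i = 0; i < n; ++i) {
        x[i][0] = (double)i;
        x[i][1] = 0.0;
    }

    /* reuse the cached plan on a different, compatible array:
       no planning cost and no copy into a scratch buffer */
    fftw_execute_dft(e.plan, x, x);

    fftw_free(x);
    fftw_destroy_plan(e.plan);
    fftw_free(e.scratch);
    return 0;
}

If that works inside fftpack's cache macros, the two memcpy calls of the buffer-copy approach go away.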
David From christopher.e.kees at erdc.usace.army.mil Fri Aug 18 10:37:13 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Fri, 18 Aug 2006 09:37:13 -0500 Subject: [SciPy-dev] Mac OS X and gcc 4.2 In-Reply-To: <20060818025646.GA29110@arbutus.physics.mcmaster.ca> References: <6AC79249-D115-4A3E-90C9-50D49BC31340@unsw.edu.au> <20060818025646.GA29110@arbutus.physics.mcmaster.ca> Message-ID: On Aug 17, 2006, at 9:56 PM, David M. Cooke wrote: > On Fri, Aug 18, 2006 at 11:46:24AM +1000, Bill Northcott wrote: >> On 18/08/2006, at 3:00 AM, Chris Kees wrote: >>> having trouble even building the trunk of numpy and scipy on Mac >>> OS X using a recent build of gcc (version 4.2.0 20060710) . >> >> Why use such a bleeding edge compiler? > > On Intel Macs, you *need* the latest for gfortran (within in the last > several months, at least). > >> The FSF compilers don't even work very well on Darwin, which is only >> of secondary interest to the FSF developers. OTOH Apple incorporate >> a number of Darwin optimisations into their code which does not reach >> the FSF sources until later if at all. Even worse current Apple >> compilers are based on gcc-4.0.x so a lot of Apple stuff will be in >> FSF 4.0.3 but none of it in 4.2 which is very different. >> >> You can use the standard Apple compilers with a Fortran from >> hpc.sourceforge.net. Or just get the current R 2.3.1 binary package >> which includes gcc-4.0.3 with gfortran. Although this appears to be >> built from FSF sources it is universal and can target i386, ppc and >> ppc64. > > Note that the gfortran from hpc.sf.net was built August 2006 -- it > is a > bleeding edge compiler :) > I talked to the fellow who builds those compilers, and he's just doing what I'm doing: periodically building them from gcc development sources with the same configuration options I'm using. Up until recently I had been building python too because I needed features that weren't yet in mac python, maybe I need to go back to that approach. If I built python from the same compilers that would eliminate the problem below, wouldn't it? >>> errors with gcc 4.2: >>> >>> ... >>> gcc: _configtest.c >>> gcc: unrecognized option '-no-cpp-precomp' >>> cc1: error: unrecognized command line option "-arch" >>> cc1: error: unrecognized command line option "-arch" >>> cc1: error: unrecognized command line option "-Wno-long-double" >> >> As others have observed the problem is the configure script. As the >> autoconf macro guidelines make clear one should test for features not >> software versions. The script clearly tests for Darwin and assumes >> an Apple compiler. The options causing the errors are all Apple >> specific. >> >> I think the -no-cpp-precomp option is now redundant, -arch is only >> necessary if you are trying to target a different architecture to the >> build host. I am surprised it is used at all. While -Wno-long- >> double is sort of important. The size of long doubles has changed >> recently on Darwin so it is useful to now if you have them in your >> code. > > The problem is these options come from the Python distutils, which is > expecting that extensions are built with the same compiler that it was > built with. I've run into the same problems with using gcc and Compaq > cc on an Alpha machine :) I'm building numpy/scipy and a lot of other mixed language code on mac(ppc64, ppc32,intal), compaq, sgi, linux, and cray. In the past the easiest way to make things work has been to build the gnu compilers from a relatively recent source. 
If Apple is going to make their version of the gnu compilers incompatible with FSF (and not supply good fortran compilers), I'm sticking with FSF. Thanks for the help on this. If I can help with the distutils work, please let me know. Chris > For numpy.distutils, we could add something for this. It's on my > list :D > > -- > |>|\/|< > /--------------------------------------------------------------------- > -----\ > |David M. Cooke http:// > arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From david at ar.media.kyoto-u.ac.jp Sat Aug 19 05:40:35 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 19 Aug 2006 18:40:35 +0900 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: <44E5A47C.8060107@ar.media.kyoto-u.ac.jp> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <20060818090353.GA29917@arbutus.physics.mcmaster.ca> <44E58FED.6060805@ar.media.kyoto-u.ac.jp> <20060818104740.GA30858@arbutus.physics.mcmaster.ca> <44E5A47C.8060107@ar.media.kyoto-u.ac.jp> Message-ID: <44E6DC93.4030202@ar.media.kyoto-u.ac.jp> > I tried a dirty hack using the function fftw_execute_dft, which executes > a given plan for different arrays, given they have the same properties. > The problem is that because of the obfuscated way fftpack is coded right > now, it is difficult to track what is going on; I have a small test > program which calls zfft directly from the module _fftpack.so, running > it under valgrind shows no problem... So there is something going on in > fftpack I don't understand. > > The other obvious thing is to copy the content of the array into a > cached buffer, computing the fft on it, and recopying the result. This > is stupid, but it is better than the current situation I think. I > implemented this solution, the speed is much better, and tests succeed. > Do you know how I am supposed to build a patch (I am not familiar with > patch...) I build a patch anyway; I am not sure the format is OK, as I am not familiar with patch/diff options: Before patch (last numpy and scipy SVN), scipy.fftpack.test(10) gives: Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | numpy | scipy | numpy ------------------------------------------------- 100 | 0.17 | 0.14 | 1.93 | 0.11 (secs for 7000 calls) 1000 | 0.12 | 0.16 | 1.12 | 0.14 (secs for 2000 calls) 256 | 0.28 | 0.28 | 3.09 | 0.22 (secs for 10000 calls) 512 | 0.37 | 0.48 | 3.82 | 0.38 (secs for 10000 calls) 1024 | 0.05 | 0.09 | 0.53 | 0.07 (secs for 1000 calls) 2048 | 0.10 | 0.16 | 0.88 | 0.15 (secs for 1000 calls) 4096 | 0.08 | 0.17 | 0.75 | 0.17 (secs for 500 calls) 8192 | 0.20 | 0.53 | 1.39 | 0.47 (secs for 500 calls) .... 
Inverse Fast Fourier Transform =============================================== | real input | complex input ----------------------------------------------- size | scipy | numpy | scipy | numpy ----------------------------------------------- 100 | 0.16 | 0.29 | 2.29 | 0.28 (secs for 7000 calls) 1000 | 0.13 | 0.27 | 1.22 | 0.26 (secs for 2000 calls) 256 | 0.29 | 0.57 | 3.43 | 0.51 (secs for 10000 calls) 512 | 0.41 | 0.84 | 4.14 | 0.77 (secs for 10000 calls) 1024 | 0.06 | 0.14 | 0.62 | 0.13 (secs for 1000 calls) 2048 | 0.10 | 0.26 | 0.91 | 0.25 (secs for 1000 calls) 4096 | 0.11 | 0.31 | 0.81 | 0.26 (secs for 500 calls) 8192 | 0.23 | 0.78 | 1.53 | 0.72 (secs for 500 calls) ....... After patch (last numpy and scipy SVN), scipy.fftpack.test(10) gives: Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | numpy | scipy | numpy ------------------------------------------------- 100 | 0.17 | 0.14 | 0.12 | 0.11 (secs for 7000 calls) 1000 | 0.12 | 0.16 | 0.10 | 0.14 (secs for 2000 calls) 256 | 0.28 | 0.28 | 0.20 | 0.20 (secs for 10000 calls) 512 | 0.36 | 0.47 | 0.27 | 0.36 (secs for 10000 calls) 1024 | 0.05 | 0.08 | 0.05 | 0.07 (secs for 1000 calls) 2048 | 0.09 | 0.16 | 0.08 | 0.14 (secs for 1000 calls) 4096 | 0.10 | 0.17 | 0.10 | 0.16 (secs for 500 calls) 8192 | 0.23 | 0.53 | 0.23 | 0.47 (secs for 500 calls) .... Inverse Fast Fourier Transform =============================================== | real input | complex input ----------------------------------------------- size | scipy | numpy | scipy | numpy ----------------------------------------------- 100 | 0.17 | 0.29 | 0.14 | 0.26 (secs for 7000 calls) 1000 | 0.13 | 0.27 | 0.15 | 0.25 (secs for 2000 calls) 256 | 0.29 | 0.54 | 0.27 | 0.50 (secs for 10000 calls) 512 | 0.38 | 0.81 | 0.43 | 0.73 (secs for 10000 calls) 1024 | 0.06 | 0.14 | 0.08 | 0.13 (secs for 1000 calls) 2048 | 0.09 | 0.24 | 0.13 | 0.23 (secs for 1000 calls) 4096 | 0.09 | 0.24 | 0.13 | 0.23 (secs for 500 calls) 8192 | 0.22 | 0.75 | 0.37 | 0.73 (secs for 500 calls) This makes things much faster, particularly for small sizes (which is logical considering the main cause of slowness is the building of plans). 
http://projects.scipy.org/scipy/scipy/attachment/ticket/1/fftw3slow.patch

Paste below:

--- scipy/Lib/fftpack/src/zfft.c	2006-08-18 20:45:50.000000000 +0900
+++ scipy-new/Lib/fftpack/src/zfft.c	2006-08-18 21:00:26.000000000 +0900
@@ -35,9 +35,21 @@
 #endif
 
 #if defined WITH_FFTW3
-/*
- *don't cache anything
- */
+GEN_CACHE(zfftw,(int n,int d)
+	,int direction;
+	fftw_plan plan;
+	fftw_complex* ptr;
+	,((caches_zfftw[i].n==n) &&
+	(caches_zfftw[i].direction==d))
+	,caches_zfftw[id].direction = d;
+	caches_zfftw[id].ptr = fftw_malloc(sizeof(fftw_complex)*(n));
+	caches_zfftw[id].plan = fftw_plan_dft_1d(n, caches_zfftw[id].ptr,
+	caches_zfftw[id].ptr,
+	(d>0?FFTW_FORWARD:FFTW_BACKWARD),
+	FFTW_ESTIMATE);
+	,fftw_destroy_plan(caches_zfftw[id].plan);
+	fftw_free(caches_zfftw[id].ptr);
+	,10)
 #elif defined WITH_FFTW
 /**************** FFTW2 *****************************/
 GEN_CACHE(zfftw,(int n,int d)
@@ -73,6 +85,7 @@
 	destroy_zdjbfft_caches();
 #endif
 #ifdef WITH_FFTW3
+	destroy_zfftw_caches();
 #elif defined WITH_FFTW
 	destroy_zfftw_caches();
 #else
@@ -118,6 +131,7 @@
 	if (f==0)
 #endif
 #ifdef WITH_FFTW3
+	plan = caches_zfftw[get_cache_id_zfftw(n,direction)].plan;
 #elif defined WITH_FFTW
 	plan = caches_zfftw[get_cache_id_zfftw(n,direction)].plan;
 #else
@@ -147,11 +161,10 @@
 	} else
 #endif
 #ifdef WITH_FFTW3
-	plan = fftw_plan_dft_1d(n, (fftw_complex*)ptr, (fftw_complex*)ptr,
-		(direction>0?FFTW_FORWARD:FFTW_BACKWARD),
-		FFTW_ESTIMATE);
-	fftw_execute(plan);
-	fftw_destroy_plan(plan);
+	ptrm = caches_zfftw[get_cache_id_zfftw(n,direction)].ptr;
+	memcpy(ptrm, ptr, sizeof(double)*2*n);
+	fftw_execute(plan);
+	memcpy(ptr, ptrm, sizeof(double)*2*n);
 #elif defined WITH_FFTW
 	fftw_one(plan,(fftw_complex*)ptr,NULL);
 #else
@@ -181,11 +194,10 @@
 	} else
 #endif
 #ifdef WITH_FFTW3
-	plan = fftw_plan_dft_1d(n, (fftw_complex*)ptr, (fftw_complex*)ptr,
-		(direction>0?FFTW_FORWARD:FFTW_BACKWARD),
-		FFTW_ESTIMATE);
-	fftw_execute(plan);
-	fftw_destroy_plan(plan);
+	ptrm = caches_zfftw[get_cache_id_zfftw(n,direction)].ptr;
+	memcpy(ptrm, ptr, sizeof(double)*2*n);
+	fftw_execute(plan);
+	memcpy(ptr, ptrm, sizeof(double)*2*n);
 #elif defined WITH_FFTW
 	fftw_one(plan,(fftw_complex*)ptr,NULL);
 #else

This is really a quick/dirty hack, as I don't really know the mechanism.
For example, I am not sure whether there is a memory leak (I tested the
new function zfft through valgrind, though, and no leak was reported).
There should be a way to avoid the two full copies, too, which make up a
significant proportion of the computing time, I think, but this would
require a rewrite of the module, I guess.

David

From david at ar.media.kyoto-u.ac.jp  Mon Aug 21 00:57:11 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 21 Aug 2006 13:57:11 +0900
Subject: [SciPy-dev] prefered way to submit code, patch, new module to scipy/numpy ?
Message-ID: <44E93D27.1090904@ar.media.kyoto-u.ac.jp>

Hello,

I tried to find some doc on the scipy wiki regarding how to submit patches
or new code to scipy or numpy, but couldn't find any. Is there a
recommended way to do so? For new modules or new code in existing modules,
is there any kind of reviewing process?

David

From mforbes at nt05.phys.washington.edu  Mon Aug 21 22:13:11 2006
From: mforbes at nt05.phys.washington.edu (Michael McNeil Forbes)
Date: 21 Aug 2006 19:13:11 -0700
Subject: [SciPy-dev] ImportError: Inappropriate file type for dynamic loading
Message-ID:

I am having some problems with scipy on an Intel Mac.
It built and installed fine, but when I try to load various components, I get the following error messages: ---------------------------------------- Python 2.4.3 (#1, Apr 7 2006, 10:54:33) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Overwriting info= from scipy.misc (was from numpy.lib.utils) >>> import scipy.interpolate Traceback (most recent call last): File "", line 1, in ? File "/data/apps/python//lib/python/scipy/interpolate/__init__.py", line 7, in ? from interpolate import * File "/data/apps/python//lib/python/scipy/interpolate/interpolate.py", line 13, in ? import fitpack File "/data/apps/python//lib/python/scipy/interpolate/fitpack.py", line 34, in ? import _fitpack ImportError: Inappropriate file type for dynamic loading >>> ---------------------------------------- It seems that python is having difficulty loading libraries derived from fortran/c sources, but I am having a very difficult time trying to figure out why or how to fix this. I was having some issues earlier with numpy extensions, but was able to get numpy to build an install cleanly after ensuring that there were no spurious libraries around (I have installed some previously pre-compiled version of numpy and scipy that worked fine). I have verified that the appropriate file _fitpack.so in this case is being regenerated, but it still cannot be loaded. I was hoping I could see exactly how it was generated, but this information seems to be hidden deep within the distribution utilities and I do not know how to explore this easily. I am having related problems when trying to use f2py to generate simple fortran modules, so I suspect that something is messed up with my fortran compiler, but the compiler seems to work (the library is generated...) Any suggestions about how I can try to diagnose and fix this problem? Michael. From cookedm at physics.mcmaster.ca Fri Aug 25 18:29:38 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 25 Aug 2006 18:29:38 -0400 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: <44E6DC93.4030202@ar.media.kyoto-u.ac.jp> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <20060818090353.GA29917@arbutus.physics.mcmaster.ca> <44E58FED.6060805@ar.media.kyoto-u.ac.jp> <20060818104740.GA30858@arbutus.physics.mcmaster.ca> <44E5A47C.8060107@ar.media.kyoto-u.ac.jp> <44E6DC93.4030202@ar.media.kyoto-u.ac.jp> Message-ID: On Aug 19, 2006, at 05:40 , David Cournapeau wrote: > I build a patch anyway; I am not sure the format is OK, as I am not > familiar with > patch/diff options: > > Before patch (last numpy and scipy SVN), scipy.fftpack.test(10) > gives: > > Fast Fourier Transform > ================================================= > | real input | complex input > ------------------------------------------------- > size | scipy | numpy | scipy | numpy > ------------------------------------------------- > 100 | 0.17 | 0.14 | 1.93 | 0.11 (secs for 7000 calls) > 1000 | 0.12 | 0.16 | 1.12 | 0.14 (secs for 2000 calls) > 256 | 0.28 | 0.28 | 3.09 | 0.22 (secs for 10000 calls) > 512 | 0.37 | 0.48 | 3.82 | 0.38 (secs for 10000 calls) > 1024 | 0.05 | 0.09 | 0.53 | 0.07 (secs for 1000 calls) > 2048 | 0.10 | 0.16 | 0.88 | 0.15 (secs for 1000 calls) > 4096 | 0.08 | 0.17 | 0.75 | 0.17 (secs for 500 calls) > 8192 | 0.20 | 0.53 | 1.39 | 0.47 (secs for 500 calls) > .... 
> Inverse Fast Fourier Transform > =============================================== > | real input | complex input > ----------------------------------------------- > size | scipy | numpy | scipy | numpy > ----------------------------------------------- > 100 | 0.16 | 0.29 | 2.29 | 0.28 (secs for 7000 calls) > 1000 | 0.13 | 0.27 | 1.22 | 0.26 (secs for 2000 calls) > 256 | 0.29 | 0.57 | 3.43 | 0.51 (secs for 10000 calls) > 512 | 0.41 | 0.84 | 4.14 | 0.77 (secs for 10000 calls) > 1024 | 0.06 | 0.14 | 0.62 | 0.13 (secs for 1000 calls) > 2048 | 0.10 | 0.26 | 0.91 | 0.25 (secs for 1000 calls) > 4096 | 0.11 | 0.31 | 0.81 | 0.26 (secs for 500 calls) > 8192 | 0.23 | 0.78 | 1.53 | 0.72 (secs for 500 calls) > ....... > > After patch (last numpy and scipy SVN), scipy.fftpack.test(10) gives: > > Fast Fourier Transform > ================================================= > | real input | complex input > ------------------------------------------------- > size | scipy | numpy | scipy | numpy > ------------------------------------------------- > 100 | 0.17 | 0.14 | 0.12 | 0.11 (secs for 7000 calls) > 1000 | 0.12 | 0.16 | 0.10 | 0.14 (secs for 2000 calls) > 256 | 0.28 | 0.28 | 0.20 | 0.20 (secs for 10000 calls) > 512 | 0.36 | 0.47 | 0.27 | 0.36 (secs for 10000 calls) > 1024 | 0.05 | 0.08 | 0.05 | 0.07 (secs for 1000 calls) > 2048 | 0.09 | 0.16 | 0.08 | 0.14 (secs for 1000 calls) > 4096 | 0.10 | 0.17 | 0.10 | 0.16 (secs for 500 calls) > 8192 | 0.23 | 0.53 | 0.23 | 0.47 (secs for 500 calls) > .... > Inverse Fast Fourier Transform > =============================================== > | real input | complex input > ----------------------------------------------- > size | scipy | numpy | scipy | numpy > ----------------------------------------------- > 100 | 0.17 | 0.29 | 0.14 | 0.26 (secs for 7000 calls) > 1000 | 0.13 | 0.27 | 0.15 | 0.25 (secs for 2000 calls) > 256 | 0.29 | 0.54 | 0.27 | 0.50 (secs for 10000 calls) > 512 | 0.38 | 0.81 | 0.43 | 0.73 (secs for 10000 calls) > 1024 | 0.06 | 0.14 | 0.08 | 0.13 (secs for 1000 calls) > 2048 | 0.09 | 0.24 | 0.13 | 0.23 (secs for 1000 calls) > 4096 | 0.09 | 0.24 | 0.13 | 0.23 (secs for 500 calls) > 8192 | 0.22 | 0.75 | 0.37 | 0.73 (secs for 500 calls) > > This makes things much faster, particularly for small sizes (which is > logical considering the main cause of slowness is the building of > plans). > > http://projects.scipy.org/scipy/scipy/attachment/ticket/1/ > fftw3slow.patch The patch format is fine, btw. Although, it looks like you used diff; it's much easier to just do 'svn diff' (add the file or directory names to the end if you're only interested in some of what you've changed). I'm working on splitting the _fttpack module into separate extensions for each fft library, so it's clearer. Then, you could link in both fftw2 and fftw3, and use either (or fftpack). Along the way, I'd like to add more of their API functions (such as the wisdom functions), or have routines that may have better performance (fftw3, for instance, can be more efficient [or not] if it's allowed to destroy its input array, or if the input and output array are different). Also, it looks like using the guru interface, plans could be cached (but for each length, there would be possibly two plans: for aligned and unaligned data). -- |>|\/|< /------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From david at ar.media.kyoto-u.ac.jp Sun Aug 27 22:37:57 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 Aug 2006 11:37:57 +0900 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <20060818090353.GA29917@arbutus.physics.mcmaster.ca> <44E58FED.6060805@ar.media.kyoto-u.ac.jp> <20060818104740.GA30858@arbutus.physics.mcmaster.ca> <44E5A47C.8060107@ar.media.kyoto-u.ac.jp> <44E6DC93.4030202@ar.media.kyoto-u.ac.jp> Message-ID: <44F25705.5030502@ar.media.kyoto-u.ac.jp> David M. Cooke wrote: > > The patch format is fine, btw. Although, it looks like you used diff; > it's much easier to just do 'svn diff' (add the file or directory > names to the end if you're only interested in some of what you've > changed). I of course discovered svn diff a few minutes after having sent the patch... I am no really used to subversion, will do this way next time. > > I'm working on splitting the _fttpack module into separate extensions > for each fft library, so it's clearer. Then, you could link in both > fftw2 and fftw3, and use either (or fftpack). Along the way, I'd like > to add more of their API functions (such as the wisdom functions), or > have routines that may have better performance (fftw3, for instance, > can be more efficient [or not] if it's allowed to destroy its input > array, or if the input and output array are different). I was thinking about doing it myself; maybe we can share our effort ? > > Also, it looks like using the guru interface, plans could be cached > (but for each length, there would be possibly two plans: for aligned > and unaligned data). When I used this way, it didn't work, but I didn't investigate more than a few minutes. I was wondering if this could come from alignment problems, or some other issues. I don't know much about the C numpy interface: how is the memory allocated for arrays ? Is the memory always 16 bytes aligned ? Is it an option ? Can we check it ? David From faltet at carabos.com Mon Aug 28 05:13:44 2006 From: faltet at carabos.com (Francesc Altet) Date: Mon, 28 Aug 2006 11:13:44 +0200 Subject: [SciPy-dev] =?iso-8859-1?q?scipy=2Efft_module_slow_for_complex=09?= =?iso-8859-1?q?inputs=09when=09linked_to_fftw3?= In-Reply-To: <44F25705.5030502@ar.media.kyoto-u.ac.jp> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <44F25705.5030502@ar.media.kyoto-u.ac.jp> Message-ID: <200608281113.44174.faltet@carabos.com> A Dilluns 28 Agost 2006 04:37, David Cournapeau va escriure: > David M. Cooke wrote: > > The patch format is fine, btw. Although, it looks like you used diff; > > it's much easier to just do 'svn diff' (add the file or directory > > names to the end if you're only interested in some of what you've > > changed). > > I of course discovered svn diff a few minutes after having sent the > patch... I am no really used to subversion, will do this way next time. You don't need to install subversion for this (although learning it is always a good thing). "diff -urN" will do the trick as well. -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. 
??Enjoy Data "-" From david at ar.media.kyoto-u.ac.jp Mon Aug 28 06:36:23 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 Aug 2006 19:36:23 +0900 Subject: [SciPy-dev] scipy.fft module slow for complex inputs when linked to fftw3 In-Reply-To: <200608281113.44174.faltet@carabos.com> References: <44E57B0E.1080008@ar.media.kyoto-u.ac.jp> <44F25705.5030502@ar.media.kyoto-u.ac.jp> <200608281113.44174.faltet@carabos.com> Message-ID: <44F2C727.5010404@ar.media.kyoto-u.ac.jp> Francesc Altet wrote: > > You don't need to install subversion for this (although learning it is always > a good thing). "diff -urN" will do the trick as well. > That's what I used, because I didn't know about subversion integrated diff (I am using arch myself for my projects). I was already using subversion anyway to track all the last great things happening in numpy and scipy ;) David From ndbecker2 at gmail.com Mon Aug 28 07:51:30 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 28 Aug 2006 07:51:30 -0400 Subject: [SciPy-dev] numpy-1.0b4 build failure Message-ID: Any hints? running install_data [...] creating /var/tmp/rpm/numpy-1.0b4-1-root-nbecker/usr/lib64/python2.4/site-packages/numpy/numarray/numpy error: can't copy 'numpy/numarray/numpy/': doesn't exist or not a regular file error: Bad exit status from /var/tmp/rpm/rpm-tmp.82145 (%install) From dd55 at cornell.edu Mon Aug 28 09:52:03 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 28 Aug 2006 09:52:03 -0400 Subject: [SciPy-dev] import misc -> failed: cannot import name place Message-ID: <200608280952.03680.dd55@cornell.edu> Just a heads up, with numpy 1.0b5.dev3084 and scipy 0.5.0.2180, I get the following error when I so from scipy import *: import misc -> failed: cannot import name place --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/darren/ /usr/lib64/python2.4/site-packages/scipy/signal/__init__.py 6 7 import sigtools ----> 8 from waveforms import * 9 from bsplines import * 10 from filter_design import * /usr/lib64/python2.4/site-packages/scipy/signal/waveforms.py 4 # 2003 5 ----> 6 from numpy import asarray, zeros, place, nan, mod, pi, extract, log, sqrt, \ 7 exp, cos, sin, size, polyval, polyint, log10 8 ImportError: cannot import name place From oliphant.travis at ieee.org Mon Aug 28 15:55:38 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 13:55:38 -0600 Subject: [SciPy-dev] import misc -> failed: cannot import name place In-Reply-To: <200608280952.03680.dd55@cornell.edu> References: <200608280952.03680.dd55@cornell.edu> Message-ID: <44F34A3A.3040903@ieee.org> Darren Dale wrote: > Just a heads up, with numpy 1.0b5.dev3084 and scipy 0.5.0.2180, I get the > following error when I so from scipy import *: > > import misc -> failed: cannot import name place > Darn... I was making changes on the 1.0b4 tag again (didn't switch back to the trunk). This should be fixed, now. -Travis From a.h.jaffe at gmail.com Wed Aug 30 15:00:19 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 30 Aug 2006 20:00:19 +0100 Subject: [SciPy-dev] fftfreq very slow; rfftfreq incorrect? 
In-Reply-To: <20060830164149.GV23074@mentat.za.net> References: <20060830164149.GV23074@mentat.za.net> Message-ID: Stefan van der Walt wrote: > On Wed, Aug 30, 2006 at 12:04:22PM +0100, Andrew Jaffe wrote: >> the current implementation of fftfreq (which is meant to return the >> appropriate frequencies for an FFT) does the following: >> >> k = range(0,(n-1)/2+1)+range(-(n/2),0) >> return array(k,'d')/(n*d) >> >> I have tried this with very long (2**24) arrays, and it is ridiculously >> slow. Should this instead use arange (or linspace?) and concatenate >> rather than converting the above list? This seems to result in >> acceptable performance, but we could also perhaps even pre-allocate the >> space. > > Please try the attached benchmark. Results attached. Bottom line is that both new versions are several times faster than the old, and the concat (hstack) version is somewhat faster than the other, but it depends on n. I'm on OSX with the 2.4.3 universal build on PPC and the latest SVN numpy. >> The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to >> produce lots of duplicated frequencies, contrary to the actual output of >> rfft: << removed >> > Please produce a code snippet to demonstrate the problem. We can then > fix the bug and use your code as a unit test. Aha, the problem is that scipy and numpy define rfft differently! numpy returns n/2+1 complex numbers (so the first and last numbers are actually real) with the frequencies equivalent to the positive part of the fftfreq, whereas scipy returns n real numbers with the frequencies as in rfftfreq. I think the numpy behavior makes more sense, as it doesn't require any unpacking after the fact, at the expense of a tiny amount of wasted space. But would this in fact require scipy doing extra work from whatever the 'native' real_fft (fftw, I assume) produces? Anyone else have an opinion? -------------- next part -------------- A non-text attachment was scrubbed... Name: fftfreq_bench.out Type: application/octet-stream Size: 449 bytes Desc: not available URL: From patperry at stanford.edu Wed Aug 30 20:17:40 2006 From: patperry at stanford.edu (Patrick Perry) Date: Thu, 31 Aug 2006 00:17:40 +0000 (UTC) Subject: [SciPy-dev] Bug in scipy.special.exp1 Message-ID: The following code should demonstrate the problem: >>> special.exp1(-1) nan >>> special.exp1(complex(-1)) (-1.8951178163559368-3.1415926535897931j) I'm not sure why the explicit cast to complex is necessary. Also, I barely know anything about exponential integrals, but I think it should be the case that Ei(z) = -E1(-z). Even though E1(z) may only be defined modulo multiples of pi when z is complex, it might make sense to have "special.expi(x)" return the same value as "-special.exp1(-x)" when x is real. This is not the current behavior: >>> special.expi(1) 1.8951178163559368 From mforbes at phys.washington.edu Thu Aug 31 04:40:39 2006 From: mforbes at phys.washington.edu (Michael McNeil Forbes) Date: Thu, 31 Aug 2006 01:40:39 -0700 Subject: [SciPy-dev] ImportError: Inappropriate file type for dynamic loading References: Message-ID: The problem was an errant version of g77 on my path that was being used. Once everything was recompiled with the same compiler (gfortran), then things worked fine. Michael. In article , Michael McNeil Forbes wrote: > I am having some problems with scipy on an Intel Mac. 
It built and > installed fine, but when I try to load various components, I get the > following error messages: > > ---------------------------------------- > Python 2.4.3 (#1, Apr 7 2006, 10:54:33) > [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > Overwriting info= from scipy.misc (was info at 0x1288930> from numpy.lib.utils) > >>> import scipy.interpolate > Traceback (most recent call last): > File "", line 1, in ? > File "/data/apps/python//lib/python/scipy/interpolate/__init__.py", line 7, > in ? > from interpolate import * > File "/data/apps/python//lib/python/scipy/interpolate/interpolate.py", line > 13, in ? > import fitpack > File "/data/apps/python//lib/python/scipy/interpolate/fitpack.py", line 34, > in ? > import _fitpack > ImportError: Inappropriate file type for dynamic loading > >>> > ---------------------------------------- > > It seems that python is having difficulty loading libraries derived > from fortran/c sources, but I am having a very difficult time trying > to figure out why or how to fix this. > > I was having some issues earlier with numpy extensions, but was able > to get numpy to build an install cleanly after ensuring that there > were no spurious libraries around (I have installed some previously > pre-compiled version of numpy and scipy that worked fine). > > I have verified that the appropriate file _fitpack.so in this case is > being regenerated, but it still cannot be loaded. I was hoping I > could see exactly how it was generated, but this information seems to > be hidden deep within the distribution utilities and I do not know how > to explore this easily. > > I am having related problems when trying to use f2py to generate > simple fortran modules, so I suspect that something is messed up with > my fortran compiler, but the compiler seems to work (the library is > generated...) > > Any suggestions about how I can try to diagnose and fix this problem? > > Michael.
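For anyone else who hits "ImportError: Inappropriate file type for dynamic loading": it usually means the extension was built for the wrong architecture or with a mismatched compiler, as it turned out to be here. A few standard OS X commands make the mismatch visible (the path is the one from the traceback above; adjust it for your install):

# what kind of binary is the extension? (should be a Mach-O bundle
# for the architecture you are running on)
file /data/apps/python/lib/python/scipy/interpolate/_fitpack.so

# which architectures does it contain?
lipo -info /data/apps/python/lib/python/scipy/interpolate/_fitpack.so

# what does it link against?
otool -L /data/apps/python/lib/python/scipy/interpolate/_fitpack.so

# which Fortran compilers are on the PATH, and in which order?
which -a g77 gfortran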