From eric at enthought.com Tue Oct 1 02:27:16 2002 From: eric at enthought.com (eric jones) Date: Tue, 1 Oct 2002 01:27:16 -0500 Subject: [SciPy-dev] RE: f2py and NO_APPEND_FORTRAN In-Reply-To: <3D98DB1F.EE7DD0E1@llnl.gov> Message-ID: <002b01c26913$96cb5bc0$6b01a8c0@ericlaptop> Hey Pat, I've forwarded this to the Scipy Dev list to see if anyone else has a quick solution. eric > -----Original Message----- > From: pjmiller at smtp-2.llnl.gov [mailto:pjmiller at smtp-2.llnl.gov] On Behalf > Of Pat Miller > Sent: Monday, September 30, 2002 6:16 PM > To: Eric Jones > Subject: f2py and NO_APPEND_FORTRAN > > Eric, > > On the AIX port, I'm having difficulty with underscores in the f2py > generated sources.... I'm not sure (within scipy) how to specify > that the C source (e.g. fblasmodule.c) should be compiled with > -DNO_APPEND_FORTRAN. > > I have written a ibm_fortran_compiler for detecting xlf/xlF, but > that doesn't feed back to the C compilation step. > > Pat > > -- > Patrick Miller | (925) 423-0309 | > http://www.llnl.gov/CASC/people/pmiller > > Access to power must be confined to those who are not in love with it. > -- Plato From pearu at cens.ioc.ee Tue Oct 1 03:20:28 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 1 Oct 2002 10:20:28 +0300 (EEST) Subject: [SciPy-dev] RE: f2py and NO_APPEND_FORTRAN In-Reply-To: <002b01c26913$96cb5bc0$6b01a8c0@ericlaptop> Message-ID: > > On the AIX port, I'm having difficulty with underscores in the f2py > > generated sources.... I'm not sure (within scipy) how to specify > > that the C source (e.g. fblasmodule.c) should be compiled with > > -DNO_APPEND_FORTRAN. > > > > I have written a ibm_fortran_compiler for detecting xlf/xlF, but > > that doesn't feed back to the C compilation step. I'd suggest using -qextname option for xlf. It is safer to have Fortran names ended with _ in order to avoid possible name conflicts. Another (but only if necessary, e.g. 
when using system Fortran libraries that were compiled without -qextname) option would be to use

  python setup.py build_ext -DNO_APPEND_FORTRAN build

for building scipy.

Pearu

From eric at enthought.com Tue Oct 1 06:19:58 2002
From: eric at enthought.com (eric jones)
Date: Tue, 1 Oct 2002 05:19:58 -0500
Subject: [SciPy-dev] fftpack2 for review
In-Reply-To:
Message-ID: <003b01c26934$193d7870$6b01a8c0@ericlaptop>

Hey Pearu,

Overall, first tests went fairly smoothly. Even without djbfft and fftw, things appeared to install correctly on win32. *very* nice job on system checking and handling the lack of libraries.

All tests passed. Sure am glad to finally have some unit tests for fft stuff!

I do not have fftw installed. I tested once without djbfft and once with it. I didn't notice any material difference in timings (the non-djbfft set given below). My tests are on an 850 MHz PIII and they are slower than yours. This leads me to believe that, even when trying to use djbfft, I'm not getting it linked correctly. Setup said it detected my djbfft libraries, though (I defined the environment variable pointing to my libs to get this to work). I'll look over the tests tomorrow.

Also, Travis O., could you give the package a once-over?

I've also copied your "proposal" here to solicit comments from the signal processing geeks:

Differences between fftpack and FFT from Numeric
================================================

* Functions rfft and irfft accept and return only real sequences. So, the corresponding functions real_fft and inverse_real_fft from FFT are not equivalent to rfft and irfft. The difference is in the storage of data; see the definitions of the corresponding functions for details.

* Functions fftshift and ifftshift from Numeric.FFT are renamed to freqshift and ifreqshift, respectively. Rationale: there is nothing FFT-algorithm-specific in what these functions aim to accomplish.
Also, freqshift puts the Nyquist component at the end of the result while fftshift puts it at the beginning. This is for consistency among the various functions in the fftpack module.

* PROPOSAL: When calling ifft with forced truncation or zero-padding, I would like to propose that the corresponding action is applied to the middle of the data. For example, ifft([1,2,3,4,5,6],n=8) is equivalent to ifft([1,2,3,4,0,0,5,6]); that is, Fourier terms with higher frequencies and zero coefficients are introduced. In the Numeric.FFT case, the example above would be equivalent to ifft([1,2,3,4,5,6,0,0],n=8), which would mean that the Fourier coefficients [5,6] become the coefficients of higher-frequency terms and the original terms are zeroed. Note that this proposal is **not implemented** because it needs to be discussed. For instance, Matlab uses the same convention as FFT, and this change would be confusing for Matlab users. On the other hand, the FFT/Matlab conventions change the spectrum of the original signal, and I don't see any sense in this behaviour (if you don't agree, then please provide an example). Namely, one of the applications of the argument n would be to compose a new signal on a denser or sparser grid than the original one by using

  new_signal = ifft(fft(signal),n)

Note that new_signal would have the same Fourier spectrum as the original signal. With the Matlab/FFT convention this is not true. Any thoughts?

And now some installation notes about what I did to get djbfft running:

Install notes:

1.
win32 (cygwin and mingw) required #include "errno.h" in any djbfft file that used errno.

2.
I installed the latest f2py but still got the warning error:

WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!

fftpack2 requires F2PY version 2.23.190-1367 or higher but got 2.23.190-1354

WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!

I'm hitting the hay right now, so I'll check out why this happened tomorrow.

3.
How hard would it be to get setup.py to build the djbfft library if one wasn't detected? The makefile builds a lot of the files on the fly using stuff like:

4c0.c: \
idea/c0.c pentium/c0.c ppro/c0.c sparc/c0.c auto_opt
	sed 1s/PRE/pre4.c/ `cat auto_opt`/c0.c > 4c0.c

4c0.o: \
compile 4c0.c pre4.c fftc4.h complex4.h real4.h fftr4.h real4.h 4i.c
	./compile 4c0.c

I'm not sure what all is going on here, but it seems like we could either do the same in python or auto-generate a generic version of this stuff and include it in SciPy. Of course, since djbfft isn't required, this isn't necessary.

TEST RESULTS:

>>> fftpack2.test()
creating test suite for: fftpack2.basic
creating test suite for: fftpack2.helper
creating test suite for: fftpack2.pseudo_diffs
...................
Fast Fourier Transform
========================================================
     | real input | complex input
--------------------------------------------------------
size |fftpack2| FFT | scipy |fftpack2| FFT | scipy
--------------------------------------------------------
100 | 0.39 | 0.36 | 0.35 | 0.40 | 0.37 | 0.36 (secs for 7000 calls)
1000 | 0.31 | 0.48 | 0.43 | 0.44 | 0.48 | 0.41 (secs for 2000 calls)
256 | 0.65 | 0.75 | 0.70 | 0.75 | 0.75 | 0.70 (secs for 10000 calls)
512 | 0.89 | 1.32 | 1.11 | 1.16 | 1.33 | 1.12 (secs for 10000 calls)
1024 | 0.14 | 0.35 | 0.30 | 0.21 | 0.24 | 0.20 (secs for 1000 calls)
2048 | 0.50 | 0.76 | 0.64 | 0.49 | 0.77 | 0.68 (secs for 1000 calls)
4096 | 0.60 | 0.93 | 0.82 | 0.67 | 0.79 | 0.65 (secs for 500 calls)
8192 | 1.23 | 3.16 | 2.85 | 2.88 | 3.23 | 2.91 (secs for 500 calls)
.
Inverse Fast Fourier Transform
========================================================
     | real input | complex input
--------------------------------------------------------
size |fftpack2| FFT | scipy |fftpack2| FFT | scipy
--------------------------------------------------------
100 | 0.38 | 0.88 | 0.87 | 0.45 | 0.89 | 0.88 (secs for 7000 calls)
1000 | 0.45 | 1.93 | 1.85 | 0.80 | 1.44 | 1.36 (secs for 2000 calls)
256 | 0.66 | 3.07 | 2.85 | 0.81 | 1.79 | 1.74 (secs for 10000 calls)
512 | 0.91 | 2.89 | 2.78 | 1.27 | 3.22 | 3.23 (secs for 10000 calls)
1024 | 0.24 | 0.80 | 0.74 | 0.37 | 0.90 | 0.85 (secs for 1000 calls)
2048 | 0.51 | 1.85 | 1.79 | 0.78 | 1.90 | 1.84 (secs for 1000 calls)
4096 | 0.35 | 1.99 | 1.86 | 0.84 | 2.18 | 2.02 (secs for 500 calls)
8192 | 1.56 | 5.35 | 5.07 | 2.82 | 5.45 | 5.19 (secs for 500 calls)
.
Multi-dimensional Fast Fourier Transform
========================================================
     | real input | complex input
--------------------------------------------------------
size |fftpack2| FFT | scipy |fftpack2| FFT | scipy
--------------------------------------------------------
100x100 | 0.51 | 0.67 | 0.59 | 0.55 | 0.72 | 0.64 (secs for 100 calls)
1000x100 | 0.70 | 0.67 | 0.62 | 0.70 | 0.70 | 0.64 (secs for 7 calls)
256x256 | 0.70 | 0.64 | 0.61 | 0.75 | 0.67 | 0.63 (secs for 10 calls)
512x512 | 0.93 | 0.84 | 0.79 | 0.94 | 0.86 | 0.80 (secs for 3 calls)
.
Fast Fourier Transform (real data)
==================================
size |fftpack2| FFT | scipy
----------------------------------
100 | 0.35 | 0.43 | 0.47 (secs for 7000 calls)
1000 | 0.27 | 0.32 | 0.34 (secs for 2000 calls)
256 | 0.60 | 0.78 | 0.77 (secs for 10000 calls)
512 | 0.83 | 1.04 | 1.01 (secs for 10000 calls)
1024 | 0.13 | 0.22 | 0.20 (secs for 1000 calls)
2048 | 0.28 | 0.39 | 0.41 (secs for 1000 calls)
4096 | 0.25 | 0.47 | 0.43 (secs for 500 calls)
8192 | 0.81 | 1.15 | 1.20 (secs for 500 calls)
.
Inverse Fast Fourier Transform (real data)
==================================
size |fftpack2| FFT | scipy
----------------------------------
100 | 0.36 | 0.92 | 0.87 (secs for 7000 calls)
1000 | 0.41 | 1.12 | 1.15 (secs for 2000 calls)
256 | 0.62 | 1.61 | 2.49 (secs for 10000 calls)
512 | 0.89 | 2.28 | 2.57 (secs for 10000 calls)
1024 | 0.17 | 0.52 | 0.24 (secs for 1000 calls)
2048 | 0.27 | 1.08 | 1.02 (secs for 1000 calls)
4096 | 0.44 | 1.10 | 1.08 (secs for 500 calls)
8192 | 1.10 | 2.22 | 2.18 (secs for 500 calls)
.........................
Shifting periodic functions
==============================
size | optimized | naive
------------------------------
100 | 0.10 | 0.53 (secs for 1500 calls)
1000 | 0.07 | 0.76 (secs for 300 calls)
256 | 0.12 | 0.94 (secs for 1500 calls)
512 | 0.13 | 1.11 (secs for 1000 calls)
1024 | 0.11 | 1.08 (secs for 500 calls)
2048 | 0.08 | 1.06 (secs for 200 calls)
4096 | 0.12 | 1.16 (secs for 100 calls)
8192 | 0.17 | 1.45 (secs for 50 calls)
.
Differentiation of periodic functions
=====================================
size | convolve | naive
-------------------------------------
100 | 0.09 | 0.78 (secs for 1500 calls)
1000 | 0.07 | 1.04 (secs for 300 calls)
256 | 0.13 | 1.44 (secs for 1500 calls)
512 | 0.13 | 1.74 (secs for 1000 calls)
1024 | 0.11 | 2.37 (secs for 500 calls)
2048 | 0.09 | 1.73 (secs for 200 calls)
4096 | 0.14 | 1.74 (secs for 100 calls)
8192 | 0.19 | 2.06 (secs for 50 calls)
.
Tilbert transform of periodic functions
=========================================
size | optimized | naive
-----------------------------------------
100 | 0.10 | 0.63 (secs for 1500 calls)
1000 | 0.07 | 0.93 (secs for 300 calls)
256 | 0.13 | 1.02 (secs for 1500 calls)
512 | 0.12 | 1.25 (secs for 1000 calls)
1024 | 0.10 | 1.62 (secs for 500 calls)
2048 | 0.08 | 1.12 (secs for 200 calls)
4096 | 0.12 | 1.25 (secs for 100 calls)
8192 | 0.17 | 1.36 (secs for 50 calls)
.
Hilbert transform of periodic functions
=========================================
size | optimized | naive
-----------------------------------------
100 | 0.09 | 0.51 (secs for 1500 calls)
1000 | 0.07 | 0.49 (secs for 300 calls)
256 | 0.12 | 0.76 (secs for 1500 calls)
512 | 0.11 | 1.27 (secs for 1000 calls)
1024 | 0.10 | 1.15 (secs for 500 calls)
2048 | 0.08 | 0.88 (secs for 200 calls)
4096 | 0.13 | 0.90 (secs for 100 calls)
8192 | 0.16 | 1.09 (secs for 50 calls)
.
----------------------------------------------------------------------
Ran 52 tests in 268.065s

OK

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On
> Behalf Of Pearu Peterson
> Sent: Saturday, September 28, 2002 3:54 PM
> To: scipy-dev at scipy.org
> Subject: [SciPy-dev] fftpack2 for review
>
>
> Dear SciPy devels,
>
> Please find a scipy.fftpack replacement candidate fftpack2 from
>
> http://cens.ioc.ee/~pearu/misc/fftpack2-0.1.tar.gz
>
> for review. fftpack2's main features include:
>
> - Optional support for high-performance FFT libraries such as the FFTW
> and DJBFFT libraries. By default, Fortran FFTPACK is used, and other
> libraries are automatically detected by the
> scipy_distutils/system_info.py script.
>
> - Support for both real and complex DFTs and their inverse transforms.
> When used with the FFTW and/or DJBFFT libraries, fftpack2 routines
> are considerably faster than Numeric.FFT and the current
> scipy.fftpack. Some comparison timings are included at the end of
> this message.
>
> - Implementation of differentiation and integration of periodic
> sequences, and various pseudo-differential operators such as the
> Tilbert transform, the Hilbert transform, hyperbolic
> pseudo-differential operators, their inverses, etc.
>
> - Because different FFT libraries use different data storage
> conventions, fftpack2 implements an interface to these conventions
> so that users and developers need not learn all these conventions;
> just one is enough.
Namely, fftpack2 uses the same data storage > specification as Fortran FFTPACK (also used by Numeric.FFT and > Matlab, though with somewhat different formal definitions). > > - Generic convolution routines that allow quick and easy way of > introducing new pseudo-differential operators with the same > performance as core pseudo-differential operators. > > - All functions have complete documentation strings. > > - A rather complete testing site. > > Note that fftpack2 requires the latest F2PY that you can get from > > http://cens.ioc.ee/projects/f2py2e/2.x/F2PY-2-latest.tar.gz > > For testing, fftpack2 uses scipy_test from SciPy CVS repository. > > For building/testing instructions and for various notes, including open > questions, see NOTES.txt file after unpacking fftpack2-0.1.tar.gz. > > As usual, comments, suggestions, and criticism are most welcome. > > Regards, > Pearu > > ------------------------------- > fftpack2 timing results: > ------------------------------- > Fast Fourier Transform > ======================================================== > | real input | complex input > -------------------------------------------------------- > size |fftpack2| FFT | scipy |fftpack2| FFT | scipy > -------------------------------------------------------- > 100 | 0.38 | 0.39 | 0.38 | 0.40 | 0.40 | 0.38 (secs > for7000calls) > 1000 | 0.31 | 0.63 | 0.60 | 0.54 | 0.64 | 0.61 (secs for 2000 > 256 | 0.60 | 0.75 | 0.62 | 0.71 | 0.78 | 0.66 (secs for 10000 > 512 | 0.77 | 1.30 | 1.00 | 0.95 | 1.30 | 1.01 (secs for 10000 > 1024 | 0.14 | 0.26 | 0.18 | 0.17 | 0.25 | 0.19 (secs for 1000 > 2048 | 0.21 | 0.51 | 0.36 | 0.34 | 0.65 | 0.54 (secs for 1000 > 4096 | 0.28 | 1.08 | 0.66 | 0.56 | 0.93 | 0.79 (secs for 500 > 8192 | 1.09 | 3.88 | 3.94 | 2.19 | 4.36 | 3.48 (secs for 500 > > Inverse Fast Fourier Transform > ======================================================== > | real input | complex input > -------------------------------------------------------- > size |fftpack2| FFT | scipy 
|fftpack2| FFT | scipy > -------------------------------------------------------- > 100 | 0.17 | 0.26 | 0.63 | 0.45 | 0.87 | 0.85 (secs for 7000 > 1000 | 0.34 | 1.24 | 1.25 | 0.72 | 1.29 | 1.29 (secs for 2000 > 256 | 0.62 | 1.57 | 1.44 | 0.69 | 1.62 | 1.47 (secs for 10000 > 512 | 0.80 | 2.69 | 2.30 | 1.00 | 2.62 | 2.20 (secs for 10000 > 1024 | 0.12 | 0.51 | 0.41 | 0.18 | 0.54 | 0.44 (secs for 1000 > 2048 | 0.23 | 1.16 | 1.09 | 0.40 | 1.33 | 1.27 (secs for 1000 > 4096 | 0.35 | 1.75 | 1.44 | 0.63 | 1.72 | 1.54 (secs for 500 > 8192 | 1.34 | 5.71 | 6.02 | 2.40 | 6.14 | 5.67 (secs for 500 > > Multi-dimensional Fast Fourier Transform > ======================================================== > | real input | complex input > -------------------------------------------------------- > size |fftpack2| FFT | scipy |fftpack2| FFT | scipy > -------------------------------------------------------- > 100x100 | 0.43 | 0.87 | 0.81 | 0.48 | 0.88 | 0.82 (secs for 100 > 1000x100 | 0.54 | 0.78 | 0.72 | 0.56 | 0.73 | 0.75 (secs for 7 > 256x256 | 0.69 | 0.72 | 0.62 | 0.67 | 0.72 | 0.64 (secs for 10 > 512x512 | 0.81 | 0.91 | 0.84 | 0.81 | 0.87 | 0.81 (secs for 3 > > Fast Fourier Transform (real data) > ================================== > size |fftpack2| FFT | scipy > ---------------------------------- > 100 | 0.36 | 0.47 | 0.47 (secs for 7000 calls) > 1000 | 0.29 | 0.38 | 0.38 (secs for 2000 calls) > 256 | 0.55 | 0.76 | 0.81 (secs for 10000 calls) > 512 | 0.67 | 1.06 | 1.07 (secs for 10000 calls) > 1024 | 0.10 | 0.18 | 0.17 (secs for 1000 calls) > 2048 | 0.16 | 0.30 | 0.30 (secs for 1000 calls) > 4096 | 0.20 | 0.37 | 0.32 (secs for 500 calls) > 8192 | 0.75 | 1.29 | 1.26 (secs for 500 calls) > > Inverse Fast Fourier Transform (real data) > ================================== > size |fftpack2| FFT | scipy > ---------------------------------- > 100 | 0.39 | 0.90 | 0.92 (secs for 7000 calls) > 1000 | 0.32 | 0.63 | 0.70 (secs for 2000 calls) > 256 | 0.59 | 1.37 | 1.39 (secs for 10000 calls) > 
512 | 0.73 | 1.70 | 1.80 (secs for 10000 calls) > 1024 | 0.11 | 0.27 | 0.27 (secs for 1000 calls) > 2048 | 0.20 | 0.45 | 0.44 (secs for 1000 calls) > 4096 | 0.24 | 0.65 | 0.68 (secs for 500 calls) > 8192 | 0.73 | 1.98 | 2.06 (secs for 500 calls) > > > Shifting periodic functions > ============================== > size | optimized | naive > ------------------------------ > 100 | 0.09 | 0.52 (secs for 1500 calls) > 1000 | 0.06 | 0.66 (secs for 300 calls) > 256 | 0.10 | 0.84 (secs for 1500 calls) > 512 | 0.10 | 1.02 (secs for 1000 calls) > 1024 | 0.07 | 1.07 (secs for 500 calls) > 2048 | 0.06 | 0.80 (secs for 200 calls) > 4096 | 0.00 | 0.86 (secs for 100 calls) > 8192 | 0.13 | 1.23 (secs for 50 calls) > > Differentiation of periodic functions > ===================================== > size | convolve | naive > ------------------------------------- > 100 | 0.09 | 0.59 (secs for 1500 calls) > 1000 | 0.06 | 0.75 (secs for 300 calls) > 256 | 0.11 | 0.97 (secs for 1500 calls) > 512 | 0.09 | 1.21 (secs for 1000 calls) > 1024 | 0.08 | 1.19 (secs for 500 calls) > 2048 | 0.06 | 1.11 (secs for 200 calls) > 4096 | 0.09 | 1.19 (secs for 100 calls) > 8192 | 0.14 | 1.48 (secs for 50 calls) > > Tilbert transform of periodic functions > ========================================= > size | optimized | naive > ----------------------------------------- > 100 | 0.09 | 0.54 (secs for 1500 calls) > 1000 | 0.06 | 0.60 (secs for 300 calls) > 256 | 0.10 | 0.77 (secs for 1500 calls) > 512 | 0.09 | 0.89 (secs for 1000 calls) > 1024 | 0.07 | 0.90 (secs for 500 calls) > 2048 | 0.05 | 0.83 (secs for 200 calls) > 4096 | 0.07 | 0.92 (secs for 100 calls) > 8192 | 0.11 | 1.04 (secs for 50 calls) > . 
> Hilbert transform of periodic functions
> =========================================
> size | optimized | naive
> -----------------------------------------
> 100 | 0.09 | 0.47 (secs for 1500 calls)
> 1000 | 0.06 | 0.48 (secs for 300 calls)
> 256 | 0.10 | 0.68 (secs for 1500 calls)
> 512 | 0.09 | 0.77 (secs for 1000 calls)
> 1024 | 0.07 | 0.74 (secs for 500 calls)
> 2048 | 0.06 | 0.72 (secs for 200 calls)
> 4096 | 0.07 | 0.79 (secs for 100 calls)
> 8192 | 0.11 | 0.93 (secs for 50 calls)
>
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From eric at enthought.com Tue Oct 1 06:26:14 2002
From: eric at enthought.com (eric jones)
Date: Tue, 1 Oct 2002 05:26:14 -0500
Subject: [SciPy-dev] Are we using the latest ODEPACK?
Message-ID: <003c01c26934$f84f2ef0$6b01a8c0@ericlaptop>

Hey group,

I got the following message a while back:

  Hello Eric,

  I was just told that SciPy includes a package of mine, ODEPACK, within Python wrappers, and that the source of ODEPACK was Netlib. I am delighted to hear this, but concerned about the version used. I recently did a major upgrade to ODEPACK, with enhancements to both the capabilities and portability. The current version is available from a website here, http://www.llnl.gov/CASC/download/download_home.html (along with other ODE solvers of possible interest to SciPy users).

  -Alan Hindmarsh

Are we using this version of ODEPACK, or an older one? If we're using an older one, do you think we should upgrade before trying to release 0.2.0? Will anyone volunteer to check into this, and also into what is involved with the upgrade (if needed)?

I'm inclined to put Pearu's fftpack2 in as fftpack for the new version if it passes muster (looks like it is going to), so I'm happy to make this upgrade also if you think it would go smoothly.
Thanks, eric From pearu at cens.ioc.ee Tue Oct 1 08:13:09 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 1 Oct 2002 15:13:09 +0300 (EEST) Subject: [SciPy-dev] fftpack2 for review In-Reply-To: <003b01c26934$193d7870$6b01a8c0@ericlaptop> Message-ID: On Tue, 1 Oct 2002, eric jones wrote: > Hey Pearu, > > Overall, first tests went fairly smoothly. Even without djbfft and > fftw, things appeared to install correctly on win32. *very* nice job on > system checking and handling the lack of libraries. > > All tests passed. Sure am glad to finally have some unit tests for fft > stuff! > > I do not have fftw installed. I tested once without djbfft and once > with it. I didn't notice any material difference in timings (the > non-djbfft set given below). My tests are on a 850 MHz PIII and they > are slower than yours. This leads me to believe, even when trying to > use djbfft, I'm not getting it linked correctly. Setup said it detected > my djbfft libraries though (I defined the environment variable pointing > to my libs to get this to work). Note that measuring time is currently reliable only on Linux platform where real jiffies [*] are used. Under windows and other unices either real jiffies are not available or I don't know how to use them, and then time.time is used instead. [*] jiffies are used to measure how much the given process uses CPU time, unit is 1/100th of a second. Under Linux one can get user jiffies and system jiffies, in scipy bench tests the last ones are ignored when measuring performance of algorithms. > Also freqshift puts the Nyquist component at the end of the result > while fftshift puts it at the beginning of the result. This is for > the consistency among varios functions in the fftpack module. Let me add here that it would be good for performance not to fix the data storage order and provide helper functions such as dftfreq function that return this information, also provide functions for manipulating fft results. 
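As a rough illustration of the kind of helper meant here, a dftfreq-style function could map each slot of an FFT result to the frequency it carries. The name dftfreq is taken from the paragraph above; the implementation below is a from-scratch sketch in modern Python (not fftpack2 code), assuming the FFTPACK/Numeric.FFT ordering of non-negative frequencies first, then the negative ones:

```python
# Hypothetical sketch of a dftfreq-style helper: for an n-point DFT with
# sample spacing d, return the frequency associated with each output slot,
# following the FFTPACK/Numeric.FFT storage order (non-negative
# frequencies first, then the negative ones).
def dftfreq(n, d=1.0):
    freqs = []
    for k in range(n):
        if k <= (n - 1) // 2:
            freqs.append(k / (d * n))        # 0, 1/(dn), 2/(dn), ...
        else:
            freqs.append((k - n) / (d * n))  # negative frequencies
    return freqs

print(dftfreq(8))  # [0.0, 0.125, 0.25, 0.375, -0.5, -0.375, -0.25, -0.125]
```

With a helper like this, user code can ask where each coefficient lives instead of hard-wiring one library's output ordering; note that for even n the Nyquist term (-0.5 above) opens the negative-frequency block, matching the freqshift convention discussed earlier.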
For example, djbfft uses a rather strange ordering (that may be the key to its good performance), and to transform the results from djbfft routines to the "standard" convention (and back), two nontrivial array copies are needed. This may be the performance killer when using the djbfft library.

If no one objects to this, I'll put it on my todo list. But it is fine with me to stick to one and only one convention if users really want that.

> And now some installation notes about what I did to get djbfft running:
>
> Install notes:
>
> 1.
> win32 (cygwin and mingw) required #include "errno.h" in any djbfft file
> that used errno.

So, we need to patch djbfft. Probably D.J.B. does not like this.

> 2.
> I installed the latest f2py but still got the warning error:
>
> WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!
>
> fftpack2 requires F2PY version 2.23.190-1367 or higher but got
> 2.23.190-1354
>
> WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!WARNING!
>
> I'm hitting the hay right now, so I'll check out why this happened
> tomorrow.

That is strange. I would expect Python to crash (with a segfault) when running the convolve tests if an F2PY version older than 2.23.190-1367 was used for building. Any chance that you have several f2py2e packages lying around on your system?

> 3.
> How hard would it be to get setup.py to build the djbfft library if one
> wasn't detected. The makefile builds a lot of the files on the fly
> using stuff like:
>
> 4c0.c: \
> idea/c0.c pentium/c0.c ppro/c0.c sparc/c0.c auto_opt
> sed 1s/PRE/pre4.c/ `cat auto_opt`/c0.c > 4c0.c
>
> 4c0.o: \
> compile 4c0.c pre4.c fftc4.h complex4.h real4.h fftr4.h real4.h 4i.c
> ./compile 4c0.c
>
> I'm not sure what all is going on here, but it seems like we could
> either do the same in python or auto-generate a generic version of this
> stuff and include it in SciPy. Of course, since djbfft isn't required,
> this isn't necessary.

In principle, this could be done.
However, D.J.B. certainly has made it more difficult for us to accomplish that, because djbfft uses a rather unique building process, and also the conditions on altering this software are rather restrictive. I think that having setup.py build the djbfft library does not have a very high priority right now, especially if the performance boost is not so great on all platforms and architectures. So, I would put it on the todo list and deal with it in the future.

Pearu

From skip at pobox.com Tue Oct 1 10:08:52 2002
From: skip at pobox.com (Skip Montanaro)
Date: Tue, 1 Oct 2002 09:08:52 -0500
Subject: [SciPy-dev] segfault in scipy.test()
Message-ID: <15769.44148.736971.303581@12-248-11-90.client.attbi.com>

Does this traceback seem familiar to anyone?

Program received signal SIGSEGV, Segmentation fault.
0xfde47060 in copy_ND_array ()
   from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so
(gdb) bt
#0  0xfde47060 in copy_ND_array ()
   from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so
#1  0xfde468c4 in array_from_pyobj ()
   from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so
#2  0xfde478d4 in f2py_rout__flinalg_ddet_r ()
   from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so
... (rest of call stack is Python "call this, eval that" stuff) ...

Here's what I have to go on at the moment. The build environment is Solaris 8 using Sun's Forte v7 compilers.
The module was built using these commands:

mycc -DNDEBUG -O -I/home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/f2py2e/src \
  -I/home/skip/tmp/sun_cc/2.2/include/python2.2 -c \
  build/temp.solaris-2.8-sun4u-2.2/_flinalgmodule.c -o \
  build/temp.solaris-2.8-sun4u-2.2/_flinalgmodule.o

f77 build/temp.solaris-2.8-sun4u-2.2/fortranobject.o \
  build/temp.solaris-2.8-sun4u-2.2/_flinalgmodule.o \
  -L/home/skip/local/SunOS/lib/atlas -L/usr/lib \
  -Lbuild/temp.solaris-2.8-sun4u-2.2 \
  -Lbuild/temp.solaris-2.8-sun4u-2.2 \
  -Lbuild/temp.solaris-2.8-sun4u-2.2 \
  -Lbuild/temp.solaris-2.8-sun4u-2.2 -L/usr/local/ssl/lib \
  -L/home/skip/local/SunOS/lib -L/usr/local/ssl/7.0/lib \
  -R/usr/local/ssl/lib -R/home/skip/local/SunOS/lib \
  -R/usr/local/ssl/7.0/lib -l_flinalg -lfblas -llapack -lblas -lfsu \
  -lF77 -lM77 -lsunmath -lmvec -lf77compat -lm -o \
  build/lib.solaris-2.8-sun4u-2.2/scipy/linalg/_flinalg.so -Bdynamic \
  -G

"mycc" is a shell script which selects between C or C++ compilers based on its command-line arguments. Anyone know how to get rid of the "-L/usr/lib"? It almost certainly shouldn't be there.

'ldd' shows these library dependencies:

$ ldd ./2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so
        libfsu.so.1 => /opt/SUNWspro/lib/libfsu.so.1
        libF77.so.2 => /opt/SUNWspro/lib/libF77.so.2
        libM77.so.2 => /opt/SUNWspro/lib/libM77.so.2
        libsunmath.so.1 => /opt/SUNWspro/lib/libsunmath.so.1
        libf77compat.so.1 => /opt/SUNWspro/lib/libf77compat.so.1
        libm.so.1 => /usr/lib/libm.so.1
        libc.so.1 => /usr/lib/libc.so.1
        libdl.so.1 => /usr/lib/libdl.so.1
        /usr/platform/SUNW,Ultra-2/lib/libc_psr.so.1

I will try building with debug symbols to see if it gives me any more useful information.
Thx, Skip From pearu at cens.ioc.ee Tue Oct 1 10:38:13 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 1 Oct 2002 17:38:13 +0300 (EEST) Subject: [SciPy-dev] segfault in scipy.test() In-Reply-To: <15769.44148.736971.303581@12-248-11-90.client.attbi.com> Message-ID: On Tue, 1 Oct 2002, Skip Montanaro wrote: > > Does this traceback seem familiar to anyone? > > Program received signal SIGSEGV, Segmentation fault. > 0xfde47060 in copy_ND_array () > from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so > (gdb) bt > #0 0xfde47060 in copy_ND_array () > from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so > #1 0xfde468c4 in array_from_pyobj () > from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so > #2 0xfde478d4 in f2py_rout__flinalg_ddet_r () > from /home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/scipy/linalg/_flinalg.so > ... (rest of call stack is Python "call this, eval that" stuff) ... > > Here's what I have to go on at the moment. The build environment is Solaris > 8 using Sun's Forte v7 compilers. What f2py version are you using? If it is 2.13.175-1250, then upgrade to the latest might fix this (since 2.13.175-1250 there are few fixes in f2py that solve various segmentation faults that showed up especially on non-linux platforms). 
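One quick way to rule out a stale copy shadowing a freshly installed f2py2e (an aside: this is a sketch using importlib.util.find_spec from modern Python, which postdates this thread) is to ask the interpreter where it would load the package from, without importing it:

```python
# Report which file a module would be imported from, without importing it.
# Useful for spotting a stale duplicate install shadowing a fresh one
# (e.g. an old f2py2e sitting earlier on sys.path).
import importlib.util

def locate(module_name):
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec is not None else None

print(locate("json"))            # path of the stdlib json package
print(locate("no_such_module"))  # None when nothing is found
```

Running `locate` for each suspect package name and comparing the reported paths shows immediately whether the copy being imported is the one that was just upgraded.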
To be sure that f2py is working correctly, run also its tests: cd f2py2e/tests/f77 python return_integer.py python return_real.py python return_logical.py python return_complex.py python return_character.py > The module was build using these commands: > > mycc -DNDEBUG -O > -I/home/skip/tmp/sun_cc/2.2/lib/python2.2/site-packages/f2py2e/src \ > -I/home/skip/tmp/sun_cc/2.2/include/python2.2 -c \ > build/temp.solaris-2.8-sun4u-2.2/_flinalgmodule.c -o \ > build/temp.solaris-2.8-sun4u-2.2/_flinalgmodule.o > > f77 build/temp.solaris-2.8-sun4u-2.2/fortranobject.o \ > build/temp.solaris-2.8-sun4u-2.2/_flinalgmodule.o \ > -L/home/skip/local/SunOS/lib/atlas -L/usr/lib \ > -Lbuild/temp.solaris-2.8-sun4u-2.2 \ > -Lbuild/temp.solaris-2.8-sun4u-2.2 \ > -Lbuild/temp.solaris-2.8-sun4u-2.2 \ > -Lbuild/temp.solaris-2.8-sun4u-2.2 -L/usr/local/ssl/lib \ > -L/home/skip/local/SunOS/lib -L/usr/local/ssl/7.0/lib \ > -R/usr/local/ssl/lib -R/home/skip/local/SunOS/lib \ > -R/usr/local/ssl/7.0/lib -l_flinalg -lfblas -llapack -lblas -lfsu \ > -lF77 -lM77 -lsunmath -lmvec -lf77compat -lm -o \ > build/lib.solaris-2.8-sun4u-2.2/scipy/linalg/_flinalg.so -Bdynamic \ > -G > > "mycc" is a shell script which selects between C or C++ compilers based on > its command line arguments. Anyone know how to get rid of the "-L/usr/lib"? > It almost certainly shouldn't be there. What's the output of python scipy_distutils/system_info.py ? This might show where "-L/usr/lib" comes in and also give hints if anything is wrong. Pearu From skip at pobox.com Tue Oct 1 12:28:09 2002 From: skip at pobox.com (Skip Montanaro) Date: Tue, 1 Oct 2002 11:28:09 -0500 Subject: [SciPy-dev] setup.py traceback In-Reply-To: References: <15768.22827.596066.102006@12-248-11-90.client.attbi.com> Message-ID: <15769.52505.181402.850655@12-248-11-90.client.attbi.com> Pearu> So, could you check what is the type/value of Pearu> self.search_static_first in your case? Does this tell you what you need to know? 
% python Python 2.3a0 (#90, Sep 28 2002, 09:08:31) [GCC 2.96 20000731 (Mandrake Linux 8.1 2.96-0.62mdk)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import ConfigParser >>> cp = ConfigParser.ConfigParser() >>> cp.read("dummy.ini") >>> cp.getboolean("Tokenizer", "mine_received_headers") False The file dummy.ini has a section named "Tokenizer" that contains a boolean option named "mine_received_headers". Looks like in 2.3 .getboolean() returns booleans. Also, note that since booleans are a subclass of ints, your assert could be coded as assert isinstance(self.search_static_first, int) Skip From kern at caltech.edu Tue Oct 1 13:28:17 2002 From: kern at caltech.edu (Robert Kern) Date: Tue, 1 Oct 2002 10:28:17 -0700 Subject: [SciPy-dev] Are we using the latest ODEPACK? In-Reply-To: <003c01c26934$f84f2ef0$6b01a8c0@ericlaptop> References: <003c01c26934$f84f2ef0$6b01a8c0@ericlaptop> Message-ID: <20021001172817.GA12973@taliesen.caltech.edu> On Tue, Oct 01, 2002 at 05:26:14AM -0500, eric jones wrote: [snip] > Are we using this version of ODEPACK, or an older one? The old one. > If we're using > an older one, do you think we should upgrade before trying to release > 0.2.0? > Will anyone volunteer to check into this and also what is involved with > the upgrade (if needed). I'm not an expert on the package, but it looks like the only routine SciPy uses (LSODA) has only changed its name to DLSODA. The call arguments appear to be the same. Ditto with error messages. Then all you have to do is update the name __odrpack.h and fiddle with the build procedure. -- Robert Kern Ruddock House President kern at caltech.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pearu at cens.ioc.ee Tue Oct 1 17:26:56 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 2 Oct 2002 00:26:56 +0300 (EEST) Subject: [SciPy-dev] Are we using the latest ODEPACK? 
In-Reply-To: <003c01c26934$f84f2ef0$6b01a8c0@ericlaptop> Message-ID: On Tue, 1 Oct 2002, eric jones wrote: > Are we using this version of ODEPACK, or an older one? The old one. > If we're using an older one, do you think we should upgrade before > trying to release 0.2.0? Actually, I think that the integrate module should be rewritten to use f2py-generated interfaces. So, I would suggest postponing this upgrade for the next scipy release. Since I am quite actively using ODE solvers right now, I am interested in getting the integrate module improved, documented, and tested; I could spend some time on it, but not in the coming few weeks. Btw, a nice date for 0.2.0 would be the same as, or soon after, the Python 2.2.2 release, maybe within a week or two, so that Scipy could be tested against Python 2.2.2 before its release. Pearu From eric at enthought.com Wed Oct 2 00:32:44 2002 From: eric at enthought.com (eric jones) Date: Tue, 1 Oct 2002 23:32:44 -0500 Subject: [SciPy-dev] Are we using the latest ODEPACK? In-Reply-To: <20021001172817.GA12973@taliesen.caltech.edu> Message-ID: <000101c269cc$c0d90c50$777ba8c0@ericlaptop> > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Robert Kern > Sent: Tuesday, October 01, 2002 12:28 PM > To: scipy-dev at scipy.net > Subject: Re: [SciPy-dev] Are we using the latest ODEPACK? > > On Tue, Oct 01, 2002 at 05:26:14AM -0500, eric jones wrote: > > [snip] > > > Are we using this version of ODEPACK, or an older one? > > The old one. > > > If we're using > > an older one, do you think we should upgrade before trying to release > > 0.2.0? > > Will anyone volunteer to check into this and also what is involved with > > the upgrade (if needed). > > I'm not an expert on the package, but it looks like the only routine > SciPy uses (LSODA) has only changed its name to DLSODA. The call > arguments appear to be the same. Ditto with error messages. 
Then all you > have to do is update the name __odrpack.h and fiddle with the build > procedure. Hey Robert, Thanks for looking into this. It sounds like the update won't be too bad. Based on Pearu's comments, we'll wait to do this when he has a chance to write f2py wrappers for it. eric > > -- > Robert Kern > Ruddock House President > kern at caltech.edu > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From eric at enthought.com Wed Oct 2 00:36:52 2002 From: eric at enthought.com (eric jones) Date: Tue, 1 Oct 2002 23:36:52 -0500 Subject: [SciPy-dev] Are we using the latest ODEPACK? In-Reply-To: Message-ID: <000201c269cd$549e2e20$777ba8c0@ericlaptop> > On Tue, 1 Oct 2002, eric jones wrote: > > > Are we using this version of ODEPACK, or an older one? > > The old one. > > > If we're using an older one, do you think we should upgrade before > > trying to release 0.2.0? > > Actually, I think that the integrate module should be rewritten for > using f2py generated interfaces. So, I would suggest postponing this > upgrade for the next scipy release. OK. F2py-generated interfaces sound like a good idea. > > Since I am using quite actively ODE solvers right now and hence I am > interested in getting the integrate module improved, documented and > tested; I could spend some time with it, but not in the next coming few > weeks. All very good things. > > Btw, a nice date for 0.2.0 would be the same or soon after when Python > 2.2.2 will be released, may be within a week or two. So that Scipy could > be tested against Python 2.2.2 before its release. I think this is doable. I continue to spend all my available programming time on weave. However, the end is in sight for this -- perhaps 2 more nights of work. After that, I will focus on the release. 
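[For reference, the interface this subthread is about is scipy.integrate.odeint, the Python entry point to ODEPACK's LSODA. A minimal usage sketch -- the decay problem is illustrative only, not from the thread; if DLSODA keeps the calling convention, code like this should be unaffected:]

```python
# Minimal sketch of the LSODA entry point under discussion:
# scipy.integrate.odeint drives ODEPACK's LSODA solver.
from scipy.integrate import odeint

def decay(y, t):
    # dy/dt = -y, whose exact solution is y(0) * exp(-t)
    return -y

t = [0.0, 1.0, 2.0]
y = odeint(decay, [1.0], t)   # integrate from y(0) = 1.0
```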
We are dang close to having a usable bug tracker (Round-Up) running. It actually works now, but we have a few kinks to work out before we turn everyone loose on it. Hopefully it will be up by Thursday and we can use it to manage the release process in a semi-organized way. eric From eric at enthought.com Wed Oct 2 00:50:14 2002 From: eric at enthought.com (eric jones) Date: Tue, 1 Oct 2002 23:50:14 -0500 Subject: [SciPy-dev] fftpack2 for review In-Reply-To: Message-ID: <000401c269cf$329b71f0$777ba8c0@ericlaptop> > On Tue, 1 Oct 2002, eric jones wrote: > > > Hey Pearu, > > > > Overall, first tests went fairly smoothly. Even without djbfft and > > fftw, things appeared to install correctly on win32. *very* nice job on > > system checking and handling the lack of libraries. > > > > All tests passed. Sure am glad to finally have some unit tests for fft > > stuff! > > > > I do not have fftw installed. I tested once without djbfft and once > > with it. I didn't notice any material difference in timings (the > > non-djbfft set given below). My tests are on a 850 MHz PIII and they > > are slower than yours. This leads me to believe, even when trying to > > use djbfft, I'm not getting it linked correctly. Setup said it detected > > my djbfft libraries though (I defined the environment variable pointing > > to my libs to get this to work). > > Note that measuring time is currently reliable only on Linux platform > where real jiffies [*] are used. Under windows and other unices either > real jiffies are not available or I don't know how to use them, and then > time.time is used instead. Ok. > > [*] jiffies are used to measure how much the given process uses CPU time, > unit is 1/100th of a second. Under Linux one can get user jiffies and > system jiffies, in scipy bench tests the last ones are ignored when > measuring performance of algorithms. > > > Also freqshift puts the Nyquist component at the end of the result > > while fftshift puts it at the beginning of the result. 
This is for > the consistency among various functions in the fftpack module. > > Let me add here that it would be good for performance not to fix the data > storage order and provide helper functions such as dftfreq function that > return this information, also provide functions for manipulating fft > results. For example, djbfft uses a rather strange ordering (that may be > the key for its good performance) and to transform the results from djbfft > routines to "standard" convention (and back), two nontrivial copying of > arrays is needed. And this may be the performance killer when using > the djbfft library. > If no one will object this, I'll put it in to my todo list. > But it is fine with me to stick to one and only one convention if users > really want that. Hmmm. I think it is fine to stick with the current behavior for now. Having multiple output formats is fairly tricky to deal with. Well... I guess it's worth asking now. Does anyone need absolute speed with fft so much that they are willing to call some obscure function name and then deal with the ordering themselves? Speak now or forever hold your peace. (Or add it on your own later...) > > > And now some installation notes about what I did to get djbfft running: > > > > Install notes: > > > > 1. > > win32 (cygwin and mingw) required #include "errno.h" in any djbfft file > > that used errno. > > So, we need to patch djbfft. Probably D.J.B. does not like this. Yes. I haven't ever gotten a response from him, so I think the chances are slim. 
I would expect that Python will crash (with > segfault) when running convolve tests if older F2PY version than > 2.23.190-1367 was used for building. Any chance that you have several > f2py2e packages lying around in your system? I don't think so, but I'll look into it again tonight. > > > > 3. > > How hard would it be to get setup.py to build the djbfft library if one > > wasn't detected. The makefile builds a lot of the files on the fly > > using stuff like: > > > > 4c0.c: \ > > idea/c0.c pentium/c0.c ppro/c0.c sparc/c0.c auto_opt > > sed 1s/PRE/pre4.c/ `cat auto_opt`/c0.c > 4c0.c > > > > 4c0.o: \ > > compile 4c0.c pre4.c fftc4.h complex4.h real4.h fftr4.h real4.h 4i.c > > ./compile 4c0.c > > > > I'm not sure what all is going on here, but it seems like we could > > either do the same in python or auto-generate a generic version of this > > stuff and include it in SciPy. Of course, since djbfft isn't required, > > this isn't necessary. > > In principle, this could be done. However, D.J.B. certainly has made it > more difficult for us to accomplish that because djbfft uses a rather > unique building process, and also the conditions of altering this > software are rather restrictive. If we do it within our own CVS tree, I think it should be Ok with him, but I'm also not sure it is worth it. > > I think that having setup.py to build djbfft library does not have a very > high priority right now, especially if the performance boost is not so > great on all platforms and architectures. So, I would put it into the todo > list and deal with it in future. Agreed. Eric From npk at astro.ucsc.edu Wed Oct 2 02:39:22 2002 From: npk at astro.ucsc.edu (Nicholas Konidaris) Date: Tue, 1 Oct 2002 23:39:22 -0700 (PDT) Subject: [SciPy-dev] weave.test() causes segfault! Message-ID: Hello all -- [PYTHON STARTUP] Python 2.2.1 (#11, Sep 30 2002, 16:34:45) [GCC 2.96 20000731 (Red Hat Linux 7.3 2.96-112)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import weave >>> weave.test() creating test suite for: weave.ast_tools No test suite found for weave.base_info No test suite found for weave.base_spec No test suite found for weave.blitz_info No test suite found for weave.blitz_spec creating test suite for: weave.blitz_tools creating test suite for: weave.build_tools creating test suite for: weave.catalog No test suite found for weave.code_blocks No test suite found for weave.common_info creating test suite for: weave.common_spec No test suite found for weave.conversion_code No test suite found for weave.conversion_code_old No test suite found for weave.cxx_info No test suite found for weave.dumbdbm_patched No test suite found for weave.dumb_shelve creating test suite for: weave.ext_tools building extensions here: /export/home/grads/npk/.python22_compiled/257540 building extensions here: /export/home/grads/npk/.python22_compiled/257541 No test suite found for weave.inline_info creating test suite for: weave.inline_tools No test suite found for weave.lib2def No test suite found for weave.misc No test suite found for weave.scalar_info creating test suite for: weave.scalar_spec creating test suite for: weave.sequence_spec creating test suite for: weave.size_check creating test suite for: weave.slice_handler No test suite found for weave.standard_array_info creating test suite for: weave.standard_array_spec No test suite found for weave.swig_info No test suite found for weave.wx_info No test suite found for weave.wx_spec No test suite found for weave.tests ... Expression: result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]+ b[1:-1,2:] + b[1:-1,:-2]) / 5. Run: (10, 10) f Ewarning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations .warning: specified build_dir '_bad_path_' does not exist or is or is not writable. 
Trying default locations ...warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations ............................Segmentation fault (core dumped) So I tried to look up _bad_path_ in the code (with grep) but I was not able to figure out what exactly the test suite is doing by putting bad paths into the pathlist. I have the Numeric module, I have python 2.2, I'm not sure what is going on! Please help! Thank you Nick From eric at enthought.com Wed Oct 2 13:35:58 2002 From: eric at enthought.com (eric jones) Date: Wed, 2 Oct 2002 12:35:58 -0500 Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/tutorial installation.txt,1.3,1.4 In-Reply-To: <20021002171812.168EC3EAD7@www.scipy.com> Message-ID: <000201c26a3a$2b591120$777ba8c0@ericlaptop> Ah. I see you've already done this. Ok. Didn't notice my commit failed. Thanks for fixing this. eric > -----Original Message----- > From: scipy-cvs-admin at scipy.org [mailto:scipy-cvs-admin at scipy.org] On > Behalf Of pearu at localhost.localdomain > Sent: Wednesday, October 02, 2002 12:18 PM > To: scipy-cvs at scipy.org > Subject: [Scipy-cvs] world/scipy/tutorial installation.txt,1.3,1.4 > > Update of /home/cvsroot/world/scipy/tutorial > In directory shaft:/tmp/cvs-serv6716 > > Modified Files: > installation.txt > Log Message: > Corrected Scipy testing instructions + minor formatting. TODO: use > docutils. > > > Index: installation.txt > =================================================================== > RCS file: /home/cvsroot/world/scipy/tutorial/installation.txt,v > retrieving revision 1.3 > retrieving revision 1.4 > diff -C2 -d -r1.3 -r1.4 > *** installation.txt 1 Oct 2002 18:07:01 -0000 1.3 > --- installation.txt 2 Oct 2002 17:17:39 -0000 1.4 > *************** > *** 1,79 **** > Building SciPy on Windows. > > ! First, if your interested in getting going as fast as possible with > SciPy, I'd > ! recommend you grab nice click-installation package instead of > trying to > ! 
build it from scracth. You only need all these tools if your interested > in mucking > ! around with SciPy's internals or rebuilding it. That said, all of scipy > requires > ! (1) and (2), and the scipy.compiler module requires the gcc package > discussed in > ! (3), so you'll likely need to grab these three things unless you already > have them. > > What you need: > > ! 1. 2.1.1c1.exe> or > > ownload?OS=Windows&version=2.1.0&build=211&download=/ActivePython/window s/ > 2.1/ActivePython-2.1.0.msi> > > ! You need Python 2.1 or higher. ActiveState Python comes with some > nice > ! extra packages bundled in (PythonWin sets the standard for freely > available > ! interpreters), but all of these things are also available from > www.python.org. > ! You just have to click-install a few more times. > > ! Python 2.0 may also work, but we haven't tested it and will pretend > we didn't hear > ! you if you ask a question about it. I know Python 1.52 won't work. > > 2. 20.0.0.win32-py2.1.exe> or higher > > ! This is the core to Numeric computing in Python. SciPy relies > heavily on it. You'll > ! probably also want to get may also want to get the > ! py2.1.exe> module, > ! although scipy has its own fft module, and the > ! py2.1.exe> package for > ! random variable functions. > > 3. win32/mingw32/gcc-2.95.2/gcc-2.95.2-msvcrt.exe> > > ! This is a click-install package from Mumit Khan's site that includes > the > ! entire tool set(gcc, make, g77, nm, ar, etc) you'll need for > building > ! SciPy. It lives in my c:\gcc-2.95.2 directory, and you need to edit > your > ! environment to include this in your path. For instructions on how to > do > ! this are here , . This is a pretty old version (1999), but > it > ! works fine for me. Its been tested on Windows 2000 and windows 98 > ! successfully. > > ! There is a http://prdownloads.sourceforge.net/mingw/mingw-1.0-20010608.tar.gz> > ! from sourceforge. Please let us know if you try this out. Also be > aware > ! 
that you need a patch if you use this on a win 9x machine. > > ! Why aren't we using MSVC++? The main reason is it doesn't come with > a > ! fortran compiler, and scipy has *a lot* of fortran code in it. It is > ! possible that MSVC+Absoft or MSVC+Intel could get the job done, but > we > ! haven't checked this out. For those of you who only build things if > they > ! come with a .dsw or .dsp file, I'm sorry. I like working in the > Visual > ! Studio IDE to (outside of it, I'm reduced to printf for debugging). > Figure > ! out a way to do it, and we'll put gladly give it a home. For now, > setup.py > ! is the only way to go (but its pretty dang good if you ask me). > > ! By the way, this'll work with ActiveState Python or Python built with > MSVC. > ! The original Python *does not* have to be built using gcc (thanks to > ! lib2def.py by ). > > 4. -- fastest fourier transform in the > west librareis > > ! Unpack this file in your C:\ directory, as this is where scipy's > setup.py > ! file expects to find it. don't seem > so > ! fond of Mr. Gates, so the Windows build process isn't straight > forward. The > ! standard Unix build process works in cygwin, but the resulting > library > ! will be dependent on the cygwin.dll. I couldn't get the -mno-cygwin > flag > ! to work, nor could I get things to behave when I set gcc to the > mingw32 > ! compiler. Instead, I've put together a simple makefile and config.h > file > ! that build all the libraries on windows using the mingw32 tool chain > ! discussed in (3). A source distrobution for this is . > > 5. -- self-tuning version of blas and lapack routines > > ! Actually, you don't really need these yet, but you soon will when > the > ! linalg module is finished. Be the first one on your block to own > them! > ! Again, I've made a zip file of these little that you can > unpack in > ! your C:\ directory. That's where (a future version of) setup.py > expects to > ! find them. 
> > Getting SciPy: > --- 1,88 ---- > Building SciPy on Windows. > > ! First, if your interested in getting going as fast as possible with > ! SciPy, I'd recommend you grab nice click-installation package > ! instead of trying to build it from scracth. You only need all these > ! tools if your interested in mucking around with SciPy's internals or > ! rebuilding it. That said, all of scipy requires (1) and (2), and the > ! scipy.compiler module requires the gcc package discussed in (3), so > ! you'll likely need to grab these three things unless you already have > ! them. > > What you need: > > ! 1. 2.1.1c1.exe> > ! or > > ownload?OS=Windows&version=2.1.0&build=211&download=/ActivePython/window s/ > 2.1/ActivePython-2.1.0.msi> > > ! You need Python 2.1 or higher. ActiveState Python comes with some > ! nice extra packages bundled in (PythonWin sets the standard for > ! freely available interpreters), but all of these things are also > ! available from www.python.org. You just have to click-install a > ! few more times. > > ! Python 2.0 may also work, but we haven't tested it and will > ! pretend we didn't hear you if you ask a question about it. I know > ! Python 1.52 won't work. > > 2. 20.0.0.win32-py2.1.exe> or higher > > ! This is the core to Numeric computing in Python. SciPy relies > ! heavily on it. You'll probably also want to get may also want to > ! get the > ! py2.1.exe> > ! module, although scipy has its own fft module, and the > ! py2.1.exe> > ! package for random variable functions. > > 3. win32/mingw32/gcc-2.95.2/gcc-2.95.2-msvcrt.exe> > > ! This is a click-install package from Mumit Khan's site that > ! includes the entire tool set(gcc, make, g77, nm, ar, etc) you'll > ! need for building SciPy. It lives in my c:\gcc-2.95.2 directory, > ! and you need to edit your environment to include this in your path. > ! For instructions on how to do this are here , . This is > ! a pretty old version (1999), but it works fine for me. Its been > ! 
tested on Windows 2000 and windows 98 successfully. > > ! There is a version=http://prdownloads.sourceforge.net/mingw/mingw-1.0- > 20010608.tar.gz> > ! from sourceforge. Please let us know if you try this out. Also be > ! aware that you need a patch if you use this on a win 9x machine. > > ! Why aren't we using MSVC++? The main reason is it doesn't come > ! with a fortran compiler, and scipy has *a lot* of fortran code in > ! it. It is possible that MSVC+Absoft or MSVC+Intel could get the > ! job done, but we haven't checked this out. For those of you who > ! only build things if they come with a .dsw or .dsp file, I'm sorry. > ! I like working in the Visual Studio IDE to (outside of it, I'm > ! reduced to printf for debugging). Figure out a way to do it, and > ! we'll put gladly give it a home. For now, setup.py is the only way > ! to go (but its pretty dang good if you ask me). > > ! By the way, this'll work with ActiveState Python or Python built > ! with MSVC. The original Python *does not* have to be built using > ! gcc (thanks to lib2def.py by ). > > 4. -- fastest fourier transform in the > west librareis > > ! Unpack this file in your C:\ directory, as this is where scipy's > ! setup.py file expects to find it. > ! don't seem so fond of Mr. Gates, so the Windows build process > ! isn't straight forward. The standard Unix build process works in > ! cygwin, but the resulting library will be dependent on the > ! cygwin.dll. I couldn't get the -mno-cygwin flag to work, nor > ! could I get things to behave when I set gcc to the mingw32 > ! compiler. Instead, I've put together a simple makefile and > ! config.h file that build all the libraries on windows using the > ! mingw32 tool chain discussed in (3). A source distrobution for > ! this is . > > 5. -- self-tuning version of blas and lapack routines > > ! Actually, you don't really need these yet, but you soon will when > ! the linalg module is finished. Be the first one on your block to > ! own them! 
Again, I've made a zip file of these little that > ! you can unpack in your C:\ directory. That's where (a future > ! version of) setup.py expects to find them. > > Getting SciPy: > *************** > *** 84,92 **** > , and don't use it in > software=http://groups.google.com/groups?hl=en&lr=lang_en&safe=off&th=d3 46 > 0cb8b82dc6d,129&start=0>. > ! Unpack this zip file in c:\temp, or some other temporary place, and > then > ! execute the setup.py file( Assuming you've installed (1) through > (5)). > ! This is much like a makefile for the Unix geeks in the crowd, but it > is > ! "Python-aware" and handles all versioning and installation path > issues > ! automatically. > > c:\> cd c:\temp\scipy > --- 93,101 ---- > , and don't use it in > software=http://groups.google.com/groups?hl=en&lr=lang_en&safe=off&th=d3 46 > 0cb8b82dc6d,129&start=0>. > ! Unpack this zip file in c:\temp, or some other temporary place, > ! and then execute the setup.py file( Assuming you've installed (1) > ! through (5)). This is much like a makefile for the Unix geeks in > ! the crowd, but it is "Python-aware" and handles all versioning and > ! installation path issues automatically. > > c:\> cd c:\temp\scipy > *************** > *** 96,116 **** > << runs for maybe a minute copying files into the home python > directory >> > > ! For those who want to be on the bleeding edge, you can also check > the latest > ! snapshot of SciPy out of the > ! > > 7. Testing SciPy > > ! Python comes with some unit tests and the numbers are growing every > day > ! (well monthly anyway). To test your installation use > scipy.test('fast') to > ! test the quick sub-set of all tests or scipy.test() to run all > available > ! tests. The first should take less than one second, the second will > take > ! minutes, depending on your processor speed because it includes the > ! scipy.compiler tests which give gcc a major workout. > > c:> python > ... > >>> import scipy > ! 
>>> scipy.test('fast') > No test suite found for scipy.build_flib > No test suite found for scipy.data_store > --- 105,125 ---- > << runs for maybe a minute copying files into the home python > directory >> > > ! For those who want to be on the bleeding edge, you can also check > ! the latest snapshot of SciPy out of the repository=http://scipy.org/download/CVSInstruct> > > 7. Testing SciPy > > ! Python comes with some unit tests and the numbers are growing > ! every day (well monthly anyway). To test your installation use > ! scipy.test(1) to test the quick sub-set of all tests or > ! scipy.test(10) to run all available tests. The first should take > ! less than one second, the second will take minutes, depending on > ! your processor speed because it includes the scipy.compiler tests > ! which give gcc a major workout. > > c:> python > ... > >>> import scipy > ! >>> scipy.test(1) > No test suite found for scipy.build_flib > No test suite found for scipy.data_store > *************** > *** 145,155 **** > >>> > > ! If any tests fail, please report them to the , and > include the > ! output in your posting. > > 8. > > ! Subscribing to the developer or user mailing lists will keep you up > with > ! SciPy's latest > ! happenings. > > --- 154,163 ---- > >>> > > ! If any tests fail, please report them to the , and > ! include the output in your posting. > > 8. > > ! Subscribing to the developer or user mailing lists will keep you up > ! with SciPy's latest happenings. > > > > _______________________________________________ > Scipy-cvs mailing list > Scipy-cvs at scipy.org > http://scipy.net/mailman/listinfo/scipy-cvs From dmorrill at enthought.com Wed Oct 2 16:38:38 2002 From: dmorrill at enthought.com (David C. 
Morrill) Date: Wed, 2 Oct 2002 15:38:38 -0500 Subject: [SciPy-dev] Version problems between fastumath in scipy_base and Numeric Message-ID: <02ad01c26a53$afc5a270$6501a8c0@Dave> We've recently run into a problem with the Windows Installer version of Chaco: If the version of Numeric used to build fastumath (in scipy_base) when constructing the Installer executable is not exactly the same as the version of Numeric on the installing user's Windows machine, errors occur, as illustrated by the following Chaco traceback reported by several users: Traceback (most recent call last): File "C:\Python22\Lib\site-packages\chaco\wxplot_window.py", line 238, in _min_size dx, dy = canvas._min_size() File "C:\Python22\Lib\site-packages\chaco\plot_canvas.py", line 2006, in _min_size tdx, tdy = item._min_size() File "C:\Python22\Lib\site-packages\chaco\plot_canvas.py", line 423, in _min_size axes_info = AxesInfo( index_axis, values, process_index ) File "C:\Python22\Lib\site-packages\chaco\plot_info.py", line 426, in __init__ self.add_axes( values ) File "C:\Python22\Lib\site-packages\chaco\plot_info.py", line 443, in add_axes self.process_axes() File "C:\Python22\Lib\site-packages\chaco\plot_info.py", line 522, in process_axes axis_info._min_size() File "C:\Python22\Lib\site-packages\chaco\plot_info.py", line 731, in _min_size axis.tick_interval ) File "C:\Python22\Lib\site-packages\chaco\plot_ticks.py", line 67, in auto_ticks if is_base2( rng ) and is_base2( upper ) and rng > 4: File "C:\Python22\Lib\site-packages\chaco\plot_ticks.py", line 121, in is_base2 return ((lg == floor( lg )) and (lg > 0.0)) TypeError: function not supported for these types, and can't coerce to supported types Eric thinks fastumath used to be more insensitive to the versions of Numeric used by builder and installer. Does anyone have any idea what the problem might be? 
Dave Morrill From eric at enthought.com Wed Oct 2 22:17:27 2002 From: eric at enthought.com (eric jones) Date: Wed, 2 Oct 2002 21:17:27 -0500 Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/sparse setup_sparse.py,1.1,1.2 In-Reply-To: <20021003002027.E874D3EAD2@www.scipy.com> Message-ID: <000001c26a83$054a2340$6b01a8c0@ericlaptop> I would like this to happen also. There is one issue that I think needs to be solved to handle this. Some packages (weave, for instance) have a --without-dependencies option. Unless this is specified, the weave distribution includes scipy_distutils, scipy_test, and maybe something else. This could, of course, be folded into the setup_weave.py file, but it begs a larger question. I've been struggling lately with how we should structure things moving forward now that weave, scipy, and chaco all exist, run independently, and rely on scipy_base, scipy_distutils, and scipy_test. Right now, each of the packages we provide on the web includes all the dependencies. This leads to weave installations clobbering the scipy_distutils installation provided by a scipy package and vice versa. There are multiple ways to handle this. The easiest is to actually package scipy_base, scipy_distutils, and scipy_test into a separate package and list it as a dependency that people must install first. 
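[The "separate core package" option above implies a check at build time that the support packages are actually present. A hypothetical sketch -- the package names are the real scipy support packages, but the helper itself is invented and was never part of scipy_distutils:]

```python
# Hypothetical pre-build check for the "install the core package
# first" approach: report which scipy support packages are missing.
# The helper is illustrative only, not actual scipy_distutils code.

def missing_dependencies(required=('scipy_base', 'scipy_distutils',
                                   'scipy_test')):
    """Return the names of required packages that cannot be imported."""
    missing = []
    for name in required:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    return missing
```

A setup script could call this first and refuse to proceed (with a pointer to the core package download) instead of silently clobbering an existing installation.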
I hate requiring people to install multiple packages to get a tool working, but things are shaking out so that this sort of thing makes sense. Another approach, which is possible now that Pearu and Travis added all the version information, is to augment scipy_distutils with the capability of checking the version of dependent packages and only installing them if they don't exist or the version number of the currently installed package is older. This doesn't sound hard, but I also don't want to wander down the path of re-implementing RPM or anything like that. This sort of project sometimes mushrooms out of control. Still, I like the solution better, because it allows us to continue distributing source tarballs with everything in them so that users only need to get a single file to try out Chaco (assuming they have Python... and Numeric... version 22.0... and wxPython... version 2.3.3.1...). The platform that I want to keep simplest unfortunately won't benefit from any of this. The Windows click-button installers that distutils creates don't have the ability to check any of this version stuff and do conditional installs. For these guys, clobbering old installations or requiring a two-tiered installation process are the only (short term) solutions. What are other people's opinions on the subject? Eric > > Update of /home/cvsroot/world/scipy/sparse > In directory shaft:/tmp/cvs-serv7795/sparse > > Modified Files: > setup_sparse.py > Log Message: > Carried out major unification of xxx/setup_xxx.py files. Discussion: some > modules contain setup.py files that repeat the functionality of the > corresponding setup_xxx.py files. Are there any objections if setup.py and > setup_xxx.py will be merged into setup_xxx.py and setup.py files will be > removed from CVS? 
> > > Index: setup_sparse.py > =================================================================== > RCS file: /home/cvsroot/world/scipy/sparse/setup_sparse.py,v > retrieving revision 1.1 > retrieving revision 1.2 > diff -C2 -d -r1.1 -r1.2 > *** setup_sparse.py 14 Aug 2002 16:30:04 -0000 1.1 > --- setup_sparse.py 3 Oct 2002 00:19:55 -0000 1.2 > *************** > *** 9,13 **** > > def configuration(parent_package=''): > ! config = default_config_dict('sparse',parent_package) > local_path = get_path(__name__) > > --- 9,14 ---- > > def configuration(parent_package=''): > ! package = 'sparse' > ! config = default_config_dict(package,parent_package) > local_path = get_path(__name__) > > *************** > *** 32,36 **** > # Extension > sources = ['_zsuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,'sparse._zsuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > --- 33,37 ---- > # Extension > sources = ['_zsuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,package,'_zsuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > *************** > *** 41,45 **** > > sources = ['_dsuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,'sparse._dsuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > --- 42,46 ---- > > sources = ['_dsuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,package,'_dsuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > *************** > *** 50,54 **** > > sources = ['_csuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,'sparse._csuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > --- 51,55 ---- > > sources = ['_csuperlumodule.c'] > ! 
ext_args = {'name':dot_join(parent_package,package,'_csuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > *************** > *** 59,63 **** > > sources = ['_ssuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,'sparse._ssuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > --- 60,64 ---- > > sources = ['_ssuperlumodule.c'] > ! ext_args = {'name':dot_join(parent_package,package,'_ssuperlu'), > 'sources':[os.path.join(local_path,x) for x in sources], > 'libraries': ['superlu','myblas'] > *************** > *** 67,71 **** > config['ext_modules'].append(ext) > > ! ext_args = {'name':dot_join(parent_package,'sparse._sparsekit'), > 'sources':[os.path.join(local_path,'_sparsekit.pyf')], > #'f2py_options':['--no-wrap-functions'], > --- 68,72 ---- > config['ext_modules'].append(ext) > > ! ext_args = {'name':dot_join(parent_package,package,'_sparsekit'), > 'sources':[os.path.join(local_path,'_sparsekit.pyf')], > #'f2py_options':['--no-wrap-functions'], > > > _______________________________________________ > Scipy-cvs mailing list > Scipy-cvs at scipy.org > http://scipy.net/mailman/listinfo/scipy-cvs From prabhu at aero.iitm.ernet.in Thu Oct 3 00:52:21 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 3 Oct 2002 10:22:21 +0530 Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/sparse setup_sparse.py,1.1,1.2 In-Reply-To: <000001c26a83$054a2340$6b01a8c0@ericlaptop> References: <20021003002027.E874D3EAD2@www.scipy.com> <000001c26a83$054a2340$6b01a8c0@ericlaptop> Message-ID: <15771.52485.965494.966333@monster.linux.in> >>>>> "EJ" == eric jones writes: [snip] EJ> The platform that I want to keep simplest unfortunately won't EJ> benefit from any of this. The windows click-button-install EJ> that distutils creates don't have the ability to check any of EJ> this version stuff and do conditional installs. 
For these
EJ> guys, clobbering old installations or requiring a two-tiered
EJ> installation process are the only (short-term) solutions.
EJ> What are other people's opinions on the subject?

If scipy_base, scipy_distutils and scipy_test do not have any dependencies like ATLAS, wxPython, Numeric etc., then simply package scipy_core (or scipy_base) as a package that contains these. Make scipy, weave, chaco etc. simply require this scipy_core package apart from their own specific requirements (ReportLab, wxPython etc.). This way you isolate the dependencies and make it easy for packagers and users. I'm sure folks would not have difficulty installing one package using python setup.py (especially if it's easy and has no hard-to-install dependencies). In this case they only do it once and can then install scipy, weave, chaco, f2py and whatnot.

I think this is the best solution.

cheers,
prabhu

From skip at pobox.com Thu Oct 3 07:31:09 2002
From: skip at pobox.com (Skip Montanaro)
Date: Thu, 3 Oct 2002 06:31:09 -0500
Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/sparse setup_sparse.py,1.1,1.2
In-Reply-To: <000001c26a83$054a2340$6b01a8c0@ericlaptop>
References: <20021003002027.E874D3EAD2@www.scipy.com> <000001c26a83$054a2340$6b01a8c0@ericlaptop>
Message-ID: <15772.10877.900245.941698@localhost.localdomain>

eric> There are multiple ways to handle this. The easiest is to
eric> actually package scipy_base, scipy_distutils, and scipy_test into
eric> a separate package and list it as a dependency that people must
eric> install first....

eric> Another approach is to augment scipy_distutils with the
eric> capabilities of checking the version of dependent packages and
eric> only installing them if they don't exist or the version number of
eric> the currently installed package is older.

Why not just distribute everything as a single, bundled package? Sort of a sumo-scipy?
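The version check eric describes in the quoted text above could be sketched roughly as follows. This is a hypothetical helper for illustration only; scipy_distutils has no such function, and the names `parse_version` and `need_install` are invented:

```python
def parse_version(s):
    # Split a dotted version string like "2.3.3.1" into a tuple of ints
    # so that comparison is numeric rather than lexicographic
    # ("10.0" must compare greater than "9.9").
    return tuple(int(part) for part in s.split('.'))

def need_install(installed, required):
    # Install only if the package is missing (None) or strictly older
    # than the required version.
    if installed is None:
        return True
    return parse_version(installed) < parse_version(required)
```

For example, `need_install("2.3.3", "2.3.3.1")` would report that an upgrade is needed, while an up-to-date package is left alone.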
Skip

From skip at pobox.com Thu Oct 3 11:35:07 2002
From: skip at pobox.com (Skip Montanaro)
Date: Thu, 3 Oct 2002 10:35:07 -0500
Subject: [SciPy-dev] Why no ATLAS?
Message-ID: <15772.25515.189150.270966@localhost.localdomain>

Given this environment:

bash-2.05$ ATLAS=$HOME/src/ATLAS/lib/SunOS_gcc ; export ATLAS
bash-2.05$ BLAS=$HOME/src/BLAS/sungcc ; export BLAS
bash-2.05$ LAPACK=$HOME/src/LAPACK/Sun_GCC ; export LAPACK
bash-2.05$ ls -lL $ATLAS
total 13056
-rw-r--r--  1 skip  skip     6181 Sep 23 10:57 Make.inc
-rw-r--r--  1 skip  skip      739 Sep 23 10:57 Makefile
-rw-r--r--  1 skip  skip  5722752 Sep 23 11:52 libatlas.a
-rw-r--r--  1 skip  skip   290888 Sep 23 11:37 libcblas.a
-rw-r--r--  1 skip  skip   343020 Sep 23 11:54 liblapack.a
-rw-r--r--  1 skip  skip   279972 Sep 23 11:04 libtstatlas.a

Why doesn't scipy_distutils/system_info.py report the presence of any ATLAS libraries?

bash-2.05$ ~/tmp/sun_gcc/2.2/bin/python scipy_distutils/system_info.py
atlas_info:
  NOT AVAILABLE
blas_info:
  FOUND:
    libraries = ['blas']
    library_dirs = ['/home/skip/src/BLAS/sungcc']
...
lapack_info:
  FOUND:
    libraries = ['lapack']
    library_dirs = ['/home/skip/src/LAPACK/Sun_GCC']

Thx,
Skip

From pearu at cens.ioc.ee Thu Oct 3 11:44:20 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Thu, 3 Oct 2002 18:44:20 +0300 (EEST)
Subject: [SciPy-dev] Why no ATLAS?
In-Reply-To: <15772.25515.189150.270966@localhost.localdomain>
Message-ID:

On Thu, 3 Oct 2002, Skip Montanaro wrote:

>
> Given this environment:
>
> bash-2.05$ ATLAS=$HOME/src/ATLAS/lib/SunOS_gcc ; export ATLAS
> bash-2.05$ BLAS=$HOME/src/BLAS/sungcc ; export BLAS
> bash-2.05$ LAPACK=$HOME/src/LAPACK/Sun_GCC ; export LAPACK
> bash-2.05$ ls -lL $ATLAS
> total 13056
> -rw-r--r--  1 skip  skip     6181 Sep 23 10:57 Make.inc
> -rw-r--r--  1 skip  skip      739 Sep 23 10:57 Makefile
> -rw-r--r--  1 skip  skip  5722752 Sep 23 11:52 libatlas.a
> -rw-r--r--  1 skip  skip   290888 Sep 23 11:37 libcblas.a
> -rw-r--r--  1 skip  skip   343020 Sep 23 11:54 liblapack.a
> -rw-r--r--  1 skip  skip   279972 Sep 23 11:04 libtstatlas.a
>
> Why doesn't scipy_distutils/system_info.py report the presence of any ATLAS
> libraries?

Because $ATLAS does not contain a libf77blas.a file. I don't know why your atlas build did not create that, though.

Pearu

From skip at pobox.com Thu Oct 3 12:34:13 2002
From: skip at pobox.com (Skip Montanaro)
Date: Thu, 3 Oct 2002 11:34:13 -0500
Subject: [SciPy-dev] Why no ATLAS?
In-Reply-To:
References: <15772.25515.189150.270966@localhost.localdomain>
Message-ID: <15772.29061.422092.633885@localhost.localdomain>

>> Why doesn't scipy_distutils/system_info.py report the presence of any
>> ATLAS libraries?

Pearu> Because $ATLAS does not contain a libf77blas.a file. I don't know
Pearu> why your atlas build did not create that, though.

Most probably pilot error. I'll revisit the Atlas build (again). Thanks for the help.

Skip

From scipyman at scipy.org Thu Oct 3 16:14:46 2002
From: scipyman at scipy.org (Scipy Manager)
Date: Thu, 3 Oct 2002 15:14:46 -0500
Subject: [SciPy-dev] test_type_check.test_isnan
Message-ID: <000501c26b19$84f77700$7601a8c0@superfly>

Greetings,

I get a python interpreter crash ("memory cannot be 'read'" or some such) with the latest cvs when running the test suite.
Windows 2K

C:\>c:\python22\python
Python 2.2.1 (#34, Apr 9 2002, 19:34:33) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
>>> scipy.__version__
'0.2.0_alpha_140.4279'
>>> scipy.test()
...

Here's the last part of the verbose test output...

check_integer_split_2D_rows (test_shape_base.test_array_split) ... ok
check_equal_split (test_shape_base.test_split) ... ok
check_unequal_split (test_shape_base.test_split) ... ok
check_0D_array (test_shape_base.test_atleast_1d) ... ok
check_1D_array (test_shape_base.test_atleast_1d) ... ok
check_2D_array (test_shape_base.test_atleast_1d) ... ok
check_3D_array (test_shape_base.test_atleast_1d) ... ok
Test to make sure equivalent Travis O's r1array function ... ok
check_0D_array (test_shape_base.test_atleast_2d) ... ok
check_1D_array (test_shape_base.test_atleast_2d) ... ok
check_2D_array (test_shape_base.test_atleast_2d) ... ok
check_3D_array (test_shape_base.test_atleast_2d) ... ok
Test to make sure equivalent Travis O's r2array function ... ok
check_0D_array (test_shape_base.test_atleast_3d) ... ok
check_1D_array (test_shape_base.test_atleast_3d) ... ok
check_2D_array (test_shape_base.test_atleast_3d) ... ok
check_3D_array (test_shape_base.test_atleast_3d) ... ok
check_0D_array (test_shape_base.test_hstack) ... ok
check_1D_array (test_shape_base.test_hstack) ... ok
check_2D_array (test_shape_base.test_hstack) ... ok
check_0D_array (test_shape_base.test_vstack) ... ok
check_1D_array (test_shape_base.test_vstack) ... ok
check_2D_array (test_shape_base.test_vstack) ... ok
check_2D_array2 (test_shape_base.test_vstack) ... ok
check_0D_array (test_shape_base.test_dstack) ... ok
check_1D_array (test_shape_base.test_dstack) ... ok
check_2D_array (test_shape_base.test_dstack) ... ok
check_2D_array2 (test_shape_base.test_dstack) ... ok
check_0D_array (test_shape_base.test_hsplit) ... ok
check_1D_array (test_shape_base.test_hsplit) ...
ok
check_2D_array (test_shape_base.test_hsplit) ... ok
check_1D_array (test_shape_base.test_vsplit) ... ok
check_2D_array (test_shape_base.test_vsplit) ... ok
check_2D_array (test_shape_base.test_dsplit) ... ok
check_3D_array (test_shape_base.test_dsplit) ... ok
check_basic (test_shape_base.test_squeeze) ... ok
check_basic (test_type_check.test_isscalar) ... ok
check_basic (test_type_check.test_real_if_close) ... ok
check_cmplx (test_type_check.test_real) ... ok
check_real (test_type_check.test_real) ... ok
check_cmplx (test_type_check.test_imag) ... ok
check_real (test_type_check.test_imag) ... ok
check_basic (test_type_check.test_iscomplexobj) ... ok
check_basic (test_type_check.test_isrealobj) ... ok
check_fail (test_type_check.test_iscomplex) ... ok
check_pass (test_type_check.test_iscomplex) ... ok
check_fail (test_type_check.test_isreal) ... ok
check_pass (test_type_check.test_isreal) ... ok
check_complex (test_type_check.test_isnan) ... ok
check_complex1 (test_type_check.test_isnan) ...

[[then the big crash]]

Thanks,

Travis

From Chuck.Harris at sdl.usu.edu Thu Oct 3 17:21:27 2002
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Thu, 3 Oct 2002 15:21:27 -0600
Subject: [SciPy-dev] Physical Constants
Message-ID:

Hi All,

I had far too much time on my hands, so I put all the CODATA physical constants into a dictionary. I admit this doesn't save a whole lot of time over just looking up the constants, but it does add some consistency and self-documentation, i.e.

>>> import PhysicalConstants as pc
>>> c = pc.value('speed of light in vacuum')
>>> print c
299792458.0

The module documentation is

"""
Fundamental Physical Constants

These constants are taken from CODATA Recommended Values of the Fundamental Physical Constants: 1998. They may be found at physics.nist.gov/constants. The values are stored in the dictionary physical_constants as a tuple containing the value, the units, and the relative precision, in that order.
All constants are in SI units unless otherwise stated. Several helper functions are provided:

    value(key) returns the value of the physical constant.
    unit(key) returns the units of the physical constant.
    precision(key) returns the relative precision of the physical constant.
    find(sub) prints out a list of keys containing the string sub.
"""

I've attached the python file for anyone who is interested.

On a further note, I have some small routines for generating permutations, mixed radix counters, and grey code counters. I find these come in handy on occasion. Any takers?

Chuck

-------------- next part --------------
A non-text attachment was scrubbed...
Name: PhysicalConstants.py
Type: application/octet-stream
Size: 16739 bytes
Desc: PhysicalConstants.py
URL:

From eric at enthought.com Thu Oct 3 23:26:17 2002
From: eric at enthought.com (eric jones)
Date: Thu, 3 Oct 2002 22:26:17 -0500
Subject: [SciPy-dev] test_type_check.test_isnan
In-Reply-To: <000501c26b19$84f77700$7601a8c0@superfly>
Message-ID: <000001c26b55$cd119930$777ba8c0@ericlaptop>

Hey Travis,

This should be fixed in CVS now.

eric

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On
> Behalf Of Scipy Manager
> Sent: Thursday, October 03, 2002 3:15 PM
> To: scipy-dev at scipy.org
> Subject: [SciPy-dev] test_type_check.test_isnan
>
> Greetings,
>
> I get a python interpreter crash ("memory cannot be 'read'" or some such)
> with the latest cvs when running the test suite.
>
> Windows 2K
>
> C:\>c:\python22\python
> Python 2.2.1 (#34, Apr 9 2002, 19:34:33) [MSC 32 bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import scipy
> >>> scipy.__version__
> '0.2.0_alpha_140.4279'
> >>> scipy.test()
> ...
> Here's the last part of the verbose test output...
>
> check_integer_split_2D_rows (test_shape_base.test_array_split) ... ok
> check_equal_split (test_shape_base.test_split) ...
ok > check_unequal_split (test_shape_base.test_split) ... ok > check_0D_array (test_shape_base.test_atleast_1d) ... ok > check_1D_array (test_shape_base.test_atleast_1d) ... ok > check_2D_array (test_shape_base.test_atleast_1d) ... ok > check_3D_array (test_shape_base.test_atleast_1d) ... ok > Test to make sure equivalent Travis O's r1array function ... ok > check_0D_array (test_shape_base.test_atleast_2d) ... ok > check_1D_array (test_shape_base.test_atleast_2d) ... ok > check_2D_array (test_shape_base.test_atleast_2d) ... ok > check_3D_array (test_shape_base.test_atleast_2d) ... ok > Test to make sure equivalent Travis O's r2array function ... ok > check_0D_array (test_shape_base.test_atleast_3d) ... ok > check_1D_array (test_shape_base.test_atleast_3d) ... ok > check_2D_array (test_shape_base.test_atleast_3d) ... ok > check_3D_array (test_shape_base.test_atleast_3d) ... ok > check_0D_array (test_shape_base.test_hstack) ... ok > check_1D_array (test_shape_base.test_hstack) ... ok > check_2D_array (test_shape_base.test_hstack) ... ok > check_0D_array (test_shape_base.test_vstack) ... ok > check_1D_array (test_shape_base.test_vstack) ... ok > check_2D_array (test_shape_base.test_vstack) ... ok > check_2D_array2 (test_shape_base.test_vstack) ... ok > check_0D_array (test_shape_base.test_dstack) ... ok > check_1D_array (test_shape_base.test_dstack) ... ok > check_2D_array (test_shape_base.test_dstack) ... ok > check_2D_array2 (test_shape_base.test_dstack) ... ok > check_0D_array (test_shape_base.test_hsplit) ... ok > check_1D_array (test_shape_base.test_hsplit) ... ok > check_2D_array (test_shape_base.test_hsplit) ... ok > check_1D_array (test_shape_base.test_vsplit) ... ok > check_2D_array (test_shape_base.test_vsplit) ... ok > check_2D_array (test_shape_base.test_dsplit) ... ok > check_3D_array (test_shape_base.test_dsplit) ... ok > check_basic (test_shape_base.test_squeeze) ... ok > check_basic (test_type_check.test_isscalar) ... 
ok > check_basic (test_type_check.test_real_if_close) ... ok > check_cmplx (test_type_check.test_real) ... ok > check_real (test_type_check.test_real) ... ok > check_cmplx (test_type_check.test_imag) ... ok > check_real (test_type_check.test_imag) ... ok > check_basic (test_type_check.test_iscomplexobj) ... ok > check_basic (test_type_check.test_isrealobj) ... ok > check_fail (test_type_check.test_iscomplex) ... ok > check_pass (test_type_check.test_iscomplex) ... ok > check_fail (test_type_check.test_isreal) ... ok > check_pass (test_type_check.test_isreal) ... ok > check_complex (test_type_check.test_isnan) ... ok > check_complex1 (test_type_check.test_isnan) ... > > [[then the big crash]] > > Thanks, > > Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From Chuck.Harris at sdl.usu.edu Fri Oct 4 15:41:25 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Fri, 4 Oct 2002 13:41:25 -0600 Subject: [SciPy-dev] RE: zeros.c Message-ID: Hi Travis, I am glad to see that the reference counting in zeros.c got fixed. 
I had a few more cosmetic fixes that made the error reporting more useful, namely

double scipy_zeros_functions_func(double x, void *params)
{
    scipy_zeros_parameters *myparams = params;
    PyObject *args, *f, *retval=NULL;
    double val;

    args = myparams->args;
    f = myparams->function;
    PyTuple_SetItem(args, 0, Py_BuildValue("d", x));
    retval = PyObject_CallObject(f, args);
    if (PyErr_Occurred()) {
        Py_XDECREF(retval);
        longjmp(myparams->env, 1);
    }
    val = PyFloat_AsDouble(retval);
    if (PyErr_Occurred()) {
        PyErr_SetString(PyExc_RuntimeError,
                        "Could not convert function value to double");
        Py_XDECREF(retval);
        longjmp(myparams->env, 1);
    }
    Py_XDECREF(retval);
    return val;
}

Chuck

From oliphant.travis at ieee.org Fri Oct 4 16:02:08 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: 04 Oct 2002 14:02:08 -0600
Subject: [SciPy-dev] fftpack2 for review
In-Reply-To:
References:
Message-ID: <1033761729.14732.5.camel@travis.local.net>

I finally had some time to test fftpack2. It looks really great and I'm happy to support including it. I'll attach my test results at the end (with fftw and djbfft).

My only comment right now is that I don't like doing away with the word fftshift. This is a common command in MATLAB and is quite familiar. I don't necessarily mind freqshift,

> > Also freqshift puts the Nyquist component at the end of the result
> > while fftshift puts it at the beginning of the result. This is for
> > the consistency among various functions in the fftpack module.

Is there a difference between freqshift and fftshift? If so, then I am not satisfied. I took great pains to ensure that fftshift works exactly the same way as in MATLAB.

It is important that the common fft operations present a single interface to the user. If we want to offer specialized functions that do not mess with the ordering but expect the user to know what they are doing, that is fine, as long as the default functions follow the basic pattern.

I think that the MATLAB convention should be followed here.
-Travis

Results of testing:

...................
 Fast Fourier Transform
========================================================
        |    real input      |    complex input
--------------------------------------------------------
   size |fftpack2| FFT | scipy |fftpack2| FFT | scipy
--------------------------------------------------------
    100 |  0.31  | 0.27 | 0.27 |  0.32  | 0.28 | 0.27  (secs for 7000 calls)
   1000 |  0.23  | 0.32 | 0.30 |  0.30  | 0.32 | 0.30  (secs for 2000 calls)
    256 |  0.47  | 0.50 | 0.48 |  0.57  | 0.47 | 0.48  (secs for 10000 calls)
    512 |  0.61  | 0.82 | 0.78 |  0.79  | 0.82 | 0.76  (secs for 10000 calls)
   1024 |  0.09  | 0.16 | 0.12 |  0.13  | 0.15 | 0.12  (secs for 1000 calls)
   2048 |  0.16  | 0.27 | 0.25 |  0.26  | 0.27 | 0.31  (secs for 1000 calls)
   4096 |  0.15  | 0.43 | 0.39 |  0.30  | 0.44 | 0.41  (secs for 500 calls)
   8192 |  0.60  | 2.53 | 2.54 |  1.45  | 2.71 | 2.57  (secs for 500 calls)
.
 Inverse Fast Fourier Transform
========================================================
        |    real input      |    complex input
--------------------------------------------------------
   size |fftpack2| FFT | scipy |fftpack2| FFT | scipy
--------------------------------------------------------
    100 |  0.31  | 0.68 | 0.70 |  0.34  | 0.71 | 0.69  (secs for 7000 calls)
   1000 |  0.25  | 0.76 | 0.74 |  0.39  | 0.78 | 0.76  (secs for 2000 calls)
    256 |  0.51  | 1.28 | 1.29 |  0.58  | 1.29 | 1.32  (secs for 10000 calls)
    512 |  0.67  | 2.03 | 1.98 |  0.81  | 2.09 | 2.04  (secs for 10000 calls)
   1024 |  0.10  | 0.36 | 0.34 |  0.14  | 0.36 | 0.33  (secs for 1000 calls)
   2048 |  0.18  | 0.69 | 0.70 |  0.25  | 0.72 | 0.73  (secs for 1000 calls)
   4096 |  0.19  | 1.13 | 1.14 |  0.32  | 1.20 | 1.21  (secs for 500 calls)
   8192 |  0.80  | 3.82 | 3.80 |  1.56  | 4.05 | 3.88  (secs for 500 calls)
.
 Multi-dimensional Fast Fourier Transform
========================================================
          |    real input      |    complex input
--------------------------------------------------------
     size |fftpack2| FFT | scipy |fftpack2| FFT | scipy
--------------------------------------------------------
  100x100 |  0.20  | 0.45 | 0.35 |  0.25  | 0.47 | 0.45  (secs for 100 calls)
 1000x100 |  0.27  | 0.47 | 0.38 |  0.28  | 0.43 | 0.43  (secs for 7 calls)
  256x256 |  0.61  | 0.46 | 0.49 |  0.63  | 0.47 | 0.45  (secs for 10 calls)
  512x512 |  0.77  | 0.60 | 0.65 |  0.80  | 0.66 | 0.62  (secs for 3 calls)
.
 Fast Fourier Transform (real data)
==================================
   size |fftpack2| FFT | scipy
----------------------------------
    100 |  0.30  | 0.36 | 0.37  (secs for 7000 calls)
   1000 |  0.22  | 0.26 | 0.23  (secs for 2000 calls)
    256 |  0.44  | 0.58 | 0.62  (secs for 10000 calls)
    512 |  0.57  | 0.79 | 0.84  (secs for 10000 calls)
   1024 |  0.09  | 0.12 | 0.12  (secs for 1000 calls)
   2048 |  0.13  | 0.19 | 0.21  (secs for 1000 calls)
   4096 |  0.13  | 0.20 | 0.19  (secs for 500 calls)
   8192 |  0.40  | 0.82 | 0.81  (secs for 500 calls)
.
 Inverse Fast Fourier Transform (real data)
==================================
   size |fftpack2| FFT | scipy
----------------------------------
    100 |  0.29  | 0.80 | 0.77  (secs for 7000 calls)
   1000 |  0.23  | 0.46 | 0.45  (secs for 2000 calls)
    256 |  0.45  | 1.23 | 1.22  (secs for 10000 calls)
    512 |  0.59  | 1.47 | 1.46  (secs for 10000 calls)
   1024 |  0.08  | 0.21 | 0.20  (secs for 1000 calls)
   2048 |  0.15  | 0.30 | 0.33  (secs for 1000 calls)
   4096 |  0.13  | 0.36 | 0.38  (secs for 500 calls)
   8192 |  0.40  | 1.28 | 1.50  (secs for 500 calls)
.
----------------------------------------------------------------------
Ran 24 tests in 298.302s

OK
[travis at travis fftpack2-0.1]$ python tests/test_helper.py 10
....
----------------------------------------------------------------------
Ran 4 tests in 0.013s

OK
[travis at travis fftpack2-0.1]$ python tests/test_pseudo_diffs.py 10
....................
 Shifting periodic functions
==============================
   size | optimized | naive
------------------------------
    100 |   0.07    |  0.44  (secs for 1500 calls)
   1000 |   0.05    |  0.50  (secs for 300 calls)
    256 |   0.08    |  0.76  (secs for 1500 calls)
    512 |   0.07    |  0.85  (secs for 1000 calls)
   1024 |   0.06    |  0.80  (secs for 500 calls)
   2048 |   0.05    |  0.74  (secs for 200 calls)
   4096 |   0.04    |  0.81  (secs for 100 calls)
   8192 |   0.11    |  0.90  (secs for 50 calls)
.
 Differentiation of periodic functions
=====================================
   size | convolve | naive
-------------------------------------
    100 |   0.14   |  0.59  (secs for 1500 calls)
   1000 |   0.05   |  0.65  (secs for 300 calls)
    256 |   0.09   |  0.92  (secs for 1500 calls)
    512 |   0.07   |  1.12  (secs for 1000 calls)
   1024 |   0.06   |  1.06  (secs for 500 calls)
   2048 |   0.04   |  0.93  (secs for 200 calls)
   4096 |   0.05   |  0.98  (secs for 100 calls)
   8192 |   0.08   |  1.10  (secs for 50 calls)
.
 Tilbert transform of periodic functions
=========================================
   size | optimized | naive
-----------------------------------------
    100 |   0.08    |  0.48  (secs for 1500 calls)
   1000 |   0.04    |  0.41  (secs for 300 calls)
    256 |   0.09    |  0.69  (secs for 1500 calls)
    512 |   0.06    |  0.70  (secs for 1000 calls)
   1024 |   0.06    |  0.67  (secs for 500 calls)
   2048 |   0.04    |  0.64  (secs for 200 calls)
   4096 |   0.04    |  0.66  (secs for 100 calls)
   8192 |   0.06    |  0.72  (secs for 50 calls)
.
 Hilbert transform of periodic functions
=========================================
   size | optimized | naive
-----------------------------------------
    100 |   0.07    |  0.42  (secs for 1500 calls)
   1000 |   0.04    |  0.36  (secs for 300 calls)
    256 |   0.08    |  0.61  (secs for 1500 calls)
    512 |   0.07    |  0.62  (secs for 1000 calls)
   1024 |   0.05    |  0.58  (secs for 500 calls)
   2048 |   0.03    |  0.53  (secs for 200 calls)
   4096 |   0.05    |  0.63  (secs for 100 calls)
   8192 |   0.06    |  0.72  (secs for 50 calls)
.
---------------------------------------------------------------------- Ran 24 tests in 62.302s OK From pearu at cens.ioc.ee Fri Oct 4 18:28:03 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 5 Oct 2002 01:28:03 +0300 (EEST) Subject: [SciPy-dev] fftpack2 for review In-Reply-To: <1033761729.14732.5.camel@travis.local.net> Message-ID: On 4 Oct 2002, Travis Oliphant wrote: > > I finally had some time to test fftpack2. It looks really great and I'm > happy to support including it. I'll attach my test results at the end > (with fftw and djbfft). > > My only comment right now, is that I don't like doing away with the word > fftshift. This is a common command in MATLAB and is quite familiar. I > don't necessarily mind freqshift, > > > > > Also freqshift puts the Nyquist component at the end of the result > > > while fftshift puts it at the beginning of the result. This is for > > > the consistency among various functions in the fftpack module. > > Is there a difference between freqshift and fftshift? If so, then I am > not satisfied. I took great pains to ensure that fftshift works exactly > the same way as in MATLAB. To be honest, I used the docs of scipy.fftpack.fft as a basic reference and followed that while implementing fftpack2. Until fftshift, everything was consistent. Then it turned out that fftshift uses a different convention than fft. Namely, according to scipy.fftpack.fft docs, the corresponding frequencies of the fft result are given as [ 0, 1, 2, 3, 4, -3, -2, -1] for n=8 case, for example. But fftshift assumed [ 0, 1, 2, 3, -4, -3, -2, -1] Both cases are equivalent except when applying the definition of fftshift, that is, how to order the frequencies of the fft result. 
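For concreteness, the two orderings can be generated with a short sketch. This is plain Python for illustration only; the function names are invented here and are not part of fftpack:

```python
def freqs_nyquist_positive(n):
    # Ordering per the scipy.fftpack.fft docs: the Nyquist bin is
    # counted as +n/2. For n=8: [0, 1, 2, 3, 4, -3, -2, -1]
    return list(range(n // 2 + 1)) + list(range(-(n // 2) + 1, 0))

def freqs_nyquist_negative(n):
    # Ordering assumed by fftshift (and MATLAB): the Nyquist bin is
    # counted as -n/2. For n=8: [0, 1, 2, 3, -4, -3, -2, -1]
    return list(range(n // 2)) + list(range(-(n // 2), 0))
```

Both assign the same fft output bins to the same magnitudes; they differ only in whether the single Nyquist bin of an even-length transform is labeled +n/2 or -n/2.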
Now, in fftpack2 I just kept being consistent (as everything was documented already) and as a result, the results of fftpack2.freqshift and fftshift (in scipy.fftpack and Matlab) had to differ by the location of the Nyquist mode (that's also one of the hidden reasons why I introduced a new name freqshift for fftshift; about the other reason, see below).

I guess it would not be a good idea to have both freqshift and fftshift at the same time (the latter behaving exactly as fftshift in MATLAB). Personally, I have no preference for either of them; it's just a matter of choosing a convention and sticking to it. But since you insist on the MATLAB convention, I'll change the docs in fftpack2: basically, the Nyquist mode will correspond to the smallest negative frequency instead of the largest positive one.

> It is important that the common fft operations present a single
> interface to the user. If we want to offer specialized functions that
> do not mess with the ordering but expect the user to know what they are
> doing, that is fine, as long as the default functions follow the basic
> pattern.

Ok.

> I think that the MATLAB convention should be followed here.

Ok. But I still think that "fftshift" is a misleading name, even if MATLAB uses it. Often people forget that FFT is just an efficient algorithm for performing the Fourier matrix multiplication with a vector, nothing more. And fftshift has nothing to do with this FFT algorithm.

I see the following solutions:
1) continue with a bad habit learned from Matlab and use fftshift
2) use a more appropriate name, I proposed freqshift (other suggestions are welcome), and maybe provide fftshift in MLab.py, for example.

On the other hand, I don't want some function name to make somebody unhappy. Right now I am just proposing this name change because it feels right to me and it is the right time to make a choice; later it would be too late. What do you think?
Pearu

From oliphant at ee.byu.edu Fri Oct 4 19:29:32 2002
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 4 Oct 2002 17:29:32 -0600 (MDT)
Subject: [SciPy-dev] fftpack2 for review
In-Reply-To:
Message-ID:

> To be honest, I used the docs of scipy.fftpack.fft as a basic reference
> and followed that while implementing fftpack2. Until fftshift, everything
> was consistent. Then it turned out that fftshift uses a different
> convention than fft. Namely, according to scipy.fftpack.fft docs, the
> corresponding frequencies of the fft result are given as
> [ 0, 1, 2, 3, 4, -3, -2, -1]
> for n=8 case, for example. But fftshift assumed
> [ 0, 1, 2, 3, -4, -3, -2, -1]

O.K., so this is the issue: how to treat the Nyquist value for an even-length fft.

> Both cases are equivalent except when applying the definition of fftshift,

Yes, this is true; the fft value for bin 4 is the same as for bin -4, so how you interpret it doesn't really matter. Except for the fact that a large group of people have been trained to calculate the FFT using MATLAB and will be used to using fftshift, where the convention is to interpret the Nyquist value as the value for the negative bin number (-4).

This is a subtle difference but could cause headaches for many people starting to use scipy who think they know how to shift the results of an FFT to get a plot based on their MATLAB experience. Because it really doesn't matter, I'm inclined to go with what MATLAB chose rather than have a new convention.

> that is, how to order the frequencies of the fft result.
> Now, in fftpack2 I just kept being consistent (as everything were
> documented already) and as a result, the results of fftpack2.freqshift and
> fftshift (in scipy.fftpack and Matlab) had to differ by the location
> of the Nyquist mode

I don't understand how you were consistent. What is documented already? Why is interpreting the Nyquist mode as coming from a positive frequency more consistent?
(that's also one of the hidden reasons why I
> introduced a new name freqshift for fftshift; about the other reason, see
> below).
>
> I guess it would not be a good idea to have both freqshift and fftshift at
> the same time (the latter behaving exactly as fftshift in MATLAB).

I suppose that is fine.

> Personally, I have no preference to either of them, it's just a
> matter of choosing a convention and sticking to it. But since you insist
> MATLAB convention, I'll change the docs in fftpack2: basically, the
> Nyquist mode will be corresponding to the smallest negative frequency
> instead of the largest positive one.

That is what I would recommend.

> Ok. But I still think that "fftshift" is a misleading name, even
> if MATLAB uses it. Often people forget that FFT is just an efficient
> algorithm for performing the Fourier matrix multiplication with a vector,
> nothing more. And fftshift has nothing to do with this FFT algorithm.
>

True, people do forget the difference between the DTFT and the FFT, and fftshift doesn't have anything to do with the FFT algorithm. But the reason for fftshift is the way the FFT algorithm computes the frequencies, leaving the positive frequencies in the first half and the negative frequencies in the last half. We usually don't draw graphs this way (or create ranges this way). So, I would submit that fftshift is still a good name because it is a shift made necessary by the use of the FFT.

But it doesn't really matter, and I'm not so opposed to freqshift either. It's a fine name too. We aren't copying all of MATLAB's names for every other function. I just lean towards the name fftshift because I've gotten so used to using it the many, many, many times I've computed Fourier transforms.

> I see the following solutions:
> 1) continue with a bad manner learned from Matlab and use fftshift

I don't agree that fftshift is so bad.
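Whatever the name, the operation under discussion is just a rotation of the result by half its length. A minimal list-based sketch of the MATLAB-convention behavior (illustrative only, not the scipy or fftpack2 implementation):

```python
def fftshift(x):
    # Swap the two halves so the zero-frequency term moves to the
    # centre. With the MATLAB convention, the Nyquist bin of an
    # even-length input ends up at the most-negative position.
    n = len(x)
    k = (n + 1) // 2  # ceil(n/2), so odd lengths split correctly
    return x[k:] + x[:k]

def ifftshift(x):
    # Inverse rotation; for even n it coincides with fftshift,
    # for odd n the split point differs by one.
    n = len(x)
    k = n // 2
    return x[k:] + x[:k]
```

Applied to the frequency labels [0, 1, 2, 3, -4, -3, -2, -1] this yields [-4, -3, -2, -1, 0, 1, 2, 3], i.e. a monotonically increasing frequency axis suitable for plotting.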
> 2) use a more appropriate name, I proposed freqshift (other suggestions > are welcome), and may be provide fftshift in MLab.py, for example. > > On the other hand, I don't want that some function name will make somebody > unhappy. Right now I am just proposing this name change because it feels > right to me and it is the right time to make a choice, later it would be > too late. What do you think? I'm not going to be sad either way. But, I do think we should stick with the convention of interpreting the Nyquist mode as a negative frequency. I'm very happy with the great job you've done in cleaning up and testing the fft package. --Travis O. P.S. I also have a command called fftfreq that generates the frequency values for each index (bin) assumed by the FFT algorithm. I noticed you did something similar with dftfreq. This is another point we need to iron out. From MAILER-DAEMON at postoffice.pacbell.net Sat Oct 5 00:07:52 2002 From: MAILER-DAEMON at postoffice.pacbell.net (Mail Delivery Subsystem) Date: Fri, 04 Oct 2002 21:07:52 -0700 (PDT) Subject: [SciPy-dev] Returned mail: User unknown Message-ID: <0H3H0001LQ542T@mta6.snfc21.pbi.net> The original message was received at Fri, 4 Oct 2002 21:07:44 0800 ----- The following addresses had permanent fatal errors ----- ----- Transcript of session follows ----- ... while talking to postoffice.pacbell.net: >>> RCPT To: <<< 550 ... User unknown 550 ... User unknown From paustin at eos.ubc.ca Mon Oct 7 21:53:49 2002 From: paustin at eos.ubc.ca (Philip Austin) Date: Mon, 7 Oct 2002 18:53:49 -0700 Subject: [SciPy-dev] libnumarray.h and NO_IMPORT_ARRAY In-Reply-To: References: Message-ID: <15778.15021.277882.364160@gull.eos.ubc.ca> Hi -- I'm not sure this is the right place for this question, please redirect me if there's a more appropriate mailing list: We're currently porting several of our boost-wrapped Numeric classes to numarray. 
The routines are spread across several cpp files, and **libnumarray_API is needed in each file. In Numeric, we invoke import_array() in the module file to initialize **PyArray_API, and define NO_IMPORT_ARRAY in the other files; this produces extern void **PyArray_API in those files, and everyone's happy. Currently libnumarray.h doesn't implement this -- is there another way to compile a multiple-file project which initializes the API just once? Thanks, Phil Austin From pearu at cens.ioc.ee Tue Oct 8 06:11:57 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 8 Oct 2002 13:11:57 +0300 (EEST) Subject: [SciPy-dev] attacking import failure from scipy source tree In-Reply-To: <15778.3197.339505.707367@montanaro.dyndns.org> Message-ID: On Mon, 7 Oct 2002, Skip Montanaro wrote: > > Travis> It sounds like you may be trying to import scipy from a python > Travis> session started in your build directory. Try switching to > Travis> another dir (like ~/ or /) and begin the python session and try > Travis> an import. I won't tell you how many times I have to force > Travis> myself to remember this each week. > > I get bitten by this occasionally as well. It's mildly irksome that I need > to install scipy to test it. Has anyone debugged this problem? Yes, I have also thought about this since I started to use scipy. Possible solutions: 1) If all scipy modules (at least those modules that contain extension modules) were in some subdirectory, say lib or src, of the scipy source tree, then the problem would be solved. But this would require extensive file moving in the scipy CVS repository. I am not sure if the scipy CVS server can handle this; limited disk space could be one obstacle. 2) scipy_base is the first import that will fail when importing scipy from its source tree. So, we could catch the fastumath import failure and give some additional information about why it failed.
This solution can be quickly applied, but it would still be nicer if the scipy tests could be run without installing scipy first. 3) Extend the local module testing hooks. The scipy modules that I have been working with can be built locally (meaning: inside the source directory) and also tested locally without the need to install scipy or the particular module. Currently, only scipy_test and scipy_distutils need to be installed for these hooks to work. But I think that this requirement can also be removed. For example, the linalg module can currently be tested as follows:

    cd scipy/linalg
    python setup_linalg.py build
    python tests/test_basic.py
    python tests/test_decomp.py
    python tests/test_blas.py

(the last three commands also accept an optional level argument) We could extend these hooks so that something like the following would work:

    cd scipy/
    python setup_scipy.py build
    python setup_scipy.py test [level]  # run all tests at the specified level, default level is 1
    python /tests/test_ [level]

On the other hand, I am not sure that the following should work by default:

    cd scipy/
    python
    >>> import scipy
    >>> scipy.test()

because of the possible complexity of the required hooks. Note that inserting, for example, build/lib.linux-i686-2.2/ into sys.path may not be enough. I tried that, but it seems that python picks up modules from the current directory (if they exist there) without looking at sys.path first. I also tried removing '' from sys.path but still "import scipy" tries to import ./scipy_base instead of build/lib.linux-i686-2.2/scipy_base. However, "import scipy" might work already if

    python setup.py build build_ext --inplace

is used for building scipy. But I am not sure that something will not go wrong with this (I haven't tried it yet). Pearu From steven.robbins at videotron.ca Tue Oct 8 09:23:10 2002 From: steven.robbins at videotron.ca (Steve M.
Robbins) Date: Tue, 08 Oct 2002 09:23:10 -0400 Subject: [SciPy-dev] attacking import failure from scipy source tree In-Reply-To: References: <15778.3197.339505.707367@montanaro.dyndns.org> Message-ID: <20021008132310.GB2956@nyongwa.montreal.qc.ca> On Tue, Oct 08, 2002 at 01:11:57PM +0300, Pearu Peterson wrote: > > On Mon, 7 Oct 2002, Skip Montanaro wrote: > > > > > Travis> It sounds like you may be trying to import scipy from a python > > Travis> session started in your build directory. Try switching to > > Travis> another dir (like ~/ or /) and begin the python session and try > > Travis> an import. I won't tell you how many times I have to force > > Travis> myself to remember this each week. > > > > I get bitten by this occasionally as well. It's mildly irksome that I need > > to install scipy to test it. Has anyone debugged this problem? > > Yes, I have also thought about this since I started to use scipy. > > Possible solutions: > > 1) If all scipy modules (at least those modules that contain extension > modules) would be in some subdirectory, say lib or src, of scipy source > tree, then the problem would be solved. But this would require extensive > file moving in scipy CVS repository. I am not sure if scipy CVS server can > handle this; limited disk space could be one obstacle. If source tree reorganization proves infeasible, could you fake it using a script to create appropriate symlinks before running the tests? -S From pearu at cens.ioc.ee Tue Oct 8 09:49:19 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 8 Oct 2002 16:49:19 +0300 (EEST) Subject: [SciPy-dev] attacking import failure from scipy source tree In-Reply-To: <20021008132310.GB2956@nyongwa.montreal.qc.ca> Message-ID: On Tue, 8 Oct 2002, Steve M. 
Robbins wrote: > On Tue, Oct 08, 2002 at 01:11:57PM +0300, Pearu Peterson wrote: > > > > On Mon, 7 Oct 2002, Skip Montanaro wrote: > > > > > > > > Travis> It sounds like you may be trying to import scipy from a python > > > Travis> session started in your build directory. Try switching to > > > Travis> another dir (like ~/ or /) and begin the python session and try > > > Travis> an import. I won't tell you how many times I have to force > > > Travis> myself to remember this each week. > > > > > > I get bitten by this occasionally as well. It's mildly irksome that I need > > > to install scipy to test it. Has anyone debugged this problem? > > > > Yes, I have also thought about this since I started to use scipy. > > > > Possible solutions: > > > > 1) If all scipy modules (at least those modules that contain extension > > modules) would be in some subdirectory, say lib or src, of scipy source > > tree, then the problem would be solved. But this would require extensive > > file moving in scipy CVS repository. I am not sure if scipy CVS server can > > handle this; limited disk space could be one obstacle. > > If source tree reorganization proves infeasible, could you fake it > using a script to create appropriate symlinks before running the > tests? My initial guess would be no because Windows does not have symlinks. But is this completely true considering cygwin and mingw? And what about Mac and symlinks? Anyway, using symlinks sounds like a quick and dirty solution compared to already available solutions that are based on playing with distutils command line flags. Though, it seems that people hardly use them. 
Pearu From skip at pobox.com Tue Oct 8 11:02:37 2002 From: skip at pobox.com (Skip Montanaro) Date: Tue, 8 Oct 2002 10:02:37 -0500 Subject: [SciPy-dev] attacking import failure from scipy source tree In-Reply-To: References: <20021008132310.GB2956@nyongwa.montreal.qc.ca> Message-ID: <15778.62349.431907.71148@montanaro.dyndns.org> Pearu> And what about Mac and symlinks? MacOS X certainly has symlinks. I know you can create aliases on MacOS 9 and earlier. I don't know what they look like to programs though. Skip From skip at pobox.com Tue Oct 8 16:18:18 2002 From: skip at pobox.com (Skip Montanaro) Date: Tue, 8 Oct 2002 15:18:18 -0500 Subject: [SciPy-dev] segfault during scipy.test(1) - Solaris 8, Sun compilers Message-ID: <15779.15754.954012.517864@montanaro.dyndns.org> Python is crashing on me during scipy.test(1) (Solaris 8, Sun compilers). The start of the traceback in dbx looks like: [1] copy_ND_array(in = 0x7771d8, out = 0x777a70), line 780 in "fortranobject.c" [2] array_from_pyobj(type_num = 9, dims = 0xffbeaa3c, rank = 2, intent = 97, obj = 0x7771d8), line 556 in "fortranobject.c" [3] f2py_rout__flinalg_ddet_r(capi_self = 0x402858, capi_args = 0x85dcf8, capi_keywds = 0x85c790, f2py_func = 0xfd6bf320 = &ddet_r_()), line 298 in "_flinalgmodule.c" [4] fortran_call(fp = 0x402858, arg = 0x85dcf8, kw = 0x85c790), line 248 in "fortranobject.c" [5] PyObject_Call(func = 0x402858, arg = 0x85dcf8, kw = 0x85c790), line 1684 in "abstract.c" [6] do_call(func = 0x402858, pp_stack = 0xffbeaf40, na = 1, nk = 1), line 3262 in "ceval.c" [7] eval_frame(f = 0x386568), line 2028 in "ceval.c" [8] PyEval_EvalCodeEx(co = 0x7cbcc8, globals = 0x7cf358, locals = (nil), args = 0x3093b4, argcount = 1, kws = 0x3093b8, kwcount = 0, defs = 0x7cccbc, defcount = 1, closure = (nil)), line 2585 in "ceval.c" [9] fast_function(func = 0x7d9c50, pp_stack = 0xffbeb4a0, n = 1, na = 1, nk = 0), line 3164 in "ceval.c" Line 780 of F2PY's fortranobject.c is PyArray_VectorUnaryFunc *cast = 
in->descr->cast[out->descr->type_num]; The value in out->descr->type_num (and in->descr->type_num for that matter) is -18789104, which is clearly out of range for an element of PyArray_TYPES. (Why doesn't Numeric define it as such in its PyArray_Descr struct?) Does this seem familiar to anyone? Skip From pearu at cens.ioc.ee Tue Oct 8 17:49:14 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 9 Oct 2002 00:49:14 +0300 (EEST) Subject: [SciPy-dev] segfault during scipy.test(1) - Solaris 8, Sun compilers In-Reply-To: <15779.15754.954012.517864@montanaro.dyndns.org> Message-ID: On Tue, 8 Oct 2002, Skip Montanaro wrote: > Python is crashing on me during scipy.test(1) (Solaris 8, Sun compilers). > The start of the traceback in dbx looks like: > > [1] copy_ND_array(in = 0x7771d8, out = 0x777a70), line 780 in "fortranobject.c" > [2] array_from_pyobj(type_num = 9, dims = 0xffbeaa3c, rank = 2, intent = 97, obj = 0x7771d8), line 556 in "fortranobject.c" > [3] f2py_rout__flinalg_ddet_r(capi_self = 0x402858, capi_args = 0x85dcf8, capi_keywds = 0x85c790, f2py_func = 0xfd6bf320 = &ddet_r_()), line 298 in "_flinalgmodule.c" > [4] fortran_call(fp = 0x402858, arg = 0x85dcf8, kw = 0x85c790), line 248 in "fortranobject.c" > [5] PyObject_Call(func = 0x402858, arg = 0x85dcf8, kw = 0x85c790), line 1684 in "abstract.c" > [6] do_call(func = 0x402858, pp_stack = 0xffbeaf40, na = 1, nk = 1), line 3262 in "ceval.c" > [7] eval_frame(f = 0x386568), line 2028 in "ceval.c" > [8] PyEval_EvalCodeEx(co = 0x7cbcc8, globals = 0x7cf358, locals = (nil), args = 0x3093b4, argcount = 1, kws = 0x3093b8, kwcount = 0, defs = 0x7cccbc, defcount = 1, closure = (nil)), line 2585 in "ceval.c" > [9] fast_function(func = 0x7d9c50, pp_stack = 0xffbeb4a0, n = 1, na = 1, nk = 0), line 3164 in "ceval.c" > > Line 780 of F2PY's fortranobject.c is > > PyArray_VectorUnaryFunc *cast = in->descr->cast[out->descr->type_num]; > > The value in out->descr->type_num (and in->descr->type_num for that matter) > is 
-18789104, which is clearly out of range for an element of PyArray_TYPES. > (Why doesn't Numeric define it as such in its PyArray_Descr struct?) What do you mean? type_num is defined in the PyArray_Descr struct as an int. Hmm, looking at the definition of PyArray_TYPES and at how the length of the PyArray_Descr.cast array is determined makes me wonder... but it should be correct, because people use Numeric on Sun, I suppose. > Does this seem familiar to anyone? No. But I am wondering if the other `in' and `out' attributes make sense. To find out, uncomment the lines starting at #782 and also the dump_attrs() function definition starting at line #455 in fortranobject.c. You may also need to initialize `cast' after line #788. What is the output? Is it reasonable? I ask this in order to see if there is some memory corruption going on or if descr->type_num is just not initialized correctly for some reason. Pearu PS: I assume that the f2py tests were all ok on your platform. Is that correct? From p.berkes at biologie.hu-berlin.de Wed Oct 9 08:04:56 2002 From: p.berkes at biologie.hu-berlin.de (Pietro Berkes) Date: Wed, 9 Oct 2002 14:04:56 +0200 Subject: [SciPy-dev] tiny bug in read_array Message-ID: <02100914045601.29127@sherrington.biologie.hu-berlin.de> Hi! I found a microscopic bug in the function read_array in scipy.io, which causes a ValueError when loading arrays bigger than rowsize. If I understood what's going on, the solution seems to be simple: line 339 in scipy/io/array_import.py:

    outarr[k].resize((outarr[k].shape[0] + rowsize,colsize))

should be:

    outarr[k].resize((outarr[k].shape[0] + rowsize,colsize[k]))

Greetings, Pietro. From p.berkes at biologie.hu-berlin.de Wed Oct 9 12:27:03 2002 From: p.berkes at biologie.hu-berlin.de (Pietro Berkes) Date: Wed, 9 Oct 2002 18:27:03 +0200 Subject: [SciPy-dev] problem with write_array Message-ID: <02100918270303.29127@sherrington.biologie.hu-berlin.de> Hi!
There seems to be a problem with write_array in scipy.io, since it cannot write 1D or 0D arrays, for example:

>>> import scipy as s
>>> s.__version__
'0.2.0_alpha_133.4174'
>>> a = s.zeros((10,))
>>> a
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
>>> fid = open('file.dat','w')
>>> s.io.write_array(fid,a)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/home/scipy/local/lib/python2.2/site-packages/scipy/io/array_import.py", line 384, in write_array
    array_output=0)
  File "/home/scipy/local/lib/python2.2/site-packages/Numeric/ArrayPrinter.py", line 17, in array2string
    if len(a.shape) == 0:
AttributeError: 'int' object has no attribute 'shape'

The problem in the 1D case is that Numeric.array2string doesn't accept scalar quantities as input, while in the 0D case there is an undefined variable. Here is my suggestion (it should replace the function write_array in scipy/io/array_import.py):

def write_array(fileobject, arr, separator=" ", linesep='\n',
                precision=5, suppress_small=0):
    """Write a rank-2 or less array to file represented by fileobject.

    Inputs:

    fileobject -- An open file object or a string to a valid filename.
    arr -- The array to write.
    separator -- separator to write between elements of the array.
    linesep -- separator to write between rows of array
    precision -- number of digits after the decimal place to write.
    suppress_small -- non-zero to suppress small digits and not write
        them in scientific notation.

    Outputs:

    file -- The open file.
    """
    file = get_open_file(fileobject, mode='wa')
    rank = len(arr.shape)
    if rank > 2:
        raise ValueError, "Can only write up to 2-D arrays."
    if rank == 0:
        h = 1
        rarr = Numeric.reshape(arr, (1,1))
    elif rank == 1:
        h = arr.shape[0]
        rarr = Numeric.reshape(arr, (h,1))
    else:
        h = arr.shape[0]
    for k in range(h):
        astr = Numeric.array2string(rarr[k], max_line_width=sys.maxint,
                                    precision=precision,
                                    suppress_small=suppress_small,
                                    separator=' '+separator,
                                    array_output=0)
        astr = astr[1:-1]
        file.write(astr)
        file.write(linesep)
    return file

Greetings, Pietro. From p.berkes at biologie.hu-berlin.de Wed Oct 9 12:34:41 2002 From: p.berkes at biologie.hu-berlin.de (Pietro Berkes) Date: Wed, 9 Oct 2002 18:34:41 +0200 Subject: [SciPy-dev] problem with write_array (2) Message-ID: <02100918344104.29127@sherrington.biologie.hu-berlin.de> One line has been lost during cut&paste, sorry. The function is:

def write_array(fileobject, arr, separator=" ", linesep='\n',
                precision=5, suppress_small=0):
    """Write a rank-2 or less array to file represented by fileobject.

    Inputs:

    fileobject -- An open file object or a string to a valid filename.
    arr -- The array to write.
    separator -- separator to write between elements of the array.
    linesep -- separator to write between rows of array
    precision -- number of digits after the decimal place to write.
    suppress_small -- non-zero to suppress small digits and not write
        them in scientific notation.

    Outputs:

    file -- The open file.
    """
    file = get_open_file(fileobject, mode='wa')
    rank = len(arr.shape)
    if rank > 2:
        raise ValueError, "Can only write up to 2-D arrays."
    if rank == 0:
        h = 1
        rarr = Numeric.reshape(arr, (1,1))
    elif rank == 1:
        h = arr.shape[0]
        rarr = Numeric.reshape(arr, (h,1))
    else:
        h = arr.shape[0]
        rarr = arr
    for k in range(h):
        astr = Numeric.array2string(rarr[k], max_line_width=sys.maxint,
                                    precision=precision,
                                    suppress_small=suppress_small,
                                    separator=' '+separator,
                                    array_output=0)
        astr = astr[1:-1]
        file.write(astr)
        file.write(linesep)
    return file

From skip at pobox.com Wed Oct 9 12:53:58 2002 From: skip at pobox.com (Skip Montanaro) Date: Wed, 9 Oct 2002 11:53:58 -0500 Subject: [SciPy-dev] segfault during scipy.test(1) - Solaris 8, Sun compilers In-Reply-To: References: <15779.15754.954012.517864@montanaro.dyndns.org> Message-ID: <15780.24358.378864.268853@montanaro.dyndns.org> >>>>> "Pearu" == Pearu Peterson writes: Pearu> On Tue, 8 Oct 2002, Skip Montanaro wrote:
>> Python is crashing on me during scipy.test(1) (Solaris 8, Sun compilers).
>> The start of the traceback in dbx looks like:
>>
>> [1] copy_ND_array(in = 0x7771d8, out = 0x777a70), line 780 in "fortranobject.c"
>> [2] array_from_pyobj(type_num = 9, dims = 0xffbeaa3c, rank = 2, intent = 97, obj = 0x7771d8), line 556 in "fortranobject.c"
>> [3] f2py_rout__flinalg_ddet_r(capi_self = 0x402858, capi_args = 0x85dcf8, capi_keywds = 0x85c790, f2py_func = 0xfd6bf320 = &ddet_r_()), line 298 in "_flinalgmodule.c"
>> [4] fortran_call(fp = 0x402858, arg = 0x85dcf8, kw = 0x85c790), line 248 in "fortranobject.c"
>> [5] PyObject_Call(func = 0x402858, arg = 0x85dcf8, kw = 0x85c790), line 1684 in "abstract.c"
>> [6] do_call(func = 0x402858, pp_stack = 0xffbeaf40, na = 1, nk = 1), line 3262 in "ceval.c"
>> [7] eval_frame(f = 0x386568), line 2028 in "ceval.c"
>> [8] PyEval_EvalCodeEx(co = 0x7cbcc8, globals = 0x7cf358, locals = (nil), args = 0x3093b4, argcount = 1, kws = 0x3093b8, kwcount = 0, defs = 0x7cccbc, defcount = 1, closure = (nil)), line 2585 in "ceval.c"
>> [9] fast_function(func = 0x7d9c50, pp_stack = 0xffbeb4a0, n = 1, na = 1, nk = 0), line 3164 in "ceval.c"
>>
>> Line 780 of F2PY's fortranobject.c is >> >> PyArray_VectorUnaryFunc *cast = in->descr->cast[out->descr->type_num]; >> >> The value in out->descr->type_num (and in->descr->type_num for that >> matter) is -18789104, which is clearly out of range for an element of >> PyArray_TYPES. (Why doesn't Numeric define it as such in its >> PyArray_Descr struct?) Pearu> What do you mean? type_num is defined in PyArray_Descr struct as Pearu> an int. PyArray_TYPES is an enum. It appears to me from the tests in copy_ND_array, that the type_num field is being treated as a slot which holds values from that enum. I was just wondering why the Numeric folks didn't define the type_num field to be of type 'enum PyArray_TYPES'. It's hardly an esoteric corner of the C language that needs to be avoided for portability. >> Does this seem familiar to anyone? Pearu> No. But I am wondering if other `in' and `out' attributes make Pearu> sense? *in and *out look okay: *in = { ob_refcnt = 5 ob_type = 0xfee39ad8 data = 0x857dc8 "?\xed\xb2k\xc5\xd8\xae^L?\xc7\xbav=k\xee ?\xe3\xfc\xbd\xc7\xee6^D?\xd6\xa2v\xd8\x93\xef\xc0?\xea^P\xf9\x90M^\xc4?\xe4Q]|\xf8\xfc\xfa?\xbfuK\xf9Y^N\xc0?\xe7.?\xa23\xd5\xc8?\xe9L\x94\xc9;\xd8\xac?\xeb^[^_\xa3^T\x91,?\xe1P\x84\xc2\xebR"?\xe8\x84Uo\xc5\x81\xb6?\xd4^\\xc3^U^F\xd9 ?\xeeFN\xffI\xc8\xe2?\xef^A\x92\xcb\xfat\x83?\xebw\xe1\xa8\xa1]^?\xe3\x92" nd = 2 dimensions = 0x782738 strides = 0x782768 base = (nil) descr = 0xfee39f90 flags = 15 } (dbx) print *out *out = { ob_refcnt = 1 ob_type = 0xfee39ad8 data = 0x858a58 "" nd = 2 dimensions = 0x76d6c8 strides = 0x782788 base = (nil) descr = 0xfee39f90 flags = 15 } but the contents of their descr field looks bogus: *out->descr = { cast = (0xfee14410 = &`_numpy.so`arrayobject.c`DOUBLE_to_CHAR(double *ip, int ipstep, char *op, int opstep, int n), 0xfee144c0 = &`_numpy.so`arrayobject.c`DOUBLE_to_UBYTE(double *ip, int ipstep, unsigned char *op, int opstep, int n), 0xfee14570 = &`_numpy.so`arrayobject.c`DOUBLE_to_SBYTE(double *ip, 
int ipstep, signed char *op, int opstep, int n), 0xfee14620 = &`_numpy.so`arrayobject.c`DOUBLE_to_SHORT(double *ip, int ipstep, short *op, int opstep, int n), 0xfee146d0 = &`_numpy.so`arrayobject.c`DOUBLE_to_USHORT(double *ip, int ipstep, unsigned short *op, int opstep, int n), 0xfee14780 = &`_numpy.so`arrayobject.c`DOUBLE_to_INT(double *ip, int ipstep, int *op, int opstep, int n), 0xfee14830 = &`_numpy.so`arrayobject.c`DOUBLE_to_UINT(double *ip, int ipstep, unsigned int *op, int opstep, int n), 0xfee148e0 = &`_numpy.so`arrayobject.c`DOUBLE_to_LONG(double *ip, int ipstep, long *op, int opstep, int n), 0xfee14990 = &`_numpy.so`arrayobject.c`DOUBLE_to_FLOAT(double *ip, int ipstep, float *op, int opstep, int n), 0xfee14a38 = &`_numpy.so`arrayobject.c`DOUBLE_to_DOUBLE(double *ip, int ipstep, double *op, int opstep, int n), 0xfee14ae0 = &`_numpy.so`arrayobject.c`DOUBLE_to_CFLOAT(double *ip, int ipstep, float *op, int opstep, int n)) getitem = 0xfee14ba0 = &`_numpy.so`arrayobject.c`DOUBLE_to_CDOUBLE(double *ip, int ipstep, double *op, int opstep, int n) setitem = 0xfee14c60 = &`_numpy.so`arrayobject.c`DOUBLE_to_OBJECT(double *ip, int ipstep, PyObject **op, int opstep, int n) type_num = -18789104 elsize = -18789024 one = 0x9 "" zero = 0x8 "" type = '\0' } Evaluating -18789104 as a hex value yields something which looks strangely like a pointer: 0xfee14d10. I wonder if the cast field got initialized with an array that was too long for the size it was declared (PyArray_NTYPES)? Pearu> To find out, uncomment lines starting at #782 and also Pearu> dump_attrs() function definition starting at line #455 in Pearu> fortranobject.c. You may also need to initialize `cast' after the Pearu> line #788. What is the output? Is it reasonable? I ask this in Pearu> order to see if there is some memory corruption going on or if Pearu> just descr->type_num is not initialized correctly for some Pearu> reasons. This doesn't seem like it will be fruitful. 
The various printf and dump_attrs lines are already commented out, so moving the cast variable initialization past them won't help. Pearu> PS: I assume that f2py tests were all ok on your platform. Is Pearu> that correct? I don't know. I'm just building and installing f2py as part of a larger build of SciPy. I don't even know how to run f2py's tests independently. Skip From oliphant at ee.byu.edu Wed Oct 9 14:15:33 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 9 Oct 2002 12:15:33 -0600 (MDT) Subject: [SciPy-dev] tiny bug in read_array In-Reply-To: <02100914045601.29127@sherrington.biologie.hu-berlin.de> Message-ID: > > Hi! I found a microscopic bug in the function read_array in scipy.io, which > causes a ValueError when loading array bigger than rowsize. > If I understood what's going on, the solution seems to be simple: > > line 339 in scipy/io/array_import.py: > > outarr[k].resize((outarr[k].shape[0] + rowsize,colsize)) > > should be: > > outarr[k].resize((outarr[k].shape[0] + rowsize,colsize[k])) > Thanks. I do believe you are right. 
-Travis From pearu at cens.ioc.ee Wed Oct 9 15:54:55 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 9 Oct 2002 22:54:55 +0300 (EEST) Subject: [SciPy-dev] segfault during scipy.test(1) - Solaris 8, Sun compilers In-Reply-To: <15780.24358.378864.268853@montanaro.dyndns.org> Message-ID: On Wed, 9 Oct 2002, Skip Montanaro wrote: > but the contents of their descr field looks bogus: > > *out->descr = { > cast = (0xfee14410 = &`_numpy.so`arrayobject.c`DOUBLE_to_CHAR(double *ip, int ipstep, char *op, int opstep, int n), 0xfee144c0 = &`_numpy.so`arrayobject.c`DOUBLE_to_UBYTE(double *ip, int ipstep, unsigned char *op, int opstep, int n), 0xfee14570 = &`_numpy.so`arrayobject.c`DOUBLE_to_SBYTE(double *ip, int ipstep, signed char *op, int opstep, int n), 0xfee14620 = &`_numpy.so`arrayobject.c`DOUBLE_to_SHORT(double *ip, int ipstep, short *op, int opstep, int n), 0xfee146d0 = &`_numpy.so`arrayobject.c`DOUBLE_to_USHORT(double *ip, int ipstep, unsigned short *op, int opstep, int n), 0xfee14780 = &`_numpy.so`arrayobject.c`DOUBLE_to_INT(double *ip, int ipstep, int *op, int opstep, int n), 0xfee14830 = &`_numpy.so`arrayobject.c`DOUBLE_to_UINT(double *ip, int ipstep, unsigned int *op, int opstep, int n), 0xfee148e0 = &`_numpy.so`arrayobject.c`DOUBLE_to_LONG(double *ip, int ipstep, long *op, int opstep, int n), 0xfee14990 = &`_numpy.so`arrayobject.c`DOUBLE_to_FLOAT(double *ip, int ipstep, float *op, int opstep, int n), 0xfee14a38 = &`_numpy.so`arrayobject.c`DOUBLE_to_DOUBLE(double *ip, int ipstep, double *op, int opstep, int n), 0xfee14ae0 = &`_numpy.so`arrayobject.c`DOUBLE_to_CFLOAT(double *ip, int ipstep, float *op, int opstep, int n)) > getitem = 0xfee14ba0 = &`_numpy.so`arrayobject.c`DOUBLE_to_CDOUBLE(double *ip, int ipstep, double *op, int opstep, int n) > setitem = 0xfee14c60 = &`_numpy.so`arrayobject.c`DOUBLE_to_OBJECT(double *ip, int ipstep, PyObject **op, int opstep, int n) > type_num = -18789104 > elsize = -18789024 > one = 0x9 "" > zero = 0x8 "" > type = '\0' > 
} > > Evaluating -18789104 as a hex value yields something which looks strangely > like a pointer: 0xfee14d10. I wonder if the cast field got initialized with > an array that was too long for the size it was declared (PyArray_NTYPES)? I think you have different Numeric versions or old Numeric header files lying in your system. Note that in Numeric-22.0 the size of PyArray_TYPES is 15 while in Numeric-21.0 it is 13. That may explain the two additional pointer values that you see in type_num and elsize above. I suspect that when you upgraded Numeric, the new header files did not get installed, possibly because of the well-known distutils bug we have seen earlier... > Pearu> PS: I assume that f2py tests were all ok on your platform. Is > Pearu> that correct? > > I don't know. I'm just building and installing f2py as part of a larger > build of SciPy. I don't even know how to run f2py's tests independently. Once I explained it to you... Pearu From skip at pobox.com Wed Oct 9 16:29:40 2002 From: skip at pobox.com (Skip Montanaro) Date: Wed, 9 Oct 2002 15:29:40 -0500 Subject: [SciPy-dev] segfault during scipy.test(1) - Solaris 8, Sun compilers In-Reply-To: References: <15780.24358.378864.268853@montanaro.dyndns.org> Message-ID: <15780.37300.211610.397808@montanaro.dyndns.org> Pearu> I think you have different Numeric versions or old Numeric header Pearu> files lying in your system. Note that in Numeric-22.0 the size Pearu> of PyArray_TYPES is 15 while in Numeric-21.0 it is 13. That may Pearu> explain the two additional pointer values that you see in Pearu> type_num and elsize above. Pearu> I suspect that when you upgraded Numeric, the new header files Pearu> did not get installed, possibly because of the well-known Pearu> distutils bug we have seen earlier... Ah, that may be. I am building multiple combinations of Scipy, Numeric, and f2py in rotation. I had been simply wiping out the directory tree before each run. I will have to go back to that for the time being.
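The stale-header failure mode diagnosed above can be illustrated with a small sketch (the field layouts below are hypothetical stand-ins, not the real Numeric PyArray_Descr): when the cast[] table at the front of the struct grows between releases, code compiled against the old header reads the trailing int fields at the old offsets, so pieces of a function pointer show up where type_num should be.

```python
# Sketch of a binary layout mismatch (hypothetical layouts, not the
# real Numeric headers): the cast[] table at the front of the struct
# grows between versions, so the int fields that follow it move, and
# old code reads pointer bytes where type_num used to be.
import struct

PTR = "Q"  # pretend pointers are 8 bytes

# Library built from "new" headers: cast[] has 4 entries, then the ints.
new_descr = struct.pack("<4" + PTR + "2i",
                        0xfee14410, 0xfee144c0, 0xfee14d10, 0xfee14e20,
                        9, 8)  # type_num = 9, elsize = 8

# An extension compiled against "old" headers expects only 2 cast entries,
# so it reads the two ints at the wrong offsets.
stale_fields = struct.unpack_from("<2" + PTR + "2i", new_descr)
type_num, elsize = stale_fields[2], stale_fields[3]

# The "ints" are really the low and high halves of a cast[] pointer.
print(type_num, elsize)  # garbage values, not 9 and 8
```

Wiping the install tree before each build, as Skip describes, avoids exactly this kind of mixed-header state.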
Pearu> PS: I assume that f2py tests were all ok on your platform. Is Pearu> that correct? >> >> I don't know. I'm just building and installing f2py as part of a >> larger build of SciPy. I don't even know how to run f2py's tests >> independently. Pearu> Once I explained it to you... My apologies. It must have reached escape velocity at some point and left my neural atmosphere... Skip From oliphant at ee.byu.edu Thu Oct 10 14:39:10 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Oct 2002 12:39:10 -0600 (MDT) Subject: [SciPy-dev] Building on HP Message-ID: I've almost got Scipy working on an HP running HP-UX B.11.11 U 9000/785. I am having trouble linking. On import I get errors like:

    /usr/lib/dld.sl: Unresolved symbol: FTN_DTOI$ (code) from /grad1/users/oliphant/lib/python2.2/site-packages/scipy/special/cephes.sl

I believe this has to do with Fortran code not being linked in correctly. But, I'm not sure which library, and I'm not sure how to get the setup.py script to link correctly against this library. Any ideas out there on either question? Has anyone seen the same issue? -Travis -- Travis Oliphant Assistant Professor 459 CB Electrical and Computer Engineering Brigham Young University Provo, UT 84602 Tel: (801) 422-3108 oliphant.travis at ieee.org From pearu at cens.ioc.ee Thu Oct 10 17:39:12 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 11 Oct 2002 00:39:12 +0300 (EEST) Subject: [SciPy-dev] Building on HP In-Reply-To: Message-ID: On Thu, 10 Oct 2002, Travis Oliphant wrote: > > I've almost got Scipy working on an HP running HP-UX B.11.11 U 9000/785 > > I am having trouble linking. On import I get errors like: > > /usr/lib/dld.sl: Unresolved symbol: FTN_DTOI$ (code) from > /grad1/users/oliphant/lib/python2.2/site-packages/scipy/special/cephes.sl > > I believe this has to do with Fortran code not being linked in correctly.
> But, I'm not sure which library, and I'm not sure how to get the setup.py > script to link correctly against this library. > > Any ideas out there to either question. Anyone seen the same issue? Try -lU77 -lf. I picked it up from f2py2e/buildmakefile.py, which used to work for an HP-UX f77 compiler a few years ago. Also, you should check the compiler manuals, which usually describe which libraries are needed and when. HTH, Pearu From oliphant.travis at ieee.org Fri Oct 11 12:37:42 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 11 Oct 2002 10:37:42 -0600 Subject: [SciPy-dev] Weave checkins In-Reply-To: <20021011064847.DA6EB3EACE@www.scipy.com> References: <20021011064847.DA6EB3EACE@www.scipy.com> Message-ID: <1034354263.3767.5.camel@travis.local.net> A recent build of SciPy with the latest weave checkins now gives me 53 weave errors on a level 10 test where gcc is failing on Linux Mandrake 8.2. Is this happening to anyone else, or do I have a bad build caused perhaps by not doing a full cleanup? At least it's not segfaulting, which has happened in the past. I can give more details, but I wanted to find out whether these problems exist elsewhere first. -Travis O. From pearu at cens.ioc.ee Fri Oct 11 13:18:00 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 11 Oct 2002 20:18:00 +0300 (EEST) Subject: [SciPy-dev] Weave checkins In-Reply-To: <1034354263.3767.5.camel@travis.local.net> Message-ID: On 11 Oct 2002, Travis Oliphant wrote: > A recent build of SciPy with the latest weave checkins now gives me > 53 weave errors on a level 10 test where gcc is failing on Linux > Mandrake 8.2 > > Is this happening to anyone else or do I have a bad build caused perhaps > by not doing a full cleanup. At least it's not segfaulting which has > happened in the past. With Python 2.2.1, gcc 2.95.4 on Debian Woody I get 0 errors from 184 tests, though there are some strange (?) warning messages (see the first part at the end of this message).
With Python 2.2.1, gcc 3.1.1 on Debian Woody I get 30 errors from 184 tests. There are numerous compiler errors, one error message is given at the end of this message. I'll look forward when Eric installes gcc >=3.1 ;-) Pearu --------------------------- First part ----------------------------- python: 2.99997901917 ......................................................../home/users/pearu/.python22_compiled/13322/sc_a6df888dc732b38ae7a4c5fa1063b4251.cpp: In function `struct PyObject * compiled_func(PyObject *, PyObject *)': /home/users/pearu/.python22_compiled/13322/sc_a6df888dc732b38ae7a4c5fa1063b4251.cpp:645: no match for `string & < int' /home/users/pearu/.python22_compiled/13322/sc_a6df888dc732b38ae7a4c5fa1063b4251.cpp:649: no match for `string & + int' /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/std/fcomplex.h:60: candidates are: class complex operator +(const complex &, float) /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/std/fcomplex.h:62: class complex operator +(float, const complex &) /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/std/dcomplex.h:60: class complex operator +(const complex &, double) /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/std/dcomplex.h:62: class complex operator +(double, const complex &) /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/std/ldcomplex.h:60: class complex operator +(const complex &, long double) /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/std/ldcomplex.h:62: class complex operator +(long double, const complex &) .warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations .warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. 
Trying default locations ............................................................................................... ---------------------------------------------------------------------- Ran 184 tests in 450.117s OK ------------------ end of first part --------------------- --------------------- Second part ------------------------ E..............................In file included from /home/users/pearu/.python22_compiled/16853/sc_a6df888dc732b38ae7a4c5fa1063b4250.cpp:9: /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:683: declaration does not declare anything /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:683: parse error before `!' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:690: non-member function `py::object py::type()' cannot have `const' method qualifier /usr/local/lib/python2.2/site-packages/weave/scxx/object.h: In function `py::object py::type()': /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:691: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:691: (Each undeclared identifier is reported only once for each function it appears in.) 
/usr/local/lib/python2.2/site-packages/weave/scxx/object.h:694: `lose_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/object.h: At global scope: /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:703: non-member function `int py::size()' cannot have `const' method qualifier /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:709: non-member function `int py::len()' cannot have `const' method qualifier /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:712: non-member function `int py::length()' cannot have `const' method qualifier /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:722: virtual outside class declaration /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:733: syntax error before `operator' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:734: syntax error before `operator' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:735: syntax error before `operator' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:736: syntax error before `operator' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:737: syntax error before `operator' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:738: syntax error before `operator' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h: In function `PyObject* py::disown()': /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:746: `_own' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/object.h: At global scope: /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:767: syntax error before `:' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:770: aggregate ` object _key' has incomplete type and cannot be defined /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:771: parse error before `public' /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:774: destructors must be member functions 
/usr/local/lib/python2.2/site-packages/weave/scxx/object.h:774: virtual outside class declaration /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:776: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:778: syntax error before `.' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:781: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:785: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:789: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:793: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:797: syntax error before `&' token In file included from /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:16, from /home/users/pearu/.python22_compiled/16853/sc_a6df888dc732b38ae7a4c5fa1063b4250.cpp:10: /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:27: base class ` object' has incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In constructor `py::sequence::sequence()': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:32: type `struct object' is not a direct base of `py::sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In copy constructor `py::sequence::sequence(const py::sequence&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:33: type `struct object' is not a direct base of `py::sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In constructor `py::sequence::sequence(PyObject*)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:34: type `struct object' is not a direct base of `py::sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `virtual py::sequence& py::sequence::operator=(const py::sequence&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:47: `grab_ref' undeclared 
(first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `py::sequence& py::sequence::operator=(const object&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:51: `grab_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:785: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:789: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:793: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/object.h:797: syntax error before `&' token In file included from /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:16, from /home/users/pearu/.python22_compiled/16853/sc_a6df888dc732b38ae7a4c5fa1063b4250.cpp:10: /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:27: base class ` object' has incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In constructor `py::sequence::sequence()': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:32: type `struct object' is not a direct base of `py::sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In copy constructor `py::sequence::sequence(const py::sequence&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:33: type `struct object' is not a direct base of `py::sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In constructor `py::sequence::sequence(PyObject*)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:34: type `struct object' is not a direct base of `py::sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `virtual py::sequence& py::sequence::operator=(const py::sequence&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:47: `grab_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member 
function `py::sequence& py::sequence::operator=(const object&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:51: `grab_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `virtual void py::sequence::_violentTypeCheck()': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:60: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:61: `grab_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:62: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `py::sequence py::sequence::operator+(const py::sequence&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:70: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:72: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:73: `lose_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::count(const object&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:80: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:82: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::count(int) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:86: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::count(double) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:90: variable ` object val' has initializer but incomplete type 
/usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::count(char*) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:94: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::count(std::string&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:98: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `virtual void py::sequence::set_item(int, object&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:107: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:109: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `object py::sequence::operator[](int)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:116: return type ` struct object' is incomplete /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:117: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:127: `lose_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `py::sequence py::sequence::slice(int, int) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:135: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:137: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:138: `lose_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `bool py::sequence::in(const object&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:146: `_obj' undeclared (first use this 
function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:148: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `bool py::sequence::in(int)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:152: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `bool py::sequence::in(double)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:156: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `bool py::sequence::in(const char*)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:160: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `bool py::sequence::in(std::string&)': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:164: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::index(const object&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:173: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:175: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::index(int) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:179: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::index(double) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:183: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member 
function `int py::sequence::index(const std::complex&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:187: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::index(const char*) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:191: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `int py::sequence::index(const std::string&) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:195: variable ` object val' has initializer but incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: In member function `py::sequence py::sequence::operator*(int) const': /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:208: `_obj' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:210: `fail' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:211: `lose_ref' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h: At global scope: /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:223: base class ` object' has incomplete type /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:224: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:227: type specifier omitted for parameter `sequence' /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:227: parse error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:228: missing ';' before right brace /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:229: destructors must be member functions /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:229: virtual outside class declaration 
/usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:231: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:233: syntax error before `.' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:236: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:240: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:244: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:248: syntax error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:252: syntax error before `&' token In file included from /home/users/pearu/.python22_compiled/16853/sc_a6df888dc732b38ae7a4c5fa1063b4250.cpp:10: /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:23: parse error before `{' token /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: parse error before `&' token /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: ISO C++ forbids declaration of `list' with no type /usr/local/lib/python2.2/site-packages/weave/scxx/list.h: In function `int list(...)': /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: `other' undeclared (first use this function) /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: only constructors take base initializers /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: confused by earlier errors, bailing out E ------------------ end of second part -------------------- From eric at enthought.com Fri Oct 11 16:19:53 2002 From: eric at enthought.com (eric jones) Date: Fri, 11 Oct 2002 15:19:53 -0500 Subject: [SciPy-dev] Weave checkins In-Reply-To: <1034354263.3767.5.camel@travis.local.net> Message-ID: <000a01c27163$8ed5ea40$6b01a8c0@ericlaptop> Hey Travis, I had a few problems on Linux last night, but I think they are worked out -- at least they work here. Can you do a CVS update and then send me the output log of the errors? 
Thanks, eric > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Travis Oliphant > Sent: Friday, October 11, 2002 11:38 AM > To: scipy-dev at scipy.org > Subject: [SciPy-dev] Weave checkins > > A recent build of SciPy with the latest weave checkins now gives me > > 53 weave errors on a level 10 test where gcc is failing on Linux > Mandrake 8.2. > > Is this happening to anyone else, or do I have a bad build caused perhaps > by not doing a full cleanup? At least it's not segfaulting, which has > happened in the past. > > I can give more details, but I wanted to be sure whether these problems exist > elsewhere first. > > -Travis O. > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From eric at enthought.com Fri Oct 11 17:20:55 2002 From: eric at enthought.com (eric jones) Date: Fri, 11 Oct 2002 16:20:55 -0500 Subject: [SciPy-dev] Weave checkins In-Reply-To: Message-ID: <000101c2716c$15e4fb90$6b01a8c0@ericlaptop> > > > > On 11 Oct 2002, Travis Oliphant wrote: > > > A recent build of SciPy with the latest weave checkins now gives me > > > > 53 weave errors on a level 10 test where gcc is failing on Linux > > Mandrake 8.2. > > > > Is this happening to anyone else, or do I have a bad build caused perhaps > > by not doing a full cleanup? At least it's not segfaulting, which has > > happened in the past. > > With Python 2.2.1, gcc 2.95.4 on Debian Woody I get 0 errors from 184 > tests, though there are some strange (?) warning messages (see the first part > at the end of this message). Some of the output spewed in the weave tests is compiler errors from tests that are supposed to fail (the first part of your first traceback below, for example). I don't think we can do anything about this without a lot of work -- and I'm not sure we even want to. Everyone complains about the __bad_path__ message.
I'll try to think of a less "scary" warning message. The test suite exercises some edge cases which I don't expect to ever happen in real life, and one of these edge cases emits this warning to tell the user that some unexpected stuff is happening. Maybe this isn't a good policy? At the least, I'll bracket this test with a note that says THE FOLLOWING WARNING IS SUPPOSED TO HAPPEN. > > With Python 2.2.1, gcc 3.1.1 on Debian Woody I get 30 errors from 184 > tests. There are numerous compiler errors; one error message is given at > the end of this message. > I look forward to when Eric installs gcc >= 3.1 ;-) We luddites just got RH 8.0 on a machine, so I can probably check on this tonight. :-) (I think it is gcc 3.2 or something like that??) Unfortunately, mingw gcc 3.2 currently causes segfaults when running against an MSVC-compiled Python (but it does compile...). I can't tell offhand what is going on from the error messages -- maybe there are some 'const' issues in the new C++ code. I'll track it down tonight.
Eric

> [Pearu's compiler logs, quoted verbatim from his message above, snipped]
/usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:224: syntax > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:227: type > specifier omitted for parameter `sequence' > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:227: parse > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:228: missing > ';' > before right brace > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:229: > destructors > must be member functions > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:229: virtual > outside class declaration > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:231: syntax > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:233: syntax > error > before `.' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:236: syntax > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:240: syntax > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:244: syntax > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:248: syntax > error > before `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/sequence.h:252: syntax > error > before `&' token > In file included from > /home/users/pearu/.python22_compiled/16853/sc_a6df888dc732b38ae7a4c5fa10 63 > b4250.cpp:10: > /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:23: parse error > before > `{' token > /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: parse error > before > `&' token > /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: ISO C++ > forbids > declaration of `list' with no type > /usr/local/lib/python2.2/site-packages/weave/scxx/list.h: In function `int > list(...)': > /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: `other' > undeclared > (first use this function) > 
/usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: only > constructors > take base initializers > /usr/local/lib/python2.2/site-packages/weave/scxx/list.h:30: confused by > earlier errors, bailing out > E > > ------------------ end of second part -------------------- > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From eric at enthought.com Fri Oct 11 23:39:31 2002 From: eric at enthought.com (eric jones) Date: Fri, 11 Oct 2002 22:39:31 -0500 Subject: [SciPy-dev] Weave checkins In-Reply-To: Message-ID: <000101c271a0$fcb2fd90$8901a8c0@ERICDESKTOP> Hey Pearu, All of weave passes on our RedHat 8.0 box with gcc 3.2 except for the blitz stuff, and it looks like the blitz problem is only with complex numbers. I've attached the traceback below in case anyone can make sense of the compiler error message. It basically boils down to this: /blitz/ops.h: In static member function `static blitz::promote_trait::T_promote blitz::Divide::apply(T_numtype1,.... sc_f951e2d4a9b0e08d923fe6c17a9778ae2.cpp:699: instantiated from here /blitz/ops.h:220: no match for `std::complex& / double&' operator It looks like there is some issue with the binary op templates in op.h or type promotion templates in promote.h when it comes to complex numbers. I'll post to the blitz support list and see if they have a fix. Travis 0., do you still get all the failures on Mandrake? 
Thanks, eric ---------------------------------------------------------------------- [eric at ip192-168-1-136 weave]$ python -c "import weave;weave.test()" No test suite found for weave.accelerate_tools creating test suite for: weave.ast_tools No test suite found for weave.base_info No test suite found for weave.base_spec No test suite found for weave.blitz_spec creating test suite for: weave.blitz_tools creating test suite for: weave.build_tools No test suite found for weave.bytecodecompiler creating test suite for: weave.c_spec creating test suite for: weave.catalog No test suite found for weave.common_info No test suite found for weave.converters No test suite found for weave.dumb_shelve No test suite found for weave.dumbdbm_patched creating test suite for: weave.ext_tools building extensions here: /home/eric/.python22_compiled/69320 building extensions here: /home/eric/.python22_compiled/69321 creating test suite for: weave.inline_tools creating test suite for: weave.size_check creating test suite for: weave.slice_handler creating test suite for: weave.standard_array_spec No test suite found for weave.vtk_spec No test suite found for weave.wx_spec ... Expression: result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]+ b[1:-1 ,2:] + b[1:-1,:-2]) / 5. 
Run: (10, 10) f 1st run(Numeric,compiled,speed up): 0.0014, 19.9448, 0.0001 2nd run(Numeric,compiled,speed up): 0.0012, 0.0054, 0.2287 Run: (50, 50) f 1st run(Numeric,compiled,speed up): 0.0015, 0.0055, 0.2783 2nd run(Numeric,compiled,speed up): 0.0016, 0.0055, 0.2875 Run: (100, 100) f 1st run(Numeric,compiled,speed up): 0.0029, 0.0060, 0.4819 2nd run(Numeric,compiled,speed up): 0.0029, 0.0060, 0.4832 Run: (500, 500) f 1st run(Numeric,compiled,speed up): 0.0822, 0.0284, 2.8985 2nd run(Numeric,compiled,speed up): 0.0779, 0.0290, 2.6866 Run: (1000, 1000) f 1st run(Numeric,compiled,speed up): 0.3111, 0.0943, 3.2996 2nd run(Numeric,compiled,speed up): 0.3110, 0.0959, 3.2419 Run: (10, 10) d 1st run(Numeric,compiled,speed up): 0.0014, 17.1779, 0.0001 2nd run(Numeric,compiled,speed up): 0.0013, 0.0054, 0.2374 Run: (50, 50) d 1st run(Numeric,compiled,speed up): 0.0020, 0.0059, 0.3380 2nd run(Numeric,compiled,speed up): 0.0020, 0.0057, 0.3481 Run: (100, 100) d 1st run(Numeric,compiled,speed up): 0.0072, 0.0062, 1.1604 2nd run(Numeric,compiled,speed up): 0.0070, 0.0062, 1.1389 Run: (500, 500) d 1st run(Numeric,compiled,speed up): 0.1939, 0.0336, 5.7638 2nd run(Numeric,compiled,speed up): 0.1948, 0.0355, 5.4826 Run: (1000, 1000) d 1st run(Numeric,compiled,speed up): 0.9406, 0.1066, 8.8223 2nd run(Numeric,compiled,speed up): 0.7744, 0.1066, 7.2665 Run: (10, 10) F /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/ops.h: In static member function `static blitz::promote_trait::T_promote blitz::Divide::apply(T_numtype1, T_numtype2) [with T_numtype1 = std::complex, T_numtype2 = double]': /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/array/ expr.h:4 45: instantiated from `P_op::T_numtype blitz::_bz_ArrayExprOp::operator()(const blitz::TinyVector&) [with int N_rank = 2, P_expr1 = blitz::_bz_ArrayExpr, 2>, bli tz::FastArrayIterator, 2>, blitz::Add, s td::complex > > >, blitz::FastArrayIterator, 2>, blit z::Add, std::complex > > >, 
blitz::FastArrayIterator< std::complex, 2>, blitz::Add, std::complex > > >, blitz::FastArrayIterator, 2>, blitz::Add, std::complex > > >, P_expr2 = blitz::_bz_ArrayExprConstant, P_op = blitz::Divide, double>]' /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/array/ expr.h:2 35: instantiated from `P_expr::T_numtype blitz::_bz_ArrayExpr::operato r()(const blitz::TinyVector&) [with int N_rank = 2, P_expr = blitz: :_bz_ArrayExprOp, 2 >, blitz::FastArrayIterator, 2>, blitz::Add, std::complex > > >, blitz::FastArrayIterator, 2> , blitz::Add, std::complex > > >, blitz::FastArrayIte rator, 2>, blitz::Add, std::complex > > >, blitz::FastArrayIterator, 2>, blitz::Add, std::complex > > >, blitz::_bz_ArrayExprConstant, bli tz::Divide, double> >]' /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/array/ eval.cc: 758: instantiated from `blitz::Array& blitz::Array::evaluateWithIn dexTraversal1(T_expr, T_update) [with T_expr = blitz::_bz_ArrayExpr, 2>, bli tz::FastArrayIterator, 2>, blitz::Add, s td::complex > > >, blitz::FastArrayIterator, 2>, blit z::Add, std::complex > > >, blitz::FastArrayIterator< std::complex, 2>, blitz::Add, std::complex > > >, blitz::FastArrayIterator, 2>, blitz::Add, std::complex > > >, blitz::_bz_ArrayExprConstant, blitz::Di vide, double> > >, T_update = blitz::_bz_update, std::complex >, P_numtype = std::complex, int N_rank = 2] ' /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/array/ eval.cc: 259: instantiated from `blitz::Array& blitz::Array::evaluate(T_exp r, T_update) [with T_expr = blitz::_bz_ArrayExpr, 2>, blitz::FastArrayIterat or, 2>, blitz::Add, std::complex > > >, blitz::FastArrayIterator, 2>, blitz::Add, std::complex > > >, blitz::FastArrayIterator , 2>, blitz::Add, std::complex > > >, blitz::FastArra yIterator, 2>, blitz::Add, std::complex< float> > > >, blitz::_bz_ArrayExprConstant, blitz::Divide, double> > >, T_update = blitz::_bz_update, std::compl ex >, P_numtype = std::complex, int 
N_rank = 2]' /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/array/ ops.cc:1 18: instantiated from `blitz::Array& blitz::Array::operator=(const blitz::ETBase&) [with T_expr = blitz::_bz_ArrayExpr, 2>, blitz::Fas tArrayIterator, 2>, blitz::Add, std::com plex > > >, blitz::FastArrayIterator, 2>, blitz::Add< std::complex, std::complex > > >, blitz::FastArrayIterator, 2>, blitz::Add, std::complex > > >, bli tz::FastArrayIterator, 2>, blitz::Add, s td::complex > > >, blitz::_bz_ArrayExprConstant, blitz::Divide, double> > >, P_numtype = std::complex, int N_rank = 2] ' /home/eric/.python22_compiled/69322/sc_f951e2d4a9b0e08d923fe6c17a9778ae2 .cpp:699 : instantiated from here /home/eric/lib/python2.2/site-packages/weave/blitz-20001213/blitz/ops.h: 220: no match for `std::complex& / double&' operator Ewarning: specified build_dir '_bad_path_' does not exist or is not writable. Tr ying default locations ....warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...................speed test for list access compiler: scxx: 0.276726961136 C, no checking: 0.215849041939 python: 1.99733901024 ............................................................. test printing a value:2 ../home/eric/.python22_compiled/69322/sc_a6df888dc732b38ae7a4c5fa1063b42 51.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/eric/.python22_compiled/69322/sc_a6df888dc732b38ae7a4c5fa1063b4251 .cpp:645 : no match for `std::string& < int' operator /home/eric/.python22_compiled/69322/sc_a6df888dc732b38ae7a4c5fa1063b4251 .cpp:649 : no match for `std::string& + int' operator ........................................................................ ........ .............. 
====================================================================== ERROR: result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/eric/lib/python2.2/site-packages/weave/tests/test_blitz_tools.py" , line 150, in check_5point_avg_2d self.generic_2d(expr) File "/home/eric/lib/python2.2/site-packages/weave/tests/test_blitz_tools.py" , line 124, in generic_2d mod_location) File "/home/eric/lib/python2.2/site-packages/weave/tests/test_blitz_tools.py" , line 80, in generic_test blitz_tools.blitz(expr,arg_dict,{},verbose=0) #, File "/home/eric/lib/python2.2/site-packages/weave/blitz_tools.py", line 72, i n blitz type_converters = converters.blitz, File "/home/eric/lib/python2.2/site-packages/weave/inline_tools.py", line 425, in compile_function verbose=verbose, **kw) File "/home/eric/lib/python2.2/site-packages/weave/ext_tools.py", line 327, in compile verbose = verbose, **kw) File "/home/eric/lib/python2.2/site-packages/weave/build_tools.py", line 221, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/usr/src/build/143041-i386/install/usr/lib/python2.2/distutils/core.py" , line 157, in setup CompileError: error: command 'gcc' failed with exit status 1 ---------------------------------------------------------------------- Ran 184 tests in 215.422s FAILED (errors=1) ---------------------------------------------- eric jones 515 Congress Ave www.enthought.com Suite 1614 512 536-1057 Austin, Tx 78701 From pearu at cens.ioc.ee Sat Oct 12 06:13:29 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 12 Oct 2002 13:13:29 +0300 (EEST) Subject: [SciPy-dev] Re: [Scipy-cvs] world/scipy/weave/scxx object.h,1.8,1.9 sequence.h,1.3,1.4 In-Reply-To: <20021012005117.D2C6B3EACE@www.scipy.com> Message-ID: On Fri, 11 Oct 2002 eric at localhost.localdomain wrote: > Update of /home/cvsroot/world/scipy/weave/scxx > In directory 
shaft:/tmp/cvs-serv27620 > > Modified Files: > object.h sequence.h > Log Message: > removed the not() operator from object.h. For some reason, it was > causing problems in gcc 3.2 -- is it reserved or something? Anyway, I > haven't used it and don't consider it to be critical at the moment, so > I've just commented it out until there is more time to look it over. not is a C++ operator:

bool operator not() const {
  return PyObject_Not(_obj) == 1;
};

works in gcc 3.0 and 3.1 but not in gcc 2.95. The following works in all versions of gcc:

#if defined(__GNUC__) && __GNUC__ < 3
bool not() const {
#else
bool operator not() const {
#endif
  return PyObject_Not(_obj) == 1;
};

I don't know which road works for other C++ compilers, but my guess would be the #else part, provided that these compilers follow the C++ standard closely. Pearu From eric at enthought.com Sat Oct 12 06:46:55 2002 From: eric at enthought.com (eric jones) Date: Sat, 12 Oct 2002 05:46:55 -0500 Subject: [SciPy-dev] Re: [Scipy-cvs] world/scipy/weave/scxx object.h,1.8,1.9 sequence.h,1.3,1.4 In-Reply-To: Message-ID: <000b01c271dc$b1577d80$8901a8c0@ERICDESKTOP> Hey Thanks! I'm glad to know what was happening here. For now I'll leave it off so that it doesn't crop up as a problem on other platforms -- it isn't really that critical. Once we get most of the other stuff ironed out, we can look at adding it back in. Currently, there aren't any ifdefs. That probably won't last long, but I'm loath to add them right now.
:-) eric ---------------------------------------------- eric jones 515 Congress Ave www.enthought.com Suite 1614 512 536-1057 Austin, Tx 78701 -----Original Message----- From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On Behalf Of Pearu Peterson Sent: Saturday, October 12, 2002 4:13 AM To: scipy-dev at scipy.org Subject: [SciPy-dev] Re: [Scipy-cvs] world/scipy/weave/scxx object.h,1.8,1.9 sequence.h,1.3,1.4 On Fri, 11 Oct 2002 eric at localhost.localdomain wrote: > Update of /home/cvsroot/world/scipy/weave/scxx > In directory shaft:/tmp/cvs-serv27620 > > Modified Files: > object.h sequence.h > Log Message: > removed the not() operator from object.h. For some reason, it was > causing problems in gcc 3.2 -- is it reserved or something? Anyway, I > haven't used it and don't consider it to be critical at the moment, so > I've just commented it out until there is more time to look it over. not is C++ operator: bool operator not() const { return PyObject_Not(_obj) == 1; }; works in gcc 3.0 and 3.1 but not in gcc 2.95. The following works in all versions of gcc: #if defined(__GNUC__) && __GNUC__ < 3 bool not() const { #else bool operator not() const { #endif return PyObject_Not(_obj) == 1; }; I don't know which one road works for other C++ compilers but my guess would be the #else part provided that these compilers follow C++ standard closely. Pearu _______________________________________________ Scipy-dev mailing list Scipy-dev at scipy.net http://www.scipy.net/mailman/listinfo/scipy-dev From eric at enthought.com Sat Oct 12 20:05:32 2002 From: eric at enthought.com (eric jones) Date: Sat, 12 Oct 2002 19:05:32 -0500 Subject: [SciPy-dev] ccache Message-ID: <001901c2724c$422a3ae0$8901a8c0@ERICDESKTOP> I just played with ccache (ccache.samba.org) after seeing a related project (distcc) on Slashdot. It's way cool. 
The whole idea is for the "compiler" to cache object files and only re-compile .c/.cpp files when the compiler version, compile options, or file has changed. As a result, if you do the following in the Python CVS

make          (2.33 minutes)
make clean
make          (0.33 minutes)

The second make will run really fast. Or, in our case,

python setup.py build    (4.5 minutes)
rm -rf build
python setup.py build    (1.5 minutes)

That is *really* handy. I had to edit a single line of ccache.c to get it to recognize Fortran files (at bottom of page). The only thing now that the caching mechanism doesn't handle is pyf files. Ccache is built to work explicitly with C/C++ compilers (gcc specifically, I think), and so my naive attempt at adding a pyf extension (and replacing f2py with the ccache call) didn't work. If we could get ccache to handle f2py, secondary builds would probably drop to 15-30 seconds. Even if we don't, this is worth knowing about -- at least I was happy to find it!

Regards,
eric

---- edits to ccache.c ----
} extensions[] = {
    {"c", "i"},
    {"C", "ii"},
    {"m", "mi"},
    {"f", "i"},    /* I added this one */
    {"cc", "ii"},
    {"CC", "ii"},
    {"cpp", "ii"},
    {"CPP", "ii"},
    {"cxx", "ii"},
    {"CXX", "ii"},
    {"c++", "ii"},
    {"C++", "ii"},
    {NULL, NULL}};

---------------------------------------------- eric jones 515 Congress Ave www.enthought.com Suite 1614 512 536-1057 Austin, Tx 78701 From oliphant.travis at ieee.org Mon Oct 14 05:55:54 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 14 Oct 2002 03:55:54 -0600 Subject: [SciPy-dev] ccache In-Reply-To: <001901c2724c$422a3ae0$8901a8c0@ERICDESKTOP> References: <001901c2724c$422a3ae0$8901a8c0@ERICDESKTOP> Message-ID: <1034589355.6617.4.camel@travis.local.net> Report on Mandrake 8.2 (gcc-2.96 self-compiled python 2.2.1) Updated SciPy installation. Tests pass (although now I have to do weave.test() separately) I had to change wx_spec.py in a couple of places.
First, I had to change the definition of wx_base (this should be in the system_info file). Then, I had to change ldflags = get_wxconfig('ldflags') to ldflags = get_wxconfig('ld') These changes allowed the tests to succeed. Partial outputs shown below. >>> weave.test(10) results in .............................. ---------------------------------------------------------------------- Ran 184 tests in 97.017s OK >>> scipy.test(10) results in .................................................................................................................... ---------------------------------------------------------------------- Ran 423 tests in 51.404s OK From pearu at cens.ioc.ee Mon Oct 14 07:54:26 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 14 Oct 2002 14:54:26 +0300 (EEST) Subject: [SciPy-dev] ccache In-Reply-To: <1034589355.6617.4.camel@travis.local.net> Message-ID: On 14 Oct 2002, Travis Oliphant wrote: > > Report on Mandrake 8.2 (gcc-2.96 self-compiled python 2.2.1) > Updated SciPy installation. > > Tests pass (although now I have to do weave.test() separately) > > I had to change wx_spec.py in a couple of places. First, I had to > change the definition of wx_base (this should be in the system_info > file). At least under unices wx_base shouldn't be necessary. If wx-config is not in the path then it means that wxPython was not installed (or not properly at least) and wx support should be then disabled. > Then, I had to change > > ldflags = get_wxconfig('ldflags') > > to > > ldflags = get_wxconfig('ld') Are you sure about this change? Here wx-config --ld gives c++ -shared -o and that shouldn't be suitable for extra_link_args. 
Pearu From eric at enthought.com Mon Oct 14 09:27:17 2002 From: eric at enthought.com (eric jones) Date: Mon, 14 Oct 2002 08:27:17 -0500 Subject: [SciPy-dev] ccache In-Reply-To: Message-ID: <000401c27385$6d83bd30$8901a8c0@ERICDESKTOP> > > On 14 Oct 2002, Travis Oliphant wrote: > > > > > Report on Mandrake 8.2 (gcc-2.96 self-compiled python 2.2.1) > > Updated SciPy installation. > > > > Tests pass (although now I have to do weave.test() separately) > > > > I had to change wx_spec.py in a couple of places. First, I had to > > change the definition of wx_base (this should be in the system_info > > file). You are right, but I'm not sure when I'll get to it. If someone gets to it that'd be great. > > At least under unices wx_base shouldn't be necessary. If wx-config is not > in the path then it means that wxPython was not installed (or > not properly at least) and wx support should be then disabled. Is this true? It live is /usr/lib/wxPython/bin on our machine which is most definitely not in the path. Where does it live on your machine? Perhaps we need to fix how ours is installed. > > > Then, I had to change > > > > ldflags = get_wxconfig('ldflags') > > > > to > > > > ldflags = get_wxconfig('ld') > > Are you sure about this change? Here > wx-config --ld > gives > c++ -shared -o > and that shouldn't be suitable for extra_link_args. Pearu is right. Travis, what error did you get from using ldflags? What do you get on the command line: wx-config --ldflags Also, what version of wxPython do you have installed? Thanks, Eric From pearu at cens.ioc.ee Mon Oct 14 10:18:17 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 14 Oct 2002 17:18:17 +0300 (EEST) Subject: [SciPy-dev] ccache In-Reply-To: <000401c27385$6d83bd30$8901a8c0@ERICDESKTOP> Message-ID: On Mon, 14 Oct 2002, eric jones wrote: > > At least under unices wx_base shouldn't be necessary. 
If wx-config is > not > > in the path then it means that wxPython was not installed (or > > not properly at least) and wx support should be then disabled. > > Is this true? It live is /usr/lib/wxPython/bin on our machine which is > most definitely not in the path. Where does it live on your machine? > Perhaps we need to fix how ours is installed. In my machine(s) wx-config lives in $prefix/bin where $prefix was specified in configure command line, and the default $prefix is /usr/local as usual for Gnu software. On the other hand, instructions in http://www.wxpython.org/README.1st.txt use /usr/lib/wxPython for the prefix as an example and that's where your system probably got it. This document does not mention the location of wx-config file because it is not important for doing 'import wxPython'. For example, in Debian Woody, `wx-config --prefix` gives /usr and I would expect other distributions to use the same convention. I think it would be safer and simpler to require that wx-config is in the path than either hard coding it to weave or implementing various heuristics similar to what system_info basically does. One can always make heuristics to fail and most software that use xxx-config convention (e.g. ginac,cln,glib,gtk, and many others, see for -config files in your system), assume that xxx-config can be found from the path. Pearu From oliphant at ee.byu.edu Mon Oct 14 14:20:07 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Oct 2002 12:20:07 -0600 (MDT) Subject: [SciPy-dev] ccache In-Reply-To: Message-ID: > > Then, I had to change > > > > ldflags = get_wxconfig('ldflags') > > > > to > > > > ldflags = get_wxconfig('ld') > > Are you sure about this change? Here > wx-config --ld > gives > c++ -shared -o > and that shouldn't be suitable for extra_link_args. > No, I'm not sure, but wx-config --ldflags gives an error on my machine (no such option). So I'm not sure what is desired here. 
At least this way I didn't get an import error when I tried to import scipy. -Travis From pearu at cens.ioc.ee Mon Oct 14 14:37:24 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 14 Oct 2002 21:37:24 +0300 (EEST) Subject: [SciPy-dev] ccache In-Reply-To: Message-ID: On Mon, 14 Oct 2002, Travis Oliphant wrote: > > > Then, I had to change > > > > > > ldflags = get_wxconfig('ldflags') > > > > > > to > > > > > > ldflags = get_wxconfig('ld') > > > > Are you sure about this change? Here > > wx-config --ld > > gives > > c++ -shared -o > > and that shouldn't be suitable for extra_link_args. > > > > No, I'm not sure, but wx-config --ldflags > gives an error on my machine (no such option). So I'm not sure what is > desired here. I bet you are using pre 2.3 version of wxPython as it does not use --ldflags indeed. But how important is ldflags anyway? With wxPython-2.3.3.1 in my linux boxes (suse and debian) `wx-config --ldflags` outputs nothing. Pearu From sales at smoking.com.net Mon Oct 14 09:53:41 2002 From: sales at smoking.com.net (Sales Department) Date: Mon, 14 Oct 2002 15:53:41 +0200 Subject: [SciPy-dev] Cheap Cigarettes Message-ID: <20021015004444.C132D3EACE@www.scipy.com> Dear Sir or Madam In the past you have requested information on discounted products. If you are not a smoker, and find this email offensive, then we sincerely apologise. We will be only too happy to take you off our database. If you are a smoker, however, you are probably fed up with paying high prices for your cigarettes and tobacco. Take a look at what we can do for you at http://www.smokersassociation.co.uk/?S=16&ID=2 We can send you, legally, by registered air mail, direct to your door, 4 cartons of cigarettes or 40 pouches of rolling tobacco (all brands are available) from only 170 Euros - about 105 pounds - fully inclusive of postage and packing. Why pay more? If you would rather not hear from us any more, this link will ensure that you are not bothered again. 
mailto:unsubscribe at smokersassociation.co.uk Yours faithfully. Smokers Association http://www.smokersassociation.co.uk/?S=16&ID=2 xay26261291y From pearu at cens.ioc.ee Tue Oct 15 06:29:46 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 15 Oct 2002 13:29:46 +0300 (EEST) Subject: [SciPy-dev] tar-ball problem Message-ID: Hi, It appears that the scipy source SciPy-0.2.0_alpha_144.4350_src.tar.gz in http://www.scipy.org/site_content/download_list is incomplete: it lacks files in xplt module. This is the reason why users complain about missing gistCmodule.c file when building scipy. When I hit python setup.py sdist on my debian box then the resulting tar-ball *is* complete. So, my guess is that SciPy-0.2.0_alpha_144.4350_src.tar.gz is actually created in some Windows box (hmm, I am having deja vu). 
Could the creation of this tar-ball moved to some linux box? Thanks, Pearu From skip at pobox.com Tue Oct 15 08:23:11 2002 From: skip at pobox.com (Skip Montanaro) Date: Tue, 15 Oct 2002 07:23:11 -0500 Subject: [SciPy-dev] tar-ball problem In-Reply-To: References: Message-ID: <15788.2223.957828.96999@montanaro.dyndns.org> Pearu> It appears that the scipy source Pearu> SciPy-0.2.0_alpha_144.4350_src.tar.gz Pearu> in Pearu> http://www.scipy.org/site_content/download_list Pearu> is incomplete: it lacks files in xplt module. This is the reason Pearu> why users complain about missing gistCmodule.c file when building Pearu> scipy. ... Pearu> So, my guess is that SciPy-0.2.0_alpha_144.4350_src.tar.gz is Pearu> actually created in some Windows box (hmm, I am having deja vu). Based upon the timestamps on files in the downloads directory, I suspect it's generated on the Sun at Enthought which I've been struggling with. I wasn't aware source tarballs were being generated. I'll take a look at it. -- Skip Montanaro - skip at pobox.com http://www.mojam.com/ http://www.musi-cal.com/ From travis at enthought.com Tue Oct 15 08:38:03 2002 From: travis at enthought.com (Travis N. Vaught) Date: Tue, 15 Oct 2002 07:38:03 -0500 Subject: [SciPy-dev] tar-ball problem In-Reply-To: Message-ID: I'll look at this today. I think you're correct about the creation of the source file on a windows box. Thanks, Travis > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net]On > Behalf Of Pearu Peterson > Sent: Tuesday, October 15, 2002 5:30 AM > To: scipy-dev at scipy.org > Subject: [SciPy-dev] tar-ball problem > > > > Hi, > > It appears that the scipy source > SciPy-0.2.0_alpha_144.4350_src.tar.gz > in > http://www.scipy.org/site_content/download_list > is incomplete: it lacks files in xplt module. This is the reason why users > complain about missing gistCmodule.c file when building scipy. 
> > When I hit > python setup.py sdist > on my debian box then the resulting tar-ball *is* complete. > > So, my guess is that SciPy-0.2.0_alpha_144.4350_src.tar.gz is actually > created in some Windows box (hmm, I am having deja vu). > > Could the creation of this tar-ball moved to some linux box? > > Thanks, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From skip at pobox.com Tue Oct 15 15:33:48 2002 From: skip at pobox.com (Skip Montanaro) Date: Tue, 15 Oct 2002 14:33:48 -0500 Subject: [SciPy-dev] Re: [Scipy-cvs] world/scipy/tests test_common.py,1.4,1.5 In-Reply-To: <20021015184305.2EBDC3EACE@www.scipy.com> References: <20021015184305.2EBDC3EACE@www.scipy.com> Message-ID: <15788.28060.926042.550891@montanaro.dyndns.org> pearu> Fixed another typo. This opened a can of errors. Takers? How does fixing a typo open a can of errors? What *is* a can of errors? Takers for what? Skip From pearu at cens.ioc.ee Tue Oct 15 16:23:29 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 15 Oct 2002 23:23:29 +0300 (EEST) Subject: [SciPy-dev] Re: [Scipy-cvs] world/scipy/tests test_common.py,1.4,1.5 In-Reply-To: <15788.28060.926042.550891@montanaro.dyndns.org> Message-ID: On Tue, 15 Oct 2002, Skip Montanaro wrote: > > pearu> Fixed another typo. This opened a can of errors. Takers? > > How does fixing a typo open a can of errors? In this particular case, the file test_common.py had syntax errors (that obviously looked like typos: missing comma and a comma in place of a colon) and the scipy testing hooks just ignored this file. After fixing the syntax errors, the tests in test_common.py were actually executed (probably for the first time) and not all of them passed... > What *is* a can of errors? 
A bunch of errors (or bugs) that are present but do not show up because another error (or bug) stops the process before they can ever take effect. I thought that the analogy with the expression "a can of worms" was appropriate in this case. > Takers for what? Takers for fixing these errors, of course. I would have fixed them myself but right now I am in the middle of another task... The errors are shown below for the record. Pearu

$ python test_common.py
......EEEE
======================================================================
ERROR: check_basic (__main__.test_factorial)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_common.py", line 53, in check_basic
    assert_equal(factorial(-10),0)
  File "/usr/lib/python2.2/site-packages/scipy/common.py", line 20, in factorial
    raise ValueError, "n must be >= 0"
ValueError: n must be >= 0
======================================================================
ERROR: check_exact (__main__.test_factorial)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_common.py", line 60, in check_exact
    assert_equal(factorial(key,exact=1),resdict[key])
  File "/usr/lib/python2.2/site-packages/scipy/common.py", line 20, in factorial
    raise ValueError, "n must be >= 0"
ValueError: n must be >= 0
======================================================================
ERROR: check_basic (__main__.test_comb)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_common.py", line 68, in check_basic
    assert_equal(comb(-10,1),0)
  File "/usr/lib/python2.2/site-packages/scipy/common.py", line 40, in comb
    raise ValueError, "N and k must be non-negative and k <= N"
ValueError: N and k must be non-negative and k <= N
======================================================================
ERROR: check_exact (__main__.test_comb)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_common.py", line 79, in check_exact
    assert_equal(comb(key[0],key[1],exact=1),resdict[key])
  File "/usr/lib/python2.2/site-packages/scipy/common.py", line 40, in comb
    raise ValueError, "N and k must be non-negative and k <= N"
ValueError: N and k must be non-negative and k <= N
----------------------------------------------------------------------
Ran 10 tests in 0.470s

FAILED (errors=4)

From oliphant at ee.byu.edu Tue Oct 15 17:04:06 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 15 Oct 2002 15:04:06 -0600 (MDT) Subject: [SciPy-dev] Common tests failing Message-ID: Pearu, Thanks for fixing the tests in test_common. But, are you sure you don't have an old version of common.py lying around? The new version of common.py should not raise these errors. The test errors you are getting do not happen for me. -Travis O. From pearu at cens.ioc.ee Wed Oct 16 04:26:22 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 16 Oct 2002 11:26:22 +0300 (EEST) Subject: [SciPy-dev] Common tests failing In-Reply-To: Message-ID: On Tue, 15 Oct 2002, Travis Oliphant wrote: > But, are you sure you don't have an old version of common.py lying around? > The new version of common.py should not raise these errors. Yes, after CVS update all tests pass. Thanks, Pearu From eric at enthought.com Wed Oct 16 12:12:45 2002 From: eric at enthought.com (eric jones) Date: Wed, 16 Oct 2002 11:12:45 -0500 Subject: [SciPy-dev] ANNOUNCE: Class -- Building C/C++ Extensions for Python Message-ID: <003501c2752e$dfc350e0$8901a8c0@ERICDESKTOP> Building C/C++ Extensions for Python Description: Enthought is proud to announce "Building C/C++ extensions for Python," an intense 3 day course taught by David Beazley, David Abrahams, and Eric Jones, the lead authors of SWIG, Boost.Python, and Weave.
This hands-on course is geared toward C/C++ programmers and teaches the start-to-finish process of exposing C/C++ libraries for use in Python. Students are sure to benefit from this opportunity to learn from three of the leading experts in the field. The class teaches the fundamentals of designing and building Python extensions and addresses the common problems that arise during the process. Boost.Python, SWIG, and Weave are central to the discussion, and each is covered in depth. Fundamental topics such as handling reference counts and building Python 2.2 style extension types by hand are also covered. Other topics discussed include Pyrex, distributing extensions using distutils, debugging extensions, and solving shared library issues. The course is divided equally between lectures and hands-on programming sessions. Users will need to bring either a Linux or a Windows NT/2000/XP, network enabled, laptop to the class with Python 2.2 and a C++ compiler installed for the programming sessions. More information about setup requirements is available at the website. Space is limited. Website: http://www.enthought.com/training/building_extensions.html Signup Information: Call (512) 536-1057 to reserve your place. Dates: December 9th, 10th, and 11th, 2002 (Monday through Wednesday) Instruction Hours: Instruction runs from 9am-9pm on the first two days (with breaks for lunch, dinner, and between sessions), and 9am-6pm on the final day of the class. Location: The Driskill Hotel www.driskillhotel.com 604 Brazos Street Austin, Texas The Driskill is conveniently located in the heart of downtown Austin. And, yes, it has high speed internet connections in the guest rooms. What's Included: 3 nights of lodging All meals and snacks Unlimited caffeinated drinks Class materials 3 days of instruction Cost: $2400.00 US. About the Authors: David Beazley David is an assistant professor in the Department of Computer Science at the University of Chicago. 
He has worked on Python extension building since 1996 and is the creator of SWIG (www.swig.org). He is also the author of the "Python Essential Reference." David holds a PhD in Computer Science from the University of Utah. David Abrahams David is a founding member and moderator of Boost (www.boost.org) and heads Boost Consulting, a company dedicated to providing professional support and development services for the Boost C++ libraries and associated tools. He has been an ANSI/ISO C++ committee member since 1996, when he contributed a theory, specification, and implementation of exception handling for the C++ standard library. In his 14-year career he has developed applications for the desktop and embedded devices in the fields of music software, speech recognition, and circuit simulation. David holds a BSE in Computer Science from the University of Pennsylvania. Eric Jones Eric has a broad background in engineering and software development and leads Enthought's product engineering and software design. Eric is a lead developer on the SciPy (www.scipy.org) and Weave projects and has taught numerous courses about Python and how to leverage it for scientific computing. Prior to co-founding Enthought, Eric worked in the fields of numerical electromagnetics and genetic optimization. Eric holds a PhD degree in Electrical Engineering from Duke University. ---------------------------------------------- eric jones 515 Congress Ave www.enthought.com Suite 1614 512 536-1057 Austin, Tx 78701 From p.berkes at biologie.hu-berlin.de Fri Oct 18 09:00:15 2002 From: p.berkes at biologie.hu-berlin.de (Pietro Berkes) Date: Fri, 18 Oct 2002 15:00:15 +0200 Subject: [SciPy-dev] shape_base questions Message-ID: <02101815001500.18707@sherrington.biologie.hu-berlin.de> Hi! I was taking a look to the 'shape_base.py' file, and I've got a couple of questions/suggestions: - 'apply_over_axes' won't work, since it cannot find SequenceType on line 15. 
SequenceType was defined in stats/rv2.py and stats/stats.py, where 'apply_over_axes' is used. I suggest to move the definition SequenceType = [types.TupleType, types.ListType, Numeric.ArrayType] together with a predicate function: def issequence(seq): return type(seq) in SequenceType in scipy_base/type_check.py and then to fix rv2.py, stats.py and shape_base.py. By the way: in rv2.py, SequenceType contains array.ArrayType and Numeric.ArrayType. Are they different? And if yes, why? - 'apply_over_axes' and 'expand_dims' accept negative axes, while other functions don't. Are there design guidelines regarding negative axes? The two functions correct negative indices in two different ways: 'apply_over_axes' : axis = len(shape) + axis 'expand_dims' : axis = len(shape) + axis + 1 I think the first way is the correct one in both cases, at least it would match the documentation of both functions. I could send a patch for these issues... what is the preferred way to do this? Should I use the SourceForge page (it appears to be void...)? Regards, Pietro. From bud at sistema.it Fri Oct 18 11:20:18 2002 From: bud at sistema.it (Bud P.Bruegger) Date: Fri, 18 Oct 2002 17:20:18 +0200 Subject: [SciPy-dev] least square function fit for vector fields Message-ID: <20021018172018.00a12170.bud@sistema.it> Hello everyone, I'm new to this list. I just wrote a module for non-linear least square fit on top of numeric python and plan to release it under an open source license. While I looked around for existing sw before coding, and actually looked at Scipy, I didn't look close enough and missed out on optimize.leastsq... If I understand leastsq correctly, my module seems to be more general as it allows to use vector fields as data. For example, x could be a scalar for time and y a point in space (this is the case for my application); or x could be a point in space (x,y,z) and y a vector (such as wind speed). 
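A vector-valued fit of the kind Bud describes can always be reduced to the scalar case by flattening each vector residual into several scalar residuals, which is why a scalar least-squares routine can in principle handle such data too. A sketch (an illustration, not Bud's code; the toy model and function name are assumptions) fitting y(t) = (a*t, b*t) in pure Python via the decoupled normal equations:

```python
def fit_linear_trajectory(ts, points):
    """Return (a, b) minimizing sum_i |(a*t_i, b*t_i) - p_i|^2.

    Each 2-D control point contributes two scalar residuals; for this
    linear model the normal equations decouple per component, which is
    the flattened-residual idea in closed form.
    """
    stt = sum(t * t for t in ts)  # sum of t_i^2
    a = sum(t * p[0] for t, p in zip(ts, points)) / stt
    b = sum(t * p[1] for t, p in zip(ts, points)) / stt
    return a, b

# Points sampled exactly from y(t) = (2t, -t) are recovered exactly.
a, b = fit_linear_trajectory([1.0, 2.0, 3.0],
                             [(2.0, -1.0), (4.0, -2.0), (6.0, -3.0)])
```

For a nonlinear model the same flattening applies, only the residual vector is handed to an iterative solver instead of being solved in closed form.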
My style is a little "heavier" than what you did in leastsq, for example, it uses custom classes for control points (your data) and parameters. I use labels for both parameters, and control points (such as t, x, y, z), and parameters encapsulate approximation values for the linearization and initial epsilon used in the numeric derivation. The code is alpha and was used only once for fitting ephemeris parameters for GPS satellite orbits. I took some shortcuts, and quite some cleaning up and refactoring is needed. I'm wondering whether this code could be useful to somebody and whether it would possibly fit into the scipy package. You can look at the code at http://www.sistema.it/fit/. The two files are the actual code and an incomplete test suite. The quickest overview of usage can be found in the last test of fitTest.py. Hoping this is of interest. Kind regards --bud

/-----------------------------------------------------------------
| Bud P. Bruegger, Ph.D.
| Sistema (www.sistema.it)
| Via U. Bassi, 54
| 58100 Grosseto, Italy
| +39-0564-411682 (voice and fax)
\-----------------------------------------------------------------

From oliphant at ee.byu.edu Fri Oct 18 14:03:26 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Oct 2002 12:03:26 -0600 (MDT) Subject: [SciPy-dev] shape_base questions In-Reply-To: <02101815001500.18707@sherrington.biologie.hu-berlin.de> Message-ID: > > Hi! I was taking a look to the 'shape_base.py' file, and I've got a couple of > questions/suggestions: > > - 'apply_over_axes' won't work, since it cannot find SequenceType on line 15. > SequenceType was defined in stats/rv2.py and stats/stats.py, where > 'apply_over_axes' is used. I suggest to move the definition > I didn't realize shape_base.py was using these functions. These two functions were created to handle converting stats.py to scipy style. I don't believe they have ever been "standardized". We should look at this, though. -Travis O.
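The predicate Pietro proposes above can be sketched in a few lines (an illustration, not code from the thread; the 2002 original would build SequenceType from types.TupleType, types.ListType and Numeric.ArrayType, which this sketch expresses against the built-in types):

```python
# Recognized sequence types; in scipy this tuple would also include
# Numeric.ArrayType (the Numeric array type object).
SequenceType = (tuple, list)

def issequence(seq):
    """Return True if seq is one of the recognized sequence types."""
    return isinstance(seq, SequenceType)
```

Placing this in scipy_base/type_check.py, as Pietro suggests, would let rv2.py, stats.py and shape_base.py all share one definition instead of three.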
From pearu at cens.ioc.ee Fri Oct 18 17:44:15 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 19 Oct 2002 00:44:15 +0300 (EEST) Subject: [SciPy-dev] Re: [SciPy-user] Install failure of SciPy In-Reply-To: <3DB07420.3060906@bellsouth.net> Message-ID: On Fri, 18 Oct 2002, Jeffrey B. Layton wrote: > Good afternoon, > > After some suggestions Pearu made, I rebuilt gcc-3.1.1 in > my home account. I found the g2c libs and linked them > appropriately. I then downloaded the CVS version of Scipy > and start building again. Unfortunately, I got another error, > > ifc -shared build/temp.linux-i686-2.2/cephesmodule.o > build/temp.linux-i686-2.2/amos_wrappers.o > build/temp.linux-i686-2.2/specfun_wrappers.o > build/temp.linux-i686-2.2/toms_wrappers.o > build/temp.linux-i686-2.2/cdf_wrappers.o > build/temp.linux-i686-2.2/ufunc_extras.o -Lbuild/temp.linux-i686-2.2 > -Lbuild/temp.linux-i686-2.2 > -L/home/laytonj/local/gcc-3.1.1/lib/gcc-lib/i686-pc-linux-gnu/3.1.1 > -lamos -ltoms -lc_misc -lcephes -lmach -lcdf -lspecfun -lg2c -o > build/lib.linux-i686-2.2/scipy/special/cephes.so > /opt/intel/compiler60/ia32/lib/libF90.a(int8.o): In function `isnan': > int8.o(.text+0x4c40): multiple definition of `isnan' > build/temp.linux-i686-2.2/libcephes.a(isnan.o):/home/laytonj/TEMP/scipy/special/cephes/isnan.c:125: > first defined here > ld: Warning: size of symbol `isnan' changed from 51 to 80 in > /opt/intel/compiler60/ia32/lib/libF90.a(int8.o) > error: command 'ifc' failed with exit status 1 > > > I'm not sure the problem is, but I'll hazard to guess that > isnan is being redefined. I hope that after all the trouble, you'll not give up on SciPy, especially when I have another bad news about the Intel compiler support. First, to fix the above problem, insert the following line #define isnan cephes_isnan to files special/cephes/mconf_LE.h special/cephes/mconf_BE.h and re-run the build command that should finish now successfully. 
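In context, the guard just described might look like the following sketch (an illustration, not the actual mconf header; cephes_isnan here is a stand-in for the real cephes routine):

```c
#include <math.h>   /* C99 systems already provide an isnan macro here */

/* Stand-in for cephes' NaN test, renamed from isnan() to cephes_isnan()
 * so it cannot collide with a libc or compiler symbol. */
static int cephes_isnan(double x)
{
    return x != x;   /* only NaN compares unequal to itself */
}

/* Fall back to the cephes routine only where the platform supplies
 * no isnan macro of its own. */
#ifndef isnan
#define isnan cephes_isnan
#endif
```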
The bad news is that when running scipy tests, it will segfault after .Testing ncf in the stats module (I have tested it both on suse and debian with Intel 5.0 compiler). All other tests from other modules seem to pass fine. I am not sure yet what function, even what module exactly, is causing this segfault, but it must be either stats or special (as stats uses special). And both modules cannot be tested locally (special even doesn't have working unittests) as they import scipy and that makes finding/fixing bugs very time consuming and difficult. I also tried to remove the dependencies of these modules on scipy but gave up in the short term; the dependencies were too deep and messy. In the long term, I think these issues (the lack of independence and unittests) really ought to be fixed. So, unless someone will figure out quickly the source of this segfault, the Intel compiler remains unsupported. But I'll keep looking for the solution ... Until then, I'd suggest using g77 compiler for building scipy (see one of my previous mails how) and when we have fixed scipy for Intel compiler, you can later simply rebuild scipy. Sorry for the inconvenience, Pearu From laytonjb at bellsouth.net Fri Oct 18 18:27:13 2002 From: laytonjb at bellsouth.net (Jeffrey B. Layton) Date: Fri, 18 Oct 2002 18:27:13 -0400 Subject: [SciPy-dev] Re: [SciPy-user] Install failure of SciPy References: Message-ID: <3DB08AC1.9070106@bellsouth.net> Pearu Peterson wrote: > > >>I'm not sure the problem is, but I'll hazard to guess that >>isnan is being redefined. >> >> > >I hope that after all the trouble, you'll not give up on SciPy, especially >when I have another bad news about the Intel compiler support. > I'm not ready to give up yet. > >Until then, I'd suggest using g77 compiler for building scipy (see one of >my previous mails how) and when we have fixed scipy for Intel compiler, >you can later simply rebuild scipy. > OK. This sounds like a plan to me!
> >Sorry for inconvinience, > Pearu > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From oliphant at ee.byu.edu Fri Oct 18 21:02:11 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Oct 2002 19:02:11 -0600 (MDT) Subject: [SciPy-dev] Re: [SciPy-user] Install failure of SciPy In-Reply-To: Message-ID: > > I hope that after all the trouble, you'll not give up on SciPy, especially > when I have another bad news about the Intel compiler support. > > First, to fix the above problem, insert the following line > > #define isnan cephes_isnan > > to files > > special/cephes/mconf_LE.h > special/cephes/mconf_BE.h > Should this be changed in the source tree. It looks like other platforms/compilers have an isnan already defined. > and re-run the build command that should finish now successfully. > > The bad news is that when running scipy tests, it will segfault after > > .Testing ncf > This is definitely a special function problem. This is the "first-place" in the scipy testing that one calls out to a (hand-wrapped) fortran-compiled special function. I'm sure the problem is with different linking needs, here. The special function module needs a c-function to call so that's why f2py was not used. I'm not sure how to get setup to use the right compiler switches so that the FORTRAN code called in cdf_wrappers.c works on many platforms. > in the stats module (I have tested it both on suse and debian with > Intel 5.0 compiler). All other tests from other modules seem to pass fine. > > I am not sure yet what function, even what module exactly, is causing this > segfault, but it must be either stats or special (as stats uses special). I'm sure it's special. Try calling special.ncfdtr all by itself and seeing what happens. 
> And both modules cannot be tested locally (special even doesn't have > working unittests) Unittests for special are being written as we speak (I've got an undergrad working on it). > as they import scipy and that makes finding/fixing bugs > very time consuming and difficult. I also tried to remove the dependencies > of these modules on scipy I don't think stats should ever have dependency on scipy removed (it will always need special for example). Special should be able to depend only on scipy_base. > > So, unless someone will figure out quickly the source of this segfault, > the Intel compiler remains unsupported. But I'll keep looking for the > solution ... > Again, I think it has to do with magic C-to-Fortran calling conventions for the Intel compiler. Pearu, You will recognize the

#if defined(NO_APPEND_FORTRAN)
#if defined(UPPERCASE_FORTRAN)
#define F_FUNC(f,F) F
#else
#define F_FUNC(f,F) f
#endif
#else
#if defined(UPPERCASE_FORTRAN)
#define F_FUNC(f,F) F##_
#else
#define F_FUNC(f,F) f##_
#endif
#endif

code in the C-wrappers (remember these are not Python wrappers) for the fortran code in cdflib and specfun. -Travis O. From pyzun18 at yahoo.com.au Sat Oct 19 07:11:45 2002 From: pyzun18 at yahoo.com.au (=?iso-8859-1?q?Johnny=20boi?=) Date: Sat, 19 Oct 2002 21:11:45 +1000 (EST) Subject: [SciPy-dev] Failure installing SciPy Message-ID: <20021019111145.1657.qmail@web13107.mail.yahoo.com> Hi all I've installed all the prerequisites for SciPy, (python2.1, Numerical Python, ATLAS, f2py). Then I went on and did a 'python2.1 setup.py install' on SciPy and it gave me all these errors: atlas_info: NOT AVAILABLE linalg/setup_linalg.py:86: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__) blas_info: NOT AVAILABLE linalg/setup_linalg.py:90: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE Traceback (most recent call last): File "setup.py", line 130, in ? install_package() File "setup.py", line 95, in install_package config.extend([get_package_config(x,parent_package)for x in standard_packages]) File "setup.py", line 45, in get_package_config config = mod.configuration(parent) File "linalg/setup_linalg.py", line 93, in configuration raise BlasSrcNotFoundError,BlasSrcNotFoundError.__doc__ scipy_distutils.system_info.BlasSrcNotFoundError: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the scipy_distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. I've done sanity_test for atlas and everything passed. So what could be the problem?? Regards Johnny http://careers.yahoo.com.au - Yahoo! Careers - 1,000's of jobs waiting online for you! From pearu at cens.ioc.ee Sat Oct 19 07:31:17 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 19 Oct 2002 14:31:17 +0300 (EEST) Subject: [SciPy-dev] Re: [SciPy-user] Install failure of SciPy In-Reply-To: Message-ID: Hi Travis, On Fri, 18 Oct 2002, Travis Oliphant wrote: > > > > I hope that after all the trouble, you'll not give up on SciPy, especially > > when I have another bad news about the Intel compiler support. > > > > First, to fix the above problem, insert the following line > > > > #define isnan cephes_isnan > > > > to files > > > > special/cephes/mconf_LE.h > > special/cephes/mconf_BE.h > > > > Should this be changed in the source tree. It looks like other > platforms/compilers have an isnan already defined. Yes. 
At least under linux, isnan is a macro (what about Windows, Sun platforms?), so, I renamed cephes isnan() to cephes_isnan() and use it only when the macro isnan is not defined. > > and re-run the build command that should finish now successfully. > > > > The bad news is that when running scipy tests, it will segfault after > > > > .Testing ncf > > > > This is definitely a special function problem. This is the "first-place" > in the scipy testing that one calls out to a (hand-wrapped) fortran-compiled special function. > I'm sure the problem is with different linking needs, here. I'm not sure that the problem is with linking. Only a few functions actually give segfaults. Others seem to work fine. > The special function module needs a c-function to call so that's why f2py > was not used. You mean that wrapper functions execute additional C commands after returning from C/Fortran function? In principle, f2py can handle this (I have used it in a few places in linalg wrappers) but I'd like to introduce better hooks for this in f2py before others start to use it... > I'm not sure how to get setup to use the right compiler > switches so that the FORTRAN code called in cdf_wrappers.c works on many > platforms. I don't understand the problem. As I said above, it seems that only a few functions cause the trouble with ifc (I haven't tested them all, though). > > in the stats module (I have tested it both on suse and debian with > > Intel 5.0 compiler). All other tests from other modules seem to pass fine. > > > > I am not sure yet what function, even what module exactly, is causing this > > segfault, but it must be either stats or special (as stats uses special). > > I'm sure it's special. I agree. > Try calling special.ncfdtr all by itself and seeing what happens. Here follows some debugging info:

>>> import libwadpy
WAD Enabled
>>> import cephes
>>> cephes.ncfdtr(1,1,1,1)
WAD: Collecting debugging information...
WAD: Segmentation fault.
#0 0x403cfb98 in select_types()

In GDB:

>>> import cephes
>>> cephes.ncfdtr(1,1,1,1)
(no debugging symbols found)...
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1024 (LWP 3214)]
0x4038ab98 in select_types (self=0x0, arg_types=0x3ff00000, data=0x80f3560, function=0x0)
    at Src/ufuncobject.c:235
235     for(j=0; j<self->nin; j++) {

> > And both modules cannot be tested locally (special even doesn't have > > working unittests) > > Unittests for special are being written as we speak (I've got an undergrad > working on it). Great! I have introduced a few simple testing hooks already. I hope they will not get in the way. > > as they import scipy and that makes finding/fixing bugs > > very time consuming and difficult. I also tried to remove the dependencies > > of these modules on scipy > > I don't think stats should ever have dependency on scipy removed (it will > always need special for example). > > Special should be able to depend only on scipy_base. Note that special.orthogonal uses linalg.eig, otherwise it seems to be rather independent. OT: Hmm, I have long had in mind implementing lazy import hooks so that modules will be imported only when some of their functions are actually going to be used. This would reduce scipy import time considerably (sometimes it takes 5-15 seconds to just import scipy) and would make scipy more accessible for simple scripts that use only a few scipy features. The general idea would be the following:

m = lazy_import('m') # lazy equivalent of 'import m'
# m would be a lazy_import class (maybe subclass of module class) instance
# that will be stored also in sys.modules, it contains only python state
# (sys.path, etc.) needed to import m.
m.foo()
# When accessing m attribute for the first time, the following would
# happen:
# 1) m is removed from sys.modules
# 2) 'import m' is executed
# 3) local lazy_import instance `m' is replaced with module `m' instance

> > > > So, unless someone will figure out quickly the source of this segfault, > > the Intel compiler remains unsupported. But I'll keep looking for the > > solution ... > > > > Again, I think it has to do with magic C-to-Fortran calling conventions > for the Intel compiler.
> > Pearu, > > You will recognize the
> > #if defined(NO_APPEND_FORTRAN)
> #if defined(UPPERCASE_FORTRAN)
> #define F_FUNC(f,F) F
> #else
> #define F_FUNC(f,F) f
> #endif
> #else
> #if defined(UPPERCASE_FORTRAN)
> #define F_FUNC(f,F) F##_
> #else
> #define F_FUNC(f,F) f##_
> #endif
> #endif
> > code in the C-wrappers (remember these are not Python wrappers) for the
> fortran code in cdflib and specfun.
I don't understand the relevance of your notes. AFAIK, Intel compiler does not use magic for C-to-Fortran calling more than any other compiler. Pearu From pearu at cens.ioc.ee Sat Oct 19 07:40:21 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 19 Oct 2002 14:40:21 +0300 (EEST) Subject: [SciPy-dev] Failure installing SciPy In-Reply-To: <20021019111145.1657.qmail@web13107.mail.yahoo.com> Message-ID: On Sat, 19 Oct 2002, Johnny boi wrote: > Hi all > > > I've installed all the prerequisites for SciPy, > (python2.1, Numerical Python, ATLAS, f2py). Then I > went on and did a 'python2.1 setup.py install' on > SciPy and it gave me all these errors: > > atlas_info: > NOT AVAILABLE > I've done sanity_test for atlas and everything passed. > So what could be the problem?? How/where did you install atlas? Note that 'make install' in ATLAS does not install the libraries to the system. You have to do that manually, e.g. copy atlas libraries (the files libf77blas.a, libcblas.a, liblapack.a, libatlas.a) to /usr/local/lib/atlas. And make sure that you have applied instructions from http://math-atlas.sourceforge.net/errata.html#completelp before the copy. Without that import scipy will fail. Pearu From pearu at cens.ioc.ee Sun Oct 20 21:08:27 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 21 Oct 2002 04:08:27 +0300 (EEST) Subject: [SciPy-dev] Progress with Intel Fortran Compiler support Message-ID: Hi, It took a whole weekend to get Scipy working with the Intel compiler. Now all scipy tests pass (except test_shapiro) also with ifc. However, a few functions from special.cephes still cause python to crash (see test methods starting with _check). I'll get back to it later when the special module has more tests that actually test the correctness of cephes function results. Currently when cephes functions (built with gcc) return results then I am not always sure that they are correct; in some cases I have even been unable to call these functions without getting domain errors (see test methods starting with __check).
Anyway, guidelines on how to build scipy with ifc can be found in INSTALL.txt. Unfortunately, the building process is not as straightforward as one would wish: 1) there is binary incompatibility between ifc and g77 generated codes so that ATLAS and LAPACK libraries *must* be compiled with ifc. 2) in some cases ifc optimization is too good so that LAPACK functions ?lamch fail for that; see INSTALL.txt for a workaround. Finally, Intel Fortran Compiler 5.0 is not supported because the scipy build fails with an internal compiler error when debugging flags are enabled. Regards, Pearu From jeffrey.b.layton at lmco.com Mon Oct 21 06:21:18 2002 From: jeffrey.b.layton at lmco.com (Jeff Layton) Date: Mon, 21 Oct 2002 06:21:18 -0400 Subject: [SciPy-dev] Progress with Intel Fortran Compiler support References: Message-ID: <3DB3D51E.A0E08F3C@lmco.com> Pearu Peterson wrote: > Hi, > > It took a whole weekend to get Scipy working with the Intel compiler. > Now all scipy tests pass (except test_shapiro) also with ifc. > > However, a few functions from special.cephes still cause python to > crash (see test methods starting with _check). > I'll get back to it later when the special module has more tests that > actually test the correctness of cephes function results. Currently when > cephes functions (built with gcc) return results then I am not always > sure that they are correct; in some cases I have even been unable to call > these functions without getting domain errors (see test methods starting > with __check). > > Anyway, guidelines on how to build scipy with ifc can be found in > INSTALL.txt. Unfortunately, the building process is not as straightforward > as one would wish: > 1) there is binary incompatibility between ifc and g77 generated codes so > that ATLAS and LAPACK libraries *must* be compiled with ifc. > 2) in some cases ifc optimization is too good so that LAPACK > functions ?lamch fail for that; see INSTALL.txt for a workaround.
Pearu, I thought I would chip in with my suggestions for ATLAS and LAPACK since I've built and tested those with ifc/icc (version 6). I would suggest building ATLAS with icc (the Intel C compiler). You can build it with gcc and link it to ifc code, but it makes life easier to use icc. I'm not sure of the speed difference between icc and gcc on ATLAS (one person said icc was slower, one said it was faster). Here are the options I used with icc on ATLAS: -O3 -unroll -align -tpp6 -axK -Xa Typically I modify the makefile that ATLAS creates to make sure it is correct. As for LAPACK, you have to use a couple of options to make sure the precision is correct. In particular, '-mp -pc64 -fp_port'. There is a speed loss compared to not using the flags, but I'd rather have correct answers today :) The flags I used for ifc are: -O2 -tpp6 -prefetch -align -unroll -Vaxlib -mp -pc64 -fp_port -xK -axK BTW, these flags are for an Athlon or a PIII. Also, don't forget to use the '-Vaxlib' flag during linking. Finally, after you build the LAPACK library it was recommended to me to go back and recompile xerbla.f, slamch.f, and dlamch.f using no optimization ('-O0'). Then relink the libs (I erase the final libs, recompile these 3 routines, and then run 'make libs' again). So far this process allows ATLAS to pass all of the tests and for LAPACK to pass all the tests as well. Thanks for all of your hard work this weekend! Jeff > > > Finally, Intel Fortran Compiler 5.0 is not supported because scipy build > fails with internal compiler error when debugging flags are enabled. > > Regards, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- Jeff Layton Senior Engineer Lockheed-Martin Aeronautical Company - Marietta Aerodynamics & CFD "Is it possible to overclock a cattle prod?" - Irv Mullins This email may contain confidential information. 
If you have received this email in error, please delete it immediately, and inform me of the mistake by return email. Any form of reproduction, or further dissemination of this email is strictly prohibited. Also, please note that opinions expressed in this email are those of the author, and are not necessarily those of the Lockheed-Martin Corporation. From pearu at cens.ioc.ee Mon Oct 21 08:42:25 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 21 Oct 2002 15:42:25 +0300 (EEST) Subject: [SciPy-dev] Progress with Intel Fortran Compiler support In-Reply-To: <3DB3D51E.A0E08F3C@lmco.com> Message-ID: On Mon, 21 Oct 2002, Jeff Layton wrote: > I thought I would chip in with my sugestions for ATLAS and LAPACK > since I've built and tested those with ifc/icc (version 6). Your suggestions are welcome! > I would suggest building ATLAS with icc (the Intel C compiler). You > can build it with gcc and link it to ifc code, but it makes life > easier to use icc. Hmm, actually when it comes to using ATLAS in Scipy, I saw no downside to using gcc; in fact, I didn't even need to bother with this matter (I haven't installed icc either). > I'm not sure of the speed difference between icc and gcc on ATLAS (one > person said icc was slower, one said it was faster). Yes, both persons might be right: ATLAS is basically developed with gcc and optimized for that. On the other hand, icc, at least in principle, should be faster on Intel processors, as they know better... So, I would not recommend either of them until seeing some benchmark results. > Here are the ootions I used with icc on ATLAS: > > -O3 -unroll -align -tpp6 -axK -Xa > > Typically I modify the makefile that ATLAS creates to make sure it > is correct. I think -align is set by default. -O3 -unroll are good. Other options are CPU specific, hence the user's choice. > As for LAPACK, you have to use a couple of options to make sure the > precision is correct. In particular, '-mp -pc64 -fp_port'. -pc64 is the default. 
Though, to get identical results with gcc, one should use -pc80 when it matters. -mp -fp_port I guess using these flags depends on applications. So, I would let users decide about these flags. So far, I haven't seen any anomalies if these flags are missing. > There is a speed > lose compared to not using the flags, but I'd rather have correct answers > today :) The flags I used for ifc are : > > -O2 -tpp6 -prefetch -align -unroll -Vaxlib -mp -pc64 -fp_port -xK -axK > > BTW, these flags are for an Athlon or a PIII. Also, don't forget to use > the '-Vaxlib' flag during linking. Hmm, is '-Vaxlib' really necessary? Linkage works fine here without -Vaxlib when using ifc 6 and gcc 3.1.1. ifc -help says: -Vaxlib link with portability library I guess it doesn't matter when building locally. I don't think that we should distribute scipy binaries compiled with Intel compilers, at least not in the near future. Also, there might be some legal matters with respect to distributing Intel binaries. > Finally, after you build the LAPACK library it was recommended to > me to go back and recompile xerbla.f, slamch.f, and dlamch.f using no > optimization ('-O0'). Recompiling these files (didn't know that about xerbla.f before) with -O0 is absolutely required. > Then relink the libs (I erase the final libs, recompile > these 3 routines, and then run 'make libs' again). > So far this process allows ATLAS to pass all of the tests and for > LAPACK to pass all the tests as well. > Thanks for all of your hard work this weekend! Thanks for your notes! I'll update INSTALL.txt accordingly. Pearu From jeffrey.b.layton at lmco.com Mon Oct 21 09:39:33 2002 From: jeffrey.b.layton at lmco.com (Jeff Layton) Date: Mon, 21 Oct 2002 09:39:33 -0400 Subject: [SciPy-dev] Progress with Intel Fortran Compiler support References: Message-ID: <3DB40395.984D532D@lmco.com> > > I'm not sure of the speed difference between icc and gcc on ATLAS (one > > person said icc was slower, one said it was faster). 
> > Yes, both persons might be right: ATLAS is basically developed with gcc > and optimized for that. On the other hand, icc, at least in principle, > should be faster on intel processors as they know better... > So, I would not recommend either of them until seeing some benchmark > results. It probably depends on your specific application (one was GEMM and one was GETRF). > > > > Here are the ootions I used with icc on ATLAS: > > > > -O3 -unroll -align -tpp6 -axK -Xa > > > > Typically I modify the makefile that ATLAS creates to make sure it > > is correct. > > I think, -align is set default. -O3 -unroll are good. Other options are > CPU specific, hence users choice. > > > As for LAPACK, you have to use a couple of options to make sure the > > precision is correct. In particular, '-mp -pc64 -fp_port'. > > -pc64 is default. Though, to get identical results with gccc, one should > use -pc80 when it matters. > > -mp -fp_port > I guess using these flags depends on applications. So, I would let users > to deside about these flags. Sofar, I haven't seen any anomalies if these > flags are missing. I'm not totally sure about LAPACK (if these options are required). However, for ScaLAPACK I had to use these options or the tests would not pass. Another person also mentioned these options (I think it was Tim Prince on the Fortran newsgroup - I'll have to google for his posting again). > > > > There is a speed > > lose compared to not using the flags, but I'd rather have correct answers > > today :) The flags I used for ifc are : > > > > -O2 -tpp6 -prefetch -align -unroll -Vaxlib -mp -pc64 -fp_port -xK -axK > > > > BTW, these flags are for an Athlon or a PIII. Also, don't forget to use > > the '-Vaxlib' flag during linking. > > Hmm, is '-Vaxlib' really necessary? Linkage works fine here without > -Vaxlib when using ifc 6 and gcc 3.1.1. ifc -help says: > > -Vaxlib link with portability library You're right. It may have been just for ScaLAPACK and not LAPACK or ATLAS. 
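[Editor's sketch] The "-O0 relink" procedure Jeff describes earlier in this thread can be written out roughly as follows. This is a hypothetical build-recipe fragment, not taken from INSTALL.txt: file locations assume a LAPACK 3.0-style source tree (slamch.f/dlamch.f under INSTALL/, xerbla.f under SRC/), and the library name and make target vary with make.inc.

```shell
# Hypothetical sketch of the rebuild step: compile optimized, then
# recompile the precision-sensitive routines without optimization
# and relink so the -O0 objects end up in the library.
cd LAPACK
make lib                                   # initial optimized build

ifc -O0 -c INSTALL/slamch.f -o INSTALL/slamch.o
ifc -O0 -c INSTALL/dlamch.f -o INSTALL/dlamch.o
ifc -O0 -c SRC/xerbla.f -o SRC/xerbla.o

# Erase the final library and rerun the lib target so the freshly
# compiled -O0 objects are archived into it.
rm -f lapack_LINUX.a
make lib
```

The point of erasing the library first is that ar-based makefiles will otherwise consider it up to date and skip re-archiving the replaced objects.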
> > > I guess it doesn't matter when building locally. I don't think that we > should distribute scipy binaries compiled with Intel compilers, at > least not in near future. Also there might be some legal matters with > respect distributing Intel binaries. Most definitely. Thanks! Jeff -- Jeff Layton Senior Engineer Lockheed-Martin Aeronautical Company - Marietta Aerodynamic & CFD Phone: 770-494-8341 Fax: 770-494-3055 email: jeffrey.b.layton at lmco.com "Is it possible to overclock a cattle prod?" - Irv Mullins From oliphant at ee.byu.edu Sat Oct 26 00:43:37 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 25 Oct 2002 22:43:37 -0600 (MDT) Subject: [SciPy-dev] Re: [Scipy-cvs] world/scipy/special/tests test_cephes.py,1.1,1.2 In-Reply-To: <20021019105308.61A6C3EACE@www.scipy.com> Message-ID: > Update of /home/cvsroot/world/scipy/special/tests > In directory shaft:/tmp/cvs-serv14246/tests > > Modified Files: > test_cephes.py > Log Message: > (1) Introduced testing hooks and few simple test calls. Test methods starting with _check_.. give segfault with ifc but pass with gcc. (2) Renamed isnan to cephes_isnan and using platform isnan macro if available. Now builds with ifc. (3) Made special locally buildable and testable (temporarily, at least, as orthogonal.py uses linalg.eig) > I have a student working on a large set of test cases for special, so I would not worry too much about testing this right now. 
-Travis From h.callow at elec.canterbury.ac.nz Sat Oct 26 03:00:29 2002 From: h.callow at elec.canterbury.ac.nz (Hayden John Callow) Date: Sat, 26 Oct 2002 20:00:29 +1300 Subject: [SciPy-dev] possible pilutils.py bug + patch Message-ID: <1035615631.16723.15.camel@donner.study.home> With a slightly old scipy CVS, scipy.imshow() segfaults with complex arrays (I presume that complex arrays are not a desired input). I have included a patch to trap complex array arguments. System info --> RH8.0 + python 2.2.1 >>> scipy.__cvs_version__.cvs_version (1, 144, 1520, 4373) >>> Image.VERSION '1.1.2' >>> sys.version '2.2.1 (#1, Aug 30 2002, 12:15:30) \n[GCC 3.2 20020822 (Red Hat Linux Rawhide 3.2-4)]' cheers Hayden -------------- next part -------------- --- pilutil.py 2002-10-26 19:38:27.000000000 +1300 +++ pilutil_new.py 2002-10-26 19:40:24.000000000 +1300 @@ -100,6 +100,10 @@ The Numeric array must be either 2 dimensional or 3 dimensional. """ data = Numeric.asarray(arr) + + real_img = data.typecode() not in Numeric.typecodes['Complex'] + assert real_img, "Not a suitable array: can't use complex images" + shape = list(data.shape) valid = len(shape)==2 or ((len(shape)==3) and \ ((3 in shape) or (4 in shape))) From pearu at cens.ioc.ee Sat Oct 26 14:01:58 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 26 Oct 2002 21:01:58 +0300 (EEST) Subject: [SciPy-dev] Replacing scipy.fftpack with fftpack2 Message-ID: Greetings scipy-devels! The latest version of fftpack2 is available at http://cens.ioc.ee/~pearu/misc/fftpack2-0.2.tar.gz It includes the following changes: * Renamed freqshift to fftshift * Renamed dftfreq to fftfreq * For even signal length, fftshift(f)[0] corresponds to Nyquist mode (like in Matlab). I believe that fftpack2 is ready to replace scipy.fftpack and I would like to commit fftpack2 to Scipy CVS by the end of next week. I am actually ready to do that now, but the week is provided for review. Let me know if you need more time for review. 
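[Editor's note] The even-length Nyquist convention in the change list above can be seen with modern NumPy's fft helpers, used here purely for illustration (the assumption being that fftpack2's fftshift/fftfreq follow the same Matlab-style convention Pearu describes):

```python
import numpy as np

# Frequency grid for an even signal length (n=4, unit sample spacing):
# 0, 0.25, -0.5, -0.25 in FFT (unshifted) order.
f = np.fft.fftfreq(4)

# After fftshift, frequencies are in increasing order and element 0
# is the Nyquist mode (-0.5 cycles/sample) for even n.
shifted = np.fft.fftshift(f)
assert shifted[0] == -0.5
print(shifted)
```

For odd signal lengths there is no Nyquist bin, so the convention only matters in the even case discussed here.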
After the commit, Scipy will require F2PY version 2.23.190-1367 or newer. Regards, Pearu From lluang at yahoo.com Tue Oct 29 11:58:04 2002 From: lluang at yahoo.com (Louis Luangkesorn) Date: Tue, 29 Oct 2002 08:58:04 -0800 (PST) Subject: [SciPy-dev] Testing stats.py - Error in stats.kurtosis [ATTN: Travis O.] In-Reply-To: <20021028180001.8833.71195.Mailman@shaft> Message-ID: <20021029165804.36834.qmail@web12102.mail.yahoo.com> [ATTN: Travis Oliphant] I've written some more tests for the stats module. Since the stats.py module groups functions in the introductory pydoc comments, I organized the tests I have written into classes based on these groupings. I don't know of any standard problem sets, so I used a basic test set [1,2,3,4] as well as a test set from the Mathworks web site (Matlab documentation). The return value for each test was determined by using R v1.5.1 on Windows, and the R input used was copied into the comments for each test. I have one test that is failing, kurtosis. For the two test cases, I get using R [testcase] Kurtosis = 1.64 [Mathworks] Kurtosis = 2.1658856803 [Mathworks has 2.1615] http://www.mathworks.com/access/helpdesk/help/toolbox/stats/kurtosis.shtml Using stats.py I get [testcase] Kurtosis = -1.36 [Mathworks] Kurtosis = -0.834114319703 Attached are the additional test classes. The classes test_moments and test_variability can be added directly before the function test_suite(level=1) in test_stats.py. The lines in test_suite in the attached file should be added to the current test_suite in test_stats.py. Let me know if this works. I almost put it directly in CVS, but I think Travis O. has someone working on the stats module so I thought that it should be added from there. Louis ===== lluang at northwestern.edu http://pubweb.nwu.edu/~kll560 GPG Key:0xF3E2D362 Y!M, AOL, jabber.com: lluang Whatsoever things are true, ... honest, ... just, ... pure, ... lovely, ... 
of good report; if there be any virtue, and if there be any praise, think on these things. - motto - Northwestern University __________________________________________________ Do you Yahoo!? HotJobs - Search new jobs daily now http://hotjobs.yahoo.com/ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: more_test_stats.py URL: From oliphant at ee.byu.edu Tue Oct 29 13:13:29 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Oct 2002 11:13:29 -0700 (MST) Subject: [SciPy-dev] Testing stats.py - Error in stats.kurtosis [ATTN: Travis O.] In-Reply-To: <20021029165804.36834.qmail@web12102.mail.yahoo.com> Message-ID: > > I've written some more tests for the stats module. Since the > stats.py module groups functions in the introductory pydoc > comments, I organized the tests I have written into classes > based on these groupings. I don't know of any standard problem > sets, so I used a basic test set [1,2,3,4] as well as a test set > from the Mathworks web site (Matlab documentation). The return > value for each test was determined by using R v1.5.1 on Windows, > and the R input used was copied into the comments for each test. > Great. I'll include these tests. > I have one test that is failing, kurtosis. For the two test > cases, I get using R > [testcase] Kurtosis = 1.64 > [Mathworks] Kurtosis = 2.1658856803 [Mathworks has 2.1615] > > http://www.mathworks.com/access/helpdesk/help/toolbox/stats/kurtosis.shtml > > Using stats.py I get > [testcase] Kurtosis = -1.36 > [Mathworks] Kurtosis = -0.834114319703 > This looks like the difference of 3.0 between the Fisher and Pearson definitions of kurtosis. The default for stats.kurtosis is to use Fisher's definition, which subtracts 3.0 from the ratio of the fourth central moment to the square of the variance. Add fisher=0 to the kurtosis command to get Pearson's definition. Fisher's definition means that a normal will have a kurtosis of 0.0, which is a useful guide. 
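[Editor's note] The reported numbers are consistent with this explanation. For the [1, 2, 3, 4] test case, both definitions can be checked by hand from the population moments; this plain-Python sketch is independent of stats.py:

```python
data = [1.0, 2.0, 3.0, 4.0]
n = len(data)
mean = sum(data) / n  # 2.5

# Population (biased) central moments.
m2 = sum((x - mean) ** 2 for x in data) / n   # variance: 1.25
m4 = sum((x - mean) ** 4 for x in data) / n   # fourth moment: 2.5625

pearson = m4 / m2 ** 2     # Pearson kurtosis: 1.64, the value from R
fisher = pearson - 3.0     # Fisher ("excess") kurtosis: -1.36, stats.py's value
print(pearson, fisher)
```

So R and stats.py agree on the underlying moments; the two results differ only by the 3.0 offset between the conventions.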
> Let me know if this works. I almost put it directly in CVS, but > I think Travis O. has someone working on the stats module so I > thought that it should be added from there. No, I have someone working on special. So, these are welcome tests. -Travis O. From oliphant at ee.byu.edu Thu Oct 31 16:45:06 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 31 Oct 2002 14:45:06 -0700 (MST) Subject: [SciPy-dev] Testing stats.py - Error in stats.kurtosis In-Reply-To: <20021031154658.48409.qmail@web12105.mail.yahoo.com> Message-ID: > Travis > > 1. check_kurtosis changed > 2. check_2d error > Thanks for the help. I'm including these tests in the CVS tree, today. But, I first have to correct errors I'm getting with them... Regarding your questions. > > where b2 = array((1.0,2.0,2.64575131106)) > When I enter stats.std(a) I get: > array([ 4.52769257, 2.12132034, 7.21110255]) I don't get these answers regardless of what I put for the second argument. What is a for you? > When I enter stats.std(a,1) I get: > array([ 1. , 2. , 2.64575131]) I do get this result, though. > Finaly, stats.std(ravel(a)) gives me: > 3.5707142142714252 I get this too. > > So, > A) Is the intent of the check_2d to find the population > standard deviation of each row in the array, in which case > check_2d should be changed from > assert_array_almost_equal(stats.std(a),b2,11) My results don't agree with yours. When I say stats.std(a) it should find the standard deviation along the last dimension of the array (axis == 1) or (axis == -1). So, it should give equivalent answers to stats.std(a,1). I don't know why your version is not doing this. Mine is. Are you sure you are getting the right stats module from the right place? > > B) When no axis parameter is entered into stats.std() [and by > extention stats.var() ] When no axis is given, the default is to use the last dimension of the array. 
There is still some debate on what the proper default behavior should be, but currently all of the stats module (and all of scipy) should behave like this (and does as far as I know). The None argument to axis will ravel the array prior to applying the std operation. > > I'm not much of a stats expert but I'll help where I can once > we've decided what we are trying to do. Thanks for your help. It is appreciated. -Travis From lluang at yahoo.com Thu Oct 31 17:07:12 2002 From: lluang at yahoo.com (Louis Luangkesorn) Date: Thu, 31 Oct 2002 14:07:12 -0800 (PST) Subject: [SciPy-dev] assert_approx_equal in testing.py [Was: Testing stats.py] In-Reply-To: Message-ID: <20021031220712.83075.qmail@web12101.mail.yahoo.com> --- Travis Oliphant wrote: > > Travis > > > > 1. check_kurtosis changed > > 2. check_2d error > > > Thanks for the help. I'm including these tests in the CVS > tree, today. > > But, I first have to correct errors I'm getting with them... > Oops, I think this may be my fault. There is a problem in the unit-testing functions. The attached def assert_approx_equal() belongs in \scipy_base\testing.py to replace the version that was there previously. The previous version does not handle 0 or negative numbers well (I say that as the person who wrote the original version. Note, this version of the function has not been tested in any systematic way.) ===== lluang at northwestern.edu http://pubweb.nwu.edu/~kll560 Y!M, AOL, jabber.com: lluang GPG Key:0xF3E2D362 Whatsoever things are true, ... honest, ... just, ... pure, ... lovely, ... of good report; if there be any virtue, and if there be any praise, think on these things. - motto - Northwestern University __________________________________________________ Do you Yahoo!? 
HotJobs - Search new jobs daily now http://hotjobs.yahoo.com/ From lluang at yahoo.com Thu Oct 31 17:08:05 2002 From: lluang at yahoo.com (Louis Luangkesorn) Date: Thu, 31 Oct 2002 14:08:05 -0800 (PST) Subject: [SciPy-dev] assert_approx_equal in testing.py [Was: Testing stats.py] [with attachment] In-Reply-To: Message-ID: <20021031220805.78748.qmail@web12102.mail.yahoo.com> --- Travis Oliphant wrote: > > Travis > > > > 1. check_kurtosis changed > > 2. check_2d error > > > Thanks for the help. I'm including these tests in the CVS > tree, today. > > But, I first have to correct errors I'm getting with them... > Oops, I think this may be my fault. There is a problem in the unit-testing functions. The attached def assert_approx_equal() belongs in \scipy_base\testing.py to replace the version that was there previously. The previous version does not handle 0 or negative numbers well (I say that as the person who wrote the original version. Note, this version of the function has not been tested in any systematic way.) ===== lluang at northwestern.edu http://pubweb.nwu.edu/~kll560 Y!M, AOL, jabber.com: lluang GPG Key:0xF3E2D362 Whatsoever things are true, ... honest, ... just, ... pure, ... lovely, ... of good report; if there be any virtue, and if there be any praise, think on these things. - motto - Northwestern University __________________________________________________ Do you Yahoo!? HotJobs - Search new jobs daily now http://hotjobs.yahoo.com/ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: more_testing.py URL: From pearu at cens.ioc.ee Thu Oct 31 17:19:56 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 1 Nov 2002 00:19:56 +0200 (EET) Subject: [SciPy-dev] assert_approx_equal in testing.py [Was: Testing stats.py] [with attachment] In-Reply-To: <20021031220805.78748.qmail@web12102.mail.yahoo.com> Message-ID: On Thu, 31 Oct 2002, Louis Luangkesorn wrote: > --- Travis Oliphant wrote: > > > Travis > > > > > > 1. check_kurtosis changed > > > 2. check_2d error > > > > > Thanks for the help. I'm including these tests in the CVS > > tree, today. > > > > But, I first have to correct errors I'm getting with them... > > > Oops, I think this may be my fault. Their is a problem in the > unittesting functions. The attached def assert_approx_equal() > belongs in \scipy_base\testing.py to replace the version that > was there previously. There is no scipy_base/testing.py anymore. testing.py is moved to scipy_test module. > The previous version does not handle 0 or negative numbers well (I say > that as the person who wrote the original version. Note, this version > of the function has not been tested in any systematic way.) scipy_test/testing.py already contains a fix for negative numbers. Pearu
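[Editor's note] For reference, a significant-digit comparison of the kind under discussion might look like the following sketch. This is hypothetical code, not the actual scipy_test/testing.py implementation; the explicit zero branch and the scaling by the magnitude of `desired` are what make zero and negative inputs behave sensibly:

```python
import math

def assert_approx_equal(actual, desired, significant=7):
    """Assert that two scalars agree to `significant` significant digits.

    Hypothetical sketch: scales both values by the magnitude of
    `desired`, so negative numbers compare correctly, and falls back
    to an absolute tolerance when `desired` is zero (where
    "significant digits" is otherwise undefined).
    """
    if desired == 0:
        assert abs(actual) < 10.0 ** (-significant), \
            "%r != %r to %d significant digits" % (actual, desired, significant)
        return
    # Power of ten of the leading digit of `desired`.
    scale = 10.0 ** math.floor(math.log10(abs(desired)))
    assert abs(actual - desired) / scale < 0.5 * 10.0 ** (1 - significant), \
        "%r != %r to %d significant digits" % (actual, desired, significant)

assert_approx_equal(-1.0000001, -1.0)   # agrees to 7 significant digits
assert_approx_equal(0.0, 0.0)           # zero no longer divides by zero
```

A comparison that naively divides by `desired` raises ZeroDivisionError at 0 and can flip sign for negative values, which matches the failure modes Louis describes.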