From kowald at molgen.mpg.de Tue Jan 4 05:22:03 2005
From: kowald at molgen.mpg.de (Axel Kowald)
Date: Tue, 04 Jan 2005 11:22:03 +0100
Subject: [SciPy-user] Using weave with Activestate python
Message-ID: <41DA6E4B.9060700@molgen.mpg.de>

Hello everybody,

I installed the latest scipy on my winXP machine running ActiveState Python 2.3.4. I'd like to use weave, but I've run into some problems. Is it correct that to use weave I need the same compiler that was used to compile the Python version I'm using? It seems ActiveState Python 2.3.4 was built using MSVC 6 (that's what the weave error message tells me) and I "only" have Visual Studio .NET :-(

Does this really mean I have to get MSVC 6 from somewhere to use weave with ActiveState Python 2.3.4? Why can't I use Visual Studio .NET or maybe gcc?

Many thanks,

   Axel

From zunzun at zunzun.com Tue Jan 4 05:44:44 2005
From: zunzun at zunzun.com (zunzun at zunzun.com)
Date: Tue, 4 Jan 2005 05:44:44 -0500
Subject: [SciPy-user] Using weave with Activestate python
In-Reply-To: <41DA6E4B.9060700@molgen.mpg.de>
References: <41DA6E4B.9060700@molgen.mpg.de>
Message-ID: <20050104104444.GB27394@localhost.members.linode.com>

On Tue, Jan 04, 2005 at 11:22:03AM +0100, Axel Kowald wrote:
> Is it correct that to use weave I need the same compiler that was used
> to compile the python version I'm using?

Look in the Weave User's Guide at

http://www.scipy.org/documentation/weave/weaveusersguide.html

for the section labeled "inline() Arguments"; the argument you'll most likely be interested in is called compiler.

     James Phillips
     http://zunzun.com

From yichunwe at usc.edu Fri Jan 7 16:11:28 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Fri, 07 Jan 2005 13:11:28 -0800
Subject: [SciPy-user] io.loadmat: broken?
Message-ID: <41DEFB00.9020103@usc.edu>

Hi,

Sorry if you received multiple messages about this. I posted this last month for help.

I was fine with io.loadmat about a month ago; it persuaded me to switch to scipy. However, when I updated my Python to 2.3.4 and scipy to 0.3.2, the following happened:

>>> import scipy.io
>>> scipy.io.loadmat("test")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 692, in loadmat
    thisdict = _loadv5(fid,basename)
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 631, in _loadv5
    el, varname = _get_element(fid)
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 619, in _get_element
    el, name = _parse_mimatrix(fid,numbytes)
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 510, in _parse_mimatrix
    result = squeeze(transpose(reshape(result,dims[::-1])))
TypeError: Array can not be safely cast to required type

Scipy 0.3.2_266.4242. Numeric tried with 23.5 and 23.6, with the same problem. Python 2.3.4 and 2.3.2 from ActivePython. The file can be loaded using the Enthought Python, which is 2.3.3 with scipy 0.3.

I am wondering whether io.loadmat is broken in the new version of scipy, or whether I did something wrong with io.loadmat. Thanks for any hint...

- yichun

From yichunwe at usc.edu Fri Jan 7 18:56:32 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Fri, 07 Jan 2005 15:56:32 -0800
Subject: [SciPy-user] Help on speed of signal.convolve
Message-ID: <41DF21B0.70906@usc.edu>

Hi,

I'd like to convolve a (64,64,41) array (the kernel) with a (64,64,1800) array with mode='valid'. What would be the fastest method in scipy? Here I tried with signal.convolve and it takes >400 s to solve.
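An aside on the weave question at the top of this digest: the compiler choice James points to is just an argument to inline(). Below is a minimal sketch of how it is passed, assuming scipy.weave is installed and that the named compiler ('msvc', or 'gcc' via a MinGW install) actually exists on the machine; the snippet itself is illustrative, not code from the thread:

    # Selecting weave's C/C++ compiler explicitly via inline()'s
    # `compiler` argument (see the "inline() Arguments" section of the
    # Weave User's Guide).  Assumes a working gcc (e.g. MinGW) install;
    # use compiler='msvc' if MSVC is available instead.
    from scipy import weave

    a = 3
    code = "return_val = a * 2;"   # trivial C snippet using the Python int `a`
    result = weave.inline(code, ['a'], compiler='gcc')
    print result   # 6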
a.shape is (64,64,41), b.shape is (64,64,1800):

res = signal.convolve(a, b, mode='same')

It took around 425 s to solve. I have the file dumped from profile; if you want to have a look I can attach it. The 'valid' and 'full' modes are still running at the time I am writing. I am using the Enthought Python with scipy 0.3. Is this performance normal on a P-IV 1.8G CPU? How can I improve the performance of operations of this kind?

Any hint will be appreciated!

- yichun

From vroudnev at ksu.edu Mon Jan 10 02:33:44 2005
From: vroudnev at ksu.edu (Vladimir A. Roudnev)
Date: Mon, 10 Jan 2005 01:33:44 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
Message-ID: <41E22FD8.2040104@ksu.edu>

Dear All,

I do not know whether the problem I've found is already known or not, but I think it does no harm if I report it.

As is known, for real vectors a matrix-vector multiplication and a dot (aka scalar, aka inner) product can be treated as the same operation. But for complex vectors this is not the case: a scalar product in vector spaces over complex numbers must satisfy the condition

    innerproduct(z1,z2) == conjugate(innerproduct(z2,z1))

The Scipy implementation, however, does not satisfy this property, which can lead to serious complications when adapting real vector algorithms to complex arithmetic. In particular, the existing implementation breaks the complex vector space metric:

>>> import scipy
>>> a=scipy.array(range(3),'D')
>>> print a
[ 0.+0.j  1.+0.j  2.+0.j]
>>> scipy.innerproduct(a,a)
(5+0j)
>>> import math
>>> a=a*scipy.exp(math.pi*1.0j/4)    # multiply by a unit scalar
>>> print a
[ 0.        +0.j          0.70710678+0.70710678j  1.41421356+1.41421356j]
>>> scipy.innerproduct(a,a)    # wrong scalar product, PURE IMAGINARY RESULT
(7.850978981510659e-16+5j)
>>> scipy.innerproduct(scipy.conjugate(a),a)    # the correct result, real
(5+0j)

I guess the problem is inherited from Numeric.

Regards,
Vladimir Roudnev

From rkern at ucsd.edu Mon Jan 10 03:03:19 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Mon, 10 Jan 2005 00:03:19 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E22FD8.2040104@ksu.edu>
References: <41E22FD8.2040104@ksu.edu>
Message-ID: <41E236C7.8030203@ucsd.edu>

Vladimir A. Roudnev wrote:
> As is known, for real vectors a matrix-vector multiplication and a dot
> (aka scalar, aka inner) product can be treated as the same operation.
> But for complex vectors this is not the case: a scalar product in vector
> spaces over complex numbers must satisfy the condition
>
>     innerproduct(z1,z2) == conjugate(innerproduct(z2,z1))
>
> [...]
>
> I guess the problem is inherited from Numeric.
Yes, it is indeed inherited from Numeric.

If you are doing serious linear algebra with complex matrices (and not everyone who uses innerproduct on complex arrays does), I suggest you write a function that does the appropriate conjugation.

def cdot(a, b):
    return dot(conjugate(a), b)

--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From vroudnev at ksu.edu Mon Jan 10 15:52:16 2005
From: vroudnev at ksu.edu (Vladimir Roudnev)
Date: Mon, 10 Jan 2005 14:52:16 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E236C7.8030203@ucsd.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu>
Message-ID: <41E2EB00.70206@ksu.edu>

Robert Kern wrote:
>> innerproduct(z1,z2) == conjugate(innerproduct(z2,z1))
>> The Scipy implementation, however, does not satisfy this property,
>> which can lead to serious complications when adapting real vector
>> algorithms to complex arithmetic. In particular, the existing
>> implementation breaks the complex vector space metric [...]
>
> [...] If you are doing serious linear algebra with complex matrices
> (and not everyone who uses innerproduct on complex arrays does), I
> suggest you write a function that does the appropriate conjugation.
>
> def cdot(a, b):
>     return dot(conjugate(a), b)

Indeed, one can write the function, but my message was that the misimplemented scalar product is a major Scipy library DESIGN ISSUE. I am not even talking about the performance. The structure of a good program must reflect the structure of the problem it solves; this is the basic principle of structured programming. It is the structure of the Scipy linear algebra library that does not reflect the structure of linear algebraic problems in general, and writing workaround functions at the user level does not fix it. A real vector space code should work flawlessly with complex vector spaces when the algorithm is applicable; isn't that in the spirit of Python programming? Otherwise we end up programming good old Fortran 77. (Or Fortran 777, if you wish ;) )

BW,
VR

From oliphant at ee.byu.edu Tue Jan 11 04:30:50 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Tue, 11 Jan 2005 02:30:50 -0700
Subject: [SciPy-user] io.loadmat: broken?
In-Reply-To: <41DEFB00.9020103@usc.edu>
References: <41DEFB00.9020103@usc.edu>
Message-ID: <41E39CCA.6070003@ee.byu.edu>

Yichun Wei wrote:
> Hi,
>
> Sorry if you received multiple messages about this. I posted this last
> month for help.
>
> I was fine with io.loadmat about a month ago; it persuaded me to switch
> to scipy. However, when I updated my Python to 2.3.4 and scipy to 0.3.2,
> the following happened:
>
>>>> import scipy.io
>>>> scipy.io.loadmat("test")

Could you attach the test file so that I can debug the problem?

Thanks,

-Travis

From rkern at ucsd.edu Tue Jan 11 04:56:57 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Tue, 11 Jan 2005 01:56:57 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E2EB00.70206@ksu.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu>
Message-ID: <41E3A2E9.6040205@ucsd.edu>

Vladimir Roudnev wrote:
> Robert Kern wrote:
>>> innerproduct(z1,z2) == conjugate(innerproduct(z2,z1))
>>> The Scipy implementation, however, does not satisfy this property,
>>> which can lead to serious complications when adapting real vector
>>> algorithms to complex arithmetic.
>>> In particular, the existing
>>> implementation breaks the complex vector space metric [...]
>>
>> [...] If you are doing serious linear algebra with complex matrices
>> (and not everyone who uses innerproduct on complex arrays does), I
>> suggest you write a function that does the appropriate conjugation.
>>
>> def cdot(a, b):
>>     return dot(conjugate(a), b)
>
> Indeed, one can write the function, but my message was that the
> misimplemented scalar product is a major Scipy library DESIGN ISSUE. [...]

And I was trying to point out that the current behaviour is a valid design decision for Numeric. Not everyone using Numeric with dot() and complex numbers is doing linear algebra. If the function were scipy.linalg.dot(), I'd agree with you, but it's not; it's Numeric.dot(). The name is, indeed, misleading, but it's not going to change now for backwards-compatibility reasons. I'd point you to the original discussions on the matrix-sig for the original reasoning, but it seems the archives are down.

Numeric is not just for linear algebra, and so its functions reflect that fact.

I would vote against changing the implementation of dot() to apply the conjugate(); however, I'd support adding a function to scipy.linalg that does do the conjugation that is appropriate for linear algebra.

--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From yichunwe at usc.edu Tue Jan 11 18:09:30 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Tue, 11 Jan 2005 15:09:30 -0800
Subject: [SciPy-user] Re: io.loadmat: broken?
Message-ID: <41E45CAA.5010709@usc.edu>

Here is the file (I should have said that it is a .mat file saved using the -v6 option of Matlab 7):

>> a = 1; save -v6 a.mat a
>> clear
>> load a.mat
>> a
a =
     1

The Enthought Python with Scipy 0.3 loads this .mat file fine; however, that is not the case with Scipy 0.3.2:

>>> a = io.loadmat('a')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 692, in loadmat
    thisdict = _loadv5(fid,basename)
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 631, in _loadv5
    el, varname = _get_element(fid)
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 619, in _get_element
    el, name = _parse_mimatrix(fid,numbytes)
  File "C:\Python23\Lib\site-packages\scipy\io\mio.py", line 510, in _parse_mimatrix
    result = squeeze(transpose(reshape(result,dims[::-1])))
TypeError: Array can not be safely cast to required type

>>> import scipy
>>> scipy.__version__
'0.3.2_280.4176'
>>> import Numeric
>>> Numeric.__version__
'23.6'

I am not sure if this is caused by the new Numeric used here. This is on Win2k; I am not sure if it is the case on other platforms.

> Could you attach the test file so that I can debug the problem?
>
> Thanks,
> -Travis

Thanks for looking into this issue!
I'm looking forward to using Scipy 0.3.2 so as to be able to work with wxPython 2.5.x.x, PythonCard, etc.

regards,
yichun

-------------- next part --------------
A non-text attachment was scrubbed...
Name: a.mat
Type: application/octet-stream
Size: 184 bytes
Desc: not available
URL:

From yichunwe at usc.edu Tue Jan 11 18:16:36 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Tue, 11 Jan 2005 15:16:36 -0800
Subject: [SciPy-user] io.loadmat: broken?
Message-ID: <41E45E54.4080408@usc.edu>

Hi Travis,

Sorry, I posted my message under the title "Re: io.loadmat: broken?", and I noticed that in this list the attached file becomes "a.obj"; simply changing the name back to "a.mat" should be OK.

Thanks for looking into this issue!

regards,
- yichun

> Could you attach the test file so that I can debug the problem?
>
> Thanks,
> -Travis

-------------- next part --------------
A non-text attachment was scrubbed...
Name: a.mat
Type: application/octet-stream
Size: 184 bytes
Desc: not available
URL:

From oliphant at ee.byu.edu Tue Jan 11 18:35:33 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Tue, 11 Jan 2005 16:35:33 -0700
Subject: [SciPy-user] Re: io.loadmat: broken?
In-Reply-To: <41E45CAA.5010709@usc.edu>
References: <41E45CAA.5010709@usc.edu>
Message-ID: <41E462C5.8070500@ee.byu.edu>

Yichun Wei wrote:
> Here is the file (I should have said that it is a .mat file saved using
> the -v6 option of Matlab 7):
> [...]
> Thanks for looking into this issue! I'm looking forward to using Scipy
> 0.3.2 so as to be able to work with wxPython 2.5.x.x, PythonCard, etc.

This is a problem with Numeric. Numeric 23.7 fixes the issue that is causing this problem. I recommend installing Numeric 23.7.

In the meantime, you can temporarily fix the issue by replacing

    dims[::-1]

with

    tuple(dims[::-1])

on line 510 of io/mio.py.

Best,

-Travis
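The failure Travis points at can be seen in miniature. The sketch below is illustrative, not from the thread; whether the uncast call actually fails depends on the Numeric version and on the typecode of dims, and the data here is hypothetical, standing in for the dimensions read from the .mat file:

    # mio.py's line-510 workaround in isolation: some Numeric versions
    # reject a Numeric integer array as the shape argument of reshape(),
    # while a plain tuple is always accepted.
    import Numeric

    result = Numeric.arange(6)
    dims = Numeric.array([2, 3])

    # result = Numeric.reshape(result, dims[::-1])       # may raise TypeError
    result = Numeric.reshape(result, tuple(dims[::-1]))  # the suggested fix
    print result   # 3x2 array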
From nwagner at mecha.uni-stuttgart.de Wed Jan 12 10:43:18 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 12 Jan 2005 16:43:18 +0100
Subject: [SciPy-user] Possibly bug in optimize/zeros.py
Message-ID: <41E54596.6080202@mecha.uni-stuttgart.de>

Hi all,

I am going to solve secular equations with scipy's optimize package. This is the result of my short test program solomonoff.py:

numerix Numeric 23.7
Internal Error occured.
Error when calling Python function. See traceback.
Traceback (most recent call last):
  File "solomonoff.py", line 46, in ?
    print optimize.zeros.brenth(f,d[i],d[i+1])
  File "/usr/lib/python2.3/site-packages/scipy/optimize/zeros.py", line 137, in brenth
    return _zeros._brenth(f,a,b,xtol,maxiter,args,full_output,disp)
TypeError: bad argument type for built-in operation

Is it a bug or a wrong function call?

Nils

-------------- next part --------------
A non-text attachment was scrubbed...
Name: solomonoff.py
Type: text/x-python
Size: 958 bytes
Desc: not available
URL:

From yichunwe at usc.edu Wed Jan 12 14:53:25 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Wed, 12 Jan 2005 11:53:25 -0800
Subject: [SciPy-user] Help on performance of signal.convolve
Message-ID: <41E58035.1000803@usc.edu>

Dear Experts,

Sorry if I was not concrete or even not correct last time I posted this for help.

I'd like to convolve a (64,64,41) kernel with a (64,64,1800) array with mode='valid'. What would be the fastest method in scipy?

Here I tried with signal.convolve; a.shape is (64,64,41), b.shape is (64,64,1800):

res = signal.convolve(a, b, mode='valid')

It took around 420 s CPU time to solve on my P-IV 1.8G CPU.
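For scale, a rough operation count for the direct method; this is illustrative back-of-envelope arithmetic only, ignoring memory traffic and whatever internal bookkeeping sigtools._correlateND does:

    # One kernel-sized multiply-add per output sample of the 'valid' result.
    kernel_taps = 64 * 64 * 41            # 167936
    valid_out = 1 * 1 * (1800 - 41 + 1)   # 'valid' output shape is (1, 1, 1760)
    print kernel_taps * valid_out         # ~2.96e8 multiply-adds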
I have the file dumped from profile; if you want to have a look I can attach it. 'same' and 'full' never ended when I ran them. I am using the Enthought Python with scipy 0.3. Is this performance normal on a P-IV 1.8G CPU?

>>> p.sort_stats('cumulative').print_stats(10)
Wed Jan 12 11:31:08 2005    Profile_k_GetRespons_same

         1631 function calls (1623 primitive calls) in 420.407 CPU seconds

   Ordered by: cumulative time
   List reduced from 175 to 10 due to restriction <10>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.001    0.001  420.407  420.407 profile:0(res = k.GetResponse())
        1    0.000    0.000  420.406  420.406 <string>:1(?)
        1    0.000    0.000  420.406  420.406 F:\tmp\py\cte\kernel.py:173(GetResponse)
        1  419.705  419.705  419.705  419.705 C:\Python23\Lib\site-packages\scipy\signal\signaltools.py:79(convolve)
      5/1    0.000    0.000    0.701    0.701 C:\Python23\Lib\site-packages\scipy_base\ppimport.py:299(__getattr__)
      5/1    0.033    0.007    0.701    0.701 C:\Python23\Lib\site-packages\scipy_base\ppimport.py:252(_ppimport_importer)
        1    0.091    0.091    0.699    0.699 C:\Python23\Lib\site-packages\scipy\signal\__init__.py:5(?)
        1    0.000    0.000    0.395    0.395 C:\Python23\Lib\site-packages\scipy\signal\signaltools.py:4(?)
        1    0.091    0.091    0.313    0.313 C:\Python23\Lib\site-packages\scipy\stats\__init__.py:5(?)
        1    0.019    0.019    0.196    0.196 C:\Python23\Lib\site-packages\scipy\signal\bsplines.py:1(?)

I read some performance guides, like the one by Prabhu at http://www.scipy.org/documentation/weave/weaveperformance.html. But since this is only a function call to sigtools._correlateND, I think it is already implemented in C++. If that is the case, I think it is not profitable to use blitz, swig or f2py.

Also, I find there is an fftpack.convolve; however, I am not sure if it works only on 1-d arrays, or if it is appropriate to use an FFT for the convolution I will do. (I also find that in numarray the convolution object has an option to decide whether or not to use an FFT.)

Could you be kind enough to point out where the effort should be put to improve the performance of such a convolution? Any hint will be greatly appreciated!!

- yichun

From yichunwe at usc.edu Wed Jan 12 18:01:01 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Wed, 12 Jan 2005 15:01:01 -0800
Subject: [SciPy-user] Re: Help on performance of signal.convolve
In-Reply-To: <41E58035.1000803@usc.edu>
References: <41E58035.1000803@usc.edu>
Message-ID: <41E5AC2D.2050306@usc.edu>

I think I need an FFT to do this. Also I found a thread on this list discussing this in the 2-dimensional case:

http://www.scipy.net/pipermail/scipy-user/2004-May/002888.html

Yichun Wei wrote:
> Dear Experts,
>
> I'd like to convolve a (64,64,41) kernel with a (64,64,1800)
> array with mode='valid'. What would be the fastest method in scipy?
> [...]
>
> Could you be kind enough to point out where the effort should be put to
> improve the performance of such a convolution? Any hint will be greatly
> appreciated!!
>
> - yichun

From lanceboyle at cwazy.co.uk Wed Jan 12 19:09:15 2005
From: lanceboyle at cwazy.co.uk (Lance Boyle)
Date: Wed, 12 Jan 2005 17:09:15 -0700
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E3A2E9.6040205@ucsd.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu>
Message-ID: <5CA5E654-64F7-11D9-AFBD-003065F93FF0@cwazy.co.uk>

On Jan 11, 2005, at 2:56 AM, Robert Kern wrote:
> Not everyone using Numeric with dot() and complex numbers is doing
> linear algebra.

Anyone invoking dot() expects a dot product, PERIOD. Unless they have adapted to the idiosyncrasy of Python Numeric in order to get their work done.

I'm a very casual reader of this list, so I might have missed something. However, the original poster demonstrates a design flaw, and it should be fixed. There is no math text in existence that promotes a dot product that is not an inner product. Sure, names are arbitrary, but bad names are bad design. In "sarcasm" mode, I might say, why not name the sine function cos() and the cosine function sin()?

It has been bugs like this that have appeared in scipy in the past that have kept me a casual reader of this list. I simply don't have time to write test cases for numerical software when there exist other alternatives that have been tested for me.

I apologize if I sound harsh on this.
Jerry

From vroudnev at ksu.edu Wed Jan 12 19:28:36 2005
From: vroudnev at ksu.edu (Vladimir Roudnev)
Date: Wed, 12 Jan 2005 18:28:36 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E3A2E9.6040205@ucsd.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu>
Message-ID: <41E5C0B4.5010000@ksu.edu>

Robert Kern wrote:
> The name is, indeed, misleading; but it's not going to change now for
> backwards compatibility reasons. [...]
>
> Numeric is not just for linear algebra, and so its functions reflect
> that fact.
>
> I would vote against changing the implementation of dot() to apply the
> conjugate(); however, I'd support adding a function to scipy.linalg
> that does do the conjugation that is appropriate for linear algebra.

I would be convinced if somebody showed me an example that requires the wrong innerproduct() implementation, and if that example were more important for scientific computing than having linear algebra problems solved correctly. I strongly doubt that such an example can be found.

From vroudnev at ksu.edu Wed Jan 12 19:44:14 2005
From: vroudnev at ksu.edu (Vladimir Roudnev)
Date: Wed, 12 Jan 2005 18:44:14 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <5CA5E654-64F7-11D9-AFBD-003065F93FF0@cwazy.co.uk>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <5CA5E654-64F7-11D9-AFBD-003065F93FF0@cwazy.co.uk>
Message-ID: <41E5C45E.7020907@ksu.edu>

Lance Boyle wrote:
> It has been bugs like this that have appeared in scipy in the past
> that have kept me a casual reader of this list. I simply don't have
> time to write test cases for numerical software when there exist other
> alternatives that have been tested for me.

I'm just starting to try Python for my computations. Your comment makes me feel that the innerproduct() design problem is not the only one, isn't it? I wonder if there is any good in relying on scipy when developing a serious project. Is it designed so badly in general? I've met a major problem in the very first session... What's your opinion? The other thread suggests that there are some performance issues as well...

BW,
VR

From rkern at ucsd.edu Wed Jan 12 19:49:18 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Wed, 12 Jan 2005 16:49:18 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <5CA5E654-64F7-11D9-AFBD-003065F93FF0@cwazy.co.uk>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <5CA5E654-64F7-11D9-AFBD-003065F93FF0@cwazy.co.uk>
Message-ID: <41E5C58E.6060506@ucsd.edu>

Lance Boyle wrote:
> On Jan 11, 2005, at 2:56 AM, Robert Kern wrote:
>> Not everyone using Numeric with dot() and complex numbers is doing
>> linear algebra.
>
> Anyone invoking dot() expects a dot product, PERIOD. Unless they have
> adapted to the idiosyncrasy of Python Numeric in order to get their
> work done.
>
> I'm a very casual reader of this list, so I might have missed something.
> However, the original poster demonstrates a design flaw, and it should
> be fixed. There is no math text in existence that promotes a dot product
> that is not an inner product. Sure, names are arbitrary, but bad names
> are bad design. In "sarcasm" mode, I might say, why not name the sine
> function cos() and the cosine function sin()?
You won't get any argument from me that the name is bad (well, actually, the error is duplicated: both "dot" and "innerproduct" are bad names). If I had been around during Numeric's formative years, I would have argued strongly for having dot/innerproduct do the conjugation. I would also have argued that they should implement a proper tensor inner product for N-D arrays with N>2 (dot(A,B) contracting on the last axis of A and the first axis of B, rather than the current implementation).

I don't agree that we should change either of these behaviours at this time. The amount of code that depends on these well-documented behaviours is too great. I have no problem with putting functions into scipy.linalg that do the right things for linear algebra.

--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From rkern at ucsd.edu Wed Jan 12 19:52:54 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Wed, 12 Jan 2005 16:52:54 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5C0B4.5010000@ksu.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <41E5C0B4.5010000@ksu.edu>
Message-ID: <41E5C666.9000108@ucsd.edu>

Vladimir Roudnev wrote:
> Robert Kern wrote:
>> The name is, indeed, misleading; but it's not going to change now for
>> backwards compatibility reasons. [...]
>>
>> I would vote against changing the implementation of dot() to apply the
>> conjugate(); however, I'd support adding a function to scipy.linalg
>> that does do the conjugation that is appropriate for linear algebra.
>
> I would be convinced if somebody showed me an example that requires the
> wrong innerproduct() implementation, and if that example were more
> important for scientific computing than having linear algebra problems
> solved correctly. I strongly doubt that such an example can be found.

Once the matrix-sig archives come back up, I'll find the original discussions. In the meantime, I'll point out that the BLAS has [CZ]DOTU, which don't do the conjugation. Someone certainly thought it was important.

As I've said elsewhere, I agree that the names are bad, and it would not have been my decision to implement dot/innerproduct as it is. I do think that when complex numbers are involved, most uses of dot() actually do need the conjugation. However, I don't think that breaking backwards compatibility at this point is worthwhile, particularly when writing the correct function is a trivial 2-liner.

--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From Fernando.Perez at colorado.edu Wed Jan 12 20:35:32 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Wed, 12 Jan 2005 18:35:32 -0700
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5C666.9000108@ucsd.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <41E5C0B4.5010000@ksu.edu> <41E5C666.9000108@ucsd.edu>
Message-ID: <41E5D064.3090208@colorado.edu>

Robert Kern wrote:
> Vladimir Roudnev wrote:
>> I would be convinced if somebody showed me an example that requires the
>> wrong innerproduct() implementation, and if that example were more
>> important for scientific computing than having linear algebra problems
>> solved correctly. I strongly doubt that such an example can be found.
>
> Once the matrix-sig archives come back up, I'll find the original
> discussions. In the meantime, I'll point out that the BLAS has [CZ]DOTU,
> which don't do the conjugation. Someone certainly thought it was
> important.

There is one important reason to want to have pure non-conjugating functions: performance. If you are writing code which is using purely real arrays, and dot/inner are at the center of your critical path, you do NOT want constant type checks or no-op conjugations to be taking place. Now, I am not arguing that the current choice of names was a good one, but there is a very valid reason for having an explicitly, purely real set of functions.

It's important to keep in mind that this is Python, not C/C++: we don't have the benefit of compile-time type checking to make function selection decisions. Until Python grows some fancy-schmanzy type inference engine which does full-program optimizations, we're stuck with either name-based choices of functions for different types, or runtime (expensive) checks to implement type dispatching.

Whether the 'default' functions should be the most general (complex) ones or the specialized purely real ones is a separate question. As a reference, the Python standard library addresses this by using 'c' names for the complex functions, and leaving the regular names to handle only real arguments.

Regards,
f

From rkern at ucsd.edu Wed Jan 12 21:14:31 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Wed, 12 Jan 2005 18:14:31 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5D064.3090208@colorado.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <41E5C0B4.5010000@ksu.edu> <41E5C666.9000108@ucsd.edu> <41E5D064.3090208@colorado.edu>
Message-ID: <41E5D987.5040309@ucsd.edu>

Fernando Perez wrote:
> There is one important reason to want to have pure non-conjugating
> functions: performance. If you are writing code which is using purely
> real arrays, and dot/inner are at the center of your critical path, you
> do NOT want constant type checks or no-op conjugations to be taking
> place. Now, I am not arguing that the current choice of names was a
> good one, but there is a very valid reason for having an explicitly,
> purely real set of functions.

I don't think that there would be a performance hit by making dot/innerproduct do a conjugation. dot has to dispatch on the typecodes in any case. It should just be a matter of changing a couple of signs in the complex case (or calling the appropriate BLAS functions for dotblas).
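A sketch of the kind of typecode dispatch being described, written as a hypothetical user-level helper (la_dot is not an existing Numeric or scipy function); real arrays take the fast path untouched, and only complex typecodes pay for the conjugation:

    # Hypothetical helper: conjugate only when the first argument is
    # complex, so purely real arrays incur no extra work.
    import Numeric

    def la_dot(a, b):
        a = Numeric.asarray(a)
        if a.typecode() in Numeric.typecodes['Complex']:
            a = Numeric.conjugate(a)
        return Numeric.dot(a, b)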
--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From Fernando.Perez at colorado.edu Wed Jan 12 21:23:02 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Wed, 12 Jan 2005 19:23:02 -0700
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5D987.5040309@ucsd.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <41E5C0B4.5010000@ksu.edu> <41E5C666.9000108@ucsd.edu> <41E5D064.3090208@colorado.edu> <41E5D987.5040309@ucsd.edu>
Message-ID: <41E5DB86.5040706@colorado.edu>

Robert Kern wrote:
> Fernando Perez wrote:
>> There is one important reason to want to have pure non-conjugating
>> functions: performance. [...]
>
> I don't think that there would be a performance hit by making
> dot/innerproduct do a conjugation. dot has to dispatch on the typecodes
> in any case. It should just be a matter of changing a couple of signs in
> the complex case (or calling the appropriate BLAS functions for dotblas).

Well, if it could really be done with no measurable performance hit at all, then I'd be all for a 'mathematically correct' inner/dot. But I'd like to see that shown by measurement, across a large set of sizes. Maybe you're right, I'm just being a bit picky :)

Cheers,
f

From aisaac at american.edu Wed Jan 12 21:23:16 2005
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 12 Jan 2005 21:23:16 -0500 (Eastern Standard Time)
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5D987.5040309@ucsd.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <41E5C0B4.5010000@ksu.edu> <41E5C666.9000108@ucsd.edu> <41E5D064.3090208@colorado.edu> <41E5D987.5040309@ucsd.edu>
Message-ID:

Providing unconjugated dot products does not seem rare:

http://www.tacc.utexas.edu/resources/software/nagdoc/fl/html/F06_fl19.html

Just an observation; not an argument.

fwiw,
Alan Isaac

From perry at stsci.edu Wed Jan 12 22:18:15 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Wed, 12 Jan 2005 22:18:15 -0500
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5C45E.7020907@ksu.edu>
Message-ID:

Vladimir Roudnev wrote:
> Lance Boyle wrote:

Something that many people who wander into Numeric assume is that it is entirely focused on linear algebra. In their world, that is their focus. But it isn't the case for Numeric. That's why the multiply operator doesn't do matrix multiplication (many have been upset at that too). It's an array package where element-by-element operations are the primary focus. And as hard as it may be to believe, there are those for whom that is their primary focus. I wasn't there when these names were assigned, but I imagine that dot and innerproduct reflect that focus; it is unfortunate that those cases are bad choices as far as names go, but it's been that way for many years.
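To make the element-by-element point concrete, a small Numeric-era sketch of the two multiplication semantics; the Matrix wrapper module shipped alongside Numeric, though treat the exact import spelling as an assumption of this sketch:

    # Element-by-element vs. matrix multiplication.
    import Numeric
    import Matrix   # Numeric's matrix wrapper

    a = Numeric.array([[1, 2], [3, 4]])
    print a * a                        # elementwise: [[ 1  4] [ 9 16]]

    m = Matrix.Matrix([[1, 2], [3, 4]])
    print m * m                        # matrix product: [[ 7 10] [15 22]]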
> > It has been bugs like this that have appeared in scipy in the past
> > that have kept me a casual reader of this list. I simply don't have
> > time to write test cases for numerical software when there exist other
> > alternatives that have been tested for me.
>
> I'm just starting to try Python for my computations. Your comment makes
> me feel that the innerproduct() design problem is not the only one,
> isn't it? I wonder if there is any good in relying on scipy when
> developing a serious project. Is it designed so badly in general? I've
> met a major problem in the very first session... What's your opinion?

Hey, no one is making you use it. Lots of us find Python great for this sort of thing. Does that mean that you won't find problems (or that things weren't done the way you would have done them)? Of course not. Is scipy a finished, polished product? No. If that is what you want, go elsewhere (that's only my humble opinion). There's matlab, IDL, OCTAVE, J, etc., or you can do things in Fortran, C++ or whatever you please (but I suspect that you weren't entirely happy with those or you wouldn't be looking at scipy). If you want to contribute to improving scipy, that would be great, but understand that, like anything else, there are things like this that have some history to them and may not be changed just because you don't like it.

> The other thread suggests that there are some performance issues
> as well...

Based on that one (unsubstantiated) data point, I guess you have all you need to know about the issue.

Perry

From vroudnev at ksu.edu Wed Jan 12 23:41:50 2005
From: vroudnev at ksu.edu (Vladimir A. Roudnev)
Date: Wed, 12 Jan 2005 22:41:50 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E5D064.3090208@colorado.edu>
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu> <41E3A2E9.6040205@ucsd.edu> <41E5C0B4.5010000@ksu.edu> <41E5C666.9000108@ucsd.edu> <41E5D064.3090208@colorado.edu>
Message-ID: <41E5FC0E.107@ksu.edu>

Fernando Perez wrote:
> There is one important reason to want to have pure non-conjugating
> functions: performance. If you are writing code which is using purely
> real arrays, and dot/inner are at the center of your critical path,
> you do NOT want constant type checks or no-op conjugations to be
> taking place. Now, I am not arguing that the current choice of names
> was a good one, but there is a very valid reason for having an
> explicitly, purely real set of functions.

I strongly disagree with the performance argument. Performance problems arise when dealing with big arrays that are not supposed to be processed in Python, but with a native method. It is the high-level language (Python) library which is responsible for choosing the right native method to call. For big arrays type checking is (OK, expected to be) a relatively fast operation, and the proper BLAS call (xDOT, or xDOTC, or xDOTU when appropriate) must be made on the basis of this fast type-checking procedure.

BW,
VR

From vroudnev at ksu.edu Thu Jan 13 00:41:53 2005
From: vroudnev at ksu.edu (Vladimir A. Roudnev)
Date: Wed, 12 Jan 2005 23:41:53 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To:
References:
Message-ID: <41E60A21.6040702@ksu.edu>

Perry Greenfield wrote:
> [...] That's why the multiply operator doesn't do matrix multiplication
> (many have been upset at that too). It's an array package where
> element-by-element operations are the primary focus.
> And as hard as it may be to believe, there are those for whom that is
> their primary focus.

What makes the situation essentially different for the subject is the fact that the wrong (or inadequate, whatever we call it) implementation is not reflected in the documentation at all. I believe that if the bug had been mentioned in the docs, people would have screamed out loud much earlier. I still insist that this is a bug rather than a feature, possibly not an easy one to fix...

> I wasn't there when these names were assigned, but I imagine that
> dot and innerproduct reflect that focus, and it is unfortunate that
> those cases are bad choices as far as names go, but it's been that way
> for many years.

OK, people have been using FORTRAN and C for decades, so what? For developing technologies the question is not what people are using, but what they are going to use. Personally, if complex numbers had been implemented as a basic type in Java, I would probably not even have started thinking about Python.

> Hey, no one is making you use it. Lots of us find Python great for this
> sort of thing.

And I still trust them. :) I like some ideas about the language that I hope will make my life easier.

> Does that mean that you won't find problems (or that
> things weren't done the way you would have done them)? Of course not.
> Is scipy a finished, polished product? No.

And I'm trying to suggest improvements that will make scipy shine brighter, doing it from the perspective of my field of expertise.

> There's matlab, IDL,
> OCTAVE, J, etc., or you can do things in Fortran, C++ or whatever
> you please (but I suspect that you weren't entirely happy with those
> or you wouldn't be looking at scipy).

BTW, I think that the D language has pretty good chances in scientific computing in some perspective.

> If you want to contribute to
> improving scipy, that would be great, but understand, like anything
> else, there are things like this that have some history to them and
> may not be changed just because you don't like it.

Hey, isn't this history still in our hands? The good thing about Scipy, as I understand it, is that it is not a closed commercial product which would be overloaded by backward-compatibility problems. I see it as an attempt to develop something really useful for the scientific community, and developers' efforts should be taken to satisfy the community's needs. If these needs are controversial, they must be satisfied by different modules rather than leading to compromised designs. If something is done wrong (as the implementation under discussion), it is better to change it sooner than later. Don't you agree?

Indeed, fixing a substantial bug is not an easy thing sometimes, but the more substantial bugs we ignore, the earlier our technology dies.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rkern at ucsd.edu Thu Jan 13 00:59:35 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Wed, 12 Jan 2005 21:59:35 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E60A21.6040702@ksu.edu>
References: <41E60A21.6040702@ksu.edu>
Message-ID: <41E60E47.4060008@ucsd.edu>

Vladimir A. Roudnev wrote:
> Perry Greenfield wrote:
>> [...] That's why the multiply operator doesn't do matrix multiplication
>> (many have been upset at that too). It's an array package where
>> element-by-element operations are the primary focus. And as hard as it
>> may be to believe, there are those for whom that is their primary focus.
> What makes the situation essentially different for the subject is the
> fact that the wrong (or inadequate, whatever we call it) implementation
> is not reflected in the documentation at all. I believe that if the bug
> had been mentioned in the docs, people would have screamed out loud much
> earlier. I still insist that this is a bug rather than a feature,
> possibly not an easy one to fix...

In [1]: dot?
Type:           function
Base Class:     <type 'function'>
String Form:    <function dot at 0x...>
Namespace:      Interactive
File:           /Library/Python/2.3/Numeric/dotblas/__init__.py
Definition:     dot(a, b)
Docstring:
    returns matrix-multiplication between a and b.
    The product-sum is over the last dimension of a and the
    second-to-last dimension of b.
    NB: No conjugation of complex arguments is performed.
    This version uses the BLAS optimized routines where possible.

Okay, it looks like this only got documented in the dotblas version's docstring and not in the manual or the regular version's docstring.

[snip]

>> If you want to contribute to
>> improving scipy, that would be great, but understand, like anything
>> else, there are things like this that have some history to them and
>> may not be changed just because you don't like it.
>
> Hey, isn't this history still in our hands? [...] If something is done
> wrong (as the implementation under discussion), it is better to change
> it sooner than later. Don't you agree?
>
> Indeed, fixing a substantial bug is not an easy thing sometimes, but
> the more substantial bugs we ignore, the earlier our technology dies.

For a lot of users, it is already later, not sooner. A lot of code depends on the current behaviour, and we would break that code by replacing dot with a different implementation. This is why I suggest adding a new function that does the inner product with conjugation for complex arguments.

--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From vroudnev at ksu.edu Thu Jan 13 01:42:15 2005
From: vroudnev at ksu.edu (Vladimir A. Roudnev)
Date: Thu, 13 Jan 2005 00:42:15 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E60E47.4060008@ucsd.edu>
References: <41E60A21.6040702@ksu.edu> <41E60E47.4060008@ucsd.edu>
Message-ID: <41E61847.2020201@ksu.edu>

Robert Kern wrote:
> For a lot of users, it is already later, not sooner. A lot of code
> depends on the current behaviour, and we would break that code by
> replacing dot with a different implementation.

Can we say how many is "a lot"? Is it 10, 100, or 100000? I'm really curious. Is there any single user on this mailing list who would fight to the finish for the wrong implementation to save his/her own code? Developers are a different story, I suspect... :)
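For readers following the thread, here is the behaviour at issue in a quick Numeric-era session; the values are illustrative and easy to check by hand:

    # dot() without conjugation vs. the conjugated inner product.
    import Numeric

    z = Numeric.array([1+2j, 3-1j])
    print Numeric.dot(z, z)                     # (5-2j): not a norm
    print Numeric.dot(Numeric.conjugate(z), z)  # (15+0j): |z|**2, as
                                                # linear algebra expects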
From nwagner at mecha.uni-stuttgart.de Thu Jan 13 06:34:20 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Thu, 13 Jan 2005 12:34:20 +0100
Subject: [SciPy-user] Possibly bug in optimize/zeros.py
In-Reply-To: <41E54596.6080202@mecha.uni-stuttgart.de>
References: <41E54596.6080202@mecha.uni-stuttgart.de>
Message-ID: <41E65CBC.1070803@mecha.uni-stuttgart.de>

Nils Wagner wrote:
> Hi all,
>
> I am going to solve secular equations with scipy's optimize package.
> This is the result of my short test program solomonoff.py:
> [...]
> TypeError: bad argument type for built-in operation
>
> Is it a bug or a wrong function call?

The problem is that the function f exhibits some points where f is \pm\infty. Is it somehow possible to handle such exceptions in optimize?

Nils

>------------------------------------------------------------------------
>
>from scipy import *
>from scipy.xplt import *
>import gui_thread
>#
># Eigenvalues of a rank-one perturbed diagonal matrix
>#
>def f(x):
>
>    s = 0.0
>    for i in arange(0,n):
>        s = s + z[i]**2/(d[i]-x)
>    return 1 + rho*s
>
>n = 5
>z = rand(n)
>d = rand(n)
>#
># The elements of z are sorted in increasing order
>#
>z = sort(z)
>d = sort(d)
>#
># The elements of z are sorted in increasing order (if z is complex)
>#
>#ind = argsort(z)
>#z = take(z,ind)
>
>rho = -1.0
>D = diag(d)
>#
># Symmetric Rank-one perturbation of a diagonal matrix
>#
>A = D + rho*outerproduct(z,z)
>w = linalg.eigvals(A)
>w = sort(w.real)
>#ind = argsort(argsort(w))
>#w = take(w,ind)
>xplt.hold('on')
>xplt.plot(w.real,zeros(n),'b+')
>xplt.plot(d,zeros(n),'ro')
>
>for i in arange(0,n-1):
>
>    x0 = (d[i+1]-d[i])/2
>#   print optimize.zeros.bisect(f,d[i],d[i+1])
>    print optimize.zeros.brenth(f,d[i],d[i+1])
>    print optimize.zeros.brentq(f,d[i],d[i+1])
>#   print optimize.zeros.ridder(f,d[i],d[i+1])
>    print i, root, w[i]
>
>------------------------------------------------------------------------

From pearu at scipy.org Thu Jan 13 06:38:29 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Thu, 13 Jan 2005 05:38:29 -0600 (CST)
Subject: [SciPy-user] Possibly bug in optimize/zeros.py
In-Reply-To: <41E54596.6080202@mecha.uni-stuttgart.de>
References: <41E54596.6080202@mecha.uni-stuttgart.de>
Message-ID:

On Wed, 12 Jan 2005, Nils Wagner wrote:
> I am going to solve secular equations with scipy's optimize package.
> This is the result of my short test program solomonoff.py:
> [...]
> Is it a bug or a wrong function call?
It is a wrong function call: note that d[i] in

    optimize.zeros.brenth(f,d[i],d[i+1])

is an array, but the optimize.zeros functions work only on univariate problems.

Pearu

From pearu at scipy.org Thu Jan 13 09:11:21 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Thu, 13 Jan 2005 08:11:21 -0600 (CST)
Subject: [SciPy-user] Possibly bug in optimize/zeros.py
In-Reply-To: <41E65CBC.1070803@mecha.uni-stuttgart.de>
References: <41E54596.6080202@mecha.uni-stuttgart.de> <41E65CBC.1070803@mecha.uni-stuttgart.de>
Message-ID:

On Thu, 13 Jan 2005, Nils Wagner wrote:
> The problem is that the function f exhibits some points where f is
> \pm\infty. Is it somehow possible to handle such exceptions in optimize?

You are right (ignore my previous message). Now the optimize.zeros functions in scipy CVS raise exceptions at the appropriate step, so that users can see what is actually wrong, not just a vague message "bad argument type for built-in operation".

Pearu

From val at vtek.com Thu Jan 13 09:47:34 2005
From: val at vtek.com (val)
Date: Thu, 13 Jan 2005 09:47:34 -0500
Subject: [SciPy-user] complex vector scalar product: wrong implementation
References: <41E22FD8.2040104@ksu.edu> <41E236C7.8030203@ucsd.edu> <41E2EB00.70206@ksu.edu>
Message-ID: <108201c4f97e$d17110a0$c400a8c0@sony>

I agree with Robert, and I don't see any "design issues" with scipy. It is a working tool, not a "vector space code". If one understands her/his data and *what* needs to be done with the data, that's it. Python and scipy are flexible enough to satisfy any reasonable and specific needs. But any clarification to the docs is always welcome. I guess my point is: enjoy life and scipy, and/or contribute to its improvement.

optimistical-ly y'rs,
val

----- Original Message -----
From: "Vladimir Roudnev"
To: "SciPy Users List"
Sent: Monday, January 10, 2005 3:52 PM
Subject: Re: [SciPy-user] complex vector scalar product: wrong implementation

> Indeed, one can write the function, but my message was that the
> misimplemented scalar product is a major Scipy library DESIGN ISSUE. [...]
> A real vector space code should work flawlessly with complex vector
> spaces when the algorithm is applicable; isn't that in the spirit of
> Python programming? Otherwise we end up programming good old Fortran 77.
(Or Fortran 777, if you wish ;) )
>
> BW,
> VR
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
>

From rkern at ucsd.edu Thu Jan 13 12:56:35 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Thu, 13 Jan 2005 09:56:35 -0800
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E61847.2020201@ksu.edu>
References: <41E60A21.6040702@ksu.edu> <41E60E47.4060008@ucsd.edu> <41E61847.2020201@ksu.edu>
Message-ID: <41E6B653.7010802@ucsd.edu>

Vladimir A. Roudnev wrote:
> Robert Kern wrote:
>
>> For a lot of users, it is already later, not sooner. A lot of code
>> depends on the current behaviour, and we would break that code by
>> replacing dot with a different implementation.
>
> Can we say, how many is "a lot"? Is it 10, 100, or 100000? I'm really
> curious.
> Is there any single user in this mail list who would fight to the finish
> for the wrong implementation to save his/her own codes? Developers are a
> different story, I suspect... :)

No, I don't know how much code would break. That's a problem with open
source software: you never know who's using it.

And please, it's not the wrong implementation; neither Numeric nor Scipy
are just linear algebra tools. It's the wrong name, and the docs are
incomplete.

I suggest a general rule of thumb: don't break backwards compatibility
for an issue that can be mostly alleviated by a documentation fix and a
two-line workaround.

def ladot(a, b):
    return dot(conjugate(a), b)

If you're doing linear algebra, just use ladot() everywhere; it works
just fine with reals and ints, too.

-- 
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter

From aisaac at american.edu Thu Jan 13 13:49:58 2005
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 13 Jan 2005 13:49:58 -0500 (Eastern Standard Time)
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: 
References: 
Message-ID: 

On Wed, 12 Jan 2005, Perry Greenfield apparently wrote:
> Something that many people who wander into Numeric assume is that it
> is entirely focused on linear algebra. In their world, that is their
> focus. But it isn't the case for Numeric. That's why the multiply
> operator doesn't do matrix multiplication (many have been upset at
> that too).

Since this is a user's list that may contain new users, I'd just like
to mention that the multiplication operator does matrix multiplication
for matrix objects. I.e., arrays and matrices are handled differently.

Cheers,
Alan Isaac

From aisaac at american.edu Thu Jan 13 13:59:23 2005
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 13 Jan 2005 13:59:23 -0500 (Eastern Standard Time)
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E61847.2020201@ksu.edu>
References: <41E60A21.6040702@ksu.edu> <41E60E47.4060008@ucsd.edu><41E61847.2020201@ksu.edu>
Message-ID: 

> Robert Kern wrote:
>> For a lot of users, it is already later, not sooner. A lot of code
>> depends on the current behaviour, and we would break that code by
>> replacing dot with a different implementation.

On Thu, 13 Jan 2005, "Vladimir A. Roudnev" apparently wrote:
> Can we say, how many is "a lot"? Is it 10, 100, or 100000? I'm really
> curious.
> Is there any single user in this mail list who would fight to the finish
> for the wrong implementation to save his/her own codes?
> Developers are a different story, I suspect... :)

So far the answer seems to be None.

Perhaps a transition strategy is feasible while this is explored.
E.g., allow 'dot' to take an argument specifying conjugation.

Alan Isaac

From vroudnev at ksu.edu Thu Jan 13 15:56:15 2005
From: vroudnev at ksu.edu (Vladimir Roudnev)
Date: Thu, 13 Jan 2005 14:56:15 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: 
References: <41E60A21.6040702@ksu.edu> <41E60E47.4060008@ucsd.edu><41E61847.2020201@ksu.edu>
Message-ID: <41E6E06F.1080609@ksu.edu>

Alan G Isaac wrote:

>On Thu, 13 Jan 2005, "Vladimir A. Roudnev" apparently wrote:
>
>>Is there any single user in this mail list who would fight to the finish
>>for the wrong implementation to save his/her own codes? Developers are a
>>different story, I suspect... :)
>>
>
>So far the answer seems to be None.
>
>Perhaps a transition strategy is feasible
>while this is explored. E.g.,
>allow 'dot' to take an argument specifying
>conjugation.
>

I would suggest introducing a new method, say scalarproduct(), that
would do what it is supposed to do for linear algebra, adding there an
optional parameter to switch the conjugation off, and declaring the old
ones deprecated/unsupported to encourage new users to apply this method.
This way we would both keep the mythical codes that require the old
implementation working and make the package structure more adequate.
However, before introducing a new method I would try to verify that
there is a real user demand for keeping the current implementation.

BW,
VR
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliphant at ee.byu.edu Thu Jan 13 18:12:09 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 13 Jan 2005 16:12:09 -0700
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E61847.2020201@ksu.edu>
References: <41E60A21.6040702@ksu.edu> <41E60E47.4060008@ucsd.edu> <41E61847.2020201@ksu.edu>
Message-ID: <41E70049.3010901@ee.byu.edu>

Vladimir A. Roudnev wrote:

> Robert Kern wrote:
>
>> For a lot of users, it is already later, not sooner. A lot of code
>> depends on the current behaviour, and we would break that code by
>> replacing dot with a different implementation.
>

The desired function is vdot (vector dot product). It takes the
conjugate of the first argument (if you use dotblas with Numeric it
calls the right BLAS routine).

-Travis O.

From yichunwe at usc.edu Thu Jan 13 19:14:45 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Thu, 13 Jan 2005 16:14:45 -0800
Subject: [SciPy-user] Re: Help on performance of signal.convolve
In-Reply-To: <41E58035.1000803@usc.edu>
References: <41E58035.1000803@usc.edu>
Message-ID: <41E70EF5.1090405@usc.edu>

Hello Experts,

Could you give some more detailed explanation about
fftpack.convolve.init_convolution_kernel ? It is somewhat hard for me
to guess its behavior... It seems not designed for general purpose
convolution, but for constructing filters.

- Yichun

Yichun Wei wrote:
> Dear Experts,
>
> Sorry if I was not concrete or even not correct last time I posted this
> for help.
>
> I'd like to convolve a (64,64,41) kernel with a (64,64,1800)
> array with mode='valid' . What would be the fastest method in scipy?
>
> Here I tried with signal.convolve and it takes >400 s to solve.
> a.shape is (64,64,41), b.shape is (64,64,1800)
>
> res = signal.convolve (a, b, mode='valid')
>
> it took around 420 s CPU time to solve on my P-IV 1.8G CPU.
> I have the file dumped from profile, if you want to have a look I can
> attach it. 'same' and 'full' never finished when I ran them. I am using
> the Enthought Python with scipy 0.3. Is this performance normal on a
> P-IV 1.8G CPU?
>
>>>> p.sort_stats('cumulative').print_stats(10)
>
> Wed Jan 12 11:31:08 2005    Profile_k_GetRespons_same
>
> 1631 function calls (1623 primitive calls) in 420.407 CPU seconds
>
> Ordered by: cumulative time
> List reduced from 175 to 10 due to restriction <10>
>
> ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>      1    0.001    0.001  420.407  420.407 profile:0(res =
> k.GetResponse())
>      1    0.000    0.000  420.406  420.406 :1(?)
>      1    0.000    0.000  420.406  420.406
> F:\tmp\py\cte\kernel.py:173(GetResponse)
>      1  419.705  419.705  419.705  419.705
> C:\Python23\Lib\site-packages\scipy\signal\signaltools.py:79(convolve)
>    5/1    0.000    0.000    0.701    0.701
> C:\Python23\Lib\site-packages\scipy_base\ppimport.py:299(__getattr__)
>    5/1    0.033    0.007    0.701    0.701
> C:\Python23\Lib\site-packages\scipy_base\ppimport.py:252(_ppimport_importer)
>      1    0.091    0.091    0.699    0.699
> C:\Python23\Lib\site-packages\scipy\signal\__init__.py:5(?)
>      1    0.000    0.000    0.395    0.395
> C:\Python23\Lib\site-packages\scipy\signal\signaltools.py:4(?)
>      1    0.091    0.091    0.313    0.313
> C:\Python23\Lib\site-packages\scipy\stats\__init__.py:5(?)
>      1    0.019    0.019    0.196    0.196
> C:\Python23\Lib\site-packages\scipy\signal\bsplines.py:1(?)
>
>
> I read some performance guide like the one by Prabhu at
> http://www.scipy.org/documentation/weave/weaveperformance.html. But
> since this is only a function call to sigtools._correlateND, I think it
> is already implemented in C++. If that is the case, I think it is not
> profitable to use blitz, swig or f2py.
>
> Also, I find there is a fftpack.convolve, however I am not sure if it
> works only on 1-d arrays, or if it is appropriate to use fft in this
> convolution I will do. (I also find that in numarray the convolution
> object has an option to decide whether or not to use fft.)
>
> Could you be kind enough to point out where the effort should be put to
> improve the performance of such a convolution? Any hint will be greatly
> appreciated!!
>
> - yichun

From flory at fdu.edu Thu Jan 13 19:29:16 2005
From: flory at fdu.edu (David Flory)
Date: Thu, 13 Jan 2005 19:29:16 -0500
Subject: [SciPy-user] complex vector scalar product: wrong implementation
Message-ID: 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

A comment from a physicist on the issue of scalar products over the
complex field.

It is true that the most common inner product over the complex field
is the traditional real one where (a,b)=(b,a)* and (a,a) is real. The
linear transformations that preserve this inner product are unitary
and we have quantum mechanics as the best example.

However, there *are* applications for the "symmetric" inner product
over the complex field where (a,b)=(b,a). The transformations that
preserve this are "complex orthogonal" and there are applications from
Minkowski space and representations of the Lorentz group.

My point is that the categorical statement that only the traditional
real inner product is legitimate is simply not true.
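For concreteness, here is a throwaway sketch of the two products in
Numeric-flavoured Python (the function names are mine, not anything
scipy exports):

import Numeric as N

def hermitian_inner(a, b):
    # the traditional inner product: (a,b) = sum(conj(a)*b), so (a,a) is real
    return N.sum(N.conjugate(a) * b)

def symmetric_inner(a, b):
    # the "complex orthogonal" product: (a,b) = sum(a*b), no conjugation
    return N.sum(a * b)

z = N.array([1+1j, 2-1j])
print hermitian_inner(z, z)   # (7+0j)  -- real, a genuine norm
print symmetric_inner(z, z)   # (3-2j)  -- complex in general

Both are legitimate; they are simply invariant under different
transformation groups.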
Cheers,
David Flory

-----BEGIN PGP SIGNATURE-----
Version: PGP 8.1

iQA/AwUBQecSW1e2/rcN3lp8EQJfvQCfRgOxCs7ixUt/Y6wkCo+euFx4m0EAoNnA
sn1NxUNfuVxTZDZXeIhqyZv3
=DzEW
-----END PGP SIGNATURE-----

From oliphant at ee.byu.edu Thu Jan 13 20:07:38 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 13 Jan 2005 18:07:38 -0700
Subject: [SciPy-user] Re: Help on performance of signal.convolve
In-Reply-To: <41E70EF5.1090405@usc.edu>
References: <41E58035.1000803@usc.edu> <41E70EF5.1090405@usc.edu>
Message-ID: <41E71B5A.6010908@ee.byu.edu>

Yichun Wei wrote:

> Hello Experts,
>
> Could you give some more detailed explanation about
> fftpack.convolve.init_convolution_kernel ? It is somewhat hard for me
> to guess its behavior... It seems not designed for general purpose
> convolution, but for constructing filters.

I'm not sure about fftpack.convolve but I think it is limited to 1-d
convolution.

Your large convolutions are usually done using the Fourier Transform (as
the direct method implemented by convolveND will be slow for large data
-- though it currently could use some optimizations).

Basically, using the Fourier transform takes advantage of the fact that
multiplication in the (discrete) Fourier domain is the same as
*periodic* convolution in the originating domain. To make this the same
as linear convolution, you have to zero pad to N1 + N2 - 1 where N1 and
N2 are the lengths of the same dimensions of the two arrays.

So, something like the following untested code should get you started:

def convolvefft(arr1,arr2):
    s1 = array(arr1.shape)
    s2 = array(arr2.shape)
    fftsize = s1 + s2 - 1
    # finds the closest power of 2 in each dimension (you may comment
    # this out and compare speeds)
    fftsize = pow(2,ceil(log2(fftsize)))

    ARR1 = fftn(arr1,fftsize)
    ARR2 = fftn(arr2,fftsize)

    RES = ifftn(ARR1*ARR2)
    #RES = RES[validpart]   # I'm not sure how to get the correct part
    # --- first try would be to just truncate to the shape you wanted
    return RES

From vroudnev at ksu.edu Thu Jan 13 20:25:27 2005
From: vroudnev at ksu.edu (Vladimir A. Roudnev)
Date: Thu, 13 Jan 2005 19:25:27 -0600
Subject: [SciPy-user] complex vector scalar product: wrong implementation
In-Reply-To: <41E70049.3010901@ee.byu.edu>
References: <41E60A21.6040702@ksu.edu> <41E60E47.4060008@ucsd.edu> <41E61847.2020201@ksu.edu> <41E70049.3010901@ee.byu.edu>
Message-ID: <41E71F87.8070406@ksu.edu>

Travis Oliphant wrote:

> The desired function is vdot (vector dot product). It takes the
> conjugate of the first argument (if you use dotblas with Numeric it
> calls the right BLAS routine).

I think this observation possibly extinguishes the flame. It works! The
only thing to do is to reflect this knowledge in the scipy user manual.

From yichunwe at usc.edu Mon Jan 17 15:02:50 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Mon, 17 Jan 2005 12:02:50 -0800
Subject: [SciPy-user] Re: Help on performance of signal.convolve
In-Reply-To: <41E70EF5.1090405@usc.edu>
References: <41E58035.1000803@usc.edu> <41E70EF5.1090405@usc.edu>
Message-ID: <41EC19EA.3020506@usc.edu>

Hi Travis,

Travis Oliphant wrote:
> I'm not sure about fftpack.convolve but I think it is limited to 1-d
> convolution.
>
> Your large convolutions are usually done using the Fourier Transform (as
> the direct method implemented by convolveND will be slow for large data
> -- though it currently could use some optimizations).
>
> Basically, using the Fourier transform takes advantage of the fact that
> multiplication in the (discrete) Fourier domain is the same as
> *periodic* convolution in the originating domain. To make this the
> same as linear convolution, you have to zero pad to N1 + N2 - 1 where N1
> and N2 are the lengths of the same dimensions of the two arrays.

Thanks very much.

>
> So, something like the following untested code should get you started:
>
> def convolvefft(arr1,arr2):
>     s1 = array(arr1.shape)
>     s2 = array(arr2.shape)
>     fftsize = s1 + s2 - 1
>     # finds the closest power of 2 in each dimension (you may comment
>     # this out and compare speeds)
>     fftsize = pow(2,ceil(log2(fftsize)))
>
>     ARR1 = fftn(arr1,fftsize)
>     ARR2 = fftn(arr2,fftsize)
>
>     RES = ifftn(ARR1*ARR2)
>     #RES = RES[validpart]   # I'm not sure how to get the correct part
>     # --- first try would be to just truncate to the shape you wanted
>     return RES

I tested this code using 1-dimensional data and found:

1. if I change fftsize to the closest power of 2, the returning array
shape (RES.shape) changes together with it, making it difficult to
truncate the result.
2. I have to use real_if_close to get the result.
3. The speed is essentially the same in the two cases I tried (with and
without rounding up to the closest power of 2 in each dimension).

I used the attached file to test this. Is it because I used a simple
kernel and signal?

Thanks for providing such a sweet piece of general purpose code.
Actually the job I want to do could be done by 1-dimensional convolution
(I convolve these two 3d-arrays in 'valid' mode, and they have two
dimensions of the same length, so a 1-d convolution at every point on a
common plane and a summation should do the job.) Thus I am still trying
to figure out how fftpack.init_convolution_kernel works...

best,
- yichun
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: convolvefft.py
URL: 

From yichunwe at usc.edu Tue Jan 18 17:47:52 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Tue, 18 Jan 2005 14:47:52 -0800
Subject: [SciPy-user] Re: Help on performance of signal.convolve
In-Reply-To: <41E70EF5.1090405@usc.edu>
References: <41E58035.1000803@usc.edu> <41E70EF5.1090405@usc.edu>
Message-ID: <41ED9218.4070100@usc.edu>

Hi Travis,

Travis Oliphant wrote:
> So, something like the following untested code should get you started:
>
> def convolvefft(arr1,arr2):
>     s1 = array(arr1.shape)
>     s2 = array(arr2.shape)
>     fftsize = s1 + s2 - 1
>     # finds the closest power of 2 in each dimension (you may comment
>     # this out and compare speeds)
>     fftsize = pow(2,ceil(log2(fftsize)))

I tried for larger kernels. This does matter!

>
>     ARR1 = fftn(arr1,fftsize)
>     ARR2 = fftn(arr2,fftsize)
>
>     RES = ifftn(ARR1*ARR2)
>     #RES = RES[validpart]   # I'm not sure how to get the correct part
>     # --- first try would be to just truncate to the shape you wanted
>     return RES

Using this code a convolution of a (16,16,40) kernel with a (16,16,1800)
signal takes 5s to solve on my 1.8G P-IV CPU.

Because I am only interested in the "valid" part of the convolution,
what could be used to speed this up a bit more? I am really in need of
speed for I have to do this convolution lots of times.

best,
- yichun
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: convolvefft.py
URL: 

From oliphant at ee.byu.edu Tue Jan 18 17:56:09 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Tue, 18 Jan 2005 15:56:09 -0700
Subject: [SciPy-user] Re: Help on performance of signal.convolve
In-Reply-To: <41ED9218.4070100@usc.edu>
References: <41E58035.1000803@usc.edu> <41E70EF5.1090405@usc.edu> <41ED9218.4070100@usc.edu>
Message-ID: <41ED9409.7050008@ee.byu.edu>

Yichun Wei wrote:
> Hi Travis,
>
> Travis Oliphant wrote:
>
>> So, something like the following untested code should get you started:
>>
>> def convolvefft(arr1,arr2):
>>     s1 = array(arr1.shape)
>>     s2 = array(arr2.shape)
>>     fftsize = s1 + s2 - 1
>>     # finds the closest power of 2 in each dimension (you may comment
>>     # this out and compare speeds)
>>     fftsize = pow(2,ceil(log2(fftsize)))
>
> I tried for larger kernels. This does matter!
>
>>
>>     ARR1 = fftn(arr1,fftsize)
>>     ARR2 = fftn(arr2,fftsize)
>>     RES = ifftn(ARR1*ARR2)
>>     #RES = RES[validpart]   # I'm not sure how to get the correct part
>>     # --- first try would be to just truncate to the shape you wanted
>>     return RES
>
> Using this code a convolution of a (16,16,40) kernel with a (16,16,1800)
> signal takes 5s to solve on my 1.8G P-IV CPU.
>
> Because I am only interested in the "valid" part of the convolution,
> what could be used to speed this up a bit more? I am really in need of
> speed for I have to do this convolution lots of times.

You can also use numbers that factor easily into powers of 2, 3, or 5 to
get speed in fftpack.

You could try using djbfft (faster fft's). If you install it in usual
places, scipy installation should pick it up and use it for the fft's.

You may also get some speed up by reusing memory:

RES = fftn(arr1,fftsize)
RES *= fftn(arr2,fftsize)
RES = ifftn(RES)

The valid part is going to be the center abs(s2-s1) + 1 elements of RES.

Unfortunately, I don't know of a fast way to just evaluate the middle
portion of the fft. So, any speed ups will be in reducing memory
creation and element by element multiplication (you may get up to a 2x
speed up by using weave to do the inplace multiplication).

-Travis

From yichunwe at usc.edu Wed Jan 19 19:44:51 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Wed, 19 Jan 2005 16:44:51 -0800
Subject: [SciPy-user] vectorize(function) did not return arrays?
In-Reply-To: <41E70EF5.1090405@usc.edu>
References: <41E58035.1000803@usc.edu> <41E70EF5.1090405@usc.edu>
Message-ID: <41EEFF03.5060001@usc.edu>

Dear Experts,

1) I suppose vectorize could return arrays, (I was evaluating a function
over a 2-d grid, and each function call should return an array at every
point in this grid as a result. I would like to construct a 3-d array
with these results.)

def helper(x,y):
    return myconvolve (a[x,y], k[x,y]) [NewAxis,NewAxis,:]

vec_helper = vectorize(helper)
x, y = ogrid[ 0:16, 0:16 ]
lres = zeros((16,16),'O')
lres = vec_helper(x,y)

I read through the previous posts on this list by Travis,
http://mail.python.org/pipermail/python-list/1999-June/004440.html
Also I find on Travis's pylab pages:
"There is also a class that allows wrapping an arbitrary Python function
with scalar inputs or outputs so that the wrapped function behaves like a
ufunc (taking array arguments and returning array arguments)". Thus I
supposed this should be a mature feature in Scipy. However I could not
get results as expected. Did I do anything wrong here?

Also,
2) The code
lres = zeros((16,16),'O')
lres = vec_helper(x,y)
does not work either.
Long ago (also in 1999), there was someone complaining about this:
http://mail.python.org/pipermail/python-list/1999-June/004440.html
But there was no concrete answer to this. The work-around does not work
for me. I suppose there should be some better way to do this; otherwise
people won't stop talking about this issue... I dug a little via google
but it could not tell me more on this...

Do you have any suggestion? Thanks in advance!

- yichun

From yichunwe at usc.edu Wed Jan 19 19:47:14 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Wed, 19 Jan 2005 16:47:14 -0800
Subject: [SciPy-user] Re: vectorize(function) did not return arrays?
In-Reply-To: <41EEFF03.5060001@usc.edu>
References: <41E58035.1000803@usc.edu> <41E70EF5.1090405@usc.edu> <41EEFF03.5060001@usc.edu>
Message-ID: <41EEFF92.1060400@usc.edu>

I should have attached the code and the error message:

Traceback (most recent call last):
  File "convolvefft.py", line 54, in ?
    lres = vec_helper(x,y)
  File "C:\Python23\Lib\site-packages\scipy_base\function_base.py", line
457, in __call__
    return squeeze(arraymap(self.thefunc,args,self.otypes))
TypeError: only length-1 arrays can be converted to Python scalars.

Yichun Wei wrote:
> Dear Experts,
>
> 1) I suppose vectorize could return arrays, (I was evaluating a function
> over a 2-d grid, and each function call should return an array at every
> point in this grid as a result. I would like to construct a 3-d array
> with these results.)
>
> def helper(x,y):
>     return myconvolve (a[x,y], k[x,y]) [NewAxis,NewAxis,:]
>
> vec_helper = vectorize(helper)
> x, y = ogrid[ 0:16, 0:16 ]
> lres = zeros((16,16),'O')
> lres = vec_helper(x,y)
>
> I read through the previous posts on this list by Travis,
> http://mail.python.org/pipermail/python-list/1999-June/004440.html
> Also I find on Travis's pylab pages:
> "There is also a class that allows wrapping an arbitrary Python function
> with scalar inputs or outputs so that the wrapped function behaves like a
> ufunc (taking array arguments and returning array arguments)". Thus
> I supposed this should be a mature feature in Scipy. However I could not
> get results as expected. Did I do anything wrong here?
>
> Also,
> 2) The code
> lres = zeros((16,16),'O')
> lres = vec_helper(x,y)
> does not work either. Long ago (also in 1999), there was someone
> complaining about this:
> http://mail.python.org/pipermail/python-list/1999-June/004440.html
> But there was no concrete answer to this. The work-around does not work
> for me. I suppose there should be some better way to do this; otherwise
> people won't stop talking about this issue... I dug a little via google
> but it could not tell me more on this...
>
> Do you have any suggestion? Thanks in advance!
>
> - yichun
>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: convolvefft.py
URL: 

From yichunwe at usc.edu Thu Jan 20 15:24:09 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Thu, 20 Jan 2005 12:24:09 -0800
Subject: [SciPy-user] vectorize(function)/arraymap did not return arrays?
Message-ID: <41F01369.4000807@usc.edu>

Hello Travis,

Sorry, I'm posting this again with a more detailed description.

I have a problem with the vectorize class and/or the arraymap method. I
supposed vectorized callable objects constructed by scipy.vectorize
class could return arrays, (I was evaluating a function over a 2-d
array, and each function call should return a 1-d array at every point
in this 2-d array as a result. I would like to construct a 3-d array
with these results.)
However I could not get the results as expected:

#------------------------------
a = ones((16,16,100))
k = ones((16,16,10))

def helper(x,y):
    return myconvolve (a[x,y], k[x,y]) [NewAxis,NewAxis,:]

vec_helper = vectorize(helper)
x, y = ogrid[ 0:16, 0:16 ]
lres = zeros((16,16),'O')
lres = vec_helper(x,y)
#------------------------------

I got the following error messages:

Traceback (most recent call last):
  File "convolvefft.py", line 54, in ?
    lres = vec_helper(x,y)
  File "C:\Python23\Lib\site-packages\scipy_base\function_base.py", line
457, in __call__
    return squeeze(arraymap(self.thefunc,args,self.otypes))
TypeError: only length-1 arrays can be converted to Python scalars.

I read through the previous posts on this list by Travis:
http://mail.python.org/pipermail/python-list/1999-June/004440.html
Also I find on Travis's pylab pages:
"There is also a class that allows wrapping an arbitrary Python function
with scalar inputs or outputs so that the wrapped function behaves like
a ufunc (taking array arguments and returning array arguments)". Thus I
supposed this to be a mature feature in Scipy. However I could not get
my results as expected. Is it a wrong function call here? The call to
arraymap is implemented in C, so I really do not know what I can do to
manage this.

The second problem is that, according to a post long ago
(http://mail.python.org/pipermail/python-list/1999-June/004440.html),
the code

lres = zeros((16,16),'O')
lres = vec_helper(x,y)

does not work either, even when vec_helper returns arrays as expected.
There was no concrete answer to this. The work-around suggested by
Konrad Hinsen does not work for me. I suppose there should be some more
elegant way to manage this, otherwise people won't stop talking about
this issue... I dug a little via google but it could not tell me more
on it.

Thanks in advance! Also thanks for your fftconvolve, I saw it in the CVS
version of scipy.

- yichun

From scipy at dvdkwk.com Thu Jan 20 16:52:52 2005
From: scipy at dvdkwk.com (David K)
Date: Thu, 20 Jan 2005 16:52:52 -0500
Subject: [SciPy-user] Python 2.4
Message-ID: <614401910.20050120165252@dvdkwk.com>

Hi,

I'm running ActiveState Python 2.4 on Win XP. Will scipy 0.3.2 be able
to run OK in this environment?

I notice that the latest Numeric module for Win32 is named:

Numeric-23.6.win32-py2.3.exe

Has anyone tried to run this on Py 2.4 and found it OK?

Thanks.

-- 
Cheers,
David

From rkern at ucsd.edu Thu Jan 20 17:32:45 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Thu, 20 Jan 2005 14:32:45 -0800
Subject: [SciPy-user] Python 2.4
In-Reply-To: <614401910.20050120165252@dvdkwk.com>
References: <614401910.20050120165252@dvdkwk.com>
Message-ID: <41F0318D.5010700@ucsd.edu>

David K wrote:
> Hi,
>
> I'm running ActiveState Python 2.4 on Win XP. Will scipy 0.3.2 be
> able to run OK in this environment?
>
> I notice that the latest Numeric module for Win32 is named:
>
> Numeric-23.6.win32-py2.3.exe
>
> Has anyone tried to run this on Py 2.4 and found it OK?

No. You won't be able to run binary extension modules compiled for the
2.3 series on a 2.4 interpreter. I don't know when the various people
who make the binary releases will be compiling them for 2.4.

-- 
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter

From oliphant at ee.byu.edu Thu Jan 20 19:16:51 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 20 Jan 2005 17:16:51 -0700
Subject: [SciPy-user] vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F01369.4000807@usc.edu>
References: <41F01369.4000807@usc.edu>
Message-ID: <41F049F3.5000404@ee.byu.edu>

Yichun Wei wrote:

> Hello Travis,
>
> Sorry, I'm posting this again with a more detailed description.
>
> I have a problem with the vectorize class and/or the arraymap method. I
> supposed vectorized callable objects constructed by scipy.vectorize
> class could return arrays, (I was evaluating a function over a 2-d
> array, and each function call should return a 1-d array at every point
> in this 2-d array as a result. I would like to construct a 3-d array
> with these results.) However I could not get the results as expected:

Unfortunately, this is not true. Vectorize expects the helper function
to accept scalars and return scalars. It then converts the ordinary
scalar function into a ufunc-like function (accepting arrays and
returning arrays of the same size).

It sounds like you need something different which will require looping.

-Travis

From yichunwe at usc.edu Thu Jan 20 20:58:38 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Thu, 20 Jan 2005 17:58:38 -0800
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F01369.4000807@usc.edu>
References: <41F01369.4000807@usc.edu>
Message-ID: <41F061CE.1050604@usc.edu>

I turned to arraymap and vectorize after I found the nested loops in
python are really too slow for me... Is there another way to speed up
nested loops?

- yichun

Travis Oliphant wrote:
> Unfortunately, this is not true. Vectorize expects the helper function
> to accept scalars and return scalars. It then converts the ordinary
> scalar function into a ufunc-like function (accepting arrays and
> returning arrays of the same size).
>
> It sounds like you need something different which will require looping.
>
> -Travis

From yichunwe at usc.edu Thu Jan 20 22:02:51 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Thu, 20 Jan 2005 19:02:51 -0800
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F01369.4000807@usc.edu>
References: <41F01369.4000807@usc.edu>
Message-ID: <41F070DB.4010905@usc.edu>

Hi Travis,

I find someone suggesting this:

for x in range(imax * jmax):
    i, j = x / jmax, x % jmax
    array[i,j] = ...

However, this does not look elegant. It would be great to be able to
loop through arrays, mapping functions returning arrays in C. I guess it
is too difficult to have such a general mapping function. Thank you
anyway. Do you know of any method which can speed up the nested loops?

- yichun

> Travis Oliphant wrote:
>
>> Unfortunately, this is not true. Vectorize expects the helper function
>> to accept scalars and return scalars. It then converts the ordinary
>> scalar function into a ufunc-like function (accepting arrays and
>> returning arrays of the same size).
>> It sounds like you need something different which will require looping.
>>
>> -Travis

From nwagner at mecha.uni-stuttgart.de Fri Jan 21 09:47:42 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 21 Jan 2005 15:47:42 +0100
Subject: [SciPy-user] Modified Sparse Row MSR format in scipy ??
Message-ID: <41F1160E.1020808@mecha.uni-stuttgart.de>

Hi all,

Is the MSR (modified sparse row format) supported by scipy ?

Regards,
Nils

From oliphant at ee.byu.edu Fri Jan 21 14:43:00 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 21 Jan 2005 12:43:00 -0700
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F061CE.1050604@usc.edu>
References: <41F01369.4000807@usc.edu> <41F061CE.1050604@usc.edu>
Message-ID: <41F15B44.5010809@ee.byu.edu>

Yichun Wei wrote:

> I turned to arraymap and vectorize after I found the nested loops in
> python are really too slow for me... Is there another way to speed up
> nested loops?

Use weave or f2py to write the loop -- of course if the inner portion of
the loop is a Python call this may not speed things up too much.

Another thing to do is to re-write the algorithm using a different
approach. Python has pretty fast loops compared to other interpreted
languages but it can still cause slow downs.

From oliphant at ee.byu.edu Fri Jan 21 14:44:29 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 21 Jan 2005 12:44:29 -0700
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F070DB.4010905@usc.edu>
References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu>
Message-ID: <41F15B9D.9070504@ee.byu.edu>

Yichun Wei wrote:
> Hi Travis,
>
> I find someone suggesting this:
>
> for x in range(imax * jmax):
>     i, j = x / jmax, x % jmax
>     array[i,j] = ...
>
> However, this does not look elegant. It would be great to be able to
> loop through arrays, mapping functions returning arrays in C. I guess
> it is too difficult to have such a general mapping function. Thank
> you anyway. Do you know of any method which can speed up the nested
> loops?
>

Well, I don't know how difficult it would be to write such a general
looping function --- I never tried to do it. It was not on my radar
when I wrote arraymap.

-Travis

From yichunwe at usc.edu Fri Jan 21 16:06:00 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Fri, 21 Jan 2005 13:06:00 -0800
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F070DB.4010905@usc.edu>
References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu>
Message-ID: <41F16EB8.6080308@usc.edu>

Well, I found list comprehension is actually fast enough.

res = [myconvolve(a[x,y], k[x,y]) for x in range(16) for y in range(16)]

which is much faster than

for x in range(16):
    for y in range(16):
        .....

This thread says that list comprehension could be comparable to map, or
even better when taking into consideration the time calling helper
functions in map:
http://mail.python.org/pipermail/python-list/2005-January/259690.html

I did not try list comprehension earlier because I roughly remember I
was told that list comprehension is syntactic sugar for for loops...

Travis Oliphant wrote:
> Well, I don't know how difficult it would be to write such a general
> looping function --- I never tried to do it. It was not on my radar
> when I wrote arraymap.
>
> -Travis

Thanks!

- yichun

From Fernando.Perez at colorado.edu Fri Jan 21 16:13:00 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Fri, 21 Jan 2005 14:13:00 -0700
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F16EB8.6080308@usc.edu>
References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu> <41F16EB8.6080308@usc.edu>
Message-ID: <41F1705C.3060905@colorado.edu>

Yichun Wei wrote:
> Well, I found list comprehension is actually fast enough.
>
> res = [myconvolve(a[x,y], k[x,y]) for x in range(16) for y in range(16)]
>
> which is much faster than
>
> for x in range(16):
>     for y in range(16):
>         .....
You might want to preallocate those ranges() outside:

rng = range(NN)
res = [myconvolve(a[x,y], k[x,y]) for x in rng for y in rng]

For large NN, it will make a difference. You have to remember that the
python 'compiler' is _extremely_ primitive, and it does not fold
constants out of loops, no matter how trivial they may appear. I can bet
that this (with the many possible variations on the idea) is probably
the single most damaging cause of 'slow python loops'. By carefully
doing certain things, you'd be surprised at how much you can get away
with while still coding loops in python (I won't say you can beat
Fortran/C, but a few simple things can really go a long way).

Cheers,

f

From arne.keller at ppm.u-psud.fr Sat Jan 22 09:33:47 2005
From: arne.keller at ppm.u-psud.fr (Arne Keller)
Date: Sat, 22 Jan 2005 15:33:47 +0100
Subject: [SciPy-user] Spherical Harmonics
Message-ID: <1106404427.5791.2.camel@stmaur>

Hi,

I'm trying to use the sph_harm special function, but it has a very
strange behavior; sometimes it fails with a segmentation fault.

If someone can try to execute these lines of code, to see if he obtains
the same result:

##########################################################
## file test.py
##########################################################
import scipy
from scipy.special import sph_harm,lpmn,gammaln
from scipy import *

#n=1  #uncommenting this line produces a segmentation fault!

#taken from special/basic.py:
def sph_harmonic(m,n,theta,phi):
    """inputs of (m,n,theta,phi) returns spherical harmonic of order
    m,n (|m|<=n) and argument theta and phi: Y^m_n(theta,phi)
    """
    x = cos(phi)
    m,n = int(m), int(n)
    Pmn,Pmnd = lpmn(m,n,x)
    val = Pmn[m,n]
    val *= sqrt((2*n+1)/4.0/pi)
    val *= exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1)))
    val *= exp(1j*m*theta)
    return val

n=1
m=1
theta=pi/2.
phi=0.
print sph_harmonic(m,n,phi,theta)

################End test #################################

it produces (-0.345494149471+0j)

But when I uncomment the first line ('n=1'), then a segmentation fault
occurs (if the line is uncommented but with 'n=2' instead of 'n=1' then
the execution is normal).

Can anybody help, please?

Many thanks

From pearu at scipy.org Sat Jan 22 10:30:45 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Sat, 22 Jan 2005 09:30:45 -0600 (CST)
Subject: [SciPy-user] Spherical Harmonics
In-Reply-To: <1106404427.5791.2.camel@stmaur>
References: <1106404427.5791.2.camel@stmaur>
Message-ID: 

On Sat, 22 Jan 2005, Arne Keller wrote:

> I'm trying to use the sph_harm special function, but it has a very
> strange behavior; sometimes it fails with a segmentation fault.
>
> If someone can try to execute these lines of code, to see if he obtains
> the same result:
>
> ##########################################################
> ## file test.py
> ##########################################################
> import scipy
> from scipy.special import sph_harm,lpmn,gammaln
> from scipy import *
>
> #n=1  #uncommenting this line produces a segmentation fault!
>
> #taken from special/basic.py:
> def sph_harmonic(m,n,theta,phi):
>     """inputs of (m,n,theta,phi) returns spherical harmonic of order
>     m,n (|m|<=n) and argument theta and phi: Y^m_n(theta,phi)
>     """
>     x = cos(phi)
>     m,n = int(m), int(n)
>     Pmn,Pmnd = lpmn(m,n,x)
>     val = Pmn[m,n]
>     val *= sqrt((2*n+1)/4.0/pi)
>     val *= exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1)))
>     val *= exp(1j*m*theta)
>     return val
>
> n=1
> m=1
> theta=pi/2.
> phi=0.
> print sph_harmonic(m,n,phi,theta)
>
> ################End test #################################
>
> it produces (-0.345494149471+0j)
>
> But when I uncomment the first line ('n=1'), then a segmentation fault
> occurs (if the line is uncommented but with 'n=2' instead of 'n=1' then
> the execution is normal).
>
> Can anybody help, please?

This segfault is probably related to the following code in
special/specfun/specfun.f starting at line #6927:

        DO 35 I=0,M
35         PM(I,I+1)=(2.0D0*I+1.0D0)*X*PM(I,I)

where PM has DIMENSION(0:M,0:N). So, if N==M then PM(M,M+1) will point
out of the memory area allocated for PM.

At the moment I am not sure whether the bug is in specfun.f or whether
there should be a requirement m < n; specfun.pyf uses the requirement
m<=n.

Pearu

From pearu at scipy.org Sat Jan 22 12:29:14 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Sat, 22 Jan 2005 11:29:14 -0600 (CST)
Subject: [SciPy-user] Spherical Harmonics
In-Reply-To: 
References: <1106404427.5791.2.camel@stmaur>
Message-ID: 

Hi again,

I have fixed the segfault in scipy cvs.

Pearu

From arne.keller at ppm.u-psud.fr Sat Jan 22 18:14:51 2005
From: arne.keller at ppm.u-psud.fr (Arne Keller)
Date: Sun, 23 Jan 2005 00:14:51 +0100
Subject: [SciPy-user] Spherical Harmonics
In-Reply-To: 
References: <1106404427.5791.2.camel@stmaur>
Message-ID: <1106435691.5791.4.camel@stmaur>

Many many thanks!!

On Sat, 2005-01-22 at 11:29 -0600, Pearu Peterson wrote:
> Hi again,
>
> I have fixed the segfault in scipy cvs.
>
> Pearu
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
>
-- 
Soutenez le mouvement SAUVONS LA RECHERCHE :
http://recherche-en-danger.apinc.org/
---------------
Arne Keller
Laboratoire de photophysique Moléculaire du CNRS Bat 210
Université Paris Sud Orsay

From drewes at interstice.com Sat Jan 22 20:38:51 2005
From: drewes at interstice.com (Rich Drewes)
Date: Sat, 22 Jan 2005 17:38:51 -0800
Subject: [SciPy-user] Has some met such a problem? (genetic algorithm bugfix)
Message-ID: <41F3002B.8010102@interstice.com>

Qingliang:

I had the same problem you reported trying to use the genetic algorithm
example in scipy: "attributeError: rv_frozen instance has no attribute
'__getitem__'". You can solve this by replacing in
site-packages/scipy/ga/gene.py the line:

f = rv.norm(old,w)[0]

with:

f = rv.norm.rvs(old,w)[0]

I'm guessing that the interface to the Gaussian routine changed at some
point and nobody back-ported a change to the genetic algorithm test
program. With this change, the example program runs correctly.

Rich Drewes

From jrennie at csail.mit.edu Mon Jan 24 15:55:01 2005
From: jrennie at csail.mit.edu (Jason Rennie)
Date: Mon, 24 Jan 2005 15:55:01 -0500
Subject: [SciPy-user] bug in fmin_bfgs?
Message-ID: <20050124205501.GA4492@csail.mit.edu>

I'm trying to use fmin_bfgs. Trouble is that fmin_bfgs calls my
objective function with a badly-shaped parameter vector. I pass in an x0
with shape (2,2,2). First iteration, it calls objective and gradient
functions with x0. Second iteration, it passes a parameter array with
shape (2,2,2,2). An additional sign that something is wrong: some of the
entries of the 2nd round parameter array are NaN's.

Is fmin_bfgs not meant to be used with multidimensional arrays? Is
there something obvious I'm overlooking? Any help greatly appreciated.

I'm attaching my code. crf.py calculates the "real" objective/gradient.
crf2.py is the "bare-bones" version. test_crf.py runs the test.
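In case it saves someone a download, here is a stripped-down toy with a
made-up objective (not my real code -- that is in the attachments) that
I would expect to show the same symptom:

from scipy.optimize import fmin_bfgs
import Numeric as N

def f(x):
    # toy objective: sum of squares over all entries
    return N.sum(N.ravel(x * x))

def fprime(x):
    # its gradient, with the same shape as x
    return 2 * x

x0 = N.ones((2, 2, 2), 'd')
print fmin_bfgs(f, x0, fprime)   # second iteration hands f a misshapen x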
Jason

-------------- next part --------------
A non-text attachment was scrubbed...
Name: crf.py
Type: text/x-python
Size: 3387 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: crf2.py
Type: text/x-python
Size: 229 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_crf.py
Type: text/x-python
Size: 902 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gdot.py
Type: text/x-python
Size: 339 bytes
Desc: not available
URL: 

From jrennie at csail.mit.edu Mon Jan 24 16:12:18 2005
From: jrennie at csail.mit.edu (Jason Rennie)
Date: Mon, 24 Jan 2005 16:12:18 -0500
Subject: [SciPy-user] bug in fmin_bfgs?
In-Reply-To: <20050124205501.GA4492@csail.mit.edu>
References: <20050124205501.GA4492@csail.mit.edu>
Message-ID: <20050124211218.GI2434@csail.mit.edu>

On Mon, Jan 24, 2005 at 03:55:01PM -0500, Jason Rennie wrote:
> Is fmin_bfgs not meant to be used with multidimensional arrays?

Inspection of the scipy code answers my question: no it was not meant
to be used with multidimensional arrays...

Correct me if I'm wrong, but it doesn't look like it (I'm looking at
lbfgsb.py) would be difficult to make it work with multidimensional
parameter vectors...

Jason

From jrennie at csail.mit.edu Tue Jan 25 00:25:46 2005
From: jrennie at csail.mit.edu (Jason Rennie)
Date: Tue, 25 Jan 2005 00:25:46 -0500
Subject: [SciPy-user] lbfgsb.py
Message-ID: <20050125052546.GA8587@csail.mit.edu>

Is the test included in lbfgsb.py supposed to work? I keep getting this
error:

jrennie at desk:/usr/lib/python2.3/site-packages/scipy/optimize$ python lbfgsb.py
array_from_pyobj:intent(inout) array must be contiguous and with a
proper type and size.
Traceback (most recent call last):
  File "lbfgsb.py", line 237, in ?
    factr=factr, pgtol=pgtol)
  File "lbfgsb.py", line 174, in fmin_l_bfgs_b
    isave, dsave)
_lbfgsb.error: failed in converting 15th argument `lsave' of
_lbfgsb.setulb to C/Fortran array

Any ideas? I noticed that lsave has this type in lbfgsb.pyf
(Lib/optimize/lbfgsb-0.9):

logical dimension(4),intent(inout) :: lsave

which doesn't completely jive with the type in lbfgsb.py:

lsave = NA.zeros((4,), NA.Int)

Could this be the source of the problem?

Thanks,
Jason

From yichunwe at usc.edu Tue Jan 25 23:38:12 2005
From: yichunwe at usc.edu (Yichun Wei)
Date: Tue, 25 Jan 2005 20:38:12 -0800
Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays?
In-Reply-To: <41F16EB8.6080308@usc.edu>
References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu> <41F16EB8.6080308@usc.edu>
Message-ID: <41F71EB4.3090504@usc.edu>

Hi Experts,

Sorry to change what I said earlier. The list comprehension could not
work as fast as I expected... it does look like syntactic sugar when I
use it in my application.

This is my problem: basically what I need to do is,

res = [convolve1d(a[x,y], k[x,y]) for x in range(16) for y in range(16)]
res = sum(res)

k.shape is (16,16,40), a.shape is (16,16,1800) or so. So the result will
be a 1-d array of length 1839. I have to do this quickly, for it will be
repeated more than 10000 times in optimization. I implemented this in
matlab and found it runs fairly quickly (conv in matlab is really quick,
and I think matlab has been optimized for for-loops.)
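(One idea I have not benchmarked yet: since the inverse FFT is linear,
the sum of the 256 one-dimensional convolutions should equal a single
inverse FFT of the summed spectra, so the whole reduction might collapse
into a few vectorized calls. An untested sketch, with a and k shaped as
above:

from scipy.fftpack import fft, ifft
from scipy import sum

n = a.shape[-1] + k.shape[-1] - 1        # full linear-convolution length, 1839
A = fft(a, n, axis=-1)                   # all 256 transforms in one call
K = fft(k, n, axis=-1)
res = ifft(sum(sum(A * K, 0), 0)).real   # one inverse FFT of the summed spectra

I do not know yet whether the big temporaries eat the gain.)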
After playing with convolution of scipy and for loops for several days,
I realized that I have to write some C++ code to accomplish this in
python without a prominent performance headache. However, if I want to
implement the whole nested loops in C++ using weave.inline, I have to
throw out all the numeric vectorized syntax and rewrite the convolve1d
function again in C++ (don't I? Would CXX help some with this problem?
I have to admit that I am reluctant to code in C++). Also, all the
materials I read suggest avoiding calling back python functions in
weave.

Could you share some opinion on this? Thanks in advance.

- yichun

Yichun Wei wrote:
> Well, I found list comprehension is actually fast enough.
>
> res = [myconvolve(a[x,y], k[x,y]) for x in range(16) for y in range(16)]
>
> which is much faster than
>
> for x in range(16):
>     for y in range(16):
>         .....
>
> This thread says that list comprehension could be comparable to map, or
> even better when taking into consideration the time calling helper
> functions in map:
> http://mail.python.org/pipermail/python-list/2005-January/259690.html
>
> I did not try list comprehension earlier because I roughly remember I
> was told that list comprehension is syntactic sugar for for loops...

From cookedm at physics.mcmaster.ca Wed Jan 26 17:59:51 2005
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 26 Jan 2005 17:59:51 -0500
Subject: [SciPy-user] bug in fmin_bfgs?
References: <20050124205501.GA4492@csail.mit.edu> <20050124211218.GI2434@csail.mit.edu>
Message-ID: 

Jason Rennie writes:

> On Mon, Jan 24, 2005 at 03:55:01PM -0500, Jason Rennie wrote:
>> Is fmin_bfgs not meant to be used with multidimensional arrays?
>
> Inspection of the scipy code answers my question: no it was not meant
> to be used with multidimensional arrays...
>
> Correct me if I'm wrong, but it doesn't look like it (I'm looking at
> lbfgsb.py) would be difficult to make it work with multidimensional
> parameter vectors...

BTW, fmin_bfgs != lbfgsb.fmin_l_bfgs_b

fmin_bfgs is defined in Lib/optimize/optimize.py in the scipy code.

What you could do is flatten and reshape the parameter vectors:

flat_x0 = ravel(x0)
flat_xopt = fmin_bfgs(f, flat_x0, args=(x0.shape,) ...)
xopt = resize(flat_xopt, x0.shape)

and in your definition of f:

def f(flat_x, shape):
    x = resize(flat_x, shape)
    ... do stuff with our multi-dimensional x ...

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From jrennie at csail.mit.edu Wed Jan 26 19:27:50 2005
From: jrennie at csail.mit.edu (Jason Rennie)
Date: Wed, 26 Jan 2005 19:27:50 -0500
Subject: [SciPy-user] bug in fmin_bfgs?
In-Reply-To: 
References: <20050124205501.GA4492@csail.mit.edu> <20050124211218.GI2434@csail.mit.edu>
Message-ID: <20050127002750.GA2402@csail.mit.edu>

On Wed, Jan 26, 2005 at 05:59:51PM -0500, David M. Cooke wrote:
> BTW, fmin_bfgs != lbfgsb.fmin_l_bfgs_b

Figured the lbfgsb code might be better to use since it's the limited
memory version and it's an interface to the original fortran code. But,
I haven't had any success getting lbfgsb.fmin_l_bfgs_b to work...
(keep getting "failed in converting 15th argument `lsave'" error) > What you could is flatten and reshape the parameter vectors: That works much more nicely than attempting to modify the scipy code :) Thanks, Jason From gpajer at rider.edu Wed Jan 26 20:00:51 2005 From: gpajer at rider.edu (Gary Pajer) Date: Wed, 26 Jan 2005 20:00:51 -0500 Subject: [SciPy-user] fink installation with apple's x11 ? Message-ID: <41F83D43.8090505@rider.edu> I'm trying to install scipy on OS X 10.3 I've downloaded and installed the latest Apple Developer Tools, as well as Apple's X11. I've installed and selfupdate'ed fink. I've installed python23 and numeric-py23 from binaries (apt-get). Now when I try to install scipy (fink install scipy-py23) I fail with a message Can't resolve dependancy "gcc3.1" for package "xfree86-4.4.--14" fink list -i xfree reports three packages (from memory): system-xfree86 system-xfree86-dev system-xfree86-shlibs Can anyone help? Is the problem that it can't find (but needs) gcc3.1 or that it's looking specifically for the fink-style xfree86? Or something else? Thanks, Gary From nwagner at mecha.uni-stuttgart.de Thu Jan 27 03:28:59 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 27 Jan 2005 09:28:59 +0100 Subject: [SciPy-user] Vertical/horizontal lines with xplt Message-ID: <41F8A64B.20100@mecha.uni-stuttgart.de> Hi all, It is often useful to draw a line that stretches from the left to the right side of the axes at a given height. Matplotlib offers the commands l = axvline(x=1) l = axhline(y=0.1) for this purpose. Is there something similar available for xplt in scipy ? Nils From roman.milner at baesystems.com Thu Jan 27 14:07:28 2005 From: roman.milner at baesystems.com (Roman Milner) Date: Thu, 27 Jan 2005 11:07:28 -0800 Subject: [SciPy-user] gplt broken when there is a space in the path to wgnuplot.exe? Message-ID: <41F93BF0.8020908@baesystems.com> Hello. I've recently been using scipy and including it in a py2exe'ed project. The first user I gave it to dropped it in "c:\Program Files" and it didn't work. It turned out this is because I am putting wgnuplot.exe in the same directory as everything else and when pyPlot.py popen's the executables, it fails when there is a space in the path. I tried every combination I could think of putting the path in quotes, back slashing the quotes, double (and triple) back slashing the back slashes, back slashing the spaces, double back slashing the spaces... I could come up with no answer that way. I ended up hacking pyPlot.py so it will split the path to the wgnuplot.exe file, os.chdir into the same dir as the executable, run wgnuplot.exe without any path information, then os.chdir back into the original cwd. I'm not sure what repercussions this hack will have (seems like it might be bad in a threaded app). Is there a better solution? Thanks, ^Roman From elcorto at gmx.net Thu Jan 27 12:28:05 2005 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 27 Jan 2005 18:28:05 +0100 Subject: [SciPy-user] gplt broken when there is a space in the path to wgnuplot.exe? In-Reply-To: <41F93BF0.8020908@baesystems.com> References: <41F93BF0.8020908@baesystems.com> Message-ID: <41F924A5.6090403@gmx.net> Hi Hmm, I'm using gnuplot.py and remember defining the path of wgnuplot.exe like "C:/.../wgnuplot.exe". best steve Roman Milner wrote: > Hello. I've recently been using scipy and including it in a py2exe'ed > project. The first user I gave it to dropped it in "c:\Program Files" > and it didn't work. 
> > It turned out this is because I am putting wgnuplot.exe in the same > directory as everything else and when pyPlot.py popen's the executables, > it fails when there is a space in the path. > > I tried every combination I could think of putting the path in quotes, > back slashing the quotes, double (and triple) back slashing the back > slashes, back slashing the spaces, double back slashing the spaces... > > I could come up with no answer that way. I ended up hacking pyPlot.py so > it will split the path to the wgnuplot.exe file, os.chdir into the same > dir as the executable, run wgnuplot.exe without any path information, > then os.chdir back into the original cwd. > > I'm not sure what repercussions this hack will have (seems like it might > be bad in a threaded app). Is there a better solution? > > Thanks, > ^Roman > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > -- There are three types of people in this world: those who make things happen, those who watch things happen and those who wonder what happened. - Mary Kay Ash From pajer at iname.com Thu Jan 27 13:56:54 2005 From: pajer at iname.com (Gary) Date: Thu, 27 Jan 2005 13:56:54 -0500 Subject: [SciPy-user] gplt broken when there is a space in the path to wgnuplot.exe? In-Reply-To: <41F93BF0.8020908@baesystems.com> References: <41F93BF0.8020908@baesystems.com> Message-ID: <41F93976.1050302@iname.com> Roman Milner wrote: > Hello. I've recently been using scipy and including it in a py2exe'ed > project. The first user I gave it to dropped it in "c:\Program Files" > and it didn't work. > > How about c:\Progra~1\... Sometimes that works for me. -g From roman.milner at baesystems.com Thu Jan 27 16:34:29 2005 From: roman.milner at baesystems.com (Roman Milner) Date: Thu, 27 Jan 2005 13:34:29 -0800 Subject: [SciPy-user] gplt broken when there is a space in the path to wgnuplot.exe? In-Reply-To: <41F93976.1050302@iname.com> References: <41F93BF0.8020908@baesystems.com> <41F93976.1050302@iname.com> Message-ID: <41F95E65.8040509@baesystems.com> Do you know the details of how this works? Do you take the first 7 characters of the path name, add a ~, then I suppose the number represents how many paths match that particular 7 character string or something? How do you go about ordering all the paths that can't be disambiguated by the first 7 characters? I have to be able to support any path and can't hard code "Program Files". So I would have to know how to translate any path to the "7 characters ~#" style. Thanks! ^Roman Gary wrote: > How about c:\Progra~1\... > > Sometimes that works for me. > > -g > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From joe at enthought.com Thu Jan 27 18:46:05 2005 From: joe at enthought.com (Joe Cooper) Date: Thu, 27 Jan 2005 17:46:05 -0600 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS Message-ID: <41F97D3D.4010608@enthought.com> Hi all, There's something gone awry in recent scipy_core CVS. When I last packaged it up, it worked OK with bdist_rpm, but now it fails with the following traceback: Traceback (most recent call last): File "setup.py", line 75, in ? 
setup_package() File "setup.py", line 36, in setup_package configs.append(mod.configuration(parent_path=local_path)) File "/home/joe/redhat/SOURCES/scipy_core/build/bdist.linux-i686/rpm/BUILD/Scipy_core-0.3.3_132.2284/scipy_base/setup_scipy_base.py", line 104, in configuration _nc_compiled_base_ext = _config_compiled_base( File "/home/joe/redhat/SOURCES/scipy_core/build/bdist.linux-i686/rpm/BUILD/Scipy_core-0.3.3_132.2284/scipy_base/setup_scipy_base.py", line 37, in _config_compiled_base os.path.join(local_path, source)) File "/home/joe/redhat/SOURCES/scipy_core/build/bdist.linux-i686/rpm/BUILD/Scipy_core-0.3.3_132.2284/scipy_base/setup_scipy_base.py", line 19, in _temp_copy s = open(_from).read() IOError: [Errno 2] No such file or directory: 'scipy_base/_compiled_base.c' error: Bad exit status from /home/joe/redhat/tmp/rpm-tmp.11674 (%build) RPM build errors: Bad exit status from /home/joe/redhat/tmp/rpm-tmp.11674 (%build) error: command 'rpmbuild' failed with exit status 1 The file does exist: [joe at feynman scipy_core]$ find . -name _compiled_base.c ./scipy_base/_compiled_base.c A simple "build" doesn't have this problem, nor does "bdist". Any clues for me? Thanks! From answer at tnoo.net Sat Jan 22 19:37:06 2005 From: answer at tnoo.net (=?iso-8859-1?q?Martin_L=FCthi?=) Date: Sat, 22 Jan 2005 15:37:06 -0900 Subject: [SciPy-user] SystemError in optimize.bisplrep Message-ID: Hi There is a consistent SystemError when I try to use optimize.bisplrep on a certain data set. ================= Python 2.4 (#1, Nov 30 2004, 09:18:19) [GCC 3.3.3] on linux2 help(scipy) gives VERSION 0.3.2_302.4546 (CVS version from about January 20) help(Numeric) gives VERSION 23.1 ================= numerix Numeric 23.1 iopt,kx,ky,m= 0 3 3 16 nxest,nyest,nmax= 9 9 9 lwrk1,lwrk2,kwrk= 1298 571 20 xb,xe,yb,ye= 1711.91585 1102.19325 -2237.94674 -1573.73858 eps,s 1.E-16 10.3431458 Traceback (most recent call last): File "fehler.py", line 21, in ? xvelospl = scipy.interpolate.bisplrep(vc[:,0], vc[:,1], vc[:,2]) File "/usr/local/lib/python2.4/site-packages/scipy/interpolate/fitpack.py", line 611, in bisplrep tx,ty,nxest,nyest,wrk,lwrk1,lwrk2) SystemError: error return without exception set ================== This code produces the error ================== import scipy vc = scipy.array([[ 1.71191585e+03, -2.23794674e+03, 4.75308808e+00], [ 2.12372140e+03, -1.93888718e+03, 5.81923099e+00], [ 3.35389426e+03, -1.04998143e+03, 1.62242033e+00], [ 2.94453166e+03, -1.34979104e+03, 4.88224816e+00], [ 2.53225224e+03, -1.64269489e+03, 8.05746675e+00], [ 2.23717430e+03, -1.22890371e+03, 4.83692941e+00], [ 1.35903071e+03, -1.88168793e-02, 1.74331390e+00], [ 1.65124228e+03, -4.07642789e+02, 1.90797462e+00], [ 1.89958300e+03, -5.17000000e+02, 2.26234838e+00], [ 1.40869886e+03, -9.83926136e+02, 3.04278075e+00], [ 1.69451200e+03, -1.28935600e+03, 4.54354841e+00], [ 1.73692665e+03, -9.59722949e+02, 3.16536871e+00], [ 2.49206700e+03, -7.63554000e+02, 2.68046469e+00], [ 2.14106300e+03, -1.31310900e+03, 5.31786607e+00], [ 7.18383558e+02, -1.74427543e+03, 2.47771836e+00], [ 1.10219325e+03, -1.57373858e+03, 2.79857398e+00]]) spl = scipy.interpolate.bisplrep(vc[:,0], vc[:,1], vc[:,2]) ================== The error does not occur with different data. Thanks for any help! 
Martin

--
Martin Lüthi answer at tnoo.net

From Fernando.Perez at colorado.edu Thu Jan 27 19:08:38 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Thu, 27 Jan 2005 17:08:38 -0700
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41F97D3D.4010608@enthought.com>
References: <41F97D3D.4010608@enthought.com>
Message-ID: <41F98286.4030608@colorado.edu>

Joe Cooper wrote:
> Hi all,
>
> There's something gone awry in recent scipy_core CVS. When I last
> packaged it up, it worked OK with bdist_rpm, but now it fails with the
> following traceback:
[...]

I actually don't have a solution to this, the last version I built an RPM for was CVS from a few weeks ago:

In [2]: scipy.__version__
Out[2]: '0.3.2_300.4526'

It worked fine. But I want to mention something in this thread, because it's RPM related. If anyone is going to have a look at the scipy rpm build process, it might be worth fixing a few (very minor) issues. I had also promised to pass some of this info to others (esp. S. Walton), so here it goes.

I recently set up a group of Fedora3 repositories for our systems, with multiple architectures and therefore different ATLAS releases (which implies different Numeric/Scipy RPMs as well). For the most part the process went quite well; I finally got it down to the following:

cd /path/to/scipy/source/dir
cd scipy_core
pybrpm-arch Scipy_core
cd ..
pybrpm-arch SciPy

where pybrpm-arch is a simple shell script (attached) which automates the building and installation of an rpm via bdist_rpm. But the packages had to be split into Scipy_core and SciPy (with that funny capitalization), otherwise things won't install, since Scipy_core contains the critical scipy_distutils.

I'm not sure where the discussion stands on what to include into scipy_core and what should go in the main package. Currently, scipy_core includes:

/scipy_base
/scipy_distutils
/scipy_test
/weave

and SciPy contains

/gui_thread
/scipy

(all of this in /usr/lib/python2.3/site-packages). I don't have a problem with that, I just list it here for reference. The only thing I'd like to see fixed is the awkward naming of the packages: why not be consistent and just name the two proper RPMs

scipy_core
scipy

This post is mainly to just ask for this minor change, and provide the install scripts which may be of use to others for managing Fedora-based environments. They all rely on a few environment variables which need to be correctly configured. This is from my root/.tcshrc:

##############################################################################
#
# Local yum configuration
#
alias yinst 'yum -y install'
alias yup 'yum -y update'

# Fedora Release version, as seen by YUM.
setenv RELEASEVER 3

# Architecture-specific flag, as per the ATLAS binaries convention
setenv ARCH P4SSE2

# Yum allows expanding $YUM0-9 variables in its .conf files, so by setting
# this variable we can manage architecture-specific repositories.
setenv YUM0 /usr/local/installers/yum/fc${RELEASEVER}
setenv YUM1 $ARCH
setenv YUM2 ${YUM0}-arch/${YUM1}
##############################################################################

My local yum repo is organized as:

root at planck[yum]# d /usr/local/installers/yum
total 20
drwxr-xr-x 3 root 4096 Jan 27 15:47 fc3/
drwxr-xr-x 5 root 4096 Jan 7 12:00 fc3-arch/
drwxr-xr-x 3 root 4096 Jan 19 18:28 fc3-shared/
-rw-r--r-- 1 root 984 Jan 10 16:24 README
-rwxr-xr-x 1 root 95 Jan 4 11:53 update-repo*

where fc3-arch contains the architecture-specific repositories:

root at planck[fc3-arch]# d /usr/local/installers/yum/fc3-arch
total 12
drwxr-xr-x 3 root 4096 Jan 25 12:34 P4SSE2/
drwxr-xr-x 3 root 4096 Jan 19 18:29 P4SSE2_2HT/
drwxr-xr-x 3 root 4096 Jan 19 18:29 PIIISSE1/

and update-repo is a trivial one-liner I run whenever any repo changes (it's symlinked from all of them):

root at planck[yum]# cat update-repo
#!/bin/sh
# as of FedoraCore3, this is the proper command to update a repository
createrepo .

#########################

This setup allows me to manage multiple heterogeneous machines which fetch their updates from a single NFS-shared group of yum repos, without any conflicts and using the same config files (in /etc/yum.repos.d) for all. Here are the relevant repo files:

root at planck[yum.repos.d]# cat local.repo
### Local packages
# These are directories with locally available rpms. Remember to run
# 'createrepo .' when new rpms get added, so that the repodata/ subdir needed
# by yum is updated.
[local]
name = Local packages (Fedora Core $releasever)
baseurl = file://$YUM0
enabled=1

#########################

root at planck[yum.repos.d]# cat local-arch.repo
### Architecture-dependent local packages
# These are directories with locally available rpms. Remember to run
# 'createrepo .' when new rpms get added, so that the repodata/ subdir needed
# by yum is updated.
[local-arch]
name = Architecture-dependent local packages (Arch: $YUM1)
baseurl = file://$YUM2
enabled=1

#########################

root at planck[yum.repos.d]# cat local-shared.repo
### Architecture-dependent local packages
# These are directories with locally available rpms. Remember to run
# 'createrepo .' when new rpms get added, so that the repodata/ subdir needed
# by yum is updated.
# The -shared repo contains RPMs which were written to go into /usr/local/, and
# hence are effectively shared via NFS. This repository should only be
# enabled on the host which hosts /usr/local.
[local-shared]
name = Shared local packages
baseurl = file://$YUM0-shared
enabled=1

####################

I've also written some code to auto-generate architecture-specific RPMs out of the scipy.org ATLAS binary tarballs, which I can provide if anyone is interested.

All this, combined with some more scripts to auto-generate kickstart install scripts, gives a pretty reasonable solution for managing the problem of multiple machines, with multiple architectures and in-house managed RPMs for rapidly changing code (like scipy, ipython and matplotlib), while keeping my sanity and time to do my official job (research). We can now go from blank hard disk to fully updated box in about 1 hour, with about 3 minutes of human intervention. And the machines stay up to date via yum, including regarding our own managed packages.

Hopefully some of this is useful to others.

Cheers, f.

ps. And Joe, good luck with your bug :)

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pybrpm-arch
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pybrpminst
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pybrpm-noarch
URL:

From cookedm at physics.mcmaster.ca Thu Jan 27 19:54:01 2005
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Thu, 27 Jan 2005 19:54:01 -0500
Subject: [SciPy-user] bug in fmin_bfgs?
In-Reply-To: <20050127002750.GA2402@csail.mit.edu> (Jason Rennie's message of "Wed, 26 Jan 2005 19:27:50 -0500")
References: <20050124205501.GA4492@csail.mit.edu> <20050124211218.GI2434@csail.mit.edu> <20050127002750.GA2402@csail.mit.edu>
Message-ID:

Jason Rennie writes:
> On Wed, Jan 26, 2005 at 05:59:51PM -0500, David M. Cooke wrote:
>> BTW, fmin_bfgs != lbfgsb.fmin_l_bfgs_b
>
> Figured the lbfgsb code might be better to use since it's the limited
> memory version and it's an interface to the original fortran code.
> But, I haven't had any success getting lbfgsb.fmin_l_bfgs_b to
> work... (keep getting "failed in converting 15th argument `lsave'"
> error)

I'm wanting to use this routine, and now come across this error also. (Plus, I originally wrote the wrapper :-) I'm guessing you're using a 64-bit platform, like me? It's an error where Numeric.Int (the generic 'int' type of no specified size) is used instead of the size-specific Numeric.Int32. I've attached a patch to issue 197 on the scipy bug tracker.

Quick dissection of the problem: Numeric.Int is 'l' -- which maps to PyArray_LONG at the C level. lsave is a logical in the Fortran code, so it should have the same size as a Fortran integer, which should have the same size as a Fortran real == 4 bytes, or so says the g77 manual. The F2PY wrapper checks for lsave having elements of type PyArray_INT, or C ints. Here's the rub: on 64-bit platforms, sizeof(long) == 8 bytes, whereas sizeof(int) == 4 bytes (at least on mine; on some 64-bit platforms sizeof(int) would be 8). So PyArray_LONG != PyArray_INT.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From joe at enthought.com Thu Jan 27 20:02:50 2005
From: joe at enthought.com (Joe Cooper)
Date: Thu, 27 Jan 2005 19:02:50 -0600
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
Message-ID: <41F98F3A.5030906@enthought.com>

Hi again all,

(What follows is wild conjecture based on a very cursory reading of the setup_scipy_base.py by someone (me) who doesn't speak very good Python and has no clue how distutils does what it does.)

It looks like the problem appeared with the added support for Numeric/numarray choice in this bit here:

def _config_compiled_base(package, local_path, numerix_prefix, macro, info):
    """_config_compiled_base returns the Extension object for an
    Numeric or numarray specific version of _compiled_base.
    """
    from scipy_distutils.system_info import dict_append
    from scipy_distutils.core import Extension
    from scipy_distutils.misc_util import dot_join
    module = numerix_prefix + "_compiled_base"
    source = module + '.c'
    _temp_copy(os.path.join(local_path, "_compiled_base.c"),
               os.path.join(local_path, source))
    sources = [source]
    ...snip...

So, there's a copy into the build tree of _nc_compiled_base.c, which gets where it is supposed to go. But then when building within that tree, it is again looking for the _compiled_base.c file to copy somewhere, which doesn't exist.
I don't see how to fix it, though...looks like the function is being called twice.

Will keep digging...

From mcosta at fc.up.pt Mon Jan 24 07:52:48 2005
From: mcosta at fc.up.pt (Miguel Dias Costa)
Date: Mon, 24 Jan 2005 12:52:48 +0000
Subject: [SciPy-user] distribution fit()
Message-ID: <41F4EFA0.9060305@fc.up.pt>

Hello all,

I'm trying to fit a Frechet distribution to some data and have had some success by comparing the histogram to the pdf and minimizing the sum of squared differences, but only after spending some time trying to get the distribution.fit() method to work, without success. This seems to take the actual data, not the histogram, and minimize a "negative log likelihood function" (nnlf(?) in the code).

The fit() method only rearranges the arguments and keywords and calls optimize.fmin:

def fit(self, data, *args, **kwds):
    loc0, scale0 = map(kwds.get, ['loc', 'scale'],[0.0, 1.0])
    Narg = len(args)
    if Narg != self.numargs:
        if Narg > self.numargs:
            raise ValueError, "Too many input arguments."
        else:
            args += (1.0,)*(self.numargs-Narg)
    # location and scale are at the end
    x0 = args + (loc0, scale0)
    return optimize.fmin(self.nnlf,x0,args=(ravel(data),),disp=0)

which should allow me to call distribution.fit(data) since it fills the missing parameters. But then optimize.fmin passes to nnlf the following arguments

fsim[0] = apply(func,(x0,)+args)

that is (x0,args), which in my case would amount to ((c, loc, scale),ravel(data)) or, by default, ((1.0,0.0,1.0),ravel(data)). However, nnlf expects:

def nnlf(self, *args):
    # - sum (log pdf(x, theta))
    # where theta are the parameters (including loc and scale)
    #
    try:
        x = args[-1]
        loc = args[-2]
        scale = args[-3]
        args = args[:-3]
    except IndexError:
        raise ValueError, "Not enough input arguments."

which fails, but it seems to me that it should expect something like

def nnlf(self, *args):
    # - sum (log pdf(x, theta))
    # where theta are the parameters (including loc and scale)
    #
    try:
        x = args[-1]
        loc = args[-2][-2]
        scale = args[-2][-1]
        args = args[-2]
    except IndexError:
        raise ValueError, "Not enough input arguments."

which works at first but the method still fails to find the parameters. Am I completely missing the point here?

Thanks in advance.

Miguel Costa

P.S. (Almost) unrelated, there seems to be a missing "from math import *" in /scipy/optimize/anneal.py, at least I needed it.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mike.kreiner at gmail.com Tue Jan 25 16:32:04 2005
From: mike.kreiner at gmail.com (Mike Kreiner)
Date: Tue, 25 Jan 2005 16:32:04 -0500
Subject: [SciPy-user] scipy.signal.convolve2d() clips off part of my image
Message-ID: <86af7b6905012513324c7ea33e@mail.gmail.com>

I posted this message earlier to comp.lang.python, but figured this would be a better place to go.

I'm using the convolve2d(image, mask) function from scipy to blur an image. The image I'm using is 512 by 384. When I display the image that convolve2d returns, the left-most square of pixels (388 by 388) turns out fine, blurred and everything; however, the right side of the image is all black.
I've uploaded the practice image to: http://tinypic.com/1g3iox
The output image is: http://tinypic.com/1g3iv9

here's what I entered at the interactive window:

>>> import scipy
>>> img = scipy.imread("c:\\practice.jpg",flatten=True)
>>> img.shape
(384, 512)
>>> mask = (1.0/115)*scipy.array([[2,4,5,4,2],[4,9,12,9,4],[5,12,15,12,5],[4,9,12,9,4],[2,4,5,4,2]])
>>> blurredImg = scipy.signal.convolve2d(img, mask)
>>> scipy.imsave("c:\\blurred.jpg",blurredImg)

Also, I noticed that the shape attribute is (384, 512), even though Windows and my image editor say the image is 512 by 384. Could this have something to do with the reason convolve2d() only works right on the left-most 388 by 388 pixels? Thanks for any help.

-Mike Kreiner

From joe at enthought.com Thu Jan 27 22:03:18 2005
From: joe at enthought.com (Joe Cooper)
Date: Thu, 27 Jan 2005 21:03:18 -0600
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41F98F3A.5030906@enthought.com>
References: <41F98F3A.5030906@enthought.com>
Message-ID: <41F9AB76.7010500@enthought.com>

Joe Cooper wrote:
> Hi again all,
>
> (What follows is wild conjecture based on a very cursory reading of the
> setup_scipy_base.py by someone (me) who doesn't speak very good Python
> and has no clue how distutils does what it does.)
>
> It looks like the problem appeared with the added support for
> Numeric/numarray choice in this bit here:
>
> def _config_compiled_base(package, local_path, numerix_prefix, macro,
> info):
> """_config_compiled_base returns the Extension object for an
> Numeric or numarray specific version of _compiled_base.
> """
> from scipy_distutils.system_info import dict_append
> from scipy_distutils.core import Extension
> from scipy_distutils.misc_util import dot_join
> module = numerix_prefix + "_compiled_base"
> source = module + '.c'
> _temp_copy(os.path.join(local_path, "_compiled_base.c"),
> os.path.join(local_path, source))
> sources = [source]
> ...snip...
>
> So, there's a copy into the build tree of _nc_compiled_base.c, which
> gets where it is supposed to go. But then when building within that
> tree, it is again looking for the _compiled_base.c file to copy
> somewhere, which doesn't exist. I don't see how to fix it,
> though...looks like the function is being called twice.
>
> Will keep digging...

OK, I see what's happening. The Numeric/Numarray selection is happening during sdist, which is premature. This selection should only happen for the build and bdist phases. This will have to be fixed for the next release regardless of SciPy, since it will only build from a CVS checkout or snapshot as-is, and then only for dist types that don't sdist as part of the build.

Now if I only knew how distutils worked I might be able to fix it... (I'll give it a try anyway.)

From jmiller at stsci.edu Thu Jan 27 22:04:56 2005
From: jmiller at stsci.edu (Todd Miller)
Date: Thu, 27 Jan 2005 22:04:56 -0500
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41F98F3A.5030906@enthought.com>
References: <41F98F3A.5030906@enthought.com>
Message-ID: <1106881496.5363.2.camel@jaytmiller.comcast.net>

On Thu, 2005-01-27 at 19:02 -0600, Joe Cooper wrote:
> Hi again all,
>
> (What follows is wild conjecture based on a very cursory reading of the
> setup_scipy_base.py by someone (me) who doesn't speak very good Python
> and has no clue how distutils does what it does.)
> > It looks like the problem appeared with the added support for > Numeric/numarray choice in this bit here: > > def _config_compiled_base(package, local_path, numerix_prefix, macro, info): > """_config_compiled_base returns the Extension object for an > Numeric or numarray specific version of _compiled_base. > """ > from scipy_distutils.system_info import dict_append > from scipy_distutils.core import Extension > from scipy_distutils.misc_util import dot_join > module = numerix_prefix + "_compiled_base" > source = module + '.c' > _temp_copy(os.path.join(local_path, "_compiled_base.c"), > os.path.join(local_path, source)) > sources = [source] > ...snip... > > So, there's a copy into the build tree of _nc_compiled_base.c, which > gets where it is supposed to go. But then when building within that > tree, it is again looking for the _compiled_base.c file to copy > somewhere, which doesn't exist. I don't see how to fix it, > though...looks like the function is being called twice. > > Will keep digging... Sorry Joe. This is definitely a numarray/numerix problem. I'm taking a look now. Todd From jmiller at stsci.edu Fri Jan 28 00:23:03 2005 From: jmiller at stsci.edu (Todd Miller) Date: Fri, 28 Jan 2005 00:23:03 -0500 Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS In-Reply-To: <1106881496.5363.2.camel@jaytmiller.comcast.net> References: <41F98F3A.5030906@enthought.com> <1106881496.5363.2.camel@jaytmiller.comcast.net> Message-ID: <1106889784.5363.43.camel@jaytmiller.comcast.net> On Thu, 2005-01-27 at 22:04 -0500, Todd Miller wrote: > On Thu, 2005-01-27 at 19:02 -0600, Joe Cooper wrote: > > Hi again all, > > > > (What follows is wild conjecture based on a very cursory reading of the > > setup_scipy_base.py by someone (me) who doesn't speak very good Python > > and has no clue how distutils does what it does.) > > > > It looks like the problem appeared with the added support for > > Numeric/numarray choice in this bit here: > > > > def _config_compiled_base(package, local_path, numerix_prefix, macro, info): > > """_config_compiled_base returns the Extension object for an > > Numeric or numarray specific version of _compiled_base. > > """ > > from scipy_distutils.system_info import dict_append > > from scipy_distutils.core import Extension > > from scipy_distutils.misc_util import dot_join > > module = numerix_prefix + "_compiled_base" > > source = module + '.c' > > _temp_copy(os.path.join(local_path, "_compiled_base.c"), > > os.path.join(local_path, source)) > > sources = [source] > > ...snip... > > > > So, there's a copy into the build tree of _nc_compiled_base.c, which > > gets where it is supposed to go. But then when building within that > > tree, it is again looking for the _compiled_base.c file to copy > > somewhere, which doesn't exist. I don't see how to fix it, > > though...looks like the function is being called twice. > > > > Will keep digging... > > Sorry Joe. This is definitely a numarray/numerix problem. I'm taking a > look now. 
A workaround for now, once _na_compiled_base.c and _nc_compiled_base.c already exist in the source tree (they can just be copied from _compiled_base.c if they don't exist), is to patch setup_scipy_base.py like this:

Index: setup_scipy_base.py
===================================================================
RCS file: /home/cvsroot/world/scipy_core/scipy_base/setup_scipy_base.py,v
retrieving revision 1.28
diff -c -r1.28 setup_scipy_base.py
*** setup_scipy_base.py 10 Jan 2005 19:28:55 -0000 1.28
--- setup_scipy_base.py 28 Jan 2005 03:33:30 -0000
***************
*** 33,40 ****
      from scipy_distutils.misc_util import dot_join
      module = numerix_prefix + "_compiled_base"
      source = module + '.c'
!     _temp_copy(os.path.join(local_path, "_compiled_base.c"),
!                os.path.join(local_path, source))
      sources = [source]
      sources = [os.path.join(local_path,x) for x in sources]
      depends = sources
--- 33,40 ----
      from scipy_distutils.misc_util import dot_join
      module = numerix_prefix + "_compiled_base"
      source = module + '.c'
!     # _temp_copy(os.path.join(local_path, "_compiled_base.c"),
!     #            os.path.join(local_path, source))
      sources = [source]
      sources = [os.path.join(local_path,x) for x in sources]
      depends = sources

The whole _temp_copy() scheme is a kludge to trick the distutils into building the same .c file two ways, once for Numeric (-DNUMERIC=1) and once for numarray (-DNUMARRAY=1). Different headers are included depending on the flag so the _compiled_base.c needs to be compiled twice; distutils .o caching gets in the way; making temporary copies effectively circumvents the caching. Tonight's workaround above assumes the _compiled_base.c is duplicated manually instead.

The root problem appears to be that "_compiled_base.c" is not included in the BUILD tree for some reason. Because _compiled_base.c is not copied to BUILD..., when setup_scipy_base.py goes to make temporary copies of it, it fails.

HTH for now,
Todd

From jmiller at stsci.edu Fri Jan 28 10:21:24 2005
From: jmiller at stsci.edu (Todd Miller)
Date: Fri, 28 Jan 2005 10:21:24 -0500
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
In-Reply-To: <1106889784.5363.43.camel@jaytmiller.comcast.net>
References: <41F98F3A.5030906@enthought.com> <1106881496.5363.2.camel@jaytmiller.comcast.net> <1106889784.5363.43.camel@jaytmiller.comcast.net>
Message-ID: <1106925684.29664.25.camel@halloween.stsci.edu>

On Fri, 2005-01-28 at 00:23, Todd Miller wrote:
> On Thu, 2005-01-27 at 22:04 -0500, Todd Miller wrote:
> > On Thu, 2005-01-27 at 19:02 -0600, Joe Cooper wrote:
> > > Hi again all,
> > >
> > > (What follows is wild conjecture based on a very cursory reading of the
> > > setup_scipy_base.py by someone (me) who doesn't speak very good Python
> > > and has no clue how distutils does what it does.)
> > >
> > > It looks like the problem appeared with the added support for
> > > Numeric/numarray choice in this bit here:
> > >
> > > def _config_compiled_base(package, local_path, numerix_prefix, macro, info):
> > > """_config_compiled_base returns the Extension object for an
> > > Numeric or numarray specific version of _compiled_base.
> > > """
> > > from scipy_distutils.system_info import dict_append
> > > from scipy_distutils.core import Extension
> > > from scipy_distutils.misc_util import dot_join
> > > module = numerix_prefix + "_compiled_base"
> > > source = module + '.c'
> > > _temp_copy(os.path.join(local_path, "_compiled_base.c"),
> > > os.path.join(local_path, source))
> > > sources = [source]
> > > ...snip...
> > > > > > So, there's a copy into the build tree of _nc_compiled_base.c, which > > > gets where it is supposed to go. But then when building within that > > > tree, it is again looking for the _compiled_base.c file to copy > > > somewhere, which doesn't exist. I don't see how to fix it, > > > though...looks like the function is being called twice. > > > > > > Will keep digging... > > > > Sorry Joe. This is definitely a numarray/numerix problem. I'm taking a > > look now. > > A work around for now, once _na_compiled_base.c and _nc_compiled_base.c > already exist in the source tree (they can just be copied from > _compiled_base.c if they don't exist), is to patch setup_scipy_base.py > like this: > > Index: setup_scipy_base.py > =================================================================== > RCS > file: /home/cvsroot/world/scipy_core/scipy_base/setup_scipy_base.py,v > retrieving revision 1.28 > diff -c -r1.28 setup_scipy_base.py > *** setup_scipy_base.py 10 Jan 2005 19:28:55 -0000 1.28 > --- setup_scipy_base.py 28 Jan 2005 03:33:30 -0000 > *************** > *** 33,40 **** > from scipy_distutils.misc_util import dot_join > module = numerix_prefix + "_compiled_base" > source = module + '.c' > ! _temp_copy(os.path.join(local_path, "_compiled_base.c"), > ! os.path.join(local_path, source)) > sources = [source] > sources = [os.path.join(local_path,x) for x in sources] > depends = sources > --- 33,40 ---- > from scipy_distutils.misc_util import dot_join > module = numerix_prefix + "_compiled_base" > source = module + '.c' > ! # _temp_copy(os.path.join(local_path, "_compiled_base.c"), > ! # os.path.join(local_path, source)) > sources = [source] > sources = [os.path.join(local_path,x) for x in sources] > depends = sources > > The whole _temp_copy() scheme is a kludge to trick the distutils into > building the same .c file two ways, once for Numeric (-DNUMERIC=1) and > once for numarray (-DNUMARRAY=1). Different headers are included > depending on the flag so the _compiled_base.c needs to be compiled > twice; distutuils .o caching gets in the way; making temporary copies > effectively circumvents the caching. Tonight's work around above > assumes the _compiled_base.c is duplicated manually instead. > > The root problem appears to be that "_compiled_base.c" is not included > in the BUILD tree for some reason. Because _compiled_base.c is not > copied to BUILD..., when setup_scipy_base.py goes to make temporary > copies of it, it fails. > > HTH for now, > Todd I think I have something a little better now. Rather than remove the _temp_copy() call, I added explicit one line includes to scipy/MANIFEST.in and scipy/scipy_core/MANIFEST.in. Those ensure that _compiled_base.c is included in the BUILD tree and so _temp_copy() works. I had a heck of a time with getting the distutils generated MANIFEST file to work. After you cvs update, first force the creation of a new MANIFEST like this: % cd scipy % cvs update % python setup.py sdist --force-manifest --manifest-only Then build the RPM as normal like this: % python setup.py bdist_rpm Please let me know directly (cc me on any scipy-dev or scipy-users post) if you have any more trouble. 
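The one-line includes themselves were not quoted in the message; in distutils MANIFEST.in syntax an entry of that sort would presumably read as follows (the exact path is an assumption):

include scipy_base/_compiled_base.c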
Regards,
Todd

From pearu at scipy.org Fri Jan 28 10:37:53 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 28 Jan 2005 09:37:53 -0600 (CST)
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
In-Reply-To: <1106925684.29664.25.camel@halloween.stsci.edu>
References: <41F98F3A.5030906@enthought.com> <1106881496.5363.2.camel@jaytmiller.comcast.net> <1106925684.29664.25.camel@halloween.stsci.edu>
Message-ID:

Hi,

I just wanted to let you guys know that scipy_distutils supports more robust solutions for building extension modules that depend on the environment than that in the current setup_scipy_base.py. See for example scipy_distutils/{tests,examples} and Lib/xxx.

I'll look into this issue within the next few days for a proper fix.

Pearu

From pearu at scipy.org Fri Jan 28 11:37:56 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 28 Jan 2005 10:37:56 -0600 (CST)
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
In-Reply-To:
References: <41F98F3A.5030906@enthought.com> <1106881496.5363.2.camel@jaytmiller.comcast.net>
Message-ID:

On Fri, 28 Jan 2005, Pearu Peterson wrote:
> I'll look into this issue within the next few days for a proper fix.

Ok, it's now fixed in scipy CVS.

Pearu

From Fernando.Perez at colorado.edu Fri Jan 28 11:42:06 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Fri, 28 Jan 2005 09:42:06 -0700
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41F97D3D.4010608@enthought.com>
References: <41F97D3D.4010608@enthought.com>
Message-ID: <41FA6B5E.3080502@colorado.edu>

[sorry if this is repeated, but as far as I can tell it never reached the list, so I'm trying a resend]
From pearu at scipy.org Fri Jan 28 11:51:28 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 28 Jan 2005 10:51:28 -0600 (CST)
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41FA6B5E.3080502@colorado.edu>
References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu>
Message-ID:

On Fri, 28 Jan 2005, Fernando Perez wrote:
> The only thing I'd like to see fixed is the awkward naming of the packages:
> why not be consistent and just name the two proper RPMs
>
> scipy_core
> scipy

Fixed in CVS. Anyone using their own scripts to build scipy may need to fix the scripts accordingly.

Pearu

From joe at enthought.com Fri Jan 28 12:06:55 2005
From: joe at enthought.com (Joe Cooper)
Date: Fri, 28 Jan 2005 11:06:55 -0600
Subject: [SciPy-user] Re: bdist_rpm build error in scipy_core from CVS
In-Reply-To:
References: <41F98F3A.5030906@enthought.com> <1106881496.5363.2.camel@jaytmiller.comcast.net>
Message-ID: <41FA712F.2090502@enthought.com>

Pearu Peterson wrote:
> On Fri, 28 Jan 2005, Pearu Peterson wrote:
>> I'll look into this issue within the next few days for a proper fix.
>
> Ok, it's now fixed in scipy CVS.

Thanks for the rapid response, Pearu. I'll give it a try in a few minutes, and let you know how it goes.

From joe at enthought.com Fri Jan 28 12:15:15 2005
From: joe at enthought.com (Joe Cooper)
Date: Fri, 28 Jan 2005 11:15:15 -0600
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To:
References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu>
Message-ID: <41FA7323.5030804@enthought.com>

Pearu Peterson wrote:
> On Fri, 28 Jan 2005, Fernando Perez wrote:
>> The only thing I'd like to see fixed is the awkward naming of the
>> packages:
>> why not be consistent and just name the two proper RPMs
>>
>> scipy_core
>> scipy
>
> Fixed in CVS. Anyone using their own scripts to build scipy may need to
> fix the scripts accordingly.

Excellent! (The inconsistent camelcaps always bugged me...but I was such a complainer about everything else, I figured I'd let it slide. ;-)

Thanks!

From rspringuel at smcvt.edu Fri Jan 28 15:09:47 2005
From: rspringuel at smcvt.edu (R.
Padraic Springuel)
Date: Fri, 28 Jan 2005 15:09:47 -0500
Subject: [SciPy-user] Universal constants
Message-ID: <41FA9C0B.7060004@smcvt.edu>

Does anybody know where within scipy (or its associated packages) pi and e are defined? Also, are any other constants predefined? I'm thinking of adding a few others to the list but don't want to duplicate effort if it's already done and would like to keep everything in one place.

--
R. Padraic Springuel

From jdhunter at ace.bsd.uchicago.edu Fri Jan 28 15:14:47 2005
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Fri, 28 Jan 2005 14:14:47 -0600
Subject: [SciPy-user] Universal constants
In-Reply-To: <41FA9C0B.7060004@smcvt.edu> ("R. Padraic Springuel"'s message of "Fri, 28 Jan 2005 15:09:47 -0500")
References: <41FA9C0B.7060004@smcvt.edu>
Message-ID:

>>>>> "R" == R Padraic Springuel writes:

R> Does anybody know where within scipy (or its associated
R> packages) pi and e are defined? Also, are any other constants
R> predefined? I'm thinking of adding a few others to the list
R> but don't want to duplicate effort if it's already done and
R> would like to keep everything in one place.

>>> from scipy import pi, e
>>> pi
3.1415926535897931
>>> e
2.7182818284590451

From prabhu_r at users.sf.net Fri Jan 28 15:25:35 2005
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Sat, 29 Jan 2005 01:55:35 +0530
Subject: [SciPy-user] Universal constants
In-Reply-To: <41FA9C0B.7060004@smcvt.edu>
References: <41FA9C0B.7060004@smcvt.edu>
Message-ID: <16890.40895.476462.633076@monster.linux.in>

>>>>> "RPS" == R Padraic Springuel writes:

RPS> Does anybody know where within scipy (or its associated
RPS> packages) pi and e are defined? Also, are any other
RPS> constants predefined? I'm thinking of adding a few others to
RPS> the list but don't want to duplicate effort if it's already
RPS> done and would like to keep everything in one place.

Both Numeric/numarray and the math modules define e and pi.

python -c "import Numeric; print Numeric.e, Numeric.pi"
python -c "import math; print math.e, math.pi"

cheers, prabhu

From oliphant at ee.byu.edu Fri Jan 28 15:50:57 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 28 Jan 2005 13:50:57 -0700
Subject: [SciPy-user] scipy.signal.convolve2d() clips off part of my image
In-Reply-To: <86af7b6905012513324c7ea33e@mail.gmail.com>
References: <86af7b6905012513324c7ea33e@mail.gmail.com>
Message-ID: <41FAA5B1.2090403@ee.byu.edu>

Mike Kreiner wrote:
>I posted this message earlier to comp.lang.python, but figured this
>would be a better place to go.
>
>I'm using the convolve2d(image, mask) function from scipy to blur an
>image. The image I'm using is 512 by 384. When I display the image that
>convolve2d returns, the left-most square of pixels (388 by 388)
>turns out fine, blurred and everything; however, the right side of the
>image is all black.
>
Could be a bug. I'll look into it. I'm rewriting the convolve2d function right now, to be much faster.
>I've uploaded the practice image to: http://tinypic.com/1g3iox
>The output image is: http://tinypic.com/1g3iv9
>
>here's what I entered at the interactive window:
>
>>>> import scipy
>>>> img = scipy.imread("c:\\practice.jpg",flatten=True)
>>>> img.shape
>(384, 512)
>>>> mask = (1.0/115)*scipy.array([[2,4,5,4,2],[4,9,12,9,4],[5,12,15,12,5],[4,9,12,9,4],[2,4,5,4,2]])
>>>> blurredImg = scipy.signal.convolve2d(img, mask)
>>>> scipy.imsave("c:\\blurred.jpg",blurredImg)
>
>Also, I noticed that the shape attribute is (384, 512), even though
>Windows and my image editor say the image is 512 by 384. Could this
>have something to do with the reason convolve2d() only works right on
>the left-most 388 by 388 pixels? Thanks for any help.
>
No, the two are unrelated. Scipy's convention is to report the shape of an array as (rows, columns). A lot of images are reported in terms of width x height. But width is the number of columns, and height is the number of rows. So a 512 x 384 (width x height) image would have a shape of 384 rows and 512 columns --- img.shape = (384, 512)

Thanks for the report.

-Travis

From oliphant at ee.byu.edu Fri Jan 28 16:56:23 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 28 Jan 2005 14:56:23 -0700
Subject: [SciPy-user] scipy.signal.convolve2d() clips off part of my image
In-Reply-To: <86af7b6905012513324c7ea33e@mail.gmail.com>
References: <86af7b6905012513324c7ea33e@mail.gmail.com>
Message-ID: <41FAB507.4060404@ee.byu.edu>

Mike Kreiner wrote:
>I posted this message earlier to comp.lang.python, but figured this
>would be a better place to go.
>
>I'm using the convolve2d(image, mask) function from scipy to blur an
>image. The image I'm using is 512 by 384. When I display the image that
>convolve2d returns, the left-most square of pixels (388 by 388)
>turns out fine, blurred and everything; however, the right side of the
>image is all black.
>
Mike,

I will look into what is going on, as it looks like the "full" linear convolution may have a bug. Try using the "same" flag (this gives an output that is the same size as the input). This seems to work.

convolve2d(image, mask, mode="same")

-Travis

From oliphant at ee.byu.edu Fri Jan 28 17:01:38 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 28 Jan 2005 15:01:38 -0700
Subject: [SciPy-user] scipy.signal.convolve2d() clips off part of my image
In-Reply-To: <86af7b6905012513324c7ea33e@mail.gmail.com>
References: <86af7b6905012513324c7ea33e@mail.gmail.com>
Message-ID: <41FAB642.4040304@ee.byu.edu>

Mike Kreiner wrote:
>I posted this message earlier to comp.lang.python, but figured this
>would be a better place to go.
>
>I'm using the convolve2d(image, mask) function from scipy to blur an
>image. The image I'm using is 512 by 384. When I display the image that
>convolve2d returns, the left-most square of pixels (388 by 388)
>turns out fine, blurred and everything; however, the right side of the
>image is all black.
>
A tiny bug in the code that you hit when using the "full" flag (which returns the full linear convolution) with unequal dimensions was causing this. This is fixed in CVS now. But if you use the "same" flag (which returns the same size output image as the input image -- and is usually what you want anyway in situations like this), the problem doesn't arise.

Perhaps changing the default to "same" would be a good idea. Thoughts?
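Putting the suggestion above together with the original report, a minimal sketch of the blur using the "same" flag (file names are placeholders):

import scipy
from scipy.signal import convolve2d

# greyscale image; shape is (rows, columns) by scipy's convention
img = scipy.imread("practice.jpg", flatten=True)
mask = (1.0/115)*scipy.array([[2,4,5,4,2],
                              [4,9,12,9,4],
                              [5,12,15,12,5],
                              [4,9,12,9,4],
                              [2,4,5,4,2]])
# mode="same" keeps the output the same shape as img and sidesteps the
# "full"-mode bug with unequal dimensions described above
blurred = convolve2d(img, mask, mode="same")
scipy.imsave("blurred.jpg", blurred)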
-Travis

From stephen.walton at csun.edu Fri Jan 28 17:12:58 2005
From: stephen.walton at csun.edu (Stephen Walton)
Date: Fri, 28 Jan 2005 14:12:58 -0800
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41FA7323.5030804@enthought.com>
References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com>
Message-ID: <41FAB8EA.5010201@csun.edu>

Joe Cooper wrote:
> Pearu Peterson wrote:
>
>> Fixed in CVS. Anyone using their own scripts to build scipy may need
>> to fix the scripts accordingly.
>
> Excellent! (The inconsistent camelcaps always bugged me...but I was
> such a complainer about everything else, I figured I'd let it slide. ;-)

I just tried

python setup.py bdist_rpm

in the root scipy directory after a "cvs update -Pd" and got

+ env 'CFLAGS=-O2 -g -pipe -m32 -march=i386 -mtune=pentium4' python setup.py build
Traceback (most recent call last):
File "setup.py", line 34, in ?
import scipy_distutils
ImportError: No module named scipy_distutils
error: Bad exit status from /var/tmp/rpm-tmp.32984 (%build)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.32984 (%build)
error: command 'rpmbuild' failed with exit status 1

"python setup.py bdist_rpm" succeeded in the scipy_core subdirectory, however, and installing the scipy_core RPM before building the main scipy RPM seems to fix the problem.

Now on to Fernando's helpful set of scripts :-). Kudos on these significant steps towards simplification of the scipy build and install process.

Stephen

From rkern at ucsd.edu Fri Jan 28 17:25:27 2005
From: rkern at ucsd.edu (Robert Kern)
Date: Fri, 28 Jan 2005 14:25:27 -0800
Subject: [SciPy-user] Universal constants
In-Reply-To: <41FA9C0B.7060004@smcvt.edu>
References: <41FA9C0B.7060004@smcvt.edu>
Message-ID: <41FABBD7.20706@ucsd.edu>

R. Padraic Springuel wrote:
> Does anybody know where within scipy (or its associated packages) pi and
> e are defined? Also, are any other constants predefined? I'm thinking
> of adding a few others to the list but don't want to duplicate effort if
> it's already done and would like to keep everything in one place.

As other people mentioned, pi and e are defined in Numeric, numarray, and math. As for adding new constants, I think a new file, scipy_base/constants.py, would be a good place for them. I'd rather not pollute the global namespace (i.e. what you get when you "from scipy import *") with more constants, though.

--
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter

From oliphant at ee.byu.edu Fri Jan 28 17:36:55 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 28 Jan 2005 15:36:55 -0700
Subject: [SciPy-user] Universal constants
In-Reply-To: <41FABBD7.20706@ucsd.edu>
References: <41FA9C0B.7060004@smcvt.edu> <41FABBD7.20706@ucsd.edu>
Message-ID: <41FABE87.4030704@ee.byu.edu>

Robert Kern wrote:
> R. Padraic Springuel wrote:
>
>> Does anybody know where within scipy (or its associated packages) pi
>> and e are defined? Also, are any other constants predefined? I'm
>> thinking of adding a few others to the list but don't want to
>> duplicate effort if it's already done and would like to keep
>> everything in one place.
>
> As other people mentioned, pi and e are defined in Numeric, numarray,
> and math. As for adding new constants, I think a new file,
> scipy_base/constants.py, would be a good place for them. I'd rather
> not pollute the global namespace (i.e.
what you get when you "from > scipy import *") with more constants, though. Agreed. From pearu at scipy.org Fri Jan 28 18:04:51 2005 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 28 Jan 2005 17:04:51 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FAB8EA.5010201@csun.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> Message-ID: On Fri, 28 Jan 2005, Stephen Walton wrote: > I just tried > > python setup.py bdist_rpm > > in the root scipy directory after a "cvs update -Pd" and got > > + env 'CFLAGS=-O2 -g -pipe -m32 -march=i386 -mtune=pentium4' python setup.py > build > Traceback (most recent call last): > File "setup.py", line 34, in ? > import scipy_distutils > ImportError: No module named scipy_distutils > error: Bad exit status from /var/tmp/rpm-tmp.32984 (%build) Hmm, if there would be a way to set PYTHONPATH=/path/to/cvs/scipy/scipy_core for bdist_rpm commands then it should also work without installing scipy_core. Pearu From Fernando.Perez at colorado.edu Fri Jan 28 19:33:10 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 28 Jan 2005 17:33:10 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> Message-ID: <41FAD9C6.5020305@colorado.edu> Pearu Peterson wrote: > > On Fri, 28 Jan 2005, Stephen Walton wrote: > > >>I just tried >> >>python setup.py bdist_rpm >> >>in the root scipy directory after a "cvs update -Pd" and got >> >>+ env 'CFLAGS=-O2 -g -pipe -m32 -march=i386 -mtune=pentium4' python setup.py >>build >>Traceback (most recent call last): >>File "setup.py", line 34, in ? >> import scipy_distutils >>ImportError: No module named scipy_distutils >>error: Bad exit status from /var/tmp/rpm-tmp.32984 (%build) > > > Hmm, if there would be a way to set > > PYTHONPATH=/path/to/cvs/scipy/scipy_core > > for bdist_rpm commands then it should also work without installing > scipy_core. Well, with the attached patch against current CVS (updated minutes ago), I get a successful build for the overall rpm with: root at planck[scipy]# python setup.py bdist_rpm [...] root at planck[dist]# d /usr/local/installers/src/scipy/dist total 21184 -rw-r--r-- 1 root 13052650 Jan 28 17:27 scipy-0.3.2_302.4556-1.i386.rpm -rw-r--r-- 1 root 2905231 Jan 28 17:26 scipy-0.3.2_302.4556-1.src.rpm -rw-r--r-- 1 root 2909055 Jan 28 17:21 scipy-0.3.2_302.4556.tar.gz -rw-r--r-- 1 root 2783499 Jan 28 17:27 scipy-debuginfo-0.3.2_302.4556-1.i386.rpm This ONLY works if the bdist_rpm command is run straight from the scipy source top-level dir, but I think that's a reasonable assumption to make. Before applying this, it should be tested on other platforms, though (though I did try to make the patch portable). Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... 
Name: scipy_setup.diff
Type: text/x-patch
Size: 1876 bytes
Desc: not available
URL:

From Fernando.Perez at colorado.edu Fri Jan 28 19:42:19 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Fri, 28 Jan 2005 17:42:19 -0700
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41FAD9C6.5020305@colorado.edu>
References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu>
Message-ID: <41FADBEB.20103@colorado.edu>

Fernando Perez wrote:
> Well, with the attached patch against current CVS (updated minutes ago), I get
> a successful build for the overall rpm with:
>
> root at planck[scipy]# python setup.py bdist_rpm

I should note, however, that I can't get scipy_core to build at all with current CVS (even without my patch). Something was broken in the scipy_core build, here's what I get:

[...]

build/src/scipy_base/_nc_compiled_base.c: At top level:
build/src/scipy_base/_nc_compiled_base.c:636: error: syntax error before '*' token
build/src/scipy_base/_nc_compiled_base.c: In function `build_output':
build/src/scipy_base/_nc_compiled_base.c:641: error: `nout' undeclared (first use in this function)
build/src/scipy_base/_nc_compiled_base.c:641: error: `outarr' undeclared (first use in this function)
build/src/scipy_base/_nc_compiled_base.c:641: warning: return makes pointer from integer without a cast
build/src/scipy_base/_nc_compiled_base.c: In function `map_PyFunc':
build/src/scipy_base/_nc_compiled_base.c:653: error: `PyArrayObject' undeclared (first use in this function)
build/src/scipy_base/_nc_compiled_base.c:653: error: `inputarrays' undeclared (first use in this function)
build/src/scipy_base/_nc_compiled_base.c:653: error: `outputarrays' undeclared (first use in this function)
build/src/scipy_base/_nc_compiled_base.c:664: error: syntax error before ')' token
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -fPIC -DNUMERIC -DNUMERIC_VERSION="\"23.7\"" -Iscipy_base -I/usr/include/python2.3 -I/usr/include/python2.3 -c build/src/scipy_base/_nc_compiled_base.c -o build/temp.linux-i686-2.3/build/src/scipy_base/_nc_compiled_base.o" failed with exit status 1
error: Bad exit status from /var/tmp/rpm-tmp.5211 (%build)

Is this related to the problems Todd was discussing this morning? If the scipy_core build is fixed, it seems that with my patch we could have an easy way to build rpms for all of scipy in a few lines, without needing scipy_distutils installed. But I don't know how to fix the current breakage.

Cheers,
f

From Fernando.Perez at colorado.edu Fri Jan 28 19:54:12 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Fri, 28 Jan 2005 17:54:12 -0700
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41FAB8EA.5010201@csun.edu>
References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu>
Message-ID: <41FADEB4.8020408@colorado.edu>

Stephen Walton wrote:
> Now on to Fernando's helpful set of scripts :-). Kudos on these
> significant steps towards simplification of the scipy build and install
> process.
The other part of my setup (besides automated kickstart install building, which I won't discuss for now; ask me if interested) is how to quickly build RPMs out of the binary ATLAS tarballs which Pearu nicely compiles for us at scipy.org. This is accomplished with the attached code. Instructions:

1. Simply put all of this somewhere in your filesystem.

2. Make a src/ subdir there and put in src/ the ATLAS tarballs you want to make RPMs for from http://www.scipy.org/download/atlasbinaries/linux. These should be the uncompressed, untouched .tgz files you download. Don't modify them in any way. Note that it is CRITICAL for this to work that they are named atlas_Linux_<ARCH>.tgz. <ARCH> is the string which the scripts and configs in my other email all refer to, and which allows you to handle multiple architectures and YUM in harmony (via the $YUM variables, ARCH is stored as $YUM1). (A sketch of the architecture scan this naming implies is given after this exchange.)

3. Copy the COPYRIGHT notice to src/, so each RPM gets built with the right notice. This is to comply with the copyright requirements of ATLAS.

4. Read the top-level docstring for make.py; it explains the running, which is trivial.

This will give you rpms which you can then drop in your architecture-specific yum repositories (yum${RELEASEVER}-arch/${ARCH}), and you can then handle with the same config multiple machines with different architectures cleanly.

I don't claim this to be perfect, but it works for me. If it breaks badly, you either get to keep all the pieces, or you can send me the _fixed_ version :)

Regards,
f

ps. If this works for people, I'd like to ask Pearu to keep his current naming conventions for the future. It would be nice if we could keep these scripts in use for the long run without having to tweak them too much. Changing ATLAS version numbers is trivial (one variable), but mucking with the naming structure would require fiddling with the code.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: make.py
Type: application/x-python
Size: 4363 bytes
Desc: not available
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: atlas-base.spec
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: COPYRIGHT
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: rpmmacros
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: rpmrc
URL:

From stephen.walton at csun.edu Fri Jan 28 22:40:11 2005
From: stephen.walton at csun.edu (Stephen Walton)
Date: Fri, 28 Jan 2005 19:40:11 -0800
Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS
In-Reply-To: <41FADBEB.20103@colorado.edu>
References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu>
Message-ID: <41FB059B.7030902@csun.edu>

Fernando Perez wrote:
> I should note, however, that I can't get scipy_core to build at all
> with current CVS (even without my patch). Something was broken in the
> scipy_core build, here's what I get:
>
> [...]
>
> build/src/scipy_base/_nc_compiled_base.c: At top level:

Well, I get a successful build from the top level with your patch, Fernando. Examining the output I get, though, seems to show that _nc_compiled_base.c isn't getting compiled at all. Why might that be? I have numarray 1.2a from CVS installed.
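The attached make.py was scrubbed by the list software, so its internals are a guess; going only by the atlas_Linux_<ARCH>.tgz naming convention of step 2 above, the architecture scan it implies could look something like this:

import glob, os, re

# collect the <ARCH> strings from src/atlas_Linux_<ARCH>.tgz tarballs
archs = []
for tarball in glob.glob(os.path.join('src', 'atlas_Linux_*.tgz')):
    m = re.match(r'atlas_Linux_(.+)\.tgz$', os.path.basename(tarball))
    if m:
        archs.append(m.group(1))
print '*** Found architectures:', archs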
Stephen From jrennie at csail.mit.edu Sat Jan 29 00:08:28 2005 From: jrennie at csail.mit.edu (Jason Rennie) Date: Sat, 29 Jan 2005 00:08:28 -0500 Subject: [SciPy-user] bug in fmin_bfgs? In-Reply-To: References: <20050124205501.GA4492@csail.mit.edu> <20050124211218.GI2434@csail.mit.edu> <20050127002750.GA2402@csail.mit.edu> Message-ID: <20050129050828.GA7496@csail.mit.edu> On Thu, Jan 27, 2005 at 07:54:01PM -0500, David M. Cooke wrote: > I'm wanting to use this routine, and now come across this error also. > (Plus, I originally wrote the wrapper :-) I'm guessing you're using a > 64-bit platform, like me? Actually, no. (!?!?) I'm using a good-old 32-bit Intel processor. Does it work for you on 32-bit machines? Great to know that you're also interested in using this code. Even better that you wrote the wrapper! :) I'll take another look at this in a few days when I have some free time. Maybe try to use the CVS code with your patch. No time ATM: working toward a paper deadline! Jason From stephen.walton at csun.edu Sat Jan 29 00:18:25 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Fri, 28 Jan 2005 21:18:25 -0800 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FADEB4.8020408@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> Message-ID: <41FB1CA1.6030705@csun.edu> Fernando Perez wrote: > 4. Read the top-level docstring for make.py, it explains the running, > which is trivial. I followed your instructions; "python make.py all" on my system spits out *** Found architectures: ['ATHLONSSE1'] *** Saving spec file: atlas-ATHLONSSE1.spec *** Building RPM for arch: ATHLONSSE1 error: No compatible architectures found for build *** Done with arch: ATHLONSSE1 Am I missing some magic in ~/.rpmmacros? > ps. If this works for people, I'd like to ask Pearu to keep his > current naming conventions for the future. They aren't Pearu's conventions. They are the default you get if you do 'make rpm' in the ATLAS/lib/${ARCH} subdirectory. Stephen From Fernando.Perez at colorado.edu Sat Jan 29 03:37:54 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 29 Jan 2005 01:37:54 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FB059B.7030902@csun.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> <41FB059B.7030902@csun.edu> Message-ID: <41FB4B62.1030505@colorado.edu> Stephen Walton wrote: > Fernando Perez wrote: > > >>I should note, however, that I can't get scipy_core to build at all >>with current CVS (even without my patch). Something was broken in the >>scipy_core build, here's what I get: >> >>[...] >> >>build/src/scipy_base/_nc_compiled_base.c: At top level: > > > Well, I get a successful build from the top level with your patch, > Fernando. Examining the output I get, though, seems to show that > _nc_compiled_base.c isn't getting compiled at all. Why might that be? > I have numarray 1.2a from CVS installed. I have no idea, I'm still using Numeric and I only tested a build. But the patch seems to be a starting point towards a simpler build process, it's just going to require a bit of love from the distutils gurus in the house (the scipy build process is kind of complex and I don't understand it well). 
Cheers, f From Fernando.Perez at colorado.edu Sat Jan 29 04:04:01 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 29 Jan 2005 02:04:01 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FB1CA1.6030705@csun.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> Message-ID: <41FB5181.6060207@colorado.edu>

Stephen Walton wrote:
> Fernando Perez wrote:
>
>> 4. Read the top-level docstring for make.py; it explains how to run it,
>> which is trivial.
>
> I followed your instructions; "python make.py all" on my system spits out
>
> *** Found architectures: ['ATHLONSSE1']
> *** Saving spec file: atlas-ATHLONSSE1.spec
> *** Building RPM for arch: ATHLONSSE1
> error: No compatible architectures found for build
> *** Done with arch: ATHLONSSE1
>
> Am I missing some magic in ~/.rpmmacros?

Mmh, the 'no compatible archs...' message is coming straight from the rpm command, not from make.py. I tested only with PIIISSE1, P4SSE2 and P4SSE2_HT architectures, but not with ATHLONs. I noticed the BuildArchitectures flag in the atlas-base.spec file is set to i686. You may have to read up on the syntax for that flag, to see whether it allows multiple archs to be listed (and how). I would try a simple comma-separated list, or a variation on that (I can't seem to find an example of the proper syntax for multiple archs). Otherwise, you could just make it another parameter of the expansion: just put in there something like __BUILDARCH__ as the value, and add a key for that to make.py. The code which does the generation is based on what I think is the smallest Python templating system possible, yet one which I found to be surprisingly powerful:

def expand_template(tpl,dct):
    """Replace all occurrences of the keys of a dict in the template
    string, by the actual values in the dict."""
    rex = re.compile('|'.join(['(?P<%s>%s)' % (k,k) for k in dct]))
    return rex.sub(lambda match: dct[match.group()],tpl)

The beauty of this simple 2-liner is that it allows you to define on the fly your own syntax for what constitutes a 'key' to be evaluated. With this, you can cleanly auto-generate files where things like $ or % have a special meaning. These are normally a PITA to build with other templating strategies (like Python's % interpolation or Itpl's $ one), because you end up jumping through multiple hoops for escaping those to get them into the final output. So feel free to add more tags as needed, and perhaps send back the improved versions so this can be used by more people. (A short usage sketch of this two-liner appears below.)

Here's some info about this tag, from http://www-eleves-isia.cma.fr/Doc/rpm-4.2.2/spec:

BuildArchitectures: This tag specifies the architecture which the resulting binary package will run on. Typically this is a CPU architecture like sparc, i386. The string 'noarch' is reserved for specifying that the resulting binary package is platform independent. Typical platform independent packages are html, perl, python, java, and ps packages.

>> ps. If this works for people, I'd like to ask Pearu to keep his
>> current naming conventions for the future.
>
> They aren't Pearu's conventions. They are the default you get if you do
> 'make rpm' in the ATLAS/lib/${ARCH} subdirectory.

I think you meant just 'make', or 'make dist', or something similar.
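[To make the templating trick concrete, here is a small hedged demonstration of the two-liner quoted above. The spec-file fragment and the __ARCH__/__BUILDARCH__ keys are invented for the example; the message only suggests adding a __BUILDARCH__ key.]

import re

def expand_template(tpl,dct):
    """Replace all occurrences of the keys of a dict in the template
    string, by the actual values in the dict."""
    rex = re.compile('|'.join(['(?P<%s>%s)' % (k,k) for k in dct]))
    return rex.sub(lambda match: dct[match.group()],tpl)

# since we choose the key syntax ourselves, the template may contain
# $ and % freely without any escaping
spec = "Release: __ARCH__\nBuildArchitectures: __BUILDARCH__\n"
print expand_template(spec, {'__ARCH__': 'P4SSE2', '__BUILDARCH__': 'noarch'})
# prints:
#   Release: P4SSE2
#   BuildArchitectures: noarch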
The ATLAS source downloads don't include RPM targets; I wouldn't have had to write this stuff if they did :) Cheers, f From pearu at scipy.org Sat Jan 29 04:21:27 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 29 Jan 2005 03:21:27 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FADBEB.20103@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> Message-ID:

On Fri, 28 Jan 2005, Fernando Perez wrote:

> Fernando Perez wrote:
>
>> Well, with the attached patch against current CVS (updated minutes ago), I
>> get a successful build for the overall rpm with:
>>
>> root at planck[scipy]# python setup.py bdist_rpm
>
> I should note, however, that I can't get scipy_core to build at all with
> current CVS (even without my patch). Something was broken in the scipy_core
> build; here's what I get:
>
> [...]
>
> build/src/scipy_base/_nc_compiled_base.c: At top level:
> build/src/scipy_base/_nc_compiled_base.c:636: error: syntax error before '*' token
> build/src/scipy_base/_nc_compiled_base.c: In function `build_output':
> build/src/scipy_base/_nc_compiled_base.c:641: error: `nout' undeclared (first use in this function)
> build/src/scipy_base/_nc_compiled_base.c:641: error: `outarr' undeclared (first use in this function)
> build/src/scipy_base/_nc_compiled_base.c:641: warning: return makes pointer from integer without a cast
> build/src/scipy_base/_nc_compiled_base.c: In function `map_PyFunc':
> build/src/scipy_base/_nc_compiled_base.c:653: error: `PyArrayObject' undeclared (first use in this function)

I wonder what's in [...]? Undeclared PyArrayObject indicates that Numeric header files were not found. Also, try

rm -rf build MANIFEST scipy_core/{build,MANIFEST,MANIFEST.in}

before running setup.py build. Pearu From Fernando.Perez at colorado.edu Sat Jan 29 04:45:34 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 29 Jan 2005 02:45:34 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> Message-ID: <41FB5B3E.2090502@colorado.edu>

Pearu Peterson wrote:
>> Fernando Perez wrote:
>>
>>> Well, with the attached patch against current CVS (updated minutes ago), I
>>> get a successful build for the overall rpm with:
>>>
>>> root at planck[scipy]# python setup.py bdist_rpm
>>
>> I should note, however, that I can't get scipy_core to build at all with
>> current CVS (even without my patch). Something was broken in the scipy_core
>> build; here's what I get:
[...]
> I wonder what's in [...]? Undeclared PyArrayObject indicates that Numeric
> header files were not found.

Don't worry about what was there; the news is good...

> Also, try
>
> rm -rf build MANIFEST scipy_core/{build,MANIFEST,MANIFEST.in}
>
> before running setup.py build.
OK, that did it:

root at planck[scipy_core]# python setup.py bdist_rpm >& bdist_rpm.log
root at planck[scipy_core]# head bdist_rpm.log
numpy_info:
  FOUND:
    define_macros = [('NUMERIC_VERSION', '"\\"23.7\\""')]
    include_dirs = ['/usr/include/python2.3']
numarray_info:
  NOT AVAILABLE
x11_info:
  FOUND:
root at planck[scipy_core]# tail bdist_rpm.log
+ rm -rf /var/tmp/scipy_core-buildroot
+ exit 0
Executing(--clean): /bin/sh -e /var/tmp/rpm-tmp.11379
+ umask 022
+ cd /usr/local/installers/src/scipy/scipy_core/build/bdist.linux-i686/rpm/BUILD
+ rm -rf scipy_core-0.3.3_132.2284
+ exit 0
moving build/bdist.linux-i686/rpm/SRPMS/scipy_core-0.3.3_132.2284-1.src.rpm -> dist
moving build/bdist.linux-i686/rpm/RPMS/i386/scipy_core-0.3.3_132.2284-1.i386.rpm -> dist
moving build/bdist.linux-i686/rpm/RPMS/i386/scipy_core-debuginfo-0.3.3_132.2284-1.i386.rpm -> dist

So it looks like we're good to go! This made me think of the following, which is part of ipython's setup.py:

# BEFORE importing distutils, remove MANIFEST. distutils doesn't properly
# update it when the contents of directories change.
if os.path.exists('MANIFEST'): os.remove('MANIFEST')

It might be worth adding a bit of such protection to scipy's setup files, no? This would make the process more robust, I think. Before applying my patch, I'd prefer to have a bit more confirmation from you/others that it doesn't break everything. Cheers, and thanks for the suggestions! f From pearu at scipy.org Sat Jan 29 05:40:36 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 29 Jan 2005 04:40:36 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FB5B3E.2090502@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> <41FB5B3E.2090502@colorado.edu> Message-ID:

On Sat, 29 Jan 2005, Fernando Perez wrote:
>> Also, try
>>
>> rm -rf build MANIFEST scipy_core/{build,MANIFEST,MANIFEST.in}
>>
>> before running setup.py build.
>
> OK, that did it:
> So it looks like we're good to go! This made me think of the following,
> which is part of ipython's setup.py:
>
> # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly
> # update it when the contents of directories change.
> if os.path.exists('MANIFEST'): os.remove('MANIFEST')
>
> It might be worth adding a bit of such protection to scipy's setup files, no?
> This would make the process more robust, I think.
>
> Before applying my patch, I'd prefer to have a bit more confirmation from
> you/others that it doesn't break everything.

I learned a long time ago that using the distutils MANIFEST feature is a source of trouble, especially in a superpackage like scipy (distutils was not designed to support such packages; that's why we have scipy_distutils). So, in scipy all sources should be specified in setup_*.py files, and that avoids any effects caused by the distutils MANIFEST (a small sketch of what that looks like follows below). The recent failures were due to the fact that a line was added to MANIFEST.in. This is now fixed, but people should remove their MANIFEST files manually; or, we could also apply your patch. But note that the effect from your patch is only temporary; once the broken MANIFEST file is removed, the patch is not needed anymore (until someone puts files to MANIFEST.in again;-).
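[As an illustration of the policy Pearu describes above -- sources listed explicitly in setup_*.py files rather than gathered via MANIFEST.in -- a hedged sketch might look like this; the package and file names are invented, not taken from scipy's actual setup files.]

from scipy_distutils.core import Extension

# every source file is named explicitly, so the build does not depend
# on whatever a stale MANIFEST happens to contain
ext = Extension('scipy.foo._foomodule',
                sources=['foo/_foomodule.c', 'foo/foo_helpers.c'])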
Although the patch is completely harmless, I wouldn't apply it, because it is unnecessary and would (maybe indirectly) indicate to developers that it is OK to use MANIFEST.in, while it's not. Finally, I recommend reading http://www.scipy.org/development/developscipy.txt to anyone who is interested in the scipy setup structure. Pearu From Fernando.Perez at colorado.edu Sat Jan 29 05:47:26 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 29 Jan 2005 03:47:26 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> <41FB5B3E.2090502@colorado.edu> Message-ID: <41FB69BE.6080600@colorado.edu>

Pearu Peterson wrote:
> Recent failures were due to the fact that a line was added to MANIFEST.in.
> This is now fixed, but people should remove their MANIFEST files manually;
> or, we could also apply your patch. But note that the effect from your
> patch is only temporary; once the broken MANIFEST file is removed, the patch
> is not needed anymore (until someone puts files to MANIFEST.in again;-).
>
> Although the patch is completely harmless, I wouldn't apply this patch
> because it is unnecessary and would (maybe indirectly) indicate to
> developers that it is OK to use MANIFEST.in, while it's not.

Well, the original patch was just to add the scipy_core dir into both sys.path and the environment, so that bdist_rpm would work at the top level for the main scipy package. The MANIFEST stuff was _not_ included in that patch; I only pasted those lines into the message after your comment. Now, I don't know if you want to apply the actual patch or not, but I think it would be nice to be able to build both the _core and main rpms without having anything installed in between, no? Cheers, f From pearu at scipy.org Sat Jan 29 05:54:18 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 29 Jan 2005 04:54:18 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FB69BE.6080600@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> <41FB5B3E.2090502@colorado.edu> <41FB69BE.6080600@colorado.edu> Message-ID:

On Sat, 29 Jan 2005, Fernando Perez wrote:
> Well, the original patch was just to add the scipy_core dir into both
> sys.path and the environment, so that bdist_rpm would work at the top level
> for the main scipy package. The MANIFEST stuff was _not_ included in that
> patch; I only pasted those lines into the message after your comment.
>
> Now, I don't know if you want to apply the actual patch or not, but I think
> it would be nice to be able to build both the _core and main rpms without
> having anything installed in between, no?

I agree with you. I haven't reviewed your first patch, but I will. In my messages I was talking only about the patch that removes the MANIFEST file.
Pearu From pearu at scipy.org Sat Jan 29 09:09:50 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 29 Jan 2005 08:09:50 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FAD9C6.5020305@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> Message-ID:

On Fri, 28 Jan 2005, Fernando Perez wrote:

@@ -43,16 +61,6 @@
 #-------------------------------

 def setup_package(ignore_packages=[]):
-
-    if command_sdist and os.path.isdir('scipy_core'):
-        # Applying the same commands to scipy_core.
-        # Results can be found in scipy_core directory.
-        c = '%s %s %s' % (sys.executable,
-                          os.path.abspath(os.path.join('scipy_core','setup.py')),
-                          ' '.join(sys.argv[1:]))
-        print c
-        s = os.system(c)
-        assert not s,'failed on scipy_core'

Fernando, can you convince me that this codelet should be removed, as your patch does? The idea behind this code is that executing

  python setup.py install

in the cvs/scipy directory will also install the scipy_core packages, so that people wouldn't need to execute the same command in cvs/scipy/scipy_core directory. Pearu From pearu at scipy.org Sat Jan 29 09:22:24 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 29 Jan 2005 08:22:24 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> Message-ID:

On Sat, 29 Jan 2005, Pearu Peterson wrote:
> On Fri, 28 Jan 2005, Fernando Perez wrote:
>
> @@ -43,16 +61,6 @@
>  #-------------------------------
>
>  def setup_package(ignore_packages=[]):
> -
> -    if command_sdist and os.path.isdir('scipy_core'):
> -        # Applying the same commands to scipy_core.
> -        # Results can be found in scipy_core directory.
> -        c = '%s %s %s' % (sys.executable,
> -                          os.path.abspath(os.path.join('scipy_core','setup.py')),
> -                          ' '.join(sys.argv[1:]))
> -        print c
> -        s = os.system(c)
> -        assert not s,'failed on scipy_core'
>
> Fernando, can you convince me that this codelet should be removed, as your patch does?
>
> The idea behind this code is that executing
>
>   python setup.py install
>
> in the cvs/scipy directory will also install the scipy_core packages,
> so that people wouldn't need to execute the same command
> in cvs/scipy/scipy_core directory.

Actually, this idea holds only for commands like sdist, bdist_rpm, sdist_packagers, not for install. Pearu From ckkart at hoc.net Sat Jan 29 12:31:21 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 29 Jan 2005 18:31:21 +0100 Subject: [SciPy-user] MPI and weave Message-ID: <41FBC869.20109@hoc.net>

Hi, does anybody have any experience using weave.inline and Scientific.MPI together? Sometimes, even if the code has already been compiled, I get the following error message.

    import weave
  File "/usr/lib/python2.3/site-packages/weave/__init__.py", line 9, in ?
    from blitz_tools import blitz
  File "/usr/lib/python2.3/site-packages/weave/blitz_tools.py", line 9, in ?
    import converters
  File "/usr/lib/python2.3/site-packages/weave/converters.py", line 42, in ?
    default.insert(0,wx_spec.wx_converter())
  File "/usr/lib/python2.3/site-packages/weave/wx_spec.py", line 58, in __init__
    common_base_converter.__init__(self)
  File "/usr/lib/python2.3/site-packages/weave/c_spec.py", line 74, in __init__
    self.init_info()
  File "/usr/lib/python2.3/site-packages/weave/wx_spec.py", line 113, in init_info
    cxxflags = get_wxconfig('cxxflags')
  File "/usr/lib/python2.3/site-packages/weave/wx_spec.py", line 15, in get_wxconfig
    res,settings = commands.getstatusoutput(wxconfig + ' --' + flag)
  File "/usr/lib/python2.3/commands.py", line 54, in getstatusoutput
    text = pipe.read()
IOError: [Errno 4] Interrupted system call

Putting a time.sleep(2) somewhere at the beginning of the program helps (a retry sketch appears below). This happens on an openmosix cluster, i.e. a quasi-SMP machine without the need to log in to a remote machine. On another cluster, where each machine has NFS-mounted homes, the compilation seems to fail:

  File "/home/ise/kristuka/ck/lib/python/weave/inline_tools.py", line 322, in inline
    results = attempt_function_call(code,local_dict,global_dict)
  File "/home/ise/kristuka/ck/lib/python/weave/inline_tools.py", line 372, in attempt_function_call
    function_list = function_catalog.get_functions(code,module_dir)
  File "/home/ise/kristuka/ck/lib/python/weave/catalog.py", line 568, in get_functions
    function_list = self.get_cataloged_functions(code)
  File "/home/ise/kristuka/ck/lib/python/weave/catalog.py", line 487, in get_cataloged_functions
    for path in self.build_search_order():
  File "/home/ise/kristuka/ck/lib/python/weave/catalog.py", line 369, in build_search_order
    search_order.append(default_dir())
  File "/home/ise/kristuka/ck/lib/python/weave/catalog.py", line 176, in default_dir
    if not is_writable(path):
  File "/home/ise/kristuka/ck/lib/python/weave/catalog.py", line 135, in is_writable
    os.unlink(dummy)
OSError: [Errno 2] No such file or directory: '/home/ise/kristuka/.python23_compiled/dummy'

And time.sleep() doesn't help in this case. Any ideas? Regards, Christian From Fernando.Perez at colorado.edu Sat Jan 29 13:23:53 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 29 Jan 2005 11:23:53 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> Message-ID: <41FBD4B9.5000909@colorado.edu>

Pearu Peterson wrote:
> On Sat, 29 Jan 2005, Pearu Peterson wrote:
>
>> On Fri, 28 Jan 2005, Fernando Perez wrote:
>>
>> @@ -43,16 +61,6 @@
>>  #-------------------------------
>>
>>  def setup_package(ignore_packages=[]):
>> -
>> -    if command_sdist and os.path.isdir('scipy_core'):
>> -        # Applying the same commands to scipy_core.
>> -        # Results can be found in scipy_core directory.
>> -        c = '%s %s %s' % (sys.executable,
>> -                          os.path.abspath(os.path.join('scipy_core','setup.py')),
>> -                          ' '.join(sys.argv[1:]))
>> -        print c
>> -        s = os.system(c)
>> -        assert not s,'failed on scipy_core'
>>
>> Fernando, can you convince me that this codelet should be removed, as your patch does?
>>
>> The idea behind this code is that executing
>>
>>   python setup.py install
>>
>> in the cvs/scipy directory will also install the scipy_core packages,
>> so that people wouldn't need to execute the same command
>> in cvs/scipy/scipy_core directory.
>
> Actually, this idea holds only for commands like sdist, bdist_rpm,
> sdist_packagers, not for install.
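[Regarding the IOError (Errno 4, EINTR) in Christian's first traceback above: one local workaround, sketched here under the assumption that retrying the interrupted wx-config call is safe, is to wrap commands.getstatusoutput in a retry loop. This is a guess at a workaround, not a fix in weave itself.]

import commands, errno, time

def getstatusoutput_retry(cmd, retries=5):
    """Retry a shell command whose pipe read was interrupted by a signal."""
    for attempt in range(retries):
        try:
            return commands.getstatusoutput(cmd)
        except IOError, e:
            if e.errno != errno.EINTR or attempt == retries - 1:
                raise
            # let the interrupting signal (e.g. from MPI startup) pass
            time.sleep(0.5)

# e.g.: status, output = getstatusoutput_retry('wx-config --cxxflags')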
Oh, sorry, I completely misunderstood the intent of that code (note to self, don't write patches for third-party code in a rush). With the first part of the patch (fixing bdist_rpm from the top level) working, I guess this could mean that a _single_ setup.py bdist_rpm could then create all the necessary rpms. Even better :) Ah! I remember that when I tried it first, I seemed to have some problems with getting this child call to complete successfully, which may have been why I removed it. But a bit of double-checking would be a good idea; I'm not sure at this point if that really was the reason or not. Cheers, f From stephen.walton at csun.edu Sat Jan 29 13:34:15 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sat, 29 Jan 2005 10:34:15 -0800 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FB5181.6060207@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> <41FB5181.6060207@colorado.edu> Message-ID: <41FBD727.4010500@csun.edu>

Fernando Perez wrote:
> Stephen Walton wrote:
>
> Mmh, the 'no compatible archs...' message is coming straight from the
> rpm command,

Yeah, I found that out when I ran rpmbuild from the shell. I'll look into your other suggestions.

>> They aren't Pearu's conventions. They are the default you get if you
>> do 'make rpm' in the ATLAS/lib/${ARCH} subdirectory.
>
> I think you meant just 'make', or 'make dist', or something similar

Slip of the keyboard. I meant 'make tar'. Stephen From Fernando.Perez at colorado.edu Sat Jan 29 14:04:30 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 29 Jan 2005 12:04:30 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FBD727.4010500@csun.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> <41FB5181.6060207@colorado.edu> <41FBD727.4010500@csun.edu> Message-ID: <41FBDE3E.7030402@colorado.edu>

Stephen Walton wrote:
> Fernando Perez wrote:
>
>> Stephen Walton wrote:
>>
>> Mmh, the 'no compatible archs...' message is coming straight from the
>> rpm command,
>
> Yeah, I found that out when I ran rpmbuild from the shell. I'll look
> into your other suggestions.

OK, let me know how it goes or if you need more info from me. It would be nice if this could work for all users; it's a simple but handy utility to ease maintenance. Cheers, f From rspringuel at smcvt.edu Sat Jan 29 14:19:53 2005 From: rspringuel at smcvt.edu (R. Padraic Springuel) Date: Sat, 29 Jan 2005 14:19:53 -0500 Subject: [SciPy-user] Re: Universal Constants In-Reply-To: <20050129004238.3E74D3EB1B@www.scipy.com> References: <20050129004238.3E74D3EB1B@www.scipy.com> Message-ID: <41FBE1D9.80000@smcvt.edu> Thanks, that's what I decided to do. -- R.
Padraic Springuel From pearu at scipy.org Sat Jan 29 14:48:28 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 29 Jan 2005 13:48:28 -0600 (CST) Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> <41FB5B3E.2090502@colorado.edu> <41FB69BE.6080600@colorado.edu> Message-ID:

On Sat, 29 Jan 2005, Pearu Peterson wrote:
> On Sat, 29 Jan 2005, Fernando Perez wrote:
>
>> Well, the original patch was just to add the scipy_core dir into both
>> sys.path and the environment, so that bdist_rpm would work at the top level
>> for the main scipy package. The MANIFEST stuff was _not_ included in that
>> patch; I only pasted those lines into the message after your comment.
>>
>> Now, I don't know if you want to apply the actual patch or not, but I think
>> it would be nice to be able to build both the _core and main rpms without
>> having anything installed in between, no?
>
> I agree with you. I haven't reviewed your first patch, but I will.

I have applied your patch with some minor modifications to CVS. Although now one can build scipy without installing scipy_core, one must still have f2py installed. And f2py requires scipy_distutils. So, packagers still need to take into account that the correct order of installing software is:

  scipy_core
  f2py
  scipy

Thanks, Pearu From yichunwe at usc.edu Sat Jan 29 18:59:29 2005 From: yichunwe at usc.edu (Yichun Wei) Date: Sat, 29 Jan 2005 15:59:29 -0800 Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays? In-Reply-To: <41F16EB8.6080308@usc.edu> References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu> <41F16EB8.6080308@usc.edu> Message-ID: <41FC2361.5090709@usc.edu>

Hi Travis, I just read this message. I am reading the weave documentation but find it hard to figure out how to make a Python call inside the weave.inline expr strings. Could you give a simple example or point me to an example? If I implement all of this algorithm in weave using weave.inline, I think I have to implement the array summation, multiplying, and reshaping operations in C..., which scipy already does quite well. I think I will try to write the loops in weave first, with a Python call in the inner loop.

Travis Oliphant wrote:
> Use weave or f2py to write the loop -- of course if the inner portion
> of the loop is a Python call this may not speed things up too much.
> Another thing to do is to re-write the algorithm using a different
> approach.
>
> Python has pretty fast loops compared to other interpreted languages
> but it can still cause slow downs.

Thanks in advance! - yichun From pearu at cens.ioc.ee Sun Jan 30 14:26:21 2005 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 30 Jan 2005 21:26:21 +0200 (EET) Subject: [SciPy-user] ANN: F2PY - Fortran to Python Interface Generator Message-ID:

F2PY - Fortran to Python Interface Generator
--------------------------------------------

I am pleased to announce the ninth public release of F2PY, version 2.45.241_1926. The purpose of the F2PY project is to provide the connection between the Python and Fortran programming languages.
For more information, see

  http://cens.ioc.ee/projects/f2py2e/

Download:

  http://cens.ioc.ee/projects/f2py2e/2.x/F2PY-2-latest.tar.gz
  http://cens.ioc.ee/projects/f2py2e/2.x/F2PY-2-latest.win32.exe
  http://cens.ioc.ee/projects/f2py2e/2.x/scipy_distutils-latest.tar.gz
  http://cens.ioc.ee/projects/f2py2e/2.x/scipy_distutils-latest.win32.exe

What's new?
------------

* Added support for wrapping signed integers and processing .pyf.src template files.

* F2PY fortran objects have a _cpointer attribute holding a C pointer to the wrapped function or variable. When _cpointer is used as a callback argument, the overhead of the Python C/API is avoided, giving callback arguments the same performance as calling a Fortran or C function from Fortran or C, while at the same time retaining the flexibility of Python. (A usage sketch follows below.)

* Callback arguments can be built-in functions, fortran objects, and CObjects (held by the _cpointer attribute, for instance).

* New attribute: ``intent(aux)`` to save parameter values.

* New command line switches: --help-link and --link-<resource>

* Numerous bugs are fixed. Support for the ``usercode`` statement has been improved.

* Documentation updates.

Enjoy,
	Pearu Peterson

---------------
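[A hedged sketch of the _cpointer item above: flib, quadrature and integrand are hypothetical f2py-wrapped Fortran routines invented for this illustration, where quadrature takes a callback argument.]

import flib  # hypothetical f2py-generated extension module

# ordinary Python callback: flexible, but pays Python C/API overhead per call
slow = flib.quadrature(lambda x: x*x, 0.0, 1.0)

# passing the C pointer of another wrapped routine as the callback gives
# Fortran-to-Fortran call speed, per the release notes above
fast = flib.quadrature(flib.integrand._cpointer, 0.0, 1.0)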

F2PY 2.45.241_1926 - The Fortran to Python Interface Generator (30-Jan-05) From prabhu_r at users.sf.net Mon Jan 31 07:45:47 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Mon, 31 Jan 2005 18:15:47 +0530 Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays? In-Reply-To: <41FC2361.5090709@usc.edu> References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu> <41F16EB8.6080308@usc.edu> <41FC2361.5090709@usc.edu> Message-ID: <16894.10363.240968.834129@monster.linux.in>

>>>>> "YW" == Yichun Wei writes:

    YW> Hi Travis, I just read this message. I am reading the weave
    YW> documentation but find it hard to figure out how to make a
    YW> Python call inside the weave.inline expr strings. Could you
    YW> give a simple example or point me to an example?

Well, why do you need weave if you need to call Python from it? Why not write it in pure Python? If you are looking for speed, your weave code should be C. Weave lets you call C from within Python; doing the reverse is usually of no use. cheers, prabhu From yichunwe at usc.edu Mon Jan 31 14:00:11 2005 From: yichunwe at usc.edu (Yichun Wei) Date: Mon, 31 Jan 2005 11:00:11 -0800 Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays? In-Reply-To: <41F71EB4.3090504@usc.edu> References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu> <41F16EB8.6080308@usc.edu> <41F71EB4.3090504@usc.edu> Message-ID: <41FE803B.50108@usc.edu>

I have a Python function to compute a 1d convolution, and I need to evaluate this convolution on a 2d grid. So I loop through all the rows and columns of this grid, and the nested looping turned out to be slower than I expected. The inner call to the convolution function uses scipy's fftpack or Numeric, so I do not think that call can be sped up much by writing C code in weave. And to implement the inner call in C, I would have to reimplement everything from array reshaping and multiplying to convolving. I was just curious whether it would be a big win to write the for loops in C while keeping the inner-loop function the way it is (I think it is fast enough).

Prabhu Ramachandran wrote:
> Well, why do you need weave if you need to call Python from it? Why
> not write it in pure Python? If you are looking for speed, your weave
> code should be C. Weave lets you call C from within Python; doing the
> reverse is usually of no use.
>
> cheers,
> prabhu

Thank you, yichun From stephen.walton at csun.edu Mon Jan 31 14:33:34 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon, 31 Jan 2005 11:33:34 -0800 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FB5181.6060207@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> <41FB5181.6060207@colorado.edu> Message-ID: <41FE880E.8080902@csun.edu>

Fernando Perez wrote:
> Mmh, the 'no compatible archs...' message is coming straight from the
> rpm command, not from make.py. ... I noticed the BuildArchitectures
> flag in the atlas-base.spec file is set to i686.

OK, after some experimentation: BuildArchitectures consists of a space-separated list of architectures for which building the package is allowed.
It really is not needed unless there's some reason the package can't build on some architectures, or the package is scripts only, in which case setting BuildArchitectures to "noarch" is appropriate. To get an appropriately named RPM, you use the "--target" switch to rpmbuild. For example, I can run

rpmbuild --target=athlon -bb --rcfile rpmrc --clean --rmsource atlas-ATHLONSSE1_2.spec

on a P4 system after doing "python make.py spec". This creates atlas-3.6.0-ATHLONSSE1_2.athlon.rpm as I expect. But it doesn't really matter. If I wind up with atlas-3.6.0-ATHLONSSE1_2.i686.rpm, it still installs fine on my dual CPU Athlon system. And atlas-3.6.0-P4SSE2.athlon.rpm installs fine on a P4. It might be simplest to just add "--target=noarch" to the rpmbuild in make.py to ensure rpmbuild always runs. We're not actually compiling any code here, and the real architecture information is carried by the other part of the package name (the ATLAS ARCH string). This will create RPMs named things like atlas-3.6.0-P4SSE2.noarch.rpm.

> Otherwise, you could just make it another parameter of the expansion:
> just put in there something like __BUILDARCH__ as the value, and add a
> key for that to make.py.

Except that, as noted, the __BUILDARCH__ really needs to be the string after "--target=" in the rpmbuild command, so it isn't really an addition to the template. Summary of my suggestions: Delete BuildArchitectures from atlas-base.spec. Add "--target=noarch" to the rpmbuild command in make.py (a sketch of this change appears below). Stephen From Fernando.Perez at colorado.edu Mon Jan 31 14:57:32 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 31 Jan 2005 12:57:32 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FE880E.8080902@csun.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> <41FB5181.6060207@colorado.edu> <41FE880E.8080902@csun.edu> Message-ID: <41FE8DAC.40209@colorado.edu>

Hi Stephen, Stephen Walton wrote:
> Fernando Perez wrote:
>
>> Mmh, the 'no compatible archs...' message is coming straight from the
>> rpm command, not from make.py. ... I noticed the BuildArchitectures
>> flag in the atlas-base.spec file is set to i686.
>
> OK, after some experimentation: BuildArchitectures consists of a
> space-separated list of architectures for which building the package is
> allowed.
[...]
> Summary of my suggestions: Delete BuildArchitectures from
> atlas-base.spec. Add "--target=noarch" to the rpmbuild command in make.py.

OK, many thanks for tracking this down. I have a more generic question though, now that you're familiar with my approach: do you agree with it? What I concluded was that the RPM 'arch' flags were not sufficient to address the needs of something like ATLAS, since they lump everything since the Pentium Pro into i686, and are similarly coarse-grained for Athlon. That's why I decided to use the hack of using the release tag to encode an additional architecture flag according to ATLAS conventions, and to take advantage of yum's new $YUM variables to load packages without conflicts. I realize it's a bit hackish, but I don't see a better solution. Do you agree with me, or do you think this can be solved in a cleaner way?
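[A hedged sketch of Stephen's suggested change applied to make.py: the attachment was scrubbed from the archive, so the surrounding variable and function names here are assumptions; only the rpmbuild flags mirror the command quoted above.]

import os

def build_rpm(arch):
    # '--target=noarch' keeps rpmbuild from second-guessing the build host;
    # the real architecture is carried in the ATLAS ARCH release tag instead
    cmd = ('rpmbuild --target=noarch -bb --rcfile rpmrc '
           '--clean --rmsource atlas-%s.spec' % arch)
    print cmd
    return os.system(cmd)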
If you agree, then I'll make the changes you suggested and we can use this to quickly use/test/update the ATLAS binaries Pearu so kindly provides (I should explicitly thank Pearu for this, it really is a great service to all of us). Cheers, f From stephen.walton at csun.edu Mon Jan 31 15:48:12 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon, 31 Jan 2005 12:48:12 -0800 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FE8DAC.40209@colorado.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> <41FB5181.6060207@colorado.edu> <41FE880E.8080902@csun.edu> <41FE8DAC.40209@colorado.edu> Message-ID: <41FE998C.4030905@csun.edu> Hello, Fernando, Fernando Perez wrote: > OK, many thanks for tracking this down. I have a more generic > question though, now that you're familiar with my approach: do you > agree with it? What I concluded was that the RPM 'arch' flags were > not sufficient to address the needs of something like ATLAS, This approach works for me. I was not aware of the $YUM flags, and agree that it does neatly solve the whole problem. I take it, by the way, that you use a kickstart file to get the whole process rolling when you build a new system? I'd like to see that, probably off list. I would only add a couple of minor suggestions about your pybrpminst script, realizing this is something like a personal taste. My overall goal is to only do those things as root which must be done as root; in particular "setup.py bdist_rpm" should be done as an ordinary user. See footnote below. Thus, I'm changing my local copy of pybrpmdist to have an 'sudo' in front of the lines which touch the depot. I'm also deleting the 'yum install' from my copy, preferring to rely on my automatic nightly yum updates to do this for me. One comment/caveat: even ATLAS's naming convention may be a bit coarse grained. I've built ATLAS on my laptop (Athlon M CPU) and an Athlon desktop. ATLAS correctly sees that the cache sizes are different and builds accordingly, even though both architectures end up called ATHLONSSE1. Clint Whaley (ATLAS author) designed ATLAS for maximum performance, and wrote it on the assumption you would always tune it for each machine you're on. Stephen P.S. A true story showing why it is important not to build as root. The first version of the C compiler which HP shipped with HP-UX 10.20 had an interesting bug: it attempted an unlink of /dev/null. Of course, this succeeded if you ran cc as root and then your system mysteriously locked up. From Fernando.Perez at colorado.edu Mon Jan 31 17:17:17 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 31 Jan 2005 15:17:17 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FAD9C6.5020305@colorado.edu> <41FADBEB.20103@colorado.edu> <41FB5B3E.2090502@colorado.edu> <41FB69BE.6080600@colorado.edu> Message-ID: <41FEAE6D.7060207@colorado.edu> Pearu Peterson wrote: > I have applied your patch with some minor modifications to CVS. > > Although now one can build scipy without installing scipy_core, one > must still have f2py installed. And f2py requires scipy_distutils. 
So, > packagers still need to take into account that the correct order of > installing software is: > scipy_core > f2py > scipy Great, thanks. I can confirm that with current CVS (updated minutes ago) and a clean build (full removal of build/, dist/ and MANIFEST stuff), the process is now as simple as: root at planck[dist]# ./setup.py bdist_rpm --release=$ARCH where $ARCH is set according to the ATLAS conventions we've been discussing: root at planck[dist]# echo $ARCH P4SSE2 In my system, this leaves a valid scipy rpm in dist/ and a scipy_core rpm in scipy_core/dist/. I can then just put these two into my YUM architecture-specific repo and let yum do its magic. This is as clean an RPM management solution as I think we can expect, and it's an immense improvement from the early days. Thanks for all the help. Regards, f From Fernando.Perez at colorado.edu Mon Jan 31 18:16:02 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 31 Jan 2005 16:16:02 -0700 Subject: [SciPy-user] bdist_rpm build error in scipy_core from CVS In-Reply-To: <41FE998C.4030905@csun.edu> References: <41F97D3D.4010608@enthought.com> <41FA6B5E.3080502@colorado.edu> <41FA7323.5030804@enthought.com> <41FAB8EA.5010201@csun.edu> <41FADEB4.8020408@colorado.edu> <41FB1CA1.6030705@csun.edu> <41FB5181.6060207@colorado.edu> <41FE880E.8080902@csun.edu> <41FE8DAC.40209@colorado.edu> <41FE998C.4030905@csun.edu> Message-ID: <41FEBC32.9080208@colorado.edu> Stephen Walton wrote: > Hello, Fernando, > > Fernando Perez wrote: > > >>OK, many thanks for tracking this down. I have a more generic >>question though, now that you're familiar with my approach: do you >>agree with it? What I concluded was that the RPM 'arch' flags were >>not sufficient to address the needs of something like ATLAS, > > > This approach works for me. I was not aware of the $YUM flags, and > agree that it does neatly solve the whole problem. I take it, by the > way, that you use a kickstart file to get the whole process rolling when > you build a new system? I'd like to see that, probably off list. Yup, I have a setup for this. Please pester me directly if I haven't sent it in a day or two, I have to organize a few files before passing it to you, so just drop me a line if I let it slip. > I would only add a couple of minor suggestions about your pybrpminst > script, realizing this is something like a personal taste. My overall > goal is to only do those things as root which must be done as root; in > particular "setup.py bdist_rpm" should be done as an ordinary user. See > footnote below. Thus, I'm changing my local copy of pybrpmdist to have > an 'sudo' in front of the lines which touch the depot. I'm also > deleting the 'yum install' from my copy, preferring to rely on my > automatic nightly yum updates to do this for me. Valid changes indeed. I'll probably do the same. > One comment/caveat: even ATLAS's naming convention may be a bit coarse > grained. I've built ATLAS on my laptop (Athlon M CPU) and an Athlon > desktop. ATLAS correctly sees that the cache sizes are different and > builds accordingly, even though both architectures end up called > ATHLONSSE1. Clint Whaley (ATLAS author) designed ATLAS for maximum > performance, and wrote it on the assumption you would always tune it for > each machine you're on. I guess one could always extend this to renaming the ATLAS tarballs with extra tags for cache size and the like, even if the default ATLAS makefile doesn't. 
After all, it's just a string which defines what ATLAS thinks of as a 'unique architecture', which can still be made somewhat generic (it doesn't have to be unique to a given individual machine).

> P.S. A true story showing why it is important not to build as root.
> The first version of the C compiler which HP shipped with HP-UX 10.20
> had an interesting bug: it attempted an unlink of /dev/null. Of
> course, this succeeded if you ran cc as root and then your system
> mysteriously locked up.

A version of LyX a while back had the exact same bug, though I don't remember if it was at ./configure or at install time. Lots of people ended up all of a sudden with unbootable systems :) Cheers, f From rkern at ucsd.edu Mon Jan 31 19:38:36 2005 From: rkern at ucsd.edu (Robert Kern) Date: Mon, 31 Jan 2005 16:38:36 -0800 Subject: [SciPy-user] Re: vectorize(function)/arraymap did not return arrays? In-Reply-To: <41FE803B.50108@usc.edu> References: <41F01369.4000807@usc.edu> <41F070DB.4010905@usc.edu> <41F16EB8.6080308@usc.edu> <41F71EB4.3090504@usc.edu> <41FE803B.50108@usc.edu> Message-ID: <41FECF8C.7070207@ucsd.edu>

Yichun Wei wrote:
> I have a Python function to compute a 1d convolution, and I need to
> evaluate this convolution on a 2d grid. So I loop through all the rows
> and columns of this grid, and the nested looping turned out to be
> slower than I expected. The inner call to the convolution function
> uses scipy's fftpack or Numeric, so I do not think that call can be
> sped up much by writing C code in weave. And to implement the inner
> call in C, I would have to reimplement everything from array reshaping
> and multiplying to convolving. I was just curious whether it would be
> a big win to write the for loops in C while keeping the inner-loop
> function the way it is (I think it is fast enough).

Probably not. Calling back out to a Python function is an expensive process by itself. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter
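[One concrete form of the "different approach" Travis suggested earlier in this thread would be to batch all of the 1d convolutions in a single FFT along the time axis, so the 64x64 grid loop happens inside the array operations instead of in Python. A hedged sketch, assuming the shapes from Yichun's earlier messages and scipy's fftpack; the function and variable names are illustrative.]

from scipy import fftpack

def grid_convolve(kernel, stim):
    """Convolve along the last axis for every grid point at once.

    kernel: (64, 64, 41) array; stim: (64, 64, 1800) array, as in the
    earlier message. Returns the 'full' linear convolution, with shape
    (64, 64, 1840).
    """
    n = kernel.shape[-1] + stim.shape[-1] - 1   # zero-padded length: 41+1800-1
    K = fftpack.fft(kernel, n, axis=-1)         # FFT along the time axis only
    S = fftpack.fft(stim, n, axis=-1)
    return fftpack.ifft(K * S, axis=-1).real    # elementwise product = convolution

Padding n up to a power of two (or using a real-input FFT) would likely speed this up further, at the cost of trimming the result afterwards.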