From dineshbvadhia at hotmail.com Fri Feb 1 11:09:44 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 1 Feb 2008 08:09:44 -0800 Subject: [SciPy-user] tocsr() errors Message-ID: I'm converting a sparse coo matrix to a csr matrix and getting errors. The details are: My import statements are: > import numpy > import scipy > from scipy import sparse The coo_matrix statement is: > A = scipy.sparse.coo_matrix((scipy.data, (scipy.row, scipy.column)), (M,N)) When I use: > A = A.tocsr() The error message recieved is: File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 2174, in tocsr self.data) File "C:\Python25\lib\site-packages\scipy\sparse\sparsetools.py", line 176, in cootocsr return _sparsetools.cootocsr(*args) TypeError: Array must be have 1 dimensions. Given array has 2 dimensions When I try: > A = sparse.csr_matrix(A) The error message is: X = sparse.csr_matrix(A) File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 1162, in __init__ temp = s.tocsr() File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 2174, in tocsr self.data) File "C:\Python25\lib\site-packages\scipy\sparse\sparsetools.py", line 176, in cootocsr return _sparsetools.cootocsr(*args) TypeError: Array must be have 1 dimensions. Given array has 2 dimensions Is it saying that A must be of dimension 1? If so, surely it can't be as A is a (M,N) matrix. Any ideas? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From dineshbvadhia at hotmail.com Fri Feb 1 11:12:36 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 1 Feb 2008 08:12:36 -0800 Subject: [SciPy-user] Sparse Pickle Message-ID: Once a sparse matrix (of any format but in particular coo_matrix) has been populated can it (or a csr_matrix after a tocsr()) be Pickle'd? Cheers. Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef.mientki at gmail.com Fri Feb 1 12:29:03 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Fri, 01 Feb 2008 18:29:03 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? Message-ID: <47A356DF.5070003@gmail.com> hello, I've a application, based on wxPython en SciPy. When I run the application in an IDE, everything works perfect without any error messages (about 1 out of 100 cases the error described below will happen in IDE too). When I run the application standalone, I get the following error message and when I press OK, the program continues and works perfect. Now I've searched for the string "wxmsw26u_vc_enthought.dll", but I can't find it anywhere ?? How can I prevent this error message ? Any other hints for finding the problem would also be appreciated. I run on winXP, SP2, Python 2.4.3 - Enthought Edition 1.1.0 (#69, Oct 6 2006, 12:53:45) [MSC v.1310 32 bit (Intel)] on win32. and updated wxPython to version 2.8. thanks, Stef Mientki -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: moz-screenshot-1.jpg Type: image/jpeg Size: 11547 bytes Desc: not available URL: From matthieu.brucher at gmail.com Fri Feb 1 12:37:55 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 1 Feb 2008 18:37:55 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? In-Reply-To: <47A356DF.5070003@gmail.com> References: <47A356DF.5070003@gmail.com> Message-ID: Hi, Are you using MAtplotlib with the wxAgg backend ? 
If it is, you should use the wx one, wxAgg is no longer supported with wxPython 2.8 Matthieu 2008/2/1, Stef Mientki : > > hello, > > I've a application, based on wxPython en SciPy. > When I run the application in an IDE, everything works perfect without any > error messages > (about 1 out of 100 cases the error described below will happen in IDE > too). > > When I run the application standalone, I get the following error message > > > > and when I press OK, the program continues and works perfect. > > Now I've searched for the string "wxmsw26u_vc_enthought.dll", > but I can't find it anywhere ?? > > How can I prevent this error message ? > Any other hints for finding the problem would also be appreciated. > > I run on winXP, SP2, > Python 2.4.3 - Enthought Edition 1.1.0 (#69, Oct 6 2006, 12:53:45) [MSC > v.1310 32 bit (Intel)] on win32. > and updated wxPython to version 2.8. > > thanks, > Stef Mientki > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: moz-screenshot-1.jpg Type: image/jpeg Size: 11547 bytes Desc: not available URL: From stef.mientki at gmail.com Fri Feb 1 12:41:11 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Fri, 01 Feb 2008 18:41:11 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? In-Reply-To: References: <47A356DF.5070003@gmail.com> Message-ID: <47A359B7.1090106@gmail.com> thanks Matthieu, Matthieu Brucher wrote: > Hi, > > Are you using MAtplotlib with the wxAgg backend ? Yes > If it is, you should use the wx one, wxAgg is no longer supported with > wxPython 2.8 Too bad, wx is much slower and uglier (no anti-aliasing) than wxAgg :-( cheers, Stef From dwf at cs.toronto.edu Fri Feb 1 12:41:04 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 1 Feb 2008 12:41:04 -0500 Subject: [SciPy-user] Sparse Pickle In-Reply-To: References: Message-ID: <86359361-3303-4374-BB0B-47E4D7C3A877@cs.toronto.edu> Hi Dinesh, There should be no problem pickling any of those, but what are you using it for? Pickle (even cPickle) tends to be pretty slow for large objects. In case you do need something quicker, if you look through the mailing list archives you'll see that Andrew Straw posted some really clever code for quickly saving/loading CSR and CSC sparse matrices to/from the filesystem. Cheers, David On 1-Feb-08, at 11:12 AM, Dinesh B Vadhia wrote: > Once a sparse matrix (of any format but in particular coo_matrix) > has been populated can it (or a csr_matrix after a tocsr()) be > Pickle'd? Cheers. > > Dinesh > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbolla at gmail.com Fri Feb 1 13:55:35 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 1 Feb 2008 19:55:35 +0100 Subject: [SciPy-user] tocsr() errors In-Reply-To: References: Message-ID: <80c99e790802011055m12a1f8c4t95652f942cd9b9dc@mail.gmail.com> In your statement: > A = scipy.sparse.coo_matrix((scipy.data, (scipy.row, scipy.column)), (M,N)) data, row and column must be 1d arrays. for coo_matrix, data is a list of nonzero elements, row and column determine "where" data is. see docstring in scipy.sparse (if you use ipython, just type: scipy.sparse? ) for a more complete description of sparse matrices, see here: http://www.scipy.org/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 hth, L. On Feb 1, 2008 5:09 PM, Dinesh B Vadhia wrote: > I'm converting a sparse coo matrix to a csr matrix and getting errors. > The details are: > > My import statements are: > > > import numpy > > import scipy > > from scipy import sparse > > The coo_matrix statement is: > > > A = scipy.sparse.coo_matrix((scipy.data, (scipy.row, scipy.column)), > (M,N)) > When I use: > > > A = A.tocsr() > > The error message recieved is: > > File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 2174, > in tocsr > self.data) > File "C:\Python25\lib\site-packages\scipy\sparse\sparsetools.py", line > 176, in cootocsr > return _sparsetools.cootocsr(*args) > TypeError: Array must be have 1 dimensions. Given array has 2 dimensions > > When I try: > > > A = sparse.csr_matrix(A) > > The error message is: > > X = sparse.csr_matrix(A) > File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 1162, > in __init__ > temp = s.tocsr() > File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 2174, > in tocsr > self.data) > File "C:\Python25\lib\site-packages\scipy\sparse\sparsetools.py", line > 176, in cootocsr > return _sparsetools.cootocsr(*args) > TypeError: Array must be have 1 dimensions. Given array has 2 dimensions > > Is it saying that A must be of dimension 1? If so, surely it can't be as A > is a (M,N) matrix. Any ideas? > > Dinesh > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjhnson at gmail.com Fri Feb 1 14:47:37 2008 From: tjhnson at gmail.com (Tom Johnson) Date: Fri, 1 Feb 2008 11:47:37 -0800 Subject: [SciPy-user] Faster allclose, comparing arrays Message-ID: Frequently, I am comparing vectors to one another, and I am finding that a good portion of my time is spent in allclose. This is done through a class which stores a bunch of metadata and also a 'vector' attribute: def __eq__(self, other): return allclose(self.vector, other.vector, rtol=RTOL, atol=ATOL) So I place these objects in a list and check equality via "if x in objlist". Couple of questions: 1) Is this a "good" method for comparing arrays? 2) Is there any way to speed up allclose? Thanks. From s.mientki at ru.nl Fri Feb 1 14:53:34 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 01 Feb 2008 20:53:34 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? 
In-Reply-To: <47A359B7.1090106@gmail.com> References: <47A356DF.5070003@gmail.com> <47A359B7.1090106@gmail.com> Message-ID: <47A378BE.4070708@ru.nl> Stef Mientki wrote: > thanks Matthieu, > > Matthieu Brucher wrote: > >> Hi, >> >> Are you using MAtplotlib with the wxAgg backend ? >> > Yes > >> If it is, you should use the wx one, wxAgg is no longer supported with >> wxPython 2.8 >> > Too bad, wx is much slower and uglier (no anti-aliasing) than wxAgg :-( > And even worse: replacing wxAgg by wx, doesn't vanish the error message :-( anyway thanks, Stef > cheers, > Stef > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From matthieu.brucher at gmail.com Fri Feb 1 15:01:02 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 1 Feb 2008 21:01:02 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? In-Reply-To: <47A378BE.4070708@ru.nl> References: <47A356DF.5070003@gmail.com> <47A359B7.1090106@gmail.com> <47A378BE.4070708@ru.nl> Message-ID: > > > >> If it is, you should use the wx one, wxAgg is no longer supported with > >> wxPython 2.8 > >> > > Too bad, wx is much slower and uglier (no anti-aliasing) than wxAgg :-( > > > > And even worse: > replacing wxAgg by wx, doesn't vanish the error message :-( > anyway thanks, > > Stef > This shouldn't be the case. Do you have Visual Studio ? If this is the case, you can check which library in your setup (although I still suspect Matplotlib to be guilty) depends on this library with the Depend.exeapplication. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From dineshbvadhia at hotmail.com Fri Feb 1 17:12:45 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 1 Feb 2008 14:12:45 -0800 Subject: [SciPy-user] matrix-vector multiplication errors Message-ID: I'm performing a standard Scipy matrix* vector multiplication, b=Ax , (but not using the sparse module) with different sizes of A as follows: Assuming 8 bytes per float, then: 1. matrix A with M=10,000 and N=15,000 is of approximate size: 1.2Gb 2. matrix A with M=10,000 and N=5,000 is of approximate size: 390Mb 3. matrix A with M=10,000 and N=1,000 is of approximate size: 78Mb The Python/Scipy matrix initialization statements are: > A = scipy.asmatrix(scipy.empty((I,J), dtype=int)) > x = scipy.asmatrix(scipy.empty((J,1), dtype=float)) > b = scipy.asmatrix(scipy.empty((I,1), dtype=float)) I'm using a Windows XP SP2 PC with 2Gb RAM. Both matrices 1. and 2. fail with INDeterminate values in b. Matrix 3. works perfectly. As I have 2Gb of RAM why are matrices 1. and 2. failing? The odd thing is that Python doesn't return any error messages with 1. and 2. but we know the results are garbage (literally!) Cheers! Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Fri Feb 1 17:24:35 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 1 Feb 2008 23:24:35 +0100 Subject: [SciPy-user] matrix-vector multiplication errors In-Reply-To: References: Message-ID: Hi, Try with some values so that the results can be reproduced (or first some real random values and not garbage). 
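For example, a minimal sketch along these lines (the sizes here are made up) would rule out uninitialized memory as the cause, since scipy.empty() only reserves storage and leaves whatever bytes happen to be in it:

import scipy

M, N = 1000, 500                                    # hypothetical sizes
A = scipy.asmatrix(scipy.ones((M, N), dtype=int))   # defined values instead of scipy.empty()
x = scipy.asmatrix(scipy.ones((N, 1), dtype=float))
b = A * x                                           # no need to pre-allocate b; every entry should equal N
print b[:5]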
In your case, all that can be said is that some values in A and x must be indeterminated or NaN. Matthieu 2008/2/1, Dinesh B Vadhia : > > I'm performing a standard Scipy matrix* vector multiplication, b=Ax , > (but not using the sparse module) with different sizes of A as follows: > > Assuming 8 bytes per float, then: > 1. matrix A with M=10,000 and N=15,000 is of approximate size: 1.2Gb > 2. matrix A with M=10,000 and N=5,000 is of approximate size: 390Mb > 3. matrix A with M=10,000 and N=1,000 is of approximate size: 78Mb > > The Python/Scipy matrix initialization statements are: > > A = scipy.asmatrix(scipy.empty((I,J), dtype=int)) > > x = scipy.asmatrix(scipy.empty((J,1), dtype=float)) > > b = scipy.asmatrix(scipy.empty((I,1), dtype=float)) > > I'm using a Windows XP SP2 PC with 2Gb RAM. > > Both matrices 1. and 2. fail with INDeterminate values in b. Matrix 3. > works perfectly. As I have 2Gb of RAM why are matrices 1. and 2. failing? > > The odd thing is that Python doesn't return any error messages with 1. and > 2. but we know the results are garbage (literally!) > > Cheers! > > Dinesh > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sat Feb 2 07:07:01 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 02 Feb 2008 21:07:01 +0900 Subject: [SciPy-user] [ANN] Blas-Lapack superpack: click only blas/lapack installation (first alpha) Message-ID: <47A45CE5.2000305@ar.media.kyoto-u.ac.jp> Hi, I started working on an easy installer for blas/lapack. The idea is that you would use this installer so that building numpy and scipy from source is easy on windows (32 bits for now). It would give blas/lapack compiled correctly, with optional atlas optimized version. http://www.ar.media.kyoto-u.ac.jp/members/david/archives/blas_lapack_superpack.exe How to use ========== Run the setup.exe, click yes all the way. Add the installed dll in your path, or add the path where the dll are installed in your PATH. main features: ============== - Click only, easy installation of blas and lapack libraries (including atlas if supported, see below). - Install atlas *only if your cpu supports it*: that's the main feature, actually. The installer detects your cpu, and install ATLAS only if an ATLAS matching your CPU is found (only SSE3 supported for now, but other arch, included 3dnow and co, can easily be added depending on people help to provide the built ATLAS). What can you do with it: ======================== - compile numpy and scipy without wrong SSE problem, without bothering about compiling netlib BLAS/LAPACK, etc... - compile numpy wo any fortran compiler, VS only (no need for mingw, etc... thanks to VS import libraries + DLL). - use the installed lapack to build an optimized ATLAS for your architecture (using both gnu compilers and proprietary compilers should be possible). More details ============ - built with mingw g77 from linux (dll, unix-style static archives and def) - import libraries built with VS 2003. This means you can compile numpy wo any fortran compiler, in particular, no need for mingw. 
I don't know if this is compatible with other versions of VS, though. - Only SSE3 and above will get ATLAS for now. This is because compiling ATLAS on windows is a PITA, and I don't want to spend time on this, so if you want something else, you will have to provide me the atlas binary first. But having atlas for sse, sse2, 3dnow, etc... is entirely possible. - I do not register the DLL yet, because I am not sure yet how to do it in a safe way (thanks MS for a totally broken handling of shared libraries, BTW) - I do not guarantee that the built atlas is optimal. ATLAS performances depend on many parameters, not just sse/sse2/sse3 (size of L1/L2/L3 cache are significant, for example), and again, I cannot build many different libraries. - The installer is built using nsis. - The whole process of making the installer is not 100 % automatic yet, but I intend to make it so, and put the necessary scripts somewhere so people can improve it if they want. This is alpha software, and because it is an installer, it can screw up your computer if I did something wrong. However, I barely touch the registry, and only install files in one directory, so the chances are pretty minimal (that's why also you have to put dll manually in a path where they will be found manually: at some point, this will be done by the installer, but that's by far the most dangerous thing, so I prefer avoiding it for now). cheers, David From stef.mientki at gmail.com Sat Feb 2 10:01:10 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Sat, 02 Feb 2008 16:01:10 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? In-Reply-To: References: <47A356DF.5070003@gmail.com> <47A359B7.1090106@gmail.com> <47A378BE.4070708@ru.nl> Message-ID: <47A485B6.5020902@gmail.com> I already replied to this message, but it's hold up for moderation. But as I've news in the meanwhile: Matthieu Brucher wrote: > > > >> If it is, you should use the wx one, wxAgg is no longer > supported with > >> wxPython 2.8 > From the wxPython list I understand that this ABSOLUTELY NOT TRUE !! The problem was that I was using an old version of MatPlot (from the Enthought suite). After Updating MatPlot from version from 0.87.7 to 0.91.2 troubles were also gone, and hapilly I can still use wxAgg ;-) ;-) anyway thanks Matthieu ! cheers, Stef From matthieu.brucher at gmail.com Sat Feb 2 10:20:05 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 2 Feb 2008 16:20:05 +0100 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? In-Reply-To: <47A485B6.5020902@gmail.com> References: <47A356DF.5070003@gmail.com> <47A359B7.1090106@gmail.com> <47A378BE.4070708@ru.nl> <47A485B6.5020902@gmail.com> Message-ID: Strange though because when you compile Matplotlib, it explicitely tells that the wxAgg will not be compiled and thus not be used, but this could have changed recently. Matthieu 2008/2/2, Stef Mientki : > > I already replied to this message, > but it's hold up for moderation. > But as I've news in the meanwhile: > > Matthieu Brucher wrote: > > > > > > >> If it is, you should use the wx one, wxAgg is no longer > > supported with > > >> wxPython 2.8 > > > > From the wxPython list I understand that this ABSOLUTELY NOT TRUE !! > The problem was that I was using an old version of MatPlot (from the > Enthought suite). > After Updating MatPlot from version from 0.87.7 to 0.91.2 > troubles were also gone, > and hapilly I can still use wxAgg ;-) ;-) > anyway thanks Matthieu ! 
> > cheers, > Stef > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzo.isella at gmail.com Sat Feb 2 10:22:14 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sat, 2 Feb 2008 16:22:14 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine Message-ID: Dear All, I am currently using a Python script on my box to post-process some data (the process typically involves operations on 5000 by 5000 arrays). The Python script also relies heavily on some R scripts (imported via Rpy) and a compiled Fortran 90 routine (imported via f2py). I have recently made a new Debian testing installation for the amd64 architecture on my machine [an Intel Xeon Dual-core pc] so I wonder if there is any way to take advantage of both CPU's when running that script. Is it something which can be achieved "automatically" by installing and calling some libraries? Do I have to re-write and re-think my whole script? As you can figure out, I am completely new to multi-core machines and running codes in parallel. Any suggestions are welcome. Many thanks Lorenzo From dineshbvadhia at hotmail.com Sat Feb 2 10:54:38 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 2 Feb 2008 07:54:38 -0800 Subject: [SciPy-user] matrix-vector multiplication errors Message-ID: Matthieu We are using real data and are comparing it with results from a C++ implementation and hence know that the results from the matrix operations 1. and 2. are incorrect. The values of b after the A*x operation are all "-1#IND" for both 1. and 2. Any ideas? Dinesh -------------------------------------------------------------------------------- Message: 6 Date: Fri, 1 Feb 2008 23:24:35 +0100 From: "Matthieu Brucher" Subject: Re: [SciPy-user] matrix-vector multiplication errors To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi, Try with some values so that the results can be reproduced (or first some real random values and not garbage). In your case, all that can be said is that some values in A and x must be indeterminated or NaN. Matthieu 2008/2/1, Dinesh B Vadhia : > > I'm performing a standard Scipy matrix* vector multiplication, b=Ax , > (but not using the sparse module) with different sizes of A as follows: > > Assuming 8 bytes per float, then: > 1. matrix A with M=10,000 and N=15,000 is of approximate size: 1.2Gb > 2. matrix A with M=10,000 and N=5,000 is of approximate size: 390Mb > 3. matrix A with M=10,000 and N=1,000 is of approximate size: 78Mb > > The Python/Scipy matrix initialization statements are: > > A = scipy.asmatrix(scipy.empty((I,J), dtype=int)) > > x = scipy.asmatrix(scipy.empty((J,1), dtype=float)) > > b = scipy.asmatrix(scipy.empty((I,1), dtype=float)) > > I'm using a Windows XP SP2 PC with 2Gb RAM. > > Both matrices 1. and 2. fail with INDeterminate values in b. Matrix 3. > works perfectly. As I have 2Gb of RAM why are matrices 1. and 2. failing? > > The odd thing is that Python doesn't return any error messages with 1. and > 2. but we know the results are garbage (literally!) > > Cheers! 
> > Dinesh > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Sat Feb 2 12:52:43 2008 From: strawman at astraw.com (Andrew Straw) Date: Sat, 02 Feb 2008 09:52:43 -0800 Subject: [SciPy-user] wxmsw26u_vc_enthought.dll not found ? In-Reply-To: References: <47A356DF.5070003@gmail.com> <47A359B7.1090106@gmail.com> <47A378BE.4070708@ru.nl> <47A485B6.5020902@gmail.com> Message-ID: <47A4ADEB.6030908@astraw.com> It's not that you can't use the wxAgg backend in MPL -- it is that you don't need to compile an extension module to do so. Matthieu Brucher wrote: > Strange though because when you compile Matplotlib, it explicitely > tells that the wxAgg will not be compiled and thus not be used, but > this could have changed recently. > > Matthieu > > 2008/2/2, Stef Mientki >: > > I already replied to this message, > but it's hold up for moderation. > But as I've news in the meanwhile: > > Matthieu Brucher wrote: > > > > > > >> If it is, you should use the wx one, wxAgg is no longer > > supported with > > >> wxPython 2.8 > > > > From the wxPython list I understand that this ABSOLUTELY NOT TRUE !! > The problem was that I was using an old version of MatPlot (from the > Enthought suite). > After Updating MatPlot from version from 0.87.7 to 0.91.2 > troubles were also gone, > and hapilly I can still use wxAgg ;-) ;-) > anyway thanks Matthieu ! > > cheers, > Stef > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Sat Feb 2 14:36:39 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 2 Feb 2008 14:36:39 -0500 Subject: [SciPy-user] Faster allclose, comparing arrays In-Reply-To: References: Message-ID: On 01/02/2008, Tom Johnson wrote: > Frequently, I am comparing vectors to one another, and I am finding > that a good portion of my time is spent in allclose. This is done > through a class which stores a bunch of metadata and also a 'vector' > attribute: > > def __eq__(self, other): > return allclose(self.vector, other.vector, rtol=RTOL, atol=ATOL) > > So I place these objects in a list and check equality via "if x in > objlist". Couple of questions: > > 1) Is this a "good" method for comparing arrays? > 2) Is there any way to speed up allclose? That rather depends on what you want "comparing" to do. If you want exact equality, then allclose is doing the wrong thing; you want something like N.all(a==b). But I suspect you know that. There are potentially more subtle problems with allclose(), though. For a simple example, you can easily have N.allclose(a,b) but not N.allclose(1e3*a,1e3*b). For a more subtle example, suppose you want to compare a vector and a result obtained by Fourier transforming. If your vector is something like [1,2,3,4] allclose() will do pretty much what you want. 
But if your vector is something like [1e40,0,0,0], you might have a problem: the Fourier transform can be expected to introduce numerical errors in all the components of size about machine epsilon times the *largest component*. Since allclose() does an element-wise comparison, if you get [1e40+1,1,1], allclose returns False when the answer is true to numerical accuracy. On the other hand, sometimes the different elements of a vector have wildly differing sizes by design, so normalizing by the largest vector isn't what you want. I think of allclose() as a debugging function; if I want my code's result to depend on how close two vectors are, I write out explicitly what I mean by "close". How can you go faster? Well, that depends on whether you want allclose()'s semantics or something else. If you want real equality, all(a==b) will be slightly faster, but there are various hacks you can pull off - like making a python set of .tostring() values (once the arrays are put in some normalized form). You might be able to accelerate allclose()-like functions by writing a compiled bit of code that compares entry-by-entry and bails out as soon as any entry differs - for many applications that'll bring the cost down close to that of a single float comparison, on average. If you have *lots* of vectors, and you're looking for one close enough to yours, you can do better than simply trying all candidates. This is the domain of spatial data structures, and it's a huge topic (and my knowledge is quite out-of-date). But, for a simple example, you can arrange the vectors to be tested against on the leaves of a tree. Each node in the tree specifies a coordinate (the 7th, for example) and a value v; all arrays with a[7]>v go on one branch, and all with a[7]<=v go on the other. v can be chosen so that half the arrays go on either side. Then when searching for a test array in the collection, you can almost always test only one branch. (This is roughly a kd-tree; you can find much more information on this topic with a bit of googling.) Anne From zunzun at zunzun.com Sat Feb 2 18:13:48 2008 From: zunzun at zunzun.com (James Phillips) Date: Sat, 2 Feb 2008 17:13:48 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: <268756d30802021513i5b9ad0ccx7082b90126698f74@mail.gmail.com> I suggest you try Parallel Python at http://www.parallelpython.com/ James On 2/2/08, Lorenzo Isella wrote: > > Dear All, > I am currently using a Python script on my box to post-process some > data (the process typically involves operations on 5000 by 5000 > arrays). > The Python script also relies heavily on some R scripts (imported via > Rpy) and a compiled Fortran 90 routine (imported via f2py). > I have recently made a new Debian testing installation for the amd64 > architecture on my machine [an Intel Xeon Dual-core pc] so I wonder if > there is any way to take advantage of both CPU's when running that > script. > Is it something which can be achieved "automatically" by installing > and calling some libraries? Do I have to re-write and re-think my > whole script? > As you can figure out, I am completely new to multi-core machines and > running codes in parallel. > Any suggestions are welcome. > Many thanks > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dineshbvadhia at hotmail.com Sat Feb 2 23:16:08 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 2 Feb 2008 20:16:08 -0800 Subject: [SciPy-user] Initializing COO/CSR matrix before function call Message-ID: I'm using a function to load a sparse matrix A using coo_matrix and then to transform it into a csr_matrix. We are testing a bunch of very large sized matrices A and hence the use of a function. In addition, A is available to many other functions in the program. Python says that A has to be defined (or initialized) before sending to the load function. But, doesn't that mean initializing A as 'empty' or 'zeroed', both of which impact memory use, defeats the purpose of using coo and csr? I've looked at the Sparse docstring help and cannot see a way out. Have I missed something? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Sun Feb 3 00:03:30 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 3 Feb 2008 00:03:30 -0500 Subject: [SciPy-user] Initializing COO/CSR matrix before function call In-Reply-To: References: Message-ID: On 02/02/2008, Dinesh B Vadhia wrote: > I'm using a function to load a sparse matrix A using coo_matrix and then to > transform it into a csr_matrix. We are testing a bunch of very large sized > matrices A and hence the use of a function. In addition, A is available to > many other functions in the program. > > Python says that A has to be defined (or initialized) before sending to the > load function. But, doesn't that mean initializing A as 'empty' or > 'zeroed', both of which impact memory use, defeats the purpose of using coo > and csr? I've looked at the Sparse docstring help and cannot see a way out. > > Have I missed something? If I've correctly understood your problem, it is this: You want to make a sparse matrix A available to your whole program. The loading is done inside a special-purpose function, call it load(). But when you create A inside load(), it's not visible anywhere else. What are you to do? The most direct (though not necessarily the best) way to do what you're describing is to make A a global variable. That is, if you mention "A" anywhere in the whole program, it refers to *this* A that you just loaded. In most languages, declarations are used to indicate global variables. Python has somewhat complicated rules for this, but the easiest way to do what you want is: def load(): global A A = # whatever Now, if in some other function you write def frob(x): return A*x python will deduce that A here refers to the global A. If, however, you *assign* to A: def fiddle(): A = 2*A python will assume that A is a local variable in fiddle() and die because you have used it before assigning a value to it. To tell python that it's a global variable, use global again: def fiddle(): global A A = 2*A It never hurts to mark A as global in this way. I should say, though, that setting a global variable like this can be trouble. It means (for example), that when a function is run, what happens depends on the value A has, not just the values that get passed to the function. This can make functions spontaneously do something surprising if A accidentally gets modified, and it can be very difficult to track down where the problem is. The fact that there is only one A for the whole program can also be a major headache if you want to expand your program or use it as a tool from within another python program. 
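Put together for the matrix-loading case in question, the global-variable version might look like the following sketch (the loader is schematic: the pickle/file handling from the original post is left out, and nothing large is allocated until load() is called):

from scipy import sparse

A = None   # module-level placeholder, filled in by load()

def load(row, column, data, M, N):
    # row, column, data are 1-d arrays describing the nonzero entries
    global A
    A = sparse.coo_matrix((data, (row, column)), dims=(M, N)).tocsr()

def frob(x):
    return A * x   # uses whatever global A the last call to load() created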
The classical way to get rid of this is to explicitly pass A as a parameter to functions that need to use it. If this grows cumbersome, a common solution is to incorporate A (and possibly some other supporting data) into an object, and make functions that need to use A a method. These problems, and the techniques to solve them, are not numpy-specific; if you do some looking around for information on python and global variables, you should find much more information than I gave here. Good luck! Anne From dwf at cs.toronto.edu Sun Feb 3 00:20:50 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 3 Feb 2008 00:20:50 -0500 Subject: [SciPy-user] Initializing COO/CSR matrix before function call In-Reply-To: References: Message-ID: <10B25358-9941-4613-8E65-D94F203514A6@cs.toronto.edu> Dinesh, What sort of method are you using to load the matrices? It'd help if you posted some code. In general you shouldn't have to initialize something too big in order to load in a sparse matrix. I'm not sure that COO is terribly efficient for on-the-fly insertions. Maybe a dok_matrix would be more appropriate, which you can then convert to whatever you need, all at once, as then you'll know exactly how many non-zero elements you have to allocate space for. David On 2-Feb-08, at 11:16 PM, Dinesh B Vadhia wrote: > I'm using a function to load a sparse matrix A using coo_matrix and > then to transform it into a csr_matrix. We are testing a bunch of > very large sized matrices A and hence the use of a function. In > addition, A is available to many other functions in the program. > > Python says that A has to be defined (or initialized) before sending > to the load function. But, doesn't that mean initializing A as > 'empty' or 'zeroed', both of which impact memory use, defeats the > purpose of using coo and csr? I've looked at the Sparse docstring > help and cannot see a way out. > > Have I missed something? > > Dinesh > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sun Feb 3 02:19:19 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 03 Feb 2008 16:19:19 +0900 Subject: [SciPy-user] matrix-vector multiplication errors In-Reply-To: References: Message-ID: <47A56AF7.5030205@ar.media.kyoto-u.ac.jp> Dinesh B Vadhia wrote: > Matthieu > We are using real data and are comparing it with results from a C++ > implementation and hence know that the results from the matrix > operations 1. and 2. are incorrect. Well, depends on the implementation you are using. Which library are you using for the computation in C++ ? > The values of b after the A*x operation are all "-1#IND" for both 1. > and 2. > > Any ideas? Without more details about the matrices, it will be hard to say. Basically, if we cannot reproduce your problem, there is little chance we can say for sure what the problem is, cheers, David From dineshbvadhia at hotmail.com Sun Feb 3 14:49:10 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sun, 3 Feb 2008 11:49:10 -0800 Subject: [SciPy-user] Initializing COO/CSR matrix before function Message-ID: Hi David Please find some code below. 
There are three problems here: 1) correct method for initializing very large coo/csr matrices, 2) memory usage in initializing very large coo/csr matrices and, 3) using a function to load the coo matrix where different sized matrices are going to be used in the program. Thank-you! Dinesh def populateSparseMatrix (A, nnz, dataFile, I, J) # Populate matrix A by first loading data into a coo_matrix using coo_matrix(V, (I,J)), dims) method ij = numpy.array(numpy.empty((nnz, 2), dtype=int)) f = open(dataFile, 'rb') ij = pickle.load(f) row = ij[:,0] column = ij[:,1] data = scipy.ones(ij.shape[0], dtype=int) # Initialize A as coo_matrix, load data into A, convert A to csr_matrix A = sparse.coo_matrix((data, (row, column)), dims=(I,J)).tocsr() return A def anotherFunctionOperatingOnSparseMatrixA(A, a, b) blah blah blah blah blah blah return a, b # main program # imports import numpy import scipy from scipy import sparse # constants nnz = bigNonZeroNumber I = bigI J = bigJ dataFile = aFilename # Define and initialize all matrix and vectors # Create and load a coo_matrix and then transform into a csr_matrix using a function (ie. def populateSparseMatrix) so that we can use program with different sized matrices # Python requires that all parameters passed to functions be defined beforehand. # If so, what is the correct statement to use for initializing an empty coo_matrix? # Secondly, if I, J are very large then isn't the initialization step using up memory and hence defeating the purpose of using a coo/csr matrix? # nnz is from the millions to the tens of millions, the sparse data is just 1's. # For large I, J, I get 'memory error' on my 2Gb RAM machine which I shouldn't for using a coo/csr matrix A = sparse.coo_matrix(None, dims=(I, J), dtype=int) # What is the correct initialization statement (if any)? # Call the populate matrix A function A = populateSparseMatrix(A, nnz, dataFile, I, J) a, b = anotherFunctionOperatingOnSparseMatrixA(A, a, b) # assume a, b are defined before calling function ------------------------------ Message: 5 Date: Sun, 3 Feb 2008 00:20:50 -0500 From: David Warde-Farley Subject: Re: [SciPy-user] Initializing COO/CSR matrix before function call To: SciPy Users List Message-ID: <10B25358-9941-4613-8E65-D94F203514A6 at cs.toronto.edu> Content-Type: text/plain; charset="us-ascii" Dinesh, What sort of method are you using to load the matrices? It'd help if you posted some code. In general you shouldn't have to initialize something too big in order to load in a sparse matrix. I'm not sure that COO is terribly efficient for on-the-fly insertions. Maybe a dok_matrix would be more appropriate, which you can then convert to whatever you need, all at once, as then you'll know exactly how many non-zero elements you have to allocate space for. David On 2-Feb-08, at 11:16 PM, Dinesh B Vadhia wrote: > I'm using a function to load a sparse matrix A using coo_matrix and > then to transform it into a csr_matrix. We are testing a bunch of > very large sized matrices A and hence the use of a function. In > addition, A is available to many other functions in the program. > > Python says that A has to be defined (or initialized) before sending > to the load function. But, doesn't that mean initializing A as > 'empty' or 'zeroed', both of which impact memory use, defeats the > purpose of using coo and csr? I've looked at the Sparse docstring > help and cannot see a way out. > > Have I missed something? 
> > Dinesh > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sun Feb 3 15:55:04 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 3 Feb 2008 22:55:04 +0200 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: <20080203205504.GD25396@mentat.za.net> Hi Lorenzo On Sat, Feb 02, 2008 at 04:22:14PM +0100, Lorenzo Isella wrote: > I am currently using a Python script on my box to post-process some > data (the process typically involves operations on 5000 by 5000 > arrays). > The Python script also relies heavily on some R scripts (imported via > Rpy) and a compiled Fortran 90 routine (imported via f2py). > I have recently made a new Debian testing installation for the amd64 > architecture on my machine [an Intel Xeon Dual-core pc] so I wonder if > there is any way to take advantage of both CPU's when running that > script. > Is it something which can be achieved "automatically" by installing > and calling some libraries? Do I have to re-write and re-think my > whole script? Using a parallelised linear algebra library may address most of your problems. I think (and I hope someone will correct me if I'm wrong) that ATLAS can be compiled to use multiple threads, and I know MKL supports it as well. Another approach would be to parallelize the algorithm itself, using something like 'processing' (http://pypi.python.org/pypi/processing/). You can take that a step further by distributing the problem over several processes (running on one or more machines), using using ipython1 (http://ipython.scipy.org/moin/IPython1). Good luck! St?fan From peridot.faceted at gmail.com Sun Feb 3 17:29:23 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 3 Feb 2008 17:29:23 -0500 Subject: [SciPy-user] scipy.sparse.lil_matrix and fancy indexing Message-ID: Hi, It looks to me like there's an inconsistency between how numpy matrices handle fancy indexing and how scipy.sparse.lil_matrix handles fancy indexing: In [15]: A = N.zeros((3,3)) In [16]: A[[0,1,2],[0,1,2]] = 1 In [18]: A Out[18]: array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) In [19]: B = scipy.sparse.lil_matrix((3,3)) In [21]: B[[0,1,2],[0,1,2]] = [1,1,1] In [22]: print B (0, 0) 1 (0, 1) 1 (0, 2) 1 (1, 0) 1 (1, 1) 1 (1, 2) 1 (2, 0) 1 (2, 1) 1 (2, 2) 1 In [23]: B-A Out[23]: matrix([[ 0., 1., 1.], [ 1., 0., 1.], [ 1., 1., 0.]]) (Fancy indexing also does not accept scalars, but that's presumably just not been implemented yet.) In light of the following, this seems like unintended behaviour: In [24]: B[[0,1,2],[0,1,2]] = [1,2,3] In [25]: p B (0, 0) 1 (0, 1) 1 (0, 2) 1 (1, 0) 2 (1, 1) 2 (1, 2) 2 (2, 0) 3 (2, 1) 3 (2, 2) 3 At the least, I can't see why I would have expected this result. Is this intended behaviour? Failure to raise an exception on bad input? Just a bug? Thanks, Anne From wnbell at gmail.com Sun Feb 3 18:07:18 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 3 Feb 2008 17:07:18 -0600 Subject: [SciPy-user] scipy.sparse.lil_matrix and fancy indexing In-Reply-To: References: Message-ID: On Feb 3, 2008 4:29 PM, Anne Archibald wrote: > Is this intended behaviour? Failure to raise an exception on bad > input? Just a bug? That's unlikely to be the intended behaviour. The lil_matrix. __setitem__ code is fairly ugly, is anyone actively maintaining it? 
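In the meantime, a plain per-element loop gives the diagonal assignment Anne expected (a small, untested workaround sketch against 0.6.0):

import scipy.sparse

B = scipy.sparse.lil_matrix((3, 3))
for i, j, v in zip([0, 1, 2], [0, 1, 2], [1, 2, 3]):
    B[i, j] = v   # scalar assignment, so only the addressed (i, j) entry is set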
http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/lil.py#L273 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From dwf at cs.toronto.edu Sun Feb 3 18:16:24 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 3 Feb 2008 18:16:24 -0500 Subject: [SciPy-user] scipy.sparse.lil_matrix and fancy indexing In-Reply-To: References: Message-ID: <35DA76AC-8306-47E2-AFBF-A5569A0F980C@cs.toronto.edu> On 3-Feb-08, at 5:29 PM, Anne Archibald wrote: > In [22]: print B > (0, 0) 1 > (0, 1) 1 > (0, 2) 1 > (1, 0) 1 > (1, 1) 1 > (1, 2) 1 > (2, 0) 1 > (2, 1) 1 > (2, 2) 1 I agree that this isn't what you'd expect, since it works completely differently on arrays. The equivalent behaviour for a numpy.array would be obtained by using A[0:3,:][:,0:3], and to add to the list of unexpected behaviours, lil_matrix accepts this type of indexing without raising an error but doesn't actually do anything then: In [18]: A = scipy.sparse.lil_matrix((3,3)) In [19]: x[0:3,:][:,0:3] = 1 In [20]: x Out[20]: <3x3 sparse matrix of type '' with 0 stored elements in LInked List format> David From ramercer at gmail.com Mon Feb 4 00:59:12 2008 From: ramercer at gmail.com (Adam Mercer) Date: Mon, 4 Feb 2008 00:59:12 -0500 Subject: [SciPy-user] crash from scipy.test() on Intel Mac OS X Leopard with python-2.4.4 Message-ID: <799406d60802032159i4ce7daby3f2838e316f3b1e8@mail.gmail.com> Hi I'm running into the following crash on Intel Mac OS X 10.5.1 with python-2.4.4 and scipy-0.6.0 (from MacPorts), on running scipy.test() I'm getting the following crash: Process: Python [18375] Path: /opt/local/Library/Frameworks/Python.framework/Versions/2.4/Resources/Python.app/Contents/MacOS/Python Identifier: Python Version: ??? (???) Code Type: X86 (Native) Parent Process: bash [18340] Date/Time: 2008-02-04 00:57:44.518 -0500 OS Version: Mac OS X 10.5.1 (9B18) Report Version: 6 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000 Crashed Thread: 0 Thread 0 Crashed: 0 readline.so 0x000bbaa3 call_readline + 691 1 org.python.python 0x0016b9ae PyOS_Readline + 254 2 org.python.python 0x0016cf70 tok_nextc + 64 3 org.python.python 0x0016d7a5 PyTokenizer_Get + 101 4 org.python.python 0x00168512 parsetok + 210 5 org.python.python 0x00212992 PyRun_InteractiveOneFlags + 290 6 org.python.python 0x00212bb3 PyRun_InteractiveLoopFlags + 99 7 org.python.python 0x00213a69 PyRun_AnyFileExFlags + 185 8 org.python.python 0x0021da8a Py_Main + 3130 9 org.python.python 0x000018dc 0x1000 + 2268 10 org.python.python 0x00001809 0x1000 + 2057 Thread 0 crashed with X86 Thread State (32-bit): eax: 0x00000000 ebx: 0x000bb7fb ecx: 0xbfffe108 edx: 0x0030a500 edi: 0x00348db0 esi: 0x003460e0 ebp: 0xbfffe218 esp: 0xbfffe130 ss: 0x0000001f efl: 0x00010246 eip: 0x000bbaa3 cs: 0x00000017 ds: 0x0000001f es: 0x0000001f fs: 0x00000000 gs: 0x00000037 cr2: 0x00000000 Binary Images: 0x1000 - 0x1ff3 +org.python.python 2.4a0 (2.4alpha1) <3cd0de8bec82e6ad810f37fd6de2f7d2> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/Resources/Python.app/Contents/MacOS/Python 0x49000 - 0x49ffa +_bisect.so ??? (???) <029e9be854fdcdced22dfffdd29dbb56> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/_bisect.so 0xa2000 - 0xa6fff +_dotblas.so ??? (???) <784cbedda182c14fb1673314a934c44c> /opt/local/lib/python2.4/site-packages/numpy/core/_dotblas.so 0xaa000 - 0xabff5 +cStringIO.so ??? (???) 
<6b63cd6236e9758a34e0e4038df411ec> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/cStringIO.so 0xb0000 - 0xb2ff7 +mmap.so ??? (???) <37e8ce060c42c6311f5bc161d4d1c2f5> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/mmap.so 0xba000 - 0xbbff5 +readline.so ??? (???) <9333cff6814f32ccae4bac6172310910> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/readline.so 0xc1000 - 0xc2fff +time.so ??? (???) <66420cc53fe2c1fdc333b53199ccd4a2> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/time.so 0xc8000 - 0xcbff5 +itertools.so ??? (???) <9959aa7dbfb7e900acbeeaa3fa2f37b8> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/itertools.so 0xd2000 - 0xd3ff7 +_heapq.so ??? (???) <14d05523617748e3a24312862416ed6b> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/_heapq.so 0xd8000 - 0xdaff8 +operator.so ??? (???) <2cf7571d87d3eef8e6b3e0f62b24e258> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/operator.so 0x166000 - 0x246fe7 +org.python.python 2.4a0 (2.2) <4856ee0fbbfeacbf990457e2477dcdbc> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/Python 0x2cc000 - 0x2e2fea libedit.2.dylib ??? (???) /usr/lib/libedit.2.dylib 0x2ed000 - 0x2efff5 +strop.so ??? (???) <2bfa19ceefdc2393f6815582b9452c98> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/strop.so 0x2f5000 - 0x2f6fff +math.so ??? (???) <4b25bba2f941686b5941c510eb3ceae4> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/math.so 0x2fb000 - 0x2fcfff +_random.so ??? (???) /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/_random.so 0x400000 - 0x436fe7 +libncursesw.5.dylib ??? (???) /opt/local/lib/libncursesw.5.dylib 0x4d1000 - 0x4d3ffc +binascii.so ??? (???) <948ee1211808eb70ac4cd36d513ef900> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/binascii.so 0x4d9000 - 0x4daff4 +fcntl.so ??? (???) <61fcdf5f24383981786710c7ee909a19> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/fcntl.so 0x4de000 - 0x4e8ff3 +_curses.so ??? (???) <561b2728220e23aab7c67bda04c6bbe5> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/_curses.so 0x4f1000 - 0x558fef +multiarray.so ??? (???) <385335fedf01869086ce7898cb75daa6> /opt/local/lib/python2.4/site-packages/numpy/core/multiarray.so 0x589000 - 0x5afff7 +umath.so ??? (???) <7a6483274211c37ad233406a4d8740c8> /opt/local/lib/python2.4/site-packages/numpy/core/umath.so 0x60a000 - 0x61cffa +_sort.so ??? (???) <1f5f79c1306277540ea350aba544678b> /opt/local/lib/python2.4/site-packages/numpy/core/_sort.so 0x68c000 - 0x69aff7 +cPickle.so ??? (???) <9c9aa88a946e4c244a4487f0f2f4caa8> /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/cPickle.so 0x6e2000 - 0x6fcfff +scalarmath.so ??? (???) <37315ee6892a4ef68048a3ce9a5e3e59> /opt/local/lib/python2.4/site-packages/numpy/core/scalarmath.so 0x70a000 - 0x70bfff +_compiled_base.so ??? (???) <34a35fffd296e99378c07c0d2c0d4322> /opt/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so 0x715000 - 0x71affa +lapack_lite.so ??? (???) /opt/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so 0x71e000 - 0x726fff +fftpack_lite.so ??? (???) 
<6ce456b8076cad1e7c55e6ed4cc0ece5> /opt/local/lib/python2.4/site-packages/numpy/fft/fftpack_lite.so 0x72b000 - 0x75cfff +mtrand.so ??? (???) /opt/local/lib/python2.4/site-packages/numpy/random/mtrand.so 0x7b5000 - 0x7b7fff +struct.so ??? (???) /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/lib-dynload/struct.so 0x8fe00000 - 0x8fe2d883 dyld 95.3 (???) <81592e798780564b5d46b988f7ee1a6a> /usr/lib/dyld 0x907df000 - 0x907e0fef libmathCommon.A.dylib ??? (???) /usr/lib/system/libmathCommon.A.dylib 0x91ba1000 - 0x91bd0ff7 libncurses.5.4.dylib ??? (???) <3b2ac2ca8190942b6b81d2a7012ea859> /usr/lib/libncurses.5.4.dylib 0x91bea000 - 0x91c5efef libvMisc.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x91dc0000 - 0x91e87ff2 com.apple.vImage 3.0 (3.0) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x92fd8000 - 0x92fd8ffd com.apple.vecLib 3.4 (vecLib 3.4) /System/Library/Frameworks/vecLib.framework/Versions/A/vecLib 0x933cc000 - 0x933ccffd com.apple.Accelerate 1.4 (Accelerate 1.4) /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x93524000 - 0x9352bfe9 libgcc_s.1.dylib ??? (???) /usr/lib/libgcc_s.1.dylib 0x93917000 - 0x93917ffd com.apple.Accelerate.vecLib 3.4 (vecLib 3.4) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x94cc9000 - 0x94cf6feb libvDSP.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x94e3d000 - 0x9524dfef libBLAS.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x967d3000 - 0x9692dfe3 libSystem.B.dylib ??? (???) <8ecc83dc0399be3946f7a46e88cf4bbb> /usr/lib/libSystem.B.dylib 0x96952000 - 0x969afffb libstdc++.6.dylib ??? (???) <04b812dcec670daa8b7d2852ab14be60> /usr/lib/libstdc++.6.dylib 0x969b0000 - 0x96d6efea libLAPACK.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0xffff0000 - 0xffff1780 libSystem.B.dylib ??? (???) /usr/lib/libSystem.B.dylib Any idea where to starting in debugging this? Cheers Adam From robert.kern at gmail.com Mon Feb 4 01:39:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 04 Feb 2008 00:39:04 -0600 Subject: [SciPy-user] crash from scipy.test() on Intel Mac OS X Leopard with python-2.4.4 In-Reply-To: <799406d60802032159i4ce7daby3f2838e316f3b1e8@mail.gmail.com> References: <799406d60802032159i4ce7daby3f2838e316f3b1e8@mail.gmail.com> Message-ID: <47A6B308.6010005@gmail.com> Adam Mercer wrote: > Hi > > I'm running into the following crash on Intel Mac OS X 10.5.1 with > python-2.4.4 and scipy-0.6.0 (from MacPorts), on running scipy.test() > I'm getting the following crash: > > Process: Python [18375] > Path: > /opt/local/Library/Frameworks/Python.framework/Versions/2.4/Resources/Python.app/Contents/MacOS/Python > Identifier: Python > Version: ??? (???) 
> Code Type: X86 (Native) > Parent Process: bash [18340] > > Date/Time: 2008-02-04 00:57:44.518 -0500 > OS Version: Mac OS X 10.5.1 (9B18) > Report Version: 6 > > Exception Type: EXC_BAD_ACCESS (SIGBUS) > Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000 > Crashed Thread: 0 > > Thread 0 Crashed: > 0 readline.so 0x000bbaa3 call_readline + 691 > 1 org.python.python 0x0016b9ae PyOS_Readline + 254 > 2 org.python.python 0x0016cf70 tok_nextc + 64 > 3 org.python.python 0x0016d7a5 PyTokenizer_Get + 101 > 4 org.python.python 0x00168512 parsetok + 210 > 5 org.python.python 0x00212992 PyRun_InteractiveOneFlags + 290 > 6 org.python.python 0x00212bb3 PyRun_InteractiveLoopFlags + 99 > 7 org.python.python 0x00213a69 PyRun_AnyFileExFlags + 185 > 8 org.python.python 0x0021da8a Py_Main + 3130 > 9 org.python.python 0x000018dc 0x1000 + 2268 > 10 org.python.python 0x00001809 0x1000 + 2057 It looks like it's crashing in readline rather than anything in scipy. To determine whether scipy is causing the problem or not, run scipy.test() non-interactively. E.g. $ python -c "import scipy; scipy.test()" -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lorenzo.isella at gmail.com Mon Feb 4 03:52:30 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Mon, 4 Feb 2008 09:52:30 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine Message-ID: Hello, And thanks for your reply. A small aside: I am getting interested into parallel computing with Python since I am a bit surprised at the fact that postprocessing some relatively large arrays of data (5000 by 5000) takes a lot of time and memory on my laptop, but the situation does not improve dramatically on my desktop, which has more memory and is a 64-bit machine (with the amd64 Debian). A question: if I use arrays in Scipy without any special declaration, are they double precision arrays or something "more" as a default on 64-bit machines? If the latter is true, then can I use a single declaration (without chasing every single array) in order to default to standard double precision arithmetic? Cheers Lorenzo > Date: Sun, 3 Feb 2008 22:55:04 +0200 > From: Stefan van der Walt > Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine > To: scipy-user at scipy.org > Message-ID: <20080203205504.GD25396 at mentat.za.net> > Content-Type: text/plain; charset=iso-8859-1 > > Hi Lorenzo > > On Sat, Feb 02, 2008 at 04:22:14PM +0100, Lorenzo Isella wrote: > > I am currently using a Python script on my box to post-process some > > data (the process typically involves operations on 5000 by 5000 > > arrays). > > The Python script also relies heavily on some R scripts (imported via > > Rpy) and a compiled Fortran 90 routine (imported via f2py). > > I have recently made a new Debian testing installation for the amd64 > > architecture on my machine [an Intel Xeon Dual-core pc] so I wonder if > > there is any way to take advantage of both CPU's when running that > > script. > > Is it something which can be achieved "automatically" by installing > > and calling some libraries? Do I have to re-write and re-think my > > whole script? > > Using a parallelised linear algebra library may address most of your > problems. I think (and I hope someone will correct me if I'm wrong) > that ATLAS can be compiled to use multiple threads, and I know MKL > supports it as well. 
> > Another approach would be to parallelize the algorithm itself, using > something like 'processing' (http://pypi.python.org/pypi/processing/). > > You can take that a step further by distributing the problem over > several processes (running on one or more machines), using using > ipython1 (http://ipython.scipy.org/moin/IPython1). > > Good luck! > > St?fan From gael.varoquaux at normalesup.org Mon Feb 4 03:57:54 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 4 Feb 2008 09:57:54 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: <20080204085754.GF15252@phare.normalesup.org> On Mon, Feb 04, 2008 at 09:52:30AM +0100, Lorenzo Isella wrote: > A small aside: I am getting interested into parallel computing with > Python since I am a bit surprised at the fact that postprocessing some > relatively large arrays of data (5000 by 5000) takes a lot of time and > memory on my laptop, but the situation does not improve dramatically > on my desktop, which has more memory and is a 64-bit machine (with the > amd64 Debian). I suspect you are limited by disk IO, if you are loading a lot of file. Did you try profiling? Ga?l From matthieu.brucher at gmail.com Mon Feb 4 03:59:40 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 4 Feb 2008 09:59:40 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: 2008/2/4, Lorenzo Isella : > > Hello, > And thanks for your reply. > A small aside: I am getting interested into parallel computing with > Python since I am a bit surprised at the fact that postprocessing some > relatively large arrays of data (5000 by 5000) takes a lot of time and > memory on my laptop, but the situation does not improve dramatically > on my desktop, which has more memory and is a 64-bit machine (with the > amd64 Debian). > A question: if I use arrays in Scipy without any special declaration, > are they double precision arrays or something "more" as a default on > 64-bit machines? > If the latter is true, then can I use a single declaration (without > chasing every single array) in order to default to standard double > precision arithmetic? > Cheers > > Lorenzo The default is to use doubles on every platform (32 or 64 bits). BTW, single precision is not faster than double precision for not-vectorized loops (like additions), so if memory is not a problem, Numpy's behaviour is the best ;). Using long doubles will not enhance speed. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Mon Feb 4 09:21:34 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 4 Feb 2008 08:21:34 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: Hi, There is no general recommendation and it really does depend on what the scripts are doing. It is not trivial to identify what steps can be made parallel and can be even more complex to implement parallel steps. Given that you are calling R (yes I know R can run in parallel), you need to rethink and redesign your problem. 
If the script can be split into independent pieces (and I really mean completely independent) then just use threads such as the handythread.py code Anne Archibald provided on the numpy list or the Python Cookbook. (I would also suggest searching the numpy list especially for Anne's replies on this.) Otherwise you will have to learn sufficient about parallel computing. . Regards Bruce On Feb 4, 2008 2:52 AM, Lorenzo Isella wrote: > Hello, > And thanks for your reply. > A small aside: I am getting interested into parallel computing with > Python since I am a bit surprised at the fact that postprocessing some > relatively large arrays of data (5000 by 5000) takes a lot of time and > memory on my laptop, but the situation does not improve dramatically > on my desktop, which has more memory and is a 64-bit machine (with the > amd64 Debian). > A question: if I use arrays in Scipy without any special declaration, > are they double precision arrays or something "more" as a default on > 64-bit machines? > If the latter is true, then can I use a single declaration (without > chasing every single array) in order to default to standard double > precision arithmetic? > Cheers > > Lorenzo > > > > Date: Sun, 3 Feb 2008 22:55:04 +0200 > > From: Stefan van der Walt > > Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine > > To: scipy-user at scipy.org > > Message-ID: <20080203205504.GD25396 at mentat.za.net> > > Content-Type: text/plain; charset=iso-8859-1 > > > > Hi Lorenzo > > > > On Sat, Feb 02, 2008 at 04:22:14PM +0100, Lorenzo Isella wrote: > > > I am currently using a Python script on my box to post-process some > > > data (the process typically involves operations on 5000 by 5000 > > > arrays). > > > The Python script also relies heavily on some R scripts (imported via > > > Rpy) and a compiled Fortran 90 routine (imported via f2py). > > > I have recently made a new Debian testing installation for the amd64 > > > architecture on my machine [an Intel Xeon Dual-core pc] so I wonder if > > > there is any way to take advantage of both CPU's when running that > > > script. > > > Is it something which can be achieved "automatically" by installing > > > and calling some libraries? Do I have to re-write and re-think my > > > whole script? > > > > Using a parallelised linear algebra library may address most of your > > problems. I think (and I hope someone will correct me if I'm wrong) > > that ATLAS can be compiled to use multiple threads, and I know MKL > > supports it as well. > > > > Another approach would be to parallelize the algorithm itself, using > > something like 'processing' (http://pypi.python.org/pypi/processing/). > > > > You can take that a step further by distributing the problem over > > several processes (running on one or more machines), using using > > ipython1 (http://ipython.scipy.org/moin/IPython1). > > > > Good luck! > > > > St?fan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From emsellem at obs.univ-lyon1.fr Mon Feb 4 10:56:45 2008 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Mon, 04 Feb 2008 16:56:45 +0100 Subject: [SciPy-user] efficient "inside polygon" test for an array?? Message-ID: <47A735BD.7040007@obs.univ-lyon1.fr> Hi I have a polygon (defined by 4 vertices) and I wish to have an efficient way of selecting the points which are inside this polygon. 
So I would like something like: selection = pointsInPolygon(x,y,poly) where x and y are numpy arrays and poly the 2xN array defining the vertices of the polygons. I have the code for single points. But "vectorizing" it make this routine VERY slow and not exploitable (I have to do this for MANY polygons and x,y arrays which are big). Do you have something like that in scipy (or somewhere else)? thanks in advance! Eric From pgmdevlist at gmail.com Mon Feb 4 11:12:09 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 4 Feb 2008 11:12:09 -0500 Subject: [SciPy-user] efficient "inside polygon" test for an array?? In-Reply-To: <47A735BD.7040007@obs.univ-lyon1.fr> References: <47A735BD.7040007@obs.univ-lyon1.fr> Message-ID: <200802041112.09735.pgmdevlist@gmail.com> On Monday 04 February 2008 10:56:45 Eric Emsellem wrote: > I have a polygon (defined by 4 vertices) and I wish to have an efficient > way of selecting the points which are inside this polygon. Eric, It might be overkill, but have you considered gdal (http://gdal.org/) ? It is a very useful tool to manipulate geometries. In particular, it has functions to compute the intersection between polygons. From jdh2358 at gmail.com Mon Feb 4 11:15:09 2008 From: jdh2358 at gmail.com (John Hunter) Date: Mon, 4 Feb 2008 10:15:09 -0600 Subject: [SciPy-user] efficient "inside polygon" test for an array?? In-Reply-To: <47A735BD.7040007@obs.univ-lyon1.fr> References: <47A735BD.7040007@obs.univ-lyon1.fr> Message-ID: <88e473830802040815yfbdbe97r18cc994d403cca76@mail.gmail.com> On Feb 4, 2008 9:56 AM, Eric Emsellem wrote: > Hi > > I have a polygon (defined by 4 vertices) and I wish to have an efficient way of > selecting the points which are inside this polygon. > So I would like something like: > > selection = pointsInPolygon(x,y,poly) Where points is a sequence of x,y points and verts is a sequence of x,y vertices of a poygon >>> import matplotlib.nxutils as nxutils >>> mask = nxutils.points_inside_poly(points, verts) This is implemented in C using an efficient algorithm so should work well for you. JDH > > where x and y are numpy arrays and poly the 2xN array defining the vertices of > the polygons. > > I have the code for single points. But "vectorizing" it make this routine VERY > slow and not exploitable (I have to do this for MANY polygons and x,y arrays > which are big). > > Do you have something like that in scipy (or somewhere else)? > > thanks in advance! > Eric > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From emsellem at obs.univ-lyon1.fr Mon Feb 4 11:28:24 2008 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Mon, 04 Feb 2008 17:28:24 +0100 Subject: [SciPy-user] efficient "inside polygon" test for an array?? References: 47A735BD.7040007@obs.univ-lyon1.fr Message-ID: <47A73D28.1030809@obs.univ-lyon1.fr> Great! just tested, and it IS efficient indeed. THANKS thanks. problem solved. (I did a deep search on the web and nothing like that emerged because I missed to include the "nxutils" keyword: any way to improve this situation, for future similar searches ?) Eric From jdh2358 at gmail.com Mon Feb 4 13:10:13 2008 From: jdh2358 at gmail.com (John Hunter) Date: Mon, 4 Feb 2008 12:10:13 -0600 Subject: [SciPy-user] efficient "inside polygon" test for an array?? 
In-Reply-To: <47A73D28.1030809@obs.univ-lyon1.fr> References: <47A73D28.1030809@obs.univ-lyon1.fr> Message-ID: <88e473830802041010we75cacbpefa402926e6b6f63@mail.gmail.com> On Feb 4, 2008 10:28 AM, Eric Emsellem wrote: > Great! > > just tested, and it IS efficient indeed. THANKS > > thanks. problem solved. (I did a deep search on the web and nothing like that > emerged because I missed to include the "nxutils" keyword: any way to improve > this situation, for future similar searches ?) Well, there is no magic search, but if you search for: python point in polygon about 10 or so results from the top is a link to this thread: http://www.nabble.com/Cross-hair-and-polygon-drawing-tools.-td14199642.html which mentions the nxutils routing. JDH From dwf at cs.toronto.edu Mon Feb 4 13:44:44 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 4 Feb 2008 13:44:44 -0500 Subject: [SciPy-user] efficient "inside polygon" test for an array?? In-Reply-To: <88e473830802040815yfbdbe97r18cc994d403cca76@mail.gmail.com> References: <47A735BD.7040007@obs.univ-lyon1.fr> <88e473830802040815yfbdbe97r18cc994d403cca76@mail.gmail.com> Message-ID: On 4-Feb-08, at 11:15 AM, John Hunter wrote: > On Feb 4, 2008 9:56 AM, Eric Emsellem > wrote: >> Hi >> >> I have a polygon (defined by 4 vertices) and I wish to have an >> efficient way of >> selecting the points which are inside this polygon. >> So I would like something like: >> >> selection = pointsInPolygon(x,y,poly) It seems your problem has already been addressed, but some very good discussion of this problem (along with some 30-odd year old code, and a more recent C implementation) can be found at http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html I wouldn't be surprised if nxutils uses the same algorithm. David From robert.kern at gmail.com Mon Feb 4 14:12:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 04 Feb 2008 13:12:38 -0600 Subject: [SciPy-user] efficient "inside polygon" test for an array?? In-Reply-To: References: <47A735BD.7040007@obs.univ-lyon1.fr> <88e473830802040815yfbdbe97r18cc994d403cca76@mail.gmail.com> Message-ID: <47A763A6.8040302@gmail.com> David Warde-Farley wrote: > On 4-Feb-08, at 11:15 AM, John Hunter wrote: > >> On Feb 4, 2008 9:56 AM, Eric Emsellem >> wrote: >>> Hi >>> >>> I have a polygon (defined by 4 vertices) and I wish to have an >>> efficient way of >>> selecting the points which are inside this polygon. >>> So I would like something like: >>> >>> selection = pointsInPolygon(x,y,poly) > > It seems your problem has already been addressed, but some very good > discussion of this problem (along with some 30-odd year old code, and > a more recent C implementation) can be found at > > http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html > > I wouldn't be surprised if nxutils uses the same algorithm. It does. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dineshbvadhia at hotmail.com Mon Feb 4 18:26:35 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Mon, 4 Feb 2008 15:26:35 -0800 Subject: [SciPy-user] MemoryError transforming COO matrix to a CSR matrix Message-ID: Hello! Related to the post yesterday, I get a MemoryError when transforming a coo_matrix to a csr_matrix. The coo_matrix is loaded with about 32m int's (in fact just 1's) which using 4 bytes per int works out to about 122Mb for A. 
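For scale, a rough accounting of what has to be live in memory during such a conversion (a sketch only; it assumes 4-byte integers for the data and for each index array, and uses the nnz and row count quoted below -- the 122Mb figure counts just the data array):

nnz = 31398038
b = 4                              # bytes per int, assuming 32-bit ints throughout
ij  = 2 * nnz * b                  # the pickled (nnz, 2) index array, roughly 250 MB
coo = 3 * nnz * b                  # coo_matrix: data + row + col, roughly 375 MB
csr = (2 * nnz + 20000 + 1) * b    # csr_matrix: data + indices + indptr, roughly 250 MB
print (ij + coo + csr) / 1e6, "MB before any temporary copies made during the conversion"

If the interpreter is a 32-bit process, its usable address space is roughly 2 GB regardless of installed RAM, so there is much less headroom for temporary copies during the conversion than the installed memory suggests.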
I have 2Gb of RAM on my Windows machine which should be ample for transforming A to a csr_matrix. Here is the error message followed by the code: Traceback (most recent call last): File "... \... .py", line 310, in A = sparse.csr_matrix(A) File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 1162, in __init__ temp = s.tocsr() File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 2175, in tocsr return csr_matrix((data, colind, indptr), self.shape, check=False) File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 1197, in __init__ self.data = asarray(s, dtype=self.dtype) File "C:\Python25\lib\site-packages\numpy\core\numeric.py", line 132, in asarray return array(a, dtype, copy=False, order=order) MemoryError # imports import numpy import scipy from scipy import sparse # constants nnz = 31398038 I = 20000 J = 80000 dataFile = aFilename # Initialize A as a coo_matrix with dimensions(I, J) > A = sparse.coo_matrix(None, dims=(I, J), dtype=int) # Populate matrix A by first loading data into a coo_matrix using the coo_matrix(V, (I,J)), dims) method > ij = numpy.array(numpy.empty((nnz, 2), dtype=int)) > f = open(dataFile, 'rb') > ij = pickle.load(f) > row = ij[:,0] > column = ij[:,1] > data = scipy.ones(ij.shape[0], dtype=int) # Load data into A, convert A to csr_matrix > A = sparse.coo_matrix((data, (row, column)), dims=(I,J)) > A = sparse.csr_matrix(A) Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From timmichelsen at gmx-topmail.de Mon Feb 4 19:23:01 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Tue, 05 Feb 2008 01:23:01 +0100 Subject: [SciPy-user] efficient "inside polygon" test for an array?? In-Reply-To: <200802041112.09735.pgmdevlist@gmail.com> References: <47A735BD.7040007@obs.univ-lyon1.fr> <200802041112.09735.pgmdevlist@gmail.com> Message-ID: Pierre GM schrieb: > On Monday 04 February 2008 10:56:45 Eric Emsellem wrote: >> I have a polygon (defined by 4 vertices) and I wish to have an efficient >> way of selecting the points which are inside this polygon. > > Eric, > It might be overkill, but have you considered gdal (http://gdal.org/) ? It is > a very useful tool to manipulate geometries. In particular, it has functions > to compute the intersection between polygons. You may also want to take a look at: Shapely 1.0 - Geospatial geometries, predicates, and operations http://pypi.python.org/pypi/Shapely If you need more GIS and Geo related tools look at: the Python category at FreeGIS: http://freegis.org/database/?cat=21 Kind regards, Timmie From jre at enthought.com Tue Feb 5 00:07:46 2008 From: jre at enthought.com (J. Ryan Earl) Date: Mon, 04 Feb 2008 23:07:46 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: <47A7EF22.4070900@enthought.com> Lorenzo Isella wrote: > I am a bit surprised at the fact that postprocessing some > relatively large arrays of data (5000 by 5000) takes a lot of time and > memory on my laptop, but the situation does not improve dramatically > on my desktop, which has more memory and is a 64-bit machine (with the > amd64 Debian). > A question: if I use arrays in Scipy without any special declaration, > are they double precision arrays or something "more" as a default on > 64-bit machines? I see a lot of confusion on this topic in general. When people talk about a "64-bit" machine in general CPU terms, they're talking about its address space. 
You're mixing up the size of address operands with the size of data operands. With SSE[1-4] intructions 32-bit processors are able to work on 128-bit data operands, or packed 64-bit operands. PPC can do similar though arguably better with its Altivect instructions. 64-bit is mainly going to be an advantage when you're working with processes that need to map more than 3GB of memory. In respect to x86-64 (ie AMD64/EM64T) you also get a little bit of extra performance because a lot of the x86 cludge is cleaned up, and in particular it provides twice as many registers to work with than it does in 32-bit mode. At best, you're looking at a 10% gain in performance over properly optimized 32-bit code if you're not memory constrained. This performance is mainly from the compiler being able to more aggressively unroll loops into the extra registers. -ryan From david at ar.media.kyoto-u.ac.jp Tue Feb 5 04:54:27 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 05 Feb 2008 18:54:27 +0900 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: <47A83253.2060601@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > > > 2008/2/4, Lorenzo Isella >: > > Hello, > And thanks for your reply. > A small aside: I am getting interested into parallel computing with > Python since I am a bit surprised at the fact that postprocessing some > relatively large arrays of data (5000 by 5000) takes a lot of time and > memory on my laptop, but the situation does not improve dramatically > on my desktop, which has more memory and is a 64-bit machine (with the > amd64 Debian). > A question: if I use arrays in Scipy without any special declaration, > are they double precision arrays or something "more" as a default on > 64-bit machines? > If the latter is true, then can I use a single declaration (without > chasing every single array) in order to default to standard double > precision arithmetic? > Cheers > > Lorenzo > > > The default is to use doubles on every platform (32 or 64 bits). BTW, > single precision is not faster than double precision for > not-vectorized loops (like additions), so if memory is not a problem, > Numpy's behaviour is the best ;). Using long doubles will not enhance > speed. I am a bit suprised by this affirmation: at C level, float is certainly faster than double. It of course depends on many parameters, but for example ATLAS is (almost) twice faster for big matrices with float compared to double, on my two main machines: a pentium 4 and a CoreDuo2, which have extremely different behaviours with regard to their FPU. AFAIK, the different is mainly due to memory pressure (at CPU level, float and double are roughly the same, but this is not the limitation on currently available CPU). cheers, David From matthieu.brucher at gmail.com Tue Feb 5 05:15:29 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 5 Feb 2008 11:15:29 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47A83253.2060601@ar.media.kyoto-u.ac.jp> References: <47A83253.2060601@ar.media.kyoto-u.ac.jp> Message-ID: > > > The default is to use doubles on every platform (32 or 64 bits). BTW, > > single precision is not faster than double precision for > > not-vectorized loops (like additions), so if memory is not a problem, > > Numpy's behaviour is the best ;). Using long doubles will not enhance > > speed. > > I am a bit suprised by this affirmation: at C level, float is certainly > faster than double. 
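A quick way to check the claim on a given machine (and, incidentally, the default-dtype question raised earlier) is a sketch along these lines; the absolute timings depend entirely on the BLAS numpy is linked against, so the numbers are only illustrative:

import numpy, time

a = numpy.random.rand(2000, 2000)   # default dtype is float64, on 32-bit and 64-bit builds alike
b = a.astype(numpy.float32)
print a.dtype, b.dtype

t = time.time()
numpy.dot(a, a)
print "float64:", time.time() - t, "s"

t = time.time()
numpy.dot(b, b)
print "float32:", time.time() - t, "s"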
It of course depends on many parameters, but for > example ATLAS is (almost) twice faster for big matrices with float > compared to double, on my two main machines: a pentium 4 and a CoreDuo2, > which have extremely different behaviours with regard to their FPU. > AFAIK, the different is mainly due to memory pressure (at CPU level, > float and double are roughly the same, but this is not the limitation on > currently available CPU). > In fact, it depends on what PU is used. If it is the usual x87 FPU, the floats are stored as doubles in the registers and they are both as fast. But if you use SSE or SSE2 instructions, then floats can get faster. As you said, if there are a lot of loads and stores, floats have the upper hand over doubles. If you work a lot on small arrays, then this difference may disappear. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From josegomez at gmx.net Tue Feb 5 10:15:40 2008 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Tue, 05 Feb 2008 16:15:40 +0100 Subject: [SciPy-user] Fastest way to read a matrix in Message-ID: <20080205151540.326870@gmx.net> Hi! I have a large set of matrices on ASCII files stored on disk. Each is made up of a number of M of rows, with N elements on each row separated by spaces. I know beforehand what M and N are, and I want to read them into an MxN array (or is it NxM? :D) I am using scipy.io.read_array(), but the performance is fairly slow (these are 80x80ish arrays). While they are on NFS mounts, other programs read the data in faster than python's scipy.io.read_array, so I was wondering whether there's a faster way of reading the data in (maybe giving hints on the number of columns and rows, using some other function, etc)? Any hints greatly appreciated, Jose -- Psssst! Schon vom neuen GMX MultiMessenger geh?rt? Der kann`s mit allen: http://www.gmx.net/de/go/multimessenger From wnbell at gmail.com Tue Feb 5 11:45:06 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 5 Feb 2008 10:45:06 -0600 Subject: [SciPy-user] Fastest way to read a matrix in In-Reply-To: <20080205151540.326870@gmx.net> References: <20080205151540.326870@gmx.net> Message-ID: On Feb 5, 2008 9:15 AM, Jose Luis Gomez Dans wrote: > Hi! > I have a large set of matrices on ASCII files stored on disk. Each is made up of a number of M of rows, with N elements on each row separated by spaces. I know beforehand what M and N are, and I want to read them into an MxN array (or is it NxM? :D) I am using scipy.io.read_array(), but the performance is fairly slow (these are 80x80ish arrays). While they are on NFS mounts, other programs read the data in faster than python's scipy.io.read_array, so I was wondering whether there's a faster way of reading the data in (maybe giving hints on the number of columns and rows, using some other function, etc)? Try numpy.fromfile() A = fromfile("myfile.txt", dtype=float, count=80*80, sep=' ').reshape(80,80) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From matthieu.brucher at gmail.com Tue Feb 5 11:59:04 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 5 Feb 2008 17:59:04 +0100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) 
In-Reply-To:
References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu> <20080122202148.GA17231@mentat.za.net>
Message-ID:

I will modify your code so that the cooccurence matrix can be computed
between two blocks (for instance inter-channel cooccurence matrix for
colour textures or 3D cooccurence matrix), I'll need it in the near future.
If you're planning on making it a scikit, please let us know so that I can
provide a patch.

Matthieu

2008/1/22, Matthieu Brucher :
> > > I'm happy to talk at more length about the possibility of cobbling
> > > together such a scikit, if anyone's interested.
> >
> > I am all for the idea. Ndimage was written before numpy was on the
> > scene, and now we can replace a lot of its functionality using Python
> > code (that would execute just as fast!).
> >
> > It would be great, I'd like to see this, cooccurence coefficients can be
> interesting in manifold learning :)
>
> Matthieu
> --
> French PhD student
> Website : http://matthieu-brucher.developpez.com/
> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn : http://www.linkedin.com/in/matthieubrucher
>

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josegomez at gmx.net Tue Feb 5 12:04:34 2008
From: josegomez at gmx.net (Jose Luis Gomez Dans)
Date: Tue, 05 Feb 2008 18:04:34 +0100
Subject: [SciPy-user] Fastest way to read a matrix in
In-Reply-To:
References: <20080205151540.326870@gmx.net>
Message-ID: <20080205170434.220620@gmx.net>

Hi!

> On Feb 5, 2008 9:15 AM, Jose Luis Gomez Dans wrote:
> MxN array (or is it NxM? :D) I am using scipy.io.read_array(), but the
> performance is fairly slow (these are 80x80ish arrays). While they are on NFS
> mounts, other programs read the data in faster than python's
> scipy.io.read_array, so I was wondering whether there's a faster way of reading the data in
> (maybe giving hints on the number of columns and rows, using some other
> function, etc)?
>
> Try numpy.fromfile()

Aaaahhhh.... This was an improvement. It appears that numpy also has loadtxt(). A few simple examples show that read_array takes between 0.5 and 0.6 s of wall time, with loadtxt taking 0.04 and fromfile() (your suggestion) 0.01 (same file, already in cache, repeat tests 10 times). That's 3 methods that look as if they do the same sort of thing, and three very different performances.
Cheers!
Jose

From listservs at mac.com Tue Feb 5 15:46:07 2008
From: listservs at mac.com (Chris)
Date: Tue, 5 Feb 2008 20:46:07 +0000 (UTC)
Subject: [SciPy-user] using f2py: module not found
Message-ID:

Hello,
I'm trying to build a package on Linux (Ubuntu) that contains a fortran
module, built using f2py. However, despite the module building and
installing without error, python cannot seem to see it (see log below).
This works fine on Windows and Mac; the problem only seems to happen on Linux: In [1]: import PyMC ----------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/tianhuil/ /usr/lib/python2.4/site-packages/PyMC/__init__.py /home/tianhuil/ /usr/lib/python2.4/site-packages/PyMC/MCMC.py ImportError: No module named flib /usr/lib/python2.4/site-packages/PyMC/MCMC.py Notice that the module exists in the site-packages directory: tianhuil at tianhuil:/usr/lib/python2.4/site-packages/PyMC$ ll total 432 drwxr-xr-x 2 root root 4096 2008-02-03 17:24 Backends -rwxrwx--- 1 root root 195890 2008-02-03 17:24 flib.so -rwxrwx--- 1 root root 259 2008-02-03 17:14 __init__.py -rw-r--r-- 1 root root 473 2008-02-03 17:24 __init__.pyc -rwxrwx--- 1 root root 10250 2008-02-03 17:14 Matplot.py -rw-r--r-- 1 root root 7516 2008-02-03 17:24 Matplot.pyc -rwxrwx--- 1 root root 98274 2008-02-03 17:14 MCMC.py -rw-r--r-- 1 root root 79039 2008-02-03 17:24 MCMC.pyc drwxr-xr-x 2 root root 4096 2008-02-03 17:24 Tests -rwxrwx--- 1 root root 6631 2008-02-03 17:14 TimeSeries.py -rw-r--r-- 1 root root 5043 2008-02-03 17:24 TimeSeries.pyc From robert.kern at gmail.com Tue Feb 5 15:47:28 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Feb 2008 14:47:28 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47A7EF22.4070900@enthought.com> References: <47A7EF22.4070900@enthought.com> Message-ID: <47A8CB60.6030407@gmail.com> J. Ryan Earl wrote: > Lorenzo Isella wrote: >> I am a bit surprised at the fact that postprocessing some >> relatively large arrays of data (5000 by 5000) takes a lot of time and >> memory on my laptop, but the situation does not improve dramatically >> on my desktop, which has more memory and is a 64-bit machine (with the >> amd64 Debian). >> A question: if I use arrays in Scipy without any special declaration, >> are they double precision arrays or something "more" as a default on >> 64-bit machines? > I see a lot of confusion on this topic in general. When people talk > about a "64-bit" machine in general CPU terms, they're talking about its > address space. You're mixing up the size of address operands with the > size of data operands. He's not really confusing the two. Many systems change the size of the data operands based on the size of the address operands. http://en.wikipedia.org/wiki/64-bit#64-bit_data_models As a general rule, though, only C integer types change size; the C standard is notoriously flexible in this regard. This has some downstream effects: Python's int object are stored with C longs and numpy's default "int" dtype is whatever size that is. While a system could theoretically change its default floating point type based on the 64-bitness of the CPU/compiler combination, I've never seen anything do that. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jdh2358 at gmail.com Tue Feb 5 16:06:25 2008 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 5 Feb 2008 15:06:25 -0600 Subject: [SciPy-user] piecewise linear approximation Message-ID: <88e473830802051306t2b20b7e3se5d9dfe99df5765b@mail.gmail.com> I would like to do a piecewise linear approximation to a time series using the constraint that I can use at most N piecewise linear segments where the error between the piecewise approximation and the original time series in minimized. 
N is an input to the algorithm. I suspect this problem is solved somewhere (using scipy!), so am wondering if someone can point me to the light. I've attached an example data set that is representative of the kind of time series I want to approximate. For output, I primarily need the indices of the segment end points Thanks, JDH -------------- next part -------------- A non-text attachment was scrubbed... Name: spy.dat Type: video/mpeg Size: 13125 bytes Desc: not available URL: From lorenzo.isella at gmail.com Tue Feb 5 16:52:30 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Tue, 05 Feb 2008 22:52:30 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: Message-ID: <47A8DA9E.1020001@gmail.com> Hello, And thanks everybody for the many replies. I partially solved the problem adding some extra RAM memory. A rather primitive solution, but now my desktop does not use any swap memory and the code runs faster. Unfortunately, the nature of the code does not easily lend itself to being split up into easier tasks. However, apart from the parallel python homepage, what is your recommendation for a beginner who wants a smattering in parallel computing (I have in mind C and Python at the moment)? Cheers Lorenzo Message: 5 Date: Mon, 4 Feb 2008 08:21:34 -0600 From: "Bruce Southey" Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset=ISO-8859-1 Hi, There is no general recommendation and it really does depend on what the scripts are doing. It is not trivial to identify what steps can be made parallel and can be even more complex to implement parallel steps. Given that you are calling R (yes I know R can run in parallel), you need to rethink and redesign your problem. If the script can be split into independent pieces (and I really mean completely independent) then just use threads such as the handythread.py code Anne Archibald provided on the numpy list or the Python Cookbook. (I would also suggest searching the numpy list especially for Anne's replies on this.) Otherwise you will have to learn sufficient about parallel computing. . Regards From Karl.Young at ucsf.edu Tue Feb 5 16:29:47 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 05 Feb 2008 13:29:47 -0800 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) In-Reply-To: References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu> <20080122202148.GA17231@mentat.za.net> Message-ID: <47A8D54B.1010807@ucsf.edu> I'd be very interested in that as well - I just downloaded and played around with the code that St?fan graciously provided (greycomatrix). I'd be interested in seeing the code generalized along the lines described (arbitrary block shapes/sizes in arbitrary dimensions (well almost arbitrary - 3D and 4D would be arbitrary enough for me at the moment)). I was thinking about hacking it myself but I'm sure I wouldn't do nearly as nice a job of it as e.g. St?fan so was glad to see there might be more general interest. I'd be willing to contribute in whatever way would be useful. > I will modify your code so that the cooccurence matrix can be computed > between two blocks (for instance inter-channel cooccurence matrix for > colour textures or 3D cooccurence matrix), I'll need it in the near > future. > If you're planning of making it a scikit, please let us know so that I > can provide a patch. 
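A minimal sketch of the kind of generalisation being discussed -- a co-occurrence count for an arbitrary integer offset vector in any number of dimensions. This is only an illustration (not the greycomatrix code posted earlier; the function name and interface here are made up for the example), and it assumes the grey levels are already quantised to integers 0..levels-1 and that the offset is shorter than the array along every axis:

import numpy

def cooccurrence(image, offset, levels):
    # Pair image[p] with image[p + offset] for every position p where both
    # p and p + offset fall inside the array, then count the pairs.
    src, dst = [], []
    for d, n in zip(offset, image.shape):
        if d >= 0:
            src.append(slice(0, n - d))
            dst.append(slice(d, n))
        else:
            src.append(slice(-d, n))
            dst.append(slice(0, n + d))
    i = image[tuple(src)].ravel().astype(int)
    j = image[tuple(dst)].ravel().astype(int)
    counts = numpy.bincount(i * levels + j)
    if counts.size < levels * levels:
        pad = numpy.zeros(levels * levels - counts.size, dtype=counts.dtype)
        counts = numpy.concatenate([counts, pad])
    return counts.reshape(levels, levels)

# e.g. a small 3-D "image" with 8 grey levels and an offset along the last two axes
img = (numpy.random.rand(4, 5, 6) * 8).astype(int)
glcm = cooccurrence(img, (0, 1, 1), 8)

Taking i and j from two different arrays of the same shape gives the inter-channel variant mentioned above.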
> > Matthieu > > 2008/1/22, Matthieu Brucher >: > >> I'm happy to talk at more length about the possibility of > cobbling >> together such a scikit, if anyone's interested. > > I am all for the idea. Ndimage was written before numpy was > on the > scene, and now we can replace a lot of its functionality using > Python > code (that would execute just as fast!). > > > It would be great, I'd like to see this, cooccurence coefficients > can be interesting in manifold learning :) > > Matthieu > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and > http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > > > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From Karl.Young at ucsf.edu Tue Feb 5 16:42:26 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 05 Feb 2008 13:42:26 -0800 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47A8DA9E.1020001@gmail.com> References: <47A8DA9E.1020001@gmail.com> Message-ID: <47A8D842.3080804@ucsf.edu> If you're interested in using MPI in python I got started by going through some general tutorials like those at the LAM site (http://www.lam-mpi.org/) and modifying some of the example scripts provided with pypar (http://datamining.anu.edu.au/~ole/pypar/). pypar is nice in that it provides a very simple, stripped down interface to MPI though I think there are more complete, robust versions these days like mpi4py (which when I get time to get back to hacking some parallel code I mean to start using). Ipython1 (http://ipython.scipy.org/moin/IPython1) is also a nice way to do parallel programming but it's kind of be nice to start with something simple like pypar which gives you a fairly limited range of options. There are probably better ways of generally doing parallel coding these days, i.e. combining threads and distributed memory models - I know there are some experts on this list far more qualified than I to provide general comments. >Hello, >And thanks everybody for the many replies. >I partially solved the problem adding some extra RAM memory. >A rather primitive solution, but now my desktop does not use any swap memory and the code runs faster. >Unfortunately, the nature of the code does not easily lend itself to being split up into easier tasks. >However, apart from the parallel python homepage, what is your recommendation for a beginner who wants a smattering in parallel computing (I have in mind C and Python at the moment)? >Cheers > >Lorenzo > > >Message: 5 >Date: Mon, 4 Feb 2008 08:21:34 -0600 >From: "Bruce Southey" >Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine >To: "SciPy Users List" >Message-ID: > >Content-Type: text/plain; charset=ISO-8859-1 > >Hi, > >There is no general recommendation and it really does depend on what >the scripts are doing. 
It is not trivial to identify what steps can be >made parallel and can be even more complex to implement parallel >steps. > >Given that you are calling R (yes I know R can run in parallel), you >need to rethink and redesign your problem. If the script can be split >into independent pieces (and I really mean completely independent) >then just use threads such as the handythread.py code Anne Archibald >provided on the numpy list or the Python Cookbook. (I would also >suggest searching the numpy list especially for Anne's replies on >this.) Otherwise you will have to learn sufficient about parallel >computing. > >. > >Regards > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From matthieu.brucher at gmail.com Tue Feb 5 17:11:29 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 5 Feb 2008 23:11:29 +0100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) In-Reply-To: <47A8D54B.1010807@ucsf.edu> References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu> <20080122202148.GA17231@mentat.za.net> <47A8D54B.1010807@ucsf.edu> Message-ID: 2008/2/5, Karl Young : > > > I'd be very interested in that as well - I just downloaded and played > around with the code that St?fan graciously provided (greycomatrix). I'd > be interested in seeing the code generalized along the lines described > (arbitrary block shapes/sizes in arbitrary dimensions (well almost > arbitrary - 3D and 4D would be arbitrary enough for me at the moment)). > I was thinking about hacking it myself but I'm sure I wouldn't do nearly > as nice a job of it as e.g. St?fan so was glad to see there might be > more general interest. I'd be willing to contribute in whatever way > would be useful. If you have an article where this is explained (and used as well), I would be happy to help with it. For the moment, I only saw an almost 3D cooccurence matrix (doi:10.1016/S0730-725X(03)00201-7), but not for a full fledged 3D COM (with application to segmentation). Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 5 17:53:56 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Feb 2008 16:53:56 -0600 Subject: [SciPy-user] piecewise linear approximation In-Reply-To: <88e473830802051306t2b20b7e3se5d9dfe99df5765b@mail.gmail.com> References: <88e473830802051306t2b20b7e3se5d9dfe99df5765b@mail.gmail.com> Message-ID: <47A8E904.3010306@gmail.com> John Hunter wrote: > I would like to do a piecewise linear approximation to a time series > using the constraint that I can use at most N piecewise linear > segments where the error between the piecewise approximation and the > original time series in minimized. N is an input to the algorithm. I > suspect this problem is solved somewhere (using scipy!), so am > wondering if someone can point me to the light. Not OOB, no. Most algorithms I am aware of (e.g. 
Douglas-Peucker) are constrained by the error rather than the number of segments. You might be able to find an algorithm that can be modified to do so in the references here (search in the page for "Piecewise Linear Approximation"; the first paper there is a good overview of several algorithms): http://appsrv.cse.cuhk.edu.hk/~mzhou/time%20series%20reading.htm One approach would be to use one of these error-limited algorithms and relax the error constraint until you only use N or fewer segments. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Karl.Young at ucsf.edu Tue Feb 5 17:28:23 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 05 Feb 2008 14:28:23 -0800 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) In-Reply-To: References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu> <20080122202148.GA17231@mentat.za.net> <47A8D54B.1010807@ucsf.edu> Message-ID: <47A8E307.2010306@ucsf.edu> Matthieu, There's a very abstract definition of the usage in: Young K, Schuff N. Measuring structural complexity in brain images. Neuroimage Vol 39/4 pp 1721-1730 (2008) (if you can't get a copy let me know I can send you a pdf). To implement the special case re. the example in that paper I've got some kludgy code that essentially generates the cooccurence matrix for contiguous linear blocks of arbitrary length in dimensions up to 4. But many parts of that code are unnecessarily specific to linear blocks (as usual I was in too much of a hurry to generate results !). I'd love to see a more community based effort at producing something more general (and efficient, robust,...). But if this is something only I'm interested in I'm happy to just hack something that works and not clutter up another package with overly general (and kludgy) code. I just thought there might be some texture analysis types who might be interested in something like that. Thanks for the response; let me know if you have any interest, otherwise no worries. > > > 2008/2/5, Karl Young >: > > > I'd be very interested in that as well - I just downloaded and played > around with the code that St?fan graciously provided > (greycomatrix). I'd > be interested in seeing the code generalized along the lines described > (arbitrary block shapes/sizes in arbitrary dimensions (well almost > arbitrary - 3D and 4D would be arbitrary enough for me at the > moment)). > I was thinking about hacking it myself but I'm sure I wouldn't do > nearly > as nice a job of it as e.g. St?fan so was glad to see there might be > more general interest. I'd be willing to contribute in whatever way > would be useful. > > > If you have an article where this is explained (and used as well), I > would be happy to help with it. For the moment, I only saw an almost > 3D cooccurence matrix (doi:10.1016/S0730-725X(03)00201-7), but not for > a full fledged 3D COM (with application to segmentation). 
> > Matthieu > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From dineshbvadhia at hotmail.com Tue Feb 5 18:08:17 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Tue, 5 Feb 2008 15:08:17 -0800 Subject: [SciPy-user] Creating coo_matrix from data in text file Message-ID: The sparse coo_matrix method performs really well but our data sets are very large and the working arrays (ie. ij, row, column and data) take up significant memory. The judicious use of helps but not that much. Is there a fast method available similar to coo_matrix to create a sparse matrix from a text file instead of through a set of interim working arrays? The file would contain the coordinates (i, j) and the value of each item. Once the sparse matrix has been created we can then save/load it at will (using Andrew Straw's fast load/save code). Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Tue Feb 5 18:44:16 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 05 Feb 2008 17:44:16 -0600 Subject: [SciPy-user] Fastest way to read a matrix in In-Reply-To: <20080205170434.220620@gmx.net> References: <20080205151540.326870@gmx.net> <20080205170434.220620@gmx.net> Message-ID: <47A8F4D0.10903@enthought.com> Jose Luis Gomez Dans wrote: > Hi! > > >> On Feb 5, 2008 9:15 AM, Jose Luis Gomez Dans wrote: >> MxN array (or is it NxM? :D) I am using scipy.io.read_array(), but the >> performance is fairly slow (these are 80x80ish arrays). While they are on NFS >> mounts, other programs read the data in faster than python's >> scipy.io.read_array, so I was wondering whether there's a faster way of reading the data in >> (maybe giving hints on the number of columns and rows, using some other >> function, etc)? >> >> Try numpy.fromfile() >> > > Aaaahhhh.... This was an improvement. It appears that numpy also has loadtxt(). A few simple examples show that read_array takes of the between 0.5-0.6 of wall time, with loadtxt taking 0.04 and fromfile() (your suggestion) 0.01 (same file, already in cache, repeat tests 10 times). That's 3 methods that look as if they do the same sort of thing, and three very different performances. > Yes, we are trying to fix this. In fact read_array will be deprecated in 0.7 and loadtxt will be promoted in NumPy. The fromfile will always exist as a low-level routine (without any bells and whistles) which can handle very uniform file-layout, but it will not be advertised in a tutorial. scipy.io.read_array suffers from feature creep which slows down simple operations. 
-Travis From wnbell at gmail.com Tue Feb 5 19:07:51 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 5 Feb 2008 18:07:51 -0600 Subject: [SciPy-user] Creating coo_matrix from data in text file In-Reply-To: References: Message-ID: On Feb 5, 2008 5:08 PM, Dinesh B Vadhia wrote: > The sparse coo_matrix method performs really well but our data sets are very > large and the working arrays (ie. ij, row, column and data) take up > significant memory. The judicious use of helps > but not that much. > > Is there a fast method available similar to coo_matrix to create a sparse > matrix from a text file instead of through a set of interim working arrays? > The file would contain the coordinates (i, j) and the value of each item. > Once the sparse matrix has been created we can then save/load it at will > (using Andrew Straw's fast load/save code). Suppose you have a file named matrix.txt with the following contents: $ cat matrix.txt 0 1 10 0 2 20 5 3 -5 6 4 14 now run this script: from numpy import fromfile from scipy.sparse import coo_matrix IJV = fromfile("matrix.txt",sep=" ").reshape(-1,3) row = IJV[:,0] col = IJV[:,1] data = IJV[:,2] A = coo_matrix( (data,(row,col)) ) print repr(A) print A.todense() You should see: <7x5 sparse matrix of type '' with 4 stored elements in COOrdinate format> [[ 0. 10. 20. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. -5. 0.] [ 0. 0. 0. 0. 14.]] This should be very fast. The only thing that would be faster is the recent scipy.io MATLAB file support which stores data in binary format (or storing your own binary format I suppose) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From david at ar.media.kyoto-u.ac.jp Tue Feb 5 21:47:26 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 06 Feb 2008 11:47:26 +0900 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47A8DA9E.1020001@gmail.com> References: <47A8DA9E.1020001@gmail.com> Message-ID: <47A91FBE.5090407@ar.media.kyoto-u.ac.jp> Lorenzo Isella wrote: > Hello, > And thanks everybody for the many replies. > I partially solved the problem adding some extra RAM memory. > A rather primitive solution, but now my desktop does not use any swap memory and the code runs faster. > Unfortunately, the nature of the code does not easily lend itself to being split up into easier tasks. > However, apart from the parallel python homepage, what is your recommendation for a beginner who wants a smattering in parallel computing (I have in mind C and Python at the moment)? > Cheers > Did you try the MKL, as suggested ? Since this only requires recompilation of numpy and scipy, that's the easiest path I could see. cheers, David From peridot.faceted at gmail.com Tue Feb 5 23:09:07 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 5 Feb 2008 23:09:07 -0500 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47A8DA9E.1020001@gmail.com> References: <47A8DA9E.1020001@gmail.com> Message-ID: On 05/02/2008, Lorenzo Isella wrote: > And thanks everybody for the many replies. > I partially solved the problem adding some extra RAM memory. > A rather primitive solution, but now my desktop does not use any swap memory and the code runs faster. > Unfortunately, the nature of the code does not easily lend itself to being split up into easier tasks. 
> However, apart from the parallel python homepage, what is your recommendation for a beginner who wants a smattering in parallel computing (I have in mind C and Python at the moment)? Really the first thing to do is figure out what's actually taking the time in your program. The python profiler has its limitations, but it's still worth using. Even just "print time.time()" can make a difference. If memory is a problem - as it was in your case - and you're swapping to disk, parallelizing your code may make things run slower. (Swapping is, as you probably noticed, *incredibly* slow, so anything that makes you do more of it, like trying to cram more stuff in memory at once, is going to make things much slower.) Even if you're already pretty sure you know which parts are slow, instrumenting it will tell you how much difference the various parallelization tricks you try are making. What kind of parallelizing you should do really depends on what's slow in your program, and on what you can change. At a conceptual idea, some operations parallelize easily and others require much thought. For example, if you're doing something ten times, and each time is independent of the others, that can be easily parallelized (that's what my little script handythread does). If you're doing something more complicated - sorting a list, say - that requires complicated sequencing, parallelizing it is going to be hard. Start by thinking about the time-consuming tasks you identified above. Does each task depend on the result of a previous task? If not, you can run them concurrently, using something like handythread, python's threading module, or parallel python. If they do depend on each other, start looking at each time-consuming task in turn. Could it be parallelized? This can mean one of two things: you could write code to make the task run in parallel, or you could make python use something like a parallelized linear-algebra library that automatically parallelizes (say) matrix multiplication (this is what the people who suggest MKL are suggesting). More generally, could the task be made to run faster in other ways? If you're reading text files, could you read binaries? If you're calling an external program thousands of times, could you use python or call it only once with more input? Parallel programming is a massive, complicated field, and many high-powered software tools exist to take advantage of it. Unfortunately, python has a limitation in this area: the Global Interpreter Lock. Basically it means no two CPUs can be running python code at the same time. This means that you get no speedup at all by parallelizing your python code - with a few important exceptions: while one thread is doing an array operation, other threads can run python code, and while one thread is waiting for I/O (reading from disk, for example), other threads can run python code. Parallel python is a toolkit that can avoid this problem by running multiple python interpreters (though I have little experience with it). Generally, parallelization works best when you don't need to move much data around. The fact that you're running short of memory suggests that you are doing that. Parallelization also always requires some restructuring of your code, and more if you want to be more efficient. Anne From matthieu.brucher at gmail.com Wed Feb 6 01:41:56 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 6 Feb 2008 07:41:56 +0100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) 
In-Reply-To: <47A8E307.2010306@ucsf.edu> References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu> <20080122202148.GA17231@mentat.za.net> <47A8D54B.1010807@ucsf.edu> <47A8E307.2010306@ucsf.edu> Message-ID: 2008/2/5, Karl Young : > > > Matthieu, > > There's a very abstract definition of the usage in: Young K, Schuff N. > Measuring structural complexity in brain images. Neuroimage Vol 39/4 pp > 1721-1730 (2008) (if you can't get a copy let me know I can send you a > pdf). To implement the special case re. the example in that paper I've > got some kludgy code that essentially generates the cooccurence matrix > for contiguous linear blocks of arbitrary length in dimensions up to 4. > But many parts of that code are unnecessarily specific to linear blocks > (as usual I was in too much of a hurry to generate results !). I'd love > to see a more community based effort at producing something more general > (and efficient, robust,...). But if this is something only I'm > interested in I'm happy to just hack something that works and not > clutter up another package with overly general (and kludgy) code. I just > thought there might be some texture analysis types who might be > interested in something like that. Thanks for the response; let me know > if you have any interest, otherwise no worries. > I think I'm not the only one interested in what you have done, even if it is a special case (with some refactoring, it could be extended and besides, I'm in a hurry for results as well :)). I can get the article and will read it today. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Wed Feb 6 04:05:46 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 6 Feb 2008 10:05:46 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: <47A8DA9E.1020001@gmail.com> Message-ID: <20080206090546.GD17588@phare.normalesup.org> On Tue, Feb 05, 2008 at 11:09:07PM -0500, Anne Archibald wrote: > Parallel programming is a massive, [...] Anne, you give, again and again, very informative answers on this topic on the mailing list. I was wondering if you would be willing to compile a wiki page out of these different answers and your seemingly endless knowledge. It could be useful to point people to, or just for people to google up and read. Cheers, Ga?l From ramercer at gmail.com Wed Feb 6 11:51:47 2008 From: ramercer at gmail.com (Adam Mercer) Date: Wed, 6 Feb 2008 11:51:47 -0500 Subject: [SciPy-user] crash from scipy.test() on Intel Mac OS X Leopard with python-2.4.4 In-Reply-To: <47A6B308.6010005@gmail.com> References: <799406d60802032159i4ce7daby3f2838e316f3b1e8@mail.gmail.com> <47A6B308.6010005@gmail.com> Message-ID: <799406d60802060851k7e45bd9bn45da4d846bbb19f@mail.gmail.com> On Feb 4, 2008 1:39 AM, Robert Kern wrote: > It looks like it's crashing in readline rather than anything in scipy. To > determine whether scipy is causing the problem or not, run scipy.test() > non-interactively. E.g. > > $ python -c "import scipy; scipy.test()" That works, just got to find out why readline is causing the crash. 
Cheers Adam From karl.young at ucsf.edu Wed Feb 6 13:01:02 2008 From: karl.young at ucsf.edu (Young, Karl) Date: Wed, 6 Feb 2008 10:01:02 -0800 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine References: <47A8DA9E.1020001@gmail.com> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> > Parallel programming is a massive, complicated field, and many > high-powered software tools exist to take advantage of it. > Unfortunately, python has a limitation in this area: the Global > Interpreter Lock. Basically it means no two CPUs can be running python > code at the same time. This means that you get no speedup at all by > parallelizing your python code - with a few important exceptions: > while one thread is doing an array operation, other threads can run > python code, and while one thread is waiting for I/O (reading from > disk, for example), other threads can run python code. Parallel python > is a toolkit that can avoid this problem by running multiple python > interpreters (though I have little experience with it). Well yes, but on a cluster (distributed memory) you can still take advantage of parallelization using python tools particularly if the parallelization is close to trivial (admittedly it doesn't sound like that is Lorenzo's situation). But I agree with everything Anne says in that parallel programming is a massively complicated area and it's really important to do things like profiling your code first. But I was (and am) fairly ignorant of many of the important details, other than realizing that unfavorable communication to computation ratios could kill you, and was still able to get close to linear speedup (though my problem, while not completely, was close to trivially parallelizable, i.e I just needed to pass a few things around between steps requiring independent chunks requiring long calculations). So I still think it's useful to do a quick and dirty estimate of communication/computation and if that looks favorable explore some "simple" parallel programming tools like pypar. From william.ratcliff at gmail.com Wed Feb 6 13:58:15 2008 From: william.ratcliff at gmail.com (william ratcliff) Date: Wed, 6 Feb 2008 13:58:15 -0500 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> Message-ID: <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> Has anyone played with openmp using ctypes or weave? Cheers, William On Feb 6, 2008 1:01 PM, Young, Karl wrote: > > > Parallel programming is a massive, complicated field, and many > > high-powered software tools exist to take advantage of it. > > Unfortunately, python has a limitation in this area: > the Global > > Interpreter Lock. Basically it means no two CPUs can be running python > > code at the same time. This means that you get no speedup at all by > > parallelizing your python code - with a few important exceptions: > > while one thread is doing an array operation, other threads can run > > python code, and while one thread is waiting for I/O (reading from > > disk, for example), other threads can run python code. Parallel python > > is a toolkit that can avoid this problem by running multiple python > > interpreters (though I have little experience with it). 
> > Well yes, but on a cluster (distributed memory) you can still take > advantage of parallelization using python tools particularly > if the parallelization is close to trivial (admittedly it doesn't sound > like that is Lorenzo's situation). But I agree with everything > Anne says in that parallel programming is a massively complicated area and > it's really important to do things like profiling your code first. > But I was (and am) fairly ignorant of many of the important details, other > than realizing that unfavorable communication to computation > ratios could kill you, and was still able to get close to linear speedup > (though my problem, while not completely, was close to > trivially parallelizable, i.e I just needed to pass a few things around > between steps requiring independent chunks requiring long > calculations). So I still think it's useful to do a quick and dirty > estimate of communication/computation and if that looks favorable > explore some "simple" parallel programming tools like pypar. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Wed Feb 6 14:06:55 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 6 Feb 2008 13:06:55 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> Message-ID: On Feb 6, 2008 12:58 PM, william ratcliff wrote: > Has anyone played with openmp using ctypes or weave? Just FYI I tried some openmp code with gcc 4.2 and found that I couldn't load the module dynamically. Here's a similar report: http://newsgroups.derkeiler.com/Archive/Comp/comp.soft-sys.matlab/2008-01/msg00893.html This was using SWIG, but I think you'd encounter the same problem with ctypes or weave. It's a known bug that should be fixed in a future release. I wouldn't think that ICC or the MS compiler would exhibit this problem. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From lorenzo.isella at gmail.com Wed Feb 6 14:31:12 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Wed, 06 Feb 2008 20:31:12 +0100 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine Message-ID: <47AA0B00.1050500@gmail.com> Hello, Unfortunately I am very close to some deadlines and I had to go for the easiest way of adding some RAM memory. To be honest, I do not see a straightforward way to speed up the code. Furthermore, my knowledge of Python, operating systems and computers in general is on a different league wrt the one of many people on this list. So, I'll list some points I may come back to when I post again (1) profiling with python; I am learning how to do that. I think I am getting somewhere following the online tutorial (http://docs.python.org/lib/profile-instant.html) As to your suggestion, I added print time.time() at the end of my code but I am puzzled. My code starts with these lines #! 
/usr/bin/env python import scipy as s from scipy import stats #I need this module for the linear fit import numpy as n import pylab as p import rpy as r #from rpy import r #import distance_calc as d_calc and the final statement print time.time() leads to: Traceback (most recent call last): File "", line 403, in ? TypeError: 'numpy.ndarray' object is not callable where line 403 is the one with the time statement. Should I not get some time statistics instead? 2) Unless something really odd happens, there are 2 bottlenecks in my code: (a) calculation of a sort of "distance" [not exactly that] between 5000 particles ( an O(5000x5000) operation) That is done by a Fortran 90 compiled code imported as Python module via f2py (b)once I have the distances between my particles, the igraph library (http://cran.r-project.org/src/contrib/Descriptions/igraph.html) to find the connected components. This R library is called via rpy. If (a) and (b) cannot be parallelized, then this is hopeless I think. (3) MKL: is the intel math library at http://www.intel.com/support/performancetools/libraries/mkl/linux/ what I am supposed to install and tune for my multi-cpu machine? If so, is it a complicated business? Many thanks Lorenzo Date: Tue, 5 Feb 2008 23:09:07 -0500 From: "Anne Archibald" Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset=UTF-8 On 05/02/2008, Lorenzo Isella wrote: > > And thanks everybody for the many replies. > > I partially solved the problem adding some extra RAM memory. > > A rather primitive solution, but now my desktop does not use any swap memory and the code runs faster. > > Unfortunately, the nature of the code does not easily lend itself to being split up into easier tasks. > > However, apart from the parallel python homepage, what is your recommendation for a beginner who wants a smattering in parallel computing (I have in mind C and Python at the moment)? > Really the first thing to do is figure out what's actually taking the time in your program. The python profiler has its limitations, but it's still worth using. Even just "print time.time()" can make a difference. If memory is a problem - as it was in your case - and you're swapping to disk, parallelizing your code may make things run slower. (Swapping is, as you probably noticed, *incredibly* slow, so anything that makes you do more of it, like trying to cram more stuff in memory at once, is going to make things much slower.) Even if you're already pretty sure you know which parts are slow, instrumenting it will tell you how much difference the various parallelization tricks you try are making. What kind of parallelizing you should do really depends on what's slow in your program, and on what you can change. At a conceptual idea, some operations parallelize easily and others require much thought. For example, if you're doing something ten times, and each time is independent of the others, that can be easily parallelized (that's what my little script handythread does). If you're doing something more complicated - sorting a list, say - that requires complicated sequencing, parallelizing it is going to be hard. Start by thinking about the time-consuming tasks you identified above. Does each task depend on the result of a previous task? If not, you can run them concurrently, using something like handythread, python's threading module, or parallel python. If they do depend on each other, start looking at each time-consuming task in turn. 
Could it be parallelized? This can mean one of two things: you could write code to make the task run in parallel, or you could make python use something like a parallelized linear-algebra library that automatically parallelizes (say) matrix multiplication (this is what the people who suggest MKL are suggesting). More generally, could the task be made to run faster in other ways? If you're reading text files, could you read binaries? If you're calling an external program thousands of times, could you use python or call it only once with more input? Parallel programming is a massive, complicated field, and many high-powered software tools exist to take advantage of it. Unfortunately, python has a limitation in this area: the Global Interpreter Lock. Basically it means no two CPUs can be running python code at the same time. This means that you get no speedup at all by parallelizing your python code - with a few important exceptions: while one thread is doing an array operation, other threads can run python code, and while one thread is waiting for I/O (reading from disk, for example), other threads can run python code. Parallel python is a toolkit that can avoid this problem by running multiple python interpreters (though I have little experience with it). Generally, parallelization works best when you don't need to move much data around. The fact that you're running short of memory suggests that you are doing that. Parallelization also always requires some restructuring of your code, and more if you want to be more efficient. Anne From f.braennstroem at gmx.de Wed Feb 6 15:54:43 2008 From: f.braennstroem at gmx.de (Fabian Braennstroem) Date: Wed, 06 Feb 2008 20:54:43 +0000 Subject: [SciPy-user] compare two csv files In-Reply-To: <09B4C5D9-E1F0-42F7-9CBB-6831F1772ED4@yale.edu> References: <47990EC3.1070304@astraw.com> <09B4C5D9-E1F0-42F7-9CBB-6831F1772ED4@yale.edu> Message-ID: Hi Zachary, Zachary Pincus schrieb am 01/29/2008 04:10 PM: > Hi Fabian, > > Perhaps you could specify your problem more clearly. Basically, you > want to write a python function that takes two values and calls them > "equal" or not (in a fuzzy manner), and then you want to apply that > function along a column of data? > > This is probably best handled in pure python, until you get a little > more comfortable with the basic language and want to learn numpy/ > scipy. But first things first. > > So -- you need to specify *exactly* what sort of "fuzzy" matches are > acceptable. Then you need to transform this specification into a > python function. Given this, it's easy to compare two lists: > > > list1 = [...whatever...] > list2 = [...whatever...] > > def are_fuzzy_equal(element1, element2): > ...whatever... > > list3 = [] > for element1, element2 in zip(list1, list2): > if are_fuzzy_equal(element1, element2): > list3.append(element1) > > If your question is about how to implement are_fuzzy_equal, you'll > need to (a) specify that clearly, and (b) probably want to ask on a > basic python-language list. Or I'm sure some folks here would help in > a pinch. Sorry for the delay... thanks for your help! It seems, that fuzzy stuff can be done using the 'levenshtein_distance'. Thanks! 
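For reference, a minimal pure-Python sketch of the Levenshtein (edit) distance Fabian mentions, usable as the are_fuzzy_equal building block Zachary outlined earlier in the thread; the threshold of 2 edits below is only an illustrative choice, not anything prescribed by the original posters:

def levenshtein_distance(a, b):
    # classic dynamic-programming edit distance between two strings
    previous = range(len(b) + 1)
    for i in range(1, len(a) + 1):
        current = [i]
        for j in range(1, len(b) + 1):
            cost = int(a[i - 1] != b[j - 1])
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

def are_fuzzy_equal(element1, element2, max_edits=2):
    # max_edits is an arbitrary example tolerance; tune it to your data
    return levenshtein_distance(str(element1), str(element2)) <= max_edits

print are_fuzzy_equal("colour", "color")    # True, distance 1
print are_fuzzy_equal("apple", "orange")    # False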
Fabian From peridot.faceted at gmail.com Wed Feb 6 15:09:13 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 6 Feb 2008 15:09:13 -0500 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47AA0B00.1050500@gmail.com> References: <47AA0B00.1050500@gmail.com> Message-ID: On 06/02/2008, Lorenzo Isella wrote: > Unfortunately I am very close to some deadlines and I had to go for the easiest way of adding some RAM memory. Don't knock it, it worked. All too often we waste countless hours trying to tune the performance of our code when we should spend a few dollars on hardware to make the code we've got run faster. Less effort, fewer bugs, results sooner. > So, I'll list some points I may come back to when I post again > (1) profiling with python; I am learning how to do that. I think I am getting somewhere following the online tutorial (http://docs.python.org/lib/profile-instant.html) > As to your suggestion, I added print time.time() at the end of my code but I am puzzled. Ah. Well, just adding that line won't help. You have to import the time module; then calling time.time() gives you a floating-point number telling you what time it is right now. So a quick and dirty alternative to sophisticated profiling looks like: t = time.time() # do something potentially time-consuming print "Operation took %g seconds" % (time.time()-t) The weird error you get sounds like you have something else named time somewhere in your code. You can get around that by doing from time import time as what_time_is_it_now or whatever name you like that doesn't conflict with a variable name in your code. > 2) Unless something really odd happens, there are 2 bottlenecks in my code: > (a) calculation of a sort of "distance" [not exactly that] between 5000 particles ( an O(5000x5000) operation) > That is done by a Fortran 90 compiled code imported as Python module via f2py It should be possible to accelerate this, depending on how it's calculated. If you're just calculating it in a brute-force way (supplying each pair to a function), then this can definitely be parallelized; for example something like distances = handythread.parallel_map(mydistance, ((M[i],M[i+1:]) for i in xrange(n-1))) where M is your list of points, and mydistance takes a single point and an array of points and returns an array of distances between the first point and the rest. You'll get back a "triangular" list of arrays containing all the pairwise distances, and it'll get run on two (or however many you ask for) processors. It may require you to modify the calling interface of your F90 code. If the result is sparse, that is, almost all zeros (or infinities), you should think about also making the Fortran code return a sparse representation. Reducing memory use can drastically accelerate code on modern processors (which are much much faster than RAM can keep up with). > (b)once I have the distances between my particles, the igraph library (http://cran.r-project.org/src/contrib/Descriptions/igraph.html) to find the connected components. > This R library is called via rpy. It's quite possible that rpy is slow. I don't know anything about it, never used either it or R; I would look for code implemented in python or C or Fortran. In fact, it looks like igraph has a python binding. I'd try this, in case going through rpy is slowing you down. Parallelizing igraph would involve rewriting the important algorithms in a parallel fashion. This would be a challenge comparable to writing igraph in the first place.
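As a rough illustration of the row-chunked parallelism described above, here is a sketch that uses only the standard threading module rather than handythread (Anne's own helper script); mydistance below is a placeholder for the real per-row routine, and any speedup assumes the heavy lifting happens in compiled code (large numpy array operations, or a thread-safe f2py routine) that releases the GIL:

import threading
import numpy

def mydistance(p, others):
    # placeholder: Euclidean distances from point p to every row of others
    return numpy.sqrt(((others - p) ** 2).sum(axis=1))

def parallel_pairwise(M, nthreads=2):
    n = len(M)
    results = [None] * (n - 1)
    def worker(start):
        # each thread takes every nthreads-th row, which balances the work
        for i in xrange(start, n - 1, nthreads):
            results[i] = mydistance(M[i], M[i + 1:])
    threads = [threading.Thread(target=worker, args=(k,)) for k in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # "triangular" result: results[i] holds distances from M[i] to M[i+1:]
    return results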
> If (a) and (b) cannot be parallelized, then this is hopeless I think. If the slow step is producing the distances - and it sounds like it might be - you will probably get a speedup by close to a factor of two (or however many processors you have) by rearranging your code so that pairwise distances can be computed in parallel. > (3) MKL: is the intel math library at > http://www.intel.com/support/performancetools/libraries/mkl/linux/ > what I am supposed to install and tune for my multi-cpu machine? > If so, is it a complicated business? That would be it. I've never done it, but I imagine Intel has gone to some lengths to make it convenient. This will only help with operations like matrix multiplication and inversion, none of which, by the sound of it, are performance-critical. Find out what's slow before going to the trouble. Good luck, Anne From reckoner at gmail.com Wed Feb 6 17:43:09 2008 From: reckoner at gmail.com (Reckoner) Date: Wed, 6 Feb 2008 14:43:09 -0800 Subject: [SciPy-user] legend for bar charts? Message-ID: is it possible to use matplotlib's legend() for a bar chart? I am plotting a number of bars with different colors on the same axes and I would like to label each color. legend () seems to want to label every single bar on my bar chart. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Feb 6 18:28:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Feb 2008 17:28:14 -0600 Subject: [SciPy-user] legend for bar charts? In-Reply-To: References: Message-ID: <47AA428E.1060406@gmail.com> Reckoner wrote: > is it possible to use matplotlib's legend() for a bar chart? I am > plotting a number of bars with different colors on the same axes and I > would like to label each color. You will want to ask this question on the matplotlib mailing list. https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dineshbvadhia at hotmail.com Thu Feb 7 04:26:26 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Thu, 7 Feb 2008 01:26:26 -0800 Subject: [SciPy-user] MemoryError transforming COO matrix to a CSR matrix Message-ID: Hello! This is a resend. I get a MemoryError when transforming a coo_matrix to a csr_matrix. The coo_matrix is loaded with about 32m integers (in fact, just binary 1's) which at 4 bytes per int works out to about 122Mb for the matirix. As I have 2Gb of RAM on my Windows machine this should be ample for transforming A to a csr_matrix. Here is the error message followed by the code: Traceback (most recent call last): File "... \... 
.py", line 310, in A = sparse.csr_matrix(A) File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 1162, in __init__ temp = s.tocsr() File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 2175, in tocsr return csr_matrix((data, colind, indptr), self.shape, check=False) File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 1197, in __init__ self.data = asarray(s, dtype=self.dtype) File "C:\Python25\lib\site-packages\numpy\core\numeric.py", line 132, in asarray return array(a, dtype, copy=False, order=order) MemoryError # imports import numpy import scipy from scipy import sparse # constants nnz = 31398038 I = 20000 J = 80000 dataFile = aFilename # Initialize A as a coo_matrix with dimensions(I, J) > A = sparse.coo_matrix(None, dims=(I, J), dtype=int) # Populate matrix A by first loading data into a coo_matrix using the coo_matrix(V, (I,J)), dims) method > ij = numpy.array(numpy.empty((nnz, 2), dtype=int)) > f = open(dataFile, 'rb') > ij = pickle.load(f) > row = ij[:,0] > column = ij[:,1] > data = scipy.ones(ij.shape[0], dtype=int) # Load data into A, convert A to csr_matrix > A = sparse.coo_matrix((data, (row, column)), dims=(I,J)) > A = sparse.csr_matrix(A) # the MemoryError occurs here Any ideas? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmarais at sun.ac.za Thu Feb 7 10:39:13 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Thu, 7 Feb 2008 15:39:13 +0000 (UTC) Subject: [SciPy-user] Construct sparse matrix from sparse blocks Message-ID: Hi, I have several sparse blocks defined separately. E.eg. A_aa, A_ab, A_ba, A_bb, A_cc, I want to construct a new sparse matrix like this: A = [A_aa A_ab 0 ] [A_ba A_bb 0 ] [0 0 A_cc] Is there currently an easy way to do this, or will I have to roll some of my own? If the latter, any suggestions of what to look at? Thanks Neilen From dwf at cs.toronto.edu Thu Feb 7 10:45:32 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 7 Feb 2008 10:45:32 -0500 Subject: [SciPy-user] Construct sparse matrix from sparse blocks In-Reply-To: References: Message-ID: On 7-Feb-08, at 10:39 AM, Neilen Marais wrote: > I have several sparse blocks defined separately. E.eg. > > A_aa, A_ab, A_ba, A_bb, A_cc, I want to construct a new sparse matrix > like this: > > A = [A_aa A_ab 0 ] > [A_ba A_bb 0 ] > [0 0 A_cc] What scipy.sparse type are you using to store them, or if you haven't written that part yet, how are these matrices represented? If they're stored as a vector of row indices, a vector of column indices and a vector of values (as in the scipy.sparse.coo_matrix ) then constructing it should be as straightforward as doing a few array concatenations (or copies). This format can then be efficiently converted to CSR or CSC with the tocsr() or tocsc() methods, which is the format you want it in if you're going to be doing any multiplies, etc. David From wnbell at gmail.com Thu Feb 7 11:51:53 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 7 Feb 2008 10:51:53 -0600 Subject: [SciPy-user] MemoryError transforming COO matrix to a CSR matrix In-Reply-To: References: Message-ID: On Feb 7, 2008 3:26 AM, Dinesh B Vadhia wrote: > > > I get a MemoryError when transforming a coo_matrix to a csr_matrix. The > coo_matrix is loaded with about 32m integers (in fact, just binary 1's) > which at 4 bytes per int works out to about 122Mb for the matirix. As I > have 2Gb of RAM on my Windows machine this should be ample for transforming > A to a csr_matrix. 
Here is the error message followed by the code: > Actually, it's (slightly more than) 32m*( 4 + 8) = 384Mb because SciPy is upcasting your ints to doubles. The dev version supports smaller dtypes, which would lower it to (slightly more than) 32m*( 4 + 1 ) = 160Mb. Your COO matrix takes 32m*(4 + 4 + 8) = 512Mb The ij array takes 32m*2*(4) = 256Mb (the COO matrix can't use row = ij[:,0] and column = ij[:,1] directly, because those arrays are not contiguous) Do this instead: # imports import numpy import scipy import pickle from scipy import sparse # constants nnz = 31398038 I = 20000 J = 80000 dataFile = aFilename # Initialize A as a coo_matrix with dimensions(I, J) # this does nothing A = sparse.coo_matrix(None, dims=(I, J), dtype=int) # Populate matrix A by first loading data into a coo_matrix using the coo_matrix(V, (I,J)), dims) method # this does nothing ij = numpy.array(numpy.empty((nnz, 2), dtype=int)) > f = open(dataFile, 'rb') > ij = pickle.load(f) > row = numpy.ascontiguousarray(ij[:,0],dtype='intc') > column = numpy.ascontiguousarray(ij[:,1],dtype='intc') > del ij > data = scipy.ones(nnz, dtype='float32') # Load data into A, convert A to csr_matrix > A = sparse.csr_matrix((data, (row, column)), dims=(I,J)) # implicit COO->CSR conversion If this doesn't work then you either need to make ij[:,0] and ij[:,1] contiguous or use a development version of SciPy which supports smaller data types like 'int8'. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ed at lamedomain.net Thu Feb 7 12:35:25 2008 From: ed at lamedomain.net (Ed Rahn) Date: Thu, 7 Feb 2008 09:35:25 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <478BE61D.9090309@ucsf.edu> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> Message-ID: <20080207093525.e28fea23.ed@lamedomain.net> The author of Openbayes does not mind integrating it into scipy, the discussion can be found in the attached email. From this repo http://svn.berlios.de/svnroot/repos/pybayes/branches/Public I have converted it from numarray to numpy, the patch can be found at: http://lamedomain.net/openbayes/numpy.diff - Ed On Mon, 14 Jan 2008 14:45:49 -0800 Karl Young wrote: > > I'm starting to play with Bayes nets in a way that will require a little > more than just using some of the black box packages around (e.g. I'd > like to play around with using various regression models at the nodes) > and would love to do my exploring in the context of SciPy but I didn't > see any such packages currently available. I did find a python package > called OpenBayes (http://www.openbayes.org/) that after a very cursory > examination looked pretty nice but apparently is no longer being > developed. Does anyone know if there has ever been any discussion with > the author of that package re. incorporating it into SciPy ? > > -- > > Karl Young > Center for Imaging of Neurodegenerative Diseases, UCSF > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > 4150 Clement Street FAX: (415) 668-2864 > San Francisco, CA 94121 Email: karl young at ucsf edu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An embedded message was scrubbed...
From: Kosta Gaitanis Subject: Re: openbayes Date: Wed, 06 Feb 2008 17:45:27 +0100 Size: 5015 URL: From karl.young at ucsf.edu Thu Feb 7 13:26:05 2008 From: karl.young at ucsf.edu (Young, Karl) Date: Thu, 7 Feb 2008 10:26:05 -0800 Subject: [SciPy-user] Bayes net question References: <20080114162237.74c586f2@jakubik.ta3.sk><478BE61D.9090309@ucsf.edu> <20080207093525.e28fea23.ed@lamedomain.net> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78039F0A6E@EXVS06.net.ucsf.edu> Sounds good, thanks for the update. Given the conversations subsequent to this post re. establishing that Kevin Murphy has given the go ahead re. porting and relicensing his Bayes Net matlab toolbox for integration into scipy, it sounds like one of the first orders of business for those who signed up for this project might be to try to establish whether the Openbayes project, or minor modifications/extensions, could provide a useful framework for the port. I'll continue to look at both but my understanding at this point is that the matlab toolbox currently has significantly more functionality than the Openbayes project. Looking at Kevin Murphy's description of his toolbox it seems he took a broader approach than some Bayes net packages re. general approaches to estimation in graphical models (which was attractive to me) and that might entail a different approach to the structure of the package than the author of Openbayes took (but this is just speculation at this point - I'll shut up). Karl Young Center for Imaging of Neurodegenerative Disease, UCSF VA Medical Center, MRS Unit (114M) Phone: (415) 221-4810 x3114 FAX: (415) 668-2864 Email: karl young at ucsf edu -----Original Message----- From: scipy-user-bounces at scipy.org on behalf of Ed Rahn Sent: Thu 2/7/2008 9:35 AM To: scipy-user at scipy.org Subject: Re: [SciPy-user] Bayes net question The author of Openbayes does not mind integrating it into scipy, the discussion can be found in the attached email. >From this repo http://svn.berlios.de/svnroot/repos/pybayes/branches/Public I have converted it from numarray to numpy, the patch can be found at: http://lamedomain.net/openbayes/numpy.diff - Ed On Mon, 14 Jan 2008 14:45:49 -0800 Karl Young wrote: > > I'm starting to play with Bayes nets in a way that will require a little > more than just using some of the black box packages around (e.g. I'd > like to play around with using various regression models at the nodes) > and would love to do my exploring in the context of SciPy but I didn't > see any such packages currently available. I did find a python package > called OpenBayes (http://www.openbayes.org/) that after a very cursory > examination looked pretty nice but apparently is no longer being > developed. Does anyone know if there has ever been any discussion with > the author of that package re. incorporating it into SciPy ? 
> > -- > > Karl Young > Center for Imaging of Neurodegenerative Diseases, UCSF > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > 4150 Clement Street FAX: (415) 668-2864 > San Francisco, CA 94121 Email: karl young at ucsf edu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ed at lamedomain.net Thu Feb 7 14:29:51 2008 From: ed at lamedomain.net (Ed Rahn) Date: Thu, 7 Feb 2008 11:29:51 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <9D202D4E86A4BF47BA6943ABDF21BE78039F0A6E@EXVS06.net.ucsf.edu> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <20080207093525.e28fea23.ed@lamedomain.net> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A6E@EXVS06.net.ucsf.edu> Message-ID: <20080207112951.85af688d.ed@lamedomain.net> OpenBayes was an earlier attempt to port BNT to python. It uses similar network representation. - Ed On Thu, 7 Feb 2008 10:26:05 -0800 "Young, Karl" wrote: > > Sounds good, thanks for the update. Given the conversations subsequent to this post re. establishing that Kevin Murphy has given the go ahead re. porting and relicensing his Bayes Net matlab toolbox for integration into scipy, it sounds like one of the first orders of business for those who signed up for this project might be to try to establish whether the Openbayes project, or minor modifications/extensions, could provide a useful framework for the port. I'll continue to look at both but my understanding at this point is that the matlab toolbox currently has significantly more functionality than the Openbayes project. Looking at Kevin Murphy's description of his toolbox it seems he took a broader approach than some Bayes net packages re. general approaches to estimation in graphical models (which was attractive to me) and that might entail a different approach to the structure of the package than the author of Openbayes took (but this is just speculation at this point - I 'l > l shut up). > > Karl Young > Center for Imaging of Neurodegenerative Disease, UCSF > VA Medical Center, MRS Unit (114M) > Phone: (415) 221-4810 x3114 > FAX: (415) 668-2864 > Email: karl young at ucsf edu > > > > -----Original Message----- > From: scipy-user-bounces at scipy.org on behalf of Ed Rahn > Sent: Thu 2/7/2008 9:35 AM > To: scipy-user at scipy.org > Subject: Re: [SciPy-user] Bayes net question > > The author of Openbayes does not mind integrating it into scipy, the > discussion can be found in the attached email. > > >From this repo > http://svn.berlios.de/svnroot/repos/pybayes/branches/Public > I have converted it from numarray to numpy, the patch can be found at: > http://lamedomain.net/openbayes/numpy.diff > > - Ed > > On Mon, 14 Jan 2008 14:45:49 -0800 > Karl Young wrote: > > > > > I'm starting to play with Bayes nets in a way that will require a little > > more than just using some of the black box packages around (e.g. I'd > > like to play around with using various regression models at the nodes) > > and would love to do my exploring in the context of SciPy but I didn't > > see any such packages currently available. I did find a python package > > called OpenBayes (http://www.openbayes.org/) that after a very cursory > > examination looked pretty nice but apparently is no longer being > > developed. Does anyone know if there has ever been any discussion with > > the author of that package re. incorporating it into SciPy ? 
> > > > -- > > > > Karl Young > > Center for Imaging of Neurodegenerative Diseases, UCSF > > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > > 4150 Clement Street FAX: (415) 668-2864 > > San Francisco, CA 94121 Email: karl young at ucsf edu > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From Karl.Young at ucsf.edu Thu Feb 7 15:19:27 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Thu, 07 Feb 2008 12:19:27 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <20080207112951.85af688d.ed@lamedomain.net> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <20080207093525.e28fea23.ed@lamedomain.net> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A6E@EXVS06.net.ucsf.edu> <20080207112951.85af688d.ed@lamedomain.net> Message-ID: <47AB67CF.2020207@ucsf.edu> Great, sounds to me like the first question we need to answer then (should be easy now) is whether the approach should be to just start porting BNT functionality into the OpenBayes framework, or to first review the OpenBayes framework in some detail re. whether anyone thinks we should do any refactoring. >OpenBayes was an earlier attempt to port BNT to python. It uses similar >network representation. > >- Ed > >On Thu, 7 Feb 2008 10:26:05 -0800 >"Young, Karl" wrote: > > > >>Sounds good, thanks for the update. Given the conversations subsequent to this post re. establishing that Kevin Murphy has given the go ahead re. porting and relicensing his Bayes Net matlab toolbox for integration into scipy, it sounds like one of the first orders of business for those who signed up for this project might be to try to establish whether the Openbayes project, or minor modifications/extensions, could provide a useful framework for the port. I'll continue to look at both but my understanding at this point is that the matlab toolbox currently has significantly more functionality than the Openbayes project. Looking at Kevin Murphy's description of his toolbox it seems he took a broader approach than some Bayes net packages re. general approaches to estimation in graphical models (which was attractive to me) and that might entail a different approach to the structure of the package than the author of Openbayes took (but this is just speculation at this point - I >> >> > > 'l > > >> l shut up). >> >>Karl Young >>Center for Imaging of Neurodegenerative Disease, UCSF >>VA Medical Center, MRS Unit (114M) >>Phone: (415) 221-4810 x3114 >>FAX: (415) 668-2864 >>Email: karl young at ucsf edu >> >> >> >>-----Original Message----- >>From: scipy-user-bounces at scipy.org on behalf of Ed Rahn >>Sent: Thu 2/7/2008 9:35 AM >>To: scipy-user at scipy.org >>Subject: Re: [SciPy-user] Bayes net question >> >>The author of Openbayes does not mind integrating it into scipy, the >>discussion can be found in the attached email. >> >>>From this repo >>http://svn.berlios.de/svnroot/repos/pybayes/branches/Public >>I have converted it from numarray to numpy, the patch can be found at: >>http://lamedomain.net/openbayes/numpy.diff >> >>- Ed >> >>On Mon, 14 Jan 2008 14:45:49 -0800 >>Karl Young wrote: >> >> >> >>>I'm starting to play with Bayes nets in a way that will require a little >>>more than just using some of the black box packages around (e.g. 
I'd >>>like to play around with using various regression models at the nodes) >>>and would love to do my exploring in the context of SciPy but I didn't >>>see any such packages currently available. I did find a python package >>>called OpenBayes (http://www.openbayes.org/) that after a very cursory >>>examination looked pretty nice but apparently is no longer being >>>developed. Does anyone know if there has ever been any discussion with >>>the author of that package re. incorporating it into SciPy ? >>> >>>-- >>> >>>Karl Young >>>Center for Imaging of Neurodegenerative Diseases, UCSF >>>VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab >>>4150 Clement Street FAX: (415) 668-2864 >>>San Francisco, CA 94121 Email: karl young at ucsf edu >>> >>>_______________________________________________ >>>SciPy-user mailing list >>>SciPy-user at scipy.org >>>http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From hoytak at gmail.com Thu Feb 7 15:57:18 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Thu, 7 Feb 2008 12:57:18 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <20080207112951.85af688d.ed@lamedomain.net> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <20080207093525.e28fea23.ed@lamedomain.net> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A6E@EXVS06.net.ucsf.edu> <20080207112951.85af688d.ed@lamedomain.net> Message-ID: <4db580fd0802071257l7cbb9bep7c0f776946e7ee56@mail.gmail.com> I'm too busy currently to help much with porting BNT, but I'm a grad student of Kevin Murphy (and an avid scipy user), so if people porting the toolbox have specific questions or issues for me to ask Kevin about, they can email me and I can discuss them with him. I'll probably be following the development closely, but I don't have time to get involved with the coding. --Hoyt On Feb 7, 2008 11:29 AM, Ed Rahn wrote: > OpenBayes was an earlier attempt to port BNT to python. It uses similar > network representation. > > - Ed > > > On Thu, 7 Feb 2008 10:26:05 -0800 > "Young, Karl" wrote: > > > > > Sounds good, thanks for the update. Given the conversations subsequent to this post re. establishing that Kevin Murphy has given the go ahead re. porting and relicensing his Bayes Net matlab toolbox for integration into scipy, it sounds like one of the first orders of business for those who signed up for this project might be to try to establish whether the Openbayes project, or minor modifications/extensions, could provide a useful framework for the port. I'll continue to look at both but my understanding at this point is that the matlab toolbox currently has significantly more functionality than the Openbayes project. Looking at Kevin Murphy's description of his toolbox it seems he took a broader approach than some Bayes net packages re. 
general approaches to estimation in graphical models (which was attractive to me) and that might entail a different approach to the structure of the package than the author of Openbayes took (but this is just speculation at this point - I > 'l > > l shut up). > > > > Karl Young > > Center for Imaging of Neurodegenerative Disease, UCSF > > VA Medical Center, MRS Unit (114M) > > Phone: (415) 221-4810 x3114 > > FAX: (415) 668-2864 > > Email: karl young at ucsf edu > > > > > > > > -----Original Message----- > > From: scipy-user-bounces at scipy.org on behalf of Ed Rahn > > Sent: Thu 2/7/2008 9:35 AM > > To: scipy-user at scipy.org > > Subject: Re: [SciPy-user] Bayes net question > > > > The author of Openbayes does not mind integrating it into scipy, the > > discussion can be found in the attached email. > > > > >From this repo > > http://svn.berlios.de/svnroot/repos/pybayes/branches/Public > > I have converted it from numarray to numpy, the patch can be found at: > > http://lamedomain.net/openbayes/numpy.diff > > > > - Ed > > > > On Mon, 14 Jan 2008 14:45:49 -0800 > > Karl Young wrote: > > > > > > > > I'm starting to play with Bayes nets in a way that will require a little > > > more than just using some of the black box packages around (e.g. I'd > > > like to play around with using various regression models at the nodes) > > > and would love to do my exploring in the context of SciPy but I didn't > > > see any such packages currently available. I did find a python package > > > called OpenBayes (http://www.openbayes.org/) that after a very cursory > > > examination looked pretty nice but apparently is no longer being > > > developed. Does anyone know if there has ever been any discussion with > > > the author of that package re. incorporating it into SciPy ? > > > > > > -- > > > > > > Karl Young > > > Center for Imaging of Neurodegenerative Diseases, UCSF > > > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > > > 4150 Clement Street FAX: (415) 668-2864 > > > San Francisco, CA 94121 Email: karl young at ucsf edu > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From dineshbvadhia at hotmail.com Thu Feb 7 17:49:55 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Thu, 7 Feb 2008 14:49:55 -0800 Subject: [SciPy-user] Creating coo_matrix from data in text file Message-ID: Thank-you Nathan but I was looking for a method that didn't use the interim arrays: row = IJV[:,0] col = IJV[:,1] data = IJV[:,2] because our datasets are very large and using these interim arrays causes out of memory errors. We are looking for a method to populate a coo_matrix (or csr_matrix) directly from a file (containing the i,j, v items). We can then save/load the csr_matrix using Andrew Straw's fast code. Hope this makes sense! 
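One possible compromise, sketched below purely for illustration: stream the text file line by line into preallocated index arrays, so the pickled ij array and a temporary (nnz, 3) float array never have to exist. The file name, shape and nnz are placeholders taken from the earlier messages, the file is assumed to hold whitespace-separated "i j v" triples with 0-based indices, and the float32 dtype follows Nathan's note that smaller dtypes need a recent (dev) SciPy; drop the dtype argument on a released version:

import numpy
from scipy import sparse

nnz = 31398038               # number of nonzeros, assumed known in advance
I, J = 20000, 80000
row = numpy.empty(nnz, dtype='intc')
col = numpy.empty(nnz, dtype='intc')
data = numpy.ones(nnz, dtype='float32')   # all values are 1 in this example

pos = 0
f = open('matrix.txt')       # placeholder file of "i j v" lines
for line in f:
    fields = line.split()
    row[pos] = int(fields[0])
    col[pos] = int(fields[1])
    # if the values matter, also do: data[pos] = float(fields[2])
    pos += 1
f.close()

A = sparse.csr_matrix((data, (row, col)), dims=(I, J))   # COO->CSR happens internally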
Dinesh ------------------------------ Date: Tue, 5 Feb 2008 18:07:51 -0600 From: "Nathan Bell" Subject: Re: [SciPy-user] Creating coo_matrix from data in text file To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset=ISO-8859-1 On Feb 5, 2008 5:08 PM, Dinesh B Vadhia wrote: > The sparse coo_matrix method performs really well but our data sets are very > large and the working arrays (ie. ij, row, column and data) take up > significant memory. The judicious use of helps > but not that much. > > Is there a fast method available similar to coo_matrix to create a sparse > matrix from a text file instead of through a set of interim working arrays? > The file would contain the coordinates (i, j) and the value of each item. > Once the sparse matrix has been created we can then save/load it at will > (using Andrew Straw's fast load/save code). Suppose you have a file named matrix.txt with the following contents: $ cat matrix.txt 0 1 10 0 2 20 5 3 -5 6 4 14 now run this script: from numpy import fromfile from scipy.sparse import coo_matrix IJV = fromfile("matrix.txt",sep=" ").reshape(-1,3) row = IJV[:,0] col = IJV[:,1] data = IJV[:,2] A = coo_matrix( (data,(row,col)) ) print repr(A) print A.todense() You should see: <7x5 sparse matrix of type '' with 4 stored elements in COOrdinate format> [[ 0. 10. 20. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. -5. 0.] [ 0. 0. 0. 0. 14.]] This should be very fast. The only thing that would be faster is the recent scipy.io MATLAB file support which stores data in binary format (or storing your own binary format I suppose) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Feb 8 01:39:48 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 08 Feb 2008 15:39:48 +0900 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> Message-ID: <47ABF934.70003@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > On Feb 6, 2008 12:58 PM, william ratcliff wrote: > >> Has anyone played with openmp using ctypes or weave? >> > > Just FYI I tried some openmp code with gcc 4.2 and found that I > couldn't load the module dynamically. Here's a similar report: > http://newsgroups.derkeiler.com/Archive/Comp/comp.soft-sys.matlab/2008-01/msg00893.html > > This was using SWIG, but I think you'd encounter the same problem with > ctypes or weave. It's a known bug that should be fixed in a future > release. > FWIW, I could dynamically load a trivial openmp shared library through the dlopen machinery, which is what ctypes uses (to be exact, all python extensions which are not static use it at some point). I tried with centos 5.0 gcc, and then with one directly built from sources (4.2.1), both with success. Which compiler (version of gcc) are you using ? Do you have the code which fails ? 
cheers, David From wnbell at gmail.com Fri Feb 8 02:41:01 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 8 Feb 2008 01:41:01 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47ABF934.70003@ar.media.kyoto-u.ac.jp> References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> <47ABF934.70003@ar.media.kyoto-u.ac.jp> Message-ID: On Feb 8, 2008 12:39 AM, David Cournapeau wrote: > FWIW, I could dynamically load a trivial openmp shared library through > the dlopen machinery, which is what ctypes uses (to be exact, all python > extensions which are not static use it at some point). I tried with > centos 5.0 gcc, and then with one directly built from sources (4.2.1), > both with success. > > Which compiler (version of gcc) are you using ? Do you have the code > which fails ? That's interesting. I'd like to parallelize scipy.sparse.sparsetools, so that was my experiment. Here's what I did: (1) Added a single OpenMP pragma to one of the sparsetools functions void csr_diagonal(const I n_row, const I N = std::min(n_row, n_col); #pragma omp parallel for for(I i = 0; i < N; i++){ I row_start = Ap[i]; (2) Compile with g++-4.2 $ g++-4.2 --version g++-4.2 (GCC) 4.2.1 (Ubuntu 4.2.1-5ubuntu4) g++-4.2 -Wall -ansi -c -O2 -fPIC -fopenmp sparsetools_wrap.cxx -I /usr/include/python2.5/ -I /usr/lib/python2.5/site-packages/numpy/core/include/ g++-4.2 -shared sparsetools_wrap.o -o _sparsetools.so -lgomp (3) Import into Python $ python Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32) [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import _sparsetools Traceback (most recent call last): File "", line 1, in ImportError: libgomp.so.1: shared object cannot be dlopen()ed The .so seems OK $ ldd ./_sparsetools.so linux-gate.so.1 => (0xffffe000) libgomp.so.1 => /usr/lib/libgomp.so.1 (0xb7d06000) libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7c13000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7bed000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7be2000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7a98000) librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7a8f000) libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7a77000) /lib/ld-linux.so.2 (0x80000000) Here's a thread about the issue: http://gcc.gnu.org/ml/gcc-help/2007-04/msg00300.html Here's the GCC bug report: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28482 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From david at ar.media.kyoto-u.ac.jp Fri Feb 8 03:13:43 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 08 Feb 2008 17:13:43 +0900 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> <47ABF934.70003@ar.media.kyoto-u.ac.jp> Message-ID: <47AC0F37.4050908@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > On Feb 8, 2008 12:39 AM, David Cournapeau wrote: > >> FWIW, I could dynamically load a trivial openmp shared library through >> the dlopen machinery, which is what ctypes uses (to be exact, all python >> extensions which are not static use it at some point). I tried with >> centos 5.0 gcc, and then with one directly built from sources (4.2.1), >> both with success. 
>> >> Which compiler (version of gcc) are you using ? Do you have the code >> which fails ? > > That's interesting. I'd like to parallelize scipy.sparse.sparsetools, > so that was my experiment. > > Here's what I did: > > (1) Added a single OpenMP pragma to one of the sparsetools functions > > void csr_diagonal(const I n_row, > > const I N = std::min(n_row, n_col); > > #pragma omp parallel for > for(I i = 0; i < N; i++){ > I row_start = Ap[i]; > > (2) Compile with g++-4.2 > > $ g++-4.2 --version > g++-4.2 (GCC) 4.2.1 (Ubuntu 4.2.1-5ubuntu4) > > g++-4.2 -Wall -ansi -c -O2 -fPIC -fopenmp sparsetools_wrap.cxx -I > /usr/include/python2.5/ -I > /usr/lib/python2.5/site-packages/numpy/core/include/ > g++-4.2 -shared sparsetools_wrap.o -o _sparsetools.so -lgomp > > (3) Import into Python > > $ python > Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32) > [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. Here lies your problem I think: the default compiler on ubuntu, the one all softwares are compiled with, is 4.1, which does not have openmp (I don't think it is backported by ubuntu, contrary to say fedora). So it cannot find open mp library implementation. I don't know an easy solution: one thing would be to see if just saying where to find libgomp.so is enough (e.g. with LD_LIBRARY_PATH and co). But I would not trust the thing too much: the much safer alternative would be to recompile python with gcc 4.2. But I checked again: dlopening a library with open mp does work: here is an archive with a trivial program using a lib dlopened, works on ubuntu with gcc 4.2: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/dynamic_openmp.tar.bz2 cheers, David From wnbell at gmail.com Fri Feb 8 04:45:29 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 8 Feb 2008 03:45:29 -0600 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: <47AC0F37.4050908@ar.media.kyoto-u.ac.jp> References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> <47ABF934.70003@ar.media.kyoto-u.ac.jp> <47AC0F37.4050908@ar.media.kyoto-u.ac.jp> Message-ID: On Feb 8, 2008 2:13 AM, David Cournapeau wrote: > But I checked again: dlopening a library with open mp does work: here is > an archive with a trivial program using a lib dlopened, works on ubuntu > with gcc 4.2: > > http://www.ar.media.kyoto-u.ac.jp/members/david/archives/dynamic_openmp.tar.bz2 Are you saying that you can compile this .so with Ubuntu's g++-4.2 and use it on the same system? Or are you compiling it elsewhere and running on Ubuntu? I get the same error as before: $ make gcc-4.2 -W -Wall -c -o taylor.o taylor.c gcc-4.2 -c -fPIC -fopenmp -W -Wall -o compute.o compute.c gcc-4.2 -shared compute.o -o libcompute.so -Wl,-soname,libcompute.so -lgomp gcc-4.2 -o taylor taylor.o compute.o -lgomp -L. -Wl,-rpath,. -ldl $ python Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32) [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import libcompute Traceback (most recent call last): File "", line 1, in ImportError: libgomp.so.1: shared object cannot be dlopen()ed -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From nmarais at sun.ac.za Fri Feb 8 09:15:14 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 8 Feb 2008 14:15:14 +0000 (UTC) Subject: [SciPy-user] Construct sparse matrix from sparse blocks References: Message-ID: David, On Thu, 07 Feb 2008 10:45:32 -0500, David Warde-Farley wrote: > If they're stored as a vector of row indices, a vector of column indices > and a vector of values (as in the scipy.sparse.coo_matrix ) then > constructing it should be as straightforward as doing a few array > concatenations (or copies). This format can then be efficiently > converted to CSR or CSC with the tocsr() or tocsc() methods, which is > the format you want it in if you're going to be doing any multiplies, > etc. Thanks for the suggestion. I ended up writing the function at the end of the message. If anyone else finds it useful I think it may be a good idea to put it somewhere in scipy.sparse? Thanks Neilen def merge_sparse_blocks(block_mats, format='coo', dtype=N.float64): """ Merge several sparse matrix blocks into a single sparse matrix Input Params ============ block_mats -- sequence of block matrix offsets and block matrices such that block_mats[i] == ((row_offset, col_offset), block_matrix) format -- Desired sparse format of output matrix dtype -- Desired dtype, defaults to N.float64 Output ====== Global matrix containing the input blocks at the desired block locations. If csr or csc matrices are requested it is ensured that their indices are sorted. Example ======= The 5x5 matrix A containing a 3x3 upper diagonal block, A_aa and 2x2 lower diagonal block A_bb: A = [A_aa 0 ] [0 A_bb] A = merge_sparse_blocks( ( ((0,0), A_aa), ((3,3), A_bb)) ) """ nnz = sum(m.nnz for o,m in block_mats) data = N.empty(nnz, dtype=dtype) row = N.empty(nnz, dtype=N.intc) col = N.empty(nnz, dtype=N.intc) nnz_o = 0 for (row_o, col_o), bm in block_mats: bm = bm.tocoo() data[nnz_o:nnz_o+bm.nnz] = bm.data row[nnz_o:nnz_o+bm.nnz] = bm.row+row_o col[nnz_o:nnz_o+bm.nnz] = bm.col+col_o nnz_o += bm.nnz merged_mat = sparse.coo_matrix((data, (row, col)), dtype=dtype) if format != 'coo': merged_mat = getattr(merged_mat, 'to'+format)() if format == 'csc' or format == 'csr': if not merged_mat.has_sorted_indices: merged_mat.sort_indices () return merged_mat > > David From jim at well.com Fri Feb 8 11:38:05 2008 From: jim at well.com (jim stockford) Date: Fri, 8 Feb 2008 08:38:05 -0800 Subject: [SciPy-user] bayPIGgies meets Thursday, 2/21: Guido van Rossum on Python 3.0 Message-ID: <79bd29d366173130d05e3a7a6e6aaaae@well.com> * SPECIAL NOTE: because Valentine's Day is on the second * Thursday of February (2/14) bayPIGgies has moved our * meeting to the third Thursday of the month, 2/21. bayPIGgies meeting Thursday 2/21: Guido van Rossum on Python 3.0 by Guido van Rossum Guido previews his keynote about Python 3000 at PyCon next month. Hear all about what Python 3000 means for your code, what tools will be available to help you in the transition, and how to be prepared for the next millennium. Location: Google Campus in Mountain View, CA Building 40, the Kiev room (first floor) bayPIGgies meeting information: http://baypiggies.net/new/plone * Please sign up in advance to have your google access badge ready: http://wiki.python.org/moin/BayPiggiesGoogleMeetings (no later than close of business on Wednesday.) 
Agenda----------------------------- ..... 7:30 PM ........................... General hubbub, inventory end-of-meeting announcements, any first-minute announcements. ..... 7:35 PM to 8:45 PM ................ The Talk (may extend a bit late) ..... 8:45 PM to 9:00 PM or After The Talk ................ Mapping and Random Access Mapping is a rapid-fire audience announcement of topics the announcers are interested in. Random Access follows immediately to allow follow up individually on the announcements and other topics of interest. ..... The March Meeting ................ TBD From nmarais at sun.ac.za Fri Feb 8 11:39:37 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 8 Feb 2008 16:39:37 +0000 (UTC) Subject: [SciPy-user] Construct sparse matrix from sparse blocks References: Message-ID: Hi, Self On Fri, 08 Feb 2008 14:15:14 +0000, Neilen Marais wrote: > Thanks for the suggestion. I ended up writing the function at the end of > the > message. If anyone else finds it useful I think it may be a good idea to > put > it somewhere in scipy.sparse? > > Thanks > Neilen Well, I created a ticket with attachment for it: http://scipy.org/scipy/ scipy/ticket/602 Regards Neilen From wnbell at gmail.com Fri Feb 8 13:02:41 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 8 Feb 2008 12:02:41 -0600 Subject: [SciPy-user] Construct sparse matrix from sparse blocks In-Reply-To: References: Message-ID: On Feb 8, 2008 8:15 AM, Neilen Marais wrote: > Thanks for the suggestion. I ended up writing the function at the end of > the message. If anyone else finds it useful I think it may be a good idea to > put it somewhere in scipy.sparse? Thanks for sharing this with us. I think it would make a useful addition to scipy.sparse, so if there are no objections I'll integrate it in some form. Do you actually need the the row and column offset, or would a sparse analog of numpy's bmat() be more appropriate? http://www.scipy.org/Numpy_Example_List_With_Doc#bmat Specifically, would it be preferable to produce your example A = [A_aa 0 ] [0 A_bb] using the interface merge_sparse_blocks currently supports or something like bmat_sparse( [[A_aa, None],[None,A_bb]] ) We could also provide hstack_sparse() and vstack_sparse() functions also. Thoughts? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From nmarais at sun.ac.za Fri Feb 8 15:43:18 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 8 Feb 2008 20:43:18 +0000 (UTC) Subject: [SciPy-user] Construct sparse matrix from sparse blocks References: Message-ID: Hi Nathan On Fri, 08 Feb 2008 12:02:41 -0600, Nathan Bell wrote: > On Feb 8, 2008 8:15 AM, Neilen Marais wrote: > Thanks for sharing this with us. I think it would make a useful > addition to scipy.sparse, so if there are no objections I'll integrate > it in some form. +1 from me :) > > Do you actually need the the row and column offset, or would a sparse > analog of numpy's bmat() be more appropriate? > http://www.scipy.org/Numpy_Example_List_With_Doc#bmat > > Specifically, would it be preferable to produce your example > A = [A_aa 0 ] > [0 A_bb] > using the interface merge_sparse_blocks currently supports or something > like bmat_sparse( [[A_aa, None],[None,A_bb]] ) > > We could also provide hstack_sparse() and vstack_sparse() functions > also. Thoughts? I was unaware of the bmat like interface. I simply implemented the interface that seemed easiest implementation wise to me. Both the bmat and sparse hstack/vstack interfaces sound pretty nice to me... 
Avoids the possibility of messing up the offsets.

Regards
Neilen

From rmay at ou.edu  Fri Feb 8 15:50:36 2008
From: rmay at ou.edu (Ryan May)
Date: Fri, 08 Feb 2008 14:50:36 -0600
Subject: [SciPy-user] scipy.signal.chebwin
Message-ID: <47ACC09C.4070906@ou.edu>

Hi,

Can anyone attest to the correctness of scipy.signal.chebwin?  I can't
get anything but a set of NaNs for any attenuation value greater than
20dB.  Matlab's implementation defaults to _100_, but even most
examples I'm looking at are in the 50 dB range.

Thanks,
Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From wnbell at gmail.com  Fri Feb 8 16:08:30 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Fri, 8 Feb 2008 15:08:30 -0600
Subject: [SciPy-user] Construct sparse matrix from sparse blocks
In-Reply-To:
References:
Message-ID:

On Feb 8, 2008 2:43 PM, Neilen Marais wrote:
> I was unaware of the bmat like interface. I simply implemented the
> interface that seemed easiest implementation wise to me. Both the bmat
> and sparse hstack/vstack interfaces sound pretty nice to me... Avoids the
> possibility of messing up the offsets.

Very well, I'll adapt your code to provide bmat/hstack/vstack-like
functionality.

Regarding names, we already have spkron ~= kron and spdiags ~= diag.
I think we'll have to abandon this approach for the proposed functions
since spbmat, sphstack, spvstack are terrible :)

Which is better,

column_stack_sparse
row_stack_sparse
block_matrix_sparse

or

sparse_column_stack
sparse_row_stack
sparse_block_matrix

I prefer the XXX_sparse format since it aids tab completion.  Suggestions?

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From peridot.faceted at gmail.com  Fri Feb 8 16:29:03 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 8 Feb 2008 22:29:03 +0100
Subject: [SciPy-user] Construct sparse matrix from sparse blocks
In-Reply-To:
References:
Message-ID:

On 08/02/2008, Nathan Bell wrote:
> Regarding names, we already have spkron ~= kron and spdiags ~= diag.
> I think we'll have to abandon this approach for the proposed functions
> since spbmat, sphstack, spvstack are terrible :)
>
> Which is better,
>
> column_stack_sparse
> row_stack_sparse
> block_matrix_sparse
>
> or
>
> sparse_column_stack
> sparse_row_stack
> sparse_block_matrix
>
>
> I prefer the XXX_sparse format since it aids tab completion.  Suggestions?

What's wrong with scipy.splinalg.hstack? Or scipy.sparse.hstack? This
is what namespaces are *for*...

Anne

From stefan at sun.ac.za  Fri Feb 8 16:33:33 2008
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Fri, 8 Feb 2008 23:33:33 +0200
Subject: [SciPy-user] Construct sparse matrix from sparse blocks
In-Reply-To:
References:
Message-ID: <20080208213333.GD23594@mentat.za.net>

On Fri, Feb 08, 2008 at 03:08:30PM -0600, Nathan Bell wrote:
> Regarding names, we already have spkron ~= kron and spdiags ~= diag.
> I think we'll have to abandon this approach for the proposed functions
> since spbmat, sphstack, spvstack are terrible :)
>
> Which is better,
>
> column_stack_sparse
> row_stack_sparse
> block_matrix_sparse
>
> or
>
> sparse_column_stack
> sparse_row_stack
> sparse_block_matrix

Since this is already under the sparse namespace, maybe we can just
use column_stack?
Regards St?fan From scott at cse.ucdavis.edu Fri Feb 8 16:35:12 2008 From: scott at cse.ucdavis.edu (Scott Beardsley) Date: Fri, 08 Feb 2008 13:35:12 -0800 Subject: [SciPy-user] scipy.test() from trunk segfaults Message-ID: <47ACCB10.4000504@cse.ucdavis.edu> I'm having a problem building/testing scipy from the svn trunk. I built the ATLAS libraries from scratch (including the dynamic libs). All seems to go great with the numpy install (also the trunk). numpy tests all finish successfully. I've tried searching the archives and found a message about adding libg2c and libm but that didn't seem to work. As you can see from the log below scipy.test() segfaults: $ python Python 2.5.1 (r251:54863, Jul 17 2007, 16:59:41) [GCC 3.4.6 20060404 (Red Hat 3.4.6-8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> print numpy.__version__ 1.0.5.dev4775 >> numpy.test() Numpy is installed in /share/apps/python-2.5.1/lib/python2.5/site-packages/numpy Numpy version 1.0.5.dev4775 Python version 2.5.1 (r251:54863, Jul 17 2007, 16:59:41) [GCC 3.4.6 20060404 (Red Hat 3.4.6-8)] Found 10/10 tests for numpy.core.defmatrix Found 36/36 tests for numpy.core.ma Found 237/237 tests for numpy.core.multiarray Found 65/65 tests for numpy.core.numeric Found 31/31 tests for numpy.core.numerictypes Found 12/12 tests for numpy.core.records Found 6/6 tests for numpy.core.scalarmath Found 14/14 tests for numpy.core.umath Found 4/4 tests for numpy.ctypeslib Found 5/5 tests for numpy.distutils.misc_util Found 2/2 tests for numpy.fft.fftpack Found 3/3 tests for numpy.fft.helper Found 10/10 tests for numpy.lib.arraysetops Found 0/0 tests for numpy.lib.format Found 46/46 tests for numpy.lib.function_base Found 5/5 tests for numpy.lib.getlimits Found 4/4 tests for numpy.lib.index_tricks Found 3/3 tests for numpy.lib.polynomial Found 49/49 tests for numpy.lib.shape_base Found 15/15 tests for numpy.lib.twodim_base Found 43/43 tests for numpy.lib.type_check Found 1/1 tests for numpy.lib.ufunclike Found 40/40 tests for numpy.linalg Found 3/3 tests for numpy.random Found 0/0 tests for __main__ .................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................... ---------------------------------------------------------------------- Ran 708 tests in 0.651s OK >>> import scipy >>> print scipy.__version__ 0.7.0.dev3905 >>> scipy.test() /share/apps/python-2.5.1/lib/python2.5/site-packages/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.splinalg.dsolve warn('scipy.linsolve has moved to scipy.splinalg.dsolve', DeprecationWarning) .../share/apps/python-2.5.1/lib/python2.5/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. 
Re-run kmean with a different initialization. ................................................Residual: 1.05006987327e-07 ............../share/apps/python-2.5.1/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ........................................... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ./share/apps/python-2.5.1/lib/python2.5/site-packages/numpy/lib/utils.py:111: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) /share/apps/python-2.5.1/lib/python2.5/site-packages/numpy/lib/utils.py:111: DeprecationWarning: read_array is deprecated warnings.warn(str1, DeprecationWarning) ..................../share/apps/python-2.5.1/lib/python2.5/site-packages/numpy/lib/utils.py:111: DeprecationWarning: npfile is deprecated warnings.warn(str1, DeprecationWarning) ..............F..FF.........caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .. **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** .........................FFFFFFFFFFFFFFFFFNO ATLAS INFO AVAILABLE ......................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ....F.Segmentation fault $ When I run numpy/distutils/system_info.py it looks like it finds everything except I do get and "undefined reference" error from liblapack.so. 
See: $ python /share/apps/python-2.5.1/lib/python2.5/site-packages/numpy/distutils/system_info.py lapack_info: libraries lapack not found in /share/apps/fftw-3.2alpha3-test/lib FOUND: libraries = ['lapack'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = f77 lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /share/apps/fftw-3.2alpha3-test/lib libraries mkl,vml,guide not found in /share/apps/ATLAS-3.8.0/lib libraries mkl,vml,guide not found in /share/apps/UMFPACK-5.2.0/lib libraries mkl,vml,guide not found in /share/apps/AMD-2.2.0/lib libraries mkl,vml,guide not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas not found in /share/apps/ATLAS-3.8.0/lib __main__.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = f77 customize GnuFCompiler Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/share/apps/ATLAS-3.8.0/lib -llapack -lptf77blas -lptcblas -latlas -o _configtest /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `e_wsfe' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `do_fio' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_cat' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_wsfe' collect2: ld returned 1 exit status /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `e_wsfe' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `do_fio' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_cat' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_wsfe' collect2: ld returned 1 exit status failure. 
removing: _configtest.c _configtest.o Status: 255 Output: FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 2)] lapack_atlas_info: libraries lapack_atlas,f77blas,cblas,atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas,f77blas,cblas,atlas not found in /share/apps/ATLAS-3.8.0/lib libraries lapack_atlas not found in /share/apps/ATLAS-3.8.0/lib libraries lapack_atlas,f77blas,cblas,atlas not found in /share/apps/UMFPACK-5.2.0/lib libraries lapack_atlas not found in /share/apps/UMFPACK-5.2.0/lib libraries lapack_atlas,f77blas,cblas,atlas not found in /share/apps/AMD-2.2.0/lib libraries lapack_atlas not found in /share/apps/AMD-2.2.0/lib libraries lapack_atlas,f77blas,cblas,atlas not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 libraries lapack_atlas not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 __main__.lapack_atlas_info NOT AVAILABLE umfpack_info: libraries umfpack not found in /share/apps/fftw-3.2alpha3-test/lib libraries umfpack not found in /share/apps/ATLAS-3.8.0/lib amd_info: libraries amd not found in /share/apps/fftw-3.2alpha3-test/lib libraries amd not found in /share/apps/ATLAS-3.8.0/lib libraries amd not found in /share/apps/UMFPACK-5.2.0/lib FOUND: libraries = ['amd'] library_dirs = ['/share/apps/AMD-2.2.0/lib'] swig_opts = ['-I/share/apps/AMD-2.2.0/include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/share/apps/AMD-2.2.0/include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/share/apps/UMFPACK-5.2.0/lib', '/share/apps/AMD-2.2.0/lib'] swig_opts = ['-I/share/apps/UMFPACK-5.2.0/include', '-I/share/apps/AMD-2.2.0/include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/share/apps/UMFPACK-5.2.0/include', '/share/apps/AMD-2.2.0/include'] _pkg_config_info: Found executable /usr/bin/pkg-config NOT AVAILABLE lapack_atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas,ptf77blas,ptcblas,atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas,ptf77blas,ptcblas,atlas not found in /share/apps/ATLAS-3.8.0/lib libraries lapack_atlas not found in /share/apps/ATLAS-3.8.0/lib libraries lapack_atlas,ptf77blas,ptcblas,atlas not found in /share/apps/UMFPACK-5.2.0/lib libraries lapack_atlas not found in /share/apps/UMFPACK-5.2.0/lib libraries lapack_atlas,ptf77blas,ptcblas,atlas not found in /share/apps/AMD-2.2.0/lib libraries lapack_atlas not found in /share/apps/AMD-2.2.0/lib libraries lapack_atlas,ptf77blas,ptcblas,atlas not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 libraries lapack_atlas not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 __main__.lapack_atlas_threads_info NOT AVAILABLE x11_info: libraries X11 not found in /share/apps/fftw-3.2alpha3-test/lib libraries X11 not found in /share/apps/ATLAS-3.8.0/lib libraries X11 not found in /share/apps/UMFPACK-5.2.0/lib libraries X11 not found in /share/apps/AMD-2.2.0/lib libraries X11 not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 NOT AVAILABLE blas_info: libraries blas not found in /share/apps/fftw-3.2alpha3-test/lib FOUND: libraries = ['blas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = f77 fftw_info: FOUND: libraries = ['fftw3', 'fftw3'] library_dirs = ['/share/apps/fftw-3.2alpha3-test/lib'] define_macros = [('SCIPY_FFTW3_H', None)] 
include_dirs = ['/share/apps/fftw-3.2alpha3-test/include'] f2py_info: FOUND: sources = ['/share/apps/python-2.5.1/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.c'] include_dirs = ['/share/apps/python-2.5.1/lib/python2.5/site-packages/numpy/f2py/src'] gdk_pixbuf_xlib_2_info: FOUND: libraries = ['gdk_pixbuf_xlib-2.0', 'gdk_pixbuf-2.0', 'm', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = [('GDK_PIXBUF_XLIB_2_INFO', '"\\"2.4.13\\""'), ('GDK_PIXBUF_XLIB_VERSION_2_4_13', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/include/glib-2.0', '/usr/lib64/glib-2.0/include'] dfftw_threads_info: libraries drfftw_threads,dfftw_threads not found in /share/apps/fftw-3.2alpha3-test/lib libraries drfftw_threads,dfftw_threads not found in /share/apps/ATLAS-3.8.0/lib libraries drfftw_threads,dfftw_threads not found in /share/apps/UMFPACK-5.2.0/lib libraries drfftw_threads,dfftw_threads not found in /share/apps/AMD-2.2.0/lib libraries drfftw_threads,dfftw_threads not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 dfftw threads not found NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in /share/apps/fftw-3.2alpha3-test/lib FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = c fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/share/apps/fftw-3.2alpha3-test/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/share/apps/fftw-3.2alpha3-test/include'] blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /share/apps/fftw-3.2alpha3-test/lib libraries mkl,vml,guide not found in /share/apps/ATLAS-3.8.0/lib libraries mkl,vml,guide not found in /share/apps/UMFPACK-5.2.0/lib libraries mkl,vml,guide not found in /share/apps/AMD-2.2.0/lib libraries mkl,vml,guide not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /share/apps/fftw-3.2alpha3-test/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = c customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/share/apps/ATLAS-3.8.0/lib -lptf77blas -lptcblas -latlas -o _configtest ATLAS version 3.8.0 built by root on Thu Feb 7 16:08:30 PST 2008: UNAME : Linux tribe.cse.ucdavis.edu 2.6.9-67.ELsmp #1 SMP Fri Nov 16 12:49:06 EST 2007 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_HAMMER -DATL_CPUMHZ=1808 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_3DNow -DATL_USE64BITS -DATL_GAS_x8664 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 0 F77 : gfortran, version GNU Fortran 95 (GCC) 4.1.1 20070105 (Red Hat 4.1.1-53) F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64 SMC : gcc, version gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-8) SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64 SKC : gcc, version gcc (GCC) 3.4.6 
20060404 (Red Hat 3.4.6-8) SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64 success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.8.0\\""')] sfftw_info: libraries srfftw,sfftw not found in /share/apps/fftw-3.2alpha3-test/lib libraries srfftw,sfftw not found in /share/apps/ATLAS-3.8.0/lib libraries srfftw,sfftw not found in /share/apps/UMFPACK-5.2.0/lib libraries srfftw,sfftw not found in /share/apps/AMD-2.2.0/lib libraries srfftw,sfftw not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 sfftw not found NOT AVAILABLE xft_info: FOUND: libraries = ['Xft', 'X11', 'freetype', 'Xrender', 'fontconfig'] library_dirs = ['/usr/X11R6/lib64'] define_macros = [('XFT_INFO', '"\\"2.1.2.2\\""'), ('XFT_VERSION_2_1_2_2', None)] include_dirs = ['/usr/X11R6/include', '/usr/include/freetype2', '/usr/include/freetype2/config'] fft_opt_info: djbfft_info: NOT AVAILABLE FOUND: libraries = ['fftw3'] library_dirs = ['/share/apps/fftw-3.2alpha3-test/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/share/apps/fftw-3.2alpha3-test/include'] gdk_x11_2_info: FOUND: libraries = ['gdk-x11-2.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = [('GDK_X11_2_INFO', '"\\"2.4.13\\""'), ('GDK_X11_VERSION_2_4_13', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib64/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib64/glib-2.0/include'] agg2_info: NOT AVAILABLE numarray_info: NOT AVAILABLE blas_src_info: NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /share/apps/fftw-3.2alpha3-test/lib libraries rfftw,fftw not found in /share/apps/ATLAS-3.8.0/lib libraries rfftw,fftw not found in /share/apps/UMFPACK-5.2.0/lib libraries rfftw,fftw not found in /share/apps/AMD-2.2.0/lib libraries rfftw,fftw not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 fftw2 not found NOT AVAILABLE fftw_threads_info: libraries rfftw_threads,fftw_threads not found in /share/apps/fftw-3.2alpha3-test/lib libraries rfftw_threads,fftw_threads not found in /share/apps/ATLAS-3.8.0/lib libraries rfftw_threads,fftw_threads not found in /share/apps/UMFPACK-5.2.0/lib libraries rfftw_threads,fftw_threads not found in /share/apps/AMD-2.2.0/lib libraries rfftw_threads,fftw_threads not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 fftw threads not found NOT AVAILABLE _numpy_info: NOT AVAILABLE wx_info: Could not locate executable wx-config File not found: None. Cannot determine wx info. 
NOT AVAILABLE gdk_info: FOUND: libraries = ['gdk', 'Xi', 'Xext', 'X11', 'm', 'glib'] library_dirs = ['/usr/X11R6/lib64'] define_macros = [('GDK_INFO', '"\\"1.2.10\\""'), ('GDK_VERSION_1_2_10', None)] include_dirs = ['/usr/include/gtk-1.2', '/usr/X11R6/include', '/usr/include/glib-1.2', '/usr/lib64/glib/include'] gtkp_x11_2_info: FOUND: libraries = ['gtk-x11-2.0', 'gdk-x11-2.0', 'atk-1.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = [('GTKP_X11_2_INFO', '"\\"2.4.13\\""'), ('GTK_X11_VERSION_2_4_13', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib64/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/atk-1.0', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib64/glib-2.0/include'] sfftw_threads_info: libraries srfftw_threads,sfftw_threads not found in /share/apps/fftw-3.2alpha3-test/lib libraries srfftw_threads,sfftw_threads not found in /share/apps/ATLAS-3.8.0/lib libraries srfftw_threads,sfftw_threads not found in /share/apps/UMFPACK-5.2.0/lib libraries srfftw_threads,sfftw_threads not found in /share/apps/AMD-2.2.0/lib libraries srfftw_threads,sfftw_threads not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 sfftw threads not found NOT AVAILABLE boost_python_info: FOUND: libraries = [('boost_python_src', {'sources': ['./boost_1_34_1/libs/python/src/long.cpp', './boost_1_34_1/libs/python/src/errors.cpp... ... ...python/src/converter/registry.cpp'], 'include_dirs': ['./boost_1_34_1', '/share/apps/python-2.5.1/include/python2.5']})] include_dirs = ['./boost_1_34_1'] freetype2_info: FOUND: libraries = ['freetype', 'z'] define_macros = [('FREETYPE2_INFO', '"\\"9.7.3\\""'), ('FREETYPE2_VERSION_9_7_3', None)] include_dirs = ['/usr/include/freetype2'] gdk_2_info: FOUND: libraries = ['gdk-x11-2.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = [('GDK_2_INFO', '"\\"2.4.13\\""'), ('GDK_VERSION_2_4_13', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib64/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib64/glib-2.0/include'] dfftw_info: libraries drfftw,dfftw not found in /share/apps/fftw-3.2alpha3-test/lib libraries drfftw,dfftw not found in /share/apps/ATLAS-3.8.0/lib libraries drfftw,dfftw not found in /share/apps/UMFPACK-5.2.0/lib libraries drfftw,dfftw not found in /share/apps/AMD-2.2.0/lib libraries drfftw,dfftw not found in /usr/lib/gcc/x86_64-redhat-linux/3.4.3 dfftw not found NOT AVAILABLE lapack_src_info: NOT AVAILABLE gtkp_2_info: FOUND: libraries = ['gtk-x11-2.0', 'gdk-x11-2.0', 'atk-1.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = [('GTKP_2_INFO', '"\\"2.4.13\\""'), ('GTK_VERSION_2_4_13', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib64/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/atk-1.0', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', 
'/usr/include/glib-2.0', '/usr/lib64/glib-2.0/include'] gdk_pixbuf_2_info: FOUND: libraries = ['gdk_pixbuf-2.0', 'm', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = [('GDK_PIXBUF_2_INFO', '"\\"2.4.13\\""'), ('GDK_PIXBUF_VERSION_2_4_13', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/include/glib-2.0', '/usr/lib64/glib-2.0/include'] atlas_info: libraries f77blas,cblas,atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas not found in /share/apps/fftw-3.2alpha3-test/lib libraries lapack_atlas not found in /share/apps/ATLAS-3.8.0/lib __main__.atlas_info FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = f77 Numeric_info: NOT AVAILABLE numerix_info: numpy_info: FOUND: define_macros = [('NUMPY_VERSION', '"\\"1.0.5.dev4775\\""'), ('NUMPY', None)] FOUND: define_macros = [('NUMPY_VERSION', '"\\"1.0.5.dev4775\\""'), ('NUMPY', None)] Any ideas? Here is my site.cfg: [DEFAULT] library_dirs = /share/apps/fftw-3.2alpha3-test/lib:/share/apps/ATLAS-3.8.0/lib:/share/apps/UMFPACK-5.2.0/lib:/share/apps/AMD-2.2.0/lib:/usr/lib/gcc/x86_64-redhat-linux/3.4.3 include_dirs = /share/apps/fftw-3.2alpha3-test/include:/share/apps/ATLAS-3.8.0/include:/share/apps/UMFPACK-5.2.0/include:/share/apps/AMD-2.2.0/include [blas_opt] libraries = g2c, m, ptf77blas, ptcblas, atlas [lapack_opt] libraries = g2c, m, lapack, ptf77blas, ptcblas, atlas [amd] amd_libs = amd [umfpack] umfpack_libs = umfpack [fftw] libraries = fftw3 Scott From nmarais at sun.ac.za Fri Feb 8 16:42:13 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 8 Feb 2008 21:42:13 +0000 (UTC) Subject: [SciPy-user] Construct sparse matrix from sparse blocks References: Message-ID: Nathan, On Fri, 08 Feb 2008 15:08:30 -0600, Nathan Bell wrote: > Regarding names, we already have spkron ~= kron and spdiags ~= diag. I > think we'll have to abandon this approach for the proposed functions > since spbmat, sphstack, spvstack are terrible :) > > Which is better, > > column_stack_sparse > row_stack_sparse > block_matrix_sparse > > or > > sparse_column_stack > sparse_row_stack > sparse_block_matrix > I prefer the XXX_sparse format since it aids tab completion. > Suggestions? All else being equal I prefer verbose and tab-completable, although it can get a bit much. I'm actually in favour of Anne's suggestion of using namespaces, though I'd put them in the sparse, rather than splinalg namespaces. I envision working like this: import scipy.sparse as sp import numpy as N # sparse stuff A = sp.hstack(....) B = sp.bmat(....) # dense (like me) stuff D = N.hstack(.....) etc. or this: from scipy.sparse import hstack as sphstack if that's your bag.... While the original numpy names are a bit concise, I think consistency goes along way. This let's us use namespaces to combine brevity with explicitness. In the immortal words of import this: Explicit is better than implicit. Namespaces are one honking great idea -- let's do more of those! ;) Regards Neilen From robert.kern at gmail.com Fri Feb 8 16:49:57 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 08 Feb 2008 15:49:57 -0600 Subject: [SciPy-user] scipy.test() from trunk segfaults In-Reply-To: <47ACCB10.4000504@cse.ucdavis.edu> References: <47ACCB10.4000504@cse.ucdavis.edu> Message-ID: <47ACCE85.2050100@gmail.com> Scott Beardsley wrote: > I'm having a problem building/testing scipy from the svn trunk. 
I built > the ATLAS libraries from scratch (including the dynamic libs). All seems > to go great with the numpy install (also the trunk). numpy tests all > finish successfully. I've tried searching the archives and found a > message about adding libg2c and libm but that didn't seem to work. As > you can see from the log below scipy.test() segfaults: When reporting a segfault in the test suite, please run it with scipy.test(verbosity=2). This will print out the name of the test before running it. Only the last few lines before the segfault are required to be posted here for us to identify the segfault. Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From scott at cse.ucdavis.edu Fri Feb 8 16:55:17 2008 From: scott at cse.ucdavis.edu (Scott Beardsley) Date: Fri, 08 Feb 2008 13:55:17 -0800 Subject: [SciPy-user] scipy.test() from trunk segfaults In-Reply-To: <47ACCE85.2050100@gmail.com> References: <47ACCB10.4000504@cse.ucdavis.edu> <47ACCE85.2050100@gmail.com> Message-ID: <47ACCFC5.5020109@cse.ucdavis.edu> Robert Kern wrote: > When reporting a segfault in the test suite, please run it with > scipy.test(verbosity=2). I assume you mean verbose=2... >>> scipy.test(verbosity=2)" Traceback (most recent call last): File "", line 1, in TypeError: test() got an unexpected keyword argument 'verbosity' >>> scipy.test(verbose=2) test_diag (test_basic.TestTriu) ... ok test_cblas (test_blas.TestBLAS) ... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ok test_fblas (test_blas.TestBLAS) ... ok test_axpy (test_blas.TestCBLAS1Simple) ... ok test_amax (test_blas.TestFBLAS1Simple) ... ok test_asum (test_blas.TestFBLAS1Simple) ... FAIL test_axpy (test_blas.TestFBLAS1Simple) ... ok test_complex_dotc (test_blas.TestFBLAS1Simple) ... Segmentation fault From scott at cse.ucdavis.edu Fri Feb 8 18:45:11 2008 From: scott at cse.ucdavis.edu (Scott Beardsley) Date: Fri, 08 Feb 2008 15:45:11 -0800 Subject: [SciPy-user] scipy.test() from trunk segfaults In-Reply-To: <47ACCFC5.5020109@cse.ucdavis.edu> References: <47ACCB10.4000504@cse.ucdavis.edu> <47ACCE85.2050100@gmail.com> <47ACCFC5.5020109@cse.ucdavis.edu> Message-ID: <47ACE987.2050107@cse.ucdavis.edu> Scott Beardsley wrote: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses fblas instead of cblas. 
FYI, atlas is found in numpy's system_info.py but I do get the following message: $ python numpy/distutils/system_info.py compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/share/apps/ATLAS-3.8.0/lib -llapack -lptf77blas -lptcblas -latlas -o _configtest /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `e_wsfe' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `do_fio' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_cat' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_wsfe' collect2: ld returned 1 exit status /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `e_wsfe' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_exp' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `do_fio' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `z_sqrt' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_cat' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `c_abs' /share/apps/ATLAS-3.8.0/lib/liblapack.so: undefined reference to `s_wsfe' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Status: 255 Output: FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/share/apps/ATLAS-3.8.0/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 2)] The above source will compile if I add -lg2c and -lm to the compilation command. Is there a way to add this to the site.cfg so it compiles _configtest.c successfully? I'm guessing it is failing because it thinks lapack is incomplete. BTW, the INSTALL.txt mentions doing the following: 5) ATLAS version, the locations of atlas and lapack libraries, building information if any. If you have ATLAS version 3.3.6 or newer, then give the output of the last command in :: cd scipy/Lib/linalg python setup_atlas_version.py build_ext --inplace --force python -c 'import atlas_version' But it looks to be out-of-date because I don't have a Lib directory in /scipy/. I found a couple setup_atlas_version.py scripts but they don't seem to like the above commands. Scott From stefan at sun.ac.za Fri Feb 8 20:20:58 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 9 Feb 2008 03:20:58 +0200 Subject: [SciPy-user] undefined symbol: clapack_sgesv Message-ID: <20080209012058.GA14648@mentat.za.net> Hi all, I am having some trouble compiling and running scipy (latest SVN). 
When I try to import scipy.linalg, I see ImportError: /home/stefan/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv I then investigated clapack_sgesv with ldd: $ ldd /home/stefan/lib/python2.5/site-packages/scipy/linalg/clapack.so linux-gate.so.1 => (0xffffe000) libf77blas.so.3 => /usr/lib/sse2/libf77blas.so.3 (0xb79e3000) libcblas.so.3 => /usr/lib/sse2/libcblas.so.3 (0xb74d5000) libatlas.so.3 => /usr/lib/sse2/libatlas.so.3 (0xb6f2c000) liblapack.so.3 => /usr/lib/atlas/sse2/liblapack.so.3 (0xb68dc000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb68b5000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb6890000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb6885000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb6736000) libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb6159000) /lib/ld-linux.so.2 (0x80000000) And $ nm /usr/lib/atlas/sse2/liblapack.a | grep clapack_sgesv clapack_sgesv.o: 00000000 T clapack_sgesv I believe I am missing something obvious, and I hope someone can point it out. I also tried modifying my site.cfg to include [blas_opt] libraries = ptf77blas, ptcblas, lapack_atlas [lapack_opt] libraries = lapack-3, ptf77blas, ptcblas, lapack_atlas (the scipy.cfg example says [atlas] is deprecated) but it doesn't look like anything is linked against lapack_atlas. Any ideas? Regards St?fan From brad.malone at gmail.com Fri Feb 8 20:33:02 2008 From: brad.malone at gmail.com (Brad Malone) Date: Fri, 8 Feb 2008 17:33:02 -0800 Subject: [SciPy-user] FFTN usage In-Reply-To: References: Message-ID: Hi, I am wanting to do a 3-dimensional Fourier transform of a grid of values using fftn but I am unsure as to how to properly input the array. This may be very basic but it's my first time doing a multidimensional fourier transform in any language, so I greatly appreciate any answers to 'stupid' questions I might have. So let's say I have a grid of values with [i,j,k] where i,j,k each go from 0 to 3. How do I compute the Fourier transform of this grid using fftn? Is the first argument a 1D array that goes through my 3D array along some standard path (last column goes fastest, or something like this?), or is the first argument actually a 3D array itself? I looked at the documentation but it wasn't clear to me. Thanks for your time, I appreciate it. Brad From akumar at ee.iitm.ac.in Fri Feb 8 20:40:03 2008 From: akumar at ee.iitm.ac.in (Kumar Appaiah) Date: Sat, 9 Feb 2008 07:10:03 +0530 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <47ACC09C.4070906@ou.edu> References: <47ACC09C.4070906@ou.edu> Message-ID: On 09/02/2008, Ryan May wrote: > Hi, > > Can anyone attest to the correctness of scipy.signal.chebwin? I can't > get anything but a set of NaN's for any attenuation value greater than > 20dB. Matlab's implementation default's to _100_, but even most > examples I'm looking at are in the 50 dB range. I can. And the discussion is here: http://projects.scipy.org/scipy/scipy/ticket/581 HTH. Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600036 From stefan at sun.ac.za Fri Feb 8 20:45:25 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 9 Feb 2008 03:45:25 +0200 Subject: [SciPy-user] FFTN usage In-Reply-To: References: Message-ID: <20080209014525.GB14648@mentat.za.net> Hi Brad On Fri, Feb 08, 2008 at 05:33:02PM -0800, Brad Malone wrote: > So let's say I have a grid of values with [i,j,k] where i,j,k each go > from 0 to 3. How do I compute the Fourier transform of this grid using > fftn? 
Is the first argument a 1D array that goes through my 3D array > along some standard path (last column goes fastest, or something like > this?), or is the first argument actually a 3D array itself? I looked > at the documentation but it wasn't clear to me. I assume your values are packed into an array of shape (4,4,4). `fftn` can be seen as the equivalent of running a one-dimensional FFT along every axis of that array, i.e. along rows, columns and depth. You can further adjust the working by specifying keywords: np.fft.fftn(a, s=None, axes=None) fftn(a, s=None, axes=None) The n-dimensional fft of a. s is a sequence giving the shape of the input an result along the transformed axes, as n for fft. Results are packed analogously to fft: the term for zero frequency in all axes is in the low-order corner, while the term for the Nyquist frequency in all axes is in the middle. If neither s nor axes is specified, the transform is taken along all axes. If s is specified and axes is not, the last len(s) axes are used. If axes are specified and s is not, the input shape along the specified axes is used. If s and axes are both specified and are not the same length, an exception is raised. Btw, this docstring was obtained by typing import numpy as np np.fft.fftn? in IPython. Regards St?fan From stefan at sun.ac.za Fri Feb 8 20:47:15 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 9 Feb 2008 03:47:15 +0200 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: References: <47ACC09C.4070906@ou.edu> Message-ID: <20080209014715.GC14648@mentat.za.net> On Sat, Feb 09, 2008 at 07:10:03AM +0530, Kumar Appaiah wrote: > On 09/02/2008, Ryan May wrote: > > Hi, > > > > Can anyone attest to the correctness of scipy.signal.chebwin? I can't > > get anything but a set of NaN's for any attenuation value greater than > > 20dB. Matlab's implementation default's to _100_, but even most > > examples I'm looking at are in the 50 dB range. > > I can. And the discussion is here: > http://projects.scipy.org/scipy/scipy/ticket/581 I applied the patch earlier this evening. I'll check it in as soon as I can get my linalg working again (I'm having ATLAS/lapack problems). Regards St?fan From yennifersantiago at gmail.com Fri Feb 8 22:49:14 2008 From: yennifersantiago at gmail.com (Yennifer Santiago) Date: Fri, 8 Feb 2008 23:49:14 -0400 Subject: [SciPy-user] SciPy_ERROR Message-ID: <41bc705b0802081949l45b89c96n2b4d6a32a3e88e91@mail.gmail.com> Hello... I installed python-scipy with apt-get in Ubuntu, when I try to execute the Example.py of Genetic Algorithms generate the following error: carolina at carolinapc:~/AG$ python Example.py RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 Traceback (most recent call last): File "Example.py", line 1, in ? 
from scipy import ga ImportError: cannot import name ga The characteristics of numpy version that I use are: Paquete: python-numpy Estado: instalado Instalado autom?ticamente: no Versi?n: 1:1.0rc1-0ubuntu1 Prioridad: opcional Secci?n: universe/python Desarrollador: Ubuntu MOTU Developers Tama?o sin comprimir: 6693k Depende de: python-central (>= 0.5), python (< 2.6), python (>= 2.4), atlas3-base | lapack3 | liblapack.so.3, atlas3-base | refblas3 | libblas.so.3, libc6 (>= 2.4-1) Sugiere: python-numpy-doc Tiene conflictos con: python-f2py, python2.3-f2py, python2.4-f2py, python-scipy (<= 0.5.0-2), python-matplotlib (<= 0.87.4-3) Remplaza: python-f2py, python2.3-f2py, python2.4-f2py Proporciona: python2.4-numpy, python2.5-numpy I don't know what is the problem... -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmay at ou.edu Fri Feb 8 23:32:33 2008 From: rmay at ou.edu (Ryan May) Date: Fri, 08 Feb 2008 22:32:33 -0600 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: References: <47ACC09C.4070906@ou.edu> Message-ID: <47AD2CE1.2080600@ou.edu> Kumar Appaiah wrote: > On 09/02/2008, Ryan May wrote: >> Hi, >> >> Can anyone attest to the correctness of scipy.signal.chebwin? I can't >> get anything but a set of NaN's for any attenuation value greater than >> 20dB. Matlab's implementation default's to _100_, but even most >> examples I'm looking at are in the 50 dB range. > > I can. And the discussion is here: > http://projects.scipy.org/scipy/scipy/ticket/581 > Well, it does help to an extent. However, what numbers did you use in your comparison with Matlab? I'm currently having trouble replicating my results from matlab. Using: chebwin(34,40) I get: array([ 0.15091791, 0.12635953, 0.17403453, 0.22943129, 0.29196621, 0.36068193, 0.43425971, 0.51105165, 0.58912968, 0.66634529, 0.74039742, 0.80892035, 0.86961657, 0.92043888, 0.95976796, 0.98649886, 1. , 1. , 0.98649886, 0.95976796, 0.92043888, 0.86961657, 0.80892035, 0.74039742, 0.66634529, 0.58912968, 0.51105165, 0.43425971, 0.36068193, 0.29196621, 0.22943129, 0.17403453, 0.12635953, 0.15091791]) But with matlab I get: ans = 0.1494 0.1249 0.1724 0.2276 0.2899 0.3584 0.4316 0.5081 0.5859 0.6629 0.7368 0.8053 0.8664 0.9180 0.9583 0.9859 1.0000 1.0000 0.9859 0.9583 0.9180 0.8664 0.8053 0.7368 0.6629 0.5859 0.5081 0.4316 0.3584 0.2899 0.2276 0.1724 0.1249 0.1494 But more problematic, here's what I get for chebwin(53,40) (trying to replicate a book figure): array([-0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010147, -0.16010148, -0.16010149, -0.1601015 , -0.1601015 , -0.16010145, -0.16010096, -0.16009716, -0.16007336, -0.15994973, -0.15941238, -0.15743963, -0.15127378, -0.13476733, -0.09676449, -0.02138783, 0.10725105, 0.29505955, 0.52638443, 0.7591664 , 0.93452305, 1. , 0.93452305, 0.7591664 , 0.52638443, 0.29505955, 0.10725105, -0.02138783, -0.09676449, -0.13476733, -0.15127378, -0.15743963, -0.15941238, -0.15994973, -0.16007336, -0.16009716, -0.16010096, -0.16010145, -0.1601015 , -0.1601015 , -0.16010149, -0.16010148, -0.16010147, -0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010146]) Clearly, all of those negative values are *not* correct. (And the problems are not limited to the numbers above.) Any ideas? 
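For cross-checking results like the ones above, the Dolph-Chebyshev window can be computed directly from its textbook frequency-domain definition. The sketch below is only an illustrative reference, not the scipy.signal source, and the helper name ref_chebwin is made up here; for the 40-100 dB attenuations discussed in this thread the samples should come out symmetric and non-negative, with a peak of 1.0:

import numpy as np

def ref_chebwin(M, at):
    """Dolph-Chebyshev window of length M with `at` dB of sidelobe attenuation."""
    order = M - 1.0
    # beta controls the sidelobe level
    beta = np.cosh(1.0 / order * np.arccosh(10 ** (np.abs(at) / 20.0)))
    k = np.arange(M)
    x = beta * np.cos(np.pi * k / M)
    # Chebyshev polynomial T_order(x): cos form inside [-1, 1], cosh form outside
    W = np.zeros(M)
    W[x > 1] = np.cosh(order * np.arccosh(x[x > 1]))
    W[x < -1] = (2 * (M % 2) - 1) * np.cosh(order * np.arccosh(-x[x < -1]))
    W[np.abs(x) <= 1] = np.cos(order * np.arccos(x[np.abs(x) <= 1]))
    # transform back to the time domain; even lengths need a half-sample phase shift
    if M % 2:
        w = np.real(np.fft.fft(W))
        n = (M + 1) // 2
        w = np.concatenate((w[n - 1:0:-1], w[:n]))
    else:
        W = W * np.exp(1j * np.pi * k / M)
        w = np.real(np.fft.fft(W))
        n = M // 2 + 1
        w = np.concatenate((w[n - 1:0:-1], w[1:n]))
    return w / w.max()

print ref_chebwin(53, 40)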
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From akumar at iitm.ac.in Sat Feb 9 01:46:04 2008 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sat, 9 Feb 2008 12:16:04 +0530 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <47AD2CE1.2080600@ou.edu> References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> Message-ID: <20080209064604.GD4122@debian.akumar.iitm.ac.in> On Fri, Feb 08, 2008 at 10:32:33PM -0600, Ryan May wrote: > chebwin(34,40) > > I get: > > array([ 0.15091791, 0.12635953, 0.17403453, 0.22943129, 0.29196621, > 0.36068193, 0.43425971, 0.51105165, 0.58912968, 0.66634529, > 0.74039742, 0.80892035, 0.86961657, 0.92043888, 0.95976796, > 0.98649886, 1. , 1. , 0.98649886, 0.95976796, > 0.92043888, 0.86961657, 0.80892035, 0.74039742, 0.66634529, > 0.58912968, 0.51105165, 0.43425971, 0.36068193, 0.29196621, > 0.22943129, 0.17403453, 0.12635953, 0.15091791]) > > But with matlab I get: > > ans = > 0.1494 0.1249 0.1724 0.2276 0.2899 > 0.3584 0.4316 0.5081 0.5859 0.6629 > 0.7368 0.8053 0.8664 0.9180 0.9583 > 0.9859 1.0000 1.0000 0.9859 0.9583 > 0.9180 0.8664 0.8053 0.7368 0.6629 > 0.5859 0.5081 0.4316 0.3584 0.2899 > 0.2276 0.1724 0.1249 0.1494 I get this: [kumar at debian ~] python chebwin.py [ 0.14490233 0.12074828 0.16763814 0.22212255 0.28362753 0.35121151 0.42357774 0.49910747 0.5759161 0.65194005 0.72506923 0.7933214 0.85497322 0.90847191 0.95205773 0.98342321 1. 1. 0.98342321 0.95205773 0.90847191 0.85497322 0.7933214 0.72506923 0.65194005 0.5759161 0.49910747 0.42357774 0.35121151 0.28362753 0.22212255 0.16763814 0.12074828 0.14490233] There is something wrong. > But more problematic, here's what I get for chebwin(53,40) (trying to > replicate a book figure): > > array([-0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010146, > -0.16010147, -0.16010148, -0.16010149, -0.1601015 , -0.1601015 , > -0.16010145, -0.16010096, -0.16009716, -0.16007336, -0.15994973, > -0.15941238, -0.15743963, -0.15127378, -0.13476733, -0.09676449, > -0.02138783, 0.10725105, 0.29505955, 0.52638443, 0.7591664 , > 0.93452305, 1. , 0.93452305, 0.7591664 , 0.52638443, > 0.29505955, 0.10725105, -0.02138783, -0.09676449, -0.13476733, > -0.15127378, -0.15743963, -0.15941238, -0.15994973, -0.16007336, > -0.16009716, -0.16010096, -0.16010145, -0.1601015 , -0.1601015 , > -0.16010149, -0.16010148, -0.16010147, -0.16010146, -0.16010146, > -0.16010146, -0.16010146, -0.16010146]) > > Clearly, all of those negative values are *not* correct. (And the > problems are not limited to the numbers above.) Any ideas? Let me try to figure it out. Then I'll let you know. Thanks. Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From wnbell at gmail.com Sat Feb 9 02:32:55 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 9 Feb 2008 01:32:55 -0600 Subject: [SciPy-user] Construct sparse matrix from sparse blocks In-Reply-To: References: Message-ID: On Feb 8, 2008 3:42 PM, Neilen Marais wrote: > All else being equal I prefer verbose and tab-completable, although it > can get a bit much. I'm actually in favour of Anne's suggestion of using > namespaces, though I'd put them in the sparse, rather than splinalg > namespaces. I envision working like this: scipy.sparse.bmat lives! http://projects.scipy.org/scipy/scipy/changeset/3908 I haven't done it yet, but I think sparse.hstack() and sparse.vstack() can be implemented via sparse.bmat() quite easily. E.g. 
something like: def hstack( blocks ): return bmat( [blocks] ) def vstack( blocks ): return bmat( [ [b] for b in blocks ] ) Feel free to contribute additional unittests for bmat() as the current ones are fairly simple. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From david at ar.media.kyoto-u.ac.jp Sat Feb 9 06:35:12 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 09 Feb 2008 20:35:12 +0900 Subject: [SciPy-user] Python on Intel Xeon Dual Core Machine In-Reply-To: References: <47A8DA9E.1020001@gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78039F0A61@EXVS06.net.ucsf.edu> <827183970802061058r77fe3be1o88126c9eb62e6808@mail.gmail.com> <47ABF934.70003@ar.media.kyoto-u.ac.jp> <47AC0F37.4050908@ar.media.kyoto-u.ac.jp> Message-ID: <47AD8FF0.7020909@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > On Feb 8, 2008 2:13 AM, David Cournapeau wrote: >> But I checked again: dlopening a library with open mp does work: here is >> an archive with a trivial program using a lib dlopened, works on ubuntu >> with gcc 4.2: >> >> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/dynamic_openmp.tar.bz2 > > Are you saying that you can compile this .so with Ubuntu's g++-4.2 and > use it on the same system? Or are you compiling it elsewhere and > running on Ubuntu? I get the same error as before: > > $ make > gcc-4.2 -W -Wall -c -o taylor.o taylor.c > gcc-4.2 -c -fPIC -fopenmp -W -Wall -o compute.o compute.c > gcc-4.2 -shared compute.o -o libcompute.so -Wl,-soname,libcompute.so -lgomp > gcc-4.2 -o taylor taylor.o compute.o -lgomp -L. -Wl,-rpath,. -ldl > > $ python > Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32) > [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. Sorry, I may not have been clear: the small program taylor shows that opening a library (libcompute.so here) through dlopen does work on ubuntu with gcc 4.2, nothing else. It indeed does not work with python on my machine either. The problem, I think, is that since ubuntu still uses gcc 4.1 to compile python, python cannot dlopen libraries which depends on gcc 4.2 specific runtime services (libgomp is not an usual library, it is a library implementing open mp for gcc, hence gcc version specific, contrary to say libgtk or any usual library). So I would try compiling python with gcc 4.2 to see if this is indeed the problem. cheers, David From david at ar.media.kyoto-u.ac.jp Sat Feb 9 06:51:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 09 Feb 2008 20:51:36 +0900 Subject: [SciPy-user] scipy.test() from trunk segfaults In-Reply-To: <47ACCB10.4000504@cse.ucdavis.edu> References: <47ACCB10.4000504@cse.ucdavis.edu> Message-ID: <47AD93C8.2050908@ar.media.kyoto-u.ac.jp> Scott Beardsley wrote: > I'm having a problem building/testing scipy from the svn trunk. I built > the ATLAS libraries from scratch (including the dynamic libs). All seems > to go great with the numpy install (also the trunk). numpy tests all > finish successfully. I've tried searching the archives and found a > message about adding libg2c and libm but that didn't seem to work. As > you can see from the log below scipy.test() segfaults: It looks like you did not compile atlas correctly. The problem is that by default, ATLAS will pick up gfortran if you have it, whereas numpy and scipy will pick up g77 first if you have it. 
Since you are using RH 3, which uses g77 for all its fortran code, you should not use gfortran *at all* for anything related to numpy. g77 and gfortran are *not* ABI compatible, and mixing code from one with the other is just asking for trouble. Reconfigure and rebuilt atlas with the option "-C if g77". Make sure that none of the softwares you are using to build numpy is using gfortran. Also, I noticed that you are using some alpha softwares as libraries (fftw 3.2). Please use released softwares first to see if you can reproduce the bug. cheers, David From david at ar.media.kyoto-u.ac.jp Sat Feb 9 06:55:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 09 Feb 2008 20:55:15 +0900 Subject: [SciPy-user] undefined symbol: clapack_sgesv In-Reply-To: <20080209012058.GA14648@mentat.za.net> References: <20080209012058.GA14648@mentat.za.net> Message-ID: <47AD94A3.1040702@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > Hi all, > > I am having some trouble compiling and running scipy (latest SVN). > When I try to import scipy.linalg, I see > > ImportError: /home/stefan/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv > > I then investigated clapack_sgesv with ldd: > > $ ldd /home/stefan/lib/python2.5/site-packages/scipy/linalg/clapack.so > linux-gate.so.1 => (0xffffe000) > libf77blas.so.3 => /usr/lib/sse2/libf77blas.so.3 (0xb79e3000) > libcblas.so.3 => /usr/lib/sse2/libcblas.so.3 (0xb74d5000) > libatlas.so.3 => /usr/lib/sse2/libatlas.so.3 (0xb6f2c000) > liblapack.so.3 => /usr/lib/atlas/sse2/liblapack.so.3 (0xb68dc000) > libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb68b5000) > libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb6890000) > libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb6885000) > libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb6736000) > libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb6159000) > /lib/ld-linux.so.2 (0x80000000) > > And > > $ nm /usr/lib/atlas/sse2/liblapack.a | grep clapack_sgesv > clapack_sgesv.o: > 00000000 T clapack_sgesv Not that it should matter since it looks like you are using atlas packaged by debian, but what does ldd says for liblapack.so ? > > I believe I am missing something obvious, and I hope someone can point > it out. I also tried modifying my site.cfg to include > > [blas_opt] > libraries = ptf77blas, ptcblas, lapack_atlas > > [lapack_opt] > libraries = lapack-3, ptf77blas, ptcblas, lapack_atlas > > (the scipy.cfg example says [atlas] is deprecated) > > but it doesn't look like anything is linked against lapack_atlas. You don't care about that, because debian packages atlas in a clever way: libblas.so and liblapack.so are drop-in replacements for netlib blas and lapack, but with ATLAS as an implementation. cheers, David From wnbell at gmail.com Sat Feb 9 16:40:35 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 9 Feb 2008 15:40:35 -0600 Subject: [SciPy-user] Construct sparse matrix from sparse blocks In-Reply-To: References: Message-ID: On Feb 9, 2008 1:32 AM, Nathan Bell wrote: Also, in light of the strong preference for using namespaces over function prefixes/suffixes I've deprecated speye/spidentity/spkron in favor of sparse.eye/sparse.identity/sparse.kron. 
spdiags() gets a pass since it's the name people expect and there's no dense diags() -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From stefan at sun.ac.za Sat Feb 9 17:32:12 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 10 Feb 2008 00:32:12 +0200 Subject: [SciPy-user] undefined symbol: clapack_sgesv In-Reply-To: <47AD94A3.1040702@ar.media.kyoto-u.ac.jp> References: <20080209012058.GA14648@mentat.za.net> <47AD94A3.1040702@ar.media.kyoto-u.ac.jp> Message-ID: <20080209223212.GH14648@mentat.za.net> Hi David On Sat, Feb 09, 2008 at 08:55:15PM +0900, David Cournapeau wrote: > > $ nm /usr/lib/atlas/sse2/liblapack.a | grep clapack_sgesv > > clapack_sgesv.o: > > 00000000 T clapack_sgesv > Not that it should matter since it looks like you are using atlas > packaged by debian, but what does ldd says for liblapack.so ? $ ldd /usr/lib/liblapack.so linux-gate.so.1 => (0xffffe000) libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb72fb000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb72d4000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7184000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb715f000) /lib/ld-linux.so.2 (0x80000000) That looks about right, doesn't it? Regards St?fan From akumar at iitm.ac.in Sat Feb 9 20:32:31 2008 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sun, 10 Feb 2008 07:02:31 +0530 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <20080209064604.GD4122@debian.akumar.iitm.ac.in> References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> <20080209064604.GD4122@debian.akumar.iitm.ac.in> Message-ID: <20080210013231.GA4049@debian.akumar.iitm.ac.in> On Sat, Feb 09, 2008 at 12:16:04PM +0530, Kumar Appaiah wrote: > > array([-0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010146, > > -0.16010147, -0.16010148, -0.16010149, -0.1601015 , -0.1601015 , > > -0.16010145, -0.16010096, -0.16009716, -0.16007336, -0.15994973, > > -0.15941238, -0.15743963, -0.15127378, -0.13476733, -0.09676449, > > -0.02138783, 0.10725105, 0.29505955, 0.52638443, 0.7591664 , > > 0.93452305, 1. , 0.93452305, 0.7591664 , 0.52638443, > > 0.29505955, 0.10725105, -0.02138783, -0.09676449, -0.13476733, > > -0.15127378, -0.15743963, -0.15941238, -0.15994973, -0.16007336, > > -0.16009716, -0.16010096, -0.16010145, -0.1601015 , -0.1601015 , > > -0.16010149, -0.16010148, -0.16010147, -0.16010146, -0.16010146, > > -0.16010146, -0.16010146, -0.16010146]) > > > > Clearly, all of those negative values are *not* correct. (And the > > problems are not limited to the numbers above.) Any ideas? > > Let me try to figure it out. Then I'll let you know. I am unable to figure out where the problem could be, though I guess it would have to do with the Chebyshev polynomial evaluation. I could really do with a little help in debugging the chebwin fix. :-) Of course, I shall try it again... let's see. Thanks. Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From robince at gmail.com Mon Feb 11 10:52:26 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 15:52:26 +0000 Subject: [SciPy-user] how to import umfpack? Message-ID: Hi, I just setup a fresh svn install on a new machine, and I'm having trouble getting some code to work (Im sure its something simple!) Previously I was doing import scipy.linsolve.umfpack as um but I was expecting to have to change this soon with the move to splinalg. 
However in the new version, although it umfpack appears to have build successfully and there are files in splinalg/dsolve/umfpack (including __umfpack.so) I can't import it. import scipy.splinalg.dsolve.umfpack as um fails ('module' object has no attribute 'umfpack') How can I get at umfpack now? Thanks Robin From cimrman3 at ntc.zcu.cz Mon Feb 11 11:08:09 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 11 Feb 2008 17:08:09 +0100 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: Message-ID: <47B072E9.6060806@ntc.zcu.cz> Robin wrote: > However in the new version, although it umfpack appears to have build > successfully and there are files in splinalg/dsolve/umfpack (including > __umfpack.so) I can't import it. > import scipy.splinalg.dsolve.umfpack as um > fails ('module' object has no attribute 'umfpack') This should work, I do the same. Just tried with '0.7.0.dev3913'. r. From ivilata at carabos.com Mon Feb 11 11:17:12 2008 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Mon, 11 Feb 2008 17:17:12 +0100 Subject: [SciPy-user] [ANN] Release of the second PyTables video Message-ID: <20080211161712.GC17180@tardis.terramar.selidor.net> ====================================== Release of the second PyTables video ====================================== Carabos [1]_ is happy to announce the second of a series of videos dedicated to introducing the main features of PyTables to the public in a visual and easy to grasp manner: http://www.carabos.com/videos/pytables-2-tables PyTables [2]_ is a Free/Open Source package designed to handle massive amounts of data in a simple, but highly efficient way, using the HDF5 file format and NumPy data containers. .. [1] http://www.carabos.com/ .. [2] http://www.pytables.org/ Our second video explains how to work with tables, PyTables' main data container. It shows how to: * describe the structure of a table * create a table * iterate over a table * access tables by blocks * handle big tables * query a table The video is only 15 minutes long, so you can watch it while you enjoy a nice cup of coffee. If you are used to SQL databases, you may also be interested in the introduction to tables at http://www.pytables.org/moin/HintsForSQLUsers You can also see more on table queries in the latest video about ViTables (our PyTables GUI) at http://www.carabos.com/videos/vitables-2-queries More videos about PyTables will be published in the near future, so stay tuned on www.pytables.org for further announcements. We would like to hear your opinion on the video so we can do it better the next time. We are also open to suggestions for the topics of future videos. You can contact us at pytables at carabos.com. Best regards, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ C?rabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: Digital signature URL: From robince at gmail.com Mon Feb 11 11:22:35 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 16:22:35 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: <47B072E9.6060806@ntc.zcu.cz> References: <47B072E9.6060806@ntc.zcu.cz> Message-ID: On Feb 11, 2008 4:08 PM, Robert Cimrman wrote: > This should work, I do the same. Just tried with '0.7.0.dev3913'. Actually I've upgraded my laptop and it works there with '0.7.0.dev3913' but on this new install (64 bit Linux) I still have this problem. 
As far as I can see umfpack has built correctly: robince at bob64:/usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/umfpack$ ls info.py __init__.py setup.py tests umfpack.py umfpack.pyc info.pyc __init__.pyc setup.pyc _umfpack.py _umfpack.pyc __umfpack.so and a simple grep for umfpack in dsolve doesn't show any difference between the working and non-working one. However: In [2]: scipy.__version__ Out[2]: '0.7.0.dev3912' In [3]: import scipy.splinalg.dsolve.umfpack as um --------------------------------------------------------------------------- Traceback (most recent call last) /home/robince/ in () : 'module' object has no attribute 'umfpack' In [4]: import scipy.splinalg.dsolve as dsolve In [5]: dir(dsolve) Out[5]: ['SparseEfficiencyWarning', 'Tester', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__path__', '_csuperlu', '_dsuperlu', '_ssuperlu', '_superlu', '_zsuperlu', 'asarray', 'csc_matrix', 'factorized', 'isUmfpack', 'isspmatrix', 'isspmatrix_csc', 'isspmatrix_csr', 'linsolve', 'splu', 'spsolve', 'superLU_transtabl', 'test', 'useUmfpack', 'use_solver', 'warn'] I'd appreciate any help/suggestions to get this working - the problematic new install is on a powerful machine I only have access too for a limited time so I'd like to get it running as soon as possible. Thanks, Robin From cimrman3 at ntc.zcu.cz Mon Feb 11 11:28:37 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 11 Feb 2008 17:28:37 +0100 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <47B072E9.6060806@ntc.zcu.cz> Message-ID: <47B077B5.9030803@ntc.zcu.cz> Robin wrote: > On Feb 11, 2008 4:08 PM, Robert Cimrman wrote: >> This should work, I do the same. Just tried with '0.7.0.dev3913'. > > Actually I've upgraded my laptop and it works there with > '0.7.0.dev3913' but on this new install (64 bit Linux) I still have > this problem. > > As far as I can see umfpack has built correctly: > robince at bob64:/usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/umfpack$ ls > info.py __init__.py setup.py tests umfpack.py umfpack.pyc > info.pyc __init__.pyc setup.pyc _umfpack.py _umfpack.pyc __umfpack.so > > and a simple grep for umfpack in dsolve doesn't show any difference > between the working and non-working one. > However: > > In [2]: scipy.__version__ > Out[2]: '0.7.0.dev3912' > > In [3]: import scipy.splinalg.dsolve.umfpack as um > --------------------------------------------------------------------------- > Traceback (most recent call last) > > /home/robince/ in () > > : 'module' object has no attribute 'umfpack' > ... > I'd appreciate any help/suggestions to get this working - the > problematic new install is on a powerful machine I only have access > too for a limited time so I'd like to get it running as soon as > possible. Did it work on the other computer with some older version of scipy? Is there the umfpack proper installed, actually (libumfpack.a, or .so)? r. From robince at gmail.com Mon Feb 11 11:41:37 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 16:41:37 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: <47B077B5.9030803@ntc.zcu.cz> References: <47B072E9.6060806@ntc.zcu.cz> <47B077B5.9030803@ntc.zcu.cz> Message-ID: On Feb 11, 2008 4:28 PM, Robert Cimrman wrote: > Did it work on the other computer with some older version of scipy? > Is there the umfpack proper installed, actually (libumfpack.a, or .so)? I've had problems on the new install with 0.7.0.dev3912 and 0.7.0.dev3913. 
(on 64 bit linux) It works for me on my latop - mac os x, 0.7.0.dev3913 I build libumfpack.a and scipy finds it and builds without errors. useUmfPack is set to True, it just seems that dsolve doesn't have a umfpack attribute. Here is the relevant output from the distutils config: FOUND: libraries = ['amd'] library_dirs = ['/home/robince/scipy_build/lib'] swig_opts = ['-I/home/robince/scipy_build/lib/include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/home/robince/scipy_build/lib/include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/home/robince/scipy_build/lib'] swig_opts = ['-I/home/robince/scipy_build/lib/include', '-I/home/robince/scipy_build/lib/include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/home/robince/scipy_build/lib/include'] Robin From cimrman3 at ntc.zcu.cz Mon Feb 11 11:49:00 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 11 Feb 2008 17:49:00 +0100 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <47B072E9.6060806@ntc.zcu.cz> <47B077B5.9030803@ntc.zcu.cz> Message-ID: <47B07C7C.40408@ntc.zcu.cz> Robin wrote: > On Feb 11, 2008 4:28 PM, Robert Cimrman wrote: >> Did it work on the other computer with some older version of scipy? >> Is there the umfpack proper installed, actually (libumfpack.a, or .so)? > > I've had problems on the new install with 0.7.0.dev3912 and > 0.7.0.dev3913. (on 64 bit linux) > It works for me on my latop - mac os x, 0.7.0.dev3913 > > I build libumfpack.a and scipy finds it and builds without errors. > useUmfPack is set to True, it just seems that dsolve doesn't have a > umfpack attribute. This is probably caused by the following lines in splinalg/dsolve/umfpack/umpfack.py: try: # Silence import error. import _umfpack as _um except: _um = None Try moving the import line out of the try-except statement to see what actually went wrong when importing the shared library _umfpack. r. From wnbell at gmail.com Mon Feb 11 11:52:23 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 11 Feb 2008 10:52:23 -0600 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <47B072E9.6060806@ntc.zcu.cz> <47B077B5.9030803@ntc.zcu.cz> Message-ID: On Feb 11, 2008 10:41 AM, Robin wrote: > I've had problems on the new install with 0.7.0.dev3912 and > 0.7.0.dev3913. (on 64 bit linux) > It works for me on my latop - mac os x, 0.7.0.dev3913 No problems for me with '0.7.0.dev3912' on 64-bit linux. > I build libumfpack.a and scipy finds it and builds without errors. > useUmfPack is set to True, it just seems that dsolve doesn't have a > umfpack attribute. I don't know what could cause that. By "new install" do you mean that you've removed scipy/build and /usr/lib/python2.5/site-packages/scipy* ? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robince at gmail.com Mon Feb 11 12:39:05 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 17:39:05 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: <47B07C7C.40408@ntc.zcu.cz> References: <47B072E9.6060806@ntc.zcu.cz> <47B077B5.9030803@ntc.zcu.cz> <47B07C7C.40408@ntc.zcu.cz> Message-ID: On Feb 11, 2008 4:49 PM, Robert Cimrman wrote: > Try moving the import line out of the try-except statement to see what > actually went wrong when importing the shared library _umfpack. 
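Concretely, the suggested edit to splinalg/dsolve/umfpack/umfpack.py is a one-line change -- a sketch:

# before: the import failure is silenced
try:  # Silence import error.
    import _umfpack as _um
except:
    _um = None

# after (temporarily): let the real ImportError and its message propagate
import _umfpack as _um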
Thanks, this revealed an error: : /usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/umfpack/__umfpack.so: undefined symbol: _gfortran_st_write_done I linked umf against gfortran as seemed to be required in UFconfig.mk, and also needed to add a -L/usr/lib/gcc/x86_64-linux-gnu/4.2.1 flag to get the umf build to find libgfortran. I guess this is the problem. I tried setting export LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.2.1 hoping this would let the import pick it up at runtime but it didn't seem to work. On Feb 11, 2008 4:52 PM, Nathan Bell wrote: > By "new install" do you mean that you've removed scipy/build and > /usr/lib/python2.5/site-packages/scipy* ? By new install I mean the machine didn't have any numpy/scipy stuff on it before so lapack, atlas, umfpack are all installed fresh (rather than working then broken with an update). Thanks again, Robin
From robince at gmail.com Mon Feb 11 12:46:09 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 17:46:09 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <47B072E9.6060806@ntc.zcu.cz> <47B077B5.9030803@ntc.zcu.cz> <47B07C7C.40408@ntc.zcu.cz> Message-ID: Having googled a bit, I thought I should add that I don't think this is due to mixing compilers: g77 isn't installed, only gfortran, so as far as I know everything (ATLAS, umf, scipy) is built with that. (lapack atlas etc all built from source, gfortran standard ubuntu version) Robin
From nmarais at sun.ac.za Mon Feb 11 13:05:51 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 11 Feb 2008 18:05:51 +0000 (UTC) Subject: [SciPy-user] What happened to ARPACK shift-invert/general eigenproblem routine? Message-ID: Hi, I used to use scipy.sandbox.arpack.speigs.ARPACK_gen_eigs() to use the shift-invert mode of ARPACK to solve my problems. With the move of arpack from sandbox to splinalg.arpack I can't seem to find this function. Any hints? Thanks Neilen
From robince at gmail.com Mon Feb 11 15:02:22 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 20:02:22 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <47B072E9.6060806@ntc.zcu.cz> <47B077B5.9030803@ntc.zcu.cz> <47B07C7C.40408@ntc.zcu.cz> Message-ID: So I've tried to play around with rebuilding UMFPACK and scipy, but haven't had any luck. In UFconfig.mk these are my BLAS and LAPACK settings: BLAS = -L/usr/lib/gcc/x86_64-linux-gnu/4.2.1 -L/home/robince/scipy_build/lib -llapack -lf77blas -lcblas -latlas -lgfortran LAPACK = -L/usr/lib/gcc/x86_64-linux-gnu/4.2.1 -L/home/robince/scipy_build/lib -llapack -lf77blas -lcblas -latlas -lgfortran Also CFLAGS = -O3 -fPIC -m64 -fexceptions This is the only way I have been able to get UMFPACK to build; anything I tried to change (about linking with gfortran) caused build errors. For Lapack I added -fPIC and -m64 to all the FLAGS and otherwise used the make.inc.gfortran values. For ATLAS I used the following configure command: ../configure -b 64 -Fa alg -fPIC --with-netlib-lapack=/home/robince/scipy_build/lapack-3.1.1/lapack_LINUX.a For scipy: python setup.py build and from the config output it is picking up gfortran, ATLAS, UMFPACK, AMD ok. Build completed without errors. Any ideas where I'm going wrong that is causing this problem with gfortran symbols when importing umfpack? The only thing I can think is that it is to do with libgfortran being in a funny place (/usr/lib/gcc/x86_64-linux-gnu/4.2.1), but as I mentioned earlier LD_LIBRARY_PATH doesn't seem to make any difference.
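A runtime workaround that sometimes helps with missing Fortran symbols -- a sketch, untested here, with the libgfortran path and soname as assumptions -- is to pre-load the Fortran runtime with globally visible symbols before the import. LD_LIBRARY_PATH only tells the loader where to search for libraries already recorded as dependencies, and (as the ldd output below shows) __umfpack.so never recorded libgfortran at all:

import ctypes
# point this at whatever libgfortran the UMFPACK build actually linked against
ctypes.CDLL('/usr/lib/gcc/x86_64-linux-gnu/4.2.1/libgfortran.so.1',
            mode=ctypes.RTLD_GLOBAL)
import scipy.splinalg.dsolve.umfpack as um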
Here is ldd -r on __umfpack.so if it can through any light. ldd doesn't report any dependency from __umfpack.so on libgfortran anyway - how should that get there? (at which stage). I have to admit I'm reaching the limits of my knowledge in terms of building/shared libraries (as I always seem to when trying to build numpy/scipy!) robince at bob64:/usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/umfpack$ ldd -r __umfpack.so undefined symbol: PyObject_GenericGetAttr (./__umfpack.so) undefined symbol: PyExc_ImportError (./__umfpack.so) undefined symbol: PyExc_ValueError (./__umfpack.so) undefined symbol: PyInstance_Type (./__umfpack.so) undefined symbol: PyExc_SystemError (./__umfpack.so) undefined symbol: PyList_Type (./__umfpack.so) undefined symbol: PyExc_TypeError (./__umfpack.so) undefined symbol: PyInt_Type (./__umfpack.so) undefined symbol: _PyWeakref_CallableProxyType (./__umfpack.so) undefined symbol: PyExc_SyntaxError (./__umfpack.so) undefined symbol: PyExc_ZeroDivisionError (./__umfpack.so) undefined symbol: PyExc_IndexError (./__umfpack.so) undefined symbol: PyExc_MemoryError (./__umfpack.so) undefined symbol: PyTuple_Type (./__umfpack.so) undefined symbol: PyExc_RuntimeError (./__umfpack.so) undefined symbol: PyType_Type (./__umfpack.so) undefined symbol: PyExc_IOError (./__umfpack.so) undefined symbol: PyLong_Type (./__umfpack.so) undefined symbol: _Py_NoneStruct (./__umfpack.so) undefined symbol: PyExc_OverflowError (./__umfpack.so) undefined symbol: PyExc_AttributeError (./__umfpack.so) undefined symbol: _PyWeakref_ProxyType (./__umfpack.so) undefined symbol: PyCObject_Type (./__umfpack.so) undefined symbol: PyModule_AddObject (./__umfpack.so) undefined symbol: PyDict_SetItemString (./__umfpack.so) undefined symbol: PyString_AsString (./__umfpack.so) undefined symbol: PyArg_UnpackTuple (./__umfpack.so) undefined symbol: sqrt (./__umfpack.so) undefined symbol: _gfortran_st_write_done (./__umfpack.so) undefined symbol: Py_InitModule4_64 (./__umfpack.so) undefined symbol: PyLong_FromVoidPtr (./__umfpack.so) undefined symbol: _gfortran_transfer_integer (./__umfpack.so) undefined symbol: PyCObject_FromVoidPtr (./__umfpack.so) undefined symbol: PyBool_FromLong (./__umfpack.so) undefined symbol: ceil (./__umfpack.so) undefined symbol: _PyObject_GetDictPtr (./__umfpack.so) undefined symbol: PyObject_CallFunctionObjArgs (./__umfpack.so) undefined symbol: PyObject_IsTrue (./__umfpack.so) undefined symbol: PyString_FromStringAndSize (./__umfpack.so) undefined symbol: PyLong_AsLong (./__umfpack.so) undefined symbol: PyCObject_Import (./__umfpack.so) undefined symbol: PyErr_Format (./__umfpack.so) undefined symbol: PyFloat_FromDouble (./__umfpack.so) undefined symbol: PyArg_ParseTuple (./__umfpack.so) undefined symbol: PyObject_GetAttr (./__umfpack.so) undefined symbol: PyErr_Occurred (./__umfpack.so) undefined symbol: _gfortran_stop_numeric (./__umfpack.so) undefined symbol: _PyInstance_Lookup (./__umfpack.so) undefined symbol: _gfortran_st_write (./__umfpack.so) undefined symbol: PySequence_Concat (./__umfpack.so) undefined symbol: PyString_FromString (./__umfpack.so) undefined symbol: PyString_FromFormat (./__umfpack.so) undefined symbol: PyInt_FromLong (./__umfpack.so) undefined symbol: PyModule_GetDict (./__umfpack.so) undefined symbol: PyDict_GetItem (./__umfpack.so) undefined symbol: PyInt_AsLong (./__umfpack.so) undefined symbol: PyCObject_AsVoidPtr (./__umfpack.so) undefined symbol: PyType_IsSubtype (./__umfpack.so) undefined symbol: PyObject_Init (./__umfpack.so) undefined 
symbol: PyObject_Malloc (./__umfpack.so) undefined symbol: PyObject_GetAttrString (./__umfpack.so) undefined symbol: PyList_Append (./__umfpack.so) undefined symbol: PyObject_Call (./__umfpack.so) undefined symbol: PyErr_Print (./__umfpack.so) undefined symbol: PyString_ConcatAndDel (./__umfpack.so) undefined symbol: PyObject_Free (./__umfpack.so) undefined symbol: PyImport_ImportModule (./__umfpack.so) undefined symbol: PyErr_Clear (./__umfpack.so) undefined symbol: PyTuple_New (./__umfpack.so) undefined symbol: PyTuple_SetItem (./__umfpack.so) undefined symbol: PyErr_SetString (./__umfpack.so) undefined symbol: _gfortran_transfer_character (./__umfpack.so) undefined symbol: PyList_SetItem (./__umfpack.so) undefined symbol: PyInstance_NewRaw (./__umfpack.so) undefined symbol: PyList_New (./__umfpack.so) undefined symbol: PyString_Format (./__umfpack.so) undefined symbol: PyDict_SetItem (./__umfpack.so) undefined symbol: PyDict_New (./__umfpack.so) libpthread.so.0 => /lib/libpthread.so.0 (0x00002b23f84ed000) libc.so.6 => /lib/libc.so.6 (0x00002b23f8708000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) Thanks again, Robin From nwagner at iam.uni-stuttgart.de Mon Feb 11 15:50:11 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 11 Feb 2008 21:50:11 +0100 Subject: [SciPy-user] What happened to ARPACK shift-invert/general eigenproblem routine? In-Reply-To: References: Message-ID: On Mon, 11 Feb 2008 18:05:51 +0000 (UTC) Neilen Marais wrote: > Hi, > > I used to use >scipy.sandbox.arpack.speigs.ARPACK_gen_eigs() to use the > shift-invert mode of ARPACK to solve my problems. With >the move of arpack > from sandbox to splinalg.arpack I can't seem to find >this function. Any hints? > > Thanks > Neilen > Did you try from scipy.splinalg.eigen.arpack import speigs >>> dir (speigs) ['ARPACK_eigs', 'ARPACK_gen_eigs', 'ARPACK_iteration', 'ArpackException', '__all___', '__builtins__', '__doc__', '__file__', '__name__', '_arpack', 'check_init', 'init_debug', 'init_postproc_workspace', 'init_workspaces', 'np', 'postproc', 'warnings'] Nils From answer at tnoo.net Mon Feb 11 16:48:54 2008 From: answer at tnoo.net (Martin =?iso-8859-1?Q?L=FCthi?=) Date: Mon, 11 Feb 2008 21:48:54 +0000 Subject: [SciPy-user] how to import umfpack? References: Message-ID: <87ir0vcgft.fsf@tnoo.net> Hi Robin writes: > I just setup a fresh svn install on a new machine, and I'm having Is this on Ubuntu? I had to change the site.cfg file to [umfpack] umfpack_libs = umfpack include_dirs = /usr/include/suitesparse HTH -- Martin L?thi answer at tnoo.net From scott at cse.ucdavis.edu Mon Feb 11 17:01:35 2008 From: scott at cse.ucdavis.edu (Scott Beardsley) Date: Mon, 11 Feb 2008 14:01:35 -0800 Subject: [SciPy-user] scipy.test() from trunk segfaults In-Reply-To: <47AD93C8.2050908@ar.media.kyoto-u.ac.jp> References: <47ACCB10.4000504@cse.ucdavis.edu> <47AD93C8.2050908@ar.media.kyoto-u.ac.jp> Message-ID: <47B0C5BF.3050503@cse.ucdavis.edu> David Cournapeau wrote: > Reconfigure and rebuilt atlas with the option "-C if g77". Make > sure that none of the softwares you are using to build numpy is using > gfortran. Ahha! This fixed it. I do have a couple test failures still but after searching the mailing list it looks like they are known issues[1]. We have a somewhat complicated environment on these systems. I'd really like to get this all working using our Pathscale compiler (for benchmark purposes mainly) but it didn't look to easy so I went with path of least resistance. 
> Also, I noticed that you are using some alpha softwares as libraries > (fftw 3.2). Please use released softwares first to see if you can > reproduce the bug. I have scipy 0.5.2 working with the fftw3.2alpha2 so I'm pretty sure using alpha3 is not a problem. Good idea for a next step though. Thanks a ton for the help! Scott ---------------------- [1] http://www.scipy.org/scipy/scipy/ticket/586 From robince at gmail.com Mon Feb 11 17:10:07 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 22:10:07 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: <87ir0vcgft.fsf@tnoo.net> References: <87ir0vcgft.fsf@tnoo.net> Message-ID: On Feb 11, 2008 9:48 PM, Martin L?thi wrote: > Hi > > Robin writes: > Is this on Ubuntu? I had to change the site.cfg file to > > [umfpack] > umfpack_libs = umfpack > include_dirs = /usr/include/suitesparse > > HTH Thanks, it is on Ubuntu. Here is my site.cfg. I don't have that, but I have the amd and umfpack headers in /home/robince/scipy_build/libs/include which is added to the default include path. I think this is OK - I would have thought I would get errors during the build if it wasn't finding some header files. [DEFAULT] library_dirs = /usr/local/lib:/home/robince/scipy_build/lib include_dirs = /usr/local/include:/home/robince/scipy_build/lib/include [atlas] atlas_libs = lapack, f77blas, cblas, atlas [amd] amd_libs = amd [umfpack] umfpack_libs = umfpack [fftw] libraries = fftw3 From robince at gmail.com Mon Feb 11 17:19:06 2008 From: robince at gmail.com (Robin) Date: Mon, 11 Feb 2008 22:19:06 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> Message-ID: I don't know if it could be related (or whether I should start a seperate thread) but also scipy.test() doesn't pick up any tests with this installation. I have installed the ubuntu python-nose package. Is there anything else that is needed? In [2]: scipy.__version__ Out[2]: '0.7.0.dev3913' In [3]: scipy.test() ---------------------------------------------------------------------- Ran 0 tests in 0.005s OK In [5]: scipy.test(label='full') ---------------------------------------------------------------------- Ran 0 tests in 0.004s OK From answer at tnoo.net Mon Feb 11 17:21:19 2008 From: answer at tnoo.net (Martin =?iso-8859-1?Q?L=FCthi?=) Date: Mon, 11 Feb 2008 22:21:19 +0000 Subject: [SciPy-user] how to import umfpack? References: <87ir0vcgft.fsf@tnoo.net> Message-ID: <87d4r3cexs.fsf@tnoo.net> Robin writes: > On Feb 11, 2008 9:48 PM, Martin L?thi wrote: >> Robin writes: >> Is this on Ubuntu? I had to change the site.cfg file to >> >> [umfpack] >> umfpack_libs = umfpack >> include_dirs = /usr/include/suitesparse > > Thanks, it is on Ubuntu. Here is my site.cfg. I don't have that, but I > have the amd and umfpack headers in Oh, I must have missed that you installed the libraries yourself. The above entry is the only thing I had to add to site.cfg, having installed libsuitesparse[-dev] (which includes UMFpack). As for Atlas I used thought you atlas3-ss2-dev. Best, Martin -- Martin L?thi answer at tnoo.net From matthew.brett at gmail.com Mon Feb 11 18:19:51 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 11 Feb 2008 23:19:51 +0000 Subject: [SciPy-user] how to import umfpack? 
In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> Message-ID: <1e2af89e0802111519t43523f42p18e5a9c1c896c8ad@mail.gmail.com> Hi, On Feb 11, 2008 10:19 PM, Robin wrote: > I don't know if it could be related (or whether I should start a > seperate thread) but also scipy.test() doesn't pick up any tests with > this installation. I have installed the ubuntu python-nose package. Is > there anything else that is needed? Thanks for the report - I didn't know until Stefan pointed it out to me recently that the older versions of nose run no tests - I've just put a version check into SVN. You seem to need a version 0.10 at least - I've got 0.10.1. easy_install might or might not get that version for you, otherwise see: http://somethingaboutorange.com/mrl/projects/nose/ Matthew From yosefmel at post.tau.ac.il Tue Feb 12 03:08:58 2008 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Tue, 12 Feb 2008 10:08:58 +0200 Subject: [SciPy-user] exp2 or 2** In-Reply-To: References: Message-ID: <200802121008.58745.yosefmel@post.tau.ac.il> On Thursday 31 January 2008 05:08:02 Tom Johnson wrote: > Are there reasons to use scipy.special.exp2 over 2** when operating on > arrays? If so, what are they? Using 2** seems to be faster... I've run a test with 3 methods of finding exp2: >>> from timeit import Timer >>> Timer(setup="import scipy; a = scipy.arange(1000)", stmt="2**a").timeit(1000) 0.14900803565979004 >>> Timer(setup="import scipy; a = scipy.arange(1000)", stmt="scipy.special.exp2(a)").timeit(1000) 0.10561108589172363 >>> Timer(setup="import scipy; a = scipy.arange(64)", stmt="scipy.special.exp2 (a)").timeit(10000) 0.13341283798217773 >>> Timer(setup="import scipy; a = scipy.arange(64)", stmt="scipy.special.exp2 (a)").timeit(10000) 0.13643097877502441 >>> Timer(setup="import scipy; a = scipy.arange(64)", stmt="2**a").timeit(10000) 0.12771892547607422 >>> Timer(setup="import scipy; a = scipy.arange(64)", stmt="2**a").timeit(10000) 0.13031315803527832 >>> Timer(setup="import scipy; a = scipy.arange(64)", stmt="1 << a").timeit(10000) 0.048883914947509766 >>> Timer(setup="import scipy; a = scipy.arange(64)", stmt="1 << a").timeit(10000) 0.049239873886108398 So using exp2() is faster on my system for longer arrays (or bigger exponents) but slower for short ones. using shift left is always faster, but is only suitable for integers. This is the source of exp2() with documentation on what's going on: http://svn.scipy.org/svn/scipy/trunk/scipy/special/cephes/exp2.c It seems that exp2() uses some polynomial approximation method instead of calculating the actual exponent. From cimrman3 at ntc.zcu.cz Tue Feb 12 05:15:15 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Feb 2008 11:15:15 +0100 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> Message-ID: <47B171B3.5020409@ntc.zcu.cz> Robin wrote: > On Feb 11, 2008 9:48 PM, Martin L?thi wrote: >> Hi >> >> Robin writes: >> Is this on Ubuntu? I had to change the site.cfg file to >> >> [umfpack] >> umfpack_libs = umfpack >> include_dirs = /usr/include/suitesparse >> >> HTH > > Thanks, it is on Ubuntu. Here is my site.cfg. I don't have that, but I > have the amd and umfpack headers in > /home/robince/scipy_build/libs/include which is added to the default > include path. I think this is OK - I would have thought I would get > errors during the build if it wasn't finding some header files. 
> > [DEFAULT] > library_dirs = /usr/local/lib:/home/robince/scipy_build/lib > include_dirs = /usr/local/include:/home/robince/scipy_build/lib/include > > [atlas] > atlas_libs = lapack, f77blas, cblas, atlas > > [amd] > amd_libs = amd > > [umfpack] > umfpack_libs = umfpack > > [fftw] > libraries = fftw3 > _______________________________________________ In addition to what you have, I have this in the site.cfg: [blas_opt] libraries = f77blas, cblas, atlas [lapack_opt] library_dirs = /usr/lib libraries = lapack, f77blas, cblas, atlas ldd -r produces (no _gfortran_* undefined symbols, only the usual Py*...): linux-gate.so.1 => (0xffffe000) libumfpack.so.0 => /usr/lib/libumfpack.so.0 (0xb7df6000) libamd.so.0 => /usr/lib/libamd.so.0 (0xb7de5000) libblas.so.0 => /usr/lib/libblas.so.0 (0xb7dc9000) libgfortran.so.1 => /usr/lib/gcc/i486-pc-linux-gnu/4.1.2/libgfortran.so.1 (0xb7d4e000) libm.so.6 => /lib/libm.so.6 (0xb7d27000) libgcc_s.so.1 => /usr/lib/gcc/i486-pc-linux-gnu/4.1.2/libgcc_s.so.1 (0xb7d1b000) libc.so.6 => /lib/libc.so.6 (0xb7bea000) libatlas.so.0 => /usr/lib/libatlas.so.0 (0xb781a000) /lib/ld-linux.so.2 (0x80000000) Otherwise all you do seems fine, I have no other clue what is the problem. r. From robince at gmail.com Tue Feb 12 10:56:28 2008 From: robince at gmail.com (Robin) Date: Tue, 12 Feb 2008 15:56:28 +0000 Subject: [SciPy-user] Memory error loading .mat file Message-ID: Hello, I am having trouble loading a matlab file with latest SVN's on Windows. The matlab file is about 350MB on disk. The machine has 2gb of ram. The same code loads the file OK on a mac os x system (same scipy version) with 2gb ram and also on a ubuntu 64 bit linux system. Is this a bug, or do I just need more RAM on windows? Is there anything I can do to get the data to load. 
Thanks, Robin ----- In [1]: numpy.__version__ Out[1]: '1.0.5.dev4788' In [2]: scipy.__version__ Out[2]: '0.7.0.dev3920' In [6]: d = ds.LudtkeData() --------------------------------------------------------------------------- Traceback (most recent call last) C:\phd\maxent\python\ in () C:\phd\maxent\python\datasource.py in __init__(self, feature) 34 Load data, with selected output feature""" 35 data = sio.loadmat( ---> 36 '..\ludtke_data\parameters.mat') 37 # skip first 2 blank variables 38 self.input = data['param'][2:,:] C:\Python25\Lib\site-packages\scipy\io\matlab\mio.py in loadmat(file_name, mdict , appendmat, basename, **kwargs) 94 ''' 95 MR = mat_reader_factory(file_name, appendmat, **kwargs) ---> 96 matfile_dict = MR.get_variables() 97 if mdict is not None: 98 mdict.update(matfile_dict) C:\Python25\Lib\site-packages\scipy\io\matlab\miobase.py in get_variables(self, variable_names) 268 mdict['__globals__'] = [] 269 while not self.end_of_stream(): --> 270 getter = self.matrix_getter_factory() 271 name = getter.name 272 if variable_names and name not in variable_names: C:\Python25\Lib\site-packages\scipy\io\matlab\mio5.py in matrix_getter_factory(s elf) 531 532 def matrix_getter_factory(self): --> 533 return self._array_reader.matrix_getter_factory() 534 535 def guess_byte_order(self): C:\Python25\Lib\site-packages\scipy\io\matlab\mio5.py in matrix_getter_factory(s elf) 238 next_pos = self.mat_stream.tell() + byte_count 239 if mdtype == miCOMPRESSED: --> 240 getter = Mat5ZArrayReader(self, byte_count).matrix_getter_fa ctory() 241 elif not mdtype == miMATRIX: 242 raise TypeError, \ C:\Python25\Lib\site-packages\scipy\io\matlab\mio5.py in __init__(self, array_re ader, byte_count) 290 data = array_reader.mat_stream.read(byte_count) 291 super(Mat5ZArrayReader, self).__init__( --> 292 StringIO(zlib.decompress(data)), 293 array_reader.dtypes, 294 array_reader.processor_func, : In [7]: From robince at gmail.com Tue Feb 12 11:30:13 2008 From: robince at gmail.com (Robin) Date: Tue, 12 Feb 2008 16:30:13 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: <47B171B3.5020409@ntc.zcu.cz> References: <87ir0vcgft.fsf@tnoo.net> <47B171B3.5020409@ntc.zcu.cz> Message-ID: On Feb 12, 2008 10:15 AM, Robert Cimrman wrote: > > ldd -r produces (no _gfortran_* undefined symbols, only the usual Py*...): > linux-gate.so.1 => (0xffffe000) > libumfpack.so.0 => /usr/lib/libumfpack.so.0 (0xb7df6000) > libamd.so.0 => /usr/lib/libamd.so.0 (0xb7de5000) > libblas.so.0 => /usr/lib/libblas.so.0 (0xb7dc9000) > libgfortran.so.1 => > /usr/lib/gcc/i486-pc-linux-gnu/4.1.2/libgfortran.so.1 (0xb7d4e000) > libm.so.6 => /lib/libm.so.6 (0xb7d27000) > libgcc_s.so.1 => > /usr/lib/gcc/i486-pc-linux-gnu/4.1.2/libgcc_s.so.1 (0xb7d1b000) > libc.so.6 => /lib/libc.so.6 (0xb7bea000) > libatlas.so.0 => /usr/lib/libatlas.so.0 (0xb781a000) > /lib/ld-linux.so.2 (0x80000000) > > > Otherwise all you do seems fine, I have no other clue what is the problem. So it seems there are a lot of dependencies missing from my __umfpack.so: robince at bob64:/usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/umfpack$ ldd __umfpack.so libpthread.so.0 => /lib/libpthread.so.0 (0x00002b2cc544c000) libc.so.6 => /lib/libc.so.6 (0x00002b2cc5667000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) (no umfpack, amd, atlas or gfortran) And yet compilation and linking seemed to complete without errors. 
Is there any way I can get more detail about the build process of the umfpack module, perhaps try it by hand, or try to debug it further (more verbose output seeing all the commands run to build it etc.)? Thanks, Robin From robince at gmail.com Tue Feb 12 11:45:49 2008 From: robince at gmail.com (Robin) Date: Tue, 12 Feb 2008 16:45:49 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> <47B171B3.5020409@ntc.zcu.cz> Message-ID: I was thinking that despite the lack of a dependency from __umfpack.so to libumfpack, libamd, libatlas etc. there are no underfined references shown by ldd -r (shown in earlier post) other than the Py* and a couple of _gfortran functions. So it doesn't seem that there are any umf related functions linked into this file. Could there be something going wrong with distutils during the build of the umfpack module? I've also tried compiling UMFPACK many times, with every permutation of options in UFconfig.mk that I can get to build successfully, but each time (on rebuilding scipy, including deleting site_packages/scipy and build directory) the result is the same. Cheers, Robin From nwagner at iam.uni-stuttgart.de Tue Feb 12 11:57:41 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 12 Feb 2008 17:57:41 +0100 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> <47B171B3.5020409@ntc.zcu.cz> Message-ID: On Tue, 12 Feb 2008 16:45:49 +0000 Robin wrote: > I was thinking that despite the lack of a dependency >from __umfpack.so > to libumfpack, libamd, libatlas etc. there are no >underfined > references shown by ldd -r (shown in earlier post) other >than the Py* > and a couple of _gfortran functions. So it doesn't seem >that there are > any umf related functions linked into this file. > > Could there be something going wrong with distutils >during the build > of the umfpack module? > > I've also tried compiling UMFPACK many times, with every >permutation > of options in UFconfig.mk that I can get to build >successfully, but > each time (on rebuilding scipy, including deleting >site_packages/scipy > and build directory) the result is the same. > > Cheers, > > Robin > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Hi Robin, I am using umfpackv4.4 without any problems. I have compiled and installed umfpack from scratch. My site.cfg consists of [amd] library_dirs=/home/nwagner/src/UMFPACKv4.4/AMD/Lib include_dirs=/home/nwagner/src/UMFPACKv4.4/AMD/Include [umfpack] library_dirs=/home/nwagner/src/UMFPACKv4.4/UMFPACK/Lib include_dirs=/home/nwagner/src/UMFPACKv4.4/UMFPACK/Include Now umfpack is a scikits package svn co http://svn.scipy.org/svn/scikits/trunk/umfpack umfpack Cheers, Nils From cimrman3 at ntc.zcu.cz Tue Feb 12 12:04:41 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Feb 2008 18:04:41 +0100 Subject: [SciPy-user] how to import umfpack? 
In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> <47B171B3.5020409@ntc.zcu.cz> Message-ID: <47B1D1A9.1090507@ntc.zcu.cz> Robin wrote: > On Feb 12, 2008 10:15 AM, Robert Cimrman wrote: >> ldd -r produces (no _gfortran_* undefined symbols, only the usual Py*...): >> linux-gate.so.1 => (0xffffe000) >> libumfpack.so.0 => /usr/lib/libumfpack.so.0 (0xb7df6000) >> libamd.so.0 => /usr/lib/libamd.so.0 (0xb7de5000) >> libblas.so.0 => /usr/lib/libblas.so.0 (0xb7dc9000) >> libgfortran.so.1 => >> /usr/lib/gcc/i486-pc-linux-gnu/4.1.2/libgfortran.so.1 (0xb7d4e000) >> libm.so.6 => /lib/libm.so.6 (0xb7d27000) >> libgcc_s.so.1 => >> /usr/lib/gcc/i486-pc-linux-gnu/4.1.2/libgcc_s.so.1 (0xb7d1b000) >> libc.so.6 => /lib/libc.so.6 (0xb7bea000) >> libatlas.so.0 => /usr/lib/libatlas.so.0 (0xb781a000) >> /lib/ld-linux.so.2 (0x80000000) >> >> >> Otherwise all you do seems fine, I have no other clue what is the problem. > > So it seems there are a lot of dependencies missing from my __umfpack.so: > robince at bob64:/usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/umfpack$ > ldd __umfpack.so > libpthread.so.0 => /lib/libpthread.so.0 (0x00002b2cc544c000) > libc.so.6 => /lib/libc.so.6 (0x00002b2cc5667000) > /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) > > (no umfpack, amd, atlas or gfortran) > And yet compilation and linking seemed to complete without errors. Is > there any way I can get more detail about the build process of the > umfpack module, perhaps try it by hand, or try to debug it further > (more verbose output seeing all the commands run to build it etc.)? well, with shared libraries you get complains only when a symbol is actually needed (i.e. when you import). Is it possible force linking the libgfortran somehow? I do not know enough about the scipy build system to give advice here. r. From robince at gmail.com Tue Feb 12 12:17:44 2008 From: robince at gmail.com (Robin) Date: Tue, 12 Feb 2008 17:17:44 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> <47B171B3.5020409@ntc.zcu.cz> Message-ID: On Feb 12, 2008 4:57 PM, Nils Wagner wrote: > Now umfpack is a scikits package > > svn co http://svn.scipy.org/svn/scikits/trunk/umfpack > umfpack Hi, I tried building from the scikits package with exactly the same results (no umfpack stuff in __umfpack.so, just Py* stuff and a couple of _gfortran symbols. The umfpack version I am using is the current from the website - I think it is 5. I don't think the site.cfg is the problem, since the headers and libraries are found OK (If I move them then they aren't found and the build fails). Somehow the problem must be with what distutils is doing to build the __umfpack.so object? Is there any way I can get more detail about this? Below is the output of setup.py config for the umfpack scikit... 
umfpack_info: amd_info: FOUND: libraries = ['amd'] library_dirs = ['/home/robince/scipy_build/AMD/Lib'] swig_opts = ['-I/home/robince/scipy_build/AMD/Include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/home/robince/scipy_build/AMD/Include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/home/robince/scipy_build/UMFPACK/Lib', '/home/robince/scipy_build/AMD/Lib'] swig_opts = ['-I/home/robince/scipy_build/UMFPACK/Include', '-I/home/robince/scipy_build/AMD/Include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/home/robince/scipy_build/UMFPACK/Include', '/home/robince/scipy_build/AMD/Include'] blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /home/robince/scipy_build/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries lapack,f77blas,cblas,atlas not found in /usr/local/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/robince/scipy_build/lib'] language = c customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 Could not locate executable pgf77 customize AbsoftFCompiler Could not locate executable f90 customize NAGFCompiler Found executable /usr/bin/f95 customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/home/robince/scipy_build/lib -llapack -lf77blas -lcblas -latlas -o _configtest ATLAS version 3.8.0 built by robince on Mon Feb 11 07:12:11 EST 2008: UNAME : Linux bob64 2.6.22-14-generic #1 SMP Sun Oct 14 21:45:15 GMT 2007 x86_64 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Core2Duo -DATL_CPUMHZ=2333 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 131072 F77 : gfortran, version GNU Fortran (GCC) 4.2.1 (Ubuntu 4.2.1-5ubuntu4) F77FLAGS : -O -fPIC -m64 SMC : gcc, version gcc (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2) SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC -m64 SKC : gcc, version gcc (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2) SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC -m64 success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/robince/scipy_build/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.8.0\\""')] running config From nmarais at sun.ac.za Tue Feb 12 12:42:24 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 12 Feb 2008 17:42:24 +0000 (UTC) Subject: [SciPy-user] What happened to ARPACK shift-invert/general eigenproblem routine? 
References: Message-ID: On Mon, 11 Feb 2008 21:50:11 +0100, Nils Wagner wrote: > Did you try > > from scipy.splinalg.eigen.arpack import speigs > >>>> dir (speigs) > ['ARPACK_eigs', 'ARPACK_gen_eigs', 'ARPACK_iteration', > 'ArpackException', '__all___', '__builtins__', '__doc__', '__file__', > '__name__', '_arpack', 'check_init', 'init_debug', > 'init_postproc_workspace', 'init_workspaces', 'np', 'postproc', > 'warnings'] Ah, thanks. I was confused since In [2]: from scipy.splinalg.eigen import arpack In [3]: dir(arpack) Out[3]: ['__all___', '__builtins__', '__doc__', '__docformat__', '__file__', '__name__', '_arpack', '_ndigits', '_type_conv', 'aslinearoperator', 'eigen', 'eigen_symmetric', 'np', 'warnings'] yielded a blank. Perhaps I should have looked further :) Regards Neilen > > Nils From robince at gmail.com Tue Feb 12 14:15:32 2008 From: robince at gmail.com (Robin) Date: Tue, 12 Feb 2008 19:15:32 +0000 Subject: [SciPy-user] how to import umfpack? In-Reply-To: References: <87ir0vcgft.fsf@tnoo.net> <47B171B3.5020409@ntc.zcu.cz> Message-ID: Hi, I've made some progress... I was able to get umfpack scikit to build properly. I found the -v option to setup.py which gave more information about the command it's running. Then by adding -lgfortran and appropriate -L option and running the command again by hand, the resulting __umfpack.so didn't have the unresolved symbols and scikits.umfpack could be installed successfully. The command distutils runs is: gcc -pthread -shared -Wl,-O1 build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scikits/umfpack/_umfpack_wrap.o -L/home/robince/scipy_build/UMFPACK/Lib -L/home/robince/scipy_build/AMD/Lib -L/home/robince/scipy_build/lib -lumfpack -lamd -llapack -lf77blas -lcblas -latlas -o build/lib.linux-x86_64-2.5/scikits/umfpack/__umfpack.so However to get it work I needed to add -L/usr/lib/gcc/x86_64-linux-gnu/4.2.1 and -lgfortran: gcc -pthread -shared -Wl,-O1 build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scikits/umfpack/_umfpack_wrap.o -L/home/robince/scipy_build/UMFPACK/Lib -L/home/robince/scipy_build/AMD/Lib -L/home/robince/scipy_build/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.2.1 -lumfpack -lamd -llapack -lf77blas -lcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.5/scikits/umfpack/__umfpack.so Is there any way I can get distutils to add these commands so I don't have to run the command by hand? Thanks Robin From robince at gmail.com Wed Feb 13 10:19:14 2008 From: robince at gmail.com (Robin) Date: Wed, 13 Feb 2008 15:19:14 +0000 Subject: [SciPy-user] splinalg import error Message-ID: Hi, Having finally completed a successful build on 64 bit linux I followed exactly the same procedure (which I added to the wiki) on a fresh machine. Unfortunately I now seem to have some fresh issues (which must be due to a recent change in SVN). 
First splinalg wont import: In [2]: scipy.__version__ Out[2]: '0.7.0.dev3934' In [3]: import scipy.splinalg --------------------------------------------------------------------------- Traceback (most recent call last) /usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/ in () /usr/lib/python2.5/site-packages/scipy/splinalg/__init__.py in () 4 5 from isolve import * ----> 6 from dsolve import * 7 from interface import * 8 from eigen import * /usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/__init__.py in () 7 del umfpack 8 ----> 9 from linsolve import * 10 11 __all__ = filter(lambda s:not s.startswith('_'),dir()) /usr/lib/python2.5/site-packages/scipy/splinalg/dsolve/linsolve.py in () 16 isUmfpack = hasattr( umfpack, 'UMFPACK_OK' ) 17 ---> 18 if isUmfpack and noScikit: 19 warn( 'scipy.splinalg.dsolve.umfpack will be removed,' 20 ' install scikits.umfpack instead', DeprecationWarning ) : name 'isUmfpack' is not defined think line 16 above needs to be taken out of the else clause. However, even with this done, although scipy.splinalg.dsolve.umfpack imports, it seems to be missing a lot of attributes, and in fact I can't find __umfpack.so anywhere. Is it now necessary to use the scikit, or is there something else wrong with my installation? Robin From robince at gmail.com Wed Feb 13 10:26:57 2008 From: robince at gmail.com (Robin) Date: Wed, 13 Feb 2008 15:26:57 +0000 Subject: [SciPy-user] splinalg import error In-Reply-To: References: Message-ID: On Feb 13, 2008 3:19 PM, Robin wrote: > However, even with this done, although scipy.splinalg.dsolve.umfpack > imports, it seems to be missing a lot of attributes, and in fact I > can't find __umfpack.so anywhere. Is it now necessary to use the > scikit, or is there something else wrong with my installation? Please ignore this, setup.py wasn't finding the umfpack library, but it was easy to fix. However I still think the change to linsolve.py is required. Thanks, Robin From wnbell at gmail.com Wed Feb 13 12:31:24 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 13 Feb 2008 11:31:24 -0600 Subject: [SciPy-user] splinalg import error In-Reply-To: References: Message-ID: On Feb 13, 2008 9:26 AM, Robin wrote: > On Feb 13, 2008 3:19 PM, Robin wrote: > > However, even with this done, although scipy.splinalg.dsolve.umfpack > > imports, it seems to be missing a lot of attributes, and in fact I > > can't find __umfpack.so anywhere. Is it now necessary to use the > > scikit, or is there something else wrong with my installation? > > Please ignore this, setup.py wasn't finding the umfpack library, but > it was easy to fix. However I still think the change to linsolve.py is > required. I've fixed linsolve.py per your suggestion. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From jeffrey.fogel at gmail.com Wed Feb 13 14:33:33 2008 From: jeffrey.fogel at gmail.com (Jeffrey Fogel) Date: Wed, 13 Feb 2008 14:33:33 -0500 Subject: [SciPy-user] Installing Scipy on OS X 10.4 - wrong architecture files Message-ID: <2cbd91330802131133r7d2982ebt4973ef0356c79521@mail.gmail.com> I have been trying to install scipy 0.6.0 from the source code onto my Mac (10.4.11, Intel chip) and I'm having some trouble that I hope someone here can help me with. I try installing using: python setup.py build sudo python setup.py install Which seems to compile and install scipy (with a number of warnings, but no errors). 
However, when I run scipy.test(1,10) it crashes with the following error: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture Looking in the directory mentioned above, a large number of the .so files seem to be compiled for ppc instead of i386. I have been unable to find a switch for setup.py that sets the architecture. Is there a way to force the proper architecture so that I can get it to install properly? I am running python 2.5, also installed from the source from python.org. The compilers I'm using are gcc 4.0.1 and gfortran 4.1.0. Thanks for your help. -Jeffrey From robert.kern at gmail.com Wed Feb 13 14:41:32 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Feb 2008 13:41:32 -0600 Subject: [SciPy-user] Installing Scipy on OS X 10.4 - wrong architecture files In-Reply-To: <2cbd91330802131133r7d2982ebt4973ef0356c79521@mail.gmail.com> References: <2cbd91330802131133r7d2982ebt4973ef0356c79521@mail.gmail.com> Message-ID: <47B347EC.7030503@gmail.com> Jeffrey Fogel wrote: > I have been trying to install scipy 0.6.0 from the source code onto my > Mac (10.4.11, Intel chip) and I'm having some trouble that I hope > someone here can help me with. I try installing using: > > python setup.py build > sudo python setup.py install > > Which seems to compile and install scipy (with a number of warnings, > but no errors). However, when I run scipy.test(1,10) it crashes with > the following error: > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > 2): no suitable image found. Did find: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so: > mach-o, but wrong architecture > > Looking in the directory mentioned above, a large number of the .so > files seem to be compiled for ppc instead of i386. I have been unable > to find a switch for setup.py that sets the architecture. Is there a > way to force the proper architecture so that I can get it to install > properly? > > I am running python 2.5, also installed from the source from > python.org. Are you sure this was built correctly? The binary on python.org should be configured to make Universal extension modules correctly. > The compilers I'm using are gcc 4.0.1 and gfortran 4.1.0. Where did gfortran come from? Is it configured correctly to make Universal binaries? If necessary, do this: $ python setup.py config_fc --arch="-arch i386 -arch ppc" --fcompiler=gnu95 build -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From R.Springuel at umit.maine.edu Wed Feb 13 15:54:46 2008 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 13 Feb 2008 15:54:46 -0500 Subject: [SciPy-user] New cluster package Message-ID: <47B35916.3040709@umit.maine.edu> I've written a new package of cluster analysis algorithms and was wondering how I would go about submitting them to be included in scipy (i.e. to replace or add to the current cluster package). The package can be downloaded here: http://www.umit.maine.edu/~r.springuel/FOV18-000CCFE8/cluster.zip in case anyone is interested in looking it over. 
Also, as I'm thinking about it, I used some of my personal statistics functions in writing the above package. Thus, in order for cluster to be added to scipy, those statistics functions would have to be added too. http://www.umit.maine.edu/~r.springuel/FOV18-000CCFE8/Statistics.py.zip Some of them probably look familiar as similar functions already exist in numpy/scipy, but I've added some unique features to them that my cluster package takes advantage of. Where do I go from here? -- R. Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By appointment only From matthieu.brucher at gmail.com Wed Feb 13 16:02:49 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 13 Feb 2008 22:02:49 +0100 Subject: [SciPy-user] New cluster package In-Reply-To: <47B35916.3040709@umit.maine.edu> References: <47B35916.3040709@umit.maine.edu> Message-ID: Hi, Clustering is now supposed to be part of a scikit package, the learn scikit (IIRC). What has been added/modified compared to the current package ? Matthieu 2008/2/13, R. Padraic Springuel : > > I've written a new package of cluster analysis algorithms and was > wondering how I would go about submitting them to be included in scipy > (i.e. to replace or add to the current cluster package). The package > can be downloaded here: > > http://www.umit.maine.edu/~r.springuel/FOV18-000CCFE8/cluster.zip > > in case anyone is interested in looking it over. > > Also, as I'm thinking about it, I used some of my personal statistics > functions in writing the above package. Thus, in order for cluster to > be added to scipy, those statistics functions would have to be added too. > > http://www.umit.maine.edu/~r.springuel/FOV18-000CCFE8/Statistics.py.zip > > Some of them probably look familiar as similar functions already exist > in numpy/scipy, but I've added some unique features to them that my > cluster package takes advantage of. > > Where do I go from here? > -- > > R. Padraic Springuel > Research Assistant > Department of Physics and Astronomy > University of Maine > Bennett 309 > Office Hours: By appointment only > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Wed Feb 13 16:05:03 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 13 Feb 2008 15:05:03 -0600 Subject: [SciPy-user] New cluster package In-Reply-To: <47B35916.3040709@umit.maine.edu> References: <47B35916.3040709@umit.maine.edu> Message-ID: <47B35B7F.1050809@enthought.com> R. Padraic Springuel wrote: > I've written a new package of cluster analysis algorithms and was > wondering how I would go about submitting them to be included in scipy > (i.e. to replace or add to the current cluster package). The package > can be downloaded here: > > http://www.umit.maine.edu/~r.springuel/FOV18-000CCFE8/cluster.zip > > in case anyone is interested in looking it over. > > Also, as I'm thinking about it, I used some of my personal statistics > functions in writing the above package. Thus, in order for cluster to > be added to scipy, those statistics functions would have to be added too. 
> > http://www.umit.maine.edu/~r.springuel/FOV18-000CCFE8/Statistics.py.zip > > Some of them probably look familiar as similar functions already exist > in numpy/scipy, but I've added some unique features to them that my > cluster package takes advantage of. > > Where do I go from here? > We love contributions, but there is still some logistical work that must be done which we need help with if you want to add your code to SciPy. Basically, you can create a ticket and attach the files (or put links to them) in the ticket (on the Developer Trac pages for SciPy). It is easiest if the files are done in the form of a patch to current SciPy. Changes to SciPy functions requires some discussion. The addition of new functions is easier, but may still require discussion. The other approach is to create a scikit which you distribute and maintain separately from SciPy. BSD-like licenses are prefered in scikits as much code that will go into SciPy is being staged in scikits. Good luck and thanks for your work. Best regards, -Travis O. From brad.malone at gmail.com Wed Feb 13 19:03:46 2008 From: brad.malone at gmail.com (Brad Malone) Date: Wed, 13 Feb 2008 16:03:46 -0800 Subject: [SciPy-user] fftn output format Message-ID: I hope I'm not asking a question that is provided in the documentation, but here I go. For fft the output format is [y(0),y(1),.......y(n/2-1),y(-n/2)......y(-1)], but what about for fftn where the array is 3x3x3? Is it simply the same idea applied to each dimension? Thanks, Brad From barrywark at gmail.com Wed Feb 13 19:32:40 2008 From: barrywark at gmail.com (Barry Wark) Date: Wed, 13 Feb 2008 16:32:40 -0800 Subject: [SciPy-user] disabling fftw3 during scipy build Message-ID: Hi all, I would be happy to start producing OS X eggs of scipy for distribution. At our site, we build scipy with fftw3 from MacPorts. I know the preference is to build eggs for release without the fftw3 dependency. Is there a way to turn off linking with fftw3 during the scipy build process, even if fftw3 is present on the system? I can disable fftw3 via MacPorts, build, then re-enable, but I'd prefer to keep the build process a little more contained. Forgive me if rkern's already told us all how to do this years ago... Barry From vincefn at users.sourceforge.net Thu Feb 14 03:27:42 2008 From: vincefn at users.sourceforge.net (Favre-Nicolin Vincent) Date: Thu, 14 Feb 2008 09:27:42 +0100 Subject: [SciPy-user] fftn output format In-Reply-To: References: Message-ID: <200802140927.43101.vincefn@users.sourceforge.net> On jeudi 14 f?vrier 2008, Brad Malone wrote: > I hope I'm not asking a question that is provided in the > documentation, but here I go. > > For fft the output format is > [y(0),y(1),.......y(n/2-1),y(-n/2)......y(-1)], but what about for > fftn where the array is 3x3x3? Is it simply the same idea applied to > each dimension? The format for fftw-calculated transforms is detailed in their manual: http://fftw.org/fftw3_doc/ See ?4.7 "What FFTW Really Computes" OTOH I'm not 100% sure scipy always returns directly what fftw computes, without any transform. -- Vincent Favre-Nicolin Universit? 
Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From cimrman3 at ntc.zcu.cz Thu Feb 14 03:58:35 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Feb 2008 09:58:35 +0100 Subject: [SciPy-user] splinalg import error In-Reply-To: References: Message-ID: <47B402BB.3030705@ntc.zcu.cz> Nathan Bell wrote: > On Feb 13, 2008 9:26 AM, Robin wrote: >> On Feb 13, 2008 3:19 PM, Robin wrote: >>> However, even with this done, although scipy.splinalg.dsolve.umfpack >>> imports, it seems to be missing a lot of attributes, and in fact I >>> can't find __umfpack.so anywhere. Is it now necessary to use the >>> scikit, or is there something else wrong with my installation? >> Please ignore this, setup.py wasn't finding the umfpack library, but >> it was easy to fix. However I still think the change to linsolve.py is >> required. > > I've fixed linsolve.py per your suggestion. 1. an apology: sorry, that was my fault. 2. a little grumble: is there a reason to require a nose version >= 10? I am not able to run the tests for the moment, as I am reluctant to install nose manually and my distribution uses nose-0.9.3. IMHO all people should be able to run unit tests without requiring a bleeding edge version of some external package. r. From c.j.lee at tnw.utwente.nl Thu Feb 14 04:20:00 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 14 Feb 2008 10:20:00 +0100 Subject: [SciPy-user] strange addition behavior Message-ID: <47B407C0.70306@tnw.utwente.nl> Hi All, I have encountered some very strange addition behavior. A quick web search didn't reveal anything obvious, so I expect that I am misunderstanding some aspect of python here. Code snip: print 'dividing up the X axis' print 'current range is ', xCMin, xCMax xSpace = 0.5*(xCMax - xCMin) print 'the new span is ', xSpace xNew = xSpace + xCMin print 'the boundary value is ', xNew xCMin = -0.03 xCMax = 0.05 xSpace = 0.04 xNew = 0.07 I expect that xNew = 0.01 The obvious answer is the addition is using the absolute value of both parties.... but if this is so, then that is very scary Running on Intel Core 2 Duo, Ubuntu linux, python 2.5.1, numpy 1.0.3 Cheers Chris -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From arnar.flatberg at gmail.com Thu Feb 14 06:49:42 2008 From: arnar.flatberg at gmail.com (Arnar Flatberg) Date: Thu, 14 Feb 2008 12:49:42 +0100 Subject: [SciPy-user] How to call hypergeometric cdf Message-ID: <5d3194020802140349q248fdec3m7ea85c968f067d0b@mail.gmail.com> Hi list I am having some troubles understanding how to call the hypergeometric distribution from the stats module. The probability mass function works great (stats.hypergeom.pmf), however the cdf (and cf, ppf and isf) complains on the way it is called. Am I calling the cdf correct? 
I find the documentation a little bit sparse :-) Setup: Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32) [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 >>> import scipy >>> scipy.__version__ '0.6.0' Example: #---------------- import scipy from scipy import stats # Hypergeometric example paramters m = 5 # number of white balls in urn n = 5 # number of black balls in urn N = 5 # number of balls drawn from urn x = 4 # number of white balls in drawn # x = scipy.arange(1,5) # the pmf works: stats.hypergeom.pmf(x, m+n, m, N) # the cdf does not stats.hypergeom.cdf(x, m+n, m, N) #------------------ /usr/lib/python2.5/site-packages/scipy/stats/distributions.py in cdf(self, k, *args, **kwds) 3546 place(output,cond2*(cond0==cond0), 1.0) 3547 goodargs = argsreduce(cond, *((k,)+args)) -> 3548 place(output,cond,self._cdf(*goodargs)) 3549 return output 3550 /usr/lib/python2.5/site-packages/scipy/stats/distributions.py in _cdf(self, x, *args) 3455 def _cdf(self, x, *args): 3456 k = floor(x) -> 3457 return self._cdfvec(k,*args) 3458 3459 def _sf(self, x, *args): /usr/lib/python2.5/site-packages/numpy/lib/function_base.py in __call__(self, *args) 949 if self.nin: 950 if (nargs > self.nin) or (nargs < self.nin_wo_defaults): --> 951 raise ValueError, "mismatch between python function inputs"\ 952 " and received arguments" 953 : mismatch between python function inputs and received arguments Thanks, Arnar From david at ar.media.kyoto-u.ac.jp Thu Feb 14 07:12:51 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 14 Feb 2008 21:12:51 +0900 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: Message-ID: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Barry Wark wrote: > Hi all, > > I would be happy to start producing OS X eggs of scipy for > distribution. At our site, we build scipy with fftw3 from MacPorts. I > know the preference is to build eggs for release without the fftw3 > dependency. Is there a way to turn off linking with fftw3 during the > scipy build process, even if fftw3 is present on the system? I can > disable fftw3 via MacPorts, build, then re-enable, but I'd prefer to > keep the build process a little more contained. Forgive me if rkern's > already told us all how to do this years ago... > You can do something like FFTW3=None python setup.py build This works for any library, actually (you can do ATLAS=None, etc...). cheers, David From hasslerjc at comcast.net Thu Feb 14 08:18:21 2008 From: hasslerjc at comcast.net (John Hassler) Date: Thu, 14 Feb 2008 08:18:21 -0500 Subject: [SciPy-user] strange addition behavior In-Reply-To: <47B407C0.70306@tnw.utwente.nl> References: <47B407C0.70306@tnw.utwente.nl> Message-ID: <47B43F9D.5050805@comcast.net> It works correctly for me. Does this snippit (by itself) give the incorrect answer, or does it have to be inside of a larger program? john Chris Lee wrote: > Hi All, > > I have encountered some very strange addition behavior. A quick web > search didn't reveal anything obvious, so I expect that I am > misunderstanding some aspect of python here. > > Code snip: > > print 'dividing up the X axis' > print 'current range is ', xCMin, xCMax > xSpace = 0.5*(xCMax - xCMin) > print 'the new span is ', xSpace > xNew = xSpace + xCMin > print 'the boundary value is ', xNew > > xCMin = -0.03 > xCMax = 0.05 > xSpace = 0.04 > xNew = 0.07 > > I expect that xNew = 0.01 > > The obvious answer is the addition is using the absolute value of both > parties.... 
but if this is so, then that is very scary > > Running on Intel Core 2 Duo, Ubuntu linux, python 2.5.1, numpy 1.0.3 > > Cheers > Chris > > From aisaac at american.edu Thu Feb 14 09:06:54 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 14 Feb 2008 09:06:54 -0500 Subject: [SciPy-user] strange addition behavior In-Reply-To: <47B407C0.70306@tnw.utwente.nl> References: <47B407C0.70306@tnw.utwente.nl> Message-ID: On Thu, 14 Feb 2008, Chris Lee apparently wrote: > print 'dividing up the X axis' > print 'current range is ', xCMin, xCMax > xSpace = 0.5*(xCMax - xCMin) > print 'the new span is ', xSpace > xNew = xSpace + xCMin > print 'the boundary value is ', xNew > xCMin = -0.03 > xCMax = 0.05 > xSpace = 0.04 > xNew = 0.07 No way this can be true. Show me in the interpreter. For example: >>> xCMin = -0.03 >>> xCMax = 0.05 >>> print 'dividing up the X axis' dividing up the X axis >>> print 'current range is ', xCMin, xCMax current range is -0.03 0.05 >>> xSpace = 0.5*(xCMax - xCMin) >>> print 'the new span is ', xSpace the new span is 0.04 >>> xNew = xSpace + xCMin >>> print 'the boundary value is ', xNew the boundary value is 0.01 >>> Looks like you have a sign error in your code. Cheers, Alan Isaac From c.j.lee at tnw.utwente.nl Thu Feb 14 10:24:32 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 14 Feb 2008 16:24:32 +0100 Subject: [SciPy-user] strange addition behavior In-Reply-To: References: <47B407C0.70306@tnw.utwente.nl> Message-ID: <47B45D30.3040401@tnw.utwente.nl> Code was fine. Reading comprehension problem instead Sorry all. Cheers Chris Alan G Isaac wrote: > On Thu, 14 Feb 2008, Chris Lee apparently wrote: > >> print 'dividing up the X axis' >> print 'current range is ', xCMin, xCMax >> xSpace = 0.5*(xCMax - xCMin) >> print 'the new span is ', xSpace >> xNew = xSpace + xCMin >> print 'the boundary value is ', xNew >> > > >> xCMin = -0.03 >> xCMax = 0.05 >> xSpace = 0.04 >> xNew = 0.07 >> > > > No way this can be true. > Show me in the interpreter. > For example: > > >>> xCMin = -0.03 > >>> xCMax = 0.05 > >>> print 'dividing up the X axis' > dividing up the X axis > >>> print 'current range is ', xCMin, xCMax > current range is -0.03 0.05 > >>> xSpace = 0.5*(xCMax - xCMin) > >>> print 'the new span is ', xSpace > the new span is 0.04 > >>> xNew = xSpace + xCMin > >>> print 'the boundary value is ', xNew > the boundary value is 0.01 > >>> > > Looks like you have a sign error in your code. 
> > Cheers, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From jeffrey.fogel at gmail.com Thu Feb 14 11:43:43 2008 From: jeffrey.fogel at gmail.com (Jeffrey Fogel) Date: Thu, 14 Feb 2008 11:43:43 -0500 Subject: [SciPy-user] Installing Scipy on OS X 10.4 - wrong architecture files In-Reply-To: <47B347EC.7030503@gmail.com> References: <2cbd91330802131133r7d2982ebt4973ef0356c79521@mail.gmail.com> <47B347EC.7030503@gmail.com> Message-ID: <2cbd91330802140843n1c2950a2n806e5f951a8be4c@mail.gmail.com> On Wed, Feb 13, 2008 at 2:41 PM, Robert Kern wrote: > > Jeffrey Fogel wrote: > > I have been trying to install scipy 0.6.0 from the source code onto my > > Mac (10.4.11, Intel chip) and I'm having some trouble that I hope > > someone here can help me with. I try installing using: > > > > python setup.py build > > sudo python setup.py install > > > > Which seems to compile and install scipy (with a number of warnings, > > but no errors). However, when I run scipy.test(1,10) it crashes with > > the following error: > > > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > > 2): no suitable image found. Did find: > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so: > > mach-o, but wrong architecture > > > > Looking in the directory mentioned above, a large number of the .so > > files seem to be compiled for ppc instead of i386. I have been unable > > to find a switch for setup.py that sets the architecture. Is there a > > way to force the proper architecture so that I can get it to install > > properly? > > > > I am running python 2.5, also installed from the source from > > python.org. > > Are you sure this was built correctly? The binary on python.org should be > configured to make Universal extension modules correctly. > > > > The compilers I'm using are gcc 4.0.1 and gfortran 4.1.0. > > Where did gfortran come from? Is it configured correctly to make Universal binaries? > > If necessary, do this: > > $ python setup.py config_fc --arch="-arch i386 -arch ppc" --fcompiler=gnu95 build > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Thanks for your help, the problem turned out to be the (very old) gfortran compiler. The version of 4.1 I had was ppc only. I've upgraded to 4.3 and scipy now compiles without any problems. Thanks again. -Jeffrey From stefan at sun.ac.za Thu Feb 14 04:58:40 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 14 Feb 2008 11:58:40 +0200 Subject: [SciPy-user] splinalg import error In-Reply-To: <47B402BB.3030705@ntc.zcu.cz> References: <47B402BB.3030705@ntc.zcu.cz> Message-ID: <20080214095840.GD31612@mentat.za.net> Hi Robert On Thu, Feb 14, 2008 at 09:58:35AM +0100, Robert Cimrman wrote: > 2. 
a little grumble: > is there a reason to require a nose version >= 10? I am not able to run > the tests for the moment, as I am reluctant to install nose manually and > my distribution uses nose-0.9.3. IMHO all people should be able to run > unit tests without requiring a bleeding edge version of some external > package. For some reason, nose v0.9.x is still the default at the Python package index as well -- but 0.10.1 is the latest release. Nose is a python-only package, so it is trivial to install using easy_install --prefix=${HOME} nose==dev If you use --prefix=${HOME} the packages are installed into ~/lib/python2.5/site-package, so you don't mess up the packaging system of your distro. Just remember to add that to your Python path in, say, ~/.bashrc. I agree that this situation isn't ideal for a release, though -- maybe Matthew can provide a more satisfying workaround. Regards St?fan From rowen at cesmail.net Thu Feb 14 15:48:19 2008 From: rowen at cesmail.net (Russell E. Owen) Date: Thu, 14 Feb 2008 12:48:19 -0800 Subject: [SciPy-user] disabling fftw3 during scipy build References: Message-ID: In article , "Barry Wark" wrote: > Hi all, > > I would be happy to start producing OS X eggs of scipy for > distribution. At our site, we build scipy with fftw3 from MacPorts. I > know the preference is to build eggs for release without the fftw3 > dependency. Is there a way to turn off linking with fftw3 during the > scipy build process, even if fftw3 is present on the system? I can > disable fftw3 via MacPorts, build, then re-enable, but I'd prefer to > keep the build process a little more contained. Forgive me if rkern's > already told us all how to do this years ago... Would it make sense to statically link fftw3, instead? To do that I think all you would have to do is delete (or temporarily hide) the shared library, build scipy, then the shared library back. Then all users of your egg would gain the benefits of fftw3. I'm sure there is a more elegant solution involving copying the fftw3 static library in a special directory and using environment variable magic to make it visible and fink NOT visible to the scipy build. But I don't know the magic (if I did I'd use it to build matplotlib). -- Russell From robert.kern at gmail.com Thu Feb 14 16:11:16 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Feb 2008 15:11:16 -0600 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: Message-ID: <3d375d730802141311h529a2c42qb235fd714d5ad2f6@mail.gmail.com> On Thu, Feb 14, 2008 at 2:48 PM, Russell E. Owen wrote: > In article > , > > "Barry Wark" wrote: > > > Hi all, > > > > I would be happy to start producing OS X eggs of scipy for > > distribution. At our site, we build scipy with fftw3 from MacPorts. I > > know the preference is to build eggs for release without the fftw3 > > dependency. Is there a way to turn off linking with fftw3 during the > > scipy build process, even if fftw3 is present on the system? I can > > disable fftw3 via MacPorts, build, then re-enable, but I'd prefer to > > keep the build process a little more contained. Forgive me if rkern's > > already told us all how to do this years ago... > > Would it make sense to statically link fftw3, instead? To do that I > think all you would have to do is delete (or temporarily hide) the > shared library, build scipy, then the shared library back. Then all > users of your egg would gain the benefits of fftw3. 
> > I'm sure there is a more elegant solution involving copying the fftw3 > static library in a special directory and using environment variable > magic to make it visible and fink NOT visible to the scipy build. But I > don't know the magic (if I did I'd use it to build matplotlib). The reason FFTW is discouraged in this case is because it is GPLed. For official/semi-official binaries, we are asking that no GPLed code be included in the binary. We would like to be able to say that the official binaries are under the BSD license. For unofficial binaries, we have no opinion, of course. In order to disable the use of FFTW, use the environment variable "FFTW" like so: $ FFTW=None python setup.py build -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From barrywark at gmail.com Thu Feb 14 19:59:24 2008 From: barrywark at gmail.com (Barry Wark) Date: Thu, 14 Feb 2008 16:59:24 -0800 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: <47B43043.9050309@ar.media.kyoto-u.ac.jp> References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: Thanks for the info. I've built a couple eggs for OS X 10.5 (Universal) from SVN trunk. Neither include fftw3. One is statically linked to libgfortran (using setupegg.py bdist_ext -L ), one dynamic. Both are available at http://rieke-server.physiol.washington.edu/~barry/python/ (you'll have to download them via a browser instead of easy_install -f since both eggs are in the same folder). I would appreciate if some folks could give them a test. If things look kosher, I'll build eggs for the latest stable and for trunk. Barry On Thu, Feb 14, 2008 at 4:12 AM, David Cournapeau wrote: > > Barry Wark wrote: > > Hi all, > > > > I would be happy to start producing OS X eggs of scipy for > > distribution. At our site, we build scipy with fftw3 from MacPorts. I > > know the preference is to build eggs for release without the fftw3 > > dependency. Is there a way to turn off linking with fftw3 during the > > scipy build process, even if fftw3 is present on the system? I can > > disable fftw3 via MacPorts, build, then re-enable, but I'd prefer to > > keep the build process a little more contained. Forgive me if rkern's > > already told us all how to do this years ago... > > > You can do something like > > FFTW3=None python setup.py build > > This works for any library, actually (you can do ATLAS=None, etc...). > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From stephenlists at gmail.com Thu Feb 14 21:23:57 2008 From: stephenlists at gmail.com (Stephen Uhlhorn) Date: Thu, 14 Feb 2008 21:23:57 -0500 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: Barry, Building a scipy that is linked with fftw3 sounds pretty nice. Are there any tricks involved with the macports version off fftw3? Will that build a suitable gfortran also? Sorry if this is a little OT. Thanks- -stephen On Thu, Feb 14, 2008 at 7:59 PM, Barry Wark wrote: > Thanks for the info. I've built a couple eggs for OS X 10.5 > (Universal) from SVN trunk. Neither include fftw3. > > One is statically linked to libgfortran (using setupegg.py bdist_ext > -L ), one dynamic. 
> > Both are available at > http://rieke-server.physiol.washington.edu/~barry/python/ (you'll have > to download them via a browser instead of easy_install -f since both > eggs are in the same folder). I would appreciate if some folks could > give them a test. If things look kosher, I'll build eggs for the > latest stable and for trunk. > > Barry From barrywark at gmail.com Thu Feb 14 21:48:24 2008 From: barrywark at gmail.com (Barry Wark) Date: Thu, 14 Feb 2008 18:48:24 -0800 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: Stephen, There are no issues with using MacPorts' fftw3 (that's what we're using at my site). We're using the gfortran from http://r.research.att.com/tools/. I haven't tried gfortran via MacPorts. Barry On Thu, Feb 14, 2008 at 6:23 PM, Stephen Uhlhorn wrote: > Barry, > > Building a scipy that is linked with fftw3 sounds pretty nice. Are > there any tricks involved with the macports version off fftw3? Will > that build a suitable gfortran also? > > Sorry if this is a little OT. > > Thanks- > -stephen > > > > > > On Thu, Feb 14, 2008 at 7:59 PM, Barry Wark wrote: > > Thanks for the info. I've built a couple eggs for OS X 10.5 > > (Universal) from SVN trunk. Neither include fftw3. > > > > One is statically linked to libgfortran (using setupegg.py bdist_ext > > -L ), one dynamic. > > > > Both are available at > > http://rieke-server.physiol.washington.edu/~barry/python/ (you'll have > > to download them via a browser instead of easy_install -f since both > > eggs are in the same folder). I would appreciate if some folks could > > give them a test. If things look kosher, I'll build eggs for the > > latest stable and for trunk. > > > > Barry > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From c.j.lee at tnw.utwente.nl Fri Feb 15 08:47:19 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Fri, 15 Feb 2008 14:47:19 +0100 Subject: [SciPy-user] histogramdd bug fix Message-ID: <47B597E7.7080407@tnw.utwente.nl> Hi All, There is a bug in histogramdd when more than three dimensions is used. Essentially a permute operation must be performed at the end to align the output matrix with the input dimensions. Up to three dimensions, this can always be completed by a single round of axis swaps. However, in 4+ dimensions more than one round are required. The following has been hacked so that it will not return until the correct permutation has been found. It is not smart and will certainly slow you down if you are using more than say 6 dimensions but otherwise it seems to work. Since I know nothing about using svn or the rules associated with numpy, someone else will have to put the change in the numpy build Cheers Chris def histogramdd(sample, bins=10, range=None, normed=False, weights=None): """histogramdd(sample, bins=10, range=None, normed=False, weights=None) Return the N-dimensional histogram of the sample. Parameters: sample : sequence or array A sequence containing N arrays or an NxM array. Input data. bins : sequence or scalar A sequence of edge arrays, a sequence of bin counts, or a scalar which is the bin count for all dimensions. Default is 10. range : sequence A sequence of lower and upper bin edges. Default is [min, max]. normed : boolean If False, return the number of samples in each bin, if True, returns the density. weights : array Array of weights. 
The weights are normed only if normed is True. Should the sum of the weights not equal N, the total bin count will not be equal to the number of samples. Returns: hist : array Histogram array. edges : list List of arrays defining the lower bin edges. SeeAlso: histogram Example >>> x = random.randn(100,3) >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) """ try: # Sample is an ND-array. N, D = sample.shape except (AttributeError, ValueError): # Sample is a sequence of 1D arrays. sample = atleast_2d(sample).T N, D = sample.shape nbin = empty(D, int) edges = D*[None] dedges = D*[None] if weights is not None: weights = asarray(weights) try: M = len(bins) if M != D: raise AttributeError, 'The dimension of bins must be a equal to the dimension of the sample x.' except TypeError: bins = D*[bins] # Select range for each dimension # Used only if number of bins is given. if range is None: smin = atleast_1d(array(sample.min(0), float)) smax = atleast_1d(array(sample.max(0), float)) else: smin = zeros(D) smax = zeros(D) for i in arange(D): smin[i], smax[i] = range[i] # Make sure the bins have a finite width. for i in arange(len(smin)): if smin[i] == smax[i]: smin[i] = smin[i] - .5 smax[i] = smax[i] + .5 # Create edge arrays for i in arange(D): if isscalar(bins[i]): nbin[i] = bins[i] + 2 # +2 for outlier bins edges[i] = linspace(smin[i], smax[i], nbin[i]-1) else: edges[i] = asarray(bins[i], float) nbin[i] = len(edges[i])+1 # +1 for outlier bins dedges[i] = diff(edges[i]) nbin = asarray(nbin) # Compute the bin number each sample falls into. Ncount = {} for i in arange(D): Ncount[i] = digitize(sample[:,i], edges[i]) # Using digitize, values that fall on an edge are put in the right bin. # For the rightmost bin, we want values equal to the right # edge to be counted in the last bin, and not as an outlier. outliers = zeros(N, int) for i in arange(D): # Rounding precision decimal = int(-log10(dedges[i].min())) +6 # Find which points are on the rightmost edge. on_edge = where(around(sample[:,i], decimal) == around(edges[i][-1], decimal))[0] # Shift these points one bin to the left. Ncount[i][on_edge] -= 1 # Flattened histogram matrix (1D) hist = zeros(nbin.prod(), int) # Compute the sample indices in the flattened histogram matrix. ni = nbin.argsort() shape = [] xy = zeros(N, int) for i in arange(0, D-1): xy += Ncount[ni[i]] * nbin[ni[i+1:]].prod() xy += Ncount[ni[-1]] # Compute the number of repetitions in xy and assign it to the flattened histmat. if len(xy) == 0: return zeros(nbin-2, int), edges flatcount = bincount(xy, weights) a = arange(len(flatcount)) hist[a] = flatcount # Shape into a proper matrix hist = hist.reshape(sort(nbin)) if printOut: print ni mustPermute = True while mustPermute: nothingChanged = True for i in arange(nbin.size): j = ni[i] if j != i: nothingChanged = False hist = hist.swapaxes(i,j) ni[i],ni[j] = ni[j],ni[i] if nothingChanged: mustPermute = False #print "THis is after swapping the axis ", hist.shape # Remove outliers (indices 0 and -1 for each dimension). 
core = D*[slice(1,-1)] hist = hist[core] # Normalize if normed is True if normed: s = hist.sum() for i in arange(D): shape = ones(D, int) shape[i] = nbin[i]-2 hist = hist / dedges[i].reshape(shape) hist /= s return hist, edges -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From c.j.lee at tnw.utwente.nl Fri Feb 15 08:51:40 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Fri, 15 Feb 2008 14:51:40 +0100 Subject: [SciPy-user] histogramdd once more Message-ID: <47B598EC.9060105@tnw.utwente.nl> Hi All, I didn't strip all of the debug print comments from the earlier snippet. I think this is clean def histogramdd(sample, bins=10, range=None, normed=False, weights=None): """histogramdd(sample, bins=10, range=None, normed=False, weights=None) Return the N-dimensional histogram of the sample. Parameters: sample : sequence or array A sequence containing N arrays or an NxM array. Input data. bins : sequence or scalar A sequence of edge arrays, a sequence of bin counts, or a scalar which is the bin count for all dimensions. Default is 10. range : sequence A sequence of lower and upper bin edges. Default is [min, max]. normed : boolean If False, return the number of samples in each bin, if True, returns the density. weights : array Array of weights. The weights are normed only if normed is True. Should the sum of the weights not equal N, the total bin count will not be equal to the number of samples. Returns: hist : array Histogram array. edges : list List of arrays defining the lower bin edges. SeeAlso: histogram Example >>> x = random.randn(100,3) >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) """ try: # Sample is an ND-array. N, D = sample.shape except (AttributeError, ValueError): # Sample is a sequence of 1D arrays. sample = atleast_2d(sample).T N, D = sample.shape nbin = empty(D, int) edges = D*[None] dedges = D*[None] if weights is not None: weights = asarray(weights) try: M = len(bins) if M != D: raise AttributeError, 'The dimension of bins must be a equal to the dimension of the sample x.' except TypeError: bins = D*[bins] # Select range for each dimension # Used only if number of bins is given. if range is None: smin = atleast_1d(array(sample.min(0), float)) smax = atleast_1d(array(sample.max(0), float)) else: smin = zeros(D) smax = zeros(D) for i in arange(D): smin[i], smax[i] = range[i] # Make sure the bins have a finite width. for i in arange(len(smin)): if smin[i] == smax[i]: smin[i] = smin[i] - .5 smax[i] = smax[i] + .5 # Create edge arrays for i in arange(D): if isscalar(bins[i]): nbin[i] = bins[i] + 2 # +2 for outlier bins edges[i] = linspace(smin[i], smax[i], nbin[i]-1) else: edges[i] = asarray(bins[i], float) nbin[i] = len(edges[i])+1 # +1 for outlier bins dedges[i] = diff(edges[i]) nbin = asarray(nbin) # Compute the bin number each sample falls into. Ncount = {} for i in arange(D): Ncount[i] = digitize(sample[:,i], edges[i]) # Using digitize, values that fall on an edge are put in the right bin. # For the rightmost bin, we want values equal to the right # edge to be counted in the last bin, and not as an outlier. outliers = zeros(N, int) for i in arange(D): # Rounding precision decimal = int(-log10(dedges[i].min())) +6 # Find which points are on the rightmost edge. 
on_edge = where(around(sample[:,i], decimal) == around(edges[i][-1], decimal))[0] # Shift these points one bin to the left. Ncount[i][on_edge] -= 1 # Flattened histogram matrix (1D) hist = zeros(nbin.prod(), int) # Compute the sample indices in the flattened histogram matrix. ni = nbin.argsort() shape = [] xy = zeros(N, int) for i in arange(0, D-1): xy += Ncount[ni[i]] * nbin[ni[i+1:]].prod() xy += Ncount[ni[-1]] # Compute the number of repetitions in xy and assign it to the flattened histmat. if len(xy) == 0: return zeros(nbin-2, int), edges flatcount = bincount(xy, weights) a = arange(len(flatcount)) hist[a] = flatcount # Shape into a proper matrix hist = hist.reshape(sort(nbin)) mustPermute = True while mustPermute: nothingChanged = True for i in arange(nbin.size): j = ni[i] if j != i: nothingChanged = False hist = hist.swapaxes(i,j) ni[i],ni[j] = ni[j],ni[i] if nothingChanged: mustPermute = False #print "THis is after swapping the axis ", hist.shape # Remove outliers (indices 0 and -1 for each dimension). core = D*[slice(1,-1)] hist = hist[core] # Normalize if normed is True if normed: s = hist.sum() for i in arange(D): shape = ones(D, int) shape[i] = nbin[i]-2 hist = hist / dedges[i].reshape(shape) hist /= s return hist, edges -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From cimrman3 at ntc.zcu.cz Fri Feb 15 09:44:23 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 15 Feb 2008 15:44:23 +0100 Subject: [SciPy-user] problem accessing arpack Message-ID: <47B5A547.5080401@ntc.zcu.cz> I have problems with SVN version of scipy (0.7.0.dev3942) to access the arpack module. It seems to me caused by the fact that the names of the function 'eigen' and the module 'eigen' clash. How to access /scipy/splinalg/eigen/arpack/speigs.py? thanks, r. From matthew.brett at gmail.com Fri Feb 15 10:00:17 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 15 Feb 2008 15:00:17 +0000 Subject: [SciPy-user] splinalg import error In-Reply-To: <20080214095840.GD31612@mentat.za.net> References: <47B402BB.3030705@ntc.zcu.cz> <20080214095840.GD31612@mentat.za.net> Message-ID: <1e2af89e0802150700v45da49dan88ae8c89b0705e3a@mail.gmail.com> Hi Stefan, and all, > I agree that this situation isn't ideal for a release, though -- maybe > Matthew can provide a more satisfying workaround. I'd like to be satisfying! But, you mean, try and make the earlier version of nose find the tests? I will have a look at the nose command line to see if there's an easy fix. I'm sorry for the test pain, but I am sure it will be major test gain quite soon - nose tests are very easy to write and maintain. Matthew From ndbecker2 at gmail.com Fri Feb 15 10:59:49 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 10:59:49 -0500 Subject: [SciPy-user] scipy.io.savemat works for multi-dim array? 
Message-ID: I have a 3-d complex array: In [63]: data Out[63]: array([[[ 1.13042464+0.28343508j, 1.13035927+0.28343549j, 1.13029389+0.2834359j , ..., 0.76646124-0.30087241j, 0.76654565-0.30080131j, 0.76663007-0.3007302j ]]]) I save it like this: fades = {'data' : data} from scipy.io import savemat savemat ('fades.mat', fades) It seems to have been saved as 1-d: l = loadmat ('fades.mat') In [66]: l Out[66]: {'__globals__': [], 'data': array([ 1.13042464+0.28343508j, 1.13035927+0.28343549j, 1.13029389+0.2834359j , ..., 0.76646124-0.30087241j, 0.76654565-0.30080131j, 0.76663007-0.3007302j ])} From ndbecker2 at gmail.com Fri Feb 15 11:15:38 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 11:15:38 -0500 Subject: [SciPy-user] mio5 works? Message-ID: I noticed this message: http://permalink.gmane.org/gmane.comp.python.scientific.devel/6850 A = ones((10,20,30)) f = open('test.mat','wb') MW = scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) f.close() When I try this, I get this error: /usr/tmp/python-AjDvrb.py in () 5 f = open('test.mat','wb') 6 MW = scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) ----> 7 MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) 8 f.close() 9 ## data = cPickle.load (open ('fade_plots', 'r')) /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in put_variables(self, mdict) 735 for name, var in mdict.items(): 736 is_global = name in self.global_vars --> 737 self.writer_getter.rewind() 738 self.writer_getter.matrix_writer_factory( 739 var, /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in rewind(self) 638 639 def rewind(self): --> 640 self.stream.seek(0) 641 642 def matrix_writer_factory(self, arr, name, is_global=False): AttributeError: 'builtin_function_or_method' object has no attribute 'seek' From ndbecker2 at gmail.com Fri Feb 15 11:21:26 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 11:21:26 -0500 Subject: [SciPy-user] mio5 works? 
References: Message-ID: Neal Becker wrote: > I noticed this message: > http://permalink.gmane.org/gmane.comp.python.scientific.devel/6850 > > > A = ones((10,20,30)) > f = open('test.mat','wb') > MW = > scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) > MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) > f.close() > > When I try this, I get this error: > /usr/tmp/python-AjDvrb.py in () > 5 f = open('test.mat','wb') > 6 MW = > scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) > ----> 7 > MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) > 8 f.close() > 9 ## data = cPickle.load (open ('fade_plots', 'r')) > > /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in put_variables(self, > mdict) > 735 for name, var in mdict.items(): > 736 is_global = name in self.global_vars > --> 737 self.writer_getter.rewind() > 738 self.writer_getter.matrix_writer_factory( > 739 var, > > /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in rewind(self) > 638 > 639 def rewind(self): > --> 640 self.stream.seek(0) > 641 > 642 def matrix_writer_factory(self, arr, name, is_global=False): > > AttributeError: 'builtin_function_or_method' object has no attribute > 'seek' A bit more info: ipdb> self.stream ipdb> self.stream.seek *** AttributeError: 'builtin_function_or_method' object has no attribute 'seek' From david.huard at gmail.com Fri Feb 15 11:25:22 2008 From: david.huard at gmail.com (David Huard) Date: Fri, 15 Feb 2008 11:25:22 -0500 Subject: [SciPy-user] histogramdd once more In-Reply-To: <47B598EC.9060105@tnw.utwente.nl> References: <47B598EC.9060105@tnw.utwente.nl> Message-ID: <91cf711d0802150825v4373e068r80cd01916d65b2e1@mail.gmail.com> Chris, Thanks again for the bug report. I implemented a solution where you need only one axes-swapping pass, no matter what. I just submitted the fix into SVN and the test you sent me passes. Maybe check the fix does the appropriate thing for you too. Cheers, David 2008/2/15, Chris Lee : > > Hi All, > > I didn't strip all of the debug print comments from the earlier snippet. > I think this is clean > > def histogramdd(sample, bins=10, range=None, normed=False, weights=None): > """histogramdd(sample, bins=10, range=None, normed=False, > weights=None) > > Return the N-dimensional histogram of the sample. > > Parameters: > > sample : sequence or array > A sequence containing N arrays or an NxM array. Input data. > > bins : sequence or scalar > A sequence of edge arrays, a sequence of bin counts, or a > scalar > which is the bin count for all dimensions. Default is 10. > > range : sequence > A sequence of lower and upper bin edges. Default is [min, > max]. > > normed : boolean > If False, return the number of samples in each bin, if True, > returns the density. > > weights : array > Array of weights. The weights are normed only if normed is > True. > Should the sum of the weights not equal N, the total bin > count will > not be equal to the number of samples. > > Returns: > > hist : array > Histogram array. > > edges : list > List of arrays defining the lower bin edges. > > SeeAlso: > > histogram > > Example > > >>> x = random.randn(100,3) > >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) > > """ > > try: > # Sample is an ND-array. > N, D = sample.shape > except (AttributeError, ValueError): > # Sample is a sequence of 1D arrays. 
> sample = atleast_2d(sample).T > N, D = sample.shape > nbin = empty(D, int) > edges = D*[None] > dedges = D*[None] > if weights is not None: > weights = asarray(weights) > > try: > M = len(bins) > if M != D: > raise AttributeError, 'The dimension of bins must be a equal > to the dimension of the sample x.' > except TypeError: > bins = D*[bins] > > # Select range for each dimension > # Used only if number of bins is given. > if range is None: > smin = atleast_1d(array(sample.min(0), float)) > smax = atleast_1d(array(sample.max(0), float)) > else: > smin = zeros(D) > smax = zeros(D) > for i in arange(D): > smin[i], smax[i] = range[i] > > # Make sure the bins have a finite width. > for i in arange(len(smin)): > if smin[i] == smax[i]: > smin[i] = smin[i] - .5 > smax[i] = smax[i] + .5 > > # Create edge arrays > for i in arange(D): > if isscalar(bins[i]): > nbin[i] = bins[i] + 2 # +2 for outlier bins > edges[i] = linspace(smin[i], smax[i], nbin[i]-1) > else: > edges[i] = asarray(bins[i], float) > nbin[i] = len(edges[i])+1 # +1 for outlier bins > dedges[i] = diff(edges[i]) > > nbin = asarray(nbin) > > # Compute the bin number each sample falls into. > Ncount = {} > for i in arange(D): > Ncount[i] = digitize(sample[:,i], edges[i]) > > # Using digitize, values that fall on an edge are put in the right > bin. > # For the rightmost bin, we want values equal to the right > # edge to be counted in the last bin, and not as an outlier. > outliers = zeros(N, int) > for i in arange(D): > # Rounding precision > decimal = int(-log10(dedges[i].min())) +6 > # Find which points are on the rightmost edge. > on_edge = where(around(sample[:,i], decimal) == > around(edges[i][-1], decimal))[0] > # Shift these points one bin to the left. > Ncount[i][on_edge] -= 1 > > # Flattened histogram matrix (1D) > hist = zeros(nbin.prod(), int) > > # Compute the sample indices in the flattened histogram matrix. > ni = nbin.argsort() > shape = [] > xy = zeros(N, int) > for i in arange(0, D-1): > xy += Ncount[ni[i]] * nbin[ni[i+1:]].prod() > xy += Ncount[ni[-1]] > > # Compute the number of repetitions in xy and assign it to the > flattened histmat. > if len(xy) == 0: > return zeros(nbin-2, int), edges > > flatcount = bincount(xy, weights) > a = arange(len(flatcount)) > hist[a] = flatcount > # Shape into a proper matrix > hist = hist.reshape(sort(nbin)) > mustPermute = True > while mustPermute: > nothingChanged = True > for i in arange(nbin.size): > j = ni[i] > if j != i: > nothingChanged = False > hist = hist.swapaxes(i,j) > ni[i],ni[j] = ni[j],ni[i] > if nothingChanged: > mustPermute = False > #print "THis is after swapping the axis ", hist.shape > # Remove outliers (indices 0 and -1 for each dimension). > core = D*[slice(1,-1)] > hist = hist[core] > > # Normalize if normed is True > if normed: > s = hist.sum() > for i in arange(D): > shape = ones(D, int) > shape[i] = nbin[i]-2 > hist = hist / dedges[i].reshape(shape) > hist /= s > > return hist, edges > > -- > ********************************************** > * Chris Lee * > * Laser physics and nonlinear optics group * > * MESA+ Institute * > * University of Twente * > * Phone: ++31 (0)53 489 3968 * > * fax: ++31 (0) 53 489 1102 * > ********************************************** > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
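For reference, the single axes-swapping pass David mentions can be sketched with one transpose: after hist.reshape(sort(nbin)) the axes come out in nbin.argsort() order, so transposing with the inverse permutation restores the original dimension order for any number of dimensions. This is only an illustration with made-up bin counts (nbin and counts are hypothetical stand-ins), not necessarily the change that went into SVN:

import numpy as np

nbin = np.array([4, 7, 3, 6, 5])          # hypothetical per-axis bin counts, D = 5
ni = nbin.argsort()

# stand-in for the flattened histogram reshaped to sort(nbin)
counts = np.zeros(nbin.prod()).reshape(np.sort(nbin))

inv = ni.argsort()                        # inverse permutation of ni
restored = counts.transpose(inv)          # one pass, no repeated swapaxes

print restored.shape                      # (4, 7, 3, 6, 5), i.e. tuple(nbin)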
URL: From berthe.loic at gmail.com Fri Feb 15 11:47:06 2008 From: berthe.loic at gmail.com (BERTHE Loic) Date: Fri, 15 Feb 2008 17:47:06 +0100 Subject: [SciPy-user] Scipy.test() fails on Linux 64 Message-ID: Hi, I'm trying to install scipy from source on a linux 64 box. Here is my configuration : > uname -a Linux 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:03:28 EDT 2005 x86_64 x86_64 x86_64 GNU/Linux > gcc --version gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52) I've installed python 2.5.1, lapack 3.1.1, atlas 3.8, numpy 1.0.4. I've seen no compilation errors, and Numpy's test suite runs OK. Then, I tried to install scipy 0.60 . It compiles fine, but I've got errors when running the test suite : > grep FAIL 11_scipy.bash.log Warning: FAILURE importing tests for Warning: FAILURE importing tests for FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) FAIL: check_simple (scipy.linalg.tests.test_decomp.test_eig) FAIL: check_simple (scipy.linalg.tests.test_decomp.test_eigvals) FAIL: check_simple_tr (scipy.linalg.tests.test_decomp.test_eigvals) FAIL: test_explicit (scipy.tests.test_odr.test_odr) FAIL: test_multi (scipy.tests.test_odr.test_odr) FAILED (failures=7) Could you please have a look at my installation logs (which are attached to this mail), and help me installing Scipy : - I'm not used to compile on that Linux64 box. I've added options like '-fPIC and -m 64' , but i'm not sure this is enough. Have you got any advices ? - scipy.linalg seems to have some pb but I didn't have any pb when running numpy.test(), What are the differences between scipy.linalg and numpy.linalg ? Why do I have pb with scipy and not with numpy ? - Besides, I did'nt have any pb compiling lapack and atlas, but I see pb with lapack. Any Idea on where it comes from ? - I see pb with odr, and I'm not sure to need this module. If I don't succeed in solving this pb, is there a way to desactivate this package, and prevent someone to use a "buggy" or "bad-installed" module ? Thanks -- LB -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 11_scipy.bash.log Type: text/x-log Size: 69401 bytes Desc: not available URL: From stefan at sun.ac.za Fri Feb 15 12:01:36 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 15 Feb 2008 19:01:36 +0200 Subject: [SciPy-user] splinalg import error In-Reply-To: <1e2af89e0802150700v45da49dan88ae8c89b0705e3a@mail.gmail.com> References: <47B402BB.3030705@ntc.zcu.cz> <20080214095840.GD31612@mentat.za.net> <1e2af89e0802150700v45da49dan88ae8c89b0705e3a@mail.gmail.com> Message-ID: <20080215170136.GH7365@mentat.za.net> Hi Matthew On Fri, Feb 15, 2008 at 03:00:17PM +0000, Matthew Brett wrote: > > I agree that this situation isn't ideal for a release, though -- maybe > > Matthew can provide a more satisfying workaround. > > I'd like to be satisfying! But, you mean, try and make the earlier > version of nose find the tests? I will have a look at the nose > command line to see if there's an easy fix. I was curious as to why 0.9.2 didn't pick up the tests -- have they drastically modified the test discovery since? If we can't work around it, I don't think it is the end of the world -- upgrading to 0.10.x is trivial, and will soon be the standard in most distributions anyway. 
> I'm sorry for the test pain, but I am sure it will be major test gain > quite soon - nose tests are very easy to write and maintain. It's absolutely worth it -- the new testing machinery addresses a number of problems we've had in the past, a major one of which was tests not being picked up on all platforms. Looking at the buildbot results, I saw the number of tests varying between 714 and 803. Thanks for all the hard work you put into this! I'm a satisfied customer, for sure. Regards St?fan From cimrman3 at ntc.zcu.cz Fri Feb 15 12:05:41 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 15 Feb 2008 18:05:41 +0100 Subject: [SciPy-user] splinalg import error In-Reply-To: <1e2af89e0802150700v45da49dan88ae8c89b0705e3a@mail.gmail.com> References: <47B402BB.3030705@ntc.zcu.cz> <20080214095840.GD31612@mentat.za.net> <1e2af89e0802150700v45da49dan88ae8c89b0705e3a@mail.gmail.com> Message-ID: <47B5C665.3040005@ntc.zcu.cz> Matthew Brett wrote: > Hi Stefan, and all, > >> I agree that this situation isn't ideal for a release, though -- maybe >> Matthew can provide a more satisfying workaround. > > I'd like to be satisfying! But, you mean, try and make the earlier > version of nose find the tests? I will have a look at the nose > command line to see if there's an easy fix. > > I'm sorry for the test pain, but I am sure it will be major test gain > quite soon - nose tests are very easy to write and maintain. If it is not an easy fix, do not bother please. It is easy to install the new nose version, it just adds up when there are several such things. Thanks for working on the testing framework. cheers, r. From nwagner at iam.uni-stuttgart.de Fri Feb 15 12:12:04 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 15 Feb 2008 18:12:04 +0100 Subject: [SciPy-user] Scipy.test() fails on Linux 64 In-Reply-To: References: Message-ID: On Fri, 15 Feb 2008 17:47:06 +0100 "BERTHE Loic" wrote: > Hi, > > I'm trying to install scipy from source on a linux 64 >box. > Here is my configuration : > > uname -a > Linux 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:03:28 EDT >2005 x86_64 x86_64 > x86_64 GNU/Linux > > gcc --version > gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52) > > I've installed python 2.5.1, lapack 3.1.1, atlas 3.8, >numpy 1.0.4. > I've seen no compilation errors, and Numpy's test suite > runs OK. > > Then, I tried to install scipy 0.60 . > It compiles fine, but I've got errors when running the >test suite : > >> grep FAIL 11_scipy.bash.log > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack' from > '.../linsolve/umfpack/__init__.pyc'> > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' > from '...y/linsolve/umfpack/umfpack.pyc'> >FAIL: check_syevr >(scipy.lib.tests.test_lapack.test_flapack_float) >FAIL: check_syevr_irange >(scipy.lib.tests.test_lapack.test_flapack_float) >FAIL: check_simple >(scipy.linalg.tests.test_decomp.test_eig) >FAIL: check_simple >(scipy.linalg.tests.test_decomp.test_eigvals) >FAIL: check_simple_tr >(scipy.linalg.tests.test_decomp.test_eigvals) >FAIL: test_explicit (scipy.tests.test_odr.test_odr) >FAIL: test_multi (scipy.tests.test_odr.test_odr) >FAILED (failures=7) > > Could you please have a look at my installation logs >(which are attached to > this mail), and help me installing Scipy : > > - I'm not used to compile on that Linux64 box. I've >added options like > '-fPIC and -m 64' , > but i'm not sure this is enough. Have you got any >advices ? 
> > - scipy.linalg seems to have some pb but I didn't >have any pb when > running numpy.test(), > What are the differences between scipy.linalg and >numpy.linalg ? > Why do I have pb with scipy and not with numpy ? > > - Besides, I did'nt have any pb compiling lapack and >atlas, but I see pb > with lapack. > Any Idea on where it comes from ? > > - I see pb with odr, and I'm not sure to need this >module. If I don't > succeed in solving this pb, is there a way to >desactivate this package, and > prevent someone to use a "buggy" or "bad-installed" >module ? > > Thanks > > -- > LB This is a known issue. See http://projects.scipy.org/scipy/scipy/ticket/375 Cheers, Nils From robert.kern at gmail.com Fri Feb 15 12:31:55 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 15 Feb 2008 11:31:55 -0600 Subject: [SciPy-user] mio5 works? In-Reply-To: References: Message-ID: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> On Fri, Feb 15, 2008 at 10:15 AM, Neal Becker wrote: > I noticed this message: > http://permalink.gmane.org/gmane.comp.python.scientific.devel/6850 > > > A = ones((10,20,30)) > f = open('test.mat','wb') > MW = scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) > MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) > f.close() > > When I try this, I get this error: > /usr/tmp/python-AjDvrb.py in () > 5 f = open('test.mat','wb') > 6 MW = scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) > ----> 7 MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) > 8 f.close() > 9 ## data = cPickle.load (open ('fade_plots', 'r')) > > /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in put_variables(self, mdict) > 735 for name, var in mdict.items(): > 736 is_global = name in self.global_vars > --> 737 self.writer_getter.rewind() > 738 self.writer_getter.matrix_writer_factory( > 739 var, > > /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in rewind(self) > 638 > 639 def rewind(self): > --> 640 self.stream.seek(0) > 641 > 642 def matrix_writer_factory(self, arr, name, is_global=False): > > AttributeError: 'builtin_function_or_method' object has no attribute 'seek' This appears to have been fixed in r3435. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Fri Feb 15 13:11:32 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 13:11:32 -0500 Subject: [SciPy-user] mio5 works? 
References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: Robert Kern wrote: > On Fri, Feb 15, 2008 at 10:15 AM, Neal Becker wrote: >> I noticed this message: >> http://permalink.gmane.org/gmane.comp.python.scientific.devel/6850 >> >> >> A = ones((10,20,30)) >> f = open('test.mat','wb') >> MW = >> scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) >> MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) >> f.close() >> >> When I try this, I get this error: >> /usr/tmp/python-AjDvrb.py in () >> 5 f = open('test.mat','wb') >> 6 MW = >> scipy.io.mio5.MatFile5Writer(f,do_compression=True,unicode_strings=True) >> ----> 7 >> MW.put_variables({'A1':A,'A2':A+1j*A,'s1':'string1','s2':u'string2'}) >> 8 f.close() >> 9 ## data = cPickle.load (open ('fade_plots', 'r')) >> >> /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in >> put_variables(self, mdict) >> 735 for name, var in mdict.items(): >> 736 is_global = name in self.global_vars >> --> 737 self.writer_getter.rewind() >> 738 self.writer_getter.matrix_writer_factory( >> 739 var, >> >> /usr/lib64/python2.5/site-packages/scipy/io/mio5.py in rewind(self) >> 638 >> 639 def rewind(self): >> --> 640 self.stream.seek(0) >> 641 >> 642 def matrix_writer_factory(self, arr, name, is_global=False): >> >> AttributeError: 'builtin_function_or_method' object has no attribute >> 'seek' > > This appears to have been fixed in r3435. > I'm using scipy-0.6.0. Should I try the current svn? Does it appear to be relatively stable? (usable?) From wnbell at gmail.com Fri Feb 15 13:18:37 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 15 Feb 2008 12:18:37 -0600 Subject: [SciPy-user] mio5 works? In-Reply-To: References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: On Fri, Feb 15, 2008 at 12:11 PM, Neal Becker wrote: > > I'm using scipy-0.6.0. > > Should I try the current svn? Does it appear to be relatively stable? (usable?) IMO the io code in SVN is more reliable than that in scipy-0.6.0 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ziegen at rhrk.uni-kl.de Fri Feb 15 14:42:31 2008 From: ziegen at rhrk.uni-kl.de (Gerolf Ziegenhain) Date: Fri, 15 Feb 2008 20:42:31 +0100 Subject: [SciPy-user] ImportError: No module named multiarray Message-ID: Hi scipy-group, Today I tried to compile scipy/numpy on my own, because the Debian/Etch versions don't contain the delaunay and fitting stuff. Following some hints in the web I installed umfpack and adjusted the site.cfg for numpy. Then I removed the original Debian packages in order to have a clean setup. Then I compiled and installed numpy. This was working without any error message. When I tried to compile scipy afterwards I realized the failure: somehow multiarray is not accessible. What can I do now? I didn't find any useful hint in the web so far... setup.py (scipy) says: Writing /usr/lib/python2.4/site-packages/numpy-1.0.4.egg-info Traceback (most recent call last): File "./setup.py", line 55, in ? setup_package() File "./setup.py", line 28, in setup_package from numpy.distutils.core import setup File "/usr/lib/python2.4/site-packages/PIL/__init__.py", line 39, in ? File "/usr/lib/python2.4/site-packages/PIL/__init__.py", line 5, in ? 
# package placeholder ImportError: No module named multiarray ipython replies to import numpy: ImportError Traceback (most recent call last) /home/gerolf/src/python/ /usr/lib/python2.4/site-packages/numpy/__init__.py /usr/lib/python2.4/site-packages/numpy/core/__init__.py ImportError: No module named multiarray Best regards: Gerolf -- Dipl. Phys. Gerolf Ziegenhain Office: Room 46-332 - Erwin-Schr?dinger-Str.46 - TU Kaiserslautern - Germany Web: gerolf.ziegenhain.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Fri Feb 15 15:02:34 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 15:02:34 -0500 Subject: [SciPy-user] mio5 works? References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: Nathan Bell wrote: > On Fri, Feb 15, 2008 at 12:11 PM, Neal Becker wrote: >> >> I'm using scipy-0.6.0. >> >> Should I try the current svn? Does it appear to be relatively stable? >> (usable?) > > IMO the io code in SVN is more reliable than that in scipy-0.6.0 > OK, but are you saying I should replace all of scipy with svn trunk version, or just replace the io code? From wnbell at gmail.com Fri Feb 15 15:12:38 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 15 Feb 2008 14:12:38 -0600 Subject: [SciPy-user] mio5 works? In-Reply-To: References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: On Fri, Feb 15, 2008 at 2:02 PM, Neal Becker wrote: > > OK, but are you saying I should replace all of scipy with svn trunk version, > or just replace the io code? It's probably an all-or-nothing thing. I would try the current SVN. FWIW I run the current SVN version and update almost daily. Some things have changed since 0.6.0, but you'll need to be aware of those anyway for 0.7.0 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From kc106_2005-scipy at yahoo.com Fri Feb 15 15:21:16 2008 From: kc106_2005-scipy at yahoo.com (kc106_2005-scipy at yahoo.com) Date: Fri, 15 Feb 2008 12:21:16 -0800 (PST) Subject: [SciPy-user] Complex sparse support Message-ID: <602036.54150.qm@web51408.mail.re2.yahoo.com> Hi all, I am evaluating scipy to see if it can help me. I saw this message: http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/3365806 and read the subsequent responses. Not knowing anything about scipy, does that means the complex sparse matrix part is buggy? Does anybody has a working sample to show that how to make it works? Regards, -- John Henry From rmay at ou.edu Fri Feb 15 15:33:05 2008 From: rmay at ou.edu (Ryan May) Date: Fri, 15 Feb 2008 14:33:05 -0600 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <20080210013231.GA4049@debian.akumar.iitm.ac.in> References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> <20080209064604.GD4122@debian.akumar.iitm.ac.in> <20080210013231.GA4049@debian.akumar.iitm.ac.in> Message-ID: <47B5F701.6060906@ou.edu> Kumar Appaiah wrote: > On Sat, Feb 09, 2008 at 12:16:04PM +0530, Kumar Appaiah wrote: >>> array([-0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010146, >>> -0.16010147, -0.16010148, -0.16010149, -0.1601015 , -0.1601015 , >>> -0.16010145, -0.16010096, -0.16009716, -0.16007336, -0.15994973, >>> -0.15941238, -0.15743963, -0.15127378, -0.13476733, -0.09676449, >>> -0.02138783, 0.10725105, 0.29505955, 0.52638443, 0.7591664 , >>> 0.93452305, 1. 
, 0.93452305, 0.7591664 , 0.52638443, >>> 0.29505955, 0.10725105, -0.02138783, -0.09676449, -0.13476733, >>> -0.15127378, -0.15743963, -0.15941238, -0.15994973, -0.16007336, >>> -0.16009716, -0.16010096, -0.16010145, -0.1601015 , -0.1601015 , >>> -0.16010149, -0.16010148, -0.16010147, -0.16010146, -0.16010146, >>> -0.16010146, -0.16010146, -0.16010146]) >>> >>> Clearly, all of those negative values are *not* correct. (And the >>> problems are not limited to the numbers above.) Any ideas? >> Let me try to figure it out. Then I'll let you know. > > I am unable to figure out where the problem could be, though I guess > it would have to do with the Chebyshev polynomial evaluation. I could > really do with a little help in debugging the chebwin fix. :-) > Got it! It seems scipy.special.chebyt doesn't quite do what we want it to do. If I replace the call to chebyt with this function: def myT(order, x): retval = N.cosh(order*N.arccosh(N.abs(x))) retval[N.abs(x)<=1] = N.cos(order*N.arccos(x[N.abs(x)<=1])) return retval I get the correct window values. I had checked plots of chebyt and the analytic definition, and they looked the same, but the actual values differed. For most values they're close, but not identical. Anybody know the difference between the Chebyshev polynomial defined as above and the one in scipy.special.chebyt? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From rmay at ou.edu Fri Feb 15 16:09:50 2008 From: rmay at ou.edu (Ryan May) Date: Fri, 15 Feb 2008 15:09:50 -0600 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <47B5F701.6060906@ou.edu> References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> <20080209064604.GD4122@debian.akumar.iitm.ac.in> <20080210013231.GA4049@debian.akumar.iitm.ac.in> <47B5F701.6060906@ou.edu> Message-ID: <47B5FF9E.20302@ou.edu> Ryan May wrote: > Kumar Appaiah wrote: >> On Sat, Feb 09, 2008 at 12:16:04PM +0530, Kumar Appaiah wrote: >>>> array([-0.16010146, -0.16010146, -0.16010146, -0.16010146, -0.16010146, >>>> -0.16010147, -0.16010148, -0.16010149, -0.1601015 , -0.1601015 , >>>> -0.16010145, -0.16010096, -0.16009716, -0.16007336, -0.15994973, >>>> -0.15941238, -0.15743963, -0.15127378, -0.13476733, -0.09676449, >>>> -0.02138783, 0.10725105, 0.29505955, 0.52638443, 0.7591664 , >>>> 0.93452305, 1. , 0.93452305, 0.7591664 , 0.52638443, >>>> 0.29505955, 0.10725105, -0.02138783, -0.09676449, -0.13476733, >>>> -0.15127378, -0.15743963, -0.15941238, -0.15994973, -0.16007336, >>>> -0.16009716, -0.16010096, -0.16010145, -0.1601015 , -0.1601015 , >>>> -0.16010149, -0.16010148, -0.16010147, -0.16010146, -0.16010146, >>>> -0.16010146, -0.16010146, -0.16010146]) >>>> >>>> Clearly, all of those negative values are *not* correct. (And the >>>> problems are not limited to the numbers above.) Any ideas? >>> Let me try to figure it out. Then I'll let you know. >> I am unable to figure out where the problem could be, though I guess >> it would have to do with the Chebyshev polynomial evaluation. I could >> really do with a little help in debugging the chebwin fix. :-) >> > Got it! > > It seems scipy.special.chebyt doesn't quite do what we want it to do. 
> If I replace the call to chebyt with this function: > > def myT(order, x): > retval = N.cosh(order*N.arccosh(N.abs(x))) > retval[N.abs(x)<=1] = N.cos(order*N.arccos(x[N.abs(x)<=1])) > return retval This actually needs to be: def myT(order, x): retval = N.zeros_like(x) retval[x > 1] = N.cosh(order*N.arccosh(x[x>1])) retval[x < -1] = N.cosh(order*N.arccosh(-x[x<-1]))*((-1)*(order%2)) retval[N.abs(x)<=1] = N.cos(order*N.arccos(x[N.abs(x)<=1])) return retval I missed a problem with odd ordered Tn. See http://en.wikipedia.org/wiki/Chebyshev_polynomials. > I get the correct window values. I had checked plots of chebyt and the > analytic definition, and they looked the same, but the actual values > differed. For most values they're close, but not identical. Anybody > know the difference between the Chebyshev polynomial defined as above > and the one in scipy.special.chebyt? In talking with __pv on #scipy, it seems that the problem is likely just due to accumulated round off error in using the recurrence relations to implement the polynomial. Since the N-point window function needs an (N-1)th degree polynomial, you can see how this begins to really show the error. Is there any reason not to just implement chebyt as above within scipy.special? It would seem to me that a formula would always give superior results, but I could (admittedly) be naive about this. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From wnbell at gmail.com Fri Feb 15 16:23:40 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 15 Feb 2008 15:23:40 -0600 Subject: [SciPy-user] Complex sparse support In-Reply-To: <602036.54150.qm@web51408.mail.re2.yahoo.com> References: <602036.54150.qm@web51408.mail.re2.yahoo.com> Message-ID: On Fri, Feb 15, 2008 at 2:21 PM, wrote: > Hi all, > > I am evaluating scipy to see if it can help me. I saw > this message: > > http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/3365806 > > and read the subsequent responses. Not knowing > anything about scipy, does that means the complex > sparse matrix part is buggy? > > Does anybody has a working sample to show that how to > make it works? These issues were fixed in scipy 0.6.0 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From nwagner at iam.uni-stuttgart.de Fri Feb 15 16:24:47 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 15 Feb 2008 22:24:47 +0100 Subject: [SciPy-user] Complex sparse support In-Reply-To: <602036.54150.qm@web51408.mail.re2.yahoo.com> References: <602036.54150.qm@web51408.mail.re2.yahoo.com> Message-ID: On Fri, 15 Feb 2008 12:21:16 -0800 (PST) wrote: > Hi all, > > I am evaluating scipy to see if it can help me. I saw > this message: > > http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/3365806 > > and read the subsequent responses. Not knowing > anything about scipy, does that means the complex > sparse matrix part is buggy? > > Does anybody has a working sample to show that how to > make it works? 
> > Regards, > > -- > John Henry > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user If you have installed numpy, scipy via svn you can try the following from scipy import sparse from scipy.splinalg import spsolve, use_solver from numpy import linalg from numpy.random import rand A = sparse.lil_matrix((500, 500)) A[0, :100] = rand(100)+rand(100)*1j A[1, 100:200] = A[0, :100] A.setdiag(rand(500)+1j*rand(500)) A = A.tocsr() b = rand(500) x = spsolve(A, b) x_ = linalg.solve(A.todense(), b) err = linalg.norm(x-x_) print err < 1e-10, err from pylab import spy, show spy(A.todense()) show() Nils From pav at iki.fi Fri Feb 15 18:16:12 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 15 Feb 2008 23:16:12 +0000 (UTC) Subject: [SciPy-user] scipy.signal.chebwin References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> <20080209064604.GD4122@debian.akumar.iitm.ac.in> <20080210013231.GA4049@debian.akumar.iitm.ac.in> <47B5F701.6060906@ou.edu> <47B5FF9E.20302@ou.edu> Message-ID: Fri, 15 Feb 2008 15:09:50 -0600, Ryan May wrote: > Ryan May wrote: [clip] > In talking with __pv on #scipy, it seems that the problem is likely just > due to accumulated round off error in using the recurrence relations to > implement the polynomial. Since the N-point window function needs an > (N-1)th degree polynomial, you can see how this begins to really show > the error. Is there any reason not to just implement chebyt as above > within scipy.special? It would seem to me that a formula would always > give superior results, but I could (admittedly) be naive about this. I think the special.chebyt function in scipy does not use recurrence relations for evaluating the polynomials at some point (which probably would be numerically stable), but instead creates a numpy.poly1d object containing the coefficients of the polynomial and uses them to calculate results. As the magnitude of the coefficients increases as polynomial degree increases, error from cancellations in floating-point arithmetic becomes of the order of 1 for n > 60. This limitation would be useful to mention in the docstrings of functions in orthopoly.py A possible fix could be to implement a way to evaluate polynomials stably in numpy.poly1d. (Horner scheme? Using the roots which are anyway known? I guess there are several options.) -- Pauli Virtanen From ndbecker2 at gmail.com Fri Feb 15 18:56:57 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 18:56:57 -0500 Subject: [SciPy-user] mio5 works? References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: Nathan Bell wrote: > On Fri, Feb 15, 2008 at 2:02 PM, Neal Becker wrote: >> >> OK, but are you saying I should replace all of scipy with svn trunk >> version, or just replace the io code? > > It's probably an all-or-nothing thing. I would try the current SVN. > > FWIW I run the current SVN version and update almost daily. Some > things have changed since 0.6.0, but you'll need to be aware of those > anyway for 0.7.0 > scipy svn depends on numpy svn? INSTALL.txt seems to say so: 2) NumPy__ 1.0.5 or newer From wnbell at gmail.com Fri Feb 15 19:13:50 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 15 Feb 2008 18:13:50 -0600 Subject: [SciPy-user] mio5 works? In-Reply-To: References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: On Fri, Feb 15, 2008 at 5:56 PM, Neal Becker wrote: > > scipy svn depends on numpy svn? 
INSTALL.txt seems to say so: > 2) NumPy__ 1.0.5 or newer I'm using numpy svn. I don't know if it's a requirement or not. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ndbecker2 at gmail.com Fri Feb 15 19:51:39 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2008 19:51:39 -0500 Subject: [SciPy-user] mio5 works? References: <3d375d730802150931h16526fd4gcd1d24c2424387b2@mail.gmail.com> Message-ID: I'm trying numpy svn trunk + scipy svn trunk. Seems to work: from scipy.io.matlab.mio import loadmat, savemat savemat ('fades.mat', fades, format='5') d2 = loadmat ('fades.mat') Seems OK. I suggest that a warning be given if attempt is made to save an array not supported by format 4, instead of silently messing it up. From kc106_2005-scipy at yahoo.com Fri Feb 15 22:58:22 2008 From: kc106_2005-scipy at yahoo.com (kc106_2005-scipy at yahoo.com) Date: Fri, 15 Feb 2008 19:58:22 -0800 (PST) Subject: [SciPy-user] Cannot import name cscmux (was: Re: Complex sparse support) Message-ID: <143240.47324.qm@web51406.mail.re2.yahoo.com> In response to the following response, I downloaded the latest version of numpy and scipy, reinstalled and now I am unable to start the sample code at all. Google search suggested that I uninstall the old versions, and then reinstall. I did that and didn't make any difference. Tried both Python 2.3, and 2.5: same problem. Any suggestions? Thanks, > -----Original Message----- > > > On Fri, Feb 15, 2008 at 2:21 PM, wrote: > > Hi all, > > > > I am evaluating scipy to see if it can help me. I saw > > this message: > > > > http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/3365806 > > > > and read the subsequent responses. Not knowing > > anything about scipy, does that means the complex > > sparse matrix part is buggy? > > > > Does anybody has a working sample to show that how to > > make it works? > > These issues were fixed in scipy 0.6.0 > > -- > Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ > -- John Henry From akumar at iitm.ac.in Sat Feb 16 07:01:26 2008 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sat, 16 Feb 2008 17:31:26 +0530 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <47B5F701.6060906@ou.edu> References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> <20080209064604.GD4122@debian.akumar.iitm.ac.in> <20080210013231.GA4049@debian.akumar.iitm.ac.in> <47B5F701.6060906@ou.edu> Message-ID: <20080216120126.GC23895@debian.akumar.iitm.ac.in> On Fri, Feb 15, 2008 at 02:33:05PM -0600, Ryan May wrote: > > I am unable to figure out where the problem could be, though I guess > > it would have to do with the Chebyshev polynomial evaluation. I could > > really do with a little help in debugging the chebwin fix. :-) > > > Got it! > > It seems scipy.special.chebyt doesn't quite do what we want it to do. > If I replace the call to chebyt with this function: > > def myT(order, x): > retval = N.cosh(order*N.arccosh(N.abs(x))) > retval[N.abs(x)<=1] = N.cos(order*N.arccos(x[N.abs(x)<=1])) > return retval > > I get the correct window values. I had checked plots of chebyt and the > analytic definition, and they looked the same, but the actual values > differed. For most values they're close, but not identical. Anybody > know the difference between the Chebyshev polynomial defined as above > and the one in scipy.special.chebyt? I owe you a $BEVERAGE (non-alcoholic). Could you please help me in getting the fix in by updating the patch? Thanks. 
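For anyone picking up Kumar's patch, the fix being discussed boils down to evaluating the Chebyshev polynomial directly from its cos/cosh definition instead of from the expanded coefficients that scipy.special.chebyt uses, which lose precision at high order. A rough, untested sketch of that idea follows; the helper name cheb_eval is purely illustrative, and note that the sign correction for x < -1 has to be the power (-1)**order (from T_n(-x) = (-1)**n T_n(x)), not a product.

    import numpy as N

    def cheb_eval(order, x):
        # Evaluate T_order(x) from the trig/hyperbolic definition rather than
        # from expanded polynomial coefficients, which suffer round-off for
        # the large orders a Dolph-Chebyshev window needs.
        x = N.asarray(x, dtype=float)
        out = N.zeros_like(x)
        inside = N.abs(x) <= 1
        above = x > 1
        below = x < -1
        out[inside] = N.cos(order * N.arccos(x[inside]))
        out[above] = N.cosh(order * N.arccosh(x[above]))
        # T_n(-x) = (-1)**n * T_n(x), so mirror the argument and fix the sign
        out[below] = ((-1) ** (order % 2)) * N.cosh(order * N.arccosh(-x[below]))
        return out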
Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From berthe.loic at gmail.com Sun Feb 17 11:16:10 2008 From: berthe.loic at gmail.com (LB) Date: Sun, 17 Feb 2008 08:16:10 -0800 (PST) Subject: [SciPy-user] Scipy.test() fails on Linux 64 In-Reply-To: References: Message-ID: <1803dae1-220c-494e-b306-ea518d8a2b0d@o77g2000hsf.googlegroups.com> The issue you mentioned concerns scipy.lib.tests.test_lapack.test_flapack_float, and if I understand correctly, this is due to too strict criteria and these failures are not significant. Concerning the other failures I reported earlier, I didn't find a clue : - concerning scipy.linalg.tests.test_decomp.test_eig, I didn't see any ticket. Besides, for these tests, the errors cannot be considered as unsignifiant. - concerning scipy.tests.test_odr.test_odr, I've seen three tickets (#357, #375 and #469) but I ouldn't see any solution. Did I miss something obvious or should I consider that building scipy 0.6 from the official tarball on a linux 64 is hopeless ? -- LB From a.g.basden at durham.ac.uk Mon Feb 18 04:59:14 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Mon, 18 Feb 2008 09:59:14 +0000 (GMT) Subject: [SciPy-user] scipy.special.kv Message-ID: Hi, I have just installed scipy on a x86_64 machine (version 0.6.0) and am having problems with scipy.special: >>> import numpy,scipy.special >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32606705, 1.32606705, 1.32606705, 1.32606705, 1.32606705, 1.32606705, 1.32606705, 1.32606705, 1.32606705, 1.32606705]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177, 1.25795758e+177]) Can anyone tell me why the results are different each time? (the first set are almost in agreement with other installations, but the second set, which should be the same, are well out). numpy version is 1.0.4 (ie latest). Thanks... From nwagner at iam.uni-stuttgart.de Sun Feb 17 12:08:47 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 17 Feb 2008 18:08:47 +0100 Subject: [SciPy-user] Scipy.test() fails on Linux 64 In-Reply-To: <1803dae1-220c-494e-b306-ea518d8a2b0d@o77g2000hsf.googlegroups.com> References: <1803dae1-220c-494e-b306-ea518d8a2b0d@o77g2000hsf.googlegroups.com> Message-ID: On Sun, 17 Feb 2008 08:16:10 -0800 (PST) LB wrote: > The issue you mentioned concerns > scipy.lib.tests.test_lapack.test_flapack_float, > and if I understand correctly, this is due to too strict >criteria and > these > failures are not significant. > > Concerning the other failures I reported earlier, I >didn't find a > clue : > > - concerning scipy.linalg.tests.test_decomp.test_eig, >I didn't see > any > ticket. Besides, for these tests, the errors cannot >be considered > as > unsignifiant. > > - concerning scipy.tests.test_odr.test_odr, I've seen >three > tickets (#357, > #375 and #469) but I ouldn't see any solution. > > Did I miss something obvious or should I consider that >building scipy > 0.6 from > the official tarball on a linux 64 is hopeless ? > > -- > LB I am using the svn versions of numpy/scipy. I cannot reproduce the test failures you have reported. 
>>> scipy.__version__ '0.7.0.dev3946' >>> import numpy >>> numpy.__version__ '1.0.5.dev4807' On a 32 bit system scipy.test() yields ====================================================================== ERROR: Failure: ImportError (cannot import name _bspline) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/loader.py", line 363, in loadTestsFromName module = self.importer.importFromPath( File "/usr/lib/python2.4/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.4/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_bspline.py", line 9, in ? import scipy.stats.models.bspline as B File "/usr/lib/python2.4/site-packages/scipy/stats/models/bspline.py", line 23, in ? from scipy.stats.models import _bspline ImportError: cannot import name _bspline ====================================================================== ERROR: test_huber (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_scale.py", line 35, in test_huber m = scale.huber(X) File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__ for donothing in self: File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2) File "/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py", line 866, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== ERROR: test_huberaxes (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_scale.py", line 40, in test_huberaxes m = scale.huber(X, axis=0) File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__ for donothing in self: File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. 
- subset, axis=self.axis) * Huber.c**2) File "/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py", line 866, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== FAIL: test_imresize (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/testing/decorators.py", line 83, in skipper return f(*args, **kwargs) File "/usr/lib/python2.4/site-packages/scipy/misc/tests/test_pilutil.py", line 25, in test_imresize assert_equal(im1.shape,(11,22)) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 137, in assert_equal assert_equal(len(actual),len(desired),err_msg,verbose) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 0 DESIRED: 2 ====================================================================== FAIL: test1 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/ndimage/tests/test_segment.py", line 21, in test1 assert_almost_equal(objects[7]['bLength'], 1215.70980000, 4) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1215.7023 DESIRED: 1215.7098000000001 ====================================================================== FAIL: test2 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/ndimage/tests/test_segment.py", line 31, in test2 assert_almost_equal(ROIList[7]['bLength'], 1215.70980000, 4) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1215.7023 DESIRED: 1215.7098000000001 ====================================================================== FAIL: test_namespace (test_formula.TestFormula) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_formula.py", line 119, in test_namespace self.assertEqual(xx.namespace, Y.namespace) AssertionError: {} != {'Y': array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98]), 'X': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49])} ---------------------------------------------------------------------- Ran 2044 tests in 117.724s FAILED (failures=4, errors=3) From a.g.basden at durham.ac.uk Mon Feb 18 05:10:50 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Mon, 18 Feb 2008 10:10:50 +0000 (GMT) Subject: [SciPy-user] scipy.special.kv In-Reply-To: References: Message-ID: Hi, don't know if its relevant, but the compiler was gfortran... 4.0.2 gcc 4.0.2. 
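A cheap way to spot this kind of corruption in the kv ufunc is to compare it against an order with a known closed form, K(1/2, x) = sqrt(pi/(2x)) * exp(-x), and repeat the call a few times. The snippet below is only a sanity check sketched for this thread, not code from it:

    import numpy, scipy.special

    x = numpy.linspace(0.5, 5.0, 10)
    exact = numpy.sqrt(numpy.pi / (2 * x)) * numpy.exp(-x)   # closed form for order 1/2
    for i in range(3):
        got = scipy.special.kv(0.5, x)
        print numpy.allclose(got, exact)   # a healthy build prints True every time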
From a.g.basden at durham.ac.uk Mon Feb 18 10:11:08 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Mon, 18 Feb 2008 15:11:08 +0000 (GMT) Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: Hi, am trying to (re) install numpy, and keep getting dependency errors: /usr/bin/gfortran -g -Wall -L/usr/local/Cluster-Apps/blas/gcc/lib64 build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o -L/data/hamilton/dph1agb/lib -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.5/numpy/linalg/lapack_lite.so build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite': numpy/linalg/lapack_litemodule.c:827: undefined reference to `Py_InitModule4_64'build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite': build/src.linux-x86_64-2.5/numpy/core/__multiarray_api.h:945: undefined reference to `PyImport_ImportModule' etc etc Any idea why it is not finding python libraries? Thanks... From robert.kern at gmail.com Mon Feb 18 12:10:09 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Feb 2008 11:10:09 -0600 Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: <3d375d730802180910u5d3e36cct85a5a47b36d4d9fd@mail.gmail.com> On Feb 18, 2008 9:11 AM, Alastair Basden wrote: > Hi, > am trying to (re) install numpy, and keep getting dependency errors: > > /usr/bin/gfortran -g -Wall -L/usr/local/Cluster-Apps/blas/gcc/lib64 > build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o > -L/data/hamilton/dph1agb/lib -llapack -lptf77blas -lptcblas -latlas -lg2c > -o build/lib.linux-x86_64-2.5/numpy/linalg/lapack_lite.so > build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o: In function > `initlapack_lite': > numpy/linalg/lapack_litemodule.c:827: undefined reference to > `Py_InitModule4_64'build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o: > In function `initlapack_lite': > build/src.linux-x86_64-2.5/numpy/core/__multiarray_api.h:945: undefined > reference to `PyImport_ImportModule' > > etc > etc > > Any idea why it is not finding python libraries? You probably have LDFLAGS and CFLAGS defined. For FORTRAN code, which is invoked when linking to ATLAS, these environment variables override the link flags rather than add to them. This is necessary because we can't keep up with all of the changes that FORTRAN compiler vendors make to their link flags, so we need a way for the user to override everything. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From a.g.basden at durham.ac.uk Mon Feb 18 12:22:00 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Mon, 18 Feb 2008 17:22:00 +0000 (GMT) Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: Hi Robert, thanks for the reply - my LDFLAGS and CFLAGS are not set... should they be? I have had to edit my numpy/distutils/fcompiler/gnu.py file replacing 'g77' with 'gfortran', though am not sure this is the problem, since I have had it compiling previously... Any further ideas? It seems to be having problems finding basic python functions eg PyImport_ImportModule etc (hundreds not found). Thanks... 
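For anyone chasing the same link failure, it is worth confirming from inside the build environment that the variables Robert mentions really are unset, since a stray export in a shell profile is easy to overlook. A trivial check, nothing more:

    import os
    for name in ('LDFLAGS', 'CFLAGS'):
        print name, '=', repr(os.environ.get(name))   # None means the variable is not set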
From a.g.basden at durham.ac.uk Mon Feb 18 12:42:18 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Mon, 18 Feb 2008 17:42:18 +0000 (GMT) Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: Hi, installation now okay - was something to do with environment variables, though not sure which ones... So the question now is why does the following happen: >>> import scipy.special >>> scipy.special.kv(6./5,1) 0.70066931017889988 >>> scipy.special.kv(6./5,1) 0.70066931017889988 >>> scipy.special.kv(6./5,1) 0.70066931017889988 >>> scipy.special.kv(6./5,[1,1]) array([ 0.70066931, 0.70066931]) >>> scipy.special.kv(6./5,1) 6.1853203003937157e-282 >>> scipy.special.kv(6./5,1) 6.1853203003937157e-282 ie it seems to go wrong after the first ufunc is called... but is okay on single values... Thanks... From robert.kern at gmail.com Mon Feb 18 12:46:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Feb 2008 11:46:18 -0600 Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: <3d375d730802180946w14e04836red0960090596d20c@mail.gmail.com> On Feb 18, 2008 11:22 AM, Alastair Basden wrote: > Hi Robert, > thanks for the reply - my LDFLAGS and CFLAGS are not set... should they > be? No. How is this flag getting set, then? "-L/usr/local/Cluster-Apps/blas/gcc/lib64" > I have had to edit my numpy/distutils/fcompiler/gnu.py file replacing > 'g77' with 'gfortran', though am not sure this is the problem, since I > have had it compiling previously... Okay, that's wrong, but probably not the cause of your problem. Instead, to use gfortran, do this: $ python setup.py config_fc --fcompiler=gnu95 build > Any further ideas? It seems to be having problems finding basic python > functions eg PyImport_ImportModule etc (hundreds not found). Can you show us the full output of running the above command? If you have a site.cfg file, please provide it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Feb 18 12:48:33 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Feb 2008 11:48:33 -0600 Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: <3d375d730802180948t408b8d8dx9af956d10e02afcc@mail.gmail.com> On Feb 18, 2008 11:42 AM, Alastair Basden wrote: > Hi, > installation now okay - was something to do with environment variables, > though not sure which ones... > > So the question now is why does the following happen: > > >>> import scipy.special > >>> scipy.special.kv(6./5,1) > 0.70066931017889988 > >>> scipy.special.kv(6./5,1) > 0.70066931017889988 > >>> scipy.special.kv(6./5,1) > 0.70066931017889988 > >>> scipy.special.kv(6./5,[1,1]) > array([ 0.70066931, 0.70066931]) > >>> scipy.special.kv(6./5,1) > 6.1853203003937157e-282 > >>> scipy.special.kv(6./5,1) > 6.1853203003937157e-282 > > ie it seems to go wrong after the first ufunc is called... but is okay on > single values... There's a bug somewhere. A similar problem has been reported recently. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From berthe.loic at gmail.com Mon Feb 18 12:56:49 2008 From: berthe.loic at gmail.com (LB) Date: Mon, 18 Feb 2008 09:56:49 -0800 (PST) Subject: [SciPy-user] Scipy.test() fails on Linux 64 In-Reply-To: References: <1803dae1-220c-494e-b306-ea518d8a2b0d@o77g2000hsf.googlegroups.com> Message-ID: Unfortunately, I cannot access to svn from this computer. As I didn't see any svn tarball on scipy.org, I couldn't test this. Besides, I really need a *stable* version of scipy and numpy : I need a good traceability for the tools I use at work, and this is mandatory if I want to convert some of my colleagues to numpy/scipy. Have you got any problem with the official version of numpy (1.0.4) and scipy (0.6) ? Did you use any special compilation option to make this work on linux 64 ? Regards, -- LB From nwagner at iam.uni-stuttgart.de Mon Feb 18 13:33:11 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 18 Feb 2008 19:33:11 +0100 Subject: [SciPy-user] numpy install errors In-Reply-To: <3d375d730802180948t408b8d8dx9af956d10e02afcc@mail.gmail.com> References: <3d375d730802180948t408b8d8dx9af956d10e02afcc@mail.gmail.com> Message-ID: On Mon, 18 Feb 2008 11:48:33 -0600 "Robert Kern" wrote: > On Feb 18, 2008 11:42 AM, Alastair Basden > wrote: >> Hi, >> installation now okay - was something to do with >>environment variables, >> though not sure which ones... >> >> So the question now is why does the following happen: >> >> >>> import scipy.special >> >>> scipy.special.kv(6./5,1) >> 0.70066931017889988 >> >>> scipy.special.kv(6./5,1) >> 0.70066931017889988 >> >>> scipy.special.kv(6./5,1) >> 0.70066931017889988 >> >>> scipy.special.kv(6./5,[1,1]) >> array([ 0.70066931, 0.70066931]) >> >>> scipy.special.kv(6./5,1) >> 6.1853203003937157e-282 >> >>> scipy.special.kv(6./5,1) >> 6.1853203003937157e-282 >> >> ie it seems to go wrong after the first ufunc is >>called... but is okay on >> single values... > > There's a bug somewhere. A similar problem has been >reported recently. > > -- > Robert Kern > I cannot reproduce the problem with recent svn versions. Python 2.4.1 (#1, May 25 2007, 18:41:31) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy,scipy.special >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(11./6,numpy.ones((10,),numpy.float64)) array([ 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202, 1.32620202]) >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,1) 0.70107989955789207 >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.special.kv(6./5,[1,1]) array([ 0.7010799, 0.7010799]) >>> scipy.__version__ '0.7.0.dev3946' >>> numpy.__version__ '1.0.5.dev4811' >>> scipy.special.kv(6./5,1) 0.70107989955789207 Nils From s.mientki at ru.nl Mon Feb 18 15:18:19 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 18 Feb 2008 21:18:19 +0100 Subject: [SciPy-user] "chunk" filter ? Message-ID: <47B9E80B.107@ru.nl> hello, I need to filter (e.g. lowpass) a real time signal, the samples are taken equidistant, but the samples will arrive asynchronous and I want to display the filtered signal as soon as it arrives If I take a very simple lowpass filter, I can easily do it myself: a=0.1 y = 0 new = 1 # test signal = step function for i in range (100) : # getting 100 samples y = a*new + (1-a)*y for i in range (18) : # getting 18 samples y = a*new + (1-a)*y print y ... and so on But to be able to use more complicated filters, I would like to use the signal.lfilter, therefor I obvious need to keep track of some history data, but I can't find the right solution. Does anyone has suggestions or a solution ? 
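The standard way to do this with scipy.signal.lfilter is to carry the filter's internal state between chunks through the zi argument: when zi is passed, lfilter returns both the filtered chunk and the final state, which seeds the next call. A rough sketch follows; the Butterworth design is only an example, not the filter Stef actually needs:

    import numpy
    from scipy import signal

    b, a = signal.butter(4, 0.1)                  # example low-pass design
    zi = numpy.zeros(max(len(a), len(b)) - 1)     # filter state, carried across chunks

    def filter_chunk(chunk, zi):
        # lfilter picks up exactly where the previous chunk left off
        y, zf = signal.lfilter(b, a, chunk, zi=zi)
        return y, zf

    y1, zi = filter_chunk(numpy.ones(100), zi)    # first 100 samples of the step
    y2, zi = filter_chunk(numpy.ones(18), zi)     # next 18 samples, arriving later

Filtering the chunks this way gives the same output as filtering the whole record in one call, which is exactly the history-keeping being asked for.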
thanks, Stef Mientki From stephenlists at gmail.com Mon Feb 18 15:30:03 2008 From: stephenlists at gmail.com (Stephen Uhlhorn) Date: Mon, 18 Feb 2008 15:30:03 -0500 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: Last question Barry, I swear. If I go with the MacPorts build of fftw, do I use the 'fortran' or 'g95' variant? Thanks -stephen On Feb 14, 2008 9:48 PM, Barry Wark wrote: > Stephen, > > There are no issues with using MacPorts' fftw3 (that's what we're > using at my site). We're using the gfortran from > http://r.research.att.com/tools/. I haven't tried gfortran via > MacPorts. > > Barry From ramercer at gmail.com Mon Feb 18 15:36:21 2008 From: ramercer at gmail.com (Adam Mercer) Date: Mon, 18 Feb 2008 15:36:21 -0500 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: <799406d60802181236t1c7e30cdmfc37fe14d2cc420c@mail.gmail.com> On Feb 18, 2008 3:30 PM, Stephen Uhlhorn wrote: > If I go with the MacPorts build of fftw, do I use the 'fortran' or > 'g95' variant? I'd advise using the gfortran variant as there are a number of strange issues in building against g95 - issues with the linking flags. Cheers Adam From barrywark at gmail.com Mon Feb 18 17:38:03 2008 From: barrywark at gmail.com (Barry Wark) Date: Mon, 18 Feb 2008 14:38:03 -0800 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: Stephen, Heh. I've always used the default, whatever that is. I have gfortran installed separately (see previous post), so I assume fftw-3 is built with that... Like Adam said, I'd stay away from g95 and stick with gfortran. barry On Feb 18, 2008 12:30 PM, Stephen Uhlhorn wrote: > Last question Barry, I swear. > > If I go with the MacPorts build of fftw, do I use the 'fortran' or > 'g95' variant? > > Thanks > -stephen > > On Feb 14, 2008 9:48 PM, Barry Wark wrote: > > Stephen, > > > > There are no issues with using MacPorts' fftw3 (that's what we're > > using at my site). We're using the gfortran from > > http://r.research.att.com/tools/. I haven't tried gfortran via > > MacPorts. > > > > Barry > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From youngsu999 at gmail.com Tue Feb 19 00:57:39 2008 From: youngsu999 at gmail.com (Youngsu Park) Date: Tue, 19 Feb 2008 14:57:39 +0900 Subject: [SciPy-user] Cannot import name cscmux (was: Re: Complexs parse support) Message-ID: <338357900802182157q5485f44dw486166e0ec01a6f0@mail.gmail.com> Hi, I had a same problem. The following URL will help. http://projects.scipy.org/pipermail/scipy-user/2007-November/014583.html >Message: 6 >Date: Fri, 15 Feb 2008 19:58:22 -0800 (PST) >From: >Subject: [SciPy-user] Cannot import name cscmux (was: Re: Complex > sparse support) >To: scipy-user at scipy.org >Message-ID: <143240.47324.qm at web51406.mail.re2.yahoo.com> >Content-Type: text/plain; charset=iso-8859-1 > >In response to the following response, I downloaded >the latest version of numpy and scipy, reinstalled and >now I am unable to start the sample code at all. > >Google search suggested that I uninstall the old >versions, and then reinstall. I did that and didn't >make any difference. > >Tried both Python 2.3, and 2.5: same problem. > >Any suggestions? 
> >Thanks, From berthe.loic at gmail.com Tue Feb 19 03:05:20 2008 From: berthe.loic at gmail.com (LB) Date: Tue, 19 Feb 2008 00:05:20 -0800 (PST) Subject: [SciPy-user] Scipy.test() fails on Linux 64 In-Reply-To: References: Message-ID: Is there any way to desactivate a package during SciPy 's builkding process ? I've tried : ODR=None $python setup.py build ODR=None $python setup.py install or ODRPACK=None $python setup.py build ODRPACK=None $python setup.py install but the odr module is still being build, and there still failures when running the test suite. -- LB From david at ar.media.kyoto-u.ac.jp Tue Feb 19 04:15:04 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 19 Feb 2008 18:15:04 +0900 Subject: [SciPy-user] numpy install errors In-Reply-To: References: Message-ID: <47BA9E18.2010508@ar.media.kyoto-u.ac.jp> Alastair Basden wrote: > Hi, > am trying to (re) install numpy, and keep getting dependency errors: > > /usr/bin/gfortran -g -Wall -L/usr/local/Cluster-Apps/blas/gcc/lib64 > build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o > -L/data/hamilton/dph1agb/lib -llapack -lptf77blas -lptcblas -latlas -lg2c > -o build/lib.linux-x86_64-2.5/numpy/linalg/lapack_lite.so > build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o: In function > `initlapack_lite': > numpy/linalg/lapack_litemodule.c:827: undefined reference to > `Py_InitModule4_64'build/temp.linux-x86_64-2.5/numpy/linalg/lapack_litemodule.o: > In function `initlapack_lite': > build/src.linux-x86_64-2.5/numpy/core/__multiarray_api.h:945: undefined > reference to `PyImport_ImportModule' > > etc > etc > > Any idea why it is not finding python libraries? > When reporting installation problem, please include your OS, and the compilers. Here, I can see you are using gnu and linux, but depending on the distributions, the problems may be different. thanks, David From david at ar.media.kyoto-u.ac.jp Tue Feb 19 04:17:07 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 19 Feb 2008 18:17:07 +0900 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: <47BA9E93.5000801@ar.media.kyoto-u.ac.jp> Barry Wark wrote: > Stephen, > > Heh. I've always used the default, whatever that is. I have gfortran > installed separately (see previous post), so I assume fftw-3 is built > with that... > > Like Adam said, I'd stay away from g95 and stick with gfortran. What is really important is not to mix g95 and gfortran (or g77 for that matter). I am not sure whether mac os X supports ldd, but when you see a fortran problem, always check that all your extensions use the same fortran runtime (if you see both g2c and gfortran, for example, you can be sure you will have problems). cheers, David From a.g.basden at durham.ac.uk Tue Feb 19 06:49:50 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Tue, 19 Feb 2008 11:49:50 +0000 (GMT) Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: Hi, Thanks for the various responses. 
Have tried installing numpy/scipy with the svn versions as suggested, using python setup.py config_fc --fcompiler=gnu95 build This has worked for numpy, but not for scipy: Found executable /usr/bin/gfortran customize Gnu95FCompiler using build_clib building 'arpack' library compiling Fortran sources Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -march=opteron -mmmx -m3dnow -msse2 -msse Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=opteron -mmmx -m3dnow -msse2 -msse Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=opteron -mmmx -m3dnow -msse2 -msse compile options: '-Iscipy/splinalg/eigen/arpack/ARPACK/SRC -c' gfortran:f77: scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f:0: internal compiler error: Segmentation fault Please submit a full bug report, with preprocessed source if appropriate. See for instructions. scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f:0: internal compiler error: Segmentation fault Please submit a full bug report, with preprocessed source if appropriate. See for instructions. error: Command "/usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -march=opteron -mmmx -m3dnow -msse2 -msse -Iscipy/splinalg/eigen/arpack/ARPACK/SRC -c -c scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f -o build/temp.linux-x86_64-2.5/scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.o" failed with exit status 1 If I try the scipy installation with just setup.py build, it seems to work, but then in python: >>import scipy.special Traceback (most recent call last): File "", line 1, in File "/data/hamilton/dph1agb/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/data/hamilton/dph1agb/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * ImportError: /data/hamilton/dph1agb/lib/python2.5/site-packages/scipy/special/_cephes.so: undefined symbol: _gfortran_filename So (not surprisingly) it seems to need the gfortran specified... Note - using the svn numpy and the scipy0.6, the scipy.special.kv function is still behaving incorrectly, so it is a problem with scipy, not numpy (I had wondered whether it was the ufunc interface or something). Thanks... Platform is Suse 10.0 on AMD 64 processors. lapack/atlas has been compiled by me. From stephenlists at gmail.com Tue Feb 19 10:55:14 2008 From: stephenlists at gmail.com (Stephen Uhlhorn) Date: Tue, 19 Feb 2008 10:55:14 -0500 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: On Feb 14, 2008 7:59 PM, Barry Wark wrote: > Thanks for the info. I've built a couple eggs for OS X 10.5 > (Universal) from SVN trunk. Neither include fftw3. After the lengthy discussion re: fftw3, I decided to try Barry's eggs. I installed gfortran from: http://r.research.att.com/tools/ . I tried to install the dynamically linked scipy: scipy-0.7.0.dev3940-py2.5-macosx-10.3-i386.egg and ended up with the following error: target build/src.macosx-10.5-i386-2.5/_fftpackmodule.c does not exist: Assuming _fftpackmodule.c was generated with "build_src --inplace" command. error: Setup script exited with error: '_fftpackmodule.c' missing Where's fftpackmodule? Should I install fftw3? 
-stephen From ed at lamedomain.net Tue Feb 19 13:25:47 2008 From: ed at lamedomain.net (Ed Rahn) Date: Tue, 19 Feb 2008 10:25:47 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <20080207093525.e28fea23.ed@lamedomain.net> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <20080207093525.e28fea23.ed@lamedomain.net> Message-ID: <20080219102547.9a3a8f4d.ed@lamedomain.net> I'm rather disappointed with the response this generated. It seems people are more interested in talking about it, than actually doing it. I have made further changes and would like to continue in the scipy sandbox. Who would I talk to about getting access to the repository? - Ed On Thu, 7 Feb 2008 09:35:25 -0800 Ed Rahn wrote: > The author of Openbayes does not mind integrating it into scipy, the > discussion can be found in the attached email. > > From this repo > http://svn.berlios.de/svnroot/repos/pybayes/branches/Public > I have converted it from numarray to numpy, the patch can be found at: > http://lamedomain.net/openbayes/numpy.diff > > - Ed > > On Mon, 14 Jan 2008 14:45:49 -0800 > Karl Young wrote: > > > > > I'm starting to play with Bayes nets in a way that will require a little > > more than just using some of the black box packages around (e.g. I'd > > like to play around with using various regression models at the nodes) > > and would love to do my exploring in the context of SciPy but I didn't > > see any such packages currently available. I did find a python package > > called OpenBayes (http://www.openbayes.org/) that after a very cursory > > examination looked pretty nice but apparently is no longer being > > developed. Does anyone know if there has ever been any discussion with > > the author of that package re. incorporating it into SciPy ? > > > > -- > > > > Karl Young > > Center for Imaging of Neurodegenerative Diseases, UCSF > > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > > 4150 Clement Street FAX: (415) 668-2864 > > San Francisco, CA 94121 Email: karl young at ucsf edu > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From barrywark at gmail.com Tue Feb 19 13:46:03 2008 From: barrywark at gmail.com (Barry Wark) Date: Tue, 19 Feb 2008 10:46:03 -0800 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: Stephen, It looks like my attempt to build without fftw3 didn't work (thanks for being the first tester!). This egg was build with David's suggestion of building with FFTW3=None at the command line. It looks like the C fftpack that scipy defaults to when it can't find FFTW3 is missing from this egg. Can any of the scipy gurus offer insight? Barry On Feb 19, 2008 7:55 AM, Stephen Uhlhorn wrote: > On Feb 14, 2008 7:59 PM, Barry Wark wrote: > > Thanks for the info. I've built a couple eggs for OS X 10.5 > > (Universal) from SVN trunk. Neither include fftw3. > > > After the lengthy discussion re: fftw3, I decided to try Barry's eggs. > > I installed gfortran from: http://r.research.att.com/tools/ . > > I tried to install the dynamically linked scipy: > scipy-0.7.0.dev3940-py2.5-macosx-10.3-i386.egg and ended up with the > following error: > > target build/src.macosx-10.5-i386-2.5/_fftpackmodule.c does not exist: > Assuming _fftpackmodule.c was generated with "build_src --inplace" command. 
> error: Setup script exited with error: '_fftpackmodule.c' missing > > Where's fftpackmodule? Should I install fftw3? > > -stephen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From kc106_2005-scipy at yahoo.com Tue Feb 19 13:51:32 2008 From: kc106_2005-scipy at yahoo.com (kc106_2005-scipy at yahoo.com) Date: Tue, 19 Feb 2008 10:51:32 -0800 (PST) Subject: [SciPy-user] Cannot import name cscmux Message-ID: <844157.96650.qm@web51404.mail.re2.yahoo.com> Thanks for the tip. Works fine now. > Message: 6 > Date: Tue, 19 Feb 2008 14:57:39 +0900 > From: "Youngsu Park" > Subject: Re: [SciPy-user] Cannot import name cscmux (was: Re: Complexs > parse support) > To: scipy-user at scipy.org > Message-ID: > <338357900802182157q5485f44dw486166e0ec01a6f0 at mail.gmail.com> > Content-Type: text/plain; charset=ISO-8859-1 > > Hi, I had a same problem. > > The following URL will help. > > http://projects.scipy.org/pipermail/scipy-user/2007-November/0 > 14583.html > > >Message: 6 > >Date: Fri, 15 Feb 2008 19:58:22 -0800 (PST) > >From: > >Subject: [SciPy-user] Cannot import name cscmux (was: Re: Complex > > sparse support) > >To: scipy-user at scipy.org > >Message-ID: <143240.47324.qm at web51406.mail.re2.yahoo.com> > >Content-Type: text/plain; charset=iso-8859-1 > > > >In response to the following response, I downloaded > >the latest version of numpy and scipy, reinstalled and > >now I am unable to start the sample code at all. > > > >Google search suggested that I uninstall the old > >versions, and then reinstall. I did that and didn't > >make any difference. > > > >Tried both Python 2.3, and 2.5: same problem. > > > >Any suggestions? > > > >Thanks, > > > ------------------------------ > -- John Henry From kc106_2005-scipy at yahoo.com Tue Feb 19 14:06:40 2008 From: kc106_2005-scipy at yahoo.com (kc106_2005-scipy at yahoo.com) Date: Tue, 19 Feb 2008 11:06:40 -0800 (PST) Subject: [SciPy-user] no module named splinalg Message-ID: <991327.85558.qm@web51411.mail.re2.yahoo.com> I went further into testing scipy. Now, I am stuck with a "no module named splinalg" message with: from scipy.splinalg import spsolve, use_solver Any suggestions? Thanks, -- John Henry From robert.kern at gmail.com Tue Feb 19 15:26:46 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 19 Feb 2008 14:26:46 -0600 Subject: [SciPy-user] disabling fftw3 during scipy build In-Reply-To: References: <47B43043.9050309@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730802191226x2133b113p64b52a36d2528722@mail.gmail.com> On Feb 19, 2008 9:55 AM, Stephen Uhlhorn wrote: > On Feb 14, 2008 7:59 PM, Barry Wark wrote: > > Thanks for the info. I've built a couple eggs for OS X 10.5 > > (Universal) from SVN trunk. Neither include fftw3. > > > After the lengthy discussion re: fftw3, I decided to try Barry's eggs. > > I installed gfortran from: http://r.research.att.com/tools/ . > > I tried to install the dynamically linked scipy: > scipy-0.7.0.dev3940-py2.5-macosx-10.3-i386.egg and ended up with the > following error: > > target build/src.macosx-10.5-i386-2.5/_fftpackmodule.c does not exist: > Assuming _fftpackmodule.c was generated with "build_src --inplace" command. > error: Setup script exited with error: '_fftpackmodule.c' missing > > Where's fftpackmodule? Should I install fftw3? Actually, _fftpackmodule is the common extension module for *all* of the FFT backends, not just the FORTRAN library FFTPACK. 
Most likely there was an error earlier in the build. Please attach the full output of the failed build. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Feb 19 15:40:51 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 19 Feb 2008 14:40:51 -0600 Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: <3d375d730802191240l73e56750ud4a345706ddd529@mail.gmail.com> On Feb 19, 2008 5:49 AM, Alastair Basden wrote: > Hi, > > Thanks for the various responses. > > Have tried installing numpy/scipy with the svn versions as suggested, > using > python setup.py config_fc --fcompiler=gnu95 build > > This has worked for numpy, but not for scipy: > > > Found executable /usr/bin/gfortran > customize Gnu95FCompiler using build_clib > building 'arpack' library > compiling Fortran sources > Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -fPIC -O3 -funroll-loops -march=opteron -mmmx > -m3dnow -msse2 -msse > Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC > -O3 -funroll-loops -march=opteron -mmmx -m3dnow -msse2 -msse > Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 > -funroll-loops -march=opteron -mmmx -m3dnow -msse2 -msse > compile options: '-Iscipy/splinalg/eigen/arpack/ARPACK/SRC -c' > gfortran:f77: scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f > scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f:0: internal compiler > error: Segmentation fault Ouch. Which gfortran are you using? Where did it come from? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From karl.young at ucsf.edu Tue Feb 19 15:50:02 2008 From: karl.young at ucsf.edu (Young, Karl) Date: Tue, 19 Feb 2008 12:50:02 -0800 Subject: [SciPy-user] Bayes net question References: <20080114162237.74c586f2@jakubik.ta3.sk><478BE61D.9090309@ucsf.edu><20080207093525.e28fea23.ed@lamedomain.net> <20080219102547.9a3a8f4d.ed@lamedomain.net> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78039F0A82@EXVS06.net.ucsf.edu> > I'm rather disappointed with the response this generated. It seems > people are more interested in talking about it, than actually doing it. Good point (I certainly stand guilty as charged). I'm willing to contribute but unfortunately too swamped to lead the project. E.g. if someone is willing to make a list of things that could be ported from Kevin Murphy's toolbox or elsewhere into OpenBayes and provide an example as a sort of a first pass at a design for an api I'd be willing to work on individual chunks. But so far it seems that everyone that originally signed up on the wiki is apparently in a similar situation. > I have made further changes and would like to continue in the scipy > sandbox. Who would I talk to about getting access to the repository? Maybe Robert or Jarrod could point you in the right direction ? On Thu, 7 Feb 2008 09:35:25 -0800 Ed Rahn wrote: > The author of Openbayes does not mind integrating it into scipy, the > discussion can be found in the attached email. 
> > From this repo > http://svn.berlios.de/svnroot/repos/pybayes/branches/Public > I have converted it from numarray to numpy, the patch can be found at: > http://lamedomain.net/openbayes/numpy.diff > > - Ed > > On Mon, 14 Jan 2008 14:45:49 -0800 > Karl Young wrote: > > > > > I'm starting to play with Bayes nets in a way that will require a little > > more than just using some of the black box packages around (e.g. I'd > > like to play around with using various regression models at the nodes) > > and would love to do my exploring in the context of SciPy but I didn't > > see any such packages currently available. I did find a python package > > called OpenBayes (http://www.openbayes.org/) that after a very cursory > > examination looked pretty nice but apparently is no longer being > > developed. Does anyone know if there has ever been any discussion with > > the author of that package re. incorporating it into SciPy ? > > > > -- > > > > Karl Young > > Center for Imaging of Neurodegenerative Diseases, UCSF > > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > > 4150 Clement Street FAX: (415) 668-2864 > > San Francisco, CA 94121 Email: karl young at ucsf edu > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From a.g.basden at durham.ac.uk Wed Feb 20 05:11:43 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Wed, 20 Feb 2008 10:11:43 +0000 (GMT) Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: Hi Robert, > gfortran -v Using built-in specs. Target: x86_64-suse-linux Configured with: ../configure --enable-threads=posix --prefix=/usr --with-local-prefix=/usr/local --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64 --enable-languages=c,c++,objc,f95,java,ada --disable-checking --with-gxx-include-dir=/usr/include/c++/4.0.2 --enable-java-awt=gtk --disable-libjava-multilib --with-slibdir=/lib64 --with-system-zlib --enable-shared --enable-__cxa_atexit --without-system-libunwind --host=x86_64-suse-linux Thread model: posix gcc version 4.0.2 20050901 (prerelease) (SUSE Linux) Its on a suse 10.0 machine (to which I don't have root acces). ps - sorry these posts are appearing in funny orders! From a.g.basden at durham.ac.uk Wed Feb 20 05:36:54 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Wed, 20 Feb 2008 10:36:54 +0000 (GMT) Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: Hi, further investigations - I've managed to get scipy compiled using gfortran except for 2 files (where it segmented), and I used g77. 
However, then when I import scipy.special: >>> import scipy.special Traceback (most recent call last): File "", line 1, in File "/data/hamilton/dph1agb/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/data/hamilton/dph1agb/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * ImportError: /data/hamilton/dph1agb/lib/python2.5/site-packages/scipy/special/_cephes.so: undefined symbol: _gfortran_filename I guess these are something to do with the g77 compiled files: g77 -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -mmmx -m3dnow -msse2 -msse -Iscipy/splinalg/eigen/arpack/ARPACK/SRC -c -c scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f -o build/temp.linux-x86_64-2.5/scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.o and g77 -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -mmmx -m3dnow -msse2 -msse -Iscipy/splinalg/eigen/arpack/ARPACK/SRC -c -c scipy/splinalg/eigen/arpack/ARPACK/SRC/snaupe.f -o build/temp.linux-x86_64-2.5/scipy/splinalg/eigen/arpack/ARPACK/SRC/snaupe.o From cohen at slac.stanford.edu Wed Feb 20 05:42:58 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Wed, 20 Feb 2008 10:42:58 -0000 Subject: [SciPy-user] problem building scipy Message-ID: <1194903469.4689.4.camel@localhost.localdomain> I have the current SVN trunk, and I built lapack and ATLAS following the doc in the scipy web site. I also built numpy from SVN. Now when trying toinstall scipy I get : [cohen at localhost scipy-svn]$ su -c 'python setup.py install' Password: Traceback (most recent call last): File "setup.py", line 92, in setup_package() File "setup.py", line 63, in setup_package from numpy.distutils.core import setup File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 43, in import linalg File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in from linalg import * File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in from numpy.linalg import lapack_lite ImportError: liblapack.so: cannot open shared object file: No such file or directory but I do have this library: [cohen at localhost scipy-svn]$ ls -l /usr/local/lib/liblapack.so lrwxrwxrwx 1 root root 33 2007-11-12 09:29 /usr/local/lib/liblapack.so -> /usr/local/atlas/lib/liblapack.so and it should be in my paths. Moreover, I can issue the offending line without problem: [cohen at localhost scipy-svn]$ ipython Python 2.5 (r25:51908, Oct 19 2007, 09:47:40) Type "copyright", "credits" or "license" for more information. IPython 0.8.2.svn.r2848 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: from numpy.linalg import lapack_lite In [2]: dir(lapack_lite) Out[2]: ['LapackError', '__doc__', '__file__', '__name__', 'dgeev', 'dgelsd', 'dgeqrf', 'dgesdd', 'dgesv', 'dgetrf', 'dorgqr', 'dpotrf', 'dsyevd', 'zgeev', 'zgelsd', 'zgeqrf', 'zgesdd', 'zgesv', 'zgetrf', 'zheevd', 'zpotrf', 'zungqr'] \What is going on? 
thanks, Johann From david at ar.media.kyoto-u.ac.jp Wed Feb 20 05:33:23 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 20 Feb 2008 19:33:23 +0900 Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: <47BC01F3.7000303@ar.media.kyoto-u.ac.jp> Alastair Basden wrote: > Hi, > further investigations - I've managed to get scipy compiled using gfortran > except for 2 files (where it segmented), and I used g77. > You cannot mix g77 and gfortran together, it will not work. Either everything is compiled with g77, or everything is compiled with gfortran. Since your gfortran is old and looks buggy, maybe you don't have a choice and should use g77 for everything. The other option is to compile your own gfortran. cheers, David From a.g.basden at durham.ac.uk Wed Feb 20 08:28:33 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Wed, 20 Feb 2008 13:28:33 +0000 (GMT) Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: Hi David, does this apply to the lapack/atlas libraries too? They're currently compiled with gfortran (I did try g77 but it didn't seem to work), which is what atlas recommends. Is it okay to mix versions of gfortran - ie can I use a newer gfortran for scipy, without recompiling the atlas/lapack? Thanks... From david at ar.media.kyoto-u.ac.jp Wed Feb 20 08:42:34 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 20 Feb 2008 22:42:34 +0900 Subject: [SciPy-user] scipy install errors In-Reply-To: References: Message-ID: <47BC2E4A.6040405@ar.media.kyoto-u.ac.jp> Alastair Basden wrote: > Hi David, > does this apply to the lapack/atlas libraries too? They're currently > compiled with gfortran (I did try g77 but it didn't seem to work), which > is what atlas recommends. > Yes, it applies to everything. > Is it okay to mix versions of gfortran - ie can I use a newer gfortran for > scipy, without recompiling the atlas/lapack? > I don't know if it is ok to mix versions of gfortran. Normally, gcc doesn't break ABI (binary compatibility) between minor versions, so you could try. Note that you should be able to change the fortran compiler for ATLAS without recompiling everything (fortran is only used for the fortran interface, ATLAS itself is pure C). David From robert.kern at gmail.com Wed Feb 20 13:27:50 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 20 Feb 2008 12:27:50 -0600 Subject: [SciPy-user] problem building scipy In-Reply-To: <1194903469.4689.4.camel@localhost.localdomain> References: <1194903469.4689.4.camel@localhost.localdomain> Message-ID: <3d375d730802201027p259d10f3kb0448d54b93c2e84@mail.gmail.com> On Mon, Nov 12, 2007 at 3:37 PM, Johann Cohen-Tanugi wrote: > I have the current SVN trunk, and I built lapack and ATLAS following the > doc in the scipy web site. I also built numpy from SVN. 
> Now when trying toinstall scipy I get : > [cohen at localhost scipy-svn]$ su -c 'python setup.py install' > Password: > Traceback (most recent call last): > File "setup.py", line 92, in > setup_package() > File "setup.py", line 63, in setup_package > from numpy.distutils.core import setup > File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 43, in > > import linalg > File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line > 4, in > from linalg import * > File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line > 25, in > from numpy.linalg import lapack_lite > ImportError: liblapack.so: cannot open shared object file: No such file > or directory > > but I do have this library: > [cohen at localhost scipy-svn]$ ls -l /usr/local/lib/liblapack.so > lrwxrwxrwx 1 root root 33 2007-11-12 09:29 /usr/local/lib/liblapack.so > -> /usr/local/atlas/lib/liblapack.so > and it should be in my paths. > Moreover, I can issue the offending line without problem: > [cohen at localhost scipy-svn]$ ipython > Python 2.5 (r25:51908, Oct 19 2007, 09:47:40) > Type "copyright", "credits" or "license" for more information. > > IPython 0.8.2.svn.r2848 -- An enhanced Interactive Python. > ? -> Introduction and overview of IPython's features. > %quickref -> Quick reference. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > In [1]: from numpy.linalg import lapack_lite Are you sure that this is the same numpy as the one being picked up during the scipy build? Since you are executing the scipy build as root, you may be getting different numpy packages. In order to see the path of the numpy package, do this: In [1]: import numpy In [2]: numpy Out[2]: -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Feb 20 21:38:55 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 21 Feb 2008 11:38:55 +0900 Subject: [SciPy-user] problem building scipy In-Reply-To: <3d375d730802201027p259d10f3kb0448d54b93c2e84@mail.gmail.com> References: <1194903469.4689.4.camel@localhost.localdomain> <3d375d730802201027p259d10f3kb0448d54b93c2e84@mail.gmail.com> Message-ID: <47BCE43F.6000409@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Are you sure that this is the same numpy as the one being picked up > during the scipy build? Since you are executing the scipy build as > root, you may be getting different numpy packages. Another consequence of installing as root through su is that many environment variables are not imported for security reasons. LD_LIBRARY_PATH comes to mind (LD_LIBRARY_PATH is also ignored by set-uid programs, which is the case of su) ; now, /usr/local is a bit special (may be handled specially by ld), but I think it is handled differently by different distro on Linux. For example, Ubuntu (and debian ?) do put /usr/local/lib in ld.so.conf, whereas fedora does, at least from some version (4 ?). Concretely, an easy way to check is to simply do "su ldd" on a numpy module, to see if it can be found by the loader for programs with id 0. 
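A quick way to run the same check from Python itself (purely illustrative -- the file name check.py is made up, and the library name and path below are just the ones reported earlier in this thread, so adjust them to your install):

    import numpy
    print numpy.__file__        # which numpy this interpreter actually picks up

    import ctypes
    # CDLL() goes through dlopen(), i.e. the same search rules the loader uses;
    # it raises OSError if liblapack.so is not visible in the current environment
    ctypes.CDLL('liblapack.so')

Running that snippet once as the normal user and once as root (su -c 'python check.py') shows immediately whether it is the root environment that loses sight of the library. If so, adding the ATLAS lib directory (e.g. /usr/local/atlas/lib) to /etc/ld.so.conf and running ldconfig is usually more reliable than depending on LD_LIBRARY_PATH.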
cheers, David From david at ar.media.kyoto-u.ac.jp Wed Feb 20 21:48:16 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 21 Feb 2008 11:48:16 +0900 Subject: [SciPy-user] problem building scipy In-Reply-To: <47BCE43F.6000409@ar.media.kyoto-u.ac.jp> References: <1194903469.4689.4.camel@localhost.localdomain> <3d375d730802201027p259d10f3kb0448d54b93c2e84@mail.gmail.com> <47BCE43F.6000409@ar.media.kyoto-u.ac.jp> Message-ID: <47BCE670.4020202@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Another consequence of installing as root through su is that many > environment variables are not imported for security reasons. > LD_LIBRARY_PATH comes to mind (LD_LIBRARY_PATH is also ignored by > set-uid programs, which is the case of su) ; now, /usr/local is a bit > special (may be handled specially by ld), but I think it is handled > differently by different distro on Linux. For example, Ubuntu (and > debian ?) do put /usr/local/lib in ld.so.conf Grr, should be read "Ubuntu (and Debian ?) does NOT put /usr/local/lib in ld.so.conf." cheers, David From charlie.xia.fdu at gmail.com Wed Feb 20 22:42:36 2008 From: charlie.xia.fdu at gmail.com (charlie) Date: Wed, 20 Feb 2008 19:42:36 -0800 Subject: [SciPy-user] Questions on scipy.io.read_array() Message-ID: <11c6cf4e0802201942j2969c859s7339f4180aa965b2@mail.gmail.com> Hi, I am a newbie to scipy. I am currently using it to deal with some statistical problems with possible missing values. these values are labeled 'na' in my data file. However when I tried to read in my data into an array and substitute 'na' with -1 (for example) by: read_array( datafile, ..., missing=-1) The array I got doesn't cast 'na' value into -1, but 0 - the default value of parameter "missing". And when I check mail list, I found the issue has already be raised by Joris De Ridder: http://article.gmane.org/gmane.comp.python.scientific.user/3700/match=read%5farray+missing So I guess there is something wrong with regard to scipy.io library. Does anybody come across the same problem? Should I raise a ticket for this seemingly bug? Also, I'd like to ask for two general questions: first, how efficient is python+numpy+scipy 's with major calls to statistics distribution functions, as compared to Matlab, C++ with CEPHES or GSL, and etc. I compared it with my old R program, it seems python+numpy+scipy is little bit faster. Can anybody provide with some references to this? Another question is there a good package handle missing values well within scipy? Such as it can store the value as missing and fill it with different inference method when desired. Thanks! Charlie -------------- next part -------------- An HTML attachment was scrubbed... URL: From anand.prabhakar.patil at gmail.com Wed Feb 20 23:22:39 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Wed, 20 Feb 2008 20:22:39 -0800 Subject: [SciPy-user] Multithreading cookbook entry Message-ID: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Hi all, I have a question primarily for Anne Archibald, the author of the cookbook entry on multithreading, http://www.scipy.org/Cookbook/Multithreading. I tried replacing the 'if name=='__main__' clause in the attachment handythread.py with from numpy import ones, exp def f(x): print x y = ones(10000000) exp(y) and the wall-clock time with foreach was 4.72s vs 6.68s for a simple for-loop. First of all, that's amazing! I've been internally railing against the GIL for months. But it looks like only a portion of f is being done concurrently. 
In fact if I comment out the 'exp(y)', I don't see any speedup at all. It makes sense that you can't malloc simultaneously from different threads... but if I replace 'ones' with 'empty', the time drops precipitously, indicating that most of the time taken by 'ones' is spent actually filling the array with ones. It seems like you should be able to do that concurrently. So my question is, what kinds of numpy functions tend to release the GIL? Is there a system to it, so that one can figure out ahead of time where a speedup is likely, or do you have to try and see? Do third-party f2py functions with the 'threadsafe' option release the GIL? Thanks, Anand From berthold.hoellmann at gl-group.com Thu Feb 21 05:27:34 2008 From: berthold.hoellmann at gl-group.com (=?iso-8859-15?Q?Berthold_=22H=F6llmann=22?=) Date: Thu, 21 Feb 2008 11:27:34 +0100 Subject: [SciPy-user] How to tell scipy setup that I have a INTEL fortran ATLAS/BLAS/LAPACK instead of g77 Message-ID: (Message resent, i was not subscribed, so yesterdays version reached the moderator only) No matter what I do, I can't tell scipy to use the INTEL fortran API conventions instead of the g77 conventions for fortran routine names containing underscores: hoel at pc047299:scipy-0.6.0 nm /usr/local/gltools/linux/lib/libf77blas_ifc91.so.3.8|grep atl_f77wrap_dtrsv 0000cfe0 T atl_f77wrap_dtrsv_ hoel at pc047299:scipy-0.6.0 nm build/lib.linux-i686-2.5/scipy/linsolve/_zsuperlu.so| grep atl_f77wrap_dtrsv U atl_f77wrap_dtrsv__ How can I set up scipy in a way that superlu tries to access atl_f77wrap_dtrsv_ instead of atl_f77wrap_dtrsv__? Kind regards Berthold H?llmann -- Germanischer Lloyd AG CAE Development Vorsetzen 35 20459 Hamburg Phone: +49(0)40 36149-7374 Fax: +49(0)40 36149-7320 e-mail: berthold.hoellmann at gl-group.com Internet: http://www.gl-group.com This e-mail and any attachment thereto may contain confidential information and/or information protected by intellectual property rights for the exclusive attention of the intended addressees named above. Any access of third parties to this e-mail is unauthorised. Any use of this e-mail by unintended recipients such as total or partial copying, distribution, disclosure etc. is prohibited and may be unlawful. When addressed to our clients the content of this e-mail is subject to the General Terms and Conditions of GL's Group of Companies applicable at the date of this e-mail. If you have received this e-mail in error, please notify the sender either by telephone or by e-mail and delete the material from any computer. GL's Group of Companies does not warrant and/or guarantee that this message at the moment of receipt is authentic, correct and its communication free of errors, interruption etc. Germanischer Lloyd AG, 31393 AG HH, Hamburg, Vorstand: Dr. Hermann J. Klein, Dr. Joachim Segatz, Vorsitzender des Aufsichtsrats: Dr. Wolfgang Peiner From a.g.basden at durham.ac.uk Thu Feb 21 06:23:17 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Thu, 21 Feb 2008 11:23:17 +0000 (GMT) Subject: [SciPy-user] scipy.special.kv Message-ID: Hi, after previous problems, I finally got the svn version of scipy/numpy to build and install. This was using gfortran 4.0.2 (I had to remove files scipy/splinalg/eigen/arpack/ARPACK/SRC/snaupe.f and scipy/splinalg/eigen/arpack/ARPACK/SRC/dnaupe.f because they had zero size, and were causing gfortran to segment. It then went on to install fine. However, I still have the problem with scipy.special.kv: >>> scipy.special.kv(6./5,1.) 
0.70066931017889988 >>> scipy.special.kv(6./5,1.) 0.70066931017889988 >>> scipy.special.kv(6./5,[1.,1.]) array([ 0.70066931, 0.70066931]) >>> scipy.special.kv(6./5,1.) 1.0989331402998264e+151 >>> scipy.special.kv(6./5,1.) 0.0 >>> scipy.special.kv(6./5,1.) 1.0185579799004822e-312 So, the problem still exists. lapack/atlas were compiled with the same gfortran. This is Suse 10.0, x86_64 platform. Other ufuncs, eg scipy.cos work fine. Thanks... From lorenzo.isella at gmail.com Thu Feb 21 06:33:28 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 21 Feb 2008 12:33:28 +0100 Subject: [SciPy-user] List Conversion to SciPy Array Message-ID: Dear All, So far I have mainly used arrays for my computations. Simply, I had no particular need of lists, dictionaries and so on. A library I am using extensively right now, has to be fed with lists (so I have to use a tolist command to convert arrays into lists) and it also returns lists. However, then I do miss badly SciPy's tools to manipulate them. At the moment, the best I can do is to create empty SciPy's arrays and copy element-wise (with loops) the list content into them. Is there a "better" way of doing this? I googled "toarray" and I gave it some try, but unsuccessfully. Many thanks Lorenzo From lbolla at gmail.com Thu Feb 21 06:47:26 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 21 Feb 2008 12:47:26 +0100 Subject: [SciPy-user] List Conversion to SciPy Array In-Reply-To: References: Message-ID: <80c99e790802210347k50707d97qc27b0e91ca7ec1b5@mail.gmail.com> what's wrong with using: numpy.asarray? L. On 2/21/08, Lorenzo Isella wrote: > > Dear All, > So far I have mainly used arrays for my computations. Simply, I had no > particular need of lists, dictionaries and so on. > A library I am using extensively right now, has to be fed with lists > (so I have to use a tolist command to convert arrays into lists) and > it also returns lists. > However, then I do miss badly SciPy's tools to manipulate them. > At the moment, the best I can do is to create empty SciPy's arrays and > copy element-wise (with loops) the list content into them. > Is there a "better" way of doing this? I googled "toarray" and I gave > it some try, but unsuccessfully. > Many thanks > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From akumar at iitm.ac.in Thu Feb 21 07:55:34 2008 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Thu, 21 Feb 2008 18:25:34 +0530 Subject: [SciPy-user] scipy.signal.chebwin In-Reply-To: <47B5FF9E.20302@ou.edu> References: <47ACC09C.4070906@ou.edu> <47AD2CE1.2080600@ou.edu> <20080209064604.GD4122@debian.akumar.iitm.ac.in> <20080210013231.GA4049@debian.akumar.iitm.ac.in> <47B5F701.6060906@ou.edu> <47B5FF9E.20302@ou.edu> Message-ID: <20080221125534.GA8669@debian.akumar.iitm.ac.in> On Fri, Feb 15, 2008 at 03:09:50PM -0600, Ryan May wrote: > def myT(order, x): > retval = N.zeros_like(x) > retval[x > 1] = N.cosh(order*N.arccosh(x[x>1])) > retval[x < -1] = N.cosh(order*N.arccosh(-x[x<-1]))*((-1)*(order%2)) > retval[N.abs(x)<=1] = N.cos(order*N.arccos(x[N.abs(x)<=1])) > return retval > > I missed a problem with odd ordered Tn. See > http://en.wikipedia.org/wiki/Chebyshev_polynomials. I have referred to Ryan's mail in the ticket for this bug[1]. 
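One detail that may be worth adding to that ticket (an observation on the snippet quoted above, not something from Ryan's mail): as written, the x < -1 branch multiplies by ((-1)*(order%2)), which is 0 for even orders, so e.g. T_2(-2) comes out as 0 instead of 7; presumably (-1)**(order % 2) was intended. A self-contained corrected sketch, with a quick sanity check against the explicit T_2(x) = 2*x**2 - 1:

    import numpy as N

    def myT(order, x):
        # Chebyshev T_n, evaluated piecewise; the sign for x < -1 is
        # (-1)**order: +1 for even orders, -1 for odd orders
        x = N.asarray(x, dtype=float)
        retval = N.zeros_like(x)
        retval[x > 1] = N.cosh(order*N.arccosh(x[x > 1]))
        retval[x < -1] = N.cosh(order*N.arccosh(-x[x < -1]))*((-1)**(order % 2))
        retval[N.abs(x) <= 1] = N.cos(order*N.arccos(x[N.abs(x) <= 1]))
        return retval

    x = N.array([-2.0, -0.5, 0.5, 2.0])
    print N.allclose(myT(2, x), 2*x**2 - 1)   # should print True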
Is there any chance of someone suggesting how this can be integrated into the SciPy SVN? Thanks. Kumar [1]: http://scipy.org/scipy/scipy/ticket/581 -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From a.g.basden at durham.ac.uk Thu Feb 21 07:56:21 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Thu, 21 Feb 2008 12:56:21 +0000 (GMT) Subject: [SciPy-user] scipy.special.kv In-Reply-To: References: Message-ID: Hi, I've noticed that this problem may have been identified (without a resolution) previously: http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2567409 I wonder if this is a bug that has been reintroduced, or only present on certain platforms or something? Thanks... From lou_boog2000 at yahoo.com Thu Feb 21 09:58:32 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 21 Feb 2008 06:58:32 -0800 (PST) Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Message-ID: <806280.91279.qm@web34412.mail.mud.yahoo.com> I have no answer, but I want to add another question to Ms. Archibald or anyone. Will the GIL prevent in any way the threading of C extensions? That is, I want to call a C extension in several threads and the program will stay there doing a long calculation and then return to Python when finished. Perhaps this is obvious, but I admit I don't fully understand the GIL. Thanks for any info. Simple example: I want to evaluate a function using a C extension (implemented with ctypes) for several parameters in the function. The parameters are in a list. Then I use the handythread.py approach and for each thread call the C extension function with a new parameter value from the list and, when the thread returns, I add the result (say, a float number) to a result list. Will, the GIL let the threads run independently? I hope my example is clear. Thanks for any info. --- Anand Patil wrote: > Hi all, > > I have a question primarily for Anne Archibald, the > author of the > cookbook entry on multithreading, > http://www.scipy.org/Cookbook/Multithreading. > > I tried replacing the 'if name=='__main__' clause in > the attachment > handythread.py with > > from numpy import ones, exp > def f(x): > print x > y = ones(10000000) > exp(y) > > and the wall-clock time with foreach was 4.72s vs > 6.68s for a simple for-loop. > > First of all, that's amazing! I've been internally > railing against the > GIL for months. But it looks like only a portion of > f is being done > concurrently. In fact if I comment out the 'exp(y)', > I don't see any > speedup at all. > > It makes sense that you can't malloc simultaneously > from different > threads... but if I replace 'ones' with 'empty', the > time drops > precipitously, indicating that most of the time > taken by 'ones' is > spent actually filling the array with ones. It seems > like you should > be able to do that concurrently. > > So my question is, what kinds of numpy functions > tend to release the > GIL? Is there a system to it, so that one can figure > out ahead of time > where a speedup is likely, or do you have to try and > see? Do > third-party f2py functions with the 'threadsafe' > option release the > GIL? 
> > Thanks, > Anand > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Looking for last minute shopping deals? Find them fast with Yahoo! Search. http://tools.search.yahoo.com/newsearch/category.php?category=shopping From peridot.faceted at gmail.com Thu Feb 21 11:30:51 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 21 Feb 2008 17:30:51 +0100 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Message-ID: On 21/02/2008, Anand Patil wrote: > First of all, that's amazing! I've been internally railing against the > GIL for months. But it looks like only a portion of f is being done > concurrently. In fact if I comment out the 'exp(y)', I don't see any > speedup at all. > > It makes sense that you can't malloc simultaneously from different > threads... but if I replace 'ones' with 'empty', the time drops > precipitously, indicating that most of the time taken by 'ones' is > spent actually filling the array with ones. It seems like you should > be able to do that concurrently. > > So my question is, what kinds of numpy functions tend to release the > GIL? Is there a system to it, so that one can figure out ahead of time > where a speedup is likely, or do you have to try and see? Do > third-party f2py functions with the 'threadsafe' option release the > GIL? In general, the answer is that if a C extension can function outside the GIL, it has to explicitly release it. TBH, I'm not sure what it has to do first to make sure the interpreter is in a safe state - maybe nothing - but it has to explicitly declare that it's not going to modify any interpreter state. Many numpy functions - exp is obviously an example - do this. Others don't. It would be useful to go through the code looking at which ones do and don't release the GIL, and put it in their docstrings; it might be possible to make more release the GIL. It's a pretty safe bet that the ufuncs do; I would guess that the linear algebra functions do too. Probably not much else. If an extension uses ctypes, whether it releases the GIL is up to ctypes. I would guess that it doesn't, since ctypes knows nothing about the C function, but I have never actually used ctypes. Anne From anand.prabhakar.patil at gmail.com Thu Feb 21 11:58:26 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 21 Feb 2008 08:58:26 -0800 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Message-ID: <2bc7a5a50802210858p19a28e5bg7920e6ab9d839732@mail.gmail.com> On Thu, Feb 21, 2008 at 8:30 AM, Anne Archibald wrote: > It would be useful to go through the code looking at which ones > do and don't release the GIL, and put it in their docstrings; it might > be possible to make more release the GIL. It's a pretty safe bet that > the ufuncs do; I would guess that the linear algebra functions do too. > Probably not much else. I second that suggestion. In fact I'd be willing to help out if it's a tedious but simple job. > If an extension uses ctypes, whether it releases the GIL is up to > ctypes. 
I would guess that it doesn't, since ctypes knows nothing > about the C function, but I have never actually used ctypes. Makes sense. Does anyone know about f2py extensions with 'cf2py threadsafe' set? From the f2py user's guide, the threadsafe option will Use Py_BEGIN_ALLOW_THREADS .. Py_END_ALLOW_THREADS block around the call to Fortran/C function. Is that sufficien to release the GIL? What if the functions have callbacks?? Anand From bsouthey at gmail.com Thu Feb 21 11:59:59 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 21 Feb 2008 10:59:59 -0600 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Message-ID: Hi, I would strongly suggest not using 'from numpy import *' etc. but use 'import numpy'. In particular you want to use ensure that you are using numpy.exp not math.exp on numpy objects. Also, please ensure that you have at least 3 processors available (the default). If not, you may introduce problems especially if you only have two processors because one processor will be used by system for other tasks. Without knowing your 'simple for-loop' I do not see you apparently see. from numpy import ones, exp import time if __name__=='__main__': def f(x): y = ones(10000000) exp(y) t1=time.time() foreach(f,range(100)) t2=time.time() for ndx in range(100): y = ones(10000000) exp(y) t3=time.time() print 'Handythread / simple loop)=, (t3-t2)/(t2-t1) With this code, the 'for loop' takes about 2.7 times as long as the handythread loop for a quad-core system. Further, on my Linux system I can see via 'top' that handythread is using 3 (of the four cores) and then this drops to 1 with the loop. Note this is not 3 to 1 as would be expected if linear speed but rather close - there is overhead involved. If you have limited resources (ie memory or processors) or another OS that is not fully multithreaded, you may run into additional problems since handythread.py assumes everything is possible. Regards Bruce On Wed, Feb 20, 2008 at 10:22 PM, Anand Patil wrote: > Hi all, > > I have a question primarily for Anne Archibald, the author of the > cookbook entry on multithreading, > http://www.scipy.org/Cookbook/Multithreading. > > I tried replacing the 'if name=='__main__' clause in the attachment > handythread.py with > > from numpy import ones, exp > def f(x): > print x > y = ones(10000000) > exp(y) > > and the wall-clock time with foreach was 4.72s vs 6.68s for a simple for-loop. > > First of all, that's amazing! I've been internally railing against the > GIL for months. But it looks like only a portion of f is being done > concurrently. In fact if I comment out the 'exp(y)', I don't see any > speedup at all. > > It makes sense that you can't malloc simultaneously from different > threads... but if I replace 'ones' with 'empty', the time drops > precipitously, indicating that most of the time taken by 'ones' is > spent actually filling the array with ones. It seems like you should > be able to do that concurrently. > > So my question is, what kinds of numpy functions tend to release the > GIL? Is there a system to it, so that one can figure out ahead of time > where a speedup is likely, or do you have to try and see? Do > third-party f2py functions with the 'threadsafe' option release the > GIL? 
> > Thanks, > Anand > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From anand.prabhakar.patil at gmail.com Thu Feb 21 12:37:11 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 21 Feb 2008 09:37:11 -0800 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Message-ID: <2bc7a5a50802210937mb39c5b5x958a266c9db5b3ed@mail.gmail.com> Bruce, > from numpy import ones, exp > import time > > if __name__=='__main__': > def f(x): > > y = ones(10000000) > exp(y) > t1=time.time() > foreach(f,range(100)) > t2=time.time() > for ndx in range(100): > > y = ones(10000000) > exp(y) > t3=time.time() > print 'Handythread / simple loop)=, (t3-t2)/(t2-t1) > > With this code, the 'for loop' takes about 2.7 times as long as the > handythread loop for a quad-core system. That's very interesting. I set the 'threads' option to 2, since I have a dual-core system, and the handythread example is still only about 1.5x faster than the for-loop example, even though I can see that both my cores are being fully utilized. That could be because my machine devotes a good fraction of one of its cores to just being a Mac, but it doesn't look like that's what is making the difference. The strange thing is that for me the 'for-loop' version above takes 67s, whereas a version with f modified as follows: def f(x): y = ones(10000000) # exp(y) takes 13s whether I use handythread or a for-loop. I think that means 'ones' can only be executed by one thread at a time. Based on that, if my machine had three free cores I would expect about a 2.16X speedup tops, but you're seeing a 2.7X speedup. That means our machines are doing something differently (yours is better). Do you see any speedup from handythread with the modified version of f? Anand From david.huard at gmail.com Thu Feb 21 12:49:15 2008 From: david.huard at gmail.com (David Huard) Date: Thu, 21 Feb 2008 12:49:15 -0500 Subject: [SciPy-user] Questions on scipy.io.read_array() In-Reply-To: <11c6cf4e0802201942j2969c859s7339f4180aa965b2@mail.gmail.com> References: <11c6cf4e0802201942j2969c859s7339f4180aa965b2@mail.gmail.com> Message-ID: <91cf711d0802210949w4aeb2b9bm6e41e29ff705307e@mail.gmail.com> Charlie, Numpy has a module called ma providing an array object that deals with missing values. It still lack however an official loadtxt function, but I worked on one a while ago. If you end up using it, I'd be grateful if you could provide some feedback. As for the bug in scipy.io, I think this function is being replaced by numpy.loadtxt. If you are dealing with time series, look at the timeseries module in scikits (only in SVN for now). Cheers, David 2008/2/20, charlie : > > Hi, > > I am a newbie to scipy. > I am currently using it to deal with some statistical problems with > possible missing values. > these values are labeled 'na' in my data file. > However when I tried to read in my data into an array and substitute 'na' > with -1 (for example) by: > read_array( datafile, ..., missing=-1) > The array I got doesn't cast 'na' value into -1, but 0 - the default value > of parameter "missing". > And when I check mail list, I found the issue has already be raised by > Joris De Ridder: > > http://article.gmane.org/gmane.comp.python.scientific.user/3700/match=read%5farray+missing > So I guess there is something wrong with regard to scipy.io library. 
> Does anybody come across the same problem? > Should I raise a ticket for this seemingly bug? > > Also, I'd like to ask for two general questions: > first, how efficient is python+numpy+scipy 's with major calls to > statistics distribution functions, > as compared to Matlab, C++ with CEPHES or GSL, and etc. > I compared it with my old R program, it seems python+numpy+scipy is little > bit faster. > Can anybody provide with some references to this? > > Another question is there a good package handle missing values well within > scipy? > Such as it can store the value as missing and fill it with different > inference method when desired. > > Thanks! > > Charlie > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test.dat Type: application/octet-stream Size: 1392 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: loadtxt.py Type: application/octet-stream Size: 4726 bytes Desc: not available URL: From lou_boog2000 at yahoo.com Thu Feb 21 12:51:43 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 21 Feb 2008 09:51:43 -0800 (PST) Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: Message-ID: <10745.76340.qm@web34408.mail.mud.yahoo.com> --- Anne Archibald wrote: > In general, the answer is that if a C extension can > function outside > the GIL, it has to explicitly release it. TBH, I'm > not sure what it > has to do first to make sure the interpreter is in a > safe state - > maybe nothing - but it has to explicitly declare > that it's not going > to modify any interpreter state. > > Many numpy functions - exp is obviously an example - > do this. Others > don't. It would be useful to go through the code > looking at which ones > do and don't release the GIL, and put it in their > docstrings; it might > be possible to make more release the GIL. It's a > pretty safe bet that > the ufuncs do; I would guess that the linear algebra > functions do too. > Probably not much else. > > If an extension uses ctypes, whether it releases the > GIL is up to > ctypes. I would guess that it doesn't, since ctypes > knows nothing > about the C function, but I have never actually used > ctypes. Anne, Thanks for your answers. They are helping, but I'm still vague on the GIL. I have a few more questions, two on your handythread.py code and one on releasing the GIL for a C extension. Thanks for you patience and help. BTW, I have a MacBook Pro with 2 CPUs. (1) In your code if return_ = True I get a return value from the foreach function only when nthreads>1, but not when nthreads=1. Looking at the code the nthreads=1 ends up in the else: at the bottom which looks like: else: if return_: for v in l: f(v) else: return and is puzzling. Nothing is returned in the if part and f is not even called in the else part. Is this a bug? (2) If I replace the sleep(0.5) call in your f function with a loop that just does a simple calculation to eat up time, then in the call to foreach when nthreads=2 the time to run the code goes up by factors of ~100 or so. I'm guessing here that it's because the GIL is not release for my version, but is release in the sleep(0.5) function in your version. Is that right? (3) You mention that ctypes probably doesn't release the GIL. 
I would guess that too, since it would be dangerous as I (vaguely) understand the GIL. But does the GIL have to be released in the Cextension or can it be release in the step just before I call the C extension from Python? I.e. is release on the Python side possible? If not, I guess I will have to look over the numpy code as you suggest. If possible, I suppose the GIL must be enabled immediately on return from the C extension. Thanks, again. -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Looking for last minute shopping deals? Find them fast with Yahoo! Search. http://tools.search.yahoo.com/newsearch/category.php?category=shopping From bsouthey at gmail.com Thu Feb 21 13:33:18 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 21 Feb 2008 12:33:18 -0600 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <2bc7a5a50802210937mb39c5b5x958a266c9db5b3ed@mail.gmail.com> References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> <2bc7a5a50802210937mb39c5b5x958a266c9db5b3ed@mail.gmail.com> Message-ID: Hi, Removing the expy(y) gives about the same time, which you can take either way. But really you need to understand Python. >From http://docs.python.org/api/threads.html: "Therefore, the rule exists that only the thread that has acquired the global interpreter lock may operate on Python objects or call Python/C API functions. In order to support multi-threaded Python programs, the interpreter regularly releases and reacquires the lock -- by default, every 100 bytecode instructions (this can be changed with sys.setcheckinterval()). " If the operation is fast enough, then it will be done before the lock is released by the interpreter does can release and reacquire the lock. Thus there is no advantage in threading as in this case. So by doing more work, this release/reacquire action becomes more important to the overall performance. This feature is also part of the reason why you can not get a linear speedup for this using Python. It is better to set the number of threads in handythread.py: N threads Ratio of handythread.py to a for loop 1 0.995360257543 2 1.81112657674 3 2.51939329739 4 2.95551097958 5 3.04222213598 I do not get 100% of cpu time of each processor even for the for-loop part. So until that happens, threads are not going to be as good as they could be. Also, I can not comment on the OS but I do know some are better than others for threading performance. Regards Bruce On Thu, Feb 21, 2008 at 11:37 AM, Anand Patil wrote: > Bruce, > > > > from numpy import ones, exp > > import time > > > > if __name__=='__main__': > > def f(x): > > > > y = ones(10000000) > > exp(y) > > t1=time.time() > > foreach(f,range(100)) > > t2=time.time() > > for ndx in range(100): > > > > y = ones(10000000) > > exp(y) > > t3=time.time() > > print 'Handythread / simple loop)=, (t3-t2)/(t2-t1) > > > > With this code, the 'for loop' takes about 2.7 times as long as the > > handythread loop for a quad-core system. > > That's very interesting. I set the 'threads' option to 2, since I have > a dual-core system, and the handythread example is still only about > 1.5x faster than the for-loop example, even though I can see that both > my cores are being fully utilized. That could be because my machine > devotes a good fraction of one of its cores to just being a Mac, but > it doesn't look like that's what is making the difference. 
> > > The strange thing is that for me the 'for-loop' version above takes > 67s, whereas a version with f modified as follows: > > > def f(x): > y = ones(10000000) > # exp(y) > > takes 13s whether I use handythread or a for-loop. I think that means > 'ones' can only be executed by one thread at a time. Based on that, if > my machine had three free cores I would expect about a 2.16X speedup > tops, but you're seeing a 2.7X speedup. > > That means our machines are doing something differently (yours is > better). Do you see any speedup from handythread with the modified > version of f? > > > > Anand > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Thu Feb 21 13:43:21 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 21 Feb 2008 19:43:21 +0100 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <2bc7a5a50802210858p19a28e5bg7920e6ab9d839732@mail.gmail.com> References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> <2bc7a5a50802210858p19a28e5bg7920e6ab9d839732@mail.gmail.com> Message-ID: On 21/02/2008, Anand Patil wrote: > On Thu, Feb 21, 2008 at 8:30 AM, Anne Archibald > wrote: > > > It would be useful to go through the code looking at which ones > > do and don't release the GIL, and put it in their docstrings; it might > > be possible to make more release the GIL. It's a pretty safe bet that > > the ufuncs do; I would guess that the linear algebra functions do too. > > Probably not much else. > > > I second that suggestion. In fact I'd be willing to help out if it's a > tedious but simple job. Well, what needs to happen is that someone needs to go through and track down occurrences of Py_BEGIN_ALLOW_THREADS .. Py_END_ALLOW_THREADS in numpy. A brute-force way of finding code that probably doesn't do it would be to simply run each function in a foreach() with two threads and then with one and see if there's any speedup. Messy and crude; probably better just to look at the code, but numpy can be labyrinthine. > Makes sense. Does anyone know about f2py extensions with 'cf2py > threadsafe' set? From the f2py user's guide, the threadsafe option > will > > Use Py_BEGIN_ALLOW_THREADS .. Py_END_ALLOW_THREADS block around the > call to Fortran/C function. > > Is that sufficien to release the GIL? What if the functions have callbacks?? That's exactly what is needed to release the GIL. I think, from looking at the code, that F2PY does nothing to reacquire the GIL if it's entering a callback; this would mean that using callbacks in a "threadsafe" function would cause a crash. So Don't Do That. (But I'm not totally sure; maybe try generating one just to check.) Anne From peridot.faceted at gmail.com Thu Feb 21 14:00:56 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 21 Feb 2008 20:00:56 +0100 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <10745.76340.qm@web34408.mail.mud.yahoo.com> References: <10745.76340.qm@web34408.mail.mud.yahoo.com> Message-ID: On 21/02/2008, Lou Pecora wrote: > > (1) In your code if return_ = True I get a return > value from the foreach function only when nthreads>1, > but not when nthreads=1. Looking at the code the > nthreads=1 ends up in the else: at the bottom which > looks like: > > else: > if return_: > for v in l: > f(v) > else: > return > > and is puzzling. Nothing is returned in the if part > and f is not even called in the else part. Is this a > bug? 
Yep. Oops. Fixed in the v2 versions of the files. The wiki doesn't make a very good version control system. Is it worth incorporating those files into scipy? > (2) If I replace the sleep(0.5) call in your f > function with a loop that just does a simple > calculation to eat up time, then in the call to > foreach when nthreads=2 the time to run the code goes > up by factors of ~100 or so. I'm guessing here that > it's because the GIL is not release for my version, > but is release in the sleep(0.5) function in your > version. Is that right? Depends what your function does, really. If your function takes half a second but never releases the GIL, it should take twice as long. If your function releases the GIL, then it should take about the same time. But it's quite tricky to write a function that works hard and releases the GIL. A good rule of thumb is to count the lines of python that are getting executed. If there are only a few - say you're doing sum(log(exp(arange(1000000)))) - there's a good chance the GIL will be released. If you're running millions of python instructions, the GIL is held all that time, and you won't get a speedup. > (3) You mention that ctypes probably doesn't release > the GIL. I would guess that too, since it would be > dangerous as I (vaguely) understand the GIL. But does > the GIL have to be released in the Cextension or can > it be release in the step just before I call the C > extension from Python? I.e. is release on the Python > side possible? If not, I guess I will have to look > over the numpy code as you suggest. If possible, I > suppose the GIL must be enabled immediately on return > from the C extension. You can't execute any python bytecodes without holding the GIL, so it's impossible for python code to release the GIL. But it would be perfectly possible, in principle, for SWIG, F2PY, or ctypes to put a "release the GIL" in their wrappers. This will be a problem for some functions - either ones that aren't reentrant, or ones that call back to python (though in principle it might be possible to reacquire the GIL for the duration of a callback). But for a typical C function that acts only on data you give it and that doesn't know anything about python, it should be safe to run it without the GIL engaged. It seems like f2py can actually do this for functions marked as threadsafe; I don't know about ctypes or SWIG. Anne From matthieu.brucher at gmail.com Thu Feb 21 14:12:42 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 21 Feb 2008 20:12:42 +0100 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: References: <10745.76340.qm@web34408.mail.mud.yahoo.com> Message-ID: > > I don't know about ctypes or SWIG. > I blogged about this here : http://matt.eifelle.com/item/7 Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From anand.prabhakar.patil at gmail.com Thu Feb 21 14:39:48 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 21 Feb 2008 11:39:48 -0800 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: References: <10745.76340.qm@web34408.mail.mud.yahoo.com> Message-ID: <2bc7a5a50802211139u3cc0d0b6g4d3dbfda1cec4886@mail.gmail.com> > Yep. Oops. Fixed in the v2 versions of the files. The wiki doesn't > make a very good version control system. 
Is it worth incorporating > those files into scipy? I vote yes. In my opinion the following would combine to form a killer feature: - The handythread idea is developed a little, maybe to provide functionality comparable to OpenMP - Instructions for releasing the GIL in different extension types (swig, f2py, pyrex) are combined in one place - The numpy functions that release the GIL are clearly enumerated. Seriously, this is too big of a deal to be just a cookbook entry. I spent a full week last month beating my head against OpenMP trying to do something embarrassingly parallel in an f2py extension. I had to apply a patch to gcc 4.2's libgomp, compile it manually, learn how linking works, and try several other options because OpenMP was so frustrating. Now it works but I have tons of bug-prone code duplication in Fortran because I couldn't figure out how to just apply the same parallelism structure to all subroutines. The ability to multithread from Python would have saved me all of that work. Anand From lou_boog2000 at yahoo.com Thu Feb 21 14:47:45 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 21 Feb 2008 11:47:45 -0800 (PST) Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: Message-ID: <180343.643.qm@web34401.mail.mud.yahoo.com> --- Anne Archibald wrote: > Yep. Oops. Fixed in the v2 versions of the files. > The wiki doesn't > make a very good version control system. Is it worth > incorporating > those files into scipy? If you mean put a new version of the example code up to SciPy cookbook, then yes because bugs confuse newbies like me. :-) > Depends what your function does, really. If your > function takes half a > second but never releases the GIL, it should take > twice as long. If > your function releases the GIL, then it should take > about the same > time. But it's quite tricky to write a function that > works hard and > releases the GIL. A good rule of thumb is to count > the lines of python > that are getting executed. If there are only a few - > say you're doing > sum(log(exp(arange(1000000)))) - there's a good > chance the GIL will be > released. If you're running millions of python > instructions, the GIL > is held all that time, and you won't get a speedup. Hmmm... gotta think about that. > > (3) You mention that ctypes probably doesn't > release > > the GIL. I would guess that too, since it would > be > > dangerous as I (vaguely) understand the GIL. But > does > > the GIL have to be released in the Cextension or > can > > it be release in the step just before I call the > C > > extension from Python? I.e. is release on the > Python > > side possible? If not, I guess I will have to > look > > over the numpy code as you suggest. If possible, > I > > suppose the GIL must be enabled immediately on > return > > from the C extension. > > You can't execute any python bytecodes without > holding the GIL, so > it's impossible for python code to release the GIL. > But it would be > perfectly possible, in principle, for SWIG, F2PY, or > ctypes to put a > "release the GIL" in their wrappers. This will be a > problem for some > functions - either ones that aren't reentrant, or > ones that call back > to python (though in principle it might be possible > to reacquire the > GIL for the duration of a callback). But for a > typical C function that > acts only on data you give it and that doesn't know > anything about > python, it should be safe to run it without the GIL > engaged. 
It seems > like f2py can actually do this for functions marked > as threadsafe; I > don't know about ctypes or SWIG. Sounds like it's better to call those macros: Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS on the C side. Is that all that's needed? Then will code like your handythread.py work with threads if f calls a C extension that uses those macros? Or is there more that needs to be done to set this up? -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Never miss a thing. Make Yahoo your home page. http://www.yahoo.com/r/hs From lou_boog2000 at yahoo.com Thu Feb 21 14:52:49 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 21 Feb 2008 11:52:49 -0800 (PST) Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <2bc7a5a50802211139u3cc0d0b6g4d3dbfda1cec4886@mail.gmail.com> Message-ID: <668137.61758.qm@web34405.mail.mud.yahoo.com> --- Anand Patil wrote: > > Yep. Oops. Fixed in the v2 versions of the files. > The wiki doesn't > > make a very good version control system. Is it > worth incorporating > > those files into scipy? > > I vote yes. In my opinion the following would > combine to form a killer feature: > > - The handythread idea is developed a little, maybe > to provide > functionality comparable to OpenMP > - Instructions for releasing the GIL in different > extension types > (swig, f2py, pyrex) are combined in one place > - The numpy functions that release the GIL are > clearly enumerated. Yes, this is good, but I recognize that it's laying a lot of work on someone with initials A.A. I would be happy to have the handythread.py along with simple instructions of how to use Py_BEGIN_ALLOW_THREADS and Py_BEGIN_ALLOW_THREADS in the C extension to make it all work together ... Providing that can be done easily with a few C calls. Maybe it's more complicated than I realize. In which case: OY ! -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Never miss a thing. Make Yahoo your home page. http://www.yahoo.com/r/hs From anand.prabhakar.patil at gmail.com Thu Feb 21 15:01:29 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 21 Feb 2008 12:01:29 -0800 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <668137.61758.qm@web34405.mail.mud.yahoo.com> References: <2bc7a5a50802211139u3cc0d0b6g4d3dbfda1cec4886@mail.gmail.com> <668137.61758.qm@web34405.mail.mud.yahoo.com> Message-ID: <2bc7a5a50802211201h33211948t8484b7ed7cba645b@mail.gmail.com> On Thu, Feb 21, 2008 at 11:52 AM, Lou Pecora wrote: > > --- Anand Patil > wrote: > > > > > Yep. Oops. Fixed in the v2 versions of the files. > > The wiki doesn't > > > make a very good version control system. Is it > > worth incorporating > > > those files into scipy? > > > > I vote yes. In my opinion the following would > > combine to form a killer feature: > > > > - The handythread idea is developed a little, maybe > > to provide > > functionality comparable to OpenMP > > - Instructions for releasing the GIL in different > > extension types > > (swig, f2py, pyrex) are combined in one place > > - The numpy functions that release the GIL are > > clearly enumerated. > > Yes, this is good, but I recognize that it's laying a > lot of work on someone with initials A.A. I would be > happy to have the handythread.py along with simple > instructions of how to use Py_BEGIN_ALLOW_THREADS and > Py_BEGIN_ALLOW_THREADS in the C extension to make it > all work together ... 
Providing that can be done > easily with a few C calls. Maybe it's more > complicated than I realize. In which case: OY ! I guess I was kind of thinking other people might jump in. :-) Surely there are lots of us who want to multithread from Python? I've already volunteered to look through the numpy functions and find which ones release the GIL. I'd be happy to contribute to the handythread-like library, too. Anand From karl.young at ucsf.edu Thu Feb 21 15:09:03 2008 From: karl.young at ucsf.edu (Young, Karl) Date: Thu, 21 Feb 2008 12:09:03 -0800 Subject: [SciPy-user] Multithreading cookbook entry References: <10745.76340.qm@web34408.mail.mud.yahoo.com> <2bc7a5a50802211139u3cc0d0b6g4d3dbfda1cec4886@mail.gmail.com> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78039F0AA0@EXVS06.net.ucsf.edu> Sorry for the huge picture, really ignorant question (my specialty !) but why couldn't all this be encapsulated in something like a version of MPI for python (since I don't know anything about the GIL I assume that might be the answer) - I recall years ago that Sun had a beautiful design (amazing !) re. their version of MPI that ran on their NUMA architecture - the library allowed the code to know whether one was communicating/running in shared and/or distributed memory and dispatched things appropriately (i.e. threaded or not). I know this is a bit futuristic but all of this should just be transparent to the non expert user; one should only have to learn details when parallelization is non-trivial. Maybe MPI isn't the best model for consolidation but it seems to abstract parallel operations in a reasonable way. Karl Young Center for Imaging of Neurodegenerative Disease, UCSF VA Medical Center, MRS Unit (114M) Phone: (415) 221-4810 x3114 FAX: (415) 668-2864 Email: karl young at ucsf edu -----Original Message----- From: scipy-user-bounces at scipy.org on behalf of Anand Patil Sent: Thu 2/21/2008 11:39 AM To: SciPy Users List Subject: Re: [SciPy-user] Multithreading cookbook entry > Yep. Oops. Fixed in the v2 versions of the files. The wiki doesn't > make a very good version control system. Is it worth incorporating > those files into scipy? I vote yes. In my opinion the following would combine to form a killer feature: - The handythread idea is developed a little, maybe to provide functionality comparable to OpenMP - Instructions for releasing the GIL in different extension types (swig, f2py, pyrex) are combined in one place - The numpy functions that release the GIL are clearly enumerated. Seriously, this is too big of a deal to be just a cookbook entry. I spent a full week last month beating my head against OpenMP trying to do something embarrassingly parallel in an f2py extension. I had to apply a patch to gcc 4.2's libgomp, compile it manually, learn how linking works, and try several other options because OpenMP was so frustrating. Now it works but I have tons of bug-prone code duplication in Fortran because I couldn't figure out how to just apply the same parallelism structure to all subroutines. The ability to multithread from Python would have saved me all of that work. 
Anand _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From charlie.xia.fdu at gmail.com Thu Feb 21 16:06:37 2008 From: charlie.xia.fdu at gmail.com (charlie) Date: Thu, 21 Feb 2008 13:06:37 -0800 Subject: [SciPy-user] Questions on scipy.io.read_array() In-Reply-To: <91cf711d0802210949w4aeb2b9bm6e41e29ff705307e@mail.gmail.com> References: <11c6cf4e0802201942j2969c859s7339f4180aa965b2@mail.gmail.com> <91cf711d0802210949w4aeb2b9bm6e41e29ff705307e@mail.gmail.com> Message-ID: <11c6cf4e0802211306k2f2f0bdcve18976dccbab33bf@mail.gmail.com> Hi, David I guess this function will work! And thanks to your suggestions. I will take a look at these two modules and surely I will write some feedback if they are useful to my project. Charlie On Thu, Feb 21, 2008 at 9:49 AM, David Huard wrote: > Charlie, > > Numpy has a module called ma providing an array object that deals with > missing values. It still lack however an official loadtxt function, but I > worked on one a while ago. If you end up using it, I'd be grateful if you > could provide some feedback. As for the bug in scipy.io, I think this > function is being replaced by numpy.loadtxt. > > If you are dealing with time series, look at the timeseries module in > scikits (only in SVN for now). > > Cheers, > > David > > > > 2008/2/20, charlie : > > > > Hi, > > > > I am a newbie to scipy. > > I am currently using it to deal with some statistical problems with > > possible missing values. > > these values are labeled 'na' in my data file. > > However when I tried to read in my data into an array and substitute > > 'na' with -1 (for example) by: > > read_array( datafile, ..., missing=-1) > > The array I got doesn't cast 'na' value into -1, but 0 - the default > > value of parameter "missing". > > And when I check mail list, I found the issue has already be raised by > > Joris De Ridder: > > > > http://article.gmane.org/gmane.comp.python.scientific.user/3700/match=read%5farray+missing > > So I guess there is something wrong with regard to scipy.io library. > > Does anybody come across the same problem? > > Should I raise a ticket for this seemingly bug? > > > > Also, I'd like to ask for two general questions: > > first, how efficient is python+numpy+scipy 's with major calls to > > statistics distribution functions, > > as compared to Matlab, C++ with CEPHES or GSL, and etc. > > I compared it with my old R program, it seems python+numpy+scipy is > > little bit faster. > > Can anybody provide with some references to this? > > > > Another question is there a good package handle missing values well > > within scipy? > > Such as it can store the value as missing and fill it with different > > inference method when desired. > > > > Thanks! > > > > Charlie > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Thu Feb 21 16:27:09 2008 From: theller at ctypes.org (Thomas Heller) Date: Thu, 21 Feb 2008 22:27:09 +0100 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: References: <2bc7a5a50802202022t8691292i4efaac859a730b3@mail.gmail.com> Message-ID: Anne Archibald schrieb: > In general, the answer is that if a C extension can function outside > the GIL, it has to explicitly release it. 
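For the ctypes case that comes up just below, the Python side looks roughly like the sketch here. The shared library mylib.so and its routine heavy() are hypothetical stand-ins for whatever compiled code does the real work; the point is only that ctypes drops the GIL for the duration of the foreign call (as Thomas Heller and Robert Kern explain further down), so plain Python threads can then run such calls in parallel.

import ctypes
import threading
import numpy as np

# Hypothetical compiled routine: void heavy(double *x, int n), built into
# mylib.so.  ctypes releases the GIL while heavy() runs, so two threads can
# occupy two cores, provided heavy() itself is thread safe and the threads
# work on disjoint pieces of data.
lib = ctypes.CDLL('./mylib.so')
lib.heavy.restype = None
lib.heavy.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]

def run_on(chunk):
    ptr = chunk.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    lib.heavy(ptr, chunk.size)

x = np.random.rand(2000000)
threads = [threading.Thread(target=run_on, args=(half,))
           for half in np.array_split(x, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()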
TBH, I'm not sure what it > has to do first to make sure the interpreter is in a safe state - > maybe nothing - but it has to explicitly declare that it's not going > to modify any interpreter state. > > Many numpy functions - exp is obviously an example - do this. Others > don't. It would be useful to go through the code looking at which ones > do and don't release the GIL, and put it in their docstrings; it might > be possible to make more release the GIL. It's a pretty safe bet that > the ufuncs do; I would guess that the linear algebra functions do too. > Probably not much else. > > If an extension uses ctypes, whether it releases the GIL is up to > ctypes. I would guess that it doesn't, since ctypes knows nothing > about the C function, but I have never actually used ctypes. Of course does ctypes release the GIL on foreign function calls. And the GIL is acquired if Python implemented callback functions call back into Python code. There is nothing that ctypes needs to know about the C function - if the C function is not thread safe, you must not call it from other threads. Except - if the C function makes Python api calls, however, the GIL must not be released. In this case you should use the Python calling convention; for details look up the docs (pydll and such). This is even documented ;-) Thomas From R.Springuel at umit.maine.edu Thu Feb 21 17:01:00 2008 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Thu, 21 Feb 2008 17:01:00 -0500 Subject: [SciPy-user] Count Message-ID: <47BDF49C.9050004@umit.maine.edu> Is there a numpy or scipy command that works on arrays like the count property works on lists? I.e. if I want to know how many times a certain value occurs in an array, is there a single command that will allow me to do that? -- R. Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By appointment only From dwf at cs.toronto.edu Thu Feb 21 17:07:59 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 21 Feb 2008 17:07:59 -0500 Subject: [SciPy-user] Count In-Reply-To: <47BDF49C.9050004@umit.maine.edu> References: <47BDF49C.9050004@umit.maine.edu> Message-ID: <436AE72D-75D6-4800-90F2-E94E5E26C23A@cs.toronto.edu> Hi, One solution is to do a boolean comparison and then call sum() on the resulting boolean array. It'll treat the True's as 1's and so you end up with the number of occurrences. i.e. In [2]: x = array([2,2,2,2,3,4,5,6,7,8]) In [3]: x == 2 Out[3]: array([ True, True, True, True, False, False, False, False, False, False], dtype=bool) In [4]: sum(x == 2) Out[4]: 4 There might be other ways, of course. David On 21-Feb-08, at 5:01 PM, R. Padraic Springuel wrote: > Is there a numpy or scipy command that works on arrays like the count > property works on lists? > > I.e. if I want to know how many times a certain value occurs in an > array, is there a single command that will allow me to do that? > -- > > R. 
Padraic Springuel > Research Assistant > Department of Physics and Astronomy > University of Maine > Bennett 309 > Office Hours: By appointment only > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From stefan at sun.ac.za Thu Feb 21 18:38:44 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 22 Feb 2008 01:38:44 +0200 Subject: [SciPy-user] Count In-Reply-To: <436AE72D-75D6-4800-90F2-E94E5E26C23A@cs.toronto.edu> References: <47BDF49C.9050004@umit.maine.edu> <436AE72D-75D6-4800-90F2-E94E5E26C23A@cs.toronto.edu> Message-ID: <20080221233844.GE8095@mentat.za.net> On Thu, Feb 21, 2008 at 05:07:59PM -0500, David Warde-Farley wrote: > Hi, > > One solution is to do a boolean comparison and then call sum() on the > resulting boolean array. It'll treat the True's as 1's and so you end > up with the number of occurrences. > > i.e. > > In [2]: x = array([2,2,2,2,3,4,5,6,7,8]) > > In [3]: x == 2 > Out[3]: array([ True, True, True, True, False, False, False, False, > False, False], dtype=bool) > > In [4]: sum(x == 2) > Out[4]: 4 > > > There might be other ways, of course. If you need to know the occurrence of all values, you can calculate the histogram: In [32]: x = np.array([2,2,2,2,3,4,5,6,7,8]) In [33]: np.histogram(x,np.unique(x)) Out[33]: (array([4, 1, 1, 1, 1, 1, 1]), array([2, 3, 4, 5, 6, 7, 8])) Regards Stefan From lou_boog2000 at yahoo.com Thu Feb 21 19:02:00 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 21 Feb 2008 16:02:00 -0800 (PST) Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: Message-ID: <459674.9216.qm@web34414.mail.mud.yahoo.com> --- Thomas Heller wrote: > Of course does ctypes release the GIL on foreign > function calls. And the GIL > is acquired if Python implemented callback functions > call back into > Python code. I'm sorry, I don't understand what you just said. Can you restate it? I will also check the ctypes docs. > There is nothing that ctypes needs to know about the > C function - if the > C function is not thread safe, you must not call it > from other threads. How do I tell if the C function is thread safe? > Except - if the C function makes Python api calls, > however, the GIL must not be > released. In this case you should use the Python > calling convention; for details > look up the docs (pydll and such). My C function will make NO Python API calls. Can I just call the Py_BEGIN_ALLOW_THREADS and Py_BEGIN_ALLOW_THREADS macros in the C function to allow return to another thread while the C function calculates? Can the C function be called for another thread? There are lots of docs. Which do you suggest for me? -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ From robert.kern at gmail.com Thu Feb 21 19:38:19 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 Feb 2008 18:38:19 -0600 Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <459674.9216.qm@web34414.mail.mud.yahoo.com> References: <459674.9216.qm@web34414.mail.mud.yahoo.com> Message-ID: <3d375d730802211638r2356ef4n89341e29bb444eb1@mail.gmail.com> On Thu, Feb 21, 2008 at 6:02 PM, Lou Pecora wrote: > > --- Thomas Heller wrote: > > > Of course does ctypes release the GIL on foreign > > function calls. 
And the GIL > > is acquired if Python implemented callback functions > > call back into > > Python code. > > I'm sorry, I don't understand what you just said. Can > you restate it? I will also check the ctypes docs. ctypes releases the GIL when it calls a C function. Some C functions take callbacks; ctypes lets you pass Python functions as these callbacks. There is a C stub wrapped around the Python function to handle the communication. This stub reacquires the GIL before calling the Python function. > > There is nothing that ctypes needs to know about the > > C function - if the > > C function is not thread safe, you must not call it > > from other threads. > > How do I tell if the C function is thread safe? You have to analyze the C function and the way you are calling it. It's not necessarily an easy thing. Basically, you have to make sure that concurrent calls to your functions don't touch the same data. > > Except - if the C function makes Python api calls, > > however, the GIL must not be > > released. In this case you should use the Python > > calling convention; for details > > look up the docs (pydll and such). > > My C function will make NO Python API calls. Can I > just call the Py_BEGIN_ALLOW_THREADS and > Py_BEGIN_ALLOW_THREADS macros in the C function to > allow return to another thread while the C function > calculates? With ctypes, this is not necessary. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Fri Feb 22 07:47:43 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 22 Feb 2008 04:47:43 -0800 (PST) Subject: [SciPy-user] Multithreading cookbook entry In-Reply-To: <3d375d730802211638r2356ef4n89341e29bb444eb1@mail.gmail.com> Message-ID: <435664.62737.qm@web34401.mail.mud.yahoo.com> --- Robert Kern wrote: > ctypes releases the GIL when it calls a C function. > Some C functions > take callbacks; ctypes lets you pass Python > functions as these > callbacks. There is a C stub wrapped around the > Python function to > handle the communication. This stub reacquires the > GIL before calling > the Python function. > > How do I tell if the C function is thread safe? > You have to analyze the C function and the way you > are calling it. > It's not necessarily an easy thing. Basically, you > have to make sure > that concurrent calls to your functions don't touch > the same data. > > My C function will make NO Python API calls. Can > I > > just call the Py_BEGIN_ALLOW_THREADS and > > Py_BEGIN_ALLOW_THREADS macros in the C function > to > > allow return to another thread while the C > function > > calculates? > With ctypes, this is not necessary. Robert, thanks very much for clarifying that. I get it. ctypes is certainly more sophisticated than I realized! Very nice. I am even more in debt to those who pushed me to use it. -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Never miss a thing. Make Yahoo your home page. http://www.yahoo.com/r/hs From markbak at gmail.com Fri Feb 22 08:39:02 2008 From: markbak at gmail.com (Mark Bakker) Date: Fri, 22 Feb 2008 14:39:02 +0100 Subject: [SciPy-user] scipy.special.kv Message-ID: <6946b9500802220539p6b391993q3586f9474eb01603@mail.gmail.com> I have tried it on my platform (win32) and don't get the error. So it definitely seems platform related. 
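One quick way to probe the kv/iv behaviour discussed here is to compare the plain functions against their exponentially scaled counterparts in scipy.special, which stay well behaved at large argument; for real positive x the relations iv = ive*exp(x) and kv = kve*exp(-x) hold, so undoing the scaling gives an independent cross-check. This is only a sanity check, not a fix for the reported bug.

import numpy as np
from scipy import special

x = np.array([1.0, 10.0, 100.0, 500.0])
v = 2.5

# Plain functions: these are the ones reported to misbehave for large
# argument on some platforms.
print(special.iv(v, x))
print(special.kv(v, x))

# Exponentially scaled variants, with the scaling undone, should agree
# with the values above wherever those are correct.
print(special.ive(v, x) * np.exp(x))
print(special.kve(v, x) * np.exp(-x))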
I do get erroneous answers for iv with large argument. A bug report has been filed, but it has not been resolved, as far as I know. Mark > Date: Thu, 21 Feb 2008 12:56:21 +0000 (GMT) > From: Alastair Basden > Subject: Re: [SciPy-user] scipy.special.kv > To: scipy-user at scipy.org > Message-ID: > Content-Type: TEXT/PLAIN; charset=US-ASCII > > Hi, > I've noticed that this problem may have been identified (without a > resolution) previously: > > http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2567409 > > I wonder if this is a bug that has been reintroduced, or only present on > certain platforms or something? > > Thanks... > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berthe.loic at gmail.com Fri Feb 22 13:19:47 2008 From: berthe.loic at gmail.com (LB) Date: Fri, 22 Feb 2008 10:19:47 -0800 (PST) Subject: [SciPy-user] Scipy.test() fails on Linux 64 In-Reply-To: References: Message-ID: I investigated a little bit, and now I'm very close to make numpy 1.0.4 and scipy 0.6 work, and I wanted to share the information : - concerning the odr failure : this pb is related in the ticket #357 and fixed by the changeset 3498. Applying the corresponding patch solves the problem - concerning the scipy.linalg failure, I didn't find any ticket, but I've tried to compile numpy+scipy on different computers and this seems linked to the compiler. Inded, these failures disappear with recent version of gcc (4.2) and withgfortran. I d'ont know if this is linked to gcc or to gfortran. Changing the version of the compiler was not enough to solve the case, as a new pb appeared : runing the test suite caused an illegal instruction. After checking Trac, I saw that a patch already existed. This is related to the ticket #404 and the corresponding patch is in the changeset 3450. I've still have one failure during the test suite, with the lapack.float test, but this seems to be acceptable. Now, I will have to compile all this on another machine where I should Use the Portland Group Fortran compiler. Does anybody have any feedback with this compiler ? -- LB From josegomez at gmx.net Fri Feb 22 13:56:33 2008 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Fri, 22 Feb 2008 19:56:33 +0100 Subject: [SciPy-user] Weave and scipy/other libs Message-ID: <20080222185633.139930@gmx.net> Hi! I am thinking of rewriting some bit of heavy processing using weave. However, it would be nice to be able to use some of scipy's functionality, as well as that of other python libraries. From within the weave C++ code, is it possible to call these python functions? Thanks! Jose -- Psssst! Schon vom neuen GMX MultiMessenger geh?rt? Der kann`s mit allen: http://www.gmx.net/de/go/multimessenger From matthieu.brucher at gmail.com Fri Feb 22 14:15:40 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 22 Feb 2008 20:15:40 +0100 Subject: [SciPy-user] Weave and scipy/other libs In-Reply-To: <20080222185633.139930@gmx.net> References: <20080222185633.139930@gmx.net> Message-ID: Hi, Not directly, I don't think so. Weave code is just embedded in another C++ file, so the same rules as for usual C++ code apply (unless I'm mistaken ;)) Matthieu 2008/2/22, Jose Luis Gomez Dans : > > Hi! > I am thinking of rewriting some bit of heavy processing using weave. > However, it would be nice to be able to use some of scipy's functionality, > as well as that of other python libraries. From within the weave C++ code, > is it possible to call these python functions? > > Thanks! 
> Jose > > -- > Psssst! Schon vom neuen GMX MultiMessenger geh?rt? > Der kann`s mit allen: http://www.gmx.net/de/go/multimessenger > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Fri Feb 22 14:43:11 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 22 Feb 2008 13:43:11 -0600 Subject: [SciPy-user] Weave and scipy/other libs In-Reply-To: <20080222185633.139930@gmx.net> References: <20080222185633.139930@gmx.net> Message-ID: <47BF25CF.5050102@enthought.com> Jose Luis Gomez Dans wrote: > Hi! > I am thinking of rewriting some bit of heavy processing using weave. However, it would be nice to be able to use some of scipy's functionality, as well as that of other python libraries. From within the weave C++ code, is it possible to call these python functions? > You can call it using the standard Python API (PyObject_Call, etc.). -Travis O. From eric at enthought.com Fri Feb 22 17:49:04 2008 From: eric at enthought.com (eric jones) Date: Fri, 22 Feb 2008 16:49:04 -0600 Subject: [SciPy-user] Weave and scipy/other libs In-Reply-To: <20080222185633.139930@gmx.net> References: <20080222185633.139930@gmx.net> Message-ID: <47BF5160.9080302@enthought.com> Here is a quick example showing how to do it. Also look at functional.py in the examples directory. Note that it will not be any faster than standard Python, because the big cost is the call into Python. However, in combination with other weaved calculations, perhaps it is potentially faster -- although I haven't found occasion to use this capability much. eric # example from numpy import array, sum from scipy import weave a = array((1,2,3,4)) func = sum # Note the use of py_a in this example, because a is the int* pointer # to the actual data, and weave doesn't know how to coerce it back to # a numpy array. Instead, use the equivalent python object (py_a); code = """ py::tuple args(1); args[0] = py_a; return_val = func.call(args); """ result = weave.inline(code,['func','a']) print result # A list doesn't take as much work, because it is represented as an py::list a = [1,2,3,4] code = """ py::tuple args(1); args[0] = a; return_val = func.call(args); """ result = weave.inline(code,['func','a']) print result Jose Luis Gomez Dans wrote: > Hi! > I am thinking of rewriting some bit of heavy processing using weave. However, it would be nice to be able to use some of scipy's functionality, as well as that of other python libraries. From within the weave C++ code, is it possible to call these python functions? > > Thanks! 
> Jose > From wkerzendorf at googlemail.com Fri Feb 22 22:25:22 2008 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 23 Feb 2008 12:25:22 +0900 Subject: [SciPy-user] scipy interpolate on Leopard 10.5.2 (x86) Message-ID: <13B7C461-A9B5-4073-9EDD-204DBE7EED7C@gmail.com> Dear all, When importing scipy.interpolate I get the following error: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ site-packages/scipy/special/_cephes.so, 2): Symbol not found: ___libm_sse2_atan Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/ lib/python2.5/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup Any ideas? Thanks in advance Wolfgang From robert.kern at gmail.com Fri Feb 22 22:28:17 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Feb 2008 21:28:17 -0600 Subject: [SciPy-user] scipy interpolate on Leopard 10.5.2 (x86) In-Reply-To: <13B7C461-A9B5-4073-9EDD-204DBE7EED7C@gmail.com> References: <13B7C461-A9B5-4073-9EDD-204DBE7EED7C@gmail.com> Message-ID: <3d375d730802221928t2282bef2ifbacc3180a795191@mail.gmail.com> On Fri, Feb 22, 2008 at 9:25 PM, Wolfgang Kerzendorf wrote: > Dear all, > When importing scipy.interpolate I get the following error: > dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ > site-packages/scipy/special/_cephes.so, 2): Symbol not found: > ___libm_sse2_atan > Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/ > lib/python2.5/site-packages/scipy/special/_cephes.so > Expected in: dynamic lookup How did you build scipy? If you didn't build it yourself, where did you get the binary from? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wkerzendorf at googlemail.com Fri Feb 22 22:34:18 2008 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 23 Feb 2008 12:34:18 +0900 Subject: [SciPy-user] scipy interpolate on Leopard 10.5.2 (x86) In-Reply-To: <3d375d730802221928t2282bef2ifbacc3180a795191@mail.gmail.com> References: <13B7C461-A9B5-4073-9EDD-204DBE7EED7C@gmail.com> <3d375d730802221928t2282bef2ifbacc3180a795191@mail.gmail.com> Message-ID: Dear Robert, I used the following guide to build scipty on leopard. http://www.scipy.org/Installing_SciPy/Mac_OS_X It seemed to have used my intel fortran compiler and gcc: i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465) I hope that helps Thanks in advance WOlfgang On 23/02/2008, at 12:28, Robert Kern wrote: > On Fri, Feb 22, 2008 at 9:25 PM, Wolfgang Kerzendorf > wrote: >> Dear all, >> When importing scipy.interpolate I get the following error: >> dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/ >> site-packages/scipy/special/_cephes.so, 2): Symbol not found: >> ___libm_sse2_atan >> Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/ >> lib/python2.5/site-packages/scipy/special/_cephes.so >> Expected in: dynamic lookup > > How did you build scipy? If you didn't build it yourself, where did > you get the binary from? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Fri Feb 22 22:45:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Feb 2008 21:45:43 -0600 Subject: [SciPy-user] scipy interpolate on Leopard 10.5.2 (x86) In-Reply-To: References: <13B7C461-A9B5-4073-9EDD-204DBE7EED7C@gmail.com> <3d375d730802221928t2282bef2ifbacc3180a795191@mail.gmail.com> Message-ID: <3d375d730802221945y7619e63blc74439a57a56e637@mail.gmail.com> On Fri, Feb 22, 2008 at 9:34 PM, Wolfgang Kerzendorf wrote: > Dear Robert, > I used the following guide to build scipty on leopard. > http://www.scipy.org/Installing_SciPy/Mac_OS_X > It seemed to have used my intel fortran compiler and gcc: > i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465) It is possible that the correct math library for the Intel Fortran compiler did not get linked in correctly. Alternately, you may not have ifc's environment set up at runtime; the math library won't be found, then. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sat Feb 23 05:21:26 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 23 Feb 2008 19:21:26 +0900 Subject: [SciPy-user] How to tell scipy setup that I have a INTEL fortran ATLAS/BLAS/LAPACK instead of g77 In-Reply-To: References: Message-ID: <47BFF3A6.2090701@ar.media.kyoto-u.ac.jp> Berthold H?llmann wrote: > (Message resent, i was not subscribed, so yesterdays version reached > the moderator only) > > No matter what I do, I can't tell scipy to use the INTEL fortran API > conventions instead of the g77 conventions for fortran routine names > containing underscores: > > hoel at pc047299:scipy-0.6.0 nm /usr/local/gltools/linux/lib/libf77blas_ifc91.so.3.8|grep atl_f77wrap_dtrsv > 0000cfe0 T atl_f77wrap_dtrsv_ > hoel at pc047299:scipy-0.6.0 nm build/lib.linux-i686-2.5/scipy/linsolve/_zsuperlu.so| grep atl_f77wrap_dtrsv > U atl_f77wrap_dtrsv__ > > How can I set up scipy in a way that superlu tries to access > atl_f77wrap_dtrsv_ instead of atl_f77wrap_dtrsv__? > Hi, You don't give enough details to answer you completely (which compiler are you using for ATLAS and for numpy), but assuming you did compile atlas with intel compiler and numpy with g77, this will not work. You cannot tell g77 to follow "intel" convention (different mangling is only the tip of the iceberg; other issues are more subtle and more difficult to track). You should use the same fortran compiler for numpy and for atlas. Mixing fortran compilers is not a good idea, and will often give unpredictable results. If your problem is telling numpy to be compiled with intel fortran compiler, than this is what you should use: python setup.py build --fcompiler=intel cheers, David From luke.olson at gmail.com Sat Feb 23 14:24:20 2008 From: luke.olson at gmail.com (Luke Olson) Date: Sat, 23 Feb 2008 19:24:20 +0000 (UTC) Subject: [SciPy-user] scipy mac os x leopard installation: scipy.test(1, 10) fails Message-ID: I've been following the installation procedure for Mac OS X Leopard here: http://www.scipy.org/Installing_SciPy/Mac_OS_X I'm using the Apple python installation and the svn numpy as on the page. 
After the build line in the link above completes, scipy.test(1,10) reported that the nose package was missing, so I installed it. Now I'm left with the following error from scipy.test(1,10): -> import scipy -> scipy.test(1,10) Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.5/site-packages/scipy/testing/nosetester.py", line 115, in test argv = self._test_argv(label, verbose, extra_argv) File "/Library/Python/2.5/site-packages/scipy/testing/nosetester.py", line 98, in _test_argv raise TypeError, 'Selection label should be a string' TypeError: Selection label should be a string Any ideas? From gnurser at googlemail.com Sat Feb 23 15:09:23 2008 From: gnurser at googlemail.com (George Nurser) Date: Sat, 23 Feb 2008 20:09:23 +0000 Subject: [SciPy-user] scipy mac os x leopard installation: scipy.test(1, 10) fails In-Reply-To: References: Message-ID: <1d1e6ea70802231209s374f6ffbw450b7ac83aafdb7d@mail.gmail.com> It wants the number as a string. e.g. scipy.test('10') works, as does scipy.test('1,10') HTH. George Nurser. From luke.olson at gmail.com Sat Feb 23 15:36:07 2008 From: luke.olson at gmail.com (Luke Olson) Date: Sat, 23 Feb 2008 20:36:07 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?scipy_mac_os_x_leopard_installation=3A_sci?= =?utf-8?b?cHkudGVzdCgxLAkxMCkgZmFpbHM=?= References: <1d1e6ea70802231209s374f6ffbw450b7ac83aafdb7d@mail.gmail.com> Message-ID: George Nurser googlemail.com> writes: > > It wants the number as a string. > e.g. > scipy.test('10') works, as does scipy.test('1,10') > > HTH. George Nurser. > Oops...I'll read the error message next time :) Thanks. From manuhack at gmail.com Sat Feb 23 22:55:40 2008 From: manuhack at gmail.com (Manu Hack) Date: Sat, 23 Feb 2008 22:55:40 -0500 Subject: [SciPy-user] quantile function in scipy Message-ID: <50af02ed0802231955o74075112tf0ec4511c8f181ab@mail.gmail.com> Hi, I'd like to know if there is any function to find the quantiles, given two lists where one is the value and another the frequency. For example, v = [1, 2, 3], freq = [3, 2, 1], then I would like to have the quantiles for the samples: 1, 1, 1, 2, 2, 3. Have browsed around the doc of scipy.stats but didn't get any thing close. Thanks a lot. Manu From david at ar.media.kyoto-u.ac.jp Sun Feb 24 00:58:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 24 Feb 2008 14:58:36 +0900 Subject: [SciPy-user] quantile function in scipy In-Reply-To: <50af02ed0802231955o74075112tf0ec4511c8f181ab@mail.gmail.com> References: <50af02ed0802231955o74075112tf0ec4511c8f181ab@mail.gmail.com> Message-ID: <47C1078C.6000208@ar.media.kyoto-u.ac.jp> Manu Hack wrote: > Hi, > > I'd like to know if there is any function to find the quantiles, given > two lists where one is the value and another the frequency. > > For example, v = [1, 2, 3], freq = [3, 2, 1], then I would like to > have the quantiles for the samples: 1, 1, 1, 2, 2, 3. > > Have browsed around the doc of scipy.stats but didn't get any thing > close. Thanks a lot. > > Would scipy.stats.percentileofscore solve your problem ? 
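If expanding the (value, frequency) pairs into a full sample is not an option, the quantile can also be read directly off the cumulative frequencies. The sketch below uses the simplest "first value whose cumulative count reaches q*N" convention (other interpolation rules give slightly different answers), and the function name freq_quantile is made up for the example.

import numpy as np

def freq_quantile(values, freqs, q):
    # Quantile of the sample described by (value, frequency) pairs,
    # without ever building the expanded sample.
    values = np.asarray(values, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    order = np.argsort(values)
    values, freqs = values[order], freqs[order]
    cum = np.cumsum(freqs)              # cumulative counts
    target = q * cum[-1]                # position of the requested quantile
    idx = np.searchsorted(cum, target)  # first bin whose cumulative count reaches it
    return values[min(idx, len(values) - 1)]

# v = [1, 2, 3] with frequencies [3, 2, 1] stands for the sample 1,1,1,2,2,3
print(freq_quantile([1, 2, 3], [3, 2, 1], 0.5))   # -> 1.0 with this convention
print(freq_quantile([1, 2, 3], [3, 2, 1], 0.9))   # -> 3.0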
cheers, David From manuhack at gmail.com Sun Feb 24 01:13:53 2008 From: manuhack at gmail.com (Manu Hack) Date: Sun, 24 Feb 2008 01:13:53 -0500 Subject: [SciPy-user] quantile function in scipy In-Reply-To: <47C1078C.6000208@ar.media.kyoto-u.ac.jp> References: <50af02ed0802231955o74075112tf0ec4511c8f181ab@mail.gmail.com> <47C1078C.6000208@ar.media.kyoto-u.ac.jp> Message-ID: <50af02ed0802232213w57c91549o50caff02468db8a5@mail.gmail.com> On Sun, Feb 24, 2008 at 12:58 AM, David Cournapeau wrote: > > Manu Hack wrote: > > Hi, > > > > I'd like to know if there is any function to find the quantiles, given > > two lists where one is the value and another the frequency. > > > > For example, v = [1, 2, 3], freq = [3, 2, 1], then I would like to > > have the quantiles for the samples: 1, 1, 1, 2, 2, 3. > > > > Have browsed around the doc of scipy.stats but didn't get any thing > > close. Thanks a lot. > > > > > Would scipy.stats.percentileofscore solve your problem ? It's close. But the problem is that the freq list in my application is going to be a huge number so to make a list from v and freq to put in that function may not be feasible. I looked at the source code, in the comment it suggests the Gnu R functions. So I may be using the wtd.quantile under Hmisc library of R via rpy. Thanks. Manu From jeremy.mayes at gmail.com Sun Feb 24 13:34:18 2008 From: jeremy.mayes at gmail.com (Jeremy Mayes) Date: Sun, 24 Feb 2008 12:34:18 -0600 Subject: [SciPy-user] Trying to build scipy 32-bit on a 64-bit machine Message-ID: <890c2bf00802241034r175f9ce8h1c388f7e7739d201@mail.gmail.com> Hi, I'm trying to build scipy for a target of i686 ( i.e., 32-bit ) but on an x86_64 host using gcc/4.1.1. I've been struggling with this and haven't seen any reference in the archives ( I apologize if I missed it ). I've been trying to set CFLAGS and LDFLAGS to pass -m32, but, I get undefined symbol errors ( MAIN__ ). If I just let it run, then, I get errors with LONG_BIT defined in pyport.h ( python successfully build 32-bit ). Any pointers? -- --jlm -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Mon Feb 25 06:12:12 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 25 Feb 2008 12:12:12 +0100 Subject: [SciPy-user] order in profiles and packages Message-ID: <47C2A28C.6080405@slac.stanford.edu> hello, I have a package and a profile to launch it. My profile looks like : -------------------- include ipythonrc execute print "\n WELCOME \n" import_mod my_package ------------------- and say my package has a __init__.py where, say, there is only a 'print 'HELLO' ' statement. When executing ipython -profile glastgrb I get HELLO WELCOME in other words the execute in the profile seems to be executed *after* the loading of my_package, despite the fact that it precedes the import_mod statement in the profile..... Is that normal? Am I doing something wrong? thanks in advance, Johann P.S. [cohen at jarrett GRBSCRIPTS]$ ipython -V 0.8.3.svn.r3001 From Christoph.Scheit at lstm.uni-erlangen.de Mon Feb 25 09:05:16 2008 From: Christoph.Scheit at lstm.uni-erlangen.de (Christoph Scheit) Date: Mon, 25 Feb 2008 15:05:16 +0100 Subject: [SciPy-user] handling of huge files for post-processing Message-ID: <47C2D92C0200002A000005E0@KAMILLA.rrze.uni-erlangen.de> Hello everybody, I get from a Fortran-Code (CFD) binary files containing the acoustic pressure at some distinct points. 
The files has N "lines" which look like this: TimeStep(int) DebugInfo (int) AcousticPressure(float) and is binary. My problem is now, that the file can be huge (> 100 MB) and that after several runs on a cluster indeed not only one but 20 - 50 files of that size are to be post-processed. Since the CFD code runs parallel, I have to sum up the results from different cpu's (cpu 1 calculates only a fraction of the acoustic pressure of point p and time step t, so that I have to sum over all cpu's) Currently I'm reading all the data into a sqlite-table, than I group the data, summing up over the processors and then I'm writing out files containing the data of the single points. This approach works for smaller files somehow, but does not seem to be working for big files like described above. Do you have some ideas on this problem? Thank you very much in advance, Christoph From david.huard at gmail.com Mon Feb 25 09:53:31 2008 From: david.huard at gmail.com (David Huard) Date: Mon, 25 Feb 2008 09:53:31 -0500 Subject: [SciPy-user] handling of huge files for post-processing In-Reply-To: <47C2D92C0200002A000005E0@KAMILLA.rrze.uni-erlangen.de> References: <47C2D92C0200002A000005E0@KAMILLA.rrze.uni-erlangen.de> Message-ID: <91cf711d0802250653g652df1f9mdd9aaa5adf869bc5@mail.gmail.com> Hi Cristoph, I am not sure exactly what causes your method to fail but it might be that you are trying to hold all the arrays in memory at once. Can you do your calculation using iterators/generators ? The idea is to load into memory only the part of the array that you need for a given calculation, store the result and continue iterating. I used to process ~2GB files using iterators from PyTables tables and it worked smoothly. David 2008/2/25, Christoph Scheit : > > Hello everybody, > > I get from a Fortran-Code (CFD) binary files containing > the acoustic pressure at some distinct points. > The files has N "lines" which look like this: > > TimeStep(int) DebugInfo (int) AcousticPressure(float) > > and is binary. My problem is now, that the file can be > huge (> 100 MB) and that after several runs on a cluster > indeed not only one but 20 - 50 files of that size are > to be post-processed. > > Since the CFD code runs parallel, I have to sum up > the results from different cpu's (cpu 1 calculates only > a fraction of the acoustic pressure of point p and time step > t, so that I have to sum over all cpu's) > > Currently I'm reading all the data into a sqlite-table, than > I group the data, summing up over the processors and > then I'm writing out files containing the data of the single > points. This approach works for smaller files somehow, > but does not seem to be working for big files like described > above. > > Do you have some ideas on this problem? Thank you very > much in advance, > > Christoph > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Mon Feb 25 09:58:13 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 25 Feb 2008 15:58:13 +0100 Subject: [SciPy-user] order in profiles and packages In-Reply-To: <47C2A28C.6080405@slac.stanford.edu> References: <47C2A28C.6080405@slac.stanford.edu> Message-ID: <47C2D785.9090405@slac.stanford.edu> my apologies, this was the wrong list.... I submitted it to ipython list. 
Johan From shane at vetta.org Mon Feb 25 11:14:27 2008 From: shane at vetta.org (Shane Legg) Date: Mon, 25 Feb 2008 17:14:27 +0100 Subject: [SciPy-user] Bug in matplotlib plot_wireframe? Message-ID: Hi, I'm new here so if this isn't the right place to ask just let me know where I should head. Thanks. I think there is a significant bug in plot_wireframe in matplotlib where it incorrectly displays the Z axis values. The code below demonstrates the problem: import scipy import pylab as p import matplotlib.axes3d as p3 from numpy import * """ # If you do a wire frame of the following, the graph is correct: Z = scipy.array( [[ 0.52, 0.00020], [ 0.45, 0.00018], [ 0.34, 0.00016]] ) """ # but if you put negative signs in: Z = scipy.array( [[ -0.52, -0.00020], [ -0.45, -0.00018], [ -0.34, -0.00016]] ) """ the graph displays: [[ -0.62, -0.10020 ], [ -0.55, -0.10018 ], [ -0.44, -0.10016 ]] """ X, Y = meshgrid(arange(0, 3, 1.0), arange(0, 4, 1.0)) fig = p.figure() ax = p3.Axes3D(fig) ax.plot_wireframe(X, Y, Z) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') p.show() I'm running Ubuntu 7.10 x64 with python 2.5.1-1ubuntu2 and python-scipy 0.5.2-9ubuntu4 both installed from the .deb files. I sent the above code to somebody with a 32bit Linux system and they had the same problem. Any help appreciated! Cheers Shane -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Feb 25 11:53:22 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 25 Feb 2008 10:53:22 -0600 Subject: [SciPy-user] Bug in matplotlib plot_wireframe? In-Reply-To: References: Message-ID: <3d375d730802250853j112bb67ah84847faef07b1255@mail.gmail.com> On Mon, Feb 25, 2008 at 10:14 AM, Shane Legg wrote: > Hi, > > I'm new here so if this isn't the right place to ask just let > me know where I should head. Thanks. The appropriate matplotlib list is here: https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Christoph.Scheit at lstm.uni-erlangen.de Tue Feb 26 04:01:05 2008 From: Christoph.Scheit at lstm.uni-erlangen.de (Christoph Scheit) Date: Tue, 26 Feb 2008 10:01:05 +0100 Subject: [SciPy-user] handling of huge files for post-processing Message-ID: <47C3E3610200002A000005FF@KAMILLA.rrze.uni-erlangen.de> Hello David, I guess that everythink is kept in memory... but I don't know how to handle this problem using iterators. Can you give me some more detail? You read your files all in once? One problem is, that, let's assume I have three files a, b and c, then b depends on data from a c depends on data from b (and maybe from a, but this might be not the case in 99%) This is due to differences in signal runtime... christoph ------------------------------ Message: 4 Date: Mon, 25 Feb 2008 09:53:31 -0500 From: "David Huard" Subject: Re: [SciPy-user] handling of huge files for post-processing To: "SciPy Users List" Message-ID: <91cf711d0802250653g652df1f9mdd9aaa5adf869bc5 at mail.gmail.com> Content-Type: text/plain; charset="iso-8859-1" Hi Cristoph, I am not sure exactly what causes your method to fail but it might be that you are trying to hold all the arrays in memory at once. Can you do your calculation using iterators/generators ? 
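Applied to the record format described earlier in this thread (TimeStep, DebugInfo, AcousticPressure), the chunked/iterator idea might look like the sketch below, assuming one binary file per CPU for a given observation point. The field widths, byte order, absence of Fortran record markers, the file names and the assumed number of time steps are all guesses that would have to be checked against the actual files.

import numpy as np

# Assumed layout of one record; adjust i4/f8 and the byte order to whatever
# the Fortran code really writes (unformatted Fortran output may also carry
# record-length markers that would have to be stripped first).
rec = np.dtype([('step', '<i4'), ('debug', '<i4'), ('p', '<f8')])

def accumulate(filename, totals, chunk=1000000):
    # Read a bounded chunk of records at a time and add this CPU's
    # contribution to totals[time step]; memory use stays limited to chunk.
    f = open(filename, 'rb')
    while True:
        block = np.fromfile(f, dtype=rec, count=chunk)
        if block.size == 0:
            break
        # np.add.at handles repeated time steps within one chunk correctly.
        np.add.at(totals, block['step'], block['p'])
    f.close()
    return totals

totals = np.zeros(200000)                          # assumed number of time steps
for fname in ['run_cpu00.bin', 'run_cpu01.bin']:   # hypothetical per-CPU files
    accumulate(fname, totals)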
The idea is to load into memory only the part of the array that you need for a given calculation, store the result and continue iterating. I used to process ~2GB files using iterators from PyTables tables and it worked smoothly. David 2008/2/25, Christoph Scheit : > > Hello everybody, > > I get from a Fortran-Code (CFD) binary files containing > the acoustic pressure at some distinct points. > The files has N "lines" which look like this: > > TimeStep(int) DebugInfo (int) AcousticPressure(float) > > and is binary. My problem is now, that the file can be > huge (> 100 MB) and that after several runs on a cluster > indeed not only one but 20 - 50 files of that size are > to be post-processed. > > Since the CFD code runs parallel, I have to sum up > the results from different cpu's (cpu 1 calculates only > a fraction of the acoustic pressure of point p and time step > t, so that I have to sum over all cpu's) > > Currently I'm reading all the data into a sqlite-table, than > I group the data, summing up over the processors and > then I'm writing out files containing the data of the single > points. This approach works for smaller files somehow, > but does not seem to be working for big files like described > above. > > Do you have some ideas on this problem? Thank you very > much in advance, > > Christoph > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20080225/33d1fb1c/attachment-0001.html ------------------------------ Message: 5 Date: Mon, 25 Feb 2008 15:58:13 +0100 From: Johann Cohen-Tanugi Subject: Re: [SciPy-user] order in profiles and packages To: SciPy Users List Message-ID: <47C2D785.9090405 at slac.stanford.edu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed my apologies, this was the wrong list.... I submitted it to ipython list. Johan ------------------------------ Message: 6 Date: Mon, 25 Feb 2008 17:14:27 +0100 From: "Shane Legg" Subject: [SciPy-user] Bug in matplotlib plot_wireframe? To: scipy-user at scipy.org Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi, I'm new here so if this isn't the right place to ask just let me know where I should head. Thanks. I think there is a significant bug in plot_wireframe in matplotlib where it incorrectly displays the Z axis values. The code below demonstrates the problem: import scipy import pylab as p import matplotlib.axes3d as p3 from numpy import * """ # If you do a wire frame of the following, the graph is correct: Z = scipy.array( [[ 0.52, 0.00020], [ 0.45, 0.00018], [ 0.34, 0.00016]] ) """ # but if you put negative signs in: Z = scipy.array( [[ -0.52, -0.00020], [ -0.45, -0.00018], [ -0.34, -0.00016]] ) """ the graph displays: [[ -0.62, -0.10020 ], [ -0.55, -0.10018 ], [ -0.44, -0.10016 ]] """ X, Y = meshgrid(arange(0, 3, 1.0), arange(0, 4, 1.0)) fig = p.figure() ax = p3.Axes3D(fig) ax.plot_wireframe(X, Y, Z) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') p.show() I'm running Ubuntu 7.10 x64 with python 2.5.1-1ubuntu2 and python-scipy 0.5.2-9ubuntu4 both installed from the .deb files. I sent the above code to somebody with a 32bit Linux system and they had the same problem. Any help appreciated! Cheers Shane -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20080225/6f9bbe82/attachment-0001.html ------------------------------ Message: 7 Date: Mon, 25 Feb 2008 10:53:22 -0600 From: "Robert Kern" Subject: Re: [SciPy-user] Bug in matplotlib plot_wireframe? To: shane at vetta.org, "SciPy Users List" Message-ID: <3d375d730802250853j112bb67ah84847faef07b1255 at mail.gmail.com> Content-Type: text/plain; charset=UTF-8 On Mon, Feb 25, 2008 at 10:14 AM, Shane Legg wrote: > Hi, > > I'm new here so if this isn't the right place to ask just let > me know where I should head. Thanks. The appropriate matplotlib list is here: https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco ------------------------------ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user End of SciPy-user Digest, Vol 54, Issue 48 ****************************************** From david at ar.media.kyoto-u.ac.jp Tue Feb 26 07:01:29 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 26 Feb 2008 21:01:29 +0900 Subject: [SciPy-user] Trying to build scipy 32-bit on a 64-bit machine In-Reply-To: <890c2bf00802241034r175f9ce8h1c388f7e7739d201@mail.gmail.com> References: <890c2bf00802241034r175f9ce8h1c388f7e7739d201@mail.gmail.com> Message-ID: <47C3FF99.1040505@ar.media.kyoto-u.ac.jp> Jeremy Mayes wrote: > Hi, > > I'm trying to build scipy for a target of i686 ( i.e., 32-bit ) but on > an x86_64 host using gcc/4.1.1. I've been struggling with this and > haven't seen any reference in the archives ( I apologize if I missed it ). It won't be easy: distutils (the python package used to build numpy) does not support cross-compiling. Already cross-compiling python itself is difficult, and you will need that first. > > I've been trying to set CFLAGS and LDFLAGS to pass -m32, but, I get > undefined symbol errors ( MAIN__ ). If I just let it run, then, I get > errors with LONG_BIT defined in pyport.h ( python successfully build > 32-bit ). Modifying flags will not work. Different architectures have different python installations (different headers, with different values: that's certainly the cause of the above error). Do you have any experience cross-compiling ? Because cross-compilation is already difficult, and python is not an easy package to cross-compile (bootstrapping issues, etc...), specially since the installation process of python does not support cross-compilation (you can find patches, but I don't know if they are updated for recent python). cheers, David From david.huard at gmail.com Tue Feb 26 09:17:00 2008 From: david.huard at gmail.com (David Huard) Date: Tue, 26 Feb 2008 09:17:00 -0500 Subject: [SciPy-user] handling of huge files for post-processing In-Reply-To: <47C3E3610200002A000005FF@KAMILLA.rrze.uni-erlangen.de> References: <47C3E3610200002A000005FF@KAMILLA.rrze.uni-erlangen.de> Message-ID: <91cf711d0802260617o4d768824wbf5fae702b59f00a@mail.gmail.com> Cristoph, Do you mean that b depends on the entire dataset a ? In this case, you might consider buying additional memory; this is often way cheaper in terms of time than trying to optimize the code. What I mean by iterators is that when you open a binary file, you generally have the possibility to iterate over each element in the file. 
For instance, when reading an ascii file: for line in f.readline(): some operation on the current line. instead of loading all the file in memory: lines = f.readlines() This way, only one line is kept in memory at a time. If you can write your code in this manner, this might solve your memory problem. For instance, here is a generator that opens two files and will return the current line of each file each time it's next() method is called def read(): a = open('filea', 'r') b = open('fileb', 'r') la = a.readline() lb = b.readline() while (la and lb): yield la,lb la = a.readline() lb = b.readline() for a, b in read(): some operation on a,b HTH, David 2008/2/26, Christoph Scheit : > > Hello David, > > I guess that everythink is kept in memory... but I don't > know how to handle this problem using iterators. Can > you give me some more detail? You read your files > all in once? > > One problem is, that, let's assume I have three files > a, b and c, then > b depends on data from a > c depends on data from b (and maybe from a, but > this might be not the case in 99%) > This is due to differences in signal runtime... > > christoph > > ------------------------------ > > Message: 4 > Date: Mon, 25 Feb 2008 09:53:31 -0500 > From: "David Huard" > Subject: Re: [SciPy-user] handling of huge files for post-processing > To: "SciPy Users List" > Message-ID: > <91cf711d0802250653g652df1f9mdd9aaa5adf869bc5 at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > > Hi Cristoph, > > I am not sure exactly what causes your method to fail but it might be that > you are trying to hold all the arrays in memory at once. Can you do your > calculation using iterators/generators ? The idea is to load into memory > only the part of the array that you need for a given calculation, store > the > result and continue iterating. I used to process ~2GB files using > iterators > from PyTables tables and it worked smoothly. > > David > > > 2008/2/25, Christoph Scheit : > > > > Hello everybody, > > > > I get from a Fortran-Code (CFD) binary files containing > > the acoustic pressure at some distinct points. > > The files has N "lines" which look like this: > > > > TimeStep(int) DebugInfo (int) AcousticPressure(float) > > > > and is binary. My problem is now, that the file can be > > huge (> 100 MB) and that after several runs on a cluster > > indeed not only one but 20 - 50 files of that size are > > to be post-processed. > > > > Since the CFD code runs parallel, I have to sum up > > the results from different cpu's (cpu 1 calculates only > > a fraction of the acoustic pressure of point p and time step > > t, so that I have to sum over all cpu's) > > > > Currently I'm reading all the data into a sqlite-table, than > > I group the data, summing up over the processors and > > then I'm writing out files containing the data of the single > > points. This approach works for smaller files somehow, > > but does not seem to be working for big files like described > > above. > > > > Do you have some ideas on this problem? Thank you very > > much in advance, > > > > Christoph > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://projects.scipy.org/pipermail/scipy-user/attachments/20080225/33d1fb1c/attachment-0001.html > > ------------------------------ > > Message: 5 > Date: Mon, 25 Feb 2008 15:58:13 +0100 > From: Johann Cohen-Tanugi > Subject: Re: [SciPy-user] order in profiles and packages > To: SciPy Users List > Message-ID: <47C2D785.9090405 at slac.stanford.edu> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > my apologies, this was the wrong list.... I submitted it to ipython list. > Johan > > > ------------------------------ > > Message: 6 > Date: Mon, 25 Feb 2008 17:14:27 +0100 > From: "Shane Legg" > Subject: [SciPy-user] Bug in matplotlib plot_wireframe? > To: scipy-user at scipy.org > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Hi, > > I'm new here so if this isn't the right place to ask just let > me know where I should head. Thanks. > > I think there is a significant bug in plot_wireframe in matplotlib > where it incorrectly displays the Z axis values. The code below > demonstrates the problem: > > > import scipy > import pylab as p > import matplotlib.axes3d as p3 > from numpy import * > > """ > # If you do a wire frame of the following, the graph is correct: > Z = scipy.array( > [[ 0.52, 0.00020], > [ 0.45, 0.00018], > [ 0.34, 0.00016]] ) > """ > > # but if you put negative signs in: > Z = scipy.array( > [[ -0.52, -0.00020], > [ -0.45, -0.00018], > [ -0.34, -0.00016]] ) > > """ > the graph displays: > [[ -0.62, -0.10020 ], > [ -0.55, -0.10018 ], > [ -0.44, -0.10016 ]] > """ > > X, Y = meshgrid(arange(0, 3, 1.0), arange(0, 4, 1.0)) > > fig = p.figure() > ax = p3.Axes3D(fig) > ax.plot_wireframe(X, Y, Z) > > ax.set_xlabel('X') > ax.set_ylabel('Y') > ax.set_zlabel('Z') > > p.show() > > > I'm running Ubuntu 7.10 x64 with python 2.5.1-1ubuntu2 and > python-scipy 0.5.2-9ubuntu4 both installed from the .deb files. > I sent the above code to somebody with a 32bit Linux system > and they had the same problem. > > Any help appreciated! > > Cheers > Shane > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://projects.scipy.org/pipermail/scipy-user/attachments/20080225/6f9bbe82/attachment-0001.html > > ------------------------------ > > Message: 7 > Date: Mon, 25 Feb 2008 10:53:22 -0600 > From: "Robert Kern" > Subject: Re: [SciPy-user] Bug in matplotlib plot_wireframe? > To: shane at vetta.org, "SciPy Users List" > Message-ID: > <3d375d730802250853j112bb67ah84847faef07b1255 at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Mon, Feb 25, 2008 at 10:14 AM, Shane Legg wrote: > > Hi, > > > > I'm new here so if this isn't the right place to ask just let > > me know where I should head. Thanks. > > The appropriate matplotlib list is here: > > https://lists.sourceforge.net/lists/listinfo/matplotlib-users > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > > > ------------------------------ > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > End of SciPy-user Digest, Vol 54, Issue 48 > ****************************************** > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy.mayes at gmail.com Tue Feb 26 09:26:38 2008 From: jeremy.mayes at gmail.com (Jeremy Mayes) Date: Tue, 26 Feb 2008 08:26:38 -0600 Subject: [SciPy-user] Trying to build scipy 32-bit on a 64-bit machine In-Reply-To: <47C3FF99.1040505@ar.media.kyoto-u.ac.jp> References: <890c2bf00802241034r175f9ce8h1c388f7e7739d201@mail.gmail.com> <47C3FF99.1040505@ar.media.kyoto-u.ac.jp> Message-ID: <890c2bf00802260626h6b2f9abdx42316ac77f3b322c@mail.gmail.com> Hi, Thanks for the response. It wasn't too bad building python. And let me change my initial statement slightly. I'm trying to build 32-bit binaries to run on an x86_64 machine. You only have to pass gcc the -m32 flag to get it to do that, so, it's not horrible. Most configure scripts will pay attention to the CFLAGS env variable. It's when I get into distutils land that I'm less sure of what to do. It definitely seems that distutils also pays attention to the CFLAGS/LDFLAGS env variables, but, not sure about numpy/scipy for fortran. I've tried FFLAGS, but, that leads to other problems in addition to the numerous threads I've read on NOT setting those vars when building. On Tue, Feb 26, 2008 at 6:01 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Jeremy Mayes wrote: > > Hi, > > > > I'm trying to build scipy for a target of i686 ( i.e., 32-bit ) but on > > an x86_64 host using gcc/4.1.1. I've been struggling with this and > > haven't seen any reference in the archives ( I apologize if I missed it > ). > > It won't be easy: distutils (the python package used to build numpy) > does not support cross-compiling. Already cross-compiling python itself > is difficult, and you will need that first. > > > > > I've been trying to set CFLAGS and LDFLAGS to pass -m32, but, I get > > undefined symbol errors ( MAIN__ ). If I just let it run, then, I get > > errors with LONG_BIT defined in pyport.h ( python successfully build > > 32-bit ). > > Modifying flags will not work. Different architectures have different > python installations (different headers, with different values: that's > certainly the cause of the above error). Do you have any experience > cross-compiling ? Because cross-compilation is already difficult, and > python is not an easy package to cross-compile (bootstrapping issues, > etc...), specially since the installation process of python does not > support cross-compilation (you can find patches, but I don't know if > they are updated for recent python). > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- --jlm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Christoph.Scheit at lstm.uni-erlangen.de Tue Feb 26 09:51:24 2008 From: Christoph.Scheit at lstm.uni-erlangen.de (Christoph Scheit) Date: Tue, 26 Feb 2008 15:51:24 +0100 Subject: [SciPy-user] weave question Message-ID: <47C4357C0200002A00000608@KAMILLA.rrze.uni-erlangen.de> Hi everybody, I have a small sniplet in C++ that I run using weave.inline in a python-script. Now my problem is, that although it should be that way, values of mutable arguments passed to the c++-part are only changed locally, i.e. after the inlined part are the python-arguments still with the same values... Here is the sniplet: obsvPtsMinRt = ones(nPts, dtype=int) code = """ double rt, rabs; int irt, maxrt, minrt; int nPts = NobsvPts[0]; int nElems = NeMidPts[0]; blitz::TinyVector r; // initial value r(0) = obsvPts(0, 0) - eMidPts(0, 0); r(1) = obsvPts(0, 1) - eMidPts(0, 1); r(2) = obsvPts(0, 2) - eMidPts(0, 2); rabs = sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]); rt = rabs / c0; minrt = maxrt = (int) (rt / dt); // loop over observer pts for (int i = 0; i < nPts; i++) { for (int j = 0; j < nElems; j++) { r(0) = obsvPts(i, 0) - eMidPts(j, 0); r(1) = obsvPts(i, 1) - eMidPts(j, 1); r(2) = obsvPts(i, 2) - eMidPts(j, 2); rabs = sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]); rt = rabs / c0; irt = (int) (rt / dt); if (irt > maxrt) { maxrt = irt; } else if (irt < minrt) { minrt = irt; } } obsvPtsMinRt[i] = minrt; } obsvPtsMinRt[3] = 10.; """ weave.inline(code, ['obsvPtsMinRt', 'obsvPtsMaxRt', 'obsvPts', 'eMidPts', 'dt', 'c0'], type_converters=converters.blitz, compiler = 'gcc', headers=["", "", ""]) print obsvPtsMinRt ok, afer running this code, obsvPtsMinRt should contain values different from 1... shouldn't? For some reason I get [1 1 1 1...] as output.... Does somebody has an idea what I'm doing wrong? I tried also a small example, and everything behaved like expected: def test(): arr = ones(7) print arr code = """ int len = Narr[0]; printf("len: %d\\n", len); arr[3] = 5.; """ inline(code, ['arr'], type_converters=converters.blitz) print arr here the I get a five at position 4... Thank you very much in advance, christoph From Christoph.Scheit at lstm.uni-erlangen.de Tue Feb 26 10:27:42 2008 From: Christoph.Scheit at lstm.uni-erlangen.de (Christoph Scheit) Date: Tue, 26 Feb 2008 16:27:42 +0100 Subject: [SciPy-user] handling of huge files for post-processing Message-ID: <47C43DFE0200002A0000060C@KAMILLA.rrze.uni-erlangen.de> Hello David, indeed data in file a depends on data in file b... that the biggest problem and consequently I guess I need something that operates better on the file-system than in main memory. Do you think, it's possible to use PyTables to tackle the problem? I would need something that can group together such enormous data-sets. sqlite is nice to group data of a table together, but I guess my data-sets are just to big... Acutally I unfortunately don't see the possibility to iterate over the entries of the files in the manner you described below.... Thanks, Christoph ------------------------------ Message: 3 Date: Tue, 26 Feb 2008 09:17:00 -0500 From: "David Huard" Subject: Re: [SciPy-user] handling of huge files for post-processing To: "SciPy Users List" Message-ID: <91cf711d0802260617o4d768824wbf5fae702b59f00a at mail.gmail.com> Content-Type: text/plain; charset="iso-8859-1" Cristoph, Do you mean that b depends on the entire dataset a ? In this case, you might consider buying additional memory; this is often way cheaper in terms of time than trying to optimize the code. 
What I mean by iterators is that when you open a binary file, you generally have the possibility to iterate over each element in the file. For instance, when reading an ascii file: for line in f.readline(): some operation on the current line. instead of loading all the file in memory: lines = f.readlines() This way, only one line is kept in memory at a time. If you can write your code in this manner, this might solve your memory problem. For instance, here is a generator that opens two files and will return the current line of each file each time it's next() method is called def read(): a = open('filea', 'r') b = open('fileb', 'r') la = a.readline() lb = b.readline() while (la and lb): yield la,lb la = a.readline() lb = b.readline() for a, b in read(): some operation on a,b HTH, David From david.huard at gmail.com Tue Feb 26 12:23:01 2008 From: david.huard at gmail.com (David Huard) Date: Tue, 26 Feb 2008 12:23:01 -0500 Subject: [SciPy-user] handling of huge files for post-processing In-Reply-To: <47C43DFE0200002A0000060C@KAMILLA.rrze.uni-erlangen.de> References: <47C43DFE0200002A0000060C@KAMILLA.rrze.uni-erlangen.de> Message-ID: <91cf711d0802260923x63498b1apd5983f0d5ae0c20@mail.gmail.com> Whether or not PyTables is going to make a difference really depends on how much data you need at a given time to perform the computation. If this exceeds your RAM, it doesn't matter what binary format you are using. That being said, I am not familiar with sqlite, so I don't know if there is some limitations regarding the database size. Storing your data using PyTables will allow you to store as many GB in a single file as you wish. The tricky part will then be to extract only the data that you need to perform your computations and make sure this always stays below the RAM limit, or else the swap memory will be used and it will slow down things considerably. I suggest you try to estimate how much memory you'll be needing for your computations, see how much RAM you have, and decide whether or not you should just spend some euros and install additional RAM. Servus, David 2008/2/26, Christoph Scheit : > > Hello David, > > indeed data in file a depends on data in file b... > that the biggest problem and consequently > I guess I need something that operates better > on the file-system than in main memory. > > Do you think, it's possible to use PyTables to > tackle the problem? I would need something > that can group together such enormous > data-sets. sqlite is nice to group data of > a table together, but I guess my data-sets are > just to big... > > Acutally I unfortunately don't see the possibility > to iterate over the entries of the files in the > manner you described below.... > > Thanks, > > Christoph > ------------------------------ > > Message: 3 > Date: Tue, 26 Feb 2008 09:17:00 -0500 > > From: "David Huard" > Subject: Re: [SciPy-user] handling of huge files for post-processing > To: "SciPy Users List" > Message-ID: > > <91cf711d0802260617o4d768824wbf5fae702b59f00a at mail.gmail.com> > > Content-Type: text/plain; charset="iso-8859-1" > > > Cristoph, > > Do you mean that b depends on the entire dataset a ? In this case, you > might > consider buying additional memory; this is often way cheaper in terms of > time than trying to optimize the code. > > What I mean by iterators is that when you open a binary file, you > generally > have the possibility to iterate over each element in the file. 
For > instance, > when reading an ascii file: > > for line in f.readline(): > some operation on the current line. > > instead of loading all the file in memory: > lines = f.readlines() > > This way, only one line is kept in memory at a time. If you can write your > code in this manner, this might solve your memory problem. For instance, > here is a generator that opens two files and will return the current line > of > each file each time it's next() method is called > def read(): > a = open('filea', 'r') > b = open('fileb', 'r') > la = a.readline() > lb = b.readline() > while (la and lb): > yield la,lb > la = a.readline() > lb = b.readline() > > for a, b in read(): > some operation on a,b > > HTH, > > David > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Tue Feb 26 14:50:53 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Tue, 26 Feb 2008 20:50:53 +0100 Subject: [SciPy-user] Convert a string list to array ? Message-ID: <47C46D9D.2060904@ru.nl> hello, I have a string list that I want to convert to an integer array data_all = [ '12', '24' ] and now I would expect this would work, but it crashes: data_all = asarray ( data_all, dtype = int ) so I need to do the conversion in 2 steps data_all = asarray ( asarray ( data_all ), dtype = int ) Is there a better (more elegant) way ? And maybe the string list was even the wrong first step, the datafile I'm reading looks like this: 254 48 57 58 52 53 58 51 54 254 32 32 32 32 32 So there might even be better ways to start ? thanks, Stef From emanuelez at gmail.com Tue Feb 26 15:07:18 2008 From: emanuelez at gmail.com (Emanuele Zattin) Date: Tue, 26 Feb 2008 21:07:18 +0100 Subject: [SciPy-user] Convert a string list to array ? In-Reply-To: <47C46D9D.2060904@ru.nl> References: <47C46D9D.2060904@ru.nl> Message-ID: Hi, I think there's something about it in the cookbook: http://www.scipy.org/Cookbook/InputOutput On Tue, Feb 26, 2008 at 8:50 PM, Stef Mientki wrote: > hello, > > I have a string list that I want to convert to an integer array > data_all = [ '12', '24' ] > > and now I would expect this would work, but it crashes: > data_all = asarray ( data_all, dtype = int ) > > so I need to do the conversion in 2 steps > data_all = asarray ( asarray ( data_all ), dtype = int ) > > Is there a better (more elegant) way ? > > > And maybe the string list was even the wrong first step, > the datafile I'm reading looks like this: > > 254 48 57 > 58 52 53 > 58 51 54 > 254 32 32 > 32 32 32 > > So there might even be better ways to start ? > > > thanks, > Stef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From wnbell at gmail.com Tue Feb 26 15:09:02 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 26 Feb 2008 14:09:02 -0600 Subject: [SciPy-user] Convert a string list to array ? 
In-Reply-To: <47C46D9D.2060904@ru.nl> References: <47C46D9D.2060904@ru.nl> Message-ID: On Tue, Feb 26, 2008 at 1:50 PM, Stef Mientki wrote: > Is there a better (more elegant) way ? > > > And maybe the string list was even the wrong first step, > the datafile I'm reading looks like this: > > 254 48 57 > 58 52 53 > 58 51 54 > 254 32 32 > 32 32 32 > > So there might even be better ways to start ? Try numpy.fromfile() A = fromfile( 'myfile.txt', dtype=int, sep=' ') This should be very fast and memory efficient. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From haase at msg.ucsf.edu Tue Feb 26 15:13:46 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 26 Feb 2008 21:13:46 +0100 Subject: [SciPy-user] Convert a string list to array ? In-Reply-To: <47C46D9D.2060904@ru.nl> References: <47C46D9D.2060904@ru.nl> Message-ID: On Tue, Feb 26, 2008 at 8:50 PM, Stef Mientki wrote: > hello, > > I have a string list that I want to convert to an integer array > data_all = [ '12', '24' ] > > and now I would expect this would work, but it crashes: > data_all = asarray ( data_all, dtype = int ) > do really mean a "full blown crash", i.e. seg-fault ?? -Sebastian Haase From ckkart at hoc.net Tue Feb 26 15:37:22 2008 From: ckkart at hoc.net (Christian K.) Date: Tue, 26 Feb 2008 21:37:22 +0100 Subject: [SciPy-user] weave question In-Reply-To: <47C4357C0200002A00000608@KAMILLA.rrze.uni-erlangen.de> References: <47C4357C0200002A00000608@KAMILLA.rrze.uni-erlangen.de> Message-ID: Christoph Scheit wrote: > Hi everybody, > > I have a small sniplet in C++ that I run using weave.inline > in a python-script. Now my problem is, that although it should > be that way, values of mutable arguments passed to the > c++-part are only changed locally, i.e. after the inlined part > are the python-arguments still with the same values... Here > is the sniplet: > > obsvPtsMinRt = ones(nPts, dtype=int) > > code = """ [....] > obsvPtsMinRt[i] = minrt; > } > obsvPtsMinRt[3] = 10.; > """ > weave.inline(code, > ['obsvPtsMinRt', 'obsvPtsMaxRt', 'obsvPts', 'eMidPts', 'dt', 'c0'], > type_converters=converters.blitz, > compiler = 'gcc', > headers=["", "", ""]) > > print obsvPtsMinRt I haven't used weave for a long time, but I think it's due using the wrong kind of brackets. Use () to index the ndarrays. Then I don't understand why your small example works, though. Christian From elcorto at gmx.net Tue Feb 26 15:47:11 2008 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 26 Feb 2008 21:47:11 +0100 Subject: [SciPy-user] Convert a string list to array ? In-Reply-To: References: <47C46D9D.2060904@ru.nl> Message-ID: <20080226204711.GA23347@ramrod.de> On Tue, Feb 26, 2008 at 02:09:02PM -0600, Nathan Bell wrote: > On Tue, Feb 26, 2008 at 1:50 PM, Stef Mientki wrote: > > Is there a better (more elegant) way ? > > > > > > And maybe the string list was even the wrong first step, > > the datafile I'm reading looks like this: > > > > 254 48 57 > > 58 52 53 > > 58 51 54 > > 254 32 32 > > 32 32 32 > > > > So there might even be better ways to start ? > > Try numpy.fromfile() > > A = fromfile( 'myfile.txt', dtype=int, sep=' ') > > This should be very fast and memory efficient. Isn't numpy.loadtxt designed to handle this? With fromfile I get all data in a 1d array and have to do an additional reshape() if the data in the file represents a 2d array. 
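A minimal sketch of both approaches for the sample file shown earlier (the file name 'myfile.txt' is only a placeholder; whitespace-separated integers are assumed):

import numpy as np

# fromfile returns a flat 1-d array; reshape to the known number of columns
A = np.fromfile('myfile.txt', dtype=int, sep=' ').reshape(-1, 3)

# loadtxt recovers the 2-d layout directly (pass dtype=int to avoid floats)
B = np.loadtxt('myfile.txt', dtype=int)

print A.shape, B.shape    # both (5, 3) for the sample data
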
cheers, steve From reckoner at gmail.com Tue Feb 26 16:20:10 2008 From: reckoner at gmail.com (Reckoner) Date: Tue, 26 Feb 2008 13:20:10 -0800 Subject: [SciPy-user] scipy cygwin install-able? Message-ID: are there plans to make scipy cygwin install-able using the usual cygwin setup process? I know that would make my life a lot easier. Thanks in advance -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef.mientki at gmail.com Tue Feb 26 16:51:39 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Tue, 26 Feb 2008 22:51:39 +0100 Subject: [SciPy-user] Convert a string list to array ? In-Reply-To: References: <47C46D9D.2060904@ru.nl> Message-ID: <47C489EB.3030108@gmail.com> hello, thank you all for your suggestions, but unfortunately I couldn't get one working. Also the Cookbook is much more complicated (for me) than the orginal "double asarray", maybe the CR/LF is causing the trouble. As the datasets are not that big it's not a real issue. Sebastian Haase wrote: > On Tue, Feb 26, 2008 at 8:50 PM, Stef Mientki wrote: > >> hello, >> >> I have a string list that I want to convert to an integer array >> data_all = [ '12', '24' ] >> >> and now I would expect this would work, but it crashes: >> data_all = asarray ( data_all, dtype = int ) >> >> > > do really mean a "full blown crash", i.e. seg-fault ?? > > No I mean traceable crashes, here are few on the suggestions made by others: data_all = loadtxt(filename) x,y,z = loadtxt ( filename, unpack=True ) """ Traceback (most recent call last): File "D:\data_to_test\signal_workbench.py", line 594, in ? Read_New_Akto_File ( filename, True) File "D:\data_to_test\signal_workbench.py", line 563, in Read_New_Akto_File data_all = loadtxt(filename) File "P:\Python\lib\site-packages\numpy-1.0.3.dev3722-py2.4-win32.egg\numpy\core\numeric.py", line 725, in loadtxt X = array(X, dtype) ValueError: setting an array element with a sequence. """ x,y,z = load(filename) """ Traceback (most recent call last): File "D:\data_to_test\signal_workbench.py", line 659, in ? Read_New_Akto_File ( filename, True) File "D:\data_to_test\signal_workbench.py", line 628, in Read_New_Akto_File x,y,z = load(filename) File "P:\Python\lib\site-packages\numpy-1.0.3.dev3722-py2.4-win32.egg\numpy\core\numeric.py", line 611, in load return _cload(file) UnpicklingError: unpickling stack underflow """ data_all = loadtxt ( filename, usecols=(0,1,2) ) """ Traceback (most recent call last): File "D:\data_to_test\signal_workbench.py", line 673, in ? Read_New_Akto_File ( filename, True) File "D:\data_to_test\signal_workbench.py", line 640, in Read_New_Akto_File data_all = loadtxt ( filename, usecols=(0,1,2) ) File "P:\Python\lib\site-packages\numpy-1.0.3.dev3722-py2.4-win32.egg\numpy\core\numeric.py", line 718, in loadtxt row = [converterseq[j](vals[j]) for j in usecols] IndexError: list index out of range """ cheers, Stef > -Sebastian Haase > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Tue Feb 26 17:36:23 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Feb 2008 16:36:23 -0600 Subject: [SciPy-user] scipy cygwin install-able? In-Reply-To: References: Message-ID: <3d375d730802261436n11774e76i36ab6ccc38a0e83b@mail.gmail.com> On Tue, Feb 26, 2008 at 3:20 PM, Reckoner wrote: > are there plans to make scipy cygwin install-able using the usual cygwin > setup process? I don't think so, no. 
We welcome contributions to that end, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at enthought.com Tue Feb 26 17:46:18 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 26 Feb 2008 16:46:18 -0600 Subject: [SciPy-user] Convert a string list to array ? In-Reply-To: <47C46D9D.2060904@ru.nl> References: <47C46D9D.2060904@ru.nl> Message-ID: <47C496BA.3080008@enthought.com> Stef Mientki wrote: > hello, > > I have a string list that I want to convert to an integer array > data_all = [ '12', '24' ] > > and now I would expect this would work, but it crashes: > data_all = asarray ( data_all, dtype = int ) > Yeah, perhaps unexpectedly, the conversion code requires integers not strings when you specify dtype=int. You can also do: asarray(data_all).astype(int) or asarray([int(x) for x in data_all],dtype=int) -Travis O. From ryanlists at gmail.com Tue Feb 26 18:18:26 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 26 Feb 2008 17:18:26 -0600 Subject: [SciPy-user] probable bug in signal.lsim with repeated poles Message-ID: I think I have discovered a bug in lsim for systems with repeated roots in the charactersitic equation. The attached python file attempts to find the step response of a second order system that is critially damped: TF(s) = wn**2 -------------------------------- s**2 + 2*z*wn*s + wn**2 where z = 1. The step response of this system based on inverse Laplace transform (see attached wxMaxima file) should be y(t) = -t*exp(-t)-exp(-t)+1 when wn = 1.0 signal.lsim2 comes up with the correct numeric answer (see good.png). Using signal.lsim, no error is thrown, but the answer is quite strange (see bad.png). I assume this has something to do with a repeated eigenvalue in the A matrix: In [69]: sys.A Out[69]: array([[-2., -1.], [ 1., 0.]]) In [70]: eig(sys.A) Out[70]: (array([-1., -1.]), array([[-0.70710678, -0.70710678], [ 0.70710678, 0.70710678]])) It would be fine with me if lsim threw an error after checking for repeated eigenvalues. My code catches any errors thrown by lsim and tries to run lsim2 in an except clause. But giving a bad answer with no errors messed me up. Any thoughts? Ryan -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: repeated_root_problem.py URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: good.png Type: image/png Size: 38207 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bad.png Type: image/png Size: 26812 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: repeated_root_step_response.wxm Type: application/octet-stream Size: 600 bytes Desc: not available URL: From ryanlists at gmail.com Tue Feb 26 18:22:27 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 26 Feb 2008 17:22:27 -0600 Subject: [SciPy-user] probable bug in signal.lsim with repeated poles In-Reply-To: References: Message-ID: This may already be fixed or may be windows specific. The computer having the problem is running the latest exe files: In [3]: scipy.__version__ Out[3]: '0.6.0' In [4]: numpy.__version__ Out[4]: '1.0.4' I just tried running the file on an Ubuntu box running 0.7.0 from svn a few weeks ago, and lsim and lsim2 give the same results. 
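(As a cross-check of the closed-form result quoted above, here is a minimal sketch comparing lsim2 against it; wn = 1.0, z = 1 and a unit-step input are assumed:)

import numpy as np
from scipy import signal

wn = 1.0                          # natural frequency; z = 1 -> critically damped
num = [wn**2]
den = [1.0, 2.0*wn, wn**2]        # s**2 + 2*z*wn*s + wn**2

t = np.linspace(0.0, 10.0, 200)
u = np.ones_like(t)               # unit step input

tout, y, x = signal.lsim2((num, den), u, t)
y_exact = 1.0 - np.exp(-t) - t*np.exp(-t)

# should be close to zero if the simulated response matches the analytic one
print np.abs(np.squeeze(y) - y_exact).max()
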
Is an update to the windows exe's scheduled for anytime soon? Thanks, Ryan On 2/26/08, Ryan Krauss wrote: > I think I have discovered a bug in lsim for systems with repeated > roots in the charactersitic equation. The attached python file > attempts to find the step response of a second order system that is > critially damped: > > TF(s) = wn**2 > -------------------------------- > s**2 + 2*z*wn*s + wn**2 > > where z = 1. > > The step response of this system based on inverse Laplace transform > (see attached wxMaxima file) should be > y(t) = -t*exp(-t)-exp(-t)+1 > when wn = 1.0 > signal.lsim2 comes up with the correct numeric answer (see good.png). > Using signal.lsim, no error is thrown, but the answer is quite strange > (see bad.png). > > I assume this has something to do with a repeated eigenvalue in the A matrix: > > In [69]: sys.A > Out[69]: > array([[-2., -1.], > [ 1., 0.]]) > > In [70]: eig(sys.A) > Out[70]: > (array([-1., -1.]), > array([[-0.70710678, -0.70710678], > [ 0.70710678, 0.70710678]])) > > It would be fine with me if lsim threw an error after checking for > repeated eigenvalues. My code catches any errors thrown by lsim and > tries to run lsim2 in an except clause. But giving a bad answer with > no errors messed me up. > > Any thoughts? > > > Ryan > > From david at ar.media.kyoto-u.ac.jp Tue Feb 26 22:02:31 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 27 Feb 2008 12:02:31 +0900 Subject: [SciPy-user] Trying to build scipy 32-bit on a 64-bit machine In-Reply-To: <890c2bf00802260626h6b2f9abdx42316ac77f3b322c@mail.gmail.com> References: <890c2bf00802241034r175f9ce8h1c388f7e7739d201@mail.gmail.com> <47C3FF99.1040505@ar.media.kyoto-u.ac.jp> <890c2bf00802260626h6b2f9abdx42316ac77f3b322c@mail.gmail.com> Message-ID: <47C4D2C7.2090908@ar.media.kyoto-u.ac.jp> Jeremy Mayes wrote: > Hi, > > Thanks for the response. It wasn't too bad building python. And let > me change my initial statement slightly. I'm trying to build 32-bit > binaries to run on an x86_64 machine. You only have to pass gcc the > -m32 flag to get it to do that, so, it's not horrible. Yes, configuring compiler for cross-compilation is not hard, if you already have one. But that's not the problem. When you set -m32, gcc does not just emit different machine code for the output; it also uses different runtime, and different headers for the C library. It is actually a different compiler, only the front end is the same. It looks easy because most of the work is done by your distribution (if you did not build the compiler by yourself); and with pure C programs using autoconf, it is not too difficult, because autoconf detects cross-compilation (although I doubt it worked just by setting -m32; but maybe 64 bits OS can run 32 bits binaries, in which case it may not be detected as cross compilation by autoconf ? I tried the opposite, and it certainly does not work, since a 32 bits OS with a 32 bits CPU cannot execute amd64 binaries, of course). But in the case of python, it is a different matter: are you sure you managed to build a 32 bits python ? How did you do it ? Because it is far from trivial, and setting CFLAGS is certainly not enough (unless, again, amd64 linux can run 32 bits binaries "natively"). Concerning distutils: setting CFLAGS won't work as you would expect from autoconf projects. It is ackward to control CFLAGS with distutils, unfortunately. 
cheers, David From a.g.basden at durham.ac.uk Wed Feb 27 05:14:53 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Wed, 27 Feb 2008 10:14:53 +0000 (GMT) Subject: [SciPy-user] scipy installation without root access Message-ID: Hi, can anyone give me some help/tips installing a working scipy on a Suse 10.0 machine (86_64) without root access? I can't use RPMs, as some of the dependencies are non-relocatable... The main problem that I'm having (as I see it) is getting the gfortran/g77 mix right, such that the functions all work when installed... Thanks... From david at ar.media.kyoto-u.ac.jp Wed Feb 27 05:18:45 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 27 Feb 2008 19:18:45 +0900 Subject: [SciPy-user] scipy installation without root access In-Reply-To: References: Message-ID: <47C53905.9040501@ar.media.kyoto-u.ac.jp> Alastair Basden wrote: > Hi, > can anyone give me some help/tips installing a working scipy on a Suse > 10.0 machine (86_64) without root access? > I can't use RPMs, as some of the dependencies are non-relocatable... > > The main problem that I'm having (as I see it) is getting the gfortran/g77 > mix right, such that the functions all work when installed... > The only solution is not to mix gfortran and g77: this cannot work. Problem is, on Suse 10.0, some dependencies were broken (blas, lapack), and I don't know the status on this (I do not use Suse). If you need to build everything (for example, numpy + BLAS + LAPACK), you need to do everything with g77 OR gfortran. Make one mistake somewhere and it will not work; one solution to avoid making mistakes consists in having a bogus g77 in your path (for example, g77 could be a shell script which always fail), so that anytime it is called, you will get an error. On suse 10.0, gfortran is the default fortran compiler, which is why you would be better using it instead of g77. To build numpy with gfortran, you need to use the option --fcompiler=gnu95. cheers, David From gmane at eml.cc Wed Feb 27 08:22:10 2008 From: gmane at eml.cc (martin smith) Date: Wed, 27 Feb 2008 08:22:10 -0500 Subject: [SciPy-user] I'd like to submit some code for inclusion... Message-ID: in scipy. It's a python-interfaced implementation of multi-taper fourier spectral analysis. The code currently builds and installs under linux, but not windows yet. I haven't been able to figure out how to submit code. I tried introducing a ticket last week but I don't know how if it's going anywhere. If there's a better way, I'd appreciate hearing about it. martin smith From oliphant at enthought.com Wed Feb 27 09:58:16 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 27 Feb 2008 08:58:16 -0600 Subject: [SciPy-user] I'd like to submit some code for inclusion... In-Reply-To: References: Message-ID: <47C57A88.4070101@enthought.com> martin smith wrote: > in scipy. > > It's a python-interfaced implementation of multi-taper fourier spectral > analysis. The code currently builds and installs under linux, but not > windows yet. > > I haven't been able to figure out how to submit code. I tried > introducing a ticket last week but I don't know how if it's going > anywhere. If there's a better way, I'd appreciate hearing about it. > Introducing a ticket is the right way. Then, if you want to make sure it gets attention, posting to this list with a description of what your code does and why it would be a good idea to include in SciPy is the next step. 
As soon as there has been some feedback on your ticket and some indication that it would be useful to include in SciPy, then keep reminding somebody on the steering committee (me, Robert, Jarrod, or Eric at the moment) to include it into SciPy and it will happen when the O.K. is given. The process can take anywhere from a few days to a few weeks if you are persistent and a few months if you are not. If we decide not to include it in SciPy, then the next step would be to make it a scikit (which you could do immediately as well, if you like). Thanks for your willingness to contribute and your patience with our process (which is more fluid than I've led on). Best regards, -Travis O. From bsouthey at gmail.com Wed Feb 27 10:46:02 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 27 Feb 2008 09:46:02 -0600 Subject: [SciPy-user] handling of huge files for post-processing In-Reply-To: <91cf711d0802260923x63498b1apd5983f0d5ae0c20@mail.gmail.com> References: <47C43DFE0200002A0000060C@KAMILLA.rrze.uni-erlangen.de> <91cf711d0802260923x63498b1apd5983f0d5ae0c20@mail.gmail.com> Message-ID: Hi, Christoph, I am unclear exactly what you are really doing, are you just reading, converting, grouping and summing across files? (A small example always helps.) Based on what you have indicated, I doubt that just switching PyTables will be sufficient. Your emails suggest that a different scheme is required than what you are currently doing. It should be expected that large or many files will be resource intensive - the key is to determine which bottlenecks can be removed. Depending on what need to be done, you can process files one at a one and accumulate the results which means that you only deal with one file at a time. Alternatively you can process chunks where you only use specific chunks of the files but requires you to reread files multiple times. Regards, Bruce On Tue, Feb 26, 2008 at 11:23 AM, David Huard wrote: > Whether or not PyTables is going to make a difference really depends on how > much data you need at a given time to perform the computation. If this > exceeds your RAM, it doesn't matter what binary format you are using. That > being said, I am not familiar with sqlite, so I don't know if there is some > limitations regarding the database size. > > Storing your data using PyTables will allow you to store as many GB in a > single file as you wish. The tricky part will then be to extract only the > data that you need to perform your computations and make sure this always > stays below the RAM limit, or else the swap memory will be used and it will > slow down things considerably. > > I suggest you try to estimate how much memory you'll be needing for your > computations, see how much RAM you have, and decide whether or not you > should just spend some euros and install additional RAM. > > Servus, > > David > > 2008/2/26, Christoph Scheit : > > Hello David, > > > > indeed data in file a depends on data in file b... > > that the biggest problem and consequently > > I guess I need something that operates better > > on the file-system than in main memory. > > > > Do you think, it's possible to use PyTables to > > tackle the problem? I would need something > > that can group together such enormous > > data-sets. sqlite is nice to group data of > > a table together, but I guess my data-sets are > > just to big... > > > > Acutally I unfortunately don't see the possibility > > to iterate over the entries of the files in the > > manner you described below.... 
> > > > Thanks, > > > > Christoph > > ------------------------------ > > > > Message: 3 > > Date: Tue, 26 Feb 2008 09:17:00 -0500 > > > > From: "David Huard" > > Subject: Re: [SciPy-user] handling of huge files for post-processing > > To: "SciPy Users List" > > Message-ID: > > > > <91cf711d0802260617o4d768824wbf5fae702b59f00a at mail.gmail.com> > > > > Content-Type: text/plain; charset="iso-8859-1" > > > > > > Cristoph, > > > > Do you mean that b depends on the entire dataset a ? In this case, you > might > > consider buying additional memory; this is often way cheaper in terms of > > time than trying to optimize the code. > > > > What I mean by iterators is that when you open a binary file, you > generally > > have the possibility to iterate over each element in the file. For > instance, > > when reading an ascii file: > > > > for line in f.readline(): > > some operation on the current line. > > > > instead of loading all the file in memory: > > lines = f.readlines() > > > > This way, only one line is kept in memory at a time. If you can write your > > code in this manner, this might solve your memory problem. For instance, > > here is a generator that opens two files and will return the current line > of > > each file each time it's next() method is called > > def read(): > > a = open('filea', 'r') > > b = open('fileb', 'r') > > la = a.readline() > > lb = b.readline() > > while (la and lb): > > yield la,lb > > la = a.readline() > > lb = b.readline() > > > > for a, b in read(): > > some operation on a,b > > > > HTH, > > > > David > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Wed Feb 27 11:47:46 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 Feb 2008 10:47:46 -0600 Subject: [SciPy-user] I'd like to submit some code for inclusion... In-Reply-To: References: Message-ID: <3d375d730802270847r66a39ceajdaddf958329ad0c5@mail.gmail.com> On Wed, Feb 27, 2008 at 7:22 AM, martin smith wrote: > in scipy. > > It's a python-interfaced implementation of multi-taper fourier spectral > analysis. The code currently builds and installs under linux, but not > windows yet. > > I haven't been able to figure out how to submit code. I tried > introducing a ticket last week but I don't know how if it's going > anywhere. If there's a better way, I'd appreciate hearing about it. I have commented on your ticket (#608 for everyone else). If you would like to receive email notifications when the ticket is modified, add your name and email address to your Trac settings: http://scipy.org/scipy/scipy/settings -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From hoytak at gmail.com Wed Feb 27 12:24:16 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Wed, 27 Feb 2008 09:24:16 -0800 Subject: [SciPy-user] scipy installation without root access In-Reply-To: <47C53905.9040501@ar.media.kyoto-u.ac.jp> References: <47C53905.9040501@ar.media.kyoto-u.ac.jp> Message-ID: <4db580fd0802270924h6f57a1ech2e8b8b592f10835f@mail.gmail.com> Hey David, I had the same situation, and I ended up installing and compiling everything from source. 
Basically I created an alternate /usr-like directory (I called it sysroot) where I could install alternate packages. I then used environment variables to get it to work. I can't really comment on the gfortran/g77 issue, as I didn't run into that problem. But since I had to do it on a couple machines, I kept a record of the commands I used. Here's how I did it in the chance it's any help to you. I first have a setup script that sets up my environment. I source this in ~/.bashrc: hn=`hostname` if [ $hn = "hk-laptop" ]; then export SYSROOT=/opt/sysroot export GCC_VERSION=4.1.2 export PYTHON_VERSION=2.5 elif [ $hn = "albani" ]; then export SYSROOT=/tmp/hoytak/sysroot export GCC_VERSION=4.1.2 export PYTHON_VERSION=2.4 else export SYSROOT=/cs/SCRATCH/hoytak/sysroot export GCC_VERSION=4.1.2 export PYTHON_VERSION=2.4 fi export SRC_DIR=$SYSROOT/src export PATH=${SYSROOT}/bin:${PATH} export PYTHONPATH=${SYSROOT}/lib/python${PYTHON_VERSION}/site-packages:${PYTHONPATH} export NUMPY=${SYSROOT}/src/numpy export SCIPY=${SYSROOT}/src/scipy export LD_LIBRARY_PATH=$SYSROOT/lib export INCLUDE_PATH=$SYSROOT/include:$SYSROOT/src/UFconfig: export PKG_CONFIG_PATH=$SYSROOT/lib/pkgconfig:${PKG_CONFIG_PATH} And then here's the record of commands I used to set up all the numpy/scipy dependencies. I tried to edit it a little to make it into a working script, but it's definitely not robust and I don't recommend just straight running it -- I've never gotten it to work once. However, I've found it quite useful to follow and judiciously cut and paste into the command line. #!/bin/bash # SETUP # 1. You need to have UFconfig preinstalled in the src directory (I.e. unpacked into src/UFconfig) # 2. Download GotoBLASS manually from # http://www.tacc.utexas.edu/resources/software/, gotoBLAS_version=1.09 gotoBLAS_file=~/download/GotoBLAS-${gotoBLAS_version}.tar.gz ####################################################### # Some functions ##### This runs through the directory creation. Assume the $SYSROOT is already there. 
cd $SYSROOT mkdir bin mkdir lib mkdir include # Set up the environment variables source setupenv set -e cd src ######################################## # Now simply download and get the latest libraries # Do the ones that are most prone to error first # GotoBLAS cp "$gotoBLAS_file" ./ tar -zxf "GotoBLAS-${gotoBLAS_version}.tar.gz" rm "GotoBLAS-${gotoBLAS_version}.tar.gz" cd GotoBLAS make ln -s $SYSROOT/src/GotoBLAS/libgoto.a $SYSROOT/lib/libgoto.a cd $SYSROOT/src # xerbla ; also checks for UFpack cd UFconfig/xerbla make cd $SYSROOT/src #LAPACK lapack_version=3.1.1 wget "http://www.netlib.org/lapack/lapack-lite-${lapack_version}.tgz" tar -zxf "lapack-lite-${lapack_version}.tgz" rm "lapack-lite-${lapack_version}.tgz" mv lapack-lite-${lapack_version} lapack cd lapack cp INSTALL/make.inc.LINUX make.inc make lapacklib cd $SYSROOT/src #metis, for sparse stuff metis_version=4.0 wget http://glaros.dtc.umn.edu/gkhome/fetch/sw/metis/metis-${metis_version}.tar.gz tar -zxf metis-${metis_version}.tar.gz rm metis-${metis_version}.tar.gz mv metis-${metis_version} metis cd metis make cd $SYSROOT/src #AMD wget "http://www.cise.ufl.edu/research/sparse/amd/current/AMD.tar.gz" tar -zxf AMD.tar.gz rm AMD.tar.gz cd AMD make cd $SYSROOT/src #FFTW fftw_version=3.1.2 wget http://www.fftw.org/fftw-${fftw_version}.tar.gz tar -zxf fftw-${fftw_version}.tar.gz rm fftw-${fftw_version}.tar.gz mv fftw-${fftw_version} fftw cd fftw ./configure --prefix=$SYSROOT make make install cd $SYSROOT/src ######################################## # swig swig_version=1.3.31 wget http://easynews.dl.sourceforge.net/sourceforge/swig/swig-${swig_version}.tar.gz tar -zxf swig-${swig_version}.tar.gz mv swig-${swig_version} swig cd swig ./configure --prefix=$SYSROOT make make install cd $SYSROOT/src #ATLAS atlas_version=3.8.0 wget "http://easynews.dl.sourceforge.net/sourceforge/math-atlas/atlas${atlas_version}.tar.bz2" tar -jxf "atlas${atlas_version}.tar.bz2" rm "atlas${atlas_version}.tar.bz2" cd ATLAS mkdir build cd build ../configure --prefix=$SYSROOT --with-netlib-lapack=$SYSROOT/src/lapack/lapack_LINUX.a make make install cd $SYSROOT/src #UMFPACK wget http://www.cise.ufl.edu/research/sparse/umfpack/current/UMFPACK.tar.gz tar -zxf UMFPACK.tar.gz cd UMFPACK make cd $SYSROOT/src ######################################## #Numpy svn co http://svn.scipy.org/svn/numpy/trunk numpy cd numpy #create the site.cfg file cat > site.cfg < site.cfg < wrote: > Alastair Basden wrote: > > Hi, > > can anyone give me some help/tips installing a working scipy on a Suse > > 10.0 machine (86_64) without root access? > > I can't use RPMs, as some of the dependencies are non-relocatable... > > > > The main problem that I'm having (as I see it) is getting the gfortran/g77 > > mix right, such that the functions all work when installed... > > > > The only solution is not to mix gfortran and g77: this cannot work. > Problem is, on Suse 10.0, some dependencies were broken (blas, lapack), > and I don't know the status on this (I do not use Suse). > > If you need to build everything (for example, numpy + BLAS + LAPACK), > you need to do everything with g77 OR gfortran. Make one mistake > somewhere and it will not work; one solution to avoid making mistakes > consists in having a bogus g77 in your path (for example, g77 could be a > shell script which always fail), so that anytime it is called, you will > get an error. On suse 10.0, gfortran is the default fortran compiler, > which is why you would be better using it instead of g77. 
> > To build numpy with gfortran, you need to use the option --fcompiler=gnu95. > > cheers, > > David > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jesper.webmail at gmail.com Wed Feb 27 17:27:51 2008 From: jesper.webmail at gmail.com (Jesper Larsen) Date: Wed, 27 Feb 2008 23:27:51 +0100 Subject: [SciPy-user] 2D interpolation containing missing values Message-ID: Hi SciPy users, I would like to interpolate data from a 2D irregular grid to a limited number of points. The input data is defined by: a 2D longitude array, a 2D latitude array and a 2D array containing the data values. The data array is a numpy masked array (numpy.ma) and may contain masked out values. It can for example represent an ocean temperature field which is masked out at land points. If a land point is included in the interpolation to one of the output points I would like the result to be nan or something else identifiable so that I can mask it out. I have tried to do the interpolation using the delaunay package. Unfortunately it does not seem to be able to handle masked arrays. I have therefore ended up with a solution in which I only keep the non-masked values of the data in the interpolation. The result of this is that land values are interpolated from the nearest ocean values. I can of course mask out these values afterwards by finding the nearest point in my input data array and check if it is masked. But that is probably not very effective and I was wondering if anyone has a better solution? Are some of the other interpolation routines better suited for such a problem? Regards, Jesper From scipy-user at onnodb.com Thu Feb 28 03:57:00 2008 From: scipy-user at onnodb.com (scipy-user at onnodb.com) Date: Thu, 28 Feb 2008 09:57:00 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets Message-ID: <1065457655.20080228095700@xs4all.nl> Hi all, Using LabVIEW software for our data analysis at the moment, I'm currently looking for alternatives. Especially since LabVIEW's "graphical" programming language is somewhat cumbersome for some of the things we're doing --- an iterative language would often be much easier. (Actually, I personally don't like the graphical way of programming at all :) ) Python seems to be a great alternative, although I haven't been able yet to get things up & running the way I'd like. The main 'problem' is that LabVIEW contains a lot of high-performance library code for plotting data. I've been experimenting with SciPy and matplotlib, but those libraries are just *way* slower than LabVIEW when plotting large data sets (in our case, it's a current trace with a few MBs of data). I'd like to plot a current trace, so that the user can quickly zoom in & out, and pan using a horizontal scroll bar, but how should I do this? (I've looked around for examples a bit, but being a newbie, it can be hard to find your way around such a huge community). Another issue appears to be the creation of simple user interfaces. This is very intuitive in LabVIEW, but could someone here give some advice on a way to combine a UI and plot windows in a not-so-difficult to learn way in Python? What's your own experience? I've spent some time on looking at various packages and frameworks like Traits, Chaco and Envisage, but I just can't seem to wrap my head around them. I'm looking forward to any help; thank you very much in advance! 
Best regards, -- Onno Broekmans From matthieu.brucher at gmail.com Thu Feb 28 05:29:45 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 28 Feb 2008 11:29:45 +0100 Subject: [SciPy-user] [sparse] Sum over lines or columns Message-ID: Hi, Is there an efficient way of summig a csr_matrix over lines or columns ? Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.g.basden at durham.ac.uk Thu Feb 28 08:10:45 2008 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Thu, 28 Feb 2008 13:10:45 +0000 (GMT) Subject: [SciPy-user] scipy installation without root access In-Reply-To: References: Message-ID: Hi Hoyt/David, thanks for the replies... the problem is that it installs fine without errors, but the special.kv function doesn't work, and since my code needs this, its a problem! Does anyone have any idea how I could try to correct this? From what I can gather, the function is in scipy/special/amos/zbesk.f I have a feeling that amos_wrappers.c has something to do with it, as does _cephes.so, which is imported by basic.py (imported by init). amos_wrappers seems to wrap the fortran to c. _cephesmodule.c then seems to use this by calliny PyUFunc_FromFuncAndData. So I suspect the error is either in PyUFunc or in the fortran... This uses cephes2cp_functions which are PyUFunc_ff_f_As_dD_D, etc. Are there any known bugs in the ufunc interface that could be causing this? (see posts from a week or so ago for the problem with kv). I actually now think there are further errors in kv, on all platforms that I've tried... eg: scipy.special.kv(6./5,1) scipy.special.kv(6./5,0) scipy.special.kv(6./5,0) scipy.special.kv(6./5,[1,1,1]) scipy.special.kv(6./5,[1,0,0]) scipy.special.kv(6./5,[0,0,1]) For values with 0 in the 2nd arguement, I think they should raise a python error, or at least give some warning - however, they dont. One thing that I notice is that when running kv(6./5,1) twice (on my attempted installation), the second time returns with a fortran error of IERR=2 (I inserted a print statement in amos_wrappers.c). So this is definitely not the correct behaviour... The failing is occuring at line 208 of zbesk.f: if (AZ.LT.UFL) GO TO 180 Here, AZ is 1 as expected, but UFL==1200. Not sure why this should be The line previous defines UFL as: UFL = D1MACH(1)*1.0D+3 which means that D1MACH(1) is returning 1.2 - which I would think is blatantly wrong! (looking at the code, I would have expected it to return 2.22507386e-308). Strange... the only thing that I can think of is that the static variable small(2) is being overwritten elsewhere... maybe its a fortran compiler issue. Thanks... From matthieu.brucher at gmail.com Thu Feb 28 08:33:53 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 28 Feb 2008 14:33:53 +0100 Subject: [SciPy-user] [sparse] Sum over lines or columns In-Reply-To: References: Message-ID: Sorry for the nise, there is a sum method... Matthieu 2008/2/28, Matthieu Brucher : > > Hi, > > Is there an efficient way of summig a csr_matrix over lines or columns ? 
> > Matthieu > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Thu Feb 28 18:43:56 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 29 Feb 2008 00:43:56 +0100 Subject: [SciPy-user] nice floating point display ? Message-ID: <47C7473C.3070906@ru.nl> hello, I'm creating an float / log slider, now I need a nice representation of the min / max / value of the slider. For some unknown reason set_printoptions(precision=2) doesn't work, and I doubt if it is adequate in my application, because precision and notation depends more on the range than on the actual value itself. Does anyone has a elegant solution ? thanks, Stef Mientki From robert.kern at gmail.com Thu Feb 28 19:07:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 28 Feb 2008 18:07:04 -0600 Subject: [SciPy-user] nice floating point display ? In-Reply-To: <47C7473C.3070906@ru.nl> References: <47C7473C.3070906@ru.nl> Message-ID: <3d375d730802281607g5832f7a8n2118cd944bc9d554@mail.gmail.com> On Thu, Feb 28, 2008 at 5:43 PM, Stef Mientki wrote: > hello, > > I'm creating an float / log slider, > now I need a nice representation of the min / max / value > of the slider. > > For some unknown reason > set_printoptions(precision=2) > doesn't work, > and I doubt if it is adequate in my application, > because precision and notation depends more on the range > than on the actual value itself. > > Does anyone has a elegant solution ? set_printoptions() doesn't affect Python floats at all, and that looks like what you are trying to print. You can explicitly format the numbers yourself: http://docs.python.org/dev/library/stdtypes.html#string-formatting-operations In [39]: x = 1.2345678901 In [40]: '%f' % x Out[40]: '1.234568' In [41]: '%g' % x Out[41]: '1.23457' In [42]: '%1.2f' % x Out[42]: '1.23' In [43]: '%1.10f' % x Out[43]: '1.2345678901' In [46]: '%f' % (x*1e20) Out[46]: '123456789009999986688.000000' In [47]: '%g' % (x*1e20) Out[47]: '1.23457e+20' In [48]: '%e' % x Out[48]: '1.234568e+00' In [56]: '%.*f' % (3, x) Out[56]: '1.235' -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kdsudac at yahoo.com Thu Feb 28 20:09:12 2008 From: kdsudac at yahoo.com (Keith Suda-Cederquist) Date: Thu, 28 Feb 2008 17:09:12 -0800 (PST) Subject: [SciPy-user] Power spectrum scaling Message-ID: <737196.42157.qm@web54302.mail.re2.yahoo.com> Hi All, I'm doing some image processing and am taking the 1-D FFT of a linescan (basically I've average along one dimension of the image to get a 1-d line then take the FFT). I then am looking at the magnitude and/or power of the spectrum. This is straight-forward but I'm running into some problems when I try to divide my signal up an take the FFT of the sections. 
For example: t=0.5*scipy.arange(0,1000) #time y=2*scipy.sin(51*t) #arbitrary sinusoidal signal N=1024 #I've tried this with different values of N and can't figure out how best to handle it Y=scipy.fft(y,N) y1=y[0:500] #first half of signal y2=y[500:] #second half of singal Y1=scipy.fft(y1,512) #again I'm open to using different values for N Y2=scipy.fft(y2,512) I then do some scaling of the frequency axis to get the peaks to line up. But I can't get the height of the peaks to be in very good agreement. I get good agreement between the spectrums of Y1 and Y2, but not with Y. Since I have basically a fixed frequency I'd think that the first and second halves (Y1 and Y2) of the original signal should have the same frequency characteristics as the original (Y) and I'd just have to do some clever scaling to get the magnitude and/or power spectrums to be almost the same. I've tried all sorts of different scaling techniques to try to get the spectral signature to be about the same but haven't had any luck. I've scoured the weba nd looked in a fourier tranform book (by Ronald Bracewell) I have but haven't been able to figure it out. I think part of my problems are related to zero-padding and the difference between the number of samples. Any signal processing gurus out there who can help me out? Thanks, Keith ____________________________________________________________________________________ Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Feb 29 00:02:35 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 29 Feb 2008 14:02:35 +0900 Subject: [SciPy-user] scipy installation without root access In-Reply-To: References: Message-ID: <47C791EB.8090103@ar.media.kyoto-u.ac.jp> Alastair Basden wrote: > Hi Hoyt/David, > thanks for the replies... > the problem is that it installs fine without errors, but the special.kv > function doesn't work, and since my code needs this, its a problem! > > Does anyone have any idea how I could try to correct this? From what I > can gather, the function is in scipy/special/amos/zbesk.f Give us the build log. It is possible that something went wrong during the build, even if successful. cheers, David From robert.kern at gmail.com Fri Feb 29 00:51:00 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 28 Feb 2008 23:51:00 -0600 Subject: [SciPy-user] 2D interpolation containing missing values In-Reply-To: References: Message-ID: <3d375d730802282151s6641403btec9280544bd8b9cc@mail.gmail.com> On Wed, Feb 27, 2008 at 4:27 PM, Jesper Larsen wrote: > Hi SciPy users, > > I would like to interpolate data from a 2D irregular grid to a limited > number of points. The input data is defined by: a 2D longitude array, > a 2D latitude array and a 2D array containing the data values. The > data array is a numpy masked array (numpy.ma) and may contain masked > out values. It can for example represent an ocean temperature field > which is masked out at land points. If a land point is included in the > interpolation to one of the output points I would like the result to > be nan or something else identifiable so that I can mask it out. > > I have tried to do the interpolation using the delaunay package. > Unfortunately it does not seem to be able to handle masked arrays. 
I > have therefore ended up with a solution in which I only keep the > non-masked values of the data in the interpolation. The result of this > is that land values are interpolated from the nearest ocean values. I > can of course mask out these values afterwards by finding the nearest > point in my input data array and check if it is masked. But that is > probably not very effective and I was wondering if anyone has a better > solution? That's pretty much how I would do it. One trick you might try is to interpolate a dataset which is 0.0 on land and 1.0 on ocean. If you use the linear interpolator rather than the natural-neighbor interpolator, an interpolated value of 1.0 will occur only when the point is entirely contained in a triangle whose points are all ocean in the Delaunay triangulation. Treat 0.0 as land, 1.0 as ocean, and make decision about what to do with the values in between. This will only work if you have actually sampled the land point sufficiently in the triangulation. If you only have ocean points, then everything in the convex hull of the ocean points will be considered ocean, which is almost certainly not what you want. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From beckers at orn.mpg.de Fri Feb 29 04:39:25 2008 From: beckers at orn.mpg.de (Gabriel J.L. Beckers) Date: Fri, 29 Feb 2008 10:39:25 +0100 Subject: [SciPy-user] Power spectrum scaling In-Reply-To: <737196.42157.qm@web54302.mail.re2.yahoo.com> References: <737196.42157.qm@web54302.mail.re2.yahoo.com> Message-ID: <1204277965.9742.15.camel@gabriel-desktop> Hi Keith, I am not a signal processing guru but I think you want to divide your magnitude spectrum by the number of samples in your FFT. Did you take that into account? I don't know what your mean by "spectral signature", but note that the shapes of the spectrums are not expected to be the same if one compares a 512 sample sine and a zero-padded 1000 sample sine. I think you should look for more detailed help in a dsp forum. You can trust that there is nothing wrong with how scipy calculates a fft. Gabriel On Thu, 2008-02-28 at 17:09 -0800, Keith Suda-Cederquist wrote: > Hi All, > > I'm doing some image processing and am taking the 1-D FFT of a > linescan (basically I've average along one dimension of the image to > get a 1-d line then take the FFT). I then am looking at the magnitude > and/or power of the spectrum. This is straight-forward but I'm > running into some problems when I try to divide my signal up > an take the FFT of the sections. > > For example: > > t=0.5*scipy.arange(0,1000) #time > y=2*scipy.sin(51*t) #arbitrary sinusoidal signal > N=1024 #I've tried this with different values of N and can't figure > out how best to handle it > > Y=scipy.fft(y,N) > > y1=y[0:500] #first half of signal > y2=y[500:] #second half of singal > > Y1=scipy.fft(y1,512) #again I'm open to using different values for N > Y2=scipy.fft(y2,512) > > I then do some scaling of the frequency axis to get the peaks to line > up. But I can't get the height of the peaks to be in very good > agreement. > > I get good agreement between the spectrums of Y1 and Y2, but not with > Y. 
Since I have basically a fixed frequency I'd think that the first > and second halves (Y1 and Y2) of the original signal should have the > same frequency characteristics as the original (Y) and I'd just have > to do some clever scaling to get the magnitude and/or power spectrums > to be almost the same. I've tried all sorts of different scaling > techniques to try to get the spectral signature to be about the same > but haven't had any luck. > > I've scoured the weba nd looked in a fourier tranform book (by Ronald > Bracewell) I have but haven't been able to figure it out. > > I think part of my problems are related to zero-padding and the > difference between the number of samples. > > Any signal processing gurus out there who can help me out? > > Thanks, > Keith > > > > ______________________________________________________________________ > Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try > it now. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From pearu at cens.ioc.ee Fri Feb 29 07:12:40 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 29 Feb 2008 14:12:40 +0200 (EET) Subject: [SciPy-user] ANN: sympycore version 0.1 released Message-ID: <46012.129.240.228.53.1204287160.squirrel@cens.ioc.ee> We are proud to present a new Python package: sympycore - an efficient pure Python Computer Algebra System Sympycore is available for download from http://sympycore.googlecode.com/ Sympycore is released under the New BSD License. Sympycore provides efficient data structures for representing symbolic expressions and methods to manipulate them. Sympycore uses a very clear algebra oriented design that can be easily extended. Sympycore is a pure Python package with no external dependencies, it requires Python version 2.5 or higher to run. Sympycore uses Mpmath for fast arbitrary-precision floating-point arithmetic that is included into sympycore package. Sympycore is to our knowledge the most efficient pure Python implementation of a Computer Algebra System. Its speed is comparable to Computer Algebra Systems implemented in compiled languages. Some comparison benchmarks are available in * http://code.google.com/p/sympycore/wiki/Performance * http://code.google.com/p/sympycore/wiki/PerformanceHistory and it is our aim to continue seeking for more efficient ways to manipulate symbolic expressions: http://cens.ioc.ee/~pearu/sympycore_bench/ Sympycore version 0.1 provides the following features: * symbolic arithmetic operations * basic expression manipulation methods: expanding, substituting, and pattern matching. * primitive algebra to represent unevaluated symbolic expressions * calculus algebra of symbolic expressions, unevaluated elementary functions, differentiation and polynomial integration methods * univariate and multivariate polynomial rings * matrix rings * expressions with physical units * SympyCore User's Guide and API Docs are available online. Take a look at the demo for sympycore 0.1 release: http://sympycore.googlecode.com/svn/trunk/doc/html/demo0_1.html However, one should be aware that sympycore does not implement many features that other Computer Algebra Systems do. The version number 0.1 speaks for itself:) Sympycore is inspired by many attempts to implement CAS for Python and it is created to fix SymPy performance and robustness issues. Sympycore does not yet have nearly as many features as SymPy. 
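Going back to the power-spectrum scaling question above, the normalization Gabriel suggests (divide each magnitude spectrum by the number of samples that actually went into it) can be sketched as follows. The signal and FFT lengths mirror the quoted example, but the snippet uses numpy.fft and is illustrative only:

```python
import numpy as np

t = 0.5 * np.arange(0, 1000)          # time axis from the quoted example
y = 2 * np.sin(51 * t)                # fixed-frequency test signal

def scaled_mag(x, nfft):
    """Magnitude spectrum scaled by the number of real samples in x."""
    return np.abs(np.fft.fft(x, nfft)) / len(x)

Y = scaled_mag(y, 1024)               # full signal, zero-padded to 1024
Y1 = scaled_mag(y[:500], 512)         # first half, zero-padded to 512
Y2 = scaled_mag(y[500:], 512)         # second half, zero-padded to 512

# After this scaling the dominant peaks of Y, Y1 and Y2 have comparable
# heights; zero-padding still changes the peak width and the bin spacing,
# so the spectra will not match point for point.
print(Y.max(), Y1.max(), Y2.max())
```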
Our goal is to work towards merging these efforts with the SymPy project in the near future.

Enjoy!

* Pearu Peterson
* Fredrik Johansson

Acknowledgments:

* The work of Pearu Peterson on the SympyCore project is supported by a Center of Excellence grant from the Norwegian Research Council to Center for Biomedical Computing at Simula Research Laboratory.

From berthold.hoellmann at gl-group.com Fri Feb 29 10:36:18 2008
From: berthold.hoellmann at gl-group.com (=?iso-8859-15?Q?Berthold_=22H=F6llmann=22?=)
Date: Fri, 29 Feb 2008 16:36:18 +0100
Subject: [SciPy-user] How to tell scipy setup that I have a INTEL fortran ATLAS/BLAS/LAPACK instead of g77
In-Reply-To: <47BFF3A6.2090701@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Sat\, 23 Feb 2008 19\:21\:26 +0900")
References: <47BFF3A6.2090701@ar.media.kyoto-u.ac.jp>
Message-ID: 

Sorry for taking so long for the answer.

David Cournapeau writes:

> Berthold Höllmann wrote:
>>
>> No matter what I do, I can't tell scipy to use the INTEL fortran API
>> conventions instead of the g77 conventions for fortran routine names
>> containing underscores:
>>
>> hoel at pc047299:scipy-0.6.0 nm /usr/local/gltools/linux/lib/libf77blas_ifc91.so.3.8|grep atl_f77wrap_dtrsv
>> 0000cfe0 T atl_f77wrap_dtrsv_
>> hoel at pc047299:scipy-0.6.0 nm build/lib.linux-i686-2.5/scipy/linsolve/_zsuperlu.so| grep atl_f77wrap_dtrsv
>> U atl_f77wrap_dtrsv__
>>
>> How can I set up scipy in a way that superlu tries to access
>> atl_f77wrap_dtrsv_ instead of atl_f77wrap_dtrsv__?
>>
> Hi,
>
> You don't give enough details to answer you completely (which
> compiler are you using for ATLAS and for numpy), but assuming you did
> compile atlas with intel compiler and numpy with g77, this will not
> work. You cannot tell g77 to follow "intel" convention (different
> mangling is only the tip of the iceberg; other issues are more subtle
> and more difficult to track). You should use the same fortran compiler
> for numpy and for atlas. Mixing fortran compilers is not a good idea,
> and will often give unpredictable results.
>
> If your problem is telling numpy to be compiled with intel fortran
> compiler, than this is what you should use:
>
> python setup.py build --fcompiler=intel

I am aware of some of the problems when mixing object files from different compiler brands. Numpy is compiled using the Intel Fortran compiler, as is scipy. The affected code is pure C code. The superLU code somewhere seems to use a wrapper to call BLAS routines like "DTRSV" from C. These wrapper routines seem to include ATLAS header files that define mappings from the routine name to some ATLAS wrapper routine name. This ATLAS wrapper name depends on the FORTRAN compiler that ATLAS is built for. The ATLAS header files use defines like "Add_" and "Add__" to distinguish between different FORTRAN compiler naming conventions. When compiling "superLU" from scipy, "Add__" seems to be used (some kind of default, I guess), whereas "Add_" would be right for INTEL Fortran.

Kind regards
Berthold Höllmann
--
Germanischer Lloyd AG
CAE Development
Vorsetzen 35
20459 Hamburg
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellmann at gl-group.com
Internet: http://www.gl-group.com
From bing.jian at siemens.com Fri Feb 29 13:38:32 2008
From: bing.jian at siemens.com (Jian, Bing (MED US))
Date: Fri, 29 Feb 2008 13:38:32 -0500
Subject: [SciPy-user] nonnegative linear squares (NNLS, lsqnonneg) in Python
Message-ID: <1D37B5D0C584B04B902222300E9B2B1FB467A3@USMLVV1EXCTV06.ww005.siemens.net>

Hi,

I am wondering if there is a non-negative linear squares solver in scipy/numpy which is equivalent to the lsqnonneg() in MATLAB? If not, then probably I need to write my own extensions based on existing C code. Thanks!

Bing

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From s.mientki at ru.nl Fri Feb 29 13:51:23 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Fri, 29 Feb 2008 19:51:23 +0100
Subject: [SciPy-user] nice floating point display ?
In-Reply-To: <3d375d730802281607g5832f7a8n2118cd944bc9d554@mail.gmail.com>
References: <47C7473C.3070906@ru.nl> <3d375d730802281607g5832f7a8n2118cd944bc9d554@mail.gmail.com>
Message-ID: <47C8542B.6090003@ru.nl>

thanks Robert,

Robert Kern wrote:
> On Thu, Feb 28, 2008 at 5:43 PM, Stef Mientki wrote:
>
>> hello,
>>
>> I'm creating an float / log slider,
>> now I need a nice representation of the min / max / value
>> of the slider.
>>
>> For some unknown reason
>> set_printoptions(precision=2)
>> doesn't work,
>> and I doubt if it is adequate in my application,
>> because precision and notation depends more on the range
>> than on the actual value itself.
>>
>> Does anyone has a elegant solution ?
>>
>
> set_printoptions() doesn't affect Python floats at all, and that looks
> like what you are trying to print.
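Regarding the lsqnonneg question above: SciPy later gained a wrapper for the classic Lawson-Hanson solver, scipy.optimize.nnls, which was not available in the SciPy of this thread. A small usage sketch with a made-up problem:

```python
import numpy as np
from scipy.optimize import nnls   # available in later SciPy releases

# Small illustrative problem: minimize ||A x - b|| subject to x >= 0.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0, -1.0])

x, rnorm = nnls(A, b)
print(x)      # non-negative solution, comparable to MATLAB's lsqnonneg(A, b)
print(rnorm)  # residual norm ||A x - b||
```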
> > I didn't know that, but it sounds very plausible after all ;-) > You can explicitly format the numbers yourself: > > http://docs.python.org/dev/library/stdtypes.html#string-formatting-operations > > > In [39]: x = 1.2345678901 > > In [40]: '%f' % x > Out[40]: '1.234568' > > Yes that's part of the answer ( I already knew), but the rest of the problem looks more complicated to me, (whether I have a range of 1..1000 or 0.0000004 ...0.00000041 etc) and I think a number of people has already solved this part of the problem (like in MatPlotlib). For now I'm going to for a very simple solution, let the user specify the format ;-) cheers, Stef -------------- next part -------------- An HTML attachment was scrubbed... URL: From webb.sprague at gmail.com Fri Feb 29 19:30:34 2008 From: webb.sprague at gmail.com (Webb Sprague) Date: Fri, 29 Feb 2008 16:30:34 -0800 Subject: [SciPy-user] minpack.error / fsolve problem Message-ID: I am having a problem with convergence (I think) for an optimization. Every so often I do a non-linear fit of a parameter using fsolve (as input a constant vector of base death rates, a constant vector multiplier of those, and a variable scalar multiplier -- the last is what I am trying to fit) and I get a "minpack.error" with the message that "Error occured while calling the Python function named f" with no other information. The information normally returned from fsolve (ier, message, infodict) are all set to none. I am kind of at a loss for how to proceed, at least without taking a class on optimization algorithms. How do I get more information from fsolve? Is there a better optimization function to use? optimize.golden()? Or ... ? See attached for how I set up the optimizer, and here is the back trace from the web application using this code: TRACEBACK: Traceback (most recent call last): File "/usr/lib64/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch default=default_handler, arg=req, silent=hlist.silent) File "/usr/lib64/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target result = _execute_target(config, req, object, arg) File "/usr/lib64/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target result = object(arg) File "/usr/lib64/python2.5/site-packages/mod_python/publisher.py", line 213, in handler published = publish_object(req, object) File "/usr/lib64/python2.5/site-packages/mod_python/publisher.py", line 425, in publish_object return publish_object(req,util.apply_fs_data(object, req.form, req=req)) File "/usr/lib64/python2.5/site-packages/mod_python/util.py", line 554, in apply_fs_data return object(**args) File "/var/www/localhost/htdocs/lcfit/lc.py", line 221, in ProcessRatesCoherent return ProcessRatesCoherent_(req) File "/home/webbs/larry/INTERNET_APPLICATION/LcPageObjects.py", line 425, in __call__ obj = self.targetClass(**formData) File "/home/webbs/larry/INTERNET_APPLICATION/LcSinglePopObject.py", line 530, in __init__ self._do_lc() File "/home/webbs/larry/INTERNET_APPLICATION/LcCoherentPopObject.py", line 205, in _do_lc flattenBx=self.flattenBx, doTS=True)) File "/home/webbs/larry/INTERNET_APPLICATION/LcSinglePopObject.py", line 338, in lcInfer kt = fitMultiKt(ax, bx, copy.copy(kt_unfit), nmx[goodRowsNum,:], lifeTableParams) File "/home/webbs/larry/INTERNET_APPLICATION/LcSinglePopObject.py", line 103, in fitMultiKt fittedKt[i] = fitSingleKt(ax, bx, kt[i], nmx[i,:], lifeTableParams) # ages go across File "/home/webbs/larry/INTERNET_APPLICATION/LcSinglePopObject.py", line 88, 
in fitSingleKt fittedKt = LcUtil.fitX(func=LcUtil.kt2e0, target=target_e0, ax=ax, bx=bx, lifeTableParams=lifeTableParams) File "/home/webbs/larry/INTERNET_APPLICATION/LcUtil.py", line 460, in fitX "funcKwargs: %s\n" % pprint.pformat(funcKwargs)) Exception: Caught exception and weird failure in fitting kt to empirical e_0. e: "Error occured while calling the Python function named f". error __doc__: "None". type: "". out: "None". infodict: "None". ier: "None". mesg: "None". funcArgs: () funcKwargs: {'ax': array([-4.231837, -6.92667 , -7.537601, -7.668495, -6.554903, -5.971107, -5.759917, -5.514402, -5.230182, -4.900118, -4.559831, -4.244538, -3.937256, -3.602081, -3.269744, -2.901267, -2.51867 , -2.118842, -1.746651, -1.402097, -1.085182, -0.795903, -0.534262, -0.300259]), 'bx': array([ 0.135533, 0.126908, 0.117685, 0.095685, 0.029012, 0.019818, 0.025011, 0.003613, -0.00145 , -0.03515 , -0.0542 , -0.062684, -0.051495, -0.039603, -0.029531, -0.011641, -0.002807, -0.018365, -0.028617, -0.033565, -0.033207, -0.027544, -0.016576, -0.000302]), 'lifeTableParams': {'ageCutoff': 80, 'beginFuncParam': 0.0, 'endFuncParam': 0.0, 'extensionMethod': 'mxExtend_Boe', 'gender': 'combined', 'ltFuncType': 'ex'}} -------------- next part -------------- A non-text attachment was scrubbed... Name: fsolve-error.py Type: text/x-python Size: 2448 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Fri Feb 29 23:42:03 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 01 Mar 2008 13:42:03 +0900 Subject: [SciPy-user] How to tell scipy setup that I have a INTEL fortran ATLAS/BLAS/LAPACK instead of g77 In-Reply-To: References: <47BFF3A6.2090701@ar.media.kyoto-u.ac.jp> Message-ID: <47C8DE9B.1040201@ar.media.kyoto-u.ac.jp> Berthold H?llmann wrote: > Sorry for taking so long for the answer. > No problem. > > > I am aware of some of the problems when mixing object files from > different compiler brands. It is not different brand, it is different version. Intel compiler on linux would have no chance of success if it was not compatible with gcc. C being a relatively non moving target, extremely used, and "not difficult", there is an clear ABI for C, even on linux; that's the only language which did not have ABI incompatibilities between gcc 3 and 4 (of fortran, C++ and C). recent ifort is only compatible with gcc 4, that is gfortran for fortran. You cannot mix g77 and gfortran at all without huge pain, and thus, you cannot mix ifort and g77 (but you can mix gfortran and ifort). > Numpy is compiled using Intel Fortran > compiler as well as scipy. The affected code is pure C code. The > superLU code somewhere seems to use a wrapper to call BLAS routines > like "DTRSV" from C. These wrapper routines seem to include ATLAS > header files that define mappings from the routine name to some ATLAS > wrapper routine name. This ATLAS wrapper name depends on the FORTRAN > compiler that ATLAS is build for. The ATLAS header files use defines > like "Add_" and "Add__" to distinguish between different FORTRAN > compiler naming conventions. When compiling "superLU" from scipy > "Add__" seems to used(some kind of default I guess), whereas "Add_" > would be right for INTEL Fortran. Yes, I understand the problem. g77 appends two underscore to function names with an underscore in the original name, that is foobar will be foobar_, and foo_bar will be foo_bar__. gfortran (and fortran compiler) do not differentiate: foobar becomes foobar_, foo_bar becomes foo_bar_. 
But this is only the tip of the iceberg, which is what I wanted to say in my previous email. In particular, the way arguments are passed between C and Fortran is different in g77 and gfortran. It is different and not compatible (pass by value vs pass by reference for some data types - the COMPLEX type, for example). So even if you managed to solve the mangling issue, you would still have problems. IOW, g77 and gfortran do NOT have the same ABI, and it is impossible to make them compatible without recompiling. And even if you wanted to recompile, it is not advised: man gfortran, section -ff2c, says:

"""
-ff2c
    Generate code designed to be compatible with code generated by g77 and f2c. The calling conventions used by g77 (originally implemented in f2c) require functions that return type default "REAL" to actually return the C type "double", and functions that return type "COMPLEX" to return the values via an extra argument in the calling sequence that points to where to store the return value. Under the default GNU calling conventions, such functions simply return their results as they would in GNU C---default "REAL" functions return the C type "float", and "COMPLEX" functions return the GNU C type "complex". Additionally, this option implies the -fsecond-underscore option, unless -fno-second-underscore is explicitly requested. This does not affect the generation of code that interfaces with the libgfortran library.

    Caution: It is not a good idea to mix Fortran code compiled with -ff2c with code compiled with the default -fno-f2c calling conventions as, calling "COMPLEX" or default "REAL" functions between program parts which were compiled with different calling conventions will break at execution time.

    Caution: This will break code which passes intrinsic functions of type default "REAL" or "COMPLEX" as actual arguments, as the library implementations use the -fno-f2c calling conventions.
"""

Actually, it is a good thing to have different name mangling, because if the name mangling were the same between g77 and gfortran, instead of having people complain about missing symbols (easy to spot and solve), we would have people complaining about data corruption, segmentation faults and the like.

So in your case, the solution is to use compatible fortran compilers for numpy/scipy and the fortran libraries you are using. Either ifort or gfortran is fine if you are using atlas compiled with ifort.

cheers,

David
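A quick way to check which mangling convention a particular library export follows is to probe it for both spellings with ctypes; the library path and symbol below are copied from the nm output earlier in the thread, so substitute your own build:

```python
import ctypes

# Path and symbol are taken from the thread's nm output; adjust for your own build.
libpath = "/usr/local/gltools/linux/lib/libf77blas_ifc91.so.3.8"
lib = ctypes.CDLL(libpath)

for name in ("atl_f77wrap_dtrsv_",    # single trailing underscore: ifort/gfortran style
             "atl_f77wrap_dtrsv__"):  # double underscore: g77 style for names containing '_'
    try:
        getattr(lib, name)
        print("%s is exported" % name)
    except AttributeError:
        print("%s is not exported" % name)
```

If scipy's extension module references the spelling that the library does not export, the import fails with an undefined-symbol error, which is exactly the mismatch shown by the nm output above.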
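On the fsolve question earlier in this digest: the message "Error occured while calling the Python function named f" typically means that the objective function itself raised an exception, so the underlying traceback is what you want to see. A sketch of wrapping the objective to surface that traceback, and of requesting full_output so that ier and mesg are filled in when the solver does return; the objective and starting point here are placeholders, not the life-table code from the original post:

```python
import traceback

import numpy as np
from scipy.optimize import fsolve

def objective(k):
    # Placeholder objective; in the original post this was a life-table
    # fit (the kt2e0 result minus a target e0).
    return np.cos(k) - k

def wrapped(k):
    try:
        return objective(k)
    except Exception:
        # Make the real traceback visible even if the solver only
        # reports a generic error.
        traceback.print_exc()
        raise

x, infodict, ier, mesg = fsolve(wrapped, x0=0.5, full_output=True)
if ier != 1:
    print("fsolve did not converge: %s" % mesg)
print(x)
```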