From eric at scipy.org Mon Apr 1 06:56:53 2002 From: eric at scipy.org (eric) Date: Mon, 1 Apr 2002 06:56:53 -0500 Subject: [SciPy-dev] Re: SciPy RPM References: <200204010653.g316rb019653@coral.phys.uvic.ca> Message-ID: <0aec01c1d974$50ab93a0$6b01a8c0@ericlaptop> Mark, I've forwarded this to the scipy development list. Some people there may have more experience with the RPM issues. eric ----- Original Message ----- From: "Mark Fardal" To: ; Sent: Monday, April 01, 2002 1:53 AM Subject: SciPy RPM > > Hi, > > I finally got access to python2 on some Red hat machines. Time to try > out SciPy. There is an interesting setup here where /usr/local is on > a central server. It means the sysop just has to install major > packages once, but the result is that "local" means "remote"... > > [root at coral incoming]# rpm --test -iv SciPy-0.1-1.i686.rpm > Preparing packages for installation... > > I think this means there were no dependency conflicts. > > [root at coral incoming]# rpm -ivh SciPy-0.1-1.i686.rpm > Preparing... ########################################### [100%] > 1:SciPy error: unpacking of archive failed on file /usr/local/include/python2.1/Numeric/arrayobject.h: cpio: mkdir failed - Permission denied > > but, I can't write to /usr/local. Looking in rpm manual discover an > interesting-looking option > > [root at coral incoming]# rpm --relocate /usr/local/=/astro/fardal/usr/local/ -ivh > SciPy-0.1-1.i686.rpm > Preparing... ########################################### [100%] > path /usr/local in package SciPy-0.1-1 is not relocateable > > does this mean I can't install SciPy? that seems odd. why shouldn't I > be able to change the install point? > > thanks, > Mark Fardal > University of Victoria > > PS: The page > http://www.scipy.org/download/scipy_rpm > lists Eric as the contact but Travis as recipient of problems, > so I'll send this to both. 
From pearu at cens.ioc.ee Mon Apr 1 13:03:20 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 1 Apr 2002 21:03:20 +0300 (EEST)
Subject: [SciPy-dev] preparing for release 0.2!
In-Reply-To: <053401c1d53c$a52453e0$6b01a8c0@ericlaptop>
Message-ID: 

Hi Eric,

On Tue, 26 Mar 2002, eric wrote:
> this is the case, I'd like to shoot for a 0.2 release candidate by Friday, April
> 5th. Here's a partial list of the things I know of before release:
>
> 1. Re-factor scipy_lite module (pending discussion).
> 2. Fix floating point NaN issues on multiple platforms (cephes module).
> 3. Have several stats people review the stats module
> 4. Add lu_factor and lu_solve (qr_factor and qr_solve, etc?) pairs to
> linalg.
> 5. Clean up weave documentation to match current implementation.
> 6. Update install instructions.
> 7. Test on multiple platforms.

8. Apply system_info.py changes from David M. Cooke that fix the SciPy build for Debian Sid and implement site.cfg hooks.

> I guess Pearu and Travis O. are the main guys to add other "todo" items. Also,
> we should coordinate a 2-4 hour period for one or two days next week on ICQ or
> IRC to work as a group on final clean up. Right now, I'd say Monday and
> Thursday are the best for me. Will that work for yall? Pearu, what time of day
> works best for you?

Usually after 6pm my local time, which makes it morning for you, I guess. But I cannot promise that I'll be easily available this week or the next, as I'll be busy summarizing my real work. But I'll try to help solve any problems with releasing 0.2 as much as possible.

> I'm excited about getting this one out, because it is getting close to having a
> "functionally complete" core now. The next revision we can concentrate on docs
> and testing (everyone's favorite I know...).

I'm looking forward to it too,
	Pearu

From cookedm at physics.mcmaster.ca Mon Apr 1 16:30:56 2002 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Mon, 01 Apr 2002 16:30:56 -0500 Subject: [SciPy-dev] preparing for release 0.2! In-Reply-To: (Pearu Peterson's message of "Mon, 1 Apr 2002 21:03:20 +0300 (EEST)") References: Message-ID: At some point, Pearu Peterson wrote: > Hi Eric, > > On Tue, 26 Mar 2002, eric wrote: > >> this is the case, I'd like to shoot for a 0.2 release candidate by Friday, April >> 5th. Here's a partial list of the things I know of before release: >> >> 1. Re-factor scipy_lite module (pending discussion). >> 2. Fix floating point NaN issues on multiple platforms (cephes module). >> 3. Have several stats people review the stats module >> 4. Add lu_factor and lu_solve (qr_factor and qr_solve, etc?) pairs to >> linalg. >> 5. Clean up weave documentation to match current implementation. >> 6. Update install instructions. >> 7. Test on multiple platforms. > > 8. Apply system_info.py changes from David M. Cooke that fixes > SciPy build for Debian Sid and implements site.cfg hooks. I've updated it to add more hooks. This should appease the user who has his atlas libraries compiled into 'blas' and 'lapack'. Now the names of the libraries are configurable, and there's some documentation at the top. [How would you prefer to get changes in the future? Is it all right to post the source, or should I put them up on a website? I'd normally send a CVS diff, but they're about as big as the diff :-)] -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke |cookedm at physics.mcmaster.ca -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: system_info.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: site.cfg URL: From pearu at cens.ioc.ee Mon Apr 1 16:42:24 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 2 Apr 2002 00:42:24 +0300 (EEST) Subject: [SciPy-dev] preparing for release 0.2! 
In-Reply-To: Message-ID: 

On Mon, 1 Apr 2002, David M. Cooke wrote:
> I've updated it to add more hooks. This should appease the user who
> has his atlas libraries compiled into 'blas' and 'lapack'. Now the
> names of the libraries are configurable, and there's some
> documentation at the top.

Great!

> [How would you prefer to get changes in the future? Is it all right to
> post the source, or should I put them up on a website? I'd normally send a CVS
> diff, but they're about as big as the diff :-)]

Does your system_info.py include the latest changes from SciPy CVS? If yes, then it will be easier to apply your changes. You can send the source now; once it is committed to CVS, diffs are better.

	Pearu

From pearu at cens.ioc.ee Mon Apr 1 16:50:57 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 2 Apr 2002 00:50:57 +0300 (EEST)
Subject: [SciPy-dev] preparing for release 0.2!
In-Reply-To: Message-ID: 

On Tue, 2 Apr 2002, Pearu Peterson wrote:
> yes, then it will be easier to apply your changes. You can send the source
> now; once it is committed to CVS, diffs are better.

Sorry, I didn't notice the attached files right away.

Thanks,
	Pearu

From cookedm at physics.mcmaster.ca Mon Apr 1 16:52:20 2002
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Mon, 01 Apr 2002 16:52:20 -0500
Subject: [SciPy-dev] preparing for release 0.2!
In-Reply-To: (Pearu Peterson's message of "Tue, 2 Apr 2002 00:42:24 +0300 (EEST)")
References: Message-ID: 

At some point, Pearu Peterson wrote:
> On Mon, 1 Apr 2002, David M. Cooke wrote:
>
>> I've updated it to add more hooks. This should appease the user who
>> has his atlas libraries compiled into 'blas' and 'lapack'. Now the
>> names of the libraries are configurable, and there's some
>> documentation at the top.
>
> Great!
>
>> [How would you prefer to get changes in the future? Is it all right to
>> post the source, or should I put them up on a website?
I'd normally send a CVS
>> diff, but they're about as big as the diff :-)]
>
> Does your system_info.py include the latest changes from SciPy CVS? If
> yes, then it will be easier to apply your changes. You can send the source
> now; once it is committed to CVS, diffs are better.

From the CVS log it looks like there have been no changes since I first made this. (Last entry for system_info.py is 2002/03/24; I started 2002/03/25.) And the latest changes are superseded by my version anyways.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke
|cookedm at physics.mcmaster.ca

From dmorrill at scipy.org Mon Apr 1 16:56:24 2002
From: dmorrill at scipy.org (David C. Morrill)
Date: Mon, 1 Apr 2002 15:56:24 -0600
Subject: [SciPy-dev] scipy import issue
References: Message-ID: <003a01c1d9c8$1147a880$6501a8c0@Dave>

Recently I've been working on a Python app that uses scipy (among several other packages). I'm now in a position where I want to create a nice Windows installer that will allow users of the program to install/run the program without even knowing that the program is written in Python (i.e. they will not need to install Python or any requisite packages themselves). To that end, I have been using Gordon McMillan's Python Installer tool to automatically analyze/build the collection of files I will need to ship to users. However, I have been having a problem with the scipy parts of the app because of the way that scipy's __init__.py files are currently written. Installer basically performs a static dependency analysis of both the Python code and the shared library files to determine the set of files that comprise the program. For Python source files, this means looking for 'import' statements. Unfortunately, scipy dynamically imports all of its internal modules using functions, which causes Installer to incorrectly analyze most of scipy.
I've gone through all of the scipy __init__.py modules and modified them so that they statically import all of the modules. I've preserved the function calls that previously imported the modules, but now they are only used to build up the value of the __all__ variable (i.e. I've removed the lines that dynamically import modules). This change allows tools like Installer to correctly analyze programs that use scipy, and still seems to give the same result for 'from scipy import *' when executed from within a Python interpreter shell.

I'd like to commit these changes to CVS, but being a scipy newbie, I want to see if anyone has any comments before I go ahead with this change.

Dave Morrill

From pearu at cens.ioc.ee Mon Apr 1 18:56:28 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 2 Apr 2002 02:56:28 +0300 (EEST)
Subject: Enhanced system_info (was Re: [SciPy-dev] preparing for release 0.2!)
In-Reply-To: Message-ID: 

Hi!

On Mon, 1 Apr 2002, David M. Cooke wrote:
> I've updated it to add more hooks. This should appease the user who
> has his atlas libraries compiled into 'blas' and 'lapack'. Now the
> names of the libraries are configurable, and there's some
> documentation at the top.

I have committed your changes to CVS with some changes:

1) site.cfg is in CVS under the name sample_site.cfg. In order to have this file take effect, one must rename it back to site.cfg. This ensures that local changes in site.cfg will not be lost when upgrading.
   Open issues:
   a) Where should the site.cfg file be? Currently it is in the scipy_distutils directory, but if that is inconvenient then we could also move it up into the scipy directory. Or should system_info.py just look for it there?
   b) Installing site.cfg as a data file. This depends on its final destination. Is it useful to install site.cfg at all?
2) Moved prefix /usr to the end of the prefix list that starts with /usr/local, /opt.
3) Added /lib/{atlas,ATLAS}* to the list of paths where atlas libraries are searched.
   Additional hooks may be needed for Win32.
4) Fixed the "-llapack must occur before -latlas" issue.
5) Added additional hooks for finding X11 libraries.
6) Some other minor changes.

Please test the new system_info.py file and let us know if there are any problems. And David, thanks again for your contribution!

Thanks,
	Pearu

From pearu at scipy.org Mon Apr 1 19:10:22 2002
From: pearu at scipy.org (pearu at scipy.org)
Date: Mon, 1 Apr 2002 18:10:22 -0600 (CST)
Subject: [SciPy-dev] scipy import issue
In-Reply-To: <003a01c1d9c8$1147a880$6501a8c0@Dave>
Message-ID: 

Hi Dave,

On Mon, 1 Apr 2002, David C. Morrill wrote:
> I'd like to commit these changes to CVS, but being a scipy newbie, I want
> to see if anyone has any comments before I go ahead with this change.

You may have noticed that Travis has been reorganizing scipy __init__ lately. In fact, the current version of __init__.py in CVS seems to import various modules explicitly already. Are you using the latest CVS? Also, a diff of your changes would be useful to see in order to make any additional comments.
Thanks, Pearu From rob at pythonemproject.com Tue Apr 2 11:35:43 2002 From: rob at pythonemproject.com (rob) Date: Tue, 02 Apr 2002 08:35:43 -0800 Subject: [SciPy-dev] my long awaited FreeBSD crash report Message-ID: <3CA9DDDF.BFA8B666@pythonemproject.com> This is with the untouched, unmodified weave: (don't see g++ in there anywhere) running build_ext building 'sc_f4c0e618f32e7fa45339f71dcafc6f0d0' extension cc -O -pipe -march=pentiumpro -D_THREAD_SAFE -fPIC -I/usr/local/lib/python2.1/site-packages/weave -I/usr/local/lib/python2.1/site-packages/weave/blitz-20001213 -I/usr/local/include/python2.1 -c /home/rob/.python21_compiled/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.cpp -o /tmp/python21_intermediate/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.o skipping /usr/local/lib/python2.1/site-packages/weave/CXX/cxxextensions.c (/tmp/python21_intermediate/cxxextensions.o up-to-date) skipping /usr/local/lib/python2.1/site-packages/weave/CXX/cxxsupport.cxx (/tmp/python21_intermediate/cxxsupport.o up-to-date) skipping /usr/local/lib/python2.1/site-packages/weave/CXX/IndirectPythonInterface.cxx (/tmp/python21_intermediate/IndirectPythonInterface.o up-to-date) skipping /usr/local/lib/python2.1/site-packages/weave/CXX/cxx_extensions.cxx (/tmp/python21_intermediate/cxx_extensions.o up-to-date) cc -shared -pthread /tmp/python21_intermediate/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.o /tmp/python21_intermediate/cxxextensions.o /tmp/python21_intermediate/cxxsupport.o /tmp/python21_intermediate/IndirectPythonInterface.o /tmp/python21_intermediate/cxx_extensions.o -o /home/rob/.python21_compiled/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.so Traceback (most recent call last): File "/home/rob/upml/UFO2blitz.py", line 666, in ? 
blitz(field1,verbose=2) File "/usr/local/lib/python2.1/site-packages/weave/blitz_tools.py", line 99, in blitz type_factories = blitz_type_factories, File "/usr/local/lib/python2.1/site-packages/weave/inline_tools.py", line 432, in compile_function exec 'import ' + module_name File "", line 1, in ? ImportError: /home/rob/.python21_compiled/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.so: Undefined symbol "cerr"

-- 
-----------------------------
The Numeric Python EM Project
www.pythonemproject.com

From eric at scipy.org Tue Apr 2 11:00:34 2002
From: eric at scipy.org (eric)
Date: Tue, 2 Apr 2002 11:00:34 -0500
Subject: [SciPy-dev] my long awaited FreeBSD crash report
References: <3CA9DDDF.BFA8B666@pythonemproject.com>
Message-ID: <0c7901c1da5f$85d34b90$6b01a8c0@ericlaptop>

Looks like the problem here is that the command line compiler name is cc and not gcc. weave specifically checks for gcc and uses g++ if it is found so that stdc++, etc., and all the correct library paths for C++ code are used. cc will definitely slip through the check. Looks like a slightly more sophisticated check needs to be made -- perhaps trying

    cc -v

to see if it reports that it is gcc. Thanks for the report. I'll look into this when scipy_base is finished.
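A check along the lines Eric proposes could look like the sketch below. It uses today's `subprocess` module rather than anything weave actually shipped, and the function name is made up:

```python
import subprocess

def is_really_gcc(compiler="cc"):
    """Ask a compiler to identify itself and look for gcc in the output,
    catching cases like FreeBSD where gcc is installed under the name cc."""
    try:
        proc = subprocess.run([compiler, "-v"], capture_output=True,
                              text=True, check=False)
    except OSError:
        return False  # no such compiler on PATH
    # gcc prints "gcc version ..." (on stderr) when given -v
    return "gcc" in (proc.stdout + proc.stderr).lower()
```

weave could then substitute g++ and link against stdc++ whenever such a check returns True, instead of keying only on the literal compiler name "gcc".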
eric ----- Original Message ----- From: "rob" To: Sent: Tuesday, April 02, 2002 11:35 AM Subject: [SciPy-dev] my long awaited FreeBSD crash report > This is with the untouched, unmodified weave: (don't see g++ in there > anywhere) > > running build_ext > building 'sc_f4c0e618f32e7fa45339f71dcafc6f0d0' extension > cc -O -pipe -march=pentiumpro -D_THREAD_SAFE -fPIC > -I/usr/local/lib/python2.1/site-packages/weave > -I/usr/local/lib/python2.1/site-packages/weave/blitz-20001213 > -I/usr/local/include/python2.1 -c > /home/rob/.python21_compiled/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.cpp -o > /tmp/python21_intermediate/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.o > skipping > /usr/local/lib/python2.1/site-packages/weave/CXX/cxxextensions.c > (/tmp/python21_intermediate/cxxextensions.o up-to-date) > skipping /usr/local/lib/python2.1/site-packages/weave/CXX/cxxsupport.cxx > (/tmp/python21_intermediate/cxxsupport.o up-to-date) > skipping > /usr/local/lib/python2.1/site-packages/weave/CXX/IndirectPythonInterface.cxx > (/tmp/python21_intermediate/IndirectPythonInterface.o up-to-date) > skipping > /usr/local/lib/python2.1/site-packages/weave/CXX/cxx_extensions.cxx > (/tmp/python21_intermediate/cxx_extensions.o up-to-date) > cc -shared -pthread > /tmp/python21_intermediate/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.o > /tmp/python21_intermediate/cxxextensions.o > /tmp/python21_intermediate/cxxsupport.o > /tmp/python21_intermediate/IndirectPythonInterface.o > /tmp/python21_intermediate/cxx_extensions.o -o > /home/rob/.python21_compiled/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.so > Traceback (most recent call last): > File "/home/rob/upml/UFO2blitz.py", line 666, in ? > blitz(field1,verbose=2) > File "/usr/local/lib/python2.1/site-packages/weave/blitz_tools.py", > line 99, in blitz > type_factories = blitz_type_factories, > File "/usr/local/lib/python2.1/site-packages/weave/inline_tools.py", > line 432, in compile_function > exec 'import ' + module_name > File "", line 1, in ? 
> ImportError: > /home/rob/.python21_compiled/sc_f4c0e618f32e7fa45339f71dcafc6f0d0.so: > Undefined symbol "cerr" > > > > -- > ----------------------------- > The Numeric Python EM Project > > www.pythonemproject.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Tue Apr 2 13:13:00 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 2 Apr 2002 21:13:00 +0300 (EEST) Subject: [SciPy-dev] my long awaited FreeBSD crash report In-Reply-To: <0c7901c1da5f$85d34b90$6b01a8c0@ericlaptop> Message-ID: On Tue, 2 Apr 2002, eric wrote: > Looks like the problem here is that the compiler is the command line compiler > name is cc and not gcc. weave specifically checks for gcc and uses g++ if it is > found so that stdc++, etc and all the correct library paths for C++ code are > used. cc will definitely slip through the check. > > Looks like a little more sophisticated check needs to be made -- perhaps trying > > cc -v > > to see if it reports that it is gcc. I know very little about FreeBSD and weave, but may be weave should check also c++ that might be the C++ compiler in FreeBSD. If it is nonsense, then just ignore it. Regards, Pearu From rob at pythonemproject.com Tue Apr 2 14:33:07 2002 From: rob at pythonemproject.com (rob) Date: Tue, 02 Apr 2002 11:33:07 -0800 Subject: [SciPy-dev] re: bug report Message-ID: <3CAA0773.51D75912@pythonemproject.com> Thanks Eric. I also tried putting -L/usr/lib -lstdc++ into my system wide CXXFLAGS variable in make.conf, and it didn't work either. I could put it into CFLAGS, but that seems like a weird solution. In other news, I got weave to work on Windows at work with MingW by breaking my blitz()'s into smaller pieces. Otherwise I got stack overflow errors. Rob. 
-- 
-----------------------------
The Numeric Python EM Project
www.pythonemproject.com

From pearu at cens.ioc.ee Tue Apr 2 15:14:21 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 2 Apr 2002 23:14:21 +0300 (EEST)
Subject: [SciPy-dev] Required version of Numeric is 21.x
Message-ID: 

Hi!

I am using Numeric versions 20.2.1 and 20.3 (for Python 2.1 and 2.2, respectively) and I get the following error while importing scipy (latest CVS build):

>>> import scipy
Traceback (most recent call last):
  File "", line 1, in ?
  File "scipy/__init__.py", line 29, in ?
    from scipy_base import *
  File "scipy_base/__init__.py", line 5, in ?
    import limits
  File "scipy_base/limits.py", line 10, in ?
    from utility import toFloat32, toFloat64
  File "scipy_base/utility.py", line 30, in ?
    cast = {Numeric.Character: toChar,
AttributeError: 'Numeric' module has no attribute 'Character'

Travis, you mentioned earlier that such an error can only occur if one has an "unusual" Numeric installed. I checked the NumPy CVS and verified that the attribute 'Character' was first introduced in Numeric version 21.0b1.

So, shall we agree that SciPy requires NumPy version 21.x?

It has a short-term drawback in that those who use NumPy from their OS distribution cannot use SciPy for a while. For example, the latest NumPy in Debian woody is 20.3.

Regards,
	Pearu

PS: In order to build the latest SciPy from CVS, one needs to create a directory scipy_base/tests

From eric at scipy.org Tue Apr 2 14:33:28 2002
From: eric at scipy.org (eric)
Date: Tue, 2 Apr 2002 14:33:28 -0500
Subject: [SciPy-dev] Required version of Numeric is 21.x
References: Message-ID: <0ce101c1da7d$4412bb50$6b01a8c0@ericlaptop>

scipy_base is in major flux right now. I have a whole bunch of changes to check in once I get everything tested (on my windows box anyway). As for Character, if this is the only compatibility issue, I think we should work around it so that all (recent) versions of Numeric work. I'll look at this before I check things in.
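The workaround Eric has in mind could be as small as a getattr fallback in the cast table. A sketch, with a stub namespace standing in for a pre-21.x Numeric and illustrative cast functions (not scipy_base's real ones):

```python
import types

# Stub for a pre-21.x Numeric: it has Float64 but no Character attribute.
Numeric = types.SimpleNamespace(Float64='d')

toChar = str        # illustrative cast functions
toFloat64 = float

# getattr with a default keeps the table working on both 20.x and 21.x:
# on 21.x the real Numeric.Character typecode is used, on 20.x the
# literal 'c' typecode is supplied instead.
cast = {getattr(Numeric, 'Character', 'c'): toChar,
        Numeric.Float64: toFloat64}
```

With this, `import scipy` no longer raises AttributeError on a 20.x Numeric; the dictionary simply gets the same 'c' key that 21.x would have provided.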
eric ----- Original Message ----- From: "Pearu Peterson" To: Sent: Tuesday, April 02, 2002 3:14 PM Subject: [SciPy-dev] Required version of Numeric is 21.x > > Hi! > > I am using Numeric versions 20.2.1 and 20.3 (for Python 2.1 and 2.2, > respectively) and I get the following error while importing scipy > (latest CVS build): > > >>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > File "scipy/__init__.py", line 29, in ? > from scipy_base import * > File "scipy_base/__init__.py", line 5, in ? > import limits > File "scipy_base/limits.py", line 10, in ? > from utility import toFloat32, toFloat64 > File "scipy_base/utility.py", line 30, in ? > cast = {Numeric.Character: toChar, > AttributeError: 'Numeric' module has no attribute 'Character' > > Travis, you mentioned earlier that such error can only occure if one has > "unusual" Numeric installed. I checked from NumPy CVS and verified that > attribute 'Character' was introduced first time to Numeric version > 21.0b1. > > So, shall we agree that SciPy requires NumPy version 21.x? > > It has a short term drawback because those who use NumPy from their > OS distribution, cannot use SciPy for awhile. For example, latest NumPy in > debian woody is 20.3. > > Regards, > Pearu > > PS: In order to build the latest SciPy from CVS, one needs to create a > directory > scipy_base/tests > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Tue Apr 2 14:53:45 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 2 Apr 2002 14:53:45 -0500 (EST) Subject: [SciPy-dev] Required version of Numeric is 21.x In-Reply-To: Message-ID: > File "scipy_base/__init__.py", line 5, in ? > import limits > File "scipy_base/limits.py", line 10, in ? > from utility import toFloat32, toFloat64 > File "scipy_base/utility.py", line 30, in ? 
> cast = {Numeric.Character: toChar,
> AttributeError: 'Numeric' module has no attribute 'Character'
>
> Travis, you mentioned earlier that such an error can only occur if one has
> an "unusual" Numeric installed. I checked the NumPy CVS and verified that
> the attribute 'Character' was first introduced in Numeric version
> 21.0b1.

My mistake, I didn't realize there was such an obvious oversight.

> So, shall we agree that SciPy requires NumPy version 21.x?

No, let's just fix it; this is a minor issue. We don't have to require Numeric.Character.

> PS: In order to build the latest SciPy from CVS, one needs to create a
> directory
> scipy_base/tests

I've seen this too.

-Travis

From eric at scipy.org Tue Apr 2 17:35:31 2002
From: eric at scipy.org (eric)
Date: Tue, 2 Apr 2002 17:35:31 -0500
Subject: [SciPy-dev] major changes on CVS...
Message-ID: <0d1401c1da97$62fe89d0$6b01a8c0@ericlaptop>

scipy_base just went through a major change. I've checked it in so that Travis O. can begin to clean some things up after the changes. The CVS will be broken for builds for a little while until we clean things up. I'd suggest not updating until the all-clear signal is given if you rely on scipy for real work. I'll email when all seems to work.

eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From eric at scipy.org Tue Apr 2 23:25:16 2002
From: eric at scipy.org (eric)
Date: Tue, 2 Apr 2002 23:25:16 -0500
Subject: [SciPy-dev] CVS updates complete
Message-ID: <0d5c01c1dac7$8eb15d20$6b01a8c0@ericlaptop>

Well, not complete. There is still some cleanup/rearranging to do, but everything builds again. Also, all tests (304 of them) in scipy.test(level=10) pass for me on Windows with Python 2.1.1 and Numeric 21.b1.

A few notes.

* scipy_base is now split out. It is meant to be a (somewhat) minimal set of functions often used in linalg, optimize, integrate, and other low level functions. It also holds any changes to the standard Numeric module.
  The only extension module at the moment is fastumath.
* scipy_base has nearly 100% testing coverage (mostly thanks to Travis Oliphant). I think we need a few more tests, but it is very close to being a (reasonably) complete set of tests.
* The issue with Numeric.Character with NumPy pre-21.x should be fixed.
* We're temporarily using some Python-based IEEE floating point code by Tim Peters while a few of the cephes functions (isnan and isfinite) get moved into fastumath.

Please test and let us know of failures. Also, the contents of scipy_base and any desired additions or omissions from it are open for comment.

thanks,
eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From jochen at unc.edu Wed Apr 3 00:36:42 2002
From: jochen at unc.edu (Jochen Küpper)
Date: 03 Apr 2002 00:36:42 -0500
Subject: [SciPy-dev] CVS updates complete
In-Reply-To: <0d5c01c1dac7$8eb15d20$6b01a8c0@ericlaptop>
References: <0d5c01c1dac7$8eb15d20$6b01a8c0@ericlaptop>
Message-ID: 

On Tue, 2 Apr 2002 23:25:16 -0500 eric wrote:

eric> Well, not complete. There is still some cleanup/rearranging to do, but
eric> everything builds again. Also, all tests (304 of them) scipy.test(level=10)
eric> pass for me on Windows with Python 2.1.1 and Numeric 21.b1.

> cvs up -A -P -d
> python setup.py build
[...]
error: package directory '/home/jochen/source/numeric/scipy/scipy_base/tests' does not exist

Greetings,
Jochen

--
University of North Carolina        phone: +1-919-962-4403
Department of Chemistry             phone: +1-919-962-1579
Venable Hall CB#3290 (Kenan C148)   fax:   +1-919-843-6041
Chapel Hill, NC 27599, USA          GnuPG key: 44BCCD8E

From eric at scipy.org Wed Apr 3 08:24:57 2002
From: eric at scipy.org (eric)
Date: Wed, 3 Apr 2002 08:24:57 -0500
Subject: [SciPy-dev] CVS updates complete
References: <0d5c01c1dac7$8eb15d20$6b01a8c0@ericlaptop>
Message-ID: <0d9f01c1db12$f3091ec0$6b01a8c0@ericlaptop>

Ok. Try again.
I thought I had checked that directory in -- but obviously hadn't.

To test, I built from a clean CVS checkout, and it worked on Windows.

eric

----- Original Message -----
From: "Jochen Küpper"
To: 
Sent: Wednesday, April 03, 2002 12:36 AM
Subject: Re: [SciPy-dev] CVS updates complete

> On Tue, 2 Apr 2002 23:25:16 -0500 eric wrote:
>
> eric> Well, not complete. There is still some cleanup/rearranging to do, but
> eric> everything builds again. Also, all tests (304 of them) scipy.test(level=10)
> eric> pass for me on Windows with Python 2.1.1 and Numeric 21.b1.
>
> > cvs up -A -P -d
> > python setup.py build
> [...]
> error: package directory '/home/jochen/source/numeric/scipy/scipy_base/tests' does not exist
>
> Greetings,
> Jochen

From pearu at cens.ioc.ee Wed Apr 3 10:47:42 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 3 Apr 2002 18:47:42 +0300 (EEST)
Subject: [SciPy-dev] CVS updates complete
In-Reply-To: <0d9f01c1db12$f3091ec0$6b01a8c0@ericlaptop>
Message-ID: 

On Wed, 3 Apr 2002, eric wrote:
> Ok. Try again. I thought I had checked that directory in -- but
> obviously hadn't.
>
> To test, I built from a clean CVS checkout, and it worked on Windows.

Eric, have you tried to solve the weave issues on Python 2.2? A typical error from testing weave is given at the end of this message.

Under SuSE Linux, SciPy builds fine, but when importing I got

>>> import scipy
Traceback (most recent call last):
  File "", line 1, in ?
  File "scipy/__init__.py", line 56, in ?
    import xplt
  File "scipy/xplt/__init__.py", line 31, in ?
    from Mplot import *
  File "scipy/xplt/Mplot.py", line 57, in ?
hist = scipy.histogram AttributeError: 'module' object has no attribute 'histogram' As a fix, I changed line #56 in Mplot.py hist = scipy.histogram to from scipy.stats import histogram as hist (Note, I have not commited this fix to CVS, feel free to do so.) Pearu ====================================================================== ERROR: check_complex_var_in (test_scalar_spec.test_gcc_complex_converter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/tests/test_scalar_spec.py", line 213, in check_complex_var_in mod.compile(location = test_dir, compiler = self.compiler) File "scipy/weave/ext_tools.py", line 321, in compile verbose = verbose, **kw) File "scipy/weave/build_tools.py", line 194, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/peterson/opt/lib/python2.2/distutils/core.py", line 157, in setup raise SystemExit, "error: " + str(msg) CompileError: error: file '/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/cxxsupport.cxx' does not exist From eric at scipy.org Wed Apr 3 10:09:17 2002 From: eric at scipy.org (eric) Date: Wed, 3 Apr 2002 10:09:17 -0500 Subject: [SciPy-dev] CVS updates complete References: Message-ID: <0de501c1db21$86542630$6b01a8c0@ericlaptop> > > On Wed, 3 Apr 2002, eric wrote: > > > Ok. Try again. I thought I was checking that directory checked in -- but > > obviously wasn't. > > > > To test, I built from a clean CVS checkout, and it worked on windows. > > Eric, have you tried to solve weave issues on Python 2.2? A typical error > from testing weave is given at the end of this message. > > Under Suse linux, SciPy builds fine, but when importing I got > >>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > File "scipy/__init__.py", line 56, in ? > import xplt > File "scipy/xplt/__init__.py", line 31, in ? 
> from Mplot import *
> File "scipy/xplt/Mplot.py", line 57, in ?
> hist = scipy.histogram
> AttributeError: 'module' object has no attribute 'histogram'
>
> As a fix, I changed line #56 in Mplot.py
> hist = scipy.histogram
> to
> from scipy.stats import histogram as hist
>
> (Note, I have not committed this fix to CVS, feel free to do so.)

Thanks. Done.

> ======================================================================
> ERROR: check_complex_var_in (test_scalar_spec.test_gcc_complex_converter)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File
> "/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/tests/test_scalar_spec.py",
> line 213, in check_complex_var_in
> mod.compile(location = test_dir, compiler = self.compiler)
> File "scipy/weave/ext_tools.py", line 321, in compile
> verbose = verbose, **kw)
> File "scipy/weave/build_tools.py", line 194, in build_extension
> setup(name = module_name, ext_modules = [ext],verbose=verb)
> File "/home/peterson/opt/lib/python2.2/distutils/core.py", line 157, in
> setup
> raise SystemExit, "error: " + str(msg)
> CompileError: error: file
> '/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/cxxsupport.cxx'
> does not exist

Can you confirm that this file actually does not exist? That directory should look something like:

C:\Python21\scipy\weave\CXX>ls
Config.hxx               IndirectPythonInterface.cxx  cxx_extensions.cxx
Exception.hxx            IndirectPythonInterface.hxx  cxxextensions.c
Extensions.hxx           Objects.hxx                  cxxsupport.cxx

If it isn't there, something is wrong with weave's setup script (at least for Python 2.2) that causes it not to install data files correctly. I'll look at this when I look into the cc error on FreeBSD.
thanks, eric > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From pearu at scipy.org Wed Apr 3 11:26:04 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 10:26:04 -0600 (CST) Subject: [SciPy-dev] CVS updates complete In-Reply-To: <0de501c1db21$86542630$6b01a8c0@ericlaptop> Message-ID: Eric, On Wed, 3 Apr 2002, eric wrote: > Can you confirm that this file actually does not exist? I didn't install scipy and therefore these files in CXX were not there. Building does not copy the data files. So, it was my bad. But when I copy these files and run tests, I get two types of failures with weave (see below). These failures may be related to the fact that I am using g++ version 3.0.3 on this machine. And 'rand' seems to be undefined in scipy/tests/test_index_tricks.py (see deep below). I have not yet looked into the cause of this. Pearu Run: (10, 10) f In file included from /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/numinquire.h:60, from /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/array/expr.h:63, from /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/array.h:2469, from /home/peterson/.python22_compiled/48892/sc_5599df30197fe981824ad8ec934a784e0.cpp:3: /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:30: multiple definition of `enum std::float_round_style' /home/peterson/opt/include/g++-v3/bits/std_limits.h:866: previous definition here /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:31: conflicting types for `round_indeterminate' /home/peterson/opt/include/g++-v3/bits/std_limits.h:867: previous declaration as `std::float_round_style round_indeterminate'
/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:32: conflicting types for `round_toward_zero' /home/peterson/opt/include/g++-v3/bits/std_limits.h:868: previous declaration as `std::float_round_style round_toward_zero' /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:33: conflicting types for `round_to_nearest' /home/peterson/opt/include/g++-v3/bits/std_limits.h:869: previous declaration as `std::float_round_style round_to_nearest' /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:34: conflicting test printing a value:2 ../home/peterson/.python22_compiled/48892/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/peterson/.python22_compiled/48892/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp:418: no match for `Py::String& < int' operator /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:390: candidates are: bool Py::Object::operator<(const Py::Object&) const /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1433: bool Py::operator<(const Py::SeqBase::const_iterator&, const Py::SeqBase::const_iterator&) /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1426: bool Py::operator<(const Py::SeqBase::iterator&, const Py::SeqBase::iterator&) .................................................................................................................................................................E............................................................................................. 
====================================================================== ERROR: result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] ---------------------------------------------------------------------- Traceback (most recent call last): File "scipy/weave/tests/test_blitz_tools.py", line 154, in check_5point_avg_2d self.generic_2d(expr) File "scipy/weave/tests/test_blitz_tools.py", line 128, in generic_2d mod_location) File "scipy/weave/tests/test_blitz_tools.py", line 84, in generic_test blitz_tools.blitz(expr,arg_dict,{},verbose=0) #, File "scipy/weave/blitz_tools.py", line 72, in blitz type_converters = converters.blitz, File "scipy/weave/inline_tools.py", line 426, in compile_function verbose=verbose, **kw) File "scipy/weave/ext_tools.py", line 321, in compile verbose = verbose, **kw) File "scipy/weave/build_tools.py", line 194, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/peterson/opt/lib/python2.2/distutils/core.py", line 157, in setup raise SystemExit, "error: " + str(msg) CompileError: error: command 'gcc' failed with exit status 1 ====================================================================== ERROR: check_2d (test_index_tricks.test_concatenator) ---------------------------------------------------------------------- Traceback (most recent call last): File "scipy/tests/test_index_tricks.py", line 41, in check_2d b = rand(5,5) NameError: global name 'rand' is not defined ---------------------------------------------------------------------- Ran 304 tests in 197.076s FAILED (errors=2) From eric at scipy.org Wed Apr 3 10:31:32 2002 From: eric at scipy.org (eric) Date: Wed, 3 Apr 2002 10:31:32 -0500 Subject: [SciPy-dev] CVS updates complete References: <0de501c1db21$86542630$6b01a8c0@ericlaptop> Message-ID: <0df601c1db24$a2641c60$6b01a8c0@ericlaptop> > > ====================================================================== > > ERROR: check_complex_var_in (test_scalar_spec.test_gcc_complex_converter) > 
> ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > "/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/tests/test_scalar_spec.py", > > line 213, in check_complex_var_in > > mod.compile(location = test_dir, compiler = self.compiler) > > File "scipy/weave/ext_tools.py", line 321, in compile > > verbose = verbose, **kw) > > File "scipy/weave/build_tools.py", line 194, in build_extension > > setup(name = module_name, ext_modules = [ext],verbose=verb) > > File "/home/peterson/opt/lib/python2.2/distutils/core.py", line 157, in > > setup > > raise SystemExit, "error: " + str(msg) > > CompileError: error: file > > '/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/cxxsupport.cxx' > > does not exist > > Can you confirm that this file actually does not exist? That directory should > look something like: > > C:\Python21\scipy\weave\CXX>ls > Config.hxx IndirectPythonInterface.cxx cxx_extensions.cxx > Exception.hxx IndirectPythonInterface.hxx cxxextensions.c > Extensions.hxx Objects.hxx cxxsupport.cxx > > If it isn't there, something is wrong with weave's setup script (at least for > Python2.2) that causes it not to install data files correctly. I'll look at > this when I look into the cc error on FreeBSD. I just tried this on Windows with Python2.2, and I don't see the problem you are having. >>> import scipy.weave >>> scipy.weave.test(level=10) worked for a number of tests -- however I do get a seg-fault when the tests get to complex number routines. I think I'd rather be having your problem... :-| I also got a seg-fault when running the scipy_base tests, so it may not be a weave specific problem. Anyway, something unpleasant is happening.
eric From eric at scipy.org Wed Apr 3 10:38:39 2002 From: eric at scipy.org (eric) Date: Wed, 3 Apr 2002 10:38:39 -0500 Subject: [SciPy-dev] CVS updates complete References: Message-ID: <0dfc01c1db25$a0b3b7d0$6b01a8c0@ericlaptop> > > Can you confirm that this file actually does not exists? > > I didn't install scipy and therefore these files in CXX were not there. > Building does not copy the data files. So, it was my bad. > > But when I copy these files, and run tests, I get two types of failures > with weave (see below). These failures may be related to the fact that I > am using > > g++ version 3.0.3 > > in this machine. Yes. I think Fernando Perez also tried with gcc v 3.x and didn't have any luck. I'd love for this to work, but don't have time to get it going right now. It's a long shot, but maybe the blitz++ crowd has already fixed the issue and we can just update to a newer version of blitz... > > And there seems 'rand' undefined in scipy/tests/test_index_tricks.py > (see deep below). I have not looked what is the cause of this yet. Yeah, this one should be fixed now. 
> > Pearu > > > Run: (10, 10) f > In file included from > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/numinquire.h:60, > from > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/array/expr.h:63, > from > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/array.h:2469, > from > /home/peterson/.python22_compiled/48892/sc_5599df30197fe981824ad8ec934a784e0.cpp:3: > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:30: multiple > definition of `enum std::float_round_style' > /home/peterson/opt/include/g++-v3/bits/std_limits.h:866: previous > definition > here > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:31: conflicting > types for `round_indeterminate' > /home/peterson/opt/include/g++-v3/bits/std_limits.h:867: previous > declaration > as `std::float_round_style round_indeterminate' > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:32: conflicting > types for `round_toward_zero' > /home/peterson/opt/include/g++-v3/bits/std_limits.h:868: previous > declaration > as `std::float_round_style round_toward_zero' > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:33: conflicting > types for `round_to_nearest' > /home/peterson/opt/include/g++-v3/bits/std_limits.h:869: previous > declaration > as `std::float_round_style round_to_nearest' > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/blitz-20001213/blitz/limits-hack.h:34: conflicting > > > > > > test printing a value:2 > ../home/peterson/.python22_compiled/48892/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp: In > function `PyObject* compiled_func(PyObject*, PyObject*)': > /home/peterson/.python22_compiled/48892/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp:418: no > match for `Py::String& < int' operator > 
/home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:390: candidates > are: bool Py::Object::operator<(const Py::Object&) const > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1433: > bool Py::operator<(const > Py::SeqBase::const_iterator&, const > Py::SeqBase::const_iterator&) > /home/peterson/cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1426: > bool Py::operator<(const > Py::SeqBase::iterator&, > const Py::SeqBase::iterator&) > .................................................................................E...............................................................................................
====================================================================== > ERROR: check_2d (test_index_tricks.test_concatenator) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "scipy/tests/test_index_tricks.py", line 41, in check_2d > b = rand(5,5) > NameError: global name 'rand' is not defined > > ---------------------------------------------------------------------- > Ran 304 tests in 197.076s > > FAILED (errors=2) > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Wed Apr 3 15:01:46 2002 From: eric at scipy.org (eric) Date: Wed, 3 Apr 2002 15:01:46 -0500 Subject: [SciPy-dev] CVS updates complete References: Message-ID: <0e9c01c1db4a$62a92680$6b01a8c0@ericlaptop> > > Eric, > > On Wed, 3 Apr 2002, eric wrote: > > > Can you confirm that this file actually does not exists? > > I didn't install scipy and therefore these files in CXX were not there. > Building does not copy the data files. So, it was my bad. This brings up a good point though. Testing is a little difficult because you have to do a python setup.py install to get everything in the correct location. Should we alter scipy_distutils to copy files into the build directory when "python setup.py build" is used? That way you can just cd to the build/lib.xxx directory and try things out without installing. 
eric From heiko at hhenkelmann.de Wed Apr 3 16:13:37 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Wed, 3 Apr 2002 23:13:37 +0200 Subject: [SciPy-dev] problems to find ATLAS during build References: <0d5c01c1dac7$8eb15d20$6b01a8c0@ericlaptop> Message-ID: <004401c1db54$6c053160$5660e03e@arrow> Hello there, the following used to work before (last time with an update from Monday): file: build\generated_pyfs\fblas.pyf file: build\generated_pyfs\cblas.pyf atlas_info: NOT AVAILABLE Traceback (most recent call last): File "setup.py", line 127, in ? install_package() File "setup.py", line 94, in install_package config.extend([get_package_config(x,parent_package)for x in standard_packages]) File "setup.py", line 46, in get_package_config config = mod.configuration(parent) File "linalg2\setup_linalg2.py", line 29, in configuration raise AtlasNotFoundError,AtlasNotFoundError.__doc__ scipy_distutils.system_info.AtlasNotFoundError: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Either install them in /usr/local/lib/atlas or /usr/lib/atlas and retry setup.py. One can use also ATLAS environment variable to indicate the location of Atlas libraries. Thanx Heiko From pearu at scipy.org Wed Apr 3 16:12:39 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 15:12:39 -0600 (CST) Subject: [SciPy-dev] problems to find ATLAS during build In-Reply-To: <004401c1db54$6c053160$5660e03e@arrow> Message-ID: On Wed, 3 Apr 2002, Heiko Henkelmann wrote: > > Hello there, > > the following used to work before (last time with an update from Monday): > > > file: build\generated_pyfs\fblas.pyf > file: build\generated_pyfs\cblas.pyf > atlas_info: > NOT AVAILABLE system_info.py was enhanced quite a bit. Maybe some path patterns were missed during the changes. Could you give more information about your ATLAS installation? System? Location? Files in it?
Thanks, Pearu From heiko at hhenkelmann.de Wed Apr 3 16:32:02 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Wed, 3 Apr 2002 23:32:02 +0200 Subject: [SciPy-dev] problems to find ATLAS during build References: <0d5c01c1dac7$8eb15d20$6b01a8c0@ericlaptop> <004401c1db54$6c053160$5660e03e@arrow> Message-ID: <005001c1db56$fe9c2180$5660e03e@arrow> Sorry, I forgot to mention that I'm using Python 2.1 on a Windows box. ----- Original Message ----- From: "Heiko Henkelmann" To: Sent: Wednesday, April 03, 2002 11:13 PM Subject: [SciPy-dev] problems to find ATLAS during build > > Hello there, > > the following used to work before (last time with an update from Monday): > > > file: build\generated_pyfs\fblas.pyf > file: build\generated_pyfs\cblas.pyf > atlas_info: > NOT AVAILABLE > > Traceback (most recent call last): > File "setup.py", line 127, in ? > install_package() > File "setup.py", line 94, in install_package > config.extend([get_package_config(x,parent_package)for x in > standard_package > s]) > File "setup.py", line 46, in get_package_config > config = mod.configuration(parent) > File "linalg2\setup_linalg2.py", line 29, in configuration > raise AtlasNotFoundError,AtlasNotFoundError.__doc__ > scipy_distutils.system_info.AtlasNotFoundError: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > Either install them in /usr/local/lib/atlas or /usr/lib/atlas > and retry setup.py. One can use also ATLAS environment variable > to indicate the location of Atlas libraries. 
> > > Thanx > > Heiko > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Wed Apr 3 16:24:59 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 15:24:59 -0600 (CST) Subject: [SciPy-dev] CVS updates complete In-Reply-To: <0e9c01c1db4a$62a92680$6b01a8c0@ericlaptop> Message-ID: On Wed, 3 Apr 2002, eric wrote: > This brings up a good point though. Testing is a little difficult because you > have to do a python setup.py install to get everything in the correct location. > Should we alter scipy_distutils to copy files into the build directory when > "python setup.py build" is used? That way you can just cd to the build/lib.xxx > directory and try things out without installing. In general it makes sense not to copy data files when building. I think we do not need to alter scipy_distutils for testing. Doing, for example setup.py install --prefix=/tmp cd /tmp/lib/python2.2/site-packages python -c 'import scipy;scipy.test()' should give a work around for this problem. BTW, weave tests all pass with Python 2.2 and gcc version 2.95.4. 
Though I still get /home/users/pearu/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp: In function `struct PyObject * compiled_func(PyObject *, PyObject *)': /home/users/pearu/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp:418: no match for `Py::String & < int' /home/users/pearu/src_cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:390: candidates are: bool Py::Object::operator <(const Py::Object &) const /home/users/pearu/src_cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1433: bool Py::operator <(const Py::SeqBase::const_iterator &, const Py::SeqBase::const_iterator &) /home/users/pearu/src_cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1426: bool Py::operator <(const Py::SeqBase::iterator &, const Py::SeqBase::iterator &) .warning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations Pearu From pearu at scipy.org Wed Apr 3 16:33:48 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 15:33:48 -0600 (CST) Subject: [SciPy-dev] problems to find ATLAS during build In-Reply-To: <005001c1db56$fe9c2180$5660e03e@arrow> Message-ID: On Wed, 3 Apr 2002, Heiko Henkelmann wrote: > Sorry, I forgot to mention that I'm using Python 2.1 on a Windows box. Under Win32, atlas libraries are currently searched only in directories C:\ C:\ATLAS* C:\atlas* Should there be others? I have no idea if C:\Atlas is matched though. 
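Whether `C:\Atlas` is matched comes down to how case is handled when testing directory names against those patterns. A small sketch (the three patterns are the ones quoted in the message above; the function name and the use of `fnmatch` are illustrative, not system_info.py's actual code):

```python
import fnmatch

# Directory patterns quoted in the message above.
ATLAS_PATTERNS = ['C:\\', 'C:\\ATLAS*', 'C:\\atlas*']

def dir_matches(path):
    # fnmatch folds case on Windows but is case-sensitive on POSIX,
    # so whether 'C:\Atlas' matches depends on the platform -- the
    # ambiguity being pointed at here.
    return any(fnmatch.fnmatch(path, pat) for pat in ATLAS_PATTERNS)

print(dir_matches('C:\\atlas3.6.0'))  # True on any platform
```

On a win32 Python, `dir_matches('C:\\Atlas')` would also return True, since `os.path.normcase` lowercases both the path and the pattern there.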
Pearu From heiko at hhenkelmann.de Wed Apr 3 16:44:49 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Wed, 3 Apr 2002 23:44:49 +0200 Subject: [SciPy-dev] problems to find ATLAS during build Message-ID: <005d01c1db58$c7eeb740$5660e03e@arrow> And the content and location of my atlas directory: E:\USR\LIB\ATLAS MinGW>ls -l total 10308 -rw-r--r-- 1 henkelma 544 4587352 Mar 26 21:03 LIBATLAS.A -rw-r--r-- 1 henkelma 544 242394 Mar 26 20:43 LIBCBLAS.A -rw-r--r-- 1 henkelma 544 257842 Mar 26 21:06 libf77blas.a -rw-r--r-- 1 henkelma 544 5466190 Mar 27 20:01 liblapack.a ----- Original Message ----- From: "Heiko Henkelmann" To: Sent: Wednesday, April 03, 2002 11:32 PM Subject: Re: [SciPy-dev] problems to find ATLAS during build > Sorry, I forgot to mention that I'm using Python 2.1 on a Windows box. > > > ----- Original Message ----- > From: "Heiko Henkelmann" > To: > Sent: Wednesday, April 03, 2002 11:13 PM > Subject: [SciPy-dev] problems to find ATLAS during build > > > > > > Hello there, > > > > the following used to work before (last time with an update from Monday): > > > > > > file: build\generated_pyfs\fblas.pyf > > file: build\generated_pyfs\cblas.pyf > > atlas_info: > > NOT AVAILABLE > > > > Traceback (most recent call last): > > File "setup.py", line 127, in ? > > install_package() > > File "setup.py", line 94, in install_package > > config.extend([get_package_config(x,parent_package)for x in > > standard_package > > s]) > > File "setup.py", line 46, in get_package_config > > config = mod.configuration(parent) > > File "linalg2\setup_linalg2.py", line 29, in configuration > > raise AtlasNotFoundError,AtlasNotFoundError.__doc__ > > scipy_distutils.system_info.AtlasNotFoundError: > > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > > Either install them in /usr/local/lib/atlas or /usr/lib/atlas > > and retry setup.py. One can use also ATLAS environment variable > > to indicate the location of Atlas libraries. 
> > > > Thanx > > > > Heiko > > > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-dev > > > From pearu at scipy.org Wed Apr 3 16:46:08 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 15:46:08 -0600 (CST) Subject: [SciPy-dev] problems to find ATLAS during build In-Reply-To: <005d01c1db58$c7eeb740$5660e03e@arrow> Message-ID: On Wed, 3 Apr 2002, Heiko Henkelmann wrote: > And the content and location of my atlas directory: > > > E:\USR\LIB\ATLAS > MinGW>ls -l > total 10308 > -rw-r--r-- 1 henkelma 544 4587352 Mar 26 21:03 LIBATLAS.A > -rw-r--r-- 1 henkelma 544 242394 Mar 26 20:43 LIBCBLAS.A > -rw-r--r-- 1 henkelma 544 257842 Mar 26 21:06 libf77blas.a > -rw-r--r-- 1 henkelma 544 5466190 Mar 27 20:01 liblapack.a This should have been detected with the latest system_info.py from CVS. If not, what is sys.prefix? (It should be E:\USR) Try also renaming LIBATLAS.A and LIBCBLAS.A to lower case names. This might be the problem. Let us know how it works. Pearu From eric at scipy.org Wed Apr 3 15:48:39 2002 From: eric at scipy.org (eric) Date: Wed, 3 Apr 2002 15:48:39 -0500 Subject: [SciPy-dev] CVS updates complete References: Message-ID: <0ec601c1db50$ef1482d0$6b01a8c0@ericlaptop> ----- Original Message ----- From: To: Sent: Wednesday, April 03, 2002 4:24 PM Subject: Re: [SciPy-dev] CVS updates complete > > > On Wed, 3 Apr 2002, eric wrote: > > > This brings up a good point though. Testing is a little difficult because you > > have to do a python setup.py install to get everything in the correct location. > > Should we alter scipy_distutils to copy files into the build directory when > > "python setup.py build" is used? That way you can just cd to the build/lib.xxx > > directory and try things out without installing. > > In general it makes sense not to copy data files when building.
> I think we do not need to alter scipy_distutils for testing. Doing, for > example > > setup.py install --prefix=/tmp > cd /tmp/lib/python2.2/site-packages > python -c 'import scipy;scipy.test()' > > should give a work around for this problem. Ok. I'll try using this. > > BTW, weave tests all pass with Python 2.2 and gcc version 2.95.4. > Though I still get > > /home/users/pearu/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp: In > function `struct PyObject * compiled_func(PyObject *, PyObject *)': > /home/users/pearu/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp:418: no > match for `Py::String & < int' > /home/users/pearu/src_cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:390: candidates > are: bool Py::Object::operator <(const Py::Object &) const > /home/users/pearu/src_cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1433: bool > Py::operator <(const Py::SeqBase::const_iterator &, const > Py::SeqBase::const_iterator &) > /home/users/pearu/src_cvs/scipy/build/lib.linux-i686-2.2/scipy/weave/CXX/Objects.hxx:1426: bool > Py::operator <(const Py::SeqBase::iterator &, const > Py::SeqBase::iterator &) > .warning: specified build_dir '_bad_path_' does not exist or is or is not > writable. Trying default locations These are errors that are caught and warned about. I think all is working fine. So is Python2.2 passing all tests for you? Travis O. is having some problems on Mandrake as we speak with the underflow problem reported recently on the Numeric lists. Also, my last tests on windows with 2.2b1 (I think) caused seg-faults. We're both upgrading to 2.2.1c2 to see if that helps. eric > > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From cookedm at physics.mcmaster.ca Wed Apr 3 16:59:21 2002 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 03 Apr 2002 16:59:21 -0500 Subject: [SciPy-dev] problems to find ATLAS during build In-Reply-To: <005d01c1db58$c7eeb740$5660e03e@arrow> ("Heiko Henkelmann"'s message of "Wed, 3 Apr 2002 23:44:49 +0200") References: <005d01c1db58$c7eeb740$5660e03e@arrow> Message-ID: At some point, "Heiko Henkelmann" wrote: > And the content and location of my atlas directory: > > > E:\USR\LIB\ATLAS > MinGW>ls -l > total 10308 > -rw-r--r-- 1 henkelma 544 4587352 Mar 26 21:03 LIBATLAS.A > -rw-r--r-- 1 henkelma 544 242394 Mar 26 20:43 LIBCBLAS.A > -rw-r--r-- 1 henkelma 544 257842 Mar 26 21:06 libf77blas.a > -rw-r--r-- 1 henkelma 544 5466190 Mar 27 20:01 liblapack.a > Have a look at scipy_distutils/sample_site.cfg. Copy it to site.cfg (same directory) and edit it so that the [atlas] section looks like [atlas] lib_dir = E:\USR\LIB\ATLAS Then it will find your libraries. Actually, that's not going to work since I wrote the code to split that like a Unix path, so it'll look in E and \USR\LIB\ATLAS (which is not what you want). I've attached a patch to the current CVS that splits path lists using os.pathsep as the separator character (';' for win32 and ':' for POSIX). With this the above should work. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke |cookedm at mcmaster.ca -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From pearu at scipy.org Wed Apr 3 16:54:07 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 15:54:07 -0600 (CST) Subject: [SciPy-dev] CVS updates complete In-Reply-To: <0ec601c1db50$ef1482d0$6b01a8c0@ericlaptop> Message-ID: On Wed, 3 Apr 2002, eric wrote: > So is Python2.2 passing all tests for you? Yes, and with test level=10. > Travis O. is having some problems on > Mandrake as we speak with the underflow problem reported recently on the Numeric > lists. I don't have them. 
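The splitting change in Cooke's patch can be sketched in a few lines (the helper name is hypothetical; the real change lives in scipy_distutils' system_info machinery):

```python
import os

def split_path_list(value):
    # os.pathsep is ';' on win32 and ':' on POSIX, so a Windows entry
    # such as 'E:\USR\LIB\ATLAS' is no longer cut at the drive colon
    # the way a naive split(':') would cut it.
    return [d for d in value.split(os.pathsep) if d]

dirs = split_path_list(os.pathsep.join(['/usr/lib/atlas', '/usr/local/lib/atlas']))
print(dirs)  # ['/usr/lib/atlas', '/usr/local/lib/atlas']
```

With the old `':'`-based split on win32, `'E:\USR\LIB\ATLAS'` came back as `['E', '\USR\LIB\ATLAS']`, which is exactly the failure described above.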
> Also, my last tests on windows with 2.2b1 (I think) caused seg-faults. We're > both upgrading to 2.2.1c2 to see if that helps. > > I am using > > Python 2.2.1c1 (#1, Mar 15 2002, 08:13:47) > [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 > > on debian woody. > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Wed Apr 3 17:02:09 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 3 Apr 2002 16:02:09 -0600 (CST) Subject: [SciPy-dev] problems to find ATLAS during build In-Reply-To: Message-ID: On Wed, 3 Apr 2002, David M. Cooke wrote: > > Have a look at scipy_distutils/sample_site.cfg. Copy it to site.cfg > (same directory) and edit it so that the [atlas] section looks like > > [atlas] > lib_dir = E:\USR\LIB\ATLAS > > Then it will find your libraries. > > Actually, that's not going to work since I wrote the code to split > that like a Unix path, so it'll look in E and \USR\LIB\ATLAS (which is > not what you want). > > I've attached a patch to the current CVS that splits path lists using > os.pathsep as the separator character (';' for win32 and ':' for POSIX). > With this the above should work. Thanks for catching this. I have committed this fix (and also replaced other occurrences of ':' with os.pathsep) to CVS now. Get the latest. Pearu From eric at scipy.org Wed Apr 3 16:21:22 2002 From: eric at scipy.org (eric) Date: Wed, 3 Apr 2002 16:21:22 -0500 Subject: [SciPy-dev] CVS updates complete References: Message-ID: <0edf01c1db55$812e1ab0$6b01a8c0@ericlaptop> Good. Well, it is nice to have one success report. 2.2 on windows isn't happy either. Maybe we can just get everyone to switch to woody, and then I can go home. ;-) eric ----- Original Message ----- From: To: Sent: Wednesday, April 03, 2002 4:54 PM Subject: Re: [SciPy-dev] CVS updates complete > > > On Wed, 3 Apr 2002, eric wrote: > > > So is Python2.2 passing all tests for you? > > Yes, and with test level=10. > > > Travis O. 
is having some problems on > > Mandrake as we speak with the underflow problem reported recently on the Numeric > > lists. > > I don't have them. > > > Also, my last tests on windows with 2.2b1 (I think) caused seg-faults. We're > > both upgrading to 2.2.1c2 to see if that helps. > > I am using > > Python 2.2.1c1 (#1, Mar 15 2002, 08:13:47) > [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 > > on debian woody. > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant.travis at ieee.org Thu Apr 4 00:38:49 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 3 Apr 2002 22:38:49 -0700 Subject: [SciPy-dev] lu_solve problem -- f2py related In-Reply-To: References: Message-ID: Pearu, I discovered an interesting behavior in f2py's wrapping of clapack_dgetrs. This is the function that solves a system of equations. It results in wrong answers when the argument b is a one-dimensional array that is not contiguous and the functions clapack_dgetrs (or clapack_gesv) are called. This may have implications for other functions as well, I'm not sure. The easiest way to show the problem is to look at linalg2.solve for the following problem. A = rand(5,5) b1 = rand(5,2) x1 = linalg2.solve(A,b1) x2 = linalg2.solve(A,b1[:,0]) x3 = linalg2.solve(A,b1[:,0].copy()) print x1[:,0] print x2 print x3 Notice that x2 is not correct. The problem I think is the fact that in array_from_pyobj there is a check for (rank > 1) arrays before lazy-transposes are done. I think a check should be made for rank-1 arrays as well, to handle this case. For now, I've just made a hack in the solve routines, to check for this case. I'd love to hear your perspective on this problem, though, Pearu. 
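The failing case above hinges on `b1[:,0]` being a strided view rather than a contiguous buffer. A minimal sketch with present-day NumPy standing in for the Numeric arrays of the era (linalg2 itself is not needed to see the layout difference):

```python
import numpy as np

b1 = np.random.rand(5, 2)
view = b1[:, 0]         # 1-D view that strides across rows
copy = b1[:, 0].copy()  # same values in a fresh, contiguous buffer

# A wrapper that assumes a contiguous 1-D input reads the wrong memory
# for `view` but works for `copy` -- matching x2 vs. x3 above.
assert not view.flags['C_CONTIGUOUS']
assert copy.flags['C_CONTIGUOUS']
assert np.allclose(view, copy)  # same values, different memory layout
```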
-Travis From oliphant.travis at ieee.org Thu Apr 4 01:05:30 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 3 Apr 2002 23:05:30 -0700 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: Message-ID: Thanks to Pearu for these benchmarks. I just ran some linalg2 benchmarks on my ATHLON 1.1 GHz processor (gcc-2.96, Mandrake 8.2). It would be interesting to see what others are getting. >>> linalg2.basic.test() ......................EEEEEEEEEE

Finding matrix determinant
==================================
     |   contiguous    | non-contiguous
----------------------------------------------
size | scipy | Numeric | scipy | Numeric
  20 |  0.26 |    0.50 |  0.25 |    0.58   (secs for 2000 calls)
 100 |  0.45 |    2.01 |  0.43 |    2.30   (secs for 300 calls)
 500 |  0.50 |    2.92 |  0.48 |    3.06   (secs for 4 calls)
.
Solving system of linear equations
==================================
     |   contiguous    | non-contiguous
----------------------------------------------
size | scipy | Numeric | scipy | Numeric
  20 |  0.40 |    0.43 |  0.40 |    0.50   (secs for 2000 calls)
 100 |  0.50 |    1.79 |  0.51 |    2.25   (secs for 300 calls)
 500 |  0.49 |    2.98 |  0.48 |    3.15   (secs for 4 calls)
.
Finding matrix inverse
==================================
     |   contiguous    | non-contiguous
----------------------------------------------
size | scipy | Numeric | scipy | Numeric
  20 |  0.52 |    0.89 |  0.51 |    0.96   (secs for 2000 calls)
 100 |  1.18 |    5.73 |  1.18 |    6.01   (secs for 300 calls)
 500 |  1.31 |   15.34 |  1.30 |   15.82   (secs for 4 calls)
.

Look at that speed up.... fantastic. -Travis From pearu at scipy.org Thu Apr 4 03:15:16 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 4 Apr 2002 02:15:16 -0600 (CST) Subject: [SciPy-dev] Re: lu_solve problem -- f2py related In-Reply-To: Message-ID: Hi Travis, On Wed, 3 Apr 2002, Travis Oliphant wrote: > The easiest way to show the problem is to look at linalg2.solve for the > following problem. 
> > A = rand(5,5) > b1 = rand(5,2) > > x1 = linalg2.solve(A,b1) > x2 = linalg2.solve(A,b1[:,0]) > x3 = linalg2.solve(A,b1[:,0].copy()) > > print x1[:,0] > print x2 > print x3 Here is what I get with f2py 2.13.175-1239 and ..-1242: >>> from scipy_base.testing import rand >>> import linalg2 >>> A = rand(5,5) >>> b1 = rand(5,2) >>> x1 = linalg2.solve(A,b1) >>> x2 = linalg2.solve(A,b1[:,0]) >>> x3 = linalg2.solve(A,b1[:,0].copy()) >>> print x1[:,0] [ 0.71772198 1.32782188 0.39482474 -3.19123014 1.71486623] >>> print x2 [ 0.71772198 1.32782188 0.39482474 -3.19123014 1.71486623] >>> print x3 [ 0.71772198 1.32782188 0.39482474 -3.19123014 1.71486623] Looks good to me. > Notice that x2 is not correct. The problem I think is the fact that in > array_from_pyobj there is a check for (rank > 1) arrays before > lazy-transposes are done. I think a check should be made for rank-1 arrays > as well, to handle this case. What version of f2py are you using? It might be an old and already fixed problem. Pearu From pearu at scipy.org Thu Apr 4 03:28:21 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 4 Apr 2002 02:28:21 -0600 (CST) Subject: [SciPy-dev] Re: lu_solve problem -- f2py related In-Reply-To: Message-ID: Travis, ignore my previous success report. I was getting it with your fix. I'll look what is the problem with f2py.. Pearu From pearu at scipy.org Thu Apr 4 05:10:20 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 4 Apr 2002 04:10:20 -0600 (CST) Subject: [SciPy-dev] Solved: Re: lu_solve problem -- f2py related In-Reply-To: Message-ID: Hi! On Wed, 3 Apr 2002, Travis Oliphant wrote: > For, now I've just made a hack in the solve routines, to check for this case. After you upgrade f2py to 2.13.175-1250, you can undo these hacks. > I'd love to hear your perspective on this problem, though, Pearu. 
It was a bug in copy_ND_array and occurs only for one-dimensional and non-contiguous arrays (there were no calculations for instep and outstep, they were just set to 1 and 1, respectively). I have also made a new snapshot of f2py available with a bug fix. Travis, thanks for pointing out this bug. Pearu From pearu at scipy.org Thu Apr 4 15:11:03 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 4 Apr 2002 14:11:03 -0600 (CST) Subject: [SciPy-dev] Testing scipy_base Message-ID: Hi, While running scipy_base tests, I get ====================================================================== ERROR: check_complex1 (test_type_check.test_isnan) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.1/site-packages/scipy_base/tests/test_type_check.py", line 92, in check_complex1 assert(isnan(array(0+0j)/0.) == 1) ValueError: math domain error ====================================================================== ERROR: check_ind (test_type_check.test_isnan) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.1/site-packages/scipy_base/tests/test_type_check.py", line 84, in check_ind assert(isnan(array((0.,))/0.) == 1) OverflowError: math range error I am using Python 2.1.2 (#1, Apr 1 2002, 18:23:14) [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 and >>> Numeric.__version__ '20.2.1' Do you get these errors with the latest Numeric 21.0? On Windows? Any other ideas? Pearu From eric at scipy.org Thu Apr 4 15:07:05 2002 From: eric at scipy.org (eric) Date: Thu, 4 Apr 2002 15:07:05 -0500 Subject: [SciPy-dev] Testing scipy_base References: Message-ID: <115601c1dc14$4afb3bc0$6b01a8c0@ericlaptop> Hey Pearu, All tests pass for me on windows 2000, Python2.2.1c2 with Numeric 21.0. I haven't tested today with Python2.1.1, but all tests were passing yesterday.
eric > > Hi, > > While running scipy_base tests, I get > > ====================================================================== > ERROR: check_complex1 (test_type_check.test_isnan) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.1/site-packages/scipy_base/tests/test_type_check.py", > line 92, in check_complex1 > assert(isnan(array(0+0j)/0.) == 1) > ValueError: math domain error > ====================================================================== > ERROR: check_ind (test_type_check.test_isnan) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.1/site-packages/scipy_base/tests/test_type_check.py", > line 84, in check_ind > assert(isnan(array((0.,))/0.) == 1) > OverflowError: math range error > > > > I am using > > Python 2.1.2 (#1, Apr 1 2002, 18:23:14) > [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 > > and > > >>> Numeric.__version__ > '20.2.1' > > > Do you get these errors with the latest Numeric 21.0? On Windows? > Any other ideas? > > Pearu > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Thu Apr 4 15:25:22 2002 From: eric at scipy.org (eric) Date: Thu, 4 Apr 2002 15:25:22 -0500 Subject: [SciPy-dev] issues with distributing Numeric separately on windows Message-ID: <115e01c1dc16$d8c56190$6b01a8c0@ericlaptop> Group, When upgrading to Python2.2 I re-discovered a windows specific issue that Travis O. and I fought with before releasing SciPy-0.1. It has to do with Numeric and fastumath. These guys work very closely together. On windows, the pre-compiled binaries available from source-forge are compiled with MSVC compiler. SciPy, and hence fastumath, is compiled with gcc. 
This works fine except for calls from Numeric C extensions into the C code of fastumath that return complex numbers. There is a binary incompatibility between MSVC and gcc in their structure layout that results in segmentation faults on complex number operations. So: >>> from scipy_base import * >>> array(1+1j)/1. will cause a seg fault if you use the standard Numeric module with SciPy. Some people balked when SciPy was distributed with Numeric last time. We've endeavored to separate things out this time so that that wasn't necessary. However, this is pretty much a show stopper on windows -- and was a major reason we bundled them last time. I think we'll have to distribute a gcc compiled version of Numeric with SciPy on windows. This will get installed over (will clobber) your old installation of Numeric in the process. It does not affect Numeric other than making it version 21.0. By this I mean, the gcc and MSVC versions will work identically and work with all your extension modules. If anyone has run into the binary incompatibility issue before and discovered a solution, I'd love to hear it. I found the --native-struct flag for mingw, but it didn't seem to help. One other solution is to get SciPy working with MSVC. I think it might be possible (though there are some big issues), but it isn't a high priority. So, let me know if you have a solution, eric -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 From eric at scipy.org Thu Apr 4 15:39:11 2002 From: eric at scipy.org (eric) Date: Thu, 4 Apr 2002 15:39:11 -0500 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week Message-ID: <117a01c1dc18$c70390b0$6b01a8c0@ericlaptop> Group, It doesn't look like SciPy-0.2 will roll out tomorrow. There are still some outstanding issues. 1. setup.py needs some additions to bundle Numeric into exe files when building a windows distribution. 2. Where are we on the linalg to linalg2 transition? Has everything been moved over?
I haven't looked at the cblas/fblas wrappers in a while. Also, I'm not up to speed on all the work Travis and Pearu have done. Is it release ready? 3. Documentation and installation guides need to be updated. There have been substantial updates and rearrangements that we should document. I haven't looked at this yet. 4. Source and binary distributions need to be built and tested. I'd like to have Windows, Mac, Debian, and Linux RH rpms available. I can do the Windows and maybe the rpms on a RH 7.x box. I know binary RPMs have a lot of issues, so I guess we should have a source RPM also? 5. Fix weave for FreeBSD. 6. Travis, is the stats module finished? 2, 3, and 4 are the bulk of the work. If linalg2 is about ready to roll, then mid-next week should be possible. eric -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 From jochen at unc.edu Thu Apr 4 21:39:26 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 04 Apr 2002 21:39:26 -0500 Subject: [SciPy-dev] Linalg2 benchmarks Message-ID: Travis showed really impressive numbers for linalg2. Here is what I get -- less impressive, still ok? ,---- | >>> scipy.linalg2.basic.test() | ................................ | Finding matrix determinant | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.35 | 0.75 | 0.35 | 0.87 (secs for 2000 calls) | 100 | 0.82 | 1.43 | 0.83 | 1.84 (secs for 300 calls) | 500 | 0.89 | 1.09 | 0.87 | 1.29 (secs for 4 calls) | . | Solving system of linear equations | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.53 | 0.65 | 0.53 | 0.77 (secs for 2000 calls) | 100 | 0.86 | 1.13 | 0.86 | 1.70 (secs for 300 calls) | 500 | 0.87 | 0.95 | 0.86 | 1.15 (secs for 4 calls) | .
| Finding matrix inverse | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.76 | 1.24 | 0.76 | 1.35 (secs for 2000 calls) | 100 | 2.20 | 3.66 | 2.20 | 4.07 (secs for 300 calls) | 500 | 2.32 | 3.31 | 2.30 | 3.51 (secs for 4 calls) | . | ---------------------------------------------------------------------- | Ran 35 tests in 50.385s | | OK `---- These numbers are repeatably the same (+-2 in the 1/100 s). I don't know what the problem is, but it looks as scipy scales worse than Numeric? Interesting to note is also that here Numeric seems to be much faster, and scale much better, than for Travis -- who has the faster CPU (1.1GHz Athlon vs. 800MHz PIII)! This was run on a dual CPU PIII/800, RedHat-7.0 + gcc-3.0.4 + python-2.2.1c1. ATLAS was compiled with the system kgcc, which is egcs-1.1.2. scipy is compiled with gcc-3.0.4, compiler options ,---- | -march=i686 -O3 -unroll_loops -fPIC `---- The machine was busy with a single-thread (single cpu) job, but plenty of physical RAM was free (about half of 512 MB). Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From eric at scipy.org Thu Apr 4 20:49:28 2002 From: eric at scipy.org (eric) Date: Thu, 4 Apr 2002 20:49:28 -0500 Subject: [SciPy-dev] Linalg2 benchmarks References: Message-ID: <121801c1dc44$1fad1c10$6b01a8c0@ericlaptop> Hey Jochen, > ATLAS was compiled with the system kgcc, which is egcs-1.1.2. > scipy is compiled with gcc-3.0.4, compiler options > ,---- > | -march=i686 -O3 -unroll_loops -fPIC > `---- I think gcc 3.0 may be the problem. R. Clint Whaley, the head developer on ATLAS, regularly harps on how poor gcc3 does with ATLAS. This is definitely a bummer. 
I think he has even posted a bug report about some specific issues and basically been told that it wasn't gonna get fixed. He recommends using pre-3.0 compilers for ATLAS. Here is a note that discusses it. http://www.cs.utk.edu/~rwhaley/ATLAS/gcc30.html eric > > > The machine was busy with a single-thread (single cpu) job, but plenty > of physical RAM was free (about half of 512 MB). > > Greetings, > Jochen > -- > University of North Carolina phone: +1-919-962-4403 > Department of Chemistry phone: +1-919-962-1579 > Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 > Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From jochen at unc.edu Thu Apr 4 22:32:54 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 04 Apr 2002 22:32:54 -0500 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: <121801c1dc44$1fad1c10$6b01a8c0@ericlaptop> References: <121801c1dc44$1fad1c10$6b01a8c0@ericlaptop> Message-ID: On Thu, 4 Apr 2002 20:49:28 -0500 eric wrote: >> ATLAS was compiled with the system kgcc, which is egcs-1.1.2. ^^^^ ^^^^^^^^^^ eric> I think gcc 3.0 may be the problem. /vide supra/ Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From oliphant.travis at ieee.org Fri Apr 5 00:01:21 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 4 Apr 2002 22:01:21 -0700 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: <121801c1dc44$1fad1c10$6b01a8c0@ericlaptop> Message-ID: On Thursday 04 April 2002 08:32 pm, you wrote: > On Thu, 4 Apr 2002 20:49:28 -0500 eric wrote: > >> ATLAS was compiled with the system kgcc, which is egcs-1.1.2. > > ^^^^ ^^^^^^^^^^ > > eric> I think gcc 3.0 may be the problem.
> The other question is are you using ATLAS for Numeric as well? How was your Numeric installed. Which version of Numeric do you have? Those numbers mean your Numeric must be using a more optimized version of lapack, anyway. -Travis From oliphant.travis at ieee.org Fri Apr 5 00:13:05 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 4 Apr 2002 22:13:05 -0700 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: Message-ID: > > The other question is are you using ATLAS for Numeric as well? How was > your Numeric installed. Which version of Numeric do you have? > > Those numbers mean your Numeric must be using a more optimized version of > lapack, anyway. For example. Here is the comparison when I link Numeric against ATLAS -- not the default install configuration (if you install binaries you might be getting an optimized Numeric). >>> import scipy.linalg2 >>> scipy.linalg2.basic.test() ................................ Finding matrix determinant ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.25 | 0.49 | 0.24 | 0.56 (secs for 2000 calls) 100 | 0.40 | 0.78 | 0.39 | 1.05 (secs for 300 calls) 500 | 0.46 | 0.62 | 0.46 | 0.78 (secs for 4 calls) . Solving system of linear equations ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.39 | 0.43 | 0.38 | 0.49 (secs for 2000 calls) 100 | 0.44 | 0.62 | 0.44 | 0.98 (secs for 300 calls) 500 | 0.46 | 0.53 | 0.46 | 0.69 (secs for 4 calls) . Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.51 | 0.74 | 0.50 | 0.80 (secs for 2000 calls) 100 | 1.08 | 1.92 | 1.09 | 2.28 (secs for 300 calls) 500 | 1.24 | 1.65 | 1.24 | 1.81 (secs for 4 calls) . 
---------------------------------------------------------------------- Ran 35 tests in 27.988s OK The speed up isn't nearly so impressive as before and I also see the relative improvement of Numeric in going from 100 to 500 size matrices over scipy's modest decrease (SciPy's f2py-optimized interface is still faster though) --- nearly twice as fast for small matrices. -Travis From jochen at jochen-kuepper.de Fri Apr 5 00:50:00 2002 From: jochen at jochen-kuepper.de (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 05 Apr 2002 00:50:00 -0500 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: Message-ID: One more. This is an AMD Duron 800 MHz, 128 MB. RedHat 7.1 + gcc-3.0.4 20020120 (prerelease). This time I am not sure what compiler was used for ATLAS, probably a "bad" one. Also this is an older ATLAS. ,---- | Python 2.2.1c1 (#1, Mar 20 2002, 22:13:20) | [GCC 3.0.4 20020120 (prerelease)] on linux2 | Type "help", "copyright", "credits" or "license" for more information. | >>> import scipy | >>> import scipy.linalg2 | exceptions.ImportError: /usr/local/lib/python2.2/site-packages/scipy/linalg2/clapack2.so: undefined symbol: clapack_sgetri | >>> scipy.linalg2.basic.test() | ................................ | Finding matrix determinant | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.77 | 1.61 | 0.75 | 1.81 (secs for 2000 calls) | 100 | 1.63 | 3.25 | 1.63 | 4.33 (secs for 300 calls) | 500 | 2.49 | 2.39 | 1.88 | 2.82 (secs for 4 calls) | . | Solving system of linear equations | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 1.07 | 1.39 | 1.12 | 1.61 (secs for 2000 calls) | 100 | 1.72 | 2.48 | 1.71 | 3.89 (secs for 300 calls) | 500 | 2.06 | 2.22 | 1.96 | 2.60 (secs for 4 calls) | .
| Finding matrix inverse | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 1.71 | 2.58 | 1.81 | 2.75 (secs for 2000 calls) | 100 | 4.74 | 7.09 | 4.78 | 7.99 (secs for 300 calls) | 500 | 5.44 | 7.24 | 5.57 | 7.55 (secs for 4 calls) | . | ---------------------------------------------------------------------- | Ran 35 tests in 109.383s | | OK | >>> `---- Greetings, Jochen -- Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de Libert?, ?galit?, Fraternit? GnuPG key: 44BCCD8E Sex, drugs and rock-n-roll From eric at scipy.org Fri Apr 5 00:00:55 2002 From: eric at scipy.org (eric) Date: Fri, 5 Apr 2002 00:00:55 -0500 Subject: [SciPy-dev] Linalg2 benchmarks References: Message-ID: <127201c1dc5e$de4f9ac0$6b01a8c0@ericlaptop> W2K, 850MHz PII laptop. Interesting to see that the 1.1 GHz Athlon is a factor of 2 faster. Finding matrix determinant ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.30 | 0.56 | 0.30 | 0.63 (secs for 2000 calls) 100 | 0.72 | 2.59 | 0.74 | 2.78 (secs for 300 calls) 500 | 1.06 | 3.77 | 0.98 | 3.93 (secs for 4 calls) . Solving system of linear equations ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.50 | 0.47 | 0.48 | 0.53 (secs for 2000 calls) 100 | 0.89 | 2.45 | 0.82 | 2.69 (secs for 300 calls) 500 | 1.11 | 3.67 | 0.95 | 3.85 (secs for 4 calls) . 
Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.65 | 1.04 | 0.67 | 1.11 (secs for 2000 calls) 100 | 2.16 | 8.38 | 2.13 | 8.61 (secs for 300 calls) 500 | 2.62 | 22.03 | 2.60 | 22.29 (secs for 4 calls) ----- Original Message ----- From: "Travis Oliphant" To: Sent: Thursday, April 04, 2002 1:05 AM Subject: [SciPy-dev] Linalg2 benchmarks > > Thanks to Pearu for these benchmarks. > > I just ran some linalg2 benchmarks on my ATHLON 1.1 GHz processor (gcc-2.96, > Mandrake -8.2). It would be interesting to see what others are getting. > > >>> linalg2.basic.test() > ......................EEEEEEEEEE > Finding matrix determinant > ================================== > | contiguous | non-contiguous > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 0.26 | 0.50 | 0.25 | 0.58 (secs for 2000 calls) > 100 | 0.45 | 2.01 | 0.43 | 2.30 (secs for 300 calls) > 500 | 0.50 | 2.92 | 0.48 | 3.06 (secs for 4 calls) > . > Solving system of linear equations > ================================== > | contiguous | non-contiguous > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 0.40 | 0.43 | 0.40 | 0.50 (secs for 2000 calls) > 100 | 0.50 | 1.79 | 0.51 | 2.25 (secs for 300 calls) > 500 | 0.49 | 2.98 | 0.48 | 3.15 (secs for 4 calls) > . > Finding matrix inverse > ================================== > | contiguous | non-contiguous > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 0.52 | 0.89 | 0.51 | 0.96 (secs for 2000 calls) > 100 | 1.18 | 5.73 | 1.18 | 6.01 (secs for 300 calls) > 500 | 1.31 | 15.34 | 1.30 | 15.82 (secs for 4 calls) > . > > Look at that speed up.... fantastic. 
> > -Travis > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From jochen at jochen-kuepper.de Fri Apr 5 01:19:07 2002 From: jochen at jochen-kuepper.de (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 05 Apr 2002 01:19:07 -0500 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: <121801c1dc44$1fad1c10$6b01a8c0@ericlaptop> Message-ID: On Thu, 4 Apr 2002 22:01:21 -0700 Travis Oliphant wrote: Travis> The other question is are you using ATLAS for Numeric as well? Yes, sure. Anybody on this list not linking Numeric against lapack/blas? Travis> Which version of Numeric do you have? usually latest cvs Travis> Those numbers mean your Numeric must be using a more optimized Travis> version of lapack, anyway. Numeric is linked against lapack and blas. And surely I don't use the Fortran reference implementation of BLAS... For the PIII results I am not sure whether Numeric might be linked against an older ATLAS version. I'll run the test tomorrow, making sure I use the same lapack and blas for Numeric and scipy. For the Duron results below scipy and Numeric use the same ATLAS-3.3.13 (exactly the same, gcc-3 compiled IIRC). ,---- | >>> scipy.linalg2.basic.test() | ................................ | Finding matrix determinant | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.39 | 0.78 | 0.38 | 0.90 (secs for 2000 calls) | 100 | 0.83 | 1.59 | 0.80 | 2.12 (secs for 300 calls) | 500 | 0.92 | 1.14 | 0.93 | 1.34 (secs for 4 calls) | . 
| Solving system of linear equations | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.56 | 0.67 | 0.54 | 0.75 (secs for 2000 calls) | 100 | 0.88 | 1.16 | 0.88 | 1.88 (secs for 300 calls) | 500 | 0.99 | 1.06 | 0.99 | 1.27 (secs for 4 calls) | . | Finding matrix inverse | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.90 | 1.23 | 0.90 | 1.35 (secs for 2000 calls) | 100 | 2.32 | 3.46 | 2.30 | 3.95 (secs for 300 calls) | 500 | 2.68 | 3.52 | 2.67 | 3.72 (secs for 4 calls) | . | ---------------------------------------------------------------------- | Ran 35 tests in 53.275s | | OK `---- Greetings, Jochen -- Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de Liberté, Égalité, Fraternité GnuPG key: 44BCCD8E Sex, drugs and rock-n-roll From pearu at scipy.org Fri Apr 5 10:01:21 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Fri, 5 Apr 2002 09:01:21 -0600 (CST) Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: Message-ID: On 4 Apr 2002, Jochen Küpper wrote: > Travis showed really impressive numbers for linalg2. Here is what I > get -- less impressive, still ok? Yes, it is still ok. > I don't know what the problem is, but it looks as scipy scales worse > than Numeric? I don't understand how you can conclude that, but if scipy and Numeric use the same ATLAS (the Jochen case) then: 1) for n->oo, where n is the size of the problem, there would be no difference in speeds as the hard computation is done by the same ATLAS routine. 2) for n fixed but repeating the computation c times, then for c->oo you would find that scipy is 2-3 times faster than Numeric. This speed up is gained only because of the f2py generated interface between Python and the ATLAS routines that scipy uses.
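[The two regimes above — per-call overhead dominating for small fixed n, the LAPACK kernel dominating as n grows — can be eyeballed with a rough timing sketch. This is not from the original thread; modern NumPy stands in for Numeric, and the sizes mirror the benchmark tables:]

```python
import timeit
import numpy as np

def bench(n, calls):
    """Time n-by-n solves with contiguous vs. non-contiguous 1-D right-hand sides."""
    rng = np.random.default_rng(0)
    A = rng.random((n, n))
    b = rng.random((n, 2))
    contig = timeit.timeit(lambda: np.linalg.solve(A, b[:, 0].copy()), number=calls)
    strided = timeit.timeit(lambda: np.linalg.solve(A, b[:, 0]), number=calls)
    return contig, strided

# For small n the fixed per-call cost dominates; for large n the solve itself
# dominates, so the contiguous/non-contiguous gap (one extra memcpy) stays small.
for n, calls in [(20, 2000), (100, 300), (500, 4)]:
    c, s = bench(n, calls)
    print(f"size {n:4d}: contiguous {c:.3f}s  non-contiguous {s:.3f}s  ({calls} calls)")
```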
BUT, if Numeric is linked with its lapack_lite (the Travis case), then you will have huge speedups (approx. 10 times) mainly because scipy uses highly optimized ATLAS routines. So, I don't find these testing results strange as you commented. What I find surprising is that there is very small difference in the results for contiguous and non-contiguous input data. This shows that memory copy is a really cheap operation and one should not worry too much if the input data is non-contiguous, at least, if you have plenty of memory in your computer. More surprising is that sometimes with non-contiguous input data the calculation is actually faster(!) and not slower as I would expect. I have no explanation for this one. Regards, Pearu From pearu at scipy.org Fri Apr 5 10:33:42 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Fri, 5 Apr 2002 09:33:42 -0600 (CST) Subject: [SciPy-dev] issues with distributing Numeric separately on windows In-Reply-To: <115e01c1dc16$d8c56190$6b01a8c0@ericlaptop> Message-ID: Hi, On Thu, 4 Apr 2002, eric wrote: > Some people balked when SciPy was distributed with Numeric last time. We've > endeavored to separate things out this time so that that wasn't necessary. > However, this is pretty much a show stopper on windows -- and was major reason > we bundled them last time. I think we'll have to distribute a gcc compiled > version of Numeric with SciPy on windows. This will get installed over (will > clobber) your old installation of Numeric in the process. It does not affect > Numeric's other than making it version 21.0. By this I mean, the gcc and MSVC > versions will work identically and work with all your extension modules. Personally and in general, I would not feel good if some package overwrites already installed version of it. It might happen that the installed version of the package is newer and as a result the overwrite would break this package (as it may contain new features, bug fixes). 
Would it be possible if NumPy team would provide also gcc compiled binaries for windows? And to make clear for Windows installers that SciPy will work only with gcc compiled Numeric. May be it is possible to check that the correct Numeric is used already in the SciPy setup.py script and it would raise an exception if Numeric is not compiled with gcc. Pearu From steven.robbins at videotron.ca Fri Apr 5 10:53:54 2002 From: steven.robbins at videotron.ca (Steve M. Robbins) Date: Fri, 5 Apr 2002 10:53:54 -0500 Subject: [SciPy-user] Re: [SciPy-dev] issues with distributing Numeric separately on windows In-Reply-To: References: <115e01c1dc16$d8c56190$6b01a8c0@ericlaptop> Message-ID: <20020405155354.GK1441@nyongwa.montreal.qc.ca> On Fri, Apr 05, 2002 at 09:33:42AM -0600, pearu at scipy.org wrote: > Personally and in general, I would not feel good if some package > overwrites already installed version of it. Yes, I agree. > It might happen that the > installed version of the package is newer and as a result the overwrite > would break this package (as it may contain new features, bug fixes). Though now fixed, this has happened to me with the Debian packages in the past. It is quite a nuisance. I realise the current discussion is about the MS windows packaging. I presume that the Debian packager could continue to keep 'em separated? > And to make clear for Windows installers that SciPy will work only with > gcc compiled Numeric. May be it is possible to check that > the correct Numeric is used already in the SciPy setup.py script and it > would raise an exception if Numeric is not compiled with gcc. Presumably you could build a test based on the code that is known to fail --- that in eric's message? -Steve -- by Rocket to the Moon, by Airplane to the Rocket, by Taxi to the Airport, by Frontdoor to the Taxi, by throwing back the blanket and laying down the legs ... 
- They Might Be Giants From jochen at unc.edu Fri Apr 5 11:34:08 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 05 Apr 2002 11:34:08 -0500 Subject: [SciPy-dev] issues with distributing Numeric separately on windows In-Reply-To: References: Message-ID: On Fri, 5 Apr 2002 09:33:42 -0600 (CST) pearu wrote: pearu> Personally and in general, I would not feel good if some package pearu> overwrites already installed version of it. Absolutely. pearu> Would it be possible if NumPy team would provide also gcc pearu> compiled binaries for windows? I am sure nobody (incl. Paul Dubois) would complain if you give him a binary package of the latest numpy to be included on the download page... pearu> And to make clear for Windows installers that SciPy will work pearu> only with gcc compiled Numeric. May be it is possible to check pearu> that the correct Numeric is used already in the SciPy setup.py pearu> script and it would raise an exception if Numeric is not pearu> compiled with gcc. Wouldn't just checking for "gcc2_compiled" in some numpy .so be enough? Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From jochen at unc.edu Fri Apr 5 12:29:55 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 05 Apr 2002 12:29:55 -0500 Subject: [SciPy-dev] cvs probs Message-ID: Ok, I try to run some more benchmarks. I do a cvs up -A, build, and I get a ,---- | ... | scipy_base.fastumath fastumath needs fortran libraries 0 0 | building 'scipy_base.fastumath' extension | error: file '/home/jochen/source/numeric/scipy/scipy_base/scipy_base.fastumathmodule.c' does not exist `---- ^^^^^^^^^^^^^^^^^^^^^^ So I check what has changed since yesterday... ,---- | > cvs diff -u -D yesterday | ? linalg2/cblas2.pyf | ? linalg2/clapack2.pyf | ? linalg2/fblas2.pyf | ? linalg2/flapack2.pyf | ?
scipy_base/mconf_lite.h | cvs server: Help.py no longer exists, no comparison available | cvs server: MANIFEST no longer exists, no comparison available | Index: __cvs_version__.py | =================================================================== | RCS file: /home/cvsroot/world/scipy/__cvs_version__.py,v | retrieving revision 1.68.1455.3520 | retrieving revision 1.68.1455.3499 | cvs [server aborted]: could not find desired version 1.68.1455.3499 in /home/cvsroot/world/scipy/__cvs_version__.py,v `---- What's going wrong here? Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From pearu at scipy.org Fri Apr 5 13:48:27 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Fri, 5 Apr 2002 12:48:27 -0600 (CST) Subject: [SciPy-dev] cvs probs In-Reply-To: Message-ID: On 5 Apr 2002, Jochen K?pper wrote: > | error: file '/home/jochen/source/numeric/scipy/scipy_base/scipy_base.fastumathmodule.c' does not exist > `---- > ^^^^^^^^^^^^^^^^^^^^^^ It seems to be fixed now in CVS. > ,---- > | > cvs diff -u -D yesterday > | =================================================================== > | RCS file: /home/cvsroot/world/scipy/__cvs_version__.py,v > | retrieving revision 1.68.1455.3520 > | retrieving revision 1.68.1455.3499 > | cvs [server aborted]: could not find desired version 1.68.1455.3499 in /home/cvsroot/world/scipy/__cvs_version__.py,v > `---- > > What's going wrong here? In order to run this cvs command, you'll first need to remove __cvs_version__.py file. It is there for a hack that calculates CVS version numbers. Pearu From jochen at unc.edu Fri Apr 5 14:27:49 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 05 Apr 2002 14:27:49 -0500 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: Message-ID: Here are results from the same dual CPU PIII/800 machine. 
ATLAS compiled with gcc-2.7.2.3, everything else with gcc-3. I just got the latest numpy and scipy CVS versions and rebuilt (--force) everything. See comments below. ,---- | Python 2.2.1c1 (#1, Mar 20 2002, 15:04:50) | [GCC 3.0.4] on linux2 | Type "help", "copyright", "credits" or "license" for more information. | >>> import scipy | >>> import scipy.linalg2 | >>> scipy.linalg2.basic.test() | ................................ | Finding matrix determinant | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.36 | 0.74 | 0.36 | 0.86 (secs for 2000 calls) | 100 | 0.83 | 1.50 | 0.83 | 2.01 (secs for 300 calls) | 500 | 0.92 | 1.12 | 0.90 | 1.32 (secs for 4 calls) | . | Solving system of linear equations | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.53 | 0.64 | 0.53 | 0.75 (secs for 2000 calls) | 100 | 0.89 | 1.22 | 0.89 | 1.77 (secs for 300 calls) | 500 | 0.91 | 1.00 | 0.89 | 1.21 (secs for 4 calls) | . | Finding matrix inverse | ================================== | | contiguous | non-contiguous | ---------------------------------------------- | size | scipy | Numeric | scipy | Numeric | 20 | 0.76 | 1.17 | 0.76 | 1.28 (secs for 2000 calls) | 100 | 2.22 | 3.63 | 2.22 | 4.03 (secs for 300 calls) | 500 | 2.38 | 3.30 | 2.35 | 3.50 (secs for 4 calls) | . | ---------------------------------------------------------------------- | Ran 35 tests in 51.051s | | OK `---- On Fri, 5 Apr 2002 09:01:21 -0600 (CST) pearu wrote: pearu> On 4 Apr 2002, Jochen Küpper wrote: >> I don't know what the problem is, but it looks as if scipy scales worse >> than Numeric? pearu> I don't understand how you can conclude that Well, see above. Going from 300 x 100x100 to 4 x 500x500 scipy takes more or the same time, whereas numpy takes less.
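The scaling pattern under discussion — per-call interface overhead amortized over call count and matrix size — can be sketched with a toy cost model. The numbers below are invented purely for illustration, not measured:

```python
# Toy cost model (illustrative numbers only): total benchmark time is
# calls * (per-call interface overhead + per-call compute time).
def total_time(calls, overhead, compute):
    return calls * (overhead + compute)

# Small matrices, many calls: interface overhead dominates, so a
# 4x-slower interface shows up as roughly 2x total time here.
fast_small = total_time(2000, 0.0001, 0.0002)
slow_small = total_time(2000, 0.0004, 0.0002)

# Large matrices, few calls: compute dominates and the two converge.
fast_big = total_time(4, 0.0001, 0.2)
slow_big = total_time(4, 0.0004, 0.2)

print(round(slow_small / fast_small, 2))  # 2.0
print(round(slow_big / fast_big, 2))      # 1.0
```

This is why the faster f2py-generated interfaces matter most in the many-calls/small-n corner of the tables, and wash out as n grows.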
pearu> but if scipy and Numeric use the same ATLAS (the Jochen case) pearu> then: pearu> 1) for n->oo, where n is the size of the problem, there would be no pearu> difference in speeds as the hard computation is done by the same ATLAS pearu> routine. The data does not oppose that. pearu> 2) for n fixed but repeating the computation c times, then for c->oo pearu> you would find that scipy is 2-3 times faster than Numeric. This speed up pearu> is gained only because of the f2py generated interface between Python and pearu> the ATLAS routines that scipy uses. That is kind of what the data shows, but it is only valid for small n, as you said yourself in 1). pearu> BUT, if Numeric is linked with its lapack_lite (the Travis pearu> case), then you will have huge speedups (approx. 10 times) pearu> mainly because scipy uses highly optimized ATLAS routines. Ok. pearu> So, I don't find these testing results strange as you commented. I think it is not a valid comparison to put scipy against lapack_lite, considering how easy it is to get numpy to use any LAPACK/BLAS. Esp. considering the people on this list. pearu> More surprising is that sometimes with non-contiguous input pearu> data the calculation is actually faster(!) and not slower as I pearu> would expect. I have no explanation for this one. I would assume that gives you a lower bound for the accuracy of these benchmarks... Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From pearu at scipy.org Fri Apr 5 15:17:25 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Fri, 5 Apr 2002 14:17:25 -0600 (CST) Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: Message-ID: On 5 Apr 2002, Jochen Küpper wrote: > Well, see above. Going from 300 x 100x100 to 4 x 500x500 scipy takes > more or the same time, whereas numpy takes less.
Yes, because in the latter case most of the time is spent in ATLAS routines and the time spent in interfaces is very small, after all there are only 4 calls. But in the former case (300 x 100x100), ATLAS routines finish more quickly and since there are lots of calls (300), the time spent in interfaces becomes noticeable. > pearu> 1) for n->oo, where n is the size of the problem, there would be no > pearu> difference in speeds as the hard computation is done by the same ATLAS > pearu> routine. > > The data does not oppose that. But you don't believe me ;-) > pearu> 2) for n fixed but repeating the computation c times, then for c->oo > pearu> you would find that scipy is 2-3 times faster than Numeric. This speed up > pearu> is gained only because of the f2py generated interface between Python and > pearu> the ATLAS routines that scipy uses. > > That is kind of what the data shows, but is only valid for small n > though, as you said yourself in 1). So, I should have said here: scipy interfaces to ATLAS routines are X times faster than the corresponding interfaces in Numeric. To find X, I ran tests with small input data (taking n=2) so that most of the time is spent in the interfaces, rather than in ATLAS routines. It turns out that the scipy interface is 3-5 times faster than the interface of Numeric. The test results are included at the end of this message. For increasing n and c->oo, the difference between scipy and Numeric becomes smaller because of the reasons explained in 1). Note also that the n's used in these tests are relatively small. If n is really large, then I would expect scipy to perform better than Numeric here as well, because the interfaces in scipy are also optimized to minimize memory usage. > pearu> So, I don't find these testing results strange as you commented. > > I think it is not a valid comparison to put scipy against lapack_lite, > considering how easy it is to get numpy to use any LAPACK/BLAS. > Esp. considering the people on this list. I agree.
However this comparison is still generally useful: it gives a motivation for people to build numpy with optimized LAPACK/BLAS libraries. > pearu> More surprising is that sometimes with non-contiguous input > pearu> data the calculation is actually faster(!) and not slower as I > pearu> would expect. I have no explanation for this one. > > I would assume that gives you a lower bound for the accuracy of these > benchmarks... Yes, but curiously enough with non-contiguous input the calculation is systematically faster or of the same speed, but rarely slower. Pearu --------------------------- Intel Mobile 400 MHz, 160MB RAM, Debian Woody with Linux 2.4.14-6, gcc version 2.95.4, Python 2.1.2-4, Both SciPy and NumPy use ATLAS-3.3.13. Finding matrix determinant ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 2 | 1.10 | 5.20 | 1.08 | 5.22 (secs for 4000 calls) 20 | 0.91 | 4.28 | 1.12 | 3.52 (secs for 2000 calls) 100 | 1.56 | 3.48 | 1.62 | 4.35 (secs for 300 calls) 500 | 1.67 | 2.31 | 1.73 | 2.58 (secs for 4 calls) . Solving system of linear equations ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 2 | 1.93 | 6.21 | 1.87 | 6.19 (secs for 4000 calls) 20 | 1.34 | 3.69 | 1.33 | 3.94 (secs for 2000 calls) 100 | 1.92 | 3.32 | 1.94 | 4.26 (secs for 300 calls) 500 | 2.57 | 2.21 | 1.66 | 2.39 (secs for 4 calls) . Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 2 | 2.02 | 8.77 | 1.96 | 8.89 (secs for 4000 calls) 20 | 1.73 | 5.89 | 1.75 | 6.13 (secs for 2000 calls) 100 | 4.73 | 9.49 | 4.48 | 10.35 (secs for 300 calls) 500 | 4.82 | 7.50 | 4.88 | 7.88 (secs for 4 calls) .
---------------------------------------------------------------------- Ran 35 tests in 180.893s From pearu at scipy.org Sat Apr 6 16:54:36 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sat, 6 Apr 2002 15:54:36 -0600 (CST) Subject: [SciPy-dev] scipy_base.testing - requiring Numeric? Message-ID: Hi, I noticed that in scipy_base.testing there is a try-except for passing import failures of Numeric. I am going to remove that try-except construct as I think scipy_base can assume that Numeric is properly installed. Please, let me know if you want to keep it. Pearu From jochen at unc.edu Sat Apr 6 19:53:41 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 06 Apr 2002 19:53:41 -0500 Subject: [SciPy-dev] some notes Message-ID: Some notes on the latest cvs... As I have suggested before, the coding guidelines should specify a maximum of 79 cols instead of 80, as emacsen break lines with exactly 80 cols... The style guide(s) referenced should be PEPs 0007 and 0008, not Guido's old paper. The reference to Numeric should probably be the generic www.numpy.org, even though it is a redirect. The THANKS file should adhere to the format guidelines, too. A line with 182 cols is a little long for sure:) A question mark behind a person's name in this file doesn't look too professional either... I guess the file needs some general updating? A lot of what is in INSTALL typically belongs in a README, whereas INSTALL should have just /installation instructions/, IMHO. The copyright in the license file ought to be updated, I guess?
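The 79-column guideline above is easy to check mechanically. A minimal sketch (the sample text is made up; 182 echoes the THANKS example):

```python
# Report (line number, length) for every line exceeding the column
# limit, mirroring the 79-column guideline discussed above.
def long_lines(text, limit=79):
    return [(i + 1, len(line))
            for i, line in enumerate(text.splitlines())
            if len(line) > limit]

sample = "short\n" + "x" * 182 + "\nok"
print(long_lines(sample))  # [(2, 182)]
```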
Hope it helps, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From pearu at scipy.org Sat Apr 6 21:12:14 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sat, 6 Apr 2002 20:12:14 -0600 (CST) Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: <117a01c1dc18$c70390b0$6b01a8c0@ericlaptop> Message-ID: Hi, On Thu, 4 Apr 2002, eric wrote: > 2. > Where are we on the linalg to linalg2 transition? Has everything been moved > over? I haven't looked at the cblas/fblas wrappers in a while. Also, I'm not > up to speed on all the work Travis and Pearu have done. Is it release ready? I have finished blas wrappers. Well, there is still lots of work to do with tests and wrappers but at least now everything is covered that is in linalg. Except the ger routines, but they are not show stoppers, I hope, and can wait. Travis, can we start replacing linalg with linalg2? Maybe first renaming linalg to linalg1 in CVS, then linalg2 -> linalg, and if everything works fine we can remove linalg1 from CVS. What do you think? > 3. > Documentation and installation guides need to be updated. There have been > substantial updates and rearrangements that we should document. I haven't looked > at this yet. I could not find the new build instructions that Fernando put together on the scipy site anymore. What happened to this document? Does anyone have a copy of it? > 4. > source and binary distributions need to be built and tested. I'd like to have > Windows, Mac, Debian, and Linux RH rpms available. I can do the Windows and maybe > the rpms on a RH 7.x box. I know binary RPMs have a lot of issues, so I guess > we should have a source RPM also? Source (tar-ball) distribution builds and passes all tests (currently 320 for level=1) with Python 2.1 on Debian Woody.
With Python 2.2 only some weave tests fail if the test level is high. Pearu From fperez at pizero.colorado.edu Sat Apr 6 21:42:13 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Sat, 6 Apr 2002 19:42:13 -0700 (MST) Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: Message-ID: > > I could not find the new build instructions that Fernando put together on > the scipy site anymore. What happened to this document? Does anyone have a > copy of it? I can't find it at scipy either, but I'll be happy to mail you a copy if you want to use it as a starter for updated instructions. It's written in lyx, but if you prefer I'll mail you a latex or html version, just let me know. cheers, f From jochen at unc.edu Sat Apr 6 21:50:16 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 06 Apr 2002 21:50:16 -0500 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: References: Message-ID: On Sat, 6 Apr 2002 20:12:14 -0600 (CST) pearu wrote: pearu> On Thu, 4 Apr 2002, eric wrote: >> source and binary distributions need to be built and tested. I'd like to have >> Windows, Mac, Debian, and Linux RH rpms available. I can do the Windows and maybe >> the rpms on a RH 7.x box. I know binary RPMs have a lot of issues, so I guess >> we should have a source RPM also? pearu> Source (tar-ball) distribution builds and passes all tests (currently 320 pearu> for level=1) with Python 2.1 on Debian Woody. With Python 2.2 only some pearu> weave tests fail if the test level is high. Same here. Python 'release-22maint' on RedHat-7.0: test(level=10) fails 33 of 381 tests. If I'm correct, all the failed ones are in weave.
Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From jochen at unc.edu Sat Apr 6 22:17:46 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 06 Apr 2002 22:17:46 -0500 Subject: [SciPy-dev] Cygwin build problems Message-ID: Just tried to build scipy on Cygwin. The last install I did was on Jan 18, if I see that correctly. A few problems: I reported at least twice before that log2 is defined by math.h on Cygwin, so the following (or similar) is needed ,---- | Index: special/cephes.h | =================================================================== | RCS file: /home/cvsroot/world/scipy/special/cephes.h,v | retrieving revision 1.3 | diff -u -r1.3 cephes.h | --- special/cephes.h 2002/02/24 07:09:49 1.3 | +++ special/cephes.h 2002/04/07 03:05:09 | @@ -103,7 +103,9 @@ | /* | extern int levnsn ( int n, double r[], double a[], double e[], double refl[] ); | */ | +#ifndef log2 | extern double log2 ( double x ); | +#endif | /* | extern long lrand ( void ); | extern long lsqrt ( long x ); | Index: special/cephes/protos.h | =================================================================== | RCS file: /home/cvsroot/world/scipy/special/cephes/protos.h,v | retrieving revision 1.3 | diff -u -r1.3 protos.h | --- special/cephes/protos.h 2002/02/24 07:09:50 1.3 | +++ special/cephes/protos.h 2002/04/07 03:05:09 | @@ -101,7 +101,9 @@ | extern int levnsn ( int n, double r[], double a[], double e[], double refl[] ); | extern double log ( double x ); | extern double log10 ( double x ); | +#ifndef log2 | extern double log2 ( double x ); | +#endif | extern long lrand ( void ); | extern long lsqrt ( long x ); | extern int minv ( double A[], double X[], int n, double B[], int IPS[] ); `---- Moreover I get link errors because -lpython2.2 cannot be found: ,---- | g77 -shared 
build/temp.cygwin-1.3.10-i686-2.2/fortranobject.o build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o -L/usr/local/lib -L/usr/local/lib -llapack -lf77blas -lcblas -latlas -lpython2.2 -lg2c -o build/lib.cygwin-1.3.10-i686-2.2/scipy/linalg/fblas.dll | /usr/lib/gcc-lib/i686-pc-cygwin/2.95.3-5/../../../../i686-pc-cygwin/bin/ld: cannot find -lpython2.2 | collect2: ld returned 1 exit status | error: command 'g77' failed with exit status 1 `---- I can solve this by manually adding -L/usr/lib/python2.2/config to the link line. Ok, so I automate that by providing a site.cfg... ,---- | [DEFAULT] | lib_dir = /usr/local/lib:/opt/lib:/usr/lib:/lib:/usr/lib/python2.2/config `---- But all I get is an error ,---- | python setup.py build | atlas_info: | FOUND: | libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] | library_dirs = ['/usr/local/lib', '/usr/local/lib'] | | file: build/generated_pyfs/flapack.pyf | file: build/generated_pyfs/clapack.pyf | file: build/generated_pyfs/fblas.pyf | file: build/generated_pyfs/cblas.pyf | ### Little Endian detected #### | x11_info: | NOT AVAILABLE | | ### Little Endian detected #### | Traceback (most recent call last): | File "setup.py", line 127, in ? | install_package() | File "setup.py", line 110, in install_package | config_dict = merge_config_dicts(config) | File "scipy_distutils/misc_util.py", line 290, in merge_config_dicts | result[key].extend(d.get(key,[])) | AttributeError: 'NoneType' object has no attribute 'get' `---- Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From jochen at jochen-kuepper.de Sun Apr 7 01:17:20 2002 From: jochen at jochen-kuepper.de (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 01:17:20 -0500 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: Message-ID: Pearu, some roundup. >> Well, see above. 
Going from 300 x 100x100 to 4 x 500x500 scipy takes >> more or the same time, whereas numpy takes less. pearu> Yes, because in the latter case most of the time is spent in pearu> ATLAS routines and the time spent in interfaces is very small, pearu> after all there are only 4 calls. But in the former case (300 x pearu> 100x100), ATLAS routines finish more quickly and since there pearu> are lots of calls (300), the time spent in interfaces pearu> becomes noticeable. All this is perfectly clear and I do see your improvements. What this boils down to is that scipy is good when you have a huge number of evaluations for small matrices. (If the number of evals is small it doesn't matter, and if the matrices are big Numeric is as good -- well, almost:).) pearu> But you don't believe me ;-) Yes I do. Even before I got your mail:)) I meant to say: "I believe you, but there is not enough data to prove it." pearu> scipy interfaces to ATLAS routines are X times faster than the pearu> corresponding interfaces in Numeric. That does make perfect sense. And a factor > 2 is actually a lot considering the "small" amount of work that has to be done here. "3--5" you say... phiuuuh pearu> Note also that the n's used in these tests are relatively pearu> small. If n is really large, then I would expect also scipy to pearu> perform better than Numeric because the interfaces in scipy are pearu> also optimized to minimize the memory usage. Ah, here we go. That is what would help *me*. I'll check it. pearu> So, I don't find these testing results strange as you pearu> commented. Well, knowing what Travis posted I must say what I find strange is that he compared Numeric's lapack_lite with ATLAS, where it is so easy to use a machine-optimized LAPACK/BLAS with Numeric. And btw. ATLAS isn't always the best choice here -- one reason why I don't like this tight binding to ATLAS too much. pearu> I agree. 
However this comparison is still generally useful: it pearu> gives a motivation for people to build numpy with optimized pearu> LAPACK/BLAS libraries. Yep, but then you have to post it on numpy-discussion. I assume everybody on the scipy list who has gone through the trouble of installing scipy (I know, it got a lot better again lately) has installed Numeric with LAPACK/BLAS support. pearu> Yes, but curiously enough with non-contiguous input the pearu> calculation is systematically faster or of the same speed, pearu> but rarely slower. Hmm, looking at the test I see that non-contiguous means "inverted". Could this and the copying actually help caching?? There should probably be some really non-contiguous (i.e. abs(stride) != 1) data in these tests. Pearu, something related: How much work would it be to get f2py/LinAlg to work with numarray? There is a numpy_compat now that supports almost all of Numeric, so this would be one way. But in the long run one would like to have a native implementation of linalg, of course... Greetings, Jochen -- Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de Liberté, Égalité, Fraternité GnuPG key: 44BCCD8E Sex, drugs and rock-n-roll From oliphant.travis at ieee.org Sun Apr 7 03:47:35 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 07 Apr 2002 00:47:35 -0700 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: References: Message-ID: <1018165656.14389.27.camel@travis> On Sat, 2002-04-06 at 19:12, pearu at scipy.org wrote: > > Hi, > > On Thu, 4 Apr 2002, eric wrote: > > > 2. > > Where are we on the linalg to linalg2 transition? Has everything been moved > > over? I haven't looked at the cblas/fblas wrappers in a while. Also, I'm not > > up to speed on all the work Travis and Pearu have done. Is it release ready? > > I have finished blas wrappers. Well, there is still lots of work to do > with tests and wrappers but at least now everything is covered that > is in linalg. 
Except the ger routines, but they are not show stoppers, I hope, > and can wait. > > Travis, can we start replacing linalg with linalg2? > Maybe first renaming linalg to linalg1 in CVS, then linalg2 -> linalg, > and if everything works fine we can remove linalg1 from CVS. > What do you think? That's fine with me. My impression is that everything is now covered in linalg2. -Travis From oliphant.travis at ieee.org Sun Apr 7 04:38:02 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 07 Apr 2002 01:38:02 -0700 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: References: Message-ID: <1018168683.19441.1.camel@travis> A recent checkin caused problems for SciPy on my platform. Now I can't get anything to load before a segfault. I think you were checking in files recently, Pearu. Can you see any reason why I would be getting segfaults now? -Travis From pearu at scipy.org Sun Apr 7 04:31:02 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 03:31:02 -0500 (CDT) Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: <1018168683.19441.1.camel@travis> Message-ID: On 7 Apr 2002, Travis Oliphant wrote: > A recent checkin caused problems for SciPy on my platform. Now I can't > get anything to load before a segfault. Do you mean Windows? > I think you were checking in files recently, Pearu. Can you see any > reason why I would be getting segfaults now? In linalg2: I have implemented a wrapper for gemm, changed the interface a bit for gemv, and fixed a bug in scal. If you run python -v -c 'import scipy' this might give more information about what is causing your problems. And as usual, rm -rf build often solves strange problems. 
Pearu From oliphant.travis at ieee.org Sun Apr 7 04:59:23 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 07 Apr 2002 01:59:23 -0700 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: References: Message-ID: <1018169965.19440.5.camel@travis> On Sun, 2002-04-07 at 01:31, pearu at scipy.org wrote: > > > On 7 Apr 2002, Travis Oliphant wrote: > > > A recent checkin caused problems for SciPy on my platform. Now I can't > > get anything to load before a segfault. > > Do you mean Windows? No, Mandrake 8.1 under two versions of Python (2.2 and 2.2.1c2); both were working correctly until the recent updates, and now neither one works. I've done rm -fr build/ several times to no avail. I have no idea what is going on. > > > I think you were checking in files recently, Pearu. Can you see any > > reason why I would be getting segfaults now? > > In linalg2: I have implemented a wrapper for gemm, changed the interface a bit > for gemv, and fixed a bug in scal. > > If you run > python -v -c 'import scipy' > this might give more information about what is causing your problems. > Thanks for this. I can now see that it's choking when dlopen calls _minpack.so (no idea why, though). From pearu at cens.ioc.ee Sun Apr 7 05:14:02 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 7 Apr 2002 12:14:02 +0300 (EEST) Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: <1018169965.19440.5.camel@travis> Message-ID: On 7 Apr 2002, Travis Oliphant wrote: > Thanks for this. I can now see that it's choking when dlopen calls > _minpack.so (no idea why, though). Can you import it directly? import _minpack in its directory. If yes, then what about import fblas2, flapack2, etc.? 
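This try-each-module approach can be scripted. A sketch using stand-in module names (note that a hard segfault in an extension would still kill the interpreter rather than raise — which is exactly what isolates the crashing module):

```python
import importlib

def probe(names):
    """Try importing each module; record 'ok' or the import error."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = "ok"
        except ImportError as exc:
            status[name] = "failed: %s" % exc
    return status

# 'math' stands in for an importable extension like _minpack;
# the second name is deliberately bogus.
print(probe(["math", "no_such_extension"]))
```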
Pearu From pearu at scipy.org Sun Apr 7 06:11:50 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 05:11:50 -0500 (CDT) Subject: [SciPy-dev] Cygwin build problems In-Reply-To: Message-ID: On 6 Apr 2002, Jochen Küpper wrote: > I reported at least twice before that log2 is defined by math.h on > Cygwin, so the following (or similar) is needed Fixed. > Moreover I get link errors because -lpython2.2 cannot be found: ,---- | > g77 -shared build/temp.cygwin-1.3.10-i686-2.2/fortranobject.o > build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o -L/usr/local/lib > -L/usr/local/lib -llapack -lf77blas -lcblas -latlas -lpython2.2 -lg2c ^^^^^^^^^^^ Where does this -lpython2.2 come from? Adding -L/usr/lib/python2.2/config should be done in the place where -lpython2.2 is added, but I cannot figure out where. Eric, do you have ideas? Is it a python distutils or a mingw32_support issue? > But all I get is an error > | AttributeError: 'NoneType' object has no attribute 'get' Fixed. Pearu From pearu at scipy.org Sun Apr 7 06:33:55 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 05:33:55 -0500 (CDT) Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: Message-ID: Hi Jochen, On 7 Apr 2002, Jochen Küpper wrote: > Well, knowing what Travis posted I must say what I find strange is that > he compared Numeric's lapack_lite with ATLAS, where it is so easy to > use a machine-optimized LAPACK/BLAS with Numeric. And btw. ATLAS > isn't always the best choice here -- one reason why I don't like this > tight binding to ATLAS too much. Using site.cfg it is possible to use your own lapack/blas libraries in favour of the atlas ones by specifying a proper order. Though currently you still need atlas, otherwise building cblas will fail. In the future, we can remove the strict dependence of linalg on ATLAS easily, as cblas or clapack routines are not used directly but through their wrappers blas.py and lapack.py. 
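The decoupling argument — callers go through a thin wrapper, so the underlying BLAS binding can be swapped — can be illustrated with a toy dispatch layer (hypothetical API; the pure-Python fallback exists only for demonstration):

```python
# Callers use gemm(); which backend actually does the work (ATLAS,
# reference BLAS, a fallback) is an implementation detail behind it.
def make_gemm(backend):
    def gemm(alpha, a, b):
        return backend(alpha, a, b)
    return gemm

def pure_python_gemm(alpha, a, b):
    # Naive fallback computing alpha * (a @ b) for lists of lists.
    n, k, m = len(a), len(b), len(b[0])
    return [[alpha * sum(a[i][t] * b[t][j] for t in range(k))
             for j in range(m)] for i in range(n)]

gemm = make_gemm(pure_python_gemm)
print(gemm(2.0, [[1, 0], [0, 1]], [[1, 2], [3, 4]]))
# [[2.0, 4.0], [6.0, 8.0]]
```

Swapping in an optimized backend then requires no change at the call sites.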
> something related: How much work would it be to get f2py/LinAlg to > work with numarray? Actually it should not be difficult. The most intensive use of the Numeric array C/API is in fortranobject.c, basically in the function array_from_pyobj and its dependencies. There is no need to change signature files. That's the whole beauty of using tools like f2py for generating extension modules automatically. When numarray has a stable and documented C/API interface, I'll look at supporting it in f2py. > There is a numpy_compat now that supports almost all of Numeric, so > this would be one way. But in the long run one would like to have a > native implementation of linalg, of course... I am not sure that numpy_compat will work straightforwardly for f2py, as f2py is quite aware of Numeric array internals. But on the other hand, I have not looked at what is done in numpy_compat or numarray lately. Pearu From arnd.baecker at physik.uni-ulm.de Sun Apr 7 09:20:35 2002 From: arnd.baecker at physik.uni-ulm.de (arnd.baecker at physik.uni-ulm.de) Date: Sun, 7 Apr 2002 15:20:35 +0200 (MEST) Subject: [SciPy-dev] installation and linalg2 bench Message-ID: Hi, I successfully installed a CVS scipy (cvs_version = (1, 68, 1455, 3473)) - everything went quite smoothly (I found site.cfg very convenient!). There were just 2 [[Installation was on a debian woody, self-compiled Python 2.2.1c2 (#1, Apr 4 2002, 18:54:47) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2, self-compiled ATLAS 3.3.14, CVS Numeric 22.0a0 ]] Running scipy.test(10) gives the following messages, which I find a bit irritating (though they might not be crucial ?) 1.) creating test suite for: scipy.common !! FAILURE building test for scipy.common :1: ImportError: No module named test_common (in ?) 2.) creating test suite for: scipy.stats.stats !! FAILURE building test for scipy.stats.stats :1: ImportError: No module named test_stats (in ?) 3.) [...] 
1st run(Numeric,compiled,speed up): 2.3194, 1.0310, 2.2498 2nd run(Numeric,compiled,speed up): 2.4009, 0.9876, 2.4310 .warning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations .warning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations .................................... test printing a value:2 ../home/abaecker/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp: In function `struct PyObject * compiled_func(PyObject *, PyObject *)': /home/abaecker/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp:418: no match for `Py::String & < int' /home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/weave/CXX/Objects.hxx:390: candidates are: bool Py::Object::operator <(const Py::Object &) const /home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/weave/CXX/Objects.hxx:1433: bool Py::operator <(const Py::SeqBase::const_iterator &, const Py::SeqBase::const_iterator &) /home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/weave/CXX/Objects.hxx:1426: bool Py::operator <(const Py::SeqBase::iterator &, const Py::SeqBase::iterator &) ........ [...] ----------------------------------------------------------------------- Ran 304 tests in 859.017s 4.) Finally I did the benchmarking (note that this is on a PII, 350 MHz ... ;-) >>> import scipy.linalg2 >>> scipy.linalg2.basic.test() ................................ Finding matrix determinant ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.14 | 2.47 | 1.13 | 2.84 (secs for 2000 calls) 100 | 2.21 | 8.24 | 2.18 | 9.58 (secs for 300 calls) 500 | 2.06 | 11.81 | 2.05 | 12.34 (secs for 4 calls) . 
Solving system of linear equations ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.70 | 2.09 | 1.67 | 2.47 (secs for 2000 calls) 100 | 2.21 | 7.67 | 2.23 | 9.19 (secs for 300 calls) 500 | 2.01 | 11.55 | 2.03 | 12.24 (secs for 4 calls) . Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 2.22 | 4.27 | 2.25 | 4.53 (secs for 2000 calls) 100 | 5.78 | 23.75 | 5.77 | 24.99 (secs for 300 calls) 500 | 5.84 | 46.16 | 5.85 | 46.57 (secs for 4 calls) . ---------------------------------------------------------------------- Ran 35 tests in 294.343s Best, Arnd From pearu at scipy.org Sun Apr 7 10:43:06 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 09:43:06 -0500 (CDT) Subject: [SciPy-dev] Warning: CVS is going to be unstable due to linalg2 -> linalg Message-ID: ... until further notice. Pearu From pearu at scipy.org Sun Apr 7 12:14:00 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 11:14:00 -0500 (CDT) Subject: [SciPy-dev] Finished linalg2->linalg transformation Message-ID: Hi, I have finished replacing linalg with linalg2. The old version of linalg is copied into the directory linalg/linalg1. Later linalg1 will be removed from the CVS tree. I have tested that the new setup builds from the tar source distribution on Woody Debian with Python 2.1.2, gcc-2.95.4, ATLAS-3.3.13. When updating SciPy from CVS, I suggest first removing the linalg and linalg2 directories from your local CVS tree. Also, if you have installed scipy, it is a good idea to remove all traces of scipy from the site-packages directory (but keep scipy_distutils, otherwise you have to reinstall f2py). 
Enjoy, Pearu From jochen at unc.edu Sun Apr 7 13:42:24 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 13:42:24 -0400 Subject: [SciPy-dev] Linalg2 benchmarks In-Reply-To: References: Message-ID: On Sun, 7 Apr 2002 05:33:55 -0500 (CDT) pearu wrote: pearu> When numarray has a stable and documented C/API pearu> interface, I'll look at supporting it in f2py. Hmm, probably not stable, but it is documented now (somewhat) ... ,---- | http://python.jochen-kuepper.de/numarray `---- Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From jochen at unc.edu Sun Apr 7 14:47:21 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 14:47:21 -0400 Subject: [SciPy-dev] Cygwin build problems In-Reply-To: References: Message-ID: On Sun, 7 Apr 2002 05:11:50 -0500 (CDT) pearu wrote: pearu> Where does this -lpython2.2 come from? Adding pearu> -L/usr/lib/python2.2/config should be done in the place where pearu> -lpython2.2 is added, but I cannot figure out where. 
Ok, it comes from
,----
| /usr/lib/python2.2/distutils/command/build_ext.py
`----

>> But all I get is an error
pearu> >> | AttributeError: 'NoneType' object has no attribute 'get'

Yep, but even with the following site.cfg the library isn't found:
,----[cat scipy_distutils/site.cfg]
| [DEFAULT]
| lib_dir = /usr/local/lib:/opt/lib:/usr/lib:/lib:/usr/lib/python2.2/config
`----

I get
,----
| /usr/lib/gcc-lib/i686-pc-cygwin/2.95.3-5/../../../../i686-pc-cygwin/bin/ld: cannot find -lpython2.2
| collect2: ld returned 1 exit status
| error: command 'g77' failed with exit status 1
`----

Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E

From pearu at scipy.org Sun Apr 7 15:28:43 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 14:28:43 -0500 (CDT) Subject: [SciPy-dev] Cygwin build problems In-Reply-To: Message-ID:

On 7 Apr 2002, Jochen Küpper wrote:

> Yep, but even with the following site.cfg the library isn't found:
> ,----[cat scipy_distutils/site.cfg]
> | [DEFAULT]
> | lib_dir = /usr/local/lib:/opt/lib:/usr/lib:/lib:/usr/lib/python2.2/config
> `----

Yes, because libpython2.2 is not looked up by the system_info.py script; that is not its job, it is an issue for distutils or scipy_distutils, I think. Actually, the needed library path seems to be defined in the build_ext.finalize_options method in distutils/command/build_ext.py. I have little idea why it is not used. But try to uncomment lines #67 and #70 in the file scipy_distutils/command/build_ext.py. I would then be interested to see the resulting linking command, whether it will be successful or not. Also print self.library_dirs in that method to see whether the needed path is there.
Maybe it is enough to add the following line

  self.compiler.library_dirs.extend(self.library_dirs)

before the call in line #84:

  res = old_build_ext.build_extension(self,ext)

Let me know how it goes. Pearu

From jochen at unc.edu Sun Apr 7 16:13:28 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 16:13:28 -0400 Subject: [SciPy-dev] Cygwin build problems In-Reply-To: References: Message-ID:

On Sun, 7 Apr 2002 14:28:43 -0500 (CDT) pearu wrote:

pearu> Actually, the needed library path seems to be defined in
pearu> build_ext.finalize_options method in
pearu> distutils/command/build_ext.py. I have little idea why it is
pearu> not used. But try to uncomment lines
pearu> #67 and #70 in file scipy_distutils/command/build_ext.py.
pearu> I would then be interested to see the resulting linking command,
pearu> whether it will be successful or not.

Ok, the link line now is
,----
| g77 -shared build/temp.cygwin-1.3.10-i686-2.2/fortranobject.o build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o -L/usr/local/lib -L/usr/local/lib -L/usr/lib/python2.2/config -Lbuild/temp.cygwin-1.3.10-i686-2.2 -llapack -lf77blas -lcblas -latlas -lpython2.2 -lc_misc -lcephes -lg2c -o build/lib.cygwin-1.3.10-i686-2.2/scipy/linalg/fblas.dll
`----
which solves the previous problem, but here's a new one: many missing symbols. I'll look into that when I have some more time (probably tomorrow).
,----
| /home/software/programming/numeric/scipy/build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.c:5463: undefined reference to `f2py_stop_clock'
| build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o: In function `f2py_rout_fblas_chemv':
| /home/software/programming/numeric/scipy/build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.c:5534: undefined reference to `f2py_start_clock'
`----

Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E

From pearu at scipy.org Sun Apr 7 16:20:46 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 15:20:46 -0500 (CDT) Subject: [SciPy-dev] Cygwin build problems In-Reply-To: Message-ID:

On 7 Apr 2002, Jochen Küpper wrote:

> Ok, the link line now is
> ,----
> | g77 -shared build/temp.cygwin-1.3.10-i686-2.2/fortranobject.o
> build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o -L/usr/local/lib
> -L/usr/local/lib -L/usr/lib/python2.2/config
> -Lbuild/temp.cygwin-1.3.10-i686-2.2 -llapack -lf77blas -lcblas -latlas
> -lpython2.2 -lc_misc -lcephes -lg2c -o
                ^^^^^^^^^^^^^^^^^
I don't like these parasite libraries here. Did they come from uncommenting the lines? How did the line #84 hack work? I was hoping that that would solve the problem.

> build/lib.cygwin-1.3.10-i686-2.2/scipy/linalg/fblas.dll
> `----
> which solves the previous problem, but here's a new one: Many missing
> symbols.

This is an easy one. Either you are using a very old f2py (you'll need the latest) or there are old object or library files around. In the latter case just remove them and rebuild. I recommend 'rm -rf build'.
Pearu

From jochen at unc.edu Sun Apr 7 16:40:39 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 16:40:39 -0400 Subject: [SciPy-dev] Cygwin build problems In-Reply-To: References: Message-ID:

Ok, I did a clean build (rm -rf build) with the patch to build_ext.py suggested by Pearu. The remaining problem is that 'on_exit' is an unresolved symbol... This should come from libpython, I assume?

,----
| g77 -shared build/temp.cygwin-1.3.10-i686-2.2/fortranobject.o build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o -L/usr/local/lib -L/usr/local/lib -L/usr/lib/python2.2/config -Lbuild/temp.cygwin-1.3.10-i686-2.2 -llapack -lf77blas -lcblas -latlas -lpython2.2 -lc_misc -lcephes -lgist -lg2c -o build/lib.cygwin-1.3.10-i686-2.2/scipy/linalg/fblas.dll
| build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.o: In function `initfblas':
| /home/software/programming/numeric/scipy/build/temp.cygwin-1.3.10-i686-2.2/fblasmodule.c:10349: undefined reference to `on_exit'
| collect2: ld returned 1 exit status
| error: command 'g77' failed with exit status 1
`----

Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E

From pearu at scipy.org Sun Apr 7 16:45:47 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 15:45:47 -0500 (CDT) Subject: [SciPy-dev] Cygwin build problems In-Reply-To: Message-ID:

On 7 Apr 2002, Jochen Küpper wrote:

> Ok, I did a clean build (rm -rf build) with the patch to build_ext.py
> suggested by Pearu. The remaining problem is that 'on_exit' is an
> unresolved symbol... This should come from libpython, I assume?

I think so. f2py-generated extension modules do not use on_exit *provided* that you use the latest f2py.
Pearu

From eric at scipy.org Sun Apr 7 15:51:32 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 15:51:32 -0400 Subject: [SciPy-dev] some notes References: Message-ID: <010901c1de6d$9eabb820$777ba8c0@ericlaptop>

> Some notes on latest cvs...
>
> As I have suggested before the coding guidelines should specify a
> maximum of 79 cols instead of 80, as emacsens break lines with exactly
> 80 cols...
> The style guide(s) referenced should be PEPs 0007 and 0008, not
> Guido's old paper.

Right. We've (well, I've) been following the 80 convention, but 79 would be better. I doubt that gets fixed before this release (lower priority), but it definitely needs to be fixed.

> The reference to Numeric should probably be the generic www.numpy.org,
> even though it is a redirect.

Good. Done.

> The THANKS file should adhere to the format guidelines, too. A line
> with 182 cols is a little long for sure:)
> A question mark behind a person in this file doesn't look too
> professional either... I guess the file needs some general updating?

Yes. Done. Let me know if I've missed someone. It's not intended, I just don't remember all the patches that have come through.

> A lot of what is in INSTALL typically belongs into a README, whereas
> INSTALL should have just /installation instructions/, IMHO.

I like the INSTALL.txt file in its current form. Perhaps Pearu made it after you wrote this? Anyway, I don't see any reason for changing its format.

> The copyright in the license file ought to be updated, I guess?

I've updated the date. Is this what you were referring to?

thanks for the comments, eric

From eric at scipy.org Sun Apr 7 16:12:23 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 16:12:23 -0400 Subject: [SciPy-dev] scipy_base.testing - requiring Numeric? References: Message-ID: <011b01c1de70$87cae8d0$777ba8c0@ericlaptop>

Hey Pearu,

Testing was actually developed as a generic test tool that would work outside of scipy.
Weave uses it for testing, and weave is pretty much independent of scipy and scipy_base other than this. I don't want to force people to have Numeric installed when using weave. Weave will have to be packaged slightly differently than it is now for stand-alone use, but I'd like to keep testing.py in a format where it can be used without modification in the stand-alone version. So, this is a long way of saying, let's keep the try/except around the Numeric stuff for now. I've added it back in. Thanks for the rest of the cleanup you did.

eric

----- Original Message ----- From: To: Sent: Saturday, April 06, 2002 5:54 PM Subject: [SciPy-dev] scipy_base.testing - requiring Numeric?

> Hi,
>
> I noticed that in scipy_base.testing there is a try-except for passing
> import failures of Numeric. I am going to remove that try-except construct
> as I think scipy_base can assume that Numeric is properly installed.
> Please, let me know if you want to keep it.
>
> Pearu
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From eric at scipy.org Sun Apr 7 16:15:37 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 16:15:37 -0400 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week References: Message-ID: <012901c1de70$fb94a210$777ba8c0@ericlaptop>

Hey Fernando,

I don't know where this is either, and Travis V. is not around today. Could you mail me a copy? I'd like to look it over, get it in a place where people can look at it, and perhaps update it.

thanks, eric

> > I could not find the new build instructions that Fernando put together from
> > the scipy site anymore. What happened to this document? Does anyone have a
> > copy of it?
>
> I can't find it at scipy either, but I'll be happy to mail you a copy if you
> want to use it as a starter for updated instructions. It's written in lyx, but
> if you prefer I'll mail you a latex or html version, just let me know.
> > cheers,
> > f

From eric at scipy.org Sun Apr 7 16:22:41 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 16:22:41 -0400 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week References: Message-ID: <012f01c1de71$f83cf620$777ba8c0@ericlaptop>

----- Original Message ----- From: To: Sent: Saturday, April 06, 2002 10:12 PM Subject: Re: [SciPy-dev] scipy-0.2 -- maybe mid next week

> Hi,
>
> On Thu, 4 Apr 2002, eric wrote:
>
> > 2.
> > Where are we on the linalg to linalg2 transition? Has everything been moved
> > over? I haven't looked at the cblas/fblas wrappers in a while. Also, I'm not
> > up to speed on all the work Travis and Pearu have done. Is it release ready?
>
> I have finished blas wrappers. Well, there is still lots of work to do
> with tests and wrappers, but at least now everything is covered that
> is in linalg. Except the ger routines, but they are not show stoppers, I hope,
> and can wait.
>
> Travis, can we start replacing linalg with linalg2?
> Maybe first moving linalg to linalg1 in CVS, then linalg2 -> linalg,
> and if everything works fine we can remove linalg1 from CVS.
> What do you think?

Looks like you've done this already. Good. I just checked on test level 10 and everything passes on RH 7.1, even all the weave tests.

I do have one problem though:

>>> import clapack
Traceback (most recent call last):
  File "", line 1, in ?
ImportError: ./clapack.so: undefined symbol: clapack_sgesv

This makes no sense because I checked my liblapack.a, and the symbol is defined. Anyone else seen this? Removing clapack.so gets everything to work smashingly. Anyway, I'll try to track it down.

> > 3.
> > Documentation and installation guides need to be updated. There have been
> > substantial updates and rearrangements that we should document. I haven't looked
> > at this yet.
> > I could not find the new build instructions that Fernando put together from
> > the scipy site anymore. What happened to this document? Does anyone have a
> > copy of it?

Not sure about this. I'll get them from him, and put them back up in a public place.

> > 4.
> > source and binary distributions need to be built and tested. I'd like to have
> > Windows, Mac, Debian, and Linux RH rpms available. I can do the Windows and maybe
> > the rpms on a RH 7.x box. I know binary RPMs have a lot of issues, so I guess
> > we should have a source RPM also?
>
> Source (tar-ball) distribution builds and passes all tests (currently 320
> for level=1) with Python 2.1 on Debian Woody. With Python 2.2 only some
> weave tests fail if the test level is high.

Cool -- but what are the weave failures? I don't get any failures when testing with the latest CVS, Python2.2b2 on RH 7.1. It looks like multiple platforms are having problems with weave. I'll post a separate message for this and try to gather up all the issues under one thread.

eric

From eric at scipy.org Sun Apr 7 16:23:27 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 16:23:27 -0400 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week References: Message-ID: <013501c1de72$13644160$777ba8c0@ericlaptop>

Hey Jochen,

Can you send me the trace of reported weave errors?

Thanks, eric

----- Original Message ----- From: "Jochen Küpper" To: Sent: Saturday, April 06, 2002 10:50 PM Subject: Re: [SciPy-dev] scipy-0.2 -- maybe mid next week

> On Sat, 6 Apr 2002 20:12:14 -0600 (CST) pearu wrote:
>
> pearu> On Thu, 4 Apr 2002, eric wrote:
>
> >> source and binary distributions need to be built and tested. I'd like to have
> >> Windows, Mac, Debian, and Linux RH rpms available. I can do the Windows and maybe
> >> the rpms on a RH 7.x box.
> >> I know binary RPMs have a lot of issues, so I guess
> >> we should have a source RPM also?
>
> pearu> Source (tar-ball) distribution builds and passes all tests (currently 320
> pearu> for level=1) with Python 2.1 on Debian Woody. With Python 2.2 only some
> pearu> weave tests fail if the test level is high.
>
> Here the same. python 'release-22maint' on RedHat-7.0 test(level=10)
> fails 33 of 381 tests. If I'm correct all failed ones are for weave.
>
> Greetings,
> Jochen
> --
> University of North Carolina phone: +1-919-962-4403
> Department of Chemistry phone: +1-919-962-1579
> Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041
> Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E

From pearu at scipy.org Sun Apr 7 17:30:36 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 16:30:36 -0500 (CDT) Subject: [SciPy-dev] scipy_base.testing - requiring Numeric? In-Reply-To: <011b01c1de70$87cae8d0$777ba8c0@ericlaptop> Message-ID:

On Sun, 7 Apr 2002, eric wrote:

> Testing was actually developed as a generic test tool that would work outside of
> scipy. Weave uses it for testing, and weave is pretty much independent of scipy
> and scipy_base other than this. I don't want to force people to have Numeric
> installed when using weave. Weave will have to be packaged slightly differently
> than it is now for stand-alone use, but I'd like to keep testing.py in a format
> where it can be used without modification in the stand-alone version. So, this
> is a long way of saying, let's keep the try/except around the Numeric stuff for
> now.

OK, that's fine. I have learned to be very careful with using try/except constructs. If the try/except block contains more than, say, 2-3 lines, then too many things can go wrong that are not intended and should not be passed without a warning or an exception.
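Pearu's caution about broad try/except blocks can be illustrated with a minimal guard of the kind under discussion: only the single optional import sits inside the protected region, so any unrelated failure still surfaces. (Numeric is the 2002-era array package; on a modern interpreter the fallback branch is the one that runs. The helper name is made up for illustration.)

```python
# Guard only the optional import; every other error still propagates.
try:
    import Numeric          # optional dependency of the testing helpers
except ImportError:
    Numeric = None          # array-based checks get skipped

def have_numeric():
    """Tell callers whether array-based tests can run."""
    return Numeric is not None
```

Wrapping several statements in one try/except would instead silence genuine bugs in those statements, which is exactly the failure mode Pearu warns about.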
This explains my urge to either remove try/except blocks or to reduce them to a minimum.

Pearu

From eric at scipy.org Sun Apr 7 16:36:43 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 16:36:43 -0400 Subject: [SciPy-dev] Linalg2 benchmarks References: Message-ID: <015301c1de73$ee49e4f0$777ba8c0@ericlaptop>

----- Original Message ----- From: To: Sent: Sunday, April 07, 2002 6:33 AM Subject: Re: [SciPy-dev] Linalg2 benchmarks

> Hi Jochen,
>
> On 7 Apr 2002, Jochen Küpper wrote:
>
> > Well, knowing what Travis posted I must say what I find strange is that
> > he compared Numeric's lapack_lite with ATLAS where it is so easy to
> > use a machine optimized LAPACK/BLAS with Numeric. And btw. ATLAS
> > isn't always the best choice here -- one reason why I don't like this
> > tight binding to ATLAS too much.
>
> Using site.cfg it is possible to use your own lapack/blas
> libraries in favour of atlas ones by specifying a proper order.
> Though currently you still need atlas, otherwise building cblas will fail.
>
> In future, we can remove the strict dependence of linalg on ATLAS easily,
> as cblas or clapack routines are not used directly but through their
> wrappers blas.py and lapack.py.

Yes, I'm all for removing this restriction. Building SciPy on the large parallel machines (O2K, SP3, etc.) is harder than it should be because ATLAS is not always easy to build on these beasts. However, they almost always have an optimized lapack sitting around. Fixing this would probably make building on a number of platforms easier. This will take some cooperation between system_info and setup_linalg to test if cblas actually exists.

> > something related: How much work would it be to get f2py/LinAlg to
> > work with numarray?
>
> Actually it should not be difficult. The most intensive use of the Numeric
> array C/API is in fortranobject.c, basically in the function
> array_from_pyobj and its dependencies. There is no need to change
> signature files.
> That's the whole beauty of using tools like f2py for
> generating extension modules automatically.
>
> When numarray will have a stable and documented C/API interface, I'll look
> for its support in f2py.
>
> > There is a numpy_compat now that supports almost all of Numeric, so
> > this would be one way. But in the long run one would like to have a
> > native implementation of linalg, of course...
>
> I am not sure that numpy_compat will work straightforwardly for f2py, as f2py
> is quite aware of Numeric array internals. But on the other hand, I have
> not looked at what is done in numpy_compat or numarray lately.
>
> Pearu

From jochen at unc.edu Sun Apr 7 17:44:21 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 17:44:21 -0400 Subject: [SciPy-dev] some notes In-Reply-To: <010901c1de6d$9eabb820$777ba8c0@ericlaptop> References: <010901c1de6d$9eabb820$777ba8c0@ericlaptop> Message-ID:

On Sun, 7 Apr 2002 15:51:32 -0400 eric wrote:

eric> Right. We've (well, I've) been following the 80 convention, but
eric> 79 would be better. I doubt that gets fixed before this release
eric> (lower priority), but it definitely needs to be fixed.

I basically meant to say: Put it into the guidelines now, fix it in the code wherever needed whenever time permits. You might have people looking at the 0.2 guidelines for a while, so make it right...

eric> I like the INSTALL.txt file in its current form. Perhaps Pearu
eric> made it after you wrote this? Anyway, I don't see any reason
eric> for changing its format.

Hmm, looking at it now I think it is ok:)

eric> I've updated the date. Is this what you were referring to?
yep Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From pearu at scipy.org Sun Apr 7 17:46:51 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 7 Apr 2002 16:46:51 -0500 (CDT) Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: <012f01c1de71$f83cf620$777ba8c0@ericlaptop> Message-ID: On Sun, 7 Apr 2002, eric wrote: > > Source (tar-ball) distribution builds and passes all tests (currently 320 > > for level=1) with Python 2.1 on Debian Woody. With Python 2.2 only some > > weave tests fail if the test level is high. > > Cool -- but what are the weave failures? I don't get any failures when testing > with the latest CVS, Python2.2b2 on RH 7.1. It looks like multiple platforms > are having problems with weave. I'll post a separate message for this and try > to gather up all the issues under one thread. In my case, I got failures, see http://www.scipy.net/pipermail/scipy-dev/2002-April/000759.html when using gcc-3.0.3, Python 2.2 on Suse Linux. All tests pass (including weave) if I use gcc-2.95.4, Python 2.1.2 or 2.2.1c1 on Debian Woody. Pearu From rossini at blindglobe.net Sun Apr 7 18:27:01 2002 From: rossini at blindglobe.net (A.J. Rossini) Date: 07 Apr 2002 15:27:01 -0700 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: <012f01c1de71$f83cf620$777ba8c0@ericlaptop> References: <012f01c1de71$f83cf620$777ba8c0@ericlaptop> Message-ID: <87pu1brvpm.fsf@jeeves.blindglobe.net> >>>>> "eric" == eric writes: eric> From: eric> I do have one problem though: >>>> import clapack eric> Traceback (most recent call last): eric> File "", line 1, in ? eric> ImportError: ./clapack.so: undefined symbol: clapack_sgesv eric> This makes no sense because I checked my liblapack.a, and the symbol is defined. eric> Anyone else seen this? 
Removing clapack.so gets everything to work smashingly.
eric> Anyway, I'll try to track it down.

I sometimes see this when I'm linking with LD_LIBRARY_PATH set during compile, to include /usr/lib/atlas, but it isn't set at the shell level.

-- A.J. Rossini Rsrch. Asst. Prof. of Biostatistics U. of Washington Biostatistics rossini at u.washington.edu FHCRC/SCHARP/HIV Vaccine Trials Net rossini at scharp.org -------------- http://software.biostat.washington.edu/ ---------------- FHCRC: M-W: 206-667-7025 (fax=4812)|Voicemail is pretty sketchy/use Email UW: Th: 206-543-1044 (fax=3286)|Change last 4 digits of phone to FAX (my friday location is usually completely unpredictable.)

From jochen at unc.edu Sun Apr 7 18:48:28 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 07 Apr 2002 18:48:28 -0400 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week In-Reply-To: <013501c1de72$13644160$777ba8c0@ericlaptop> References: <013501c1de72$13644160$777ba8c0@ericlaptop> Message-ID:

On Sun, 7 Apr 2002 16:23:27 -0400 eric wrote:

eric> Can you send me the trace of reported weave errors?

Attached is a complete test log. This is on RedHat-7.0 + gcc + python:
,----
| Reading specs from /usr/local/lib/gcc-lib/i686-pc-linux-gnu/3.0.4/specs
| Configured with: ../gcc-3.0.4/configure --enable-threads=posix --enable-nls --with-system-zlib --enable-languages=c++,f77,objc
| Thread model: posix
| gcc version 3.0.4
`----
,----
| Python 2.2.1 (#2, Apr 6 2002, 00:53:20)
| [GCC 3.0.4] on linux2
| Type "help", "copyright", "credits" or "license" for more information.
`----

Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E

-------------- next part -------------- A non-text attachment was scrubbed...
Name: test.log.bz2 Type: application/x-bunzip2 Size: 2947 bytes Desc: bzip2'ed test.log URL:

From eric at scipy.org Sun Apr 7 17:53:50 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 17:53:50 -0400 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week References: <013501c1de72$13644160$777ba8c0@ericlaptop> Message-ID: <01d901c1de7e$b3db8c00$777ba8c0@ericlaptop>

Ahh. Neglected to see that you are using gcc-3.0.4. weave doesn't yet work with gcc 3.x. Maybe in the next release...

eric

----- Original Message ----- From: "Jochen Küpper" To: Sent: Sunday, April 07, 2002 6:48 PM Subject: Re: [SciPy-dev] scipy-0.2 -- maybe mid next week

> On Sun, 7 Apr 2002 16:23:27 -0400 eric wrote:
>
> eric> Can you send me the trace of reported weave errors?
>
> Attached is a complete test log. This is on RedHat-7.0 + gcc + python:
> ,----
> | Reading specs from /usr/local/lib/gcc-lib/i686-pc-linux-gnu/3.0.4/specs
> | Configured with: ../gcc-3.0.4/configure --enable-threads=posix --enable-nls --with-system-zlib --enable-languages=c++,f77,objc
> | Thread model: posix
> | gcc version 3.0.4
> `----
> ,----
> | Python 2.2.1 (#2, Apr 6 2002, 00:53:20)
> | [GCC 3.0.4] on linux2
> | Type "help", "copyright", "credits" or "license" for more information.
> `----
>
> Greetings,
> Jochen
> --
> University of North Carolina phone: +1-919-962-4403
> Department of Chemistry phone: +1-919-962-1579
> Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041
> Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E

From eric at scipy.org Sun Apr 7 18:45:04 2002 From: eric at scipy.org (eric) Date: Sun, 7 Apr 2002 18:45:04 -0400 Subject: [SciPy-dev] scipy-0.2 -- maybe mid next week References: <012f01c1de71$f83cf620$777ba8c0@ericlaptop> <87pu1brvpm.fsf@jeeves.blindglobe.net> Message-ID: <020b01c1de85$dc055c40$777ba8c0@ericlaptop>

Hmmm. I see. Getting bit by shared libraries.
Looking at system_info's output, I get:

[eric at enthoughtaus1 scipy_distutils]$ python system_info.py
atlas_info:
  FOUND:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib', '/home/eric/lib/atlas']

because there is a liblapack.a (and .so) in /usr/lib, I'm guessing. My atlas installation and my Python installation are local to my user directory. If a user library (or one on the Python prefix path) is found, shouldn't it come first in the library_dirs list? This would give preference to user-installed libraries. I guess site.cfg is another method of doing this, but I think giving preference to user-installed libraries by default is the way to go. After making the change to system_info.py, I get

[eric at enthoughtaus1 scipy_distutils]$ python system_info.py
atlas_info:
  FOUND:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/home/eric/lib/atlas', '/home/eric/lib/atlas']

This solves the problem of the .so libraries. Now I've learned my atlas libs are old on this machine and need to be updated... so I'll fix that and try again.

thanks for your help, eric

----- Original Message ----- From: "A.J. Rossini" To: Sent: Sunday, April 07, 2002 6:27 PM Subject: Re: [SciPy-dev] scipy-0.2 -- maybe mid next week

> >>>>> "eric" == eric writes:
> eric> From:
>
> eric> I do have one problem though:
>
> >>>> import clapack
> eric> Traceback (most recent call last):
> eric> File "", line 1, in ?
> eric> ImportError: ./clapack.so: undefined symbol: clapack_sgesv
>
> eric> This makes no sense because I checked my liblapack.a, and the symbol is defined.
> eric> Anyone else seen this? Removing clapack.so gets everything to work smashingly.
> eric> Anyway, I'll try to track it down.
>
> I sometimes see this when I'm linking with LD_LIBRARY_PATH set during
> compile, to include /usr/lib/atlas, but it isn't set at the shell level.
>
> --
> A.J. Rossini Rsrch. Asst. Prof. of Biostatistics
> U.
of Washington Biostatistics rossini at u.washington.edu
> FHCRC/SCHARP/HIV Vaccine Trials Net rossini at scharp.org
> -------------- http://software.biostat.washington.edu/ ----------------
> FHCRC: M-W: 206-667-7025 (fax=4812)|Voicemail is pretty sketchy/use Email
> UW: Th: 206-543-1044 (fax=3286)|Change last 4 digits of phone to FAX
> (my friday location is usually completely unpredictable.)

From josegomez at gmx.net Mon Apr 8 07:11:35 2002 From: josegomez at gmx.net (=?iso-8859-15?q?Jos=E9=20Luis=20G=F3mez=20Dans?=) Date: Mon, 8 Apr 2002 12:11:35 +0100 Subject: [SciPy-dev] Crash in plt.image() Message-ID: <200204081106.g38B6Dv24844@scipy.org>

Hi, I am starting to use scipy, and I am so far enjoying the ride. However, I have found what I think is a bug. I am using Joe Reinhardt's scipy packages for debian, which identify themselves as:

>>> print scipy.__version__
0.2.0-alpha-18.3045

I am interested in plotting images, using plt.image(). This causes python to crash with a seg-fault. Take the Lena example:

$ python
Python 2.1.2 (#1, Mar 16 2002, 00:56:55)
[GCC 2.95.4 20011002 (Debian prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> from scipy import *
>>> from scipy.plt import *
>>> img = lena()
>>> image(img)
['__copy__', '__deepcopy__', 'astype', 'byteswapped', 'copy', 'iscontiguous', 'itemsize', 'resize', 'savespace', 'spacesaver', 'tolist', 'toscalar', 'tostring', 'typecode']
Segmentation fault

Other methods (such as plt.plot()) work fine, with one little quirk, which could be due to my X Server being misconfigured, as this problem also shows up with other programs (namely, Scilab): if I get plt to plot something (plt.plot(MyArray), let's say), the window pops up, and shows me my graph.
Any subsequent plot on that window will not show up _if the window_ is covered by, say, the terminal window. If I move the plot window around, and cover it with any other window, the window will clear parts of the plot window. The solution (at least for scilab) can be found in (unfortunately, I am processing a large batch of files, and cannot afford to restart X :D). Basically, this has to do with "backing store" and "save unders". Does anyone have any comments on this?

Thank you, Jose

-- José L Gómez Dans PhD student Tel: +44 114 222 5582 Radar & Communications Group FAX: +44 870 132 2990 Department of Electronic Engineering University of Sheffield UK

From steven.robbins at videotron.ca Mon Apr 8 10:11:16 2002 From: steven.robbins at videotron.ca (Steve M. Robbins) Date: Mon, 8 Apr 2002 10:11:16 -0400 Subject: [SciPy-dev] cvs junk Message-ID: <20020408141116.GC3410@nyongwa.montreal.qc.ca>

Hi, Err, these "*~" files aren't supposed to be version-controlled, are they?

steve at riemann{scipy-cvs-upstream} find . -name '*~' | xargs rm
steve at riemann{scipy-cvs-upstream} cvs -z3 update
U fftw/__init__.py~
U optimize/minpack.h~
U special/amos/setup.py~
U xplt/gistCmodule.c~

-S

-- by Rocket to the Moon, by Airplane to the Rocket, by Taxi to the Airport, by Frontdoor to the Taxi, by throwing back the blanket and laying down the legs ...
- They Might Be Giants

From pearu at scipy.org Mon Apr 8 10:07:57 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 8 Apr 2002 09:07:57 -0500 (CDT) Subject: [SciPy-dev] weave errors with python/gcc-3.0.3 and scipy/gcc-2.95.3 Message-ID:

Hi Eric,

Here is my setup:

Python 2.2 (#7, Jan 28 2002, 13:08:12)
[GCC 3.0.3] on linux2

gcc version 2.95.3 20010315 (release)

and I get the following type of messages:

======================================================================
ERROR: check_file_to_py (test_common_spec.test_file_converter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/peterson/opt/lib/python2.2/site-packages/scipy/weave/tests/test_common_spec.py", line 32, in check_file_to_py
    file = inline_tools.inline(code,['file_name'])
  File "/home/peterson/opt/lib/python2.2/site-packages/scipy/weave/inline_tools.py", line 327, in inline
    auto_downcast = auto_downcast,
  File "/home/peterson/opt/lib/python2.2/site-packages/scipy/weave/inline_tools.py", line 432, in compile_function
    exec 'import ' + module_name
  File "", line 1, in ?
ImportError: undefined symbol: __gxx_personality_v0

Any ideas? Notice that Python is compiled with gcc 3.0.3 but scipy with 2.95.3. Could this cause these problems? I will later try to recompile python also with gcc 2.95.3 to see if it is a compiler issue. Or a Python 2.2 issue. Or something else.
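One way to investigate the mismatch Pearu suspects is to compare the compiler recorded in the interpreter with the one that will be used to build extensions. This sketch uses the modern `sysconfig` module (in 2002 the same value lived in `distutils.sysconfig`); the exact strings vary by build, and `CC` can be unset on some platforms:

```python
import platform
import sysconfig

# Compiler that built the interpreter itself, e.g. 'GCC 3.0.3'.
print("interpreter built with:", platform.python_compiler())

# Compiler that will be invoked to build C extensions (may be None).
print("extensions built with: ", sysconfig.get_config_var("CC"))
```

If the two disagree in major version, C++ runtime symbols such as __gxx_personality_v0 (introduced by the gcc 3.x C++ ABI) can end up unresolved, which matches the traceback above.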
Pearu From eric at scipy.org Mon Apr 8 11:03:49 2002 From: eric at scipy.org (eric) Date: Mon, 8 Apr 2002 11:03:49 -0400 Subject: [SciPy-dev] weave errors with python/gcc-3.0.3 and scipy/gcc-2.95.3 References: Message-ID: <02a001c1df0e$96be13a0$777ba8c0@ericlaptop> > > Hi Eric, > > Here is my setup: > > Python 2.2 (#7, Jan 28 2002, 13:08:12) > [GCC 3.0.3] on linux2 > > gcc version 2.95.3 20010315 (release) > > and I get the following type of messages: > > ====================================================================== > ERROR: check_file_to_py (test_common_spec.test_file_converter) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/peterson/opt/lib/python2.2/site-packages/scipy/weave/tests/test_common_sp ec.py", > line 32, in check_file_to_py > file = inline_tools.inline(code,['file_name']) > File > "/home/peterson/opt/lib/python2.2/site-packages/scipy/weave/inline_tools.py", > line 327, in inline > auto_downcast = auto_downcast, > File > "/home/peterson/opt/lib/python2.2/site-packages/scipy/weave/inline_tools.py", > line 432, in compile_function > exec 'import ' + module_name > File "", line 1, in ? > ImportError: undefined symbol: __gxx_personality_v0 > ???. Never seen anything like this before. Is that symbol defined in libpython.a? If so, then I'm betting it came from 3.0.3, and the 2.95.3 compiler is not picking up the correct library. Have you tried building another extension (say something from scipy) with 2.95.3? If that works with your gcc 3.x python, then we'll have to start looking through weave. It could also have something to do with g++ vs. gcc issues in the 2.x to 3.x transition. I don't think it is a python22 issue. It is starting to look like the gcc 2.x to gcc 3.x transition is a big can of worms for building extensions on the fly... We may have to start detecting the type of compiler Python was built with. 
Distutils is *supposed* to do this, but looking at make files can't detect when Pearu changes gcc out from under it. ;-) eric From tjlahey at mud.cgl.uwaterloo.ca Mon Apr 8 14:06:41 2002 From: tjlahey at mud.cgl.uwaterloo.ca (Tim Lahey) Date: Mon, 8 Apr 2002 14:06:41 -0400 (EDT) Subject: [SciPy-dev] Compilation status on Solaris 8 Message-ID: Hi, With the various changes in CVS, I've finally got to the compilation stage on Solaris 8. I run into the following error if I compile with cc (Sun Workshop): (Modified for email formatting) building 'cephes' library cc -DNDEBUG -O -c /u/tjlahey/devel/scipy/special/cephes/kolmogorov.c -o build/temp.solaris-2.8-sun4u-2.2/kolmogorov.o "scipy/special/cephes/mconf.h", line 114: missing operator "scipy/special/cephes/protos.h", line 41: syntax error before or at: / "scipy/special/cephes/kolmogorov.c", line 40: warning: division by 0 "scipy/special/cephes/kolmogorov.c", line 146: cannot recover from previous errors if I just try to compile with gcc: gcc -DNDEBUG -O -c scipy/special/cephes/kolmogorov.c -o build/temp.solaris-2.8-sun4u-2.2/kolmogorov.o In file included from scipy/special/cephes/kolmogorov.c:26: /u/tjlahey/devel/scipy/special/cephes/mconf.h:122: parse error In both cases the mconf error is the same error just reported differently. This code is checking for various defines to determine if it is big-endian. Likely why it hasn't been found before. The protos.h error is due to a C++ style comment in a C file (// vs. /**/) as the Sun compiler is a C compiler (one uses c++ to compile C++ code). So this is a result of a picky compiler (but one that adheres to the standard). Suggestions ? Thanks, Tim. From pearu at cens.ioc.ee Mon Apr 8 14:29:59 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 8 Apr 2002 21:29:59 +0300 (EEST) Subject: [SciPy-dev] Compilation status on Solaris 8 In-Reply-To: Message-ID: On Mon, 8 Apr 2002, Tim Lahey wrote: > The protos.h error is due to a C++ style comment in a C file (// vs. 
/**/) > as the Sun compiler is a C compiler (one uses c++ to compile C++ code). > So this is a result of a picky compiler (but one that adheres to the > standard). > > Suggestions ? Fixed in CVS. Pearu From pearu at scipy.org Mon Apr 8 15:23:24 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 8 Apr 2002 14:23:24 -0500 (CDT) Subject: [SciPy-dev] weave errors with python/gcc-3.0.3 and scipy/gcc-2.95.3 In-Reply-To: <02a001c1df0e$96be13a0$777ba8c0@ericlaptop> Message-ID: On Mon, 8 Apr 2002, eric wrote: > > ImportError: undefined symbol: __gxx_personality_v0 > > > > ???. Never seen anything like this before. Is that symbol defined in > libpython.a? If so, then I'm betting it came from 3.0.3, and the 2.95.3 > compiler is not picking up the correct library. This symbol __gxx_personality_v0 is defined in libsupc++ that comes with gcc-3.x. > Have you tried building another extension (say something from scipy) with > 2.95.3? If that works with your gcc 3.x python, then we'll have to start > looking through weave. It could also have something to do with g++ vs. gcc > issues in the 2.x to 3.x transition. I don't think it is a python22 issue. All other (non C++) extensions work fine with various compilers like 2.95.2, 2.95.2.1, 2.95.3. I have spent half a day trying to downgrade gcc 3.x to gcc 2.95.x without success. I think I have tried everything except reinstalling the whole Suse distribution (unfortunately I don't have root access for this :(), but from somewhere this symbol sneaks in when compiling/linking C++ stuff. It seems that I am stuck with gcc-3.x. Not sure if this is good or bad, but certainly inconvenient right now: cannot test weave on this fast machine with enough memory...
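One thing that helps untangle this kind of gcc 2.95.x/3.x mixing is checking which compiler a given Python binary was actually built with. A minimal sketch using only the standard library (sysconfig here is the modern home of what distutils.sysconfig exposed at the time):

```python
import platform
import sysconfig

# The compiler string recorded when this Python was built, e.g. "GCC 3.0.3".
# Comparing it against `gcc --version` shows whether the interpreter and the
# extension compiler match.
print(platform.python_compiler())

# CC as recorded in Python's build configuration (may be None on some builds).
print(sysconfig.get_config_var("CC"))
```

If the two disagree, C++ extensions are the first place trouble shows up, since the C++ runtime libraries changed between gcc 2.x and 3.x.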
Pearu From Chuck.Harris at sdl.usu.edu Mon Apr 8 16:31:56 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Mon, 8 Apr 2002 14:31:56 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filter design, zero finding Message-ID: Hi All, I've written a number of python routines over the past half year for my own use, and wonder if it might be appropriate to include some of them in scipy. They break down into the general categories: Number Theory: These were used for analysing arrays of antennas used in radar interferometry. They are also useful in integer programming, cryptography, and computational algebra. Reduction of a matrix to Hermite normal form Reduction of a matrix to Smith normal form LLL basis reduction LLL basis reduction - deep version Gram-Schmidt orthogonalization Filter Design: Routines used for designing Complex Hermitian digital filters: Remez exchange algorithm - for arbitrary Chebychev systems. Zero finders: General use and fun. Bisection best for some special cases, Ridder is middle of the pack, Brent is generally best, with the two versions basically a wash, although the hyperbolic version is simpler. Whether or not there is any virtue to these as opposed to solve, I don't know. Bisection Illinois version of regula falsi Ridder Brent method with hyperbolic interpolation Brent method with inverse quadratic interpolation Genetic algorithm: Used in digital filter design to optimize for coefficient truncation error. I looked at galib and found it easier to roll my own, but didn't try for any great generality. I think it would be good to include uniform crossover and to pull the fitness function out of the genome --- in a tournament, fitness can depend on the population. Perhaps it can all be made simpler. P.S. The formatting page at scipy.org disagrees with the Python PEPs -- Python.org suggests CapWords for class names, not lowercase with underscores.
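For illustration, two of the one-dimensional zero finders in the list above, bisection and the Illinois variant of regula falsi, might be sketched like this (hypothetical implementations written for this post, not Chuck's actual routines):

```python
def bisect(f, a, b, tol=1e-12, maxiter=200):
    """Plain bisection: halve the bracket [a, b] until it is tol wide."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    for _ in range(maxiter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def illinois(f, a, b, tol=1e-12, maxiter=200):
    """Illinois variant of regula falsi: when the same endpoint is kept
    twice in a row, halve its stored function value, which keeps the
    secant steps from stalling on one side of the bracket."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    side = 0
    x = a
    for _ in range(maxiter):
        x = (a * fb - b * fa) / (fb - fa)   # secant step through the bracket
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx
            if side == -1:
                fa *= 0.5
            side = -1
        else:
            a, fa = x, fx
            if side == 1:
                fb *= 0.5
            side = 1
    return x

# Both should find sqrt(2) as the root of x**2 - 2 on [1, 2].
for solver in (bisect, illinois):
    print(solver.__name__, solver(lambda x: x * x - 2.0, 1.0, 2.0))
```

Bisection is guaranteed to converge but only linearly; the Illinois halving trick is what restores superlinear convergence when ordinary regula falsi would keep one endpoint fixed.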
Chuck From oliphant.travis at ieee.org Mon Apr 8 17:45:44 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 08 Apr 2002 15:45:44 -0600 Subject: [SciPy-dev] Segfault problems on Mandrake 8.2 -- Python2.2 In-Reply-To: References: Message-ID: <1018302345.24077.6.camel@travis> I've had a broken SciPy installation for the past couple of days due to a pretty obnoxious bug. I'm not sure which package is to blame (could be the Mandrake 8.2 toolchain???) But, I've finally isolated the problem to an -fPIC flag given when compiling _minpackmodule.c. If I don't add this flag and compile the module manually then I get a module that loads and seems to work fine. The default SciPy install adds this flag and results in a module that segfaults when Python tries to open it. I have no idea what is going on, and at this point would much rather get back to doing useful work. My question to those who designed scipy_distutils is: can I specify whether or not to include this -fPIC flag, and how? I would appreciate any information anyone has about this problem, and how best to fix it. Thanks, Travis From pearu at scipy.org Mon Apr 8 17:56:53 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 8 Apr 2002 16:56:53 -0500 (CDT) Subject: [SciPy-dev] Segfault problems on Mandrake 8.2 -- Python2.2 In-Reply-To: <1018302345.24077.6.camel@travis> Message-ID: On 8 Apr 2002, Travis Oliphant wrote: > I've had a broken SciPy installation for the past couple of days due to > a pretty obnoxious bug. I'm not sure which package is to blame (could > be the Mandrake 8.2 toolchain??? > > But, I've finally isolated the problem to an -fPIC flag given when > compiling _minpackmodule.c Is minpack the only module with this problem? If you disable it, will scipy load without segfault? Can you send the output when the minpack extension is built? > If I don't add this flag and compile the module manually then I get a > module that loads and seems to work fine.
The default SciPy install > adds this flag and results in a module that segfaults when Python tries > to open it. This flag is added by Python distutils. > I have no idea what is going on, and at this point would much rather get > back to doing useful work. > > My question to those who designed the scipy_distutils is (can I specify > whether or not to include this -fPIC flag and how?) This flag -fPIC is generally needed for shared objects. In principle, it is possible to change this flag using similar hack as weave uses for changing gcc to g++ in LDSHARED (see weave build_tools.py). -fPIC is defined by CCSHARED variable, I believe. Now I see that scipy_distutils uses -fpic instead of -fPIC when compiling Fortran sources. I am not sure if it matters but you can try changing -fpic to -fPIC in scipy_distutils/command/build_flib.py (look for gnu compiler switches). Pearu From oliphant.travis at ieee.org Mon Apr 8 18:30:26 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 08 Apr 2002 16:30:26 -0600 Subject: [SciPy-dev] Segfault problems on Mandrake 8.2 -- Python2.2 In-Reply-To: References: Message-ID: <1018305028.29265.4.camel@travis> On Mon, 2002-04-08 at 15:56, pearu at scipy.org wrote: > > > On 8 Apr 2002, Travis Oliphant wrote: > > > I've had a broken SciPy installation for the past couple of days due to > > a pretty obnoxious bug. I'm not sure which package is to blame (could > > be the Mandrake 8.2 toolchain??? > > > > But, I've finally isolated the problem to an -fPIC flag given when > > compiling _minpackmodule.c > > Is minpack the only module with this problem? If you disable it, will > scipy load without segfault? Can you send the output when minpack > extension is build? Yes, when I disabled minpack it loaded. I could repeatedly compile manually the minpack module with and without -fPIC and get segfaults on import with -fPIC and normal operation without it. 
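The CCSHARED variable Pearu mentions, which is where -fPIC comes from, can be inspected directly from the interpreter. A small sketch using the modern sysconfig module (which exposes the same build variables distutils.sysconfig did then):

```python
import sysconfig

# CCSHARED holds the extra C flags distutils adds when compiling objects
# destined for a shared library -- typically "-fPIC" on Linux.  It can be
# empty or None on platforms that need no such flag.
print(repr(sysconfig.get_config_var("CCSHARED")))
```

Editing the Makefile in Python's config directory, as described below, changes the value this lookup reports for every subsequent distutils build.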
> > > If I don't add this flag and compile the module manually then I get a > > module that loads and seems to work fine. The default SciPy install > > adds this flag and results in a module that segfaults when Python tries > > to open it. > > This flag is added by Python distutils. I went to the config directory and modifed the Makefile to remove the -fPIC flag from CCSHARED. Now, I can just build using the setup script and not get the normal segfault. However, I'm still getting a segfault on a level 10 test (during weave?) Here is the output I'm getting. test printing a value:2 ../home/travis/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp: In function `PyObject *compiled_func (PyObject *, PyObject *)': /home/travis/.python22_compiled/sc_9a25bc84add18fe6c75501f6b01bd84e1.cpp:418: no match for `Py::String & < int' /usr/lib/python2.2/site-packages/scipy/weave/CXX/Objects.hxx:390: candidates are: bool Py::Object::operator< (const Py::Object &) const /usr/lib/python2.2/site-packages/scipy/weave/CXX/Objects.hxx:1433: bool Py::operator< (const Py::SeqBase::const_iterator &, const Py::SeqBase::const_iterator &) /usr/lib/python2.2/site-packages/scipy/weave/CXX/Objects.hxx:1426: bool Py::operator< (const Py::SeqBase::iterator &, const Py::SeqBase::iterator &) ............................................Segmentation fault > > This flag -fPIC is generally needed for shared objects. In principle, it > is possible to change this flag using similar hack as weave uses for > changing gcc to g++ in LDSHARED (see weave build_tools.py). -fPIC is > defined by CCSHARED variable, I believe. I know I've compiled extensions successfully without -fPIC using gcc. I just did and I've done it previously. > > Now I see that scipy_distutils uses -fpic instead of -fPIC when compiling > Fortran sources. I am not sure if it matters but you can try changing > -fpic to -fPIC in scipy_distutils/command/build_flib.py (look for gnu > compiler switches). 
I'm going to change this to see if it has any effect on the remaining segfault, but I don't suspect it will. Thanks for your help. I'm thinking of just switching to a different distribution.... -Travis From eric at scipy.org Mon Apr 8 17:41:30 2002 From: eric at scipy.org (eric) Date: Mon, 8 Apr 2002 17:41:30 -0400 Subject: [SciPy-dev] flbas.dotu fails on windows. Message-ID: <03aa01c1df46$257a95a0$777ba8c0@ericlaptop> Python 2.1 and 2.2 both fail the cdotu test for blas. Does this happen on other platforms? Oh, by the way 319 tests (currently) pass. (test level 0) eric ====================================================================== FAIL: check_dot (test_blas.test_blas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python21\scipy\linalg\tests\test_blas.py", line 60, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "C:\Python21\scipy_base\testing.py", line 282, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (1.74127589255e-039+9.02295408555e-038j) ---------------------------------------------------------------------- Ran 320 tests in 3.565s -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 From pearu at scipy.org Mon Apr 8 18:54:09 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 8 Apr 2002 17:54:09 -0500 (CDT) Subject: [SciPy-dev] flbas.dotu fails on windows. In-Reply-To: <03aa01c1df46$257a95a0$777ba8c0@ericlaptop> Message-ID: On Mon, 8 Apr 2002, eric wrote: > Python 2.1 and 2.2 both fail the cdotu test for blas. Does this happen on other > platforms? I have seen this also. But only with gcc-3 and atlas 3.3.13 (Suse). With gcc-2.95.x and atlas 3.3.14 dotu succeeds (Suse). Note that this failure happens for single precision, double seems to work in all cases. 
I have little idea what is failing (atlas?, f2py?, compiler?, returning complex?). More success/failure reports on different platforms and compiler/atlas combinations would be useful. On debian, gcc-2.95.4, atlas 3.3.13 cdotu works correctly. Pearu From eric at scipy.org Mon Apr 8 18:18:55 2002 From: eric at scipy.org (eric) Date: Mon, 8 Apr 2002 18:18:55 -0400 Subject: [SciPy-dev] flbas.dotu fails on windows. References: Message-ID: <03b401c1df4b$6184a9f0$777ba8c0@ericlaptop> ----- Original Message ----- From: To: Sent: Monday, April 08, 2002 6:54 PM Subject: Re: [SciPy-dev] flbas.dotu fails on windows. > > > On Mon, 8 Apr 2002, eric wrote: > > > Python 2.1 and 2.2 both fail the cdotu test for blas. Does this happen on other > > platforms? > > I have seen this also. But only with gcc-3 and atlas 3.3.13 (Suse). With > gcc-2.95.x and atlas 3.3.14 dotu succeeds (Suse). I am using 2.95.3 with atlas 3.3.13. I'll upgrade my ATLAS overnight tonight. (sigh.) > Note that this failure > happens for single precision, double seems to work in all cases. You are right. The double version worked here also. > I have > little idea what is failing (atlas?,f2py?,compiler?,returning complex?). > More success/failure reports on different platforms and compiler/atlas > combinations would be useful. On debian, gcc-2.95.4, atlas 3.3.13 cdotu > works correctly. I'll try on the RH 7.1 system with a new ATLAS 3.3.14 install tomorrow. This doesn't sound like a wrapper problem though. It may be a bug in ATLAS??? -- or just the permutations of tools we are using??? Either way, hopefully there is a fix for ATLAS that works across these tools. If all else fails, we can put a test of cdotu in the code, and if it fails, disable it for this release.
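The fallback described above (test cdotu at runtime and disable it on failure) could be sketched along these lines, in pure Python and with the same vectors as the failing unit test; the actual fblas call is left out since the wrapped module layout varied:

```python
def dotu_reference(x, y):
    # Unconjugated complex dot product, which is what cdotu/zdotu compute.
    return sum(a * b for a, b in zip(x, y))

# Vectors from the failing test_blas check; the correct value is -9+2j,
# while the broken single-precision cdotu returned denormal garbage.
x = [3j, -4, 3 - 4j]
y = [2, 3, 1]
assert abs(dotu_reference(x, y) - (-9 + 2j)) < 1e-12

# A real runtime check would compare the wrapped cdotu(x, y) against this
# reference and fall back to a Python implementation on mismatch.
```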
eric > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Tue Apr 9 05:24:21 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Tue, 9 Apr 2002 04:24:21 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filter design, zero finding In-Reply-To: Message-ID: Hi, On Mon, 8 Apr 2002, Chuck Harris wrote: > I've written a number of python routines over the past half year for > my own use, and wonder if it might be appropriate to include some of > them in scipy. They break down into the general categories: I would suggest that you first make them modules so that we can look at them and decide whether they can be included in SciPy and how. Since SciPy itself is quite short on documentation and unit testing (fixing this has a high priority in forthcoming SciPy development), I would expect that any new module to be considered for inclusion in SciPy should be (more or less) fully documented and have a (more or less) complete testing suite. > Number Theory : These were used for analysing arrays of antennas used > in radar interferometry. They are also useful in integer programming, > cryptography, and computational algebra. > > Reduction of a matrix to Hermite normal form > Reduction of a matrix to Smith normal form > LLL basis reduction > LLL basis reduction - deep version > Gram-Schmidt orthogonalization I am not sure where these should go when considering the current scipy state. As you mention, they are parts of different fields from what we have in scipy now. I think they should be parts of the corresponding packages that may not exist as of yet. Personally, I am very interested in CA stuff. > Filter Design: Routines used for designing Complex Hermitian digital > filters : > > Remez exchange algorithm - for arbitrary Chebychev systems. Would signal be an appropriate place for this? > Zero finders: General use and fun.
Bisection best for some special > cases, Ridder is middle of the pack, Brent is generally best, with the > two versions basically a wash, although the hyperbolic version is > simpler. Whether or not there is any virtue to these as opposed to > solve, I don't know. > > Bisection > Illinois version of regula falsi > Ridder > Brent method with hyperbolic interpolation > Brent method with inverse quadratic interpolation Can you compare these zero finders with ones in scipy? Performance? Robustness to initial conditions? Etc. Are they any better? > Genetic algorithm: Used in digital filter design to optimize for > coefficient truncation error. I looked at galib and found it easier to > roll my own, but didn't try for any great generality. I think it would > be good to include uniform crossover and to pull the fitness function > out of the genome --- in a tournament, fitness can depend on the > population. Perhaps it can all be made simpler. galib seems to have been developed for more than 6 years and I would expect it to be rather mature, though I have not used it myself. Maybe a wrapper to such a library would be more appropriate for the longer term. Though the licence may be an issue, galib seems to be GPL compatible. Pearu From Chuck.Harris at sdl.usu.edu Tue Apr 9 12:14:27 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Tue, 9 Apr 2002 10:14:27 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zero finding Message-ID: Hi, > From: pearu at scipy.org [mailto:pearu at scipy.org] > Sent: Tuesday, April 09, 2002 3:24 AM > > Hi, > > On Mon, 8 Apr 2002, Chuck Harris wrote: > > > I've written a number of python routines over the past half year for > > my own use, and wonder if it might be appropriate to include some of > > them in scipy. They break down into the general categories: > > I would suggest that you first make them modules so that we > can look at > them whether they can be included to SciPy and how.
Since > SciPy itself is > quite short from documentation and unit testing (fixing this > has a high > priority level in forthcoming SciPy development) I would > expect that any > new module to be considered for inclusion to SciPy should be (more or > less) fully documented and has a (more or less) complete testing site. > Good enough. Would packaging things up with distutils be a good way to go? Also, are there any documentation guidelines? I assume that more might be wanted than goes in """...""". For testing, I'm guessing that there might be two kinds: first, to check that the routines operate correctly on a given platform and second, to check that the routines do what they are supposed to do. Are there any testing guidelines? > > Number Theory : These were used for analysing arrays of > antennas used > > in radar interferometry. They are also useful in integer > programming, > > cryptography, and computational algebra. > > > > Reduction of a matrix to Hermite normal form > > Reduction of a matrix to Smith normal form > > LLL basis reduction > > LLL basis reduction - deep version > > Gram-Schmidt orthogonalization > > I am not sure where these should go when considering the current scipy > state. As you mention, they are parts of different fields from what we > have in scipy now. I think they should be parts of the corresponding > packages that may not exist as of yet. Personally, I am very > interested in > CA stuff. These routines really need exact rational arithmetic in the general case. The floating point versions here just happen to be 'good enough' in many situations. Are there any plans to bring exact arithmetic into scipy or numpy? > > > Filter Design: Routines used for designing Complex Hermitian digital > > filters : > > > > Remez exchange algorithm - for arbitrary Chebychev systems. > > Would signal be an appropiate place for this? > Sounds right. > > Zero finders: General use and fun. 
Bisection best for some special > > cases, Ridder is middle of the pack, Brent is generally > best, with the > > two versions basically a wash, although the hyperbolic version is > > simpler. Whether or not there is any virtue to these as opposed to > > solve, I don't know. > > > > Bisection > > Illinois version of regula falsi > > Ridder > > Brent method with hyperbolic interpolation > > Brent method with inverse quadratic interpolation > > Can you compare these zero finders with ones in > scipy? Performance? Robustness to initial conditions? Etc. > Are they any > better? > I took a quick look at the Fortran code for the version of solve in the minimization package. It seems to be a multidimensional form of Newton's method, really the only way to go in higher dimensions unless the function is holomorphic. The routines here are for the one-dimensional case and should be more robust in this situation. If anyone has a more general idea of what goes on in the current zero finder, I would like to hear about it. > > Genetic algorithm: Used in digital filter design to optimize for > > coefficient truncation error. I looked at galib and found > it easier to > > roll my own, but didn't try for any great generality. I > think it would > > be good to include uniform crossover and to pull the > fitness function > > out of the genome --- in a tournament, fitness can depend on the > > population. Perhaps it can all be made simpler. > > galib seems to be developed more than 6 years and I would > expect it to be > rather mature, though, I have not used it myself. Maybe a > wrapper to such a library would be more appropriate for a longer term. > Though the licence may be an issue, galib seems to be GPL compatible. > > Pearu > I was sort of hoping for a ga guru to comment. Perhaps all that is really needed here is good documentation and a bit of formatting cleanup.
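The two design points raised here, uniform crossover and a fitness function pulled out of the genome so it can see the whole population, are easy to sketch. The following toy one-max GA is purely illustrative (none of these names come from galib or Chuck's code):

```python
import random

def uniform_crossover(p1, p2, rng):
    """Each gene is taken from one parent or the other with probability 1/2."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

def tournament(pop, fitness, rng, k=3):
    """Return the fittest of k randomly chosen genomes.  The fitness
    function receives the whole population, so it can be relative."""
    return max(rng.sample(pop, k), key=lambda g: fitness(g, pop))

def evolve(fitness, n_genes=20, pop_size=40, generations=60,
           mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            child = uniform_crossover(tournament(pop, fitness, rng),
                                      tournament(pop, fitness, rng), rng)
            nxt.append([g ^ 1 if rng.random() < mutation_rate else g
                        for g in child])
        pop = nxt
    return max(pop, key=lambda g: fitness(g, pop))

# Toy one-max problem: fitness counts 1 bits and ignores the population,
# but the signature leaves room for population-dependent fitness.
best = evolve(lambda g, pop: sum(g))
print(sum(best), "of", len(best), "bits set")
```

Because fitness receives the population, a relative measure (say, rank against the current cohort) drops in without changing the tournament code at all.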
Chuck > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From prabhu at aero.iitm.ernet.in Tue Apr 9 12:51:44 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Tue, 9 Apr 2002 22:21:44 +0530 Subject: [SciPy-dev] Crash in plt.image() In-Reply-To: <200204081106.g38B6Dv24844@scipy.org> References: <200204081106.g38B6Dv24844@scipy.org> Message-ID: <15539.7200.705077.450285@monster.linux.in> hi, >>>>> "Jose" == josegomez writes: [snip] Jose> I am interested in plotting images, using Jose> plt.image(). This causes python to crash with a Jose> seg-fault. Take the Lena example: $ python Python 2.1.2 (#1, Jose> Mar 16 2002, 00:56:55) [GCC 2.95.4 20011002 (Debian Jose> prerelease)] on linux2 Type "copyright", "credits" or Jose> "license" for more information. [snip] Jose> Other methods (such as plt.plot()) work fine, with one Jose> little quirk, which could be due to my X Server being Jose> misconfigured, as this problem also shows up with other Jose> programs (namely, Scilab): if I get plt to plot something [snip] Well, I think the problem arises because you did not start gui_thread before you started to use the plt module. What you should do is something like this: $ python >>> import gui_thread >>> >>> from scipy import plt >>> plt.plot([1,4,9,16]) ... Hope this helps, prabhu From tjlahey at mud.cgl.uwaterloo.ca Tue Apr 9 13:41:02 2002 From: tjlahey at mud.cgl.uwaterloo.ca (Tim Lahey) Date: Tue, 9 Apr 2002 13:41:02 -0400 (EDT) Subject: [SciPy-dev] More Solaris compilation of Scipy (SOLVED)! In-Reply-To: <200204091611.g39GB1v00767@scipy.org> Message-ID: Hi, I've discovered the problem with mconf.h/mconf_BE.h which is on line 120: defined(__hp9000s700) || defined(__AIX) || defined(_AIX) \ should be: defined(__hp9000s700) || defined(__AIX) || defined(_AIX) || \ So, could someone fix this in CVS ? Thanks, Tim. P.S. 
I'm waiting to see how the rest of the compilation goes. From pearu at scipy.org Tue Apr 9 13:39:35 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Tue, 9 Apr 2002 12:39:35 -0500 (CDT) Subject: [SciPy-dev] More Solaris compilation of Scipy (SOLVED)! In-Reply-To: Message-ID: On Tue, 9 Apr 2002, Tim Lahey wrote: > I've discovered the problem with mconf.h/mconf_BE.h > which is on line 120: > > defined(__hp9000s700) || defined(__AIX) || defined(_AIX) \ > > should be: > > defined(__hp9000s700) || defined(__AIX) || defined(_AIX) || \ > > So, could someone fix this in CVS ? Fixed. Thanks. Pearu From pearu at scipy.org Tue Apr 9 15:04:00 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Tue, 9 Apr 2002 14:04:00 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zero finding In-Reply-To: Message-ID: Hi, On Tue, 9 Apr 2002, Chuck Harris wrote: > Good enough. Would packaging things up with distutils be a good way to > go? Yes. In fact it is the only way to go. However, you may want to look how packaging is done in scipy for its different submodules. Scipy uses scipy_distutils that is derived from distutils. For more information, see http://www.scipy.net/pipermail/scipy-dev/2002-January/000162.html > Also, are there any documentation guidelines? I assume that more > might be wanted than goes in """...""". Not that I know of. We need to work out these guidelines. I think we should also consider PEP 287 while doing this: http://www.python.org/peps/pep-0287.html > For testing, I'm guessing that there might be two kinds: first, to > check that the routines operate correctly on a given platform and > second, to check that the routines do what they are supposed to do. I don't see the difference. > Are there any testing guidelines? Yes, there are. 
See http://www.scipy.org/site_content/tutorials/testing_guidelines These guidelines hold for the most part, except that the scipy_test module part has now moved to scipy_base/testing.py. And note that all guidelines are a bit outdated due to the fast development of scipy. If something seems to be inconsistent or is not working according to these guidelines then the best place to look is the source of the scipy modules. And of course, scipy-dev can be helpful. > > > Number Theory : These were used for analysing arrays of > > antennas used > > > in radar interferometry. They are also useful in integer > > programming, > > > cryptography, and computational algebra. > > > > > > Reduction of a matrix to Hermite normal form > > > Reduction of a matrix to Smith normal form > > > LLL basis reduction > > > LLL basis reduction - deep version > > > Gram-Schmidt orthogonalization > > > > I am not sure where these should go when considering the current scipy > > state. As you mention, they are parts of different fields from what we > > have in scipy now. I think they should be parts of the corresponding > > packages that may not exist as of yet. Personally, I am very > > interested in > > CA stuff. > > These routines really need exact rational arithmetic in the general > case. The floating point versions here just happen to be 'good enough' > in many situations. Are there any plans to bring exact arithmetic into > scipy or numpy? I don't think that numpy is a proper place for this. And I don't think that everything useful should be under one umbrella such as SciPy. What I think is that Computational Algebra in Python should be a separate project. I have been experimenting with CA for Python for almost two years, trying out various approaches. Using Python 2.2 one can do serious symbol manipulations but a pure Python implementation seems to be impractical - Python is too slow. Currently the most appropriate CA library seems to be GiNaC (www.ginac.de) to be wrapped to Python.
I have already done that but the project is currently frozen until Boost.Python V2 becomes usable so that Python 2.2's new features can be fully exploited. But if only exact arithmetic is needed for your modules then gmpy is the best thing to consider because it wraps a very fast GMP library and it can be made available also for Windows platforms. > > > Zero finders: General use and fun. Bisection best for some special > I took a quick look at the Fortran code for the version of solve in > the minimization package. It seems to be a multidimensional form of > Newton's method, really the only way to go in higher dimensions unless > the function is holomorphic. The routines here are for the one > dimensional case and should be more robust in this situation. If any > one has a more general idea of what goes on in the current zero > finder, I would like to hear about it. Some comments about Powell's hybrid method used by minpack/hybrd.f can be found, for example, in http://www.empicasso.com/techdocs/p14_2.pdf > > > Genetic algorithm: Used in digital filter design to optimize for > > > coefficient truncation error. I looked at galib and found > > it easier to > > > roll my own, but didn't try for any great generality. I > > think it would > > > be good to include uniform crossover and to pull the > > fitness function > > > out of the genome --- in a tournament, fitness can depend on the > > > population. Perhaps it can all be made simpler. > > > > galib seems to be developed more than 6 years and I would > > expect it to be > > rather mature, though, I have not used it myself. Maybe a > > wrapper to such a library would be more appropriate for a longer term. > > Though the licence may be an issue, galib seems to be GPL compatible. > I was sort of hoping for a ga guru to comment. OK. Do we have one in this list?
I made my comment from the SciPy point of view: in addition to possible license issues, we must be careful when including new software in scipy. It must be the most efficient available and it must be actively maintained, preferably by specialists in the field, now and in the future. Pearu From tjlahey at mud.cgl.uwaterloo.ca Tue Apr 9 17:12:56 2002 From: tjlahey at mud.cgl.uwaterloo.ca (Tim Lahey) Date: Tue, 9 Apr 2002 17:12:56 -0400 (EDT) Subject: [SciPy-dev] Re: More on Solaris compilation In-Reply-To: <200204091701.g39H12v01338@scipy.org> Message-ID: Hi, I can manage to get everything to compile, but upon linking to get fblas, I have problems. Basically, g77 -shared doesn't seem to work, but if I do f77 -G (Sun compiler & flag) it will link. How can I get scipy to ignore g77 and use Sun's f77 instead (with the right flag)? I actually get the following from scipy: replacing linker_so ['cc', '-G'] with ['g77', '-shared'] If it didn't do that, the link would work. Any thoughts? Suggestions? Cheers, Tim. From eric at scipy.org Tue Apr 9 16:09:56 2002 From: eric at scipy.org (eric) Date: Tue, 9 Apr 2002 16:09:56 -0400 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zero finding References: Message-ID: <000d01c1e002$84d73fa0$6b01a8c0@ericlaptop> > > Hi, > > On Tue, 9 Apr 2002, Chuck Harris wrote: > > > Good enough. Would packaging things up with distutils be a good way to > > go? > > Yes. In fact it is the only way to go. However, you may want to look at how > packaging is done in scipy for its different submodules. Scipy uses > scipy_distutils, which is derived from distutils. For more information, see > > http://www.scipy.net/pipermail/scipy-dev/2002-January/000162.html If you set things up in this format, it'd be great. However, it should be little work for one of us to convert a working standard setup.py file. > > > Also, are there any documentation guidelines? I assume that more > > might be wanted than goes in """...""". > > Not that I know of.
We need to work out these guidelines. I think we > should also consider PEP 287 while doing this: > > http://www.python.org/peps/pep-0287.html > I very much agree with this. There are perhaps some shortcomings in this format (some extra vertical whitespace), but otherwise, it looks well thought out. We struggled some with a standard format in the beginning, and never really came up with one. Initially we wanted to specify a reasonably long format for doc-strings. Something like (slightly modified from a current doc string in scipy):

    def vq(obs, code_book):
        """ Vector Quantization: assign feature sets to codes in a code book.

            Description:
                Vector quantization determines which code in the code book
                best represents an observation of a target.  The features of
                each observation are compared to each code in the book, and
                assigned the one closest to it.  The observations are
                contained in the obs array.  These features should be
                "whitened," or normalized by the standard deviation of all
                the features before being quantized.  The code book can be
                created using the kmeans algorithm or something similar.

            Arguments:
                obs -- 2D array.  Each row of the array is an observation.
                    The columns are the "features" seen during each
                    observation.  The features must be whitened first using
                    the whiten function or something equivalent.
                code_book -- 2D array.  The code book is usually generated
                    using the kmeans algorithm.  Each row of the array holds
                    a different code, and the columns are the features of
                    the code.

                                #   c0   c1   c2   c3
                    code_book = [[  1.,  2.,  3.,  4.],  #f0
                                 [  1.,  2.,  3.,  4.],  #f1
                                 [  1.,  2.,  3.,  4.]]  #f2

            Outputs:
                code -- 1D array.  If obs is a NxM array, then a length N
                    array is returned that holds the selected code book
                    index for each observation.
                dist -- 1D array.  The distortion (distance) between the
                    observation and its nearest code.

            Caveats:
                This currently forces 32 bit math precision for speed.
                Anyone know of a situation where this undermines the
                accuracy of the algorithm?
            Example:
                >>> code_book = array([[1.,1.,1.],
                ...                    [2.,2.,2.]])
                >>> features = array([[ 1.9,2.3,1.7],
                ...                   [ 1.5,2.5,2.2],
                ...                   [ 0.8,0.6,1.7]])
                >>> vq(features,code_book)
                (array([1, 1, 0],'i'), array([ 0.43588989, 0.73484692, 0.83066239]))
        """

This is great if you can do it (although this description could use some help...). However, I noticed when I was writing docs that I much preferred to write (and was more likely to write...) a narrative form with a short description and then pretty much everything else lumped into a paragraph or two. Judging by everyone else's docstrings in SciPy, they like to write the narrative form better also. All that said, don't let me discourage anyone from using something like the above format (converted to be reStructuredText compatible). I think the users will appreciate it. It's very nice to type:

    >>> from scipy.cluster.vq import vq
    >>> help(vq)

and get a description, easy to read input/output information, and an example of use. I really wish we had more examples in functions. These should be in doctest format so that examples can be automatically tested for correctness. Would anyone be against adopting PEP 287 as the standard for docstrings in SciPy? happydoc and other tools are gonna support it, so we'll be able to generate a reference manual with relative ease (eventually). I definitely don't want to invent some new markup for this project that we have to maintain. There is more than enough maintenance here already, thank you. Oh, one other thing. I prefer the following indentation for doc-strings:

    def foo():
        """ initial description

            more description
        """

Instead of:

    def foo():
        """initial description

        more description
        """

I think it looks a little better, and has the added benefit that Scintilla-based editors can fold the comments up (which it does based on indentation).
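The point about doctest-format examples being automatically testable can be made concrete; a minimal sketch (the function and its name here are made up for illustration, not part of scipy):

```python
import doctest

def whiten1(feature, std):
    """Scale one feature by its standard deviation ("whitening").

    >>> whiten1(4.0, 2.0)
    2.0
    """
    return feature / std

if __name__ == "__main__":
    # Finds every >>> example in this module's docstrings, runs it, and
    # complains if the actual output differs from the output shown.
    doctest.testmod()
```

Running the module directly reports nothing when all examples pass, which is exactly the "examples automatically tested for correctness" workflow described above.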
> > > For testing, I'm guessing that there might be two kinds: first, to > > check that the routines operate correctly on a given platform and > > second, to check that the routines do what they are supposed to do. > > I don't see the difference. > > > Are there any testing guidelines? > > Yes, there are. See > > http://www.scipy.org/site_content/tutorials/testing_guidelines > > These guidelines hold for the most part, except the scipy_test module > part; that code has now moved to scipy_base/testing.py > > And note that all guidelines are a bit outdated due to the fast development of > scipy. If something seems to be inconsistent or is not working according > to these guidelines then the best place to look is the source of the scipy > modules. And of course, scipy-dev can be helpful. I can't stress the importance of the unit tests enough. With the zillions of platform combinations that people want to run SciPy on, they are the only prayer we have of validating that the algorithms work. After 0.2, I'm thinking the next release will mainly be used to add tests and improve documentation. It'd be great to have full coverage (for some definition of "full"). > > > > > Number Theory : These were used for analysing arrays of > > > antennas used > > > > in radar interferometry. They are also useful in integer > > > programming, > > > > cryptography, and computational algebra. > > > > > > > > Reduction of a matrix to Hermite normal form > > > > Reduction of a matrix to Smith normal form > > > > LLL basis reduction > > > > LLL basis reduction - deep version > > > > Gram-Schmidt orthogonalization > > > > > > I am not sure where these should go when considering the current scipy > > > state. As you mention, they are parts of different fields from what we > > > have in scipy now. I think they should be parts of the corresponding > > > packages that may not exist as of yet. Personally, I am very > > > interested in > > > CA stuff.
> > > > These routines really need exact rational arithmetic in the general > > case. The floating point versions here just happen to be 'good enough' > > in many situations. Are there any plans to bring exact arithmetic into > > scipy or numpy? > > I don't think that numpy is a proper place for this. > And I don't think that everything useful should go under one > umbrella such as SciPy. > What I think is that Computational Algebra in Python should be a separate > project. > > I have been experimenting with CA for Python for almost two years, trying > out various approaches. Using Python 2.2 one can do serious symbol > manipulations, but a pure Python implementation seems to be impractical - > Python is too slow. Currently the most appropriate CA library seems to be > GiNaC (www.ginac.de), to be wrapped for Python. I have already done that but > the project is currently frozen until Boost.Python V2 becomes usable > so that Python 2.2's new features can be fully exploited. > > But if only exact arithmetic is needed for your modules then gmpy is the > best thing to consider because it wraps the very fast GMP library and it > can also be made available on Windows platforms. > > > > > Zero finders: General use and fun. Bisection best for some special > > > I took a quick look at the Fortran code for the version of solve in > > the minimization package. It seems to be a multidimensional form of > > Newton's method, really the only way to go in higher dimensions unless > > the function is holomorphic. The routines here are for the one > > dimensional case and should be more robust in this situation. If any > > one has a more general idea of what goes on in the current zero > > finder, I would like to hear about it. > > Some comments about Powell's hybrid method used by minpack/hybrd.f can be > found, for example, in > > http://www.empicasso.com/techdocs/p14_2.pdf > > > > > Genetic algorithm: Used in digital filter design to optimize for > > > > coefficient truncation error.
I looked at galib and found > > > it easier to > > > > roll my own, but didn't try for any great generality. I > > > think it would > > > > be good to include uniform crossover and to pull the > > > fitness function > > > > out of the genome --- in a tournament, fitness can depend on the > > > > population. Perhaps it can all be made simpler. > > > > > > galib seems to have been in development for more than 6 years and I would > > > expect it to be > > > rather mature, though, I have not used it myself. Maybe a > > > wrapper to such a library would be more appropriate for the longer term. > > > Though the licence may be an issue, galib seems to be GPL compatible. > > > I was sort of hoping for a ga guru to comment. > > OK. Do we have one on this list? > I made my comment from the SciPy point of view: in addition to possible > license issues, we must be careful when including new software in scipy. It > must be the most efficient available and it must be actively maintained, > preferably by specialists in the field, now and in the future. I'm pretty comfortable with GAs. My dissertation was on GAs for antenna and circuit design. The genetic algorithm in scipy (scipy.ga) was used for this work. I started with Matthew Wall's galib, which is a very good C++ library. Wrapping it was actually my introduction to Python extensions and SWIG. It worked fine, but the type checking of C++ quickly becomes a pain with GAs, so I wrote scipy.ga as a replacement. It shares many ideas with galib, but with a central difference: a gene is the "atomic" building block instead of the genome. This is slower in general, but provides much more flexibility (mixing all kinds of gene types within a single genome). For the problems I was interested in, the fitness function swamped the computational cost of the GA, so speed wasn't an issue. Specialized genomes could be made faster if people need that. scipy.ga is reasonably full featured.
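The design points from this exchange -- uniform crossover, and a fitness function pulled out of the genome so that tournament fitness can see the whole population -- fit in a toy sketch. This is illustrative only and is not scipy.ga's actual API:

```python
import random

def tournament(pop, fitness, k=2):
    # Pick the fitter of k random individuals; fitness receives the whole
    # population, so tournament fitness can depend on it.
    contenders = random.sample(pop, k)
    return max(contenders, key=lambda g: fitness(g, pop))

def uniform_crossover(a, b):
    # Each gene position is inherited independently from either parent.
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [(g ^ 1) if random.random() < rate else g for g in genome]

def evolve(fitness, length=16, size=30, generations=40, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(size)]
    for _ in range(generations):
        pop = [mutate(uniform_crossover(tournament(pop, fitness),
                                        tournament(pop, fitness)))
               for _ in range(size)]
    return max(pop, key=lambda g: fitness(g, pop))

# "One-max" demo: fitness is simply the number of 1 bits
# (the population argument is ignored for this trivial case).
best = evolve(lambda g, pop: sum(g))
```

A population-dependent fitness (e.g. penalizing genomes that crowd the same niche) drops in by using the `pop` argument, with no change to the genome representation.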
The main feature I think is missing is Pareto optimization stuff (and there are the beginnings of this). I'm sure other options are needed (more crossover, selection, etc.), but they are generally easy to add. Right now, scipy.ga suffers from a lack of documentation and a lack of attention. It still looks pretty much like my research code. While very workable, it could definitely use an inter"face lift". That's on the "todo" list but hasn't been a priority as other things have seemed more important. I doubt this or even the next release will give it much attention. Hopefully it'll get cleaned up for 0.4 or 0.5. eric From pearu at scipy.org Tue Apr 9 17:22:14 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Tue, 9 Apr 2002 16:22:14 -0500 (CDT) Subject: [SciPy-dev] Re: More on Solaris compilation In-Reply-To: Message-ID: On Tue, 9 Apr 2002, Tim Lahey wrote: > Hi, > > I can manage to get everything to compile, but upon > linking to get fblas, I have problems. Basically, > g77 -shared doesn't seem to work, but if I do > f77 -G (Sun compiler & flag) it will link. How can > I get scipy to ignore g77 and use Sun's f77 instead > (with the right flag) ? I actually get the following > from scipy: > > replacing linker_so ['cc', '-G'] with ['g77', '-shared'] > which if it didn't do that would work. > > Any thoughts ? Suggestions ? See setup.py file, line #28:

    build_flib.all_compilers = [build_flib.gnu_fortran_compiler]

You must comment it out to use your native Fortran compiler. See also scipy_distutils/command/build_flib.py, where you will find the definition of a Sun Fortran compiler. Feel free to fix things there and send patches. Pearu From jochen at unc.edu Tue Apr 9 19:23:41 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 09 Apr 2002 19:23:41 -0400 Subject: [SciPy-dev] Cygwin Message-ID: Ok, I could build the latest scipy on Cygwin. I had to change the link-line for gistC.dll, which had unresolved references from X11.
-lX11 has to go behind -lgist, which it didn't in the original link command:

,----
| gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.3.10-i686-2.2/gistCmodule.o -L/usr/X11R6/lib -L/usr/lib/python2.2/config -Lbuild/temp.cygwin-1.3.10-i686-2.2 -lX11 -lm -lpython2.2 -lc_misc -lcephes -lgist -o build/lib.cygwin-1.3.10-i686-2.2/scipy/xplt/gistC.dll
`----

Then there are 33 failing tests and one failure, see attached log. (python was configured without thread support...) versions:

,----
| Python 2.2 (#1, Dec 31 2001, 15:21:18)
| [GCC 2.95.3-5 (cygwin special)] on cygwin
`----
,----
| Reading specs from /usr/lib/gcc-lib/i686-pc-cygwin/2.95.3-5/specs
| gcc version 2.95.3-5 (cygwin special)
`----

Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E -------------- next part -------------- A non-text attachment was scrubbed... Name: test.log.bz2 Type: application/x-bunzip2 Size: 3074 bytes Desc: not available URL: From eric at scipy.org Tue Apr 9 18:36:25 2002 From: eric at scipy.org (eric) Date: Tue, 9 Apr 2002 18:36:25 -0400 Subject: [SciPy-dev] Cygwin References: Message-ID: <009d01c1e016$fbb4d880$6b01a8c0@ericlaptop> Hey Jochen, It looks like you're running the tests from your build/lib.xxx directory. Is this correct? If so, try installing it somewhere for testing. The build process does not copy data files (such as all the weave .cxx and .h files) into the scipy directory. You need to install scipy somewhere before testing it. Pearu showed me a trick where you don't have to clobber an existing installation:

    setup.py install --prefix=/tmp
    cd /tmp/lib/python2.2/site-packages
    python -c 'import scipy;scipy.test()'

Hopefully this will fix the weave errors. The cdotu function is failing everywhere. Not sure if it is ATLAS or wrappers, but it looks suspiciously like ATLAS.
I'd investigate further, but I'm still trying to figure out the new generic_xxx.pyf files for linalg. ;-) eric ----- Original Message ----- From: "Jochen Küpper" To: "scipy devel ml" Sent: Tuesday, April 09, 2002 7:23 PM Subject: [SciPy-dev] Cygwin > Ok, I could build the latest scipy on Cygwin. > > I had to change the link-line for gistC.dll, which had unresolved > references from X11. -lX11 has to go behind -lgist, which it didn't > in the original link command: > ,---- > | gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.3.10-i686-2.2/gistCmodule.o -L/usr/X11R6/lib -L/usr/lib/python2.2/config -Lbuild/temp.cygwin-1.3.10-i686-2.2 -lX11 -lm -lpython2.2 -lc_misc -lcephes -lgist -o build/lib.cygwin-1.3.10-i686-2.2/scipy/xplt/gistC.dll > `---- > > Then there are 33 failing tests and one failure, see attached log. > (python was configured without thread support...) > > versions: > ,---- > | Python 2.2 (#1, Dec 31 2001, 15:21:18) > | [GCC 2.95.3-5 (cygwin special)] on cygwin > `---- > ,---- > | Reading specs from /usr/lib/gcc-lib/i686-pc-cygwin/2.95.3-5/specs > | gcc version 2.95.3-5 (cygwin special) > `---- > > Greetings, > Jochen > -- > University of North Carolina phone: +1-919-962-4403 > Department of Chemistry phone: +1-919-962-1579 > Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 > Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E > From eric at scipy.org Tue Apr 9 18:45:08 2002 From: eric at scipy.org (eric) Date: Tue, 9 Apr 2002 18:45:08 -0400 Subject: [SciPy-dev] sun compiler ver_match Message-ID: <012801c1e018$33987170$6b01a8c0@ericlaptop> Hey Pearu, You applied some patches for the Sun compiler from Berthold Höllmann a couple of weeks ago. The new version match string doesn't pick up the Sun compiler I have access to. I don't have an email for Berthold. I'd like to see if the following will work for his machine.
ver_match = r'f90: (?P<version>[^\s*,]*)'

The output string on my compiler is:

    [123] eaj2 at teer3% f90 -V -dryrun
    f90: SC4.0 11 Sep 1995 FORTRAN 90 1.1
    Usage: f90 [ options ] files. Use 'f90 -flags' for details

I don't want to make the change until I know if it is a general solution. thanks, eric -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 From jochen at unc.edu Tue Apr 9 21:16:33 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 09 Apr 2002 21:16:33 -0400 Subject: [SciPy-dev] Cygwin In-Reply-To: <009d01c1e016$fbb4d880$6b01a8c0@ericlaptop> References: <009d01c1e016$fbb4d880$6b01a8c0@ericlaptop> Message-ID: On Tue, 9 Apr 2002 18:36:25 -0400 eric wrote: eric> It looks like you're running the tests from your build/lib.xxx eric> directory. Is this correct? Yes [install scipy] eric> Hopefully this will fix the weave errors. Yes it does eric> The cdotu function is failing everywhere. That doesn't seem to be right. On my PIII gcc-3.0.4 python-2.2.1 machine there are 33 failures, all are weave related it seems. (ATLAS was compiled using egcs-1.1.2). Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From Chuck.Harris at sdl.usu.edu Tue Apr 9 21:51:26 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Tue, 9 Apr 2002 19:51:26 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zero finding Message-ID: Hi, > -----Original Message----- > From: pearu at scipy.org [mailto:pearu at scipy.org] > Sent: Tuesday, April 09, 2002 3:24 AM > To: scipy-dev at scipy.org > Subject: Re: [SciPy-dev] genetic algorithm, number theory, filter > design,zero finding > > [snip] > > > > Zero finders: General use and fun.
Bisection best for some special > > cases, Ridder is middle of the pack, Brent is generally > best, with the > > two versions basically a wash, although the hyperbolic version is > > simpler. Whether or not there is any virtue to these as opposed to > > solve, I don't know. > > > > Bisection > > Illinois version of regula falsi > > Ridder > > Brent method with hyperbolic interpolation > > Brent method with inverse quadratic interpolation > > Can you compare these zero finders with ones in > scipy? Performance? Robustness to initial conditions? Etc. > Are they any > better? > fsolve turns out to be a combination of Newton's method and the method of steepest descent. The book by Ralston & Rabinowitz on numerical analysis seems to be a good reference, as the basic algorithm is due to Rabinowitz and dates from around 1970. It seems to work well for smooth functions given a good initial estimate of the root (and where does that come from), which is to say it fails miserably on all but one of my tests --- never converges at all, even with repeated calls using the previous estimate. It really wants a nonzero derivative (nonsingular Jacobian) at all points. I think more robust routines are definitely needed in the special one dimensional case. Chuck From eric at scipy.org Tue Apr 9 21:30:34 2002 From: eric at scipy.org (eric) Date: Tue, 9 Apr 2002 21:30:34 -0400 Subject: [SciPy-dev] Cygwin References: <009d01c1e016$fbb4d880$6b01a8c0@ericlaptop> Message-ID: <012e01c1e02f$4fa25d10$6b01a8c0@ericlaptop> ----- Original Message ----- From: "Jochen Küpper" To: Sent: Tuesday, April 09, 2002 9:16 PM Subject: Re: [SciPy-dev] Cygwin > On Tue, 9 Apr 2002 18:36:25 -0400 eric wrote: > > eric> It looks like you're running the tests from your build/lib.xxx > eric> directory. Is this correct? > > Yes > > [install scipy] > eric> Hopefully this will fix the weave errors. > > Yes it does glad to hear it. > > eric> The cdotu function is failing everywhere. > > That doesn't seem to be right.
On my PIII gcc-3.0.4 python-2.2.1 > machine there are 33 failures, all are weave related it seems. (ATLAS > was compiled using egcs-1.1.2). Really. Hmmm. This makes it sound even more like an ATLAS issue. So, I'll restate things as "cdotu" is failing on a boat load of configurations. Pearu, I think we should check cdotu in the module import. If it fails, then replace it with zdotu. Hmmm. I guess this isn't easy because it is all an extension module. Well, I'm willing to comment it out for now. I don't want to hold things up over such a minor corner of the package. As for the gcc-3.x failures, I expect this will be the case with the 0.2 release also. We'll look at gcc-3.x for the next release. thanks for the reports, eric > > Greetings, > Jochen > -- > University of North Carolina phone: +1-919-962-4403 > Department of Chemistry phone: +1-919-962-1579 > Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 > Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Wed Apr 10 02:11:14 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 10 Apr 2002 01:11:14 -0500 (CDT) Subject: [SciPy-dev] sun compiler ver_match In-Reply-To: <012801c1e018$33987170$6b01a8c0@ericlaptop> Message-ID: On Tue, 9 Apr 2002, eric wrote: > Hey Pearu, > > You applied some patches for the Sun compiler from Berthold Höllmann a couple of > weeks ago. The new version match string doesn't pick up the Sun compiler I have > access to. I don't have an email for Berthold. I'd like to see if the > following will work for his machine. > > ver_match = r'f90: (?P<version>[^\s*,]*)' > > The output string on my compiler is: > > [123] eaj2 at teer3% f90 -V -dryrun > f90: SC4.0 11 Sep 1995 FORTRAN 90 1.1 > Usage: f90 [ options ] files. Use 'f90 -flags' for details > > I don't want to make the change until I know if it is a general solution.
Here is what Berthold sent me:

------------------------------------------------------
>f90 -V
f90: Sun WorkShop 6 update 2 Fortran 95 6.2 2001/05/15
Usage: f90 [ options ] files. Use 'f90 -flags' for details

which has no libf90. The corresponding library is named libfsu.
------------------------------------------------------

but he has not yet verified whether my changes work or not. Do we need to define two different Sun compilers in build_flib.py because of library issues? Pearu From pearu at scipy.org Wed Apr 10 02:35:58 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 10 Apr 2002 01:35:58 -0500 (CDT) Subject: [SciPy-dev] cdotu issue In-Reply-To: <009d01c1e016$fbb4d880$6b01a8c0@ericlaptop> Message-ID: On Tue, 9 Apr 2002, eric wrote: > The cdotu function is failing everywhere. Which is not completely true. See my previous messages about where cdotu works correctly. > Not sure if it is ATLAS or wrappers, but it looks suspiciously like > ATLAS. I am not completely sure that it is only ATLAS. It can be because this Fortran function cdotu returns a complex value, which is a struct object, and returning struct objects in C is very compiler dependent. It is well defined for gcc compilers, but many native compilers do not support such things or require different syntax when calling such a Fortran function from C. I have some ideas about how to resolve this issue; I'll test them and get back to you.
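The version-matching pattern discussed earlier in this thread can be checked against both compilers' output. The archive stripped the angle brackets from the pattern, so the group name `version` below is an assumption (it follows scipy_distutils conventions):

```python
import re

# Named-group pattern; 'version' is assumed, see note above.
ver_match = re.compile(r'f90: (?P<version>[^\s*,]*)')

eric_out = "f90: SC4.0 11 Sep 1995 FORTRAN 90 1.1"
berthold_out = "f90: Sun WorkShop 6 update 2 Fortran 95 6.2 2001/05/15"

print(ver_match.match(eric_out).group('version'))      # SC4.0
print(ver_match.match(berthold_out).group('version'))  # Sun
```

Note that the character class stops at the first space, so on Berthold's compiler it captures only "Sun" rather than the full version string -- one reason it may not be the general solution eric was after.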
Pearu From josegomez at gmx.net Wed Apr 10 06:25:01 2002 From: josegomez at gmx.net (=?iso-8859-15?q?Jos=E9=20Luis=20G=F3mez=20Dans?=) Date: Wed, 10 Apr 2002 11:25:01 +0100 Subject: [SciPy-dev] Crash in plt.image() In-Reply-To: <15539.7200.705077.450285@monster.linux.in> References: <200204081106.g38B6Dv24844@scipy.org> <15539.7200.705077.450285@monster.linux.in> Message-ID: <200204101020.g3AAKcv06068@scipy.org> Hi Prabhu, On Tuesday 09 April 2002 17:51, Prabhu Ramachandran wrote: > Well, I think the problem arises because you did not start gui_thread > before you started to use the plt module. What you should do is > something like this: Exactly, and the plt.image() problem is linked to this as well. So we are all happy now, and I have already put my head inside a bucket :-) However, it seems a bit harsh for plt.image() to segfault if gui_thread is not imported, but looking at it, it looks as if it might be a complicated effort to have the plt methods check whether gui_thread has been imported and, if not, import it automatically. At any rate, thanks for the help. plt.image() is working fine even with large datasets. Jose -- José L Gómez Dans PhD student Tel: +44 114 222 5582 Radar & Communications Group FAX: +44 870 132 2990 Department of Electronic Engineering University of Sheffield UK From pearu at scipy.org Wed Apr 10 06:28:26 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 10 Apr 2002 05:28:26 -0500 (CDT) Subject: [SciPy-dev] Crash in plt.image() In-Reply-To: <200204101020.g3AAKcv06068@scipy.org> Message-ID: On Wed, 10 Apr 2002, [iso-8859-15] José Luis Gómez Dans wrote: > However, it seems a bit harsh for plt.image() to segfault if > gui_thread is not imported, but looking at it, it looks as if it might > be a complicated effort to have the plt methods check whether > gui_thread has been imported and, if not, import it > automatically. sys.modules.has_key('gui_thread') should do it.
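Pearu's `sys.modules.has_key('gui_thread')` suggestion amounts to a membership test on the interpreter's module registry; a small sketch (membership syntax is the spelling that also survives into later Pythons):

```python
import sys

def already_imported(name):
    # sys.modules maps module names to module objects for everything
    # imported so far in this process; has_key() was the Python 2 idiom
    # for the same membership test.
    return name in sys.modules

print(already_imported('sys'))         # True -- imported just above
print(already_imported('gui_thread'))  # False unless scipy's gui_thread is loaded
```

The plt methods could call such a check before touching any wxPython window and warn (or import) instead of segfaulting.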
Would it be possible to automatically import gui_thread while importing plt? In what cases would it not work? Pearu From jochen at unc.edu Wed Apr 10 12:58:40 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 10 Apr 2002 12:58:40 -0400 Subject: [SciPy-dev] Crash in plt.image() In-Reply-To: References: Message-ID: On Wed, 10 Apr 2002 05:28:26 -0500 (CDT) pearu wrote: pearu> Would it be possible to automatically import gui_thread while pearu> importing plt? In what cases would it not work? AFAIK you don't need gui_thread in programs using plt. And it shouldn't be imported without need ... Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From prabhu at aero.iitm.ernet.in Wed Apr 10 13:03:27 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Wed, 10 Apr 2002 22:33:27 +0530 Subject: [SciPy-dev] Crash in plt.image() In-Reply-To: References: <200204081106.g38B6Dv24844@scipy.org> <15539.7200.705077.450285@monster.linux.in> Message-ID: <15540.28767.519047.674799@monster.linux.in> >>>>> "Jose" == josegomez writes: >> gui_thread before you started to use the plt module. What you >> should do is something like this: Jose> Exactly, and the plt.image() problem is linked to this Jose> as well. So we are all happy now, and I have already put my Jose> head inside a bucket :-) Well, forgetting gui_thread is a common problem even for the creators of the gui_thread code. :) Jose> However, it seems a bit harsh for plt.image() to Jose> segfault if gui_thread is not imported, but looking at it, Jose> it looks as if it might be a complicated effort to have the Jose> plt methods check whether gui_thread has been imported and Jose> if not, import it automatically. Yes, there were problems with the way Python imports modules and does threading.
I can't remember the problem right now but Eric is the expert on that one. prabhu From pearu at scipy.org Wed Apr 10 14:46:13 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 10 Apr 2002 13:46:13 -0500 (CDT) Subject: [SciPy-dev] cdotu issue In-Reply-To: Message-ID: On Wed, 10 Apr 2002 pearu at scipy.org wrote: > I have some ideas how to resolve this issue, I'll test them and get back > to you. Here is my conclusion: cdotu seems to fail with ATLAS-3.3.13 on machines with multiple processors. ATLAS-3.3.14 works fine. Here are my testing results:

2CPU Pentium III (Coppermine), Python 2.2.1c2, gcc 2.95.3 and 3.0.3, Suse
  ATLAS-3.3.13 - cdotu fails
  ATLAS-3.3.14 - cdotu succeeds
1CPU Mobile Pentium II, Python 2.1.2 and 2.2.1c1, gcc 2.95.4, Debian Woody
  ATLAS-3.3.13 - cdotu succeeds

Eric, did you test ATLAS-3.3.14 on your computer? Shall we make ATLAS >= 3.3.14 a requirement for SciPy? Pearu From Chuck.Harris at sdl.usu.edu Wed Apr 10 15:06:43 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Wed, 10 Apr 2002 13:06:43 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zerofinding Message-ID: Hi, I've decided to start with the zero finders, as fsolve is in general a poor choice in one dimension. This brings up the problem of validating function arguments and I am searching for guidelines.

1: if an argument is a scalar function, should it be checked for return type and argument types and number.
2: should all other arguments be checked, and either be explicitly cast or an error raised if they are of the wrong type.
3: do integers need to be checked for range? I believe Python 2.2 no longer distinguishes between int and long integers.

My functions are pure Python, so most of these issues are taken care of by the runtime checks --- oh wonderful Python, destroyer of niggling detail --- but what about C/C++ code? Also, is there any way of checking execution time, do we really care?
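The robust one-dimensional finders Chuck plans to start with can be sketched in a few lines; this is an illustration of the idea, not the scipy implementation:

```python
def bisect(f, a, b, tol=1e-12, maxiter=200):
    """Find a root of f in [a, b] by bisection.

    Bisection only needs a sign change on the bracket -- never a
    derivative -- so it cannot be stalled by the singular-Jacobian
    situations that defeat Newton-type methods like fsolve.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(maxiter):
        mid = 0.5 * (a + b)
        fmid = f(mid)
        if fmid == 0 or (b - a) < tol:
            return mid
        if fa * fmid < 0:
            b, fb = mid, fmid    # root is in the left half
        else:
            a, fa = mid, fmid    # root is in the right half
    return 0.5 * (a + b)

root = bisect(lambda x: x**3 - 2.0, 0.0, 2.0)  # cube root of 2, ~1.259921
```

The Illinois, Ridder, and Brent variants listed earlier all keep this guaranteed bracket while using smarter interpolation to converge faster.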
Chuck From pearu at scipy.org Wed Apr 10 15:33:15 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 10 Apr 2002 14:33:15 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zerofinding In-Reply-To: Message-ID: On Wed, 10 Apr 2002, Chuck Harris wrote: > I've decided to start with the zero finders, as fsolve is in general a > poor choice in one dimension. Great. > This brings up the problem of validating > function arguments and I am searching for guidelines. > 1: if an argument is a scalar function, should it be checked for > return type and argument types and number. I am not sure anymore if this is needed. It probably would require evaluation of the function and it can be expensive. Though one can check whether a function accepts the proper number of arguments using its func_* attributes. But be warned that it can be tricky. Let's leave it to the users, I suggest. > 2: should all other arguments be checked, and either be explicitly > cast or an error raised if they are of the wrong type. Always apply asarray if applicable. > 3: do integers need to be checked for range? I believe Python 2.2 no > longer distinguishes between int and long integers. No. Unless there are some specific restrictions to integer arguments. > My functions are pure Python, so most of these issues are taken care > of by the runtime checks --- oh wonderful Python, destroyer of > niggling detail --- but what about C/C++ code? It depends. C/C++ code should not crash Python and therefore it should check all arguments for consistency. If that is inconvenient then one can write clever Python interfaces to C/C++ codes that take care of the checks. Note that f2py provides array_from_pyobj (see f2py2e/src/fortranobject.c) that accepts all kinds of Python objects (lists, tuples, numbers) and returns a proper Numeric array with requested dimensions. This function has been heavily tested and is very robust to its arguments.
If extension writers would like to use it, we could ship it into scipy_base. What do you think? > Also, is there any way of checking execution time, do we really care? See how it is done in linalg/tests/test_basic.py, look for bench_* methods and how they use ScipyTestCase.measure method. And yes, we do care and it can be very useful. Pearu From eric at scipy.org Wed Apr 10 16:22:57 2002 From: eric at scipy.org (eric) Date: Wed, 10 Apr 2002 16:22:57 -0400 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zerofinding References: Message-ID: <004e01c1e0cd$810af120$6b01a8c0@ericlaptop> > > Note that f2py provides array_from_pyobj (see > f2py2e/src/fortranobject.c) that accepts all kinds of Python objects > (lists,tuples,numbers) and returns a proper Numeric array with requested > dimensions. This function has been heavily tested and is very robust to > its arguments. If extension writers would like to use it, we could ship it > into scipy_base. What do you think? Hmmm. Yes, I think this would be helpful. We could drop this in fastumath (which is slowly growing to very large...). Or another extension with a more appropriate name (convert?) might work also. > > > Also, is there any way of checking execution time, do we really care? > > See how it is done in linalg/tests/test_basic.py, > look for bench_* methods and how they use ScipyTestCase.measure method. > And yes, we do care and it can be very useful. Agreed. eric From eric at scipy.org Wed Apr 10 16:42:48 2002 From: eric at scipy.org (eric) Date: Wed, 10 Apr 2002 16:42:48 -0400 Subject: [SciPy-dev] Crash in plt.image() References: <200204081106.g38B6Dv24844@scipy.org><15539.7200.705077.450285@monster.linux.in> <15540.28767.519047.674799@monster.linux.in> Message-ID: <006301c1e0d0$469565e0$6b01a8c0@ericlaptop> > >>>>> "Jose" == josegomez writes: > > >> gui_thread before you started to use the plt module. 
What you > >> should do is something like this: > > Jose> Exactly, and the plt.image() problem is linked to this > Jose> as well. So we are all happy now, and i have already put my > Jose> head inside a bucket :-) > > Well, forgetting gui_thread is a common problem even for the creators > of the gui_thread code. :) > > Jose> However, it seems a bit harsh for plt.image() to > Jose> segfault if gui_thread is nor imported, but looking at it, > Jose> it looks as if it might be a complicated effort to have the > Jose> plt. methods check whether gui_thread has been imported and > Jose> if not, import it into the automatically. > > Yes, there were problems with the way Python imports modules and does > threading. I cant remember the problem right now but Eric is the > expert on that one. gui_thread does some shenanigans to force wxPython to be imported in a background thread. If it is imported in the foreground thread before the import in the background thread finishes, plt (or any wxPython window) will cause unpleasant things to happen when used from the command line. Initially, I tried to have gui_thread spawn a thread that imported gui_thread_guts in the second thread and then wait for gui_thread_guts to signal that wxPython was safely imported in the second thread before continuing with its own import. Unfortunately, Python has a import lock (separate from the thread lock) that only allows one import to progress at a time. As a result, blocking gui_thread import and waiting for gui_thread_guts to finish importing causes a dead lock. Is that confusing enough? Anyway, that is why you must import gui_thread first thing and then import plt. If you tried to have plt import gui_thread automatically and block until it was imported before making its own import of wxPython, you'd get the same deadlock. 
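Eric's deadlock can be modelled safely with ordinary locks. A toy sketch, not the actual gui_thread code: one `Lock` stands in for Python's global import lock, and an `Event` stands in for the "wxPython is ready" signal; timeouts are added only so the demonstration terminates (the real waits blocked forever):

```python
import threading

# One ordinary lock plays the role of Python's global import lock.
import_lock = threading.Lock()
wx_ready = threading.Event()

def import_gui_thread_guts():
    # The background "import wxPython" also needs the import lock --
    # which the foreground import still holds, so it can never succeed.
    if import_lock.acquire(timeout=0.2):  # a real import blocks forever here
        wx_ready.set()
        import_lock.release()

# Foreground: "import gui_thread" takes the import lock ...
import_lock.acquire()
worker = threading.Thread(target=import_gui_thread_guts)
worker.start()
# ... then waits for the background import to announce it is done.
deadlocked = not wx_ready.wait(timeout=0.5)  # the real wait had no timeout
import_lock.release()
worker.join()
print('circular wait observed:', deadlocked)
```

Each side waits on something only the other side can provide, which is exactly why importing gui_thread first, before any wxPython import, was the required workaround.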
For more on the topic see: http://www.scipy.org/site_content/tutorials/gui_thread http://www.scipy.org/site_content/tutorials/import_thread_lock_discussion I have some ideas about getting rid of gui_thread altogether by rolling its event proxy stuff into an alternate version of the wxPython shadow classes. This fixes a lot of problems (which I'm betting only Prabhu and I have run into) with the proxy side-effects of gui_thread. I haven't looked into the approach far enough to know if it can also fix the import order problem. I hope so, but I'm not bettin' more than a nickel... eric > > prabhu > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From Chuck.Harris at sdl.usu.edu Wed Apr 10 19:33:18 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Wed, 10 Apr 2002 17:33:18 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zerofinding Message-ID: > -----Original Message----- > From: pearu at scipy.org [mailto:pearu at scipy.org] > Sent: Wednesday, April 10, 2002 1:33 PM > > On Wed, 10 Apr 2002, Chuck Harris wrote: > [snip] > > Also, is there any way of checking execution time, do we > really care? > > See how it is done in linalg/tests/test_basic.py, > look for bench_* methods and how they use > ScipyTestCase.measure method. > And yes, we do care and it can be very useful. > Curiously, a quick test shows *all* the python routines as faster than fsolve. I found this surprising. Perhaps it would be worth the hassle of putting these in C, then again, function evaluations might be the bottle neck, although I doubt it. Times are seconds for 10000 iterations, f(x) = 1 - x**2 . 
Initial estimate for fsolve: .5
bracketing interval for others: [.5,2]
xtol: 1e-12

bisect: 5.859
illini: 2.174
ridder: 2.484
brenth: 2.724
brentq: 2.694
fsolve: 6.150

Chuck From oliphant.travis at ieee.org Wed Apr 10 19:50:48 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 10 Apr 2002 17:50:48 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filter design,zerofinding In-Reply-To: References: Message-ID: <1018482650.11662.3.camel@travis> On Wed, 2002-04-10 at 13:06, Chuck Harris wrote: > Hi, > > I've decided to start with the zero finders, as fsolve is in general a poor choice in one dimension. This brings up the problem of validating function arguments and I am searching for guidelines. There are other zero-finders in the CVS version of SciPy, besides fsolve. We could include a check for 1-D functions if your assessment of fsolve is substantiated. -Travis From Chuck.Harris at sdl.usu.edu Thu Apr 11 13:16:38 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 11:16:38 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: I've taken a look at the 1-D solvers in CVS. A few comments:

1. fixed_point computes incorrect roots
2. they make assumptions on the arguments that are not enforced
3. none take into account the varying granularity of floating point
4. newton should probably check for blowup, as this is not uncommon
5. all the python routines suffer from large overheads. We should go for C.

That said, the routines are pretty quick. I think a good starting point would be to take the simplest and put them in C. I expect this would at least halve the execution time for easy to compute functions. For newton, I don't think the option of using computed derivatives is worth including. There is a slightly higher order of convergence (2 vs 1.4), but this is likely to be swamped in function evaluation time, especially if the function is python and the routine is C.
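The Newton-versus-secant trade-off Chuck describes (fewer iterations for Newton, but an extra derivative evaluation each step) is easy to see on the thread's own test function. A sketch with hypothetical helper names, counting function evaluations rather than wall time:

```python
def newton(f, fprime, x0, tol=1e-12, maxiter=50):
    # Newton's method: faster convergence, but two evaluations
    # (f and f') per iteration.
    x, evals = x0, 0
    for _ in range(maxiter):
        fx = f(x); evals += 1
        if abs(fx) < tol:
            return x, evals
        x = x - fx / fprime(x); evals += 1
    return x, evals

def secant(f, x0, x1, tol=1e-12, maxiter=50):
    # Secant method: derivative-free, one new evaluation per iteration.
    f0, f1 = f(x0), f(x1)
    evals = 2
    for _ in range(maxiter):
        if abs(f1) < tol:
            return x1, evals
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1); evals += 1
    return x1, evals

f = lambda x: 1 - x**2        # the thread's test function, root at 1
fp = lambda x: -2.0 * x

print('newton:', newton(f, fp, 0.5))
print('secant:', secant(f, 0.5, 2.0))
```

Whether the iteration savings pays for the derivative evaluations depends entirely on how expensive f and f' are, which is the point being argued in the thread.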
Chuck > -----Original Message----- > From: Travis Oliphant [mailto:oliphant.travis at ieee.org] > Sent: Wednesday, April 10, 2002 5:51 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > On Wed, 2002-04-10 at 13:06, Chuck Harris wrote: > > Hi, > > > > I've decided to start with the zero finders, as fsolve is > in general a poor choice in one dimension. This brings up the > problem of validating function arguments and I am searching > for guidelines. > > There are other zero-finders in the CVS version of SciPy, besides > fsolve. > > We could include a check for 1-D functions if your assessment > of fsolve > is substantiated. > > -Travis > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Thu Apr 11 14:14:18 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 11 Apr 2002 13:14:18 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: On Thu, 11 Apr 2002, Chuck Harris wrote: > I've taken a look at the 1-D solvers in CVS. A few comments > > 1. fixed_point computes incorrect roots > 2. they make assumptions on the arguments that are not enforced > 3. none take into account the varying granularity of floating point > 4. newton should probably check for blowup, as this is not uncommon > 5. all the python routines suffer from large overheads. We should go for C. > > That said, the routines a pretty quick. I think a good starting point > would be to take the simplest and put them in C. I expect this would > at least halve the execution time for easy to compute functions. How to you feel about Fortran? Actually C is also fine for me. My point is that we should use f2py to generate the interfaces to these C (or Fortran, if you have positive feelings about it) routines. 
It has an advantage that you don't need to struggle with the details of the Python C/API (reference counting, argument checks, etc.). All this is supported in f2py generated interfaces. Using f2py saves time and bugs. And the f2py generated extension modules are really easy to maintain. If you are not familiar with f2py then we can cooperate. You give me a native C function and I'll give you an interface for calling this C function from Python in no time. > For newton, I don't think the option of using computed derivatives is > worth including. There is a slightly higher order of convergence (2 vs > 1.4), but this is likely to be swamped in function evaluation time, > especially if the function is python and the routine is C. Indeed, I have never used computed derivatives in my real problems. They are usually so large that calculating the exact Jacobian, even a bounded one, is too expensive. But I would like to see how much is gained or lost by using the exact Jacobian in a real example. Pearu From Chuck.Harris at sdl.usu.edu Thu Apr 11 14:59:44 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 12:59:44 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: Pearu, > -----Original Message----- > From: pearu at scipy.org [mailto:pearu at scipy.org] > Sent: Thursday, April 11, 2002 12:14 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > > > On Thu, 11 Apr 2002, Chuck Harris wrote: > > > I've taken a look at the 1-D solvers in CVS. A few comments > > > > 1. fixed_point computes incorrect roots > > 2. they make assumptions on the arguments that are not enforced > > 3. none take into account the varying granularity of floating point > > 4. newton should probably check for blowup, as this is not uncommon > > 5. all the python routines suffer from large overheads. We > should go for C. > > > > That said, the routines are pretty quick.
I think a good > starting point > > would be to take the simplest and put them in C. I expect this would > > at least halve the execution time for easy to compute functions. > > How to you feel about Fortran? Actually C is also fine for me. I don't mind Fortran, and for some things I think it is superior to C, but I find that C compilers are easier to come by on all platforms; I don't much enjoy cygwin. Also, at this point in time C is more familiar to me. For the root finding, it shouldn't be too hard to keep track of reference counting. > My point is that we should use f2py to generate the > interfaces to these C > (or Fortran, if you have positive feelings about it) routines. > It has an advantage that you don't need to struggle with the > details of Python C/API (reference counting, argument checks, etc.) > All this is supported in f2py generated interfaces. Using > f2py saves time > and bugs. And the f2py generated extension modules are really easy to > maintain. > I pretty much agree with this. Perhaps it would be nice to have a c2py interface also. > If you are not familiar with f2py then we can cooperate. You give me > a native C function and I'll give you an interface for calling this C > function from Python in no time. > Hmm... f2py handles this? > > For newton, I don't think the option of using computed > derivatives is > > worth including. There is a slightly higher order of > convergence (2 vs > > 1.4), but this is likely to be swamped in function evaluation time, > > especially if the function is python and the routine is C. > > Indeed, I have never used computed derivatives in my real > problems. They > are usually to large that calculating exact jacobian, even a > bounded one, > is to expensive. But I would like to see how much is there > gain or lost of > using exact jacobian in real example. > I was thinking especially of the 1D case. 
My feeling is that you might save 1,2 iterations, at the expense of twice as many function evaluations in each iteration. Typical iteration count is about 10+, probably a bit less in situations where newton actually works. This clearly gets *much* worse in the multidimensional case. I am also thinking of dropping the extra args for the function call. At times in the past, I would have killed for this in some circumstances, but if really needed in python one could do something like

class myf :
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
        # etc
    def f(self, x):
        return  # value depending on x and self.arg1, ...

solve(myf(a,b).f, ...)

or even

def f(x):
    global arg1  # , ...
    return  # value depending on x and arg1, ...

or some such. The fact that f might be a method instead of a plain old function needs to be detected, but this is needed anyway. In Fortran-77, if I recall, some sort of common was needed to achieve this sort of thing easily. Comments? Is this a good idea or a total kludge. Chuck > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Thu Apr 11 13:06:44 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 13:06:44 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > I've taken a look at the 1-D solvers in CVS. A few comments > > 1. fixed_point computes incorrect roots On what problem? I've used it successfully. Thanks for the pointer. Please give more info. > 2. they make assumptions on the arguments that are not enforced True. Can you suggest checks. > 3. none take into account the varying granularity of floating point What do you mean by this? > 4. newton should probably check for blowup, as this is not uncommon A blowup of what? The second derivative becoming very large? > 5. all the python routines suffer from large overheads. We should go for C.
I'm not sure I agree with this. I'd gladly accept C code that works the same. Indeed, eventually it would be nice if everything were in C. However, we may get this for free (via psyco or Pat Miller's work). But, if the function call is most of the overhead, then this will not be helped much by moving the iteration into C. > > For newton, I don't think the option of using computed derivatives is worth including. There is a slightly higher order of convergence (2 vs 1.4), but this is likely to be swamped in function evaluation time, especially if the function is python and the routine is C. I disagree here. The secant method is useful when derivatives cannot be computed. Now, if you want to replace the secant method with something better like your brent routines, then that is a different story. Thanks for your interest and help, -Travis From oliphant at ee.byu.edu Thu Apr 11 13:11:18 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 13:11:18 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > > How to you feel about Fortran? Actually C is also fine for me. > > My point is that we should use f2py to generate the interfaces to these C > (or Fortran, if you have positive feelings about it) routines. > It has an advantage that you don't need to struggle with the > details of Python C/API (reference counting, argument checks, etc.) > All this is supported in f2py generated interfaces. Using f2py saves time > and bugs. And the f2py generated extension modules are really easy to > maintain. > > If you are not familiar with f2py then we can cooperate. You give me > a native C function and I'll give you an interface for calling this C > function from Python in no time. > > > For newton, I don't think the option of using computed derivatives is > > worth including. 
There is a slightly higher order of convergence (2 vs > > 1.4), but this is likely to be swamped in function evaluation time, > > especially if the function is python and the routine is C. I understand the point now. I misread it. We should include it. Yes, people may never actually use it. But there are cases where the derivative is not hard to compute. (We could put an option where the derivative is computed along with the function and both are returned together --- can often save time). But, I would hesitate to remove it entirely. It's turned off by default anyway. > > Indeed, I have never used computed derivatives in my real problems. They > are usually to large that calculating exact jacobian, even a bounded one, > is to expensive. But I would like to see how much is there gain or lost of > using exact jacobian in real example. See optimize.py for an example of how knowledge of the jacobian can decrease minimization time for the rosenbrock function. -Travis From oliphant at ee.byu.edu Thu Apr 11 13:15:32 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 13:15:32 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > > If you are not familiar with f2py then we can cooperate. You give me > > a native C function and I'll give you an interface for calling this C > > function from Python in no time. > > > > Hmm... f2py handles this? Yes, as long as your C functions do not take complicated structures. Is that true, Pearu -- or have you found some magic there too? > > > I was thinking especially of the 1D case. My feeling is that you might save 1,2 iterations, at the expense of twice as many function evaluations in each iteration. Typical iteration count is about 10+, probably a bit less in situations where newton actually works. This clearly gets *much* worse in the multidimensional case. Python gives us the power to use it or not. 
Yes if you are going to write things in C, then thinking about things like this is useful. So, just call it the secant method and ignore the derivative. > > I am also thinking of dropping the extra args for the function call. At times in the past, I would have killed for this in some circumstances, but if really needed in python one could do something like You can do that in C-code, but the wrapper will have to include such a dummy function as you describe. That is fine, but args=() is a standard we are using in all of SciPy. We're not going to get rid of it. > The fact that f might be a method instead of a plain old function needs to be detected, but this is needed anyway. In Fortran-77, if I recall, some sort of common was needed to achieve this sort of thing easily. Comments? Is this a good idea or a total kludge. Again, use f2py. This is all handled easily. -Travis From Chuck.Harris at sdl.usu.edu Thu Apr 11 15:21:04 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 13:21:04 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: Hi > -----Original Message----- > From: Travis Oliphant [mailto:oliphant at ee.byu.edu] > Sent: Thursday, April 11, 2002 11:16 AM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > You can do that in C-code, but the wrapper will have to include such a > dummy function as you describe. That is fine, but args=() > is a standard > we are using in all of SciPy. We're not going to get rid of it. OK, standards are good. Consistancy is the soul of good libraries. > Again, use f2py. This is all handled easily. 
I'll take a look Chuck > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Thu Apr 11 15:19:06 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 11 Apr 2002 14:19:06 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: On Thu, 11 Apr 2002, Chuck Harris wrote: > > How to you feel about Fortran? Actually C is also fine for me. > > I don't mind Fortran, and for some things I think it is superior to C, > but I find that C compilers are easier to come by on all platforms; I > don't much enjoy cygwin. Also, at this point in time C is more > familiar to me. For the root finding, it shouldn't be too hard to keep > track of reference counting. I don't think that SciPy can be freed from Fortran stuff. There is integrate module that uses odepack, and ATLAS hardly covers all the LAPACK at this moment. Etc. So I find Fortran contributions acceptable. But it is true that C codes are easier to re-use in other projects. > > My point is that we should use f2py to generate the > > interfaces to these C > > (or Fortran, if you have positive feelings about it) routines. > > It has an advantage that you don't need to struggle with the > > details of Python C/API (reference counting, argument checks, etc.) > > All this is supported in f2py generated interfaces. Using > > f2py saves time > > and bugs. And the f2py generated extension modules are really easy to > > maintain. > > > I pretty much agree with this. Perhaps it would be nice to have a c2py > interface also. Yes, but it is a project for the future. And f2py pretty much covers also c2py. > > If you are not familiar with f2py then we can cooperate. You give me > > a native C function and I'll give you an interface for calling this C > > function from Python in no time. > > > > Hmm... f2py handles this? 
Yes, it does (cblas,clapack are wrapped with f2py, for example). The signature files may look Fortran but there is actually only little difference in wrapping Fortran or C functions. And there are some additional hooks available for the signature files that ease overcoming these small differences (intent(c), fortranname, callstatement, callprotoargument etc. statements). Of course, I am assuming that C functions do not use complicated struct's, except the complex one. Pearu From pearu at scipy.org Thu Apr 11 15:34:18 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 11 Apr 2002 14:34:18 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: On Thu, 11 Apr 2002, Travis Oliphant wrote: > > Hmm... f2py handles this? > > Yes, as long as your C functions do not take complicated structures. Is > that true, Pearu -- or have you found some magic there too? Currently, that is true. Actually, I need this for supporting Fortran 90 TYPE arguments in f2py and there C struct's will be used... But don't hold your breath ;-) > > The fact that f might be a method instead of a plain old function > needs to be detected, but this is needed anyway. In Fortran-77, if I > recall, some sort of common was needed to achieve this sort of thing > easily. Comments? Is this a good idea or a total kludge. > > Again, use f2py. This is all handled easily. f2py can handle F77 common blocks with no problem but I would not recommend using them in this situation. Common blocks are global and there will be a mess if the corresponding functions are called recursively, for example. 
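Chuck's class-based idea from earlier in the thread is the usual Python alternative to both a COMMON block and an args=() tuple, and it avoids the recursion problem Pearu raises, since each instance carries its own state. A runnable version of his sketch (all names hypothetical):

```python
def solve_sketch(f, a, b, tol=1e-12):
    # stand-in for a 1-D solver that accepts only f(x) -- no args=()
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

class MyF:
    """Carries extra parameters as instance state instead of a
    COMMON block; re-entrant, unlike a global."""
    def __init__(self, arg1, arg2):
        self.arg1, self.arg2 = arg1, arg2
    def f(self, x):
        return self.arg1 - self.arg2 * x**2

# the bound method MyF(...).f looks like a plain one-argument function
root = solve_sketch(MyF(1.0, 1.0).f, 0.5, 2.0)
print(root)
```

As Travis notes, SciPy standardized on args=() anyway; the bound-method trick simply works with either convention.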
Pearu From Chuck.Harris at sdl.usu.edu Thu Apr 11 15:44:29 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 13:44:29 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: Hi > -----Original Message----- > From: pearu at scipy.org [mailto:pearu at scipy.org] > Sent: Thursday, April 11, 2002 1:19 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > > > On Thu, 11 Apr 2002, Chuck Harris wrote: > > > > How to you feel about Fortran? Actually C is also fine for me. > > > > I don't mind Fortran, and for some things I think it is > superior to C, > > but I find that C compilers are easier to come by on all > platforms; I > > don't much enjoy cygwin. Also, at this point in time C is more > > familiar to me. For the root finding, it shouldn't be too > hard to keep > > track of reference counting. > > I don't think that SciPy can be freed from Fortran stuff. There is Absolutely, no argument here. > > Hmm... f2py handles this? > > Yes, it does (cblas,clapack are wrapped with f2py, for example). The > signature files may look Fortran but there is actually only little > difference in wrapping Fortran or C functions. And there are some > additional hooks available for the signature files that ease > overcoming Signature file? > these small differences (intent(c), fortranname, callstatement, > callprotoargument etc. statements). > Of course, I am assuming that C functions do not use > complicated struct's, > except the complex one. Root finders are pretty basic. Looks like the standard call for all of them would be: solve(f,a,b,args=(),xtol=default,maxiter=default) with a (python) float return. a,b,xtol should be double, or converted to double. maxiter is integer, and f returns double to the C routine. There is a disagreement between say, fsolve and bisection, where one has the named argument xtol, and the other tol. 
How should this be resolved? I could probably just look at the wrapper for fsolve and make a few changes, eh? Chuck > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From Chuck.Harris at sdl.usu.edu Thu Apr 11 16:14:58 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 14:14:58 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: Hi, > -----Original Message----- > From: Travis Oliphant [mailto:oliphant at ee.byu.edu] > Sent: Thursday, April 11, 2002 11:07 AM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > > I've taken a look at the 1-D solvers in CVS. A few comments > > > > 1. fixed_point computes incorrect roots > > On what problem? I've used it successfully. Thanks for the pointer. > Please give more info. I tried

def f(x) :
    return 1 - x**2

print 'bisection: %21.16f'%bisection(f,.5,2,tol=1e-12)

and got .9... instead of 1. No exception was raised. > > 2. they make assumptions on the arguments that are not enforced > > True. Can you suggest checks. Have done. Will include. > > 3. none take into account the varying granularity of floating point > > What do you mean by this? Floats are approx evenly spaced on a log scale. If the tol is too small and the root is large, the small changes in the approximate root needed to zero in on the true root are computationally zero, and the routine spins with no convergence. It's a standard problem and usually dealt with by something like:

tol = tol + eps*max(|a|,|b|)

where a and b are the bounds containing the root. This is often made to depend on the last best approximation, but this adds overhead. Anyway, the actual tol is sometimes not reached. Maybe this should raise an exception instead? > > 4. newton should probably check for blowup, as this is not uncommon > > A blowup of what?
The second derivative becoming very large? newton tends to head off to infinity if the function is bumpy and the initial estimate is insufficiently close. > > > 5. all the python routines suffer from large overheads. We > should go for C. > > I'm not sure I agree with this. I'd gladly accept C code > that works the > same. Indeed, eventually it would be nice if everything were in C. > However, we may get this for free (via psyco or Pat Miller's work). > For simple functions, probably the most common case, I expect significant improvements, factor of two or more. For more complicated functions there is little to gain. It all depends on how much importance we attach to execution time over all. > But, if the function call is most of the overhead, then this > will not be > helped much by moving the iteration into C. > > > > > For newton, I don't think the option of using computed > derivatives is worth including. There is a slightly higher > order of convergence (2 vs 1.4), but this is likely to be > swamped in function evaluation time, especially if the > function is python and the routine is C. > > I disagree here. The secant method is useful when > derivatives cannot be > computed. Now, if you want to replace the secant method with something > better like your brent routines, then that is a different story. 
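Chuck's granularity rule from just above can be made concrete. Near x = 1e12 the representable doubles are roughly eps*1e12 (about 2e-4) apart, so a fixed absolute tolerance of 1e-12 can never be met and a naive bisection spins forever; the eps-scaled tolerance terminates. A sketch with hypothetical helper names:

```python
import sys

eps = sys.float_info.epsilon   # spacing of floats near 1.0, ~2.2e-16

def scaled_tol(tol, a, b):
    # Chuck's rule: tol = tol + eps*max(|a|,|b|) -- the requested
    # tolerance is padded up to the float spacing near the bracket.
    return tol + eps * max(abs(a), abs(b))

f = lambda x: 1.0 - x / 1e12   # root at 1e12, far from 1.0

a, b = 0.5e12, 2.0e12
tol = scaled_tol(1e-12, a, b)  # ~4.4e-4 here; 1e-12 alone would never be met
while b - a > tol:
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
print(0.5 * (a + b))           # within a few ulps of 1e12
```

With the raw tol the loop stalls once b - a shrinks to the local float spacing; with the scaled tol it exits after about 52 halvings, which is the behavior Chuck says the CVS solvers were missing.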
> OK > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From Chuck.Harris at sdl.usu.edu Thu Apr 11 16:24:15 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 14:24:15 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: > -----Original Message----- > From: Chuck Harris > Sent: Thursday, April 11, 2002 2:15 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > Hi, > > > -----Original Message----- > > From: Travis Oliphant [mailto:oliphant at ee.byu.edu] > > Sent: Thursday, April 11, 2002 11:07 AM > > To: scipy-dev at scipy.org > > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > > filterdesign,zerofinding > > > > > > > I've taken a look at the 1-D solvers in CVS. A few comments > > > > > > 1. fixed_point computes incorrect roots > > > > On what problem? I've used it successfully. Thanks for > the pointer. > > Please give more info. > > I tried > def f(x) : > return 1 - x**2 > oops, ^K print 'bisection: %21.16f'%bisection(f,.5,2,tol=1e-12) print 'fixed_pt : %21.16f'%fixed_point(f,.5,tol=1e-12) > > and got .9... instead of 1. No exception was raised > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Thu Apr 11 16:21:32 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 11 Apr 2002 15:21:32 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: On Thu, 11 Apr 2002, Chuck Harris wrote: > > > Hmm... f2py handles this? > > > > Yes, it does (cblas,clapack are wrapped with f2py, for example). The > > signature files may look Fortran but there is actually only little > > difference in wrapping Fortran or C functions. 
And there are some > > additional hooks available for the signature files that ease > > overcoming > > Signature file? Basically, f2py can read Fortran sources and extract from them relevant information (arguments, types, etc.) that are needed for calling Fortran functions from C (or from Python as in the final layer). This information (the signatures of functions) is saved in the so-called signature files or .pyf files using Fortran 90 syntax (take a look at some .pyf files in SciPy tree). For C functions, one has to create these signature files manually taking into account the correspondance between C and Fortran types (e.g. Fortran real*8 is C double, etc). And f2py constructs a Python extension module based on the information in signature files. If requested it can compile and build it all in one call. > > these small differences (intent(c), fortranname, callstatement, > > callprotoargument etc. statements). > > Of course, I am assuming that C functions do not use > > complicated struct's, > > except the complex one. > > Root finders are pretty basic. Looks like the standard call for all of > them would be: > > solve(f,a,b,args=(),xtol=default,maxiter=default) > > with a (python) float return. a,b,xtol should be double, or converted > to double. maxiter is integer, and f returns double to the C routine. > > There is a disagreement between say, fsolve and bisection, where one > has the named argument xtol, and the other tol. How should this be > resolved? If the meaning is the same, we should stick to the same name. I would prefer tol. > I could probably just look at the wrapper for fsolve and make a few > changes, eh? Wrapper of hybrd is pretty complicated because the Fortran function hybrd is complicated. 
In your case you can just write a normal C function, say double solve((void*)() f,double a,double b,double xtol,int maxiter) { /* do your stuff here */ } and the corrsponding Fortran signature looks like the following function solve(f,a,b,xtol,maxiter) result (x) external f double precision f,solve fortranname solve intent(c) solve double precision intent(in,c) :: a,b double precision intent(in,c),optional :: xtol = 1e-12 integer intent(in,c),optional :: maxiter = 100 end function solve and the Python signature of the wrapper function that f2py generates looks like the following def solve(f,a,b,xtol=1e-12,maxiter=100): # do your stuff here If the Fortran signature is saved in a file foo.pyf and C function is saved in a file foo.c, then calling f2py -c foo.pyf foo.c -m bar will construct and build extension module bar.so into the current directory. And you can do, for example >>> import bar >>> bar.solve(lambda x:1-x**2,0,2) 1.0 Be warned that I skipped some details like extra arguments and callback functions and there may be minor typos, but the purpose was to give a quick and general overview of how you can wrap C functions with f2py. Pearu From Chuck.Harris at sdl.usu.edu Thu Apr 11 16:57:16 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 14:57:16 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: > -----Original Message----- > From: pearu at scipy.org [mailto:pearu at scipy.org] > Sent: Thursday, April 11, 2002 2:22 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > > If the meaning is the same, we should stick to the same name. > I would prefer tol. Will use. > double solve((void*)() f,double a,double b,double xtol,int maxiter) ^^^^^^^^^^ double (*f)() ? > f2py -c foo.pyf foo.c -m bar Cute! Really nice. Where do I find f2py. I've been looking... 
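The end-to-end f2py example above can be imitated in pure Python. The stand-in below has the same calling convention as the wrapper f2py would generate from Pearu's signature file (the bisection body and the args handling are illustrative only, not SciPy's implementation):

```python
def solve(f, a, b, args=(), xtol=1e-12, maxiter=100):
    """Pure-Python stand-in for the f2py-wrapped solver sketched above.

    Bisection on [a, b]; f(a) and f(b) must bracket a sign change.
    The extra `args` tuple is folded into the callback here, the way the
    Python wrapper layer would before calling into C.
    """
    g = lambda x: f(x, *args)
    fa, fb = g(a), g(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(maxiter):
        m = 0.5 * (a + b)
        fm = g(m)
        if fm == 0 or (b - a) / 2 < xtol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

print(solve(lambda x: 1 - x**2, 0, 2))  # -> 1.0
```

The call mirrors Pearu's `bar.solve(lambda x: 1-x**2, 0, 2)` session, with `xtol` and `maxiter` optional exactly as the `intent(in,c),optional` declarations make them.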
Any compiler dependencies, or does this work for all the standard compilers? By the way, what do I need to do to my replies so that they follow the thread instead of messing up the list? Chuck > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Thu Apr 11 17:08:52 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 11 Apr 2002 16:08:52 -0500 (CDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: On Thu, 11 Apr 2002, Chuck Harris wrote: > > > > If the meaning is the same, we should stick to the same name. > > I would prefer tol. > > Will use. But now I see that Travis has used xtol extensively. And with reason, as there is also ftol. Travis, what do you think? > > double solve((void*)() f,double a,double b,double xtol,int maxiter) > ^^^^^^^^^^ > double (*f)() ? Ok, that's one typo. You were warned.. > Where do I find f2py. I've been looking... Any compiler dependencies, or does this work for all the standard compilers? http://cens.ioc.ee/projects/f2py2e/ f2py (actually scipy_distutils) can detect available compilers and use them. If you have gcc, then you are fine. > By the way, what do I need to do to my replies so that they follow the > thread instead of messing up the list? I guess it depends on what mail program you are using. Otherwise, I have no idea what you should do; exchange your mail program, maybe ;). Pearu From oliphant at ee.byu.edu Thu Apr 11 15:57:34 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 15:57:34 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > > Root finders are pretty basic. Looks like the standard call for all of them would be: > > solve(f,a,b,args=(),xtol=default,maxiter=default) > > with a (python) float return.
a,b,xtol should be double, or converted to double. maxiter is integer, and f returns double to the C routine. > > There is a disagreement between say, fsolve and bisection, where one has the named argument xtol, and the other tol. How should this be resolved? We should standardize here, but the problem is the different notions of tolerance. xtol is some (relative) error on x while ftol is some error on f(x). I'm happy with those. But, sometimes you have a tolerance that is neither xtol nor ftol, but some combination. Use tol in this case. > > I could probably just look at the wrapper for fsolve and make a few changes, eh? If you are writing c-code, you don't have to use the extra args variable in the C-code (that can be handled by the Python wrapper --- and is in fact by f2py). So, you can just have the c-function as rootfind(void *func, double a, double b, double xtol, int maxiter) for example. Then, the f2py wrapper would include an (extra_args) argument and take a Python function instead of the void *func. f2py is really cool, you should get to know it. -Travis From oliphant at ee.byu.edu Thu Apr 11 16:37:47 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 16:37:47 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > > > > On what problem? I've used it successfully. Thanks for the pointer. > > Please give more info. > > I tried > def f(x) : > return 1 - x**2 > > print 'bisection: %21.16f'%bisection(f,.5,2,tol=1e-12) > >>> def f(x): return 1 - x**2 >>> bisection(lambda x: 1-x**2, 0.5,2,xtol=1e-12) 1.0000000000002274 This is what I get. > > Floats are approx evenly spaced on a log scale. If the tol is too small and the root is large, the small changes in the approx root needed to zero in on the true root are computationally zero, and the routine spins with no convergence.
It's a standard problem and usually dealt with by something like: > > tol = tol + eps*max(|a|,|b|) > > where a and b are the bounds containing the root. This is often made to depend on the last best approx, but this adds overhead. Anyway, the actual tol is sometimes not reached. Maybe this should raise an exception instead? Where should this check be made? It sounds like a good thing. > > > > 4. newton should probably check for blowup, as this is not uncommon > > > > A blowup of what? The second derivative becoming very large? > > newton tends to head off to infinity if the function is bumpy and the initial estimate is insufficiently close. I see. So if the function gets above some value, stop? What value do you think is appropriate? -Travis From oliphant at ee.byu.edu Thu Apr 11 16:44:34 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 16:44:34 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > > > > I tried > > def f(x) : > > return 1 - x**2 > > > This works for me.. fixed_point(f, 0.5, xtol=1e-12) 0.6180339887498949 >>> f(_) 0.61803398874989479 It looks like it found the place where f(x)=x pretty well. -Travis From Chuck.Harris at sdl.usu.edu Thu Apr 11 18:50:13 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 16:50:13 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: > -----Original Message----- > From: Travis Oliphant [mailto:oliphant at ee.byu.edu] > Sent: Thursday, April 11, 2002 2:45 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > > > > > > I tried > > > def f(x) : > > > return 1 - x**2 > > > > > > > This works for me.. > > fixed_point(f, 0.5, xtol=1e-12) > > 0.6180339887498949 > > >>> f(_) > > 0.61803398874989479 > > It looks like it found the place where f(x)=x pretty well. > I misunderstood what the routine did.
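The exchange above can be reproduced with a small Steffensen-style iteration (Aitken's delta-squared acceleration). This is only a sketch of the idea behind a fixed_point routine, not SciPy's code; note that plain iteration x <- g(x) would diverge for this g, and that feeding the same routine g(x) = f(x) + x turns a fixed-point finder into a root finder:

```python
def fixed_point(g, x0, xtol=1e-10, maxiter=50):
    """Find x with g(x) = x via Steffensen (Aitken del-squared) iteration."""
    p0 = x0
    for _ in range(maxiter):
        p1 = g(p0)
        p2 = g(p1)
        d = p2 - 2.0 * p1 + p0
        if d == 0.0:                 # iteration has already converged
            return p2
        p = p0 - (p1 - p0) ** 2 / d  # accelerated estimate
        if abs(p - p0) < xtol:
            return p
        p0 = p
    raise RuntimeError("no convergence")

f = lambda x: 1 - x**2
print(fixed_point(f, 0.5))                   # x with f(x) = x, about 0.618
print(fixed_point(lambda x: f(x) + x, 0.5))  # root of f, about 1.0
```

The first call lands on the fixed point Travis shows (0.6180..., where 1 - x**2 = x); the second applies Chuck's f(x) + x trick to get the root at 1.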
Now I see that I should have used f(x)+ x to find the zero 8) Thanks for the correction. Chuck > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From Chuck.Harris at sdl.usu.edu Thu Apr 11 19:40:05 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 11 Apr 2002 17:40:05 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: > -----Original Message----- > From: Travis Oliphant [mailto:oliphant at ee.byu.edu] > Sent: Thursday, April 11, 2002 2:38 PM > To: scipy-dev at scipy.org > Subject: RE: [SciPy-dev] genetic algorithm, number theory, > filterdesign,zerofinding > > > > Floats are approx evenly spaced on a log scale. If the tol > is to small and the root is large, the small changes in the > approx root needed to zero in on the true root are > computionally zero, and the routine spins with no > convergence. Its a standard problem and usually dealt with by > something like: > > > > tol = tol + eps*max(|a|,|b|) > > > > where a and b are the bounds containing the root. This is > often made to depend on the last best approx, but this adds > overhead. Anyway, the actual tol is sometimes not reached. > maybe this should raise an exception instead? > > Where should this check be made. It sounds like a good thing. Another standard solution is to pass in two tolerances, one absolute and the other relative, say x_atol, x_rtol, and then do something like tol = x_atol + x_rtol*abs(current_estimate) x_rtol is usually bit more than the smallest number such that 1 + x_rtol != 1. This number is often computed during installation of large packages, such as scipy, and made available somehow to all the routines. With python it would be easy to assign it as a default. Or we could just check x == x + dx # oh oh (I don't really like this) > > > > 4. 
newton should probably check for blowup, as > this is not uncommon > > > > > > A blowup of what? The second derivative becoming very large? > > > > newton tends to head off to infinity if the function is > bumpy and the initial estimate is insufficiently close. > This was a whole subject way back when. Wijngaarden and Dekker developed a routine that did all sorts of checks on convergence and fell back on bisection when things weren't going well. Brent added second order extrapolation, giving the Wijngaarden-Dekker-Brent zero finder. It's a total b*tch to understand, and even worse in the original Algol 60. There is actually a fairly nice Fortran implementation --- with lots of gotos --- due to Moler floating around on the net, it's called zero or some such, and I don't recall if it is the original, or Brent's improvement. This might be the way to go. The gotos actually make it easier to understand, as it is best seen as a finite state machine, and those can look pretty ugly in structured code. The problem of doing this in python is that all the checks and whatnot add *lots* of overhead. The second order extrapolation does improve things though. I also prefer my own method of doing the extrapolation, but if it's packaged up and works, who cares. Chuck > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Thu Apr 11 18:50:36 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 11 Apr 2002 18:50:36 -0400 (EDT) Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding In-Reply-To: Message-ID: > > Another standard solution is to pass in two tolerances, one absolute and the other relative, say x_atol, x_rtol, and then do something like > > tol = x_atol + x_rtol*abs(current_estimate) > > x_rtol is usually a bit more than the smallest number such that 1 + x_rtol != 1.
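The mixed-tolerance idea above, and the "computationally zero" step it guards against, can be seen directly (float is an IEEE double here; the names x_atol and x_rtol follow the message):

```python
# Machine epsilon: smallest power of two eps with 1.0 + eps != 1.0.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0
print(eps)  # 2^-52 for IEEE doubles

# For a large root, a purely absolute xtol below one ulp can never be met:
x = 1e9
print(x + 1e-12 == x)  # True -- this update step is computationally zero

# The combined tolerance from the message keeps the stopping test meaningful:
def stopping_tol(x, x_atol=1e-12, x_rtol=2 * eps):
    return x_atol + x_rtol * abs(x)

print(stopping_tol(1e9) > 1e-12)  # True
```

With the combined tolerance, the relative term dominates for large roots and the absolute term for roots near zero, which is exactly the behaviour Chuck's tol + eps*max(|a|,|b|) floor aims at.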
> This number is often computed during installation of large packages, such as scipy, and made available somehow to all the routines. With python it would be easy to assign it as a default. Or we could just check > > x == x + dx # oh oh (I don't really like this) > > > > > > > > 4. newton should probably check for blowup, as > > this is not uncommon > > > > > > > > A blowup of what? The second derivative becoming very large? > > > > > > newton tends to head off to infinity if the function is > > bumpy and the initial estimate is insufficiently close. > > > > This was a whole subject way back when. Weingarten,Dekker developed a routine that did all sorts of checks on convergence and fell back on bisection when things weren't going well. Brent added second order extrapolation, giving the Weingarten,Dekker,Brent zero finder. It's a total b*tch to understand, and even worse in the original Algol60. There is actually a fairly nice Fortran implementation --- with lots of gotos --- due to Moler floating around on the net, its called zero or some such, and I don't recall if it is the original, or Brent's improvement. This might be the way to go. The gotos actually make it easier to understand, as it is best seen as a finite state machine, and those can look pretty ugly in structured code. Great, it would be easy to throw this in with an f2py-generated wrapper. > > The problem of doing this in python is that all the checks and whatnot add *lots* of overhead. The second order extrapolation does improve things though. I also prefer my own method of doing the extrapolation, but if its packaged up and works, who cares. People are working on Python-compilers. At some-point we would like to be able to compile the solvers using that. I wouldn't expect this in the next year, but I believe it will happen to some degree. I don't mind wrapping code, you can see that I've done a lot of it throughout SciPy. Most of SciPy is just interfaces to other-people's code. 
-Travis From Chuck.Harris at sdl.usu.edu Fri Apr 12 08:22:53 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Fri, 12 Apr 2002 06:22:53 -0600 Subject: [SciPy-dev] genetic algorithm, number theory, filterdesign,zerofinding Message-ID: Hi, here are some benchmarks comparing python and c versions of some of the zero finders. There is a notable improvement in the times. I'll do up a version of the Brent routine, it's really the gold standard, clean them all up a bit and submit.

times in seconds, f(x) = 1 - x**2, a,b = .5,2, repeat = 10000

py.bisect : 3.616
cc.bisect : 1.201
py.ridder : 2.414
cc.ridder : 0.521

Chuck From eric at scipy.org Fri Apr 12 13:27:06 2002 From: eric at scipy.org (eric) Date: Fri, 12 Apr 2002 13:27:06 -0400 Subject: [SciPy-dev] automatic test script for scipy Message-ID: <033501c1e247$44ef8db0$6b01a8c0@ericlaptop> Hello group, As I've tried to get testing of the build done for the upcoming release, I found it arduous to test on even the few platforms I have here. So, I took a couple of days to put together an automatic testing facility. It is still pretty raw, but, with polishing, it should clean up nicely. The scripts automatically test SciPy builds against different versions of Python, Numeric, f2py2e, atlas (sorta). Adding things like fftw, etc. should also be fairly easy. Currently, the script runs and then mails its output to scipy-testlog at scipy.org. This is a new mailing list that you can subscribe to here: http://www.scipy.net/mailman/listinfo/scipy-testlog I doubt you want to though. The output is probably only interesting to a few of us. Also, the mail messages are currently very large (700K), and there are likely to be many (10-50) from nightly cron jobs running on test machines. It's probably better to look over the archives. Here is the one for April: http://www.scipy.net/pipermail/scipy-testlog/2002-April/date.html As for the release, it is past mid-week and it still isn't here.
The tests are mostly passing now, so it is just polishing, release docs, etc. that are needed (I think?). Still, I'm done guessing when the release will happen. :-| However, I hope to make some beta tar-balls and windows exe distributions today or this weekend for others to test. For those curious, Numeric 19.0 and before fail to work with SciPy. 20.0 works, but fails with 3 errors. 20.3 and beyond all pass. As for the script, it is working on Linux now, and I hope to get it going on cygwin and Windows (with some modification) soon. Here is a standard call:

full_scipy_build(build_dir = '/tmp/scipy_test',
                 test_level = 10,
                 python_version = '2.1.3',
                 numeric_version = '18.4.1',
                 f2py_version = '2.13.175-1250',
                 atlas_version = '3.3.14',
                 scipy_version = 'snapshot')

This will look in a local repository of tarballs (hard coded for our network right now) to see if python-2.2.1.tgz exists there. If not, it will download it from the python.org site, cache it in the local repository, unpack it, and build it. It follows this same procedure for every package (except atlas which I'll discuss in a minute). The scipy snapshot is a nightly CVS snapshot from the scipy ftp site. The builds of numeric, f2py, and scipy are done with the just-built version of python. After building everything, it runs scipy's test suite at level=10. Both the build and test reports are sent in the email message. Enthought will eventually cover Windows, Sun, Irix, RH Linux, and Mac OS X here -- all with gcc (or MSVC on windows). Right now, it only runs on our Linux server. Feel free to use the script and email reports from your architecture. You'll have to be willing to poke through some code and change settings though (mostly at the top of the script) to get it working on your local machine. It takes 15 minutes to build/test a single group of settings on our relatively fast Linux server. Things that need work: 1. The reports are way too verbose.
We need to set things up so that when Python builds successfully, it just reports success instead of the entire build process. This would cut message size significantly. I think this is very simple. 2. Clean up reports. Right now, they are very raw. They should probably have more OS/compiler information and also an easier-to-find report of success or failure. Also, the test results are coming out in screwy orders (stderr/stdout issues I expect), and this should be fixed. I think a custom unittest report class would solve a lot of this. 3. Add compiler options (cc, gcc, kgcc, etc.) for the individual packages. Some people are reporting that SciPy built with compiler X doesn't play well with Python or Numeric built with compiler Y. It'd be nice to set the type of compiler each package is built with. 4. Atlas. Right now the script is hardcoded to download some atlas libraries I built for RH on PIII. This is obviously not portable. Automating the atlas build is also hard because it requires user input. I think we could replace config.c with our own config.c that just removes the user input requests and uses the default values. That wouldn't be hard. We'd also need to make config.c emit the directory where it was going to put the library files in an easy-to-parse way so that we'd know where to copy them from. An alternative approach is to set up a repository of precompiled atlas libraries (similar to the one I have now). This could go on SciPy, and we could use some standard naming convention for OS/architecture. I think we should do both... 5. Testing against the OS installed Python instead of a locally built one. This is probably pretty important -- especially for windows. I haven't set this up yet. The tests always use a version of Python built by the tests. 6. CVS testing. Right now, the test structure can only build from tar balls. 7. Move from email to a web interface. The reports should really be summarized on the web in a table. 8. Agent based?
It'd be nice to have a web of test machines that people could "tell" to start testing the latest CVS. This would allow people to find out if their latest changes break other OSes. Shouldn't be too difficult, but I'm also not sure how much this is really worth. The nightly crons are likely enough. 9. Speeding up tests. It takes a while to build and then test all these packages -- more than 15 minutes on our Linux server. The script already reuses previously built versions of a tool (make the 2nd time runs very fast on Python because the files are all built), but there may be some streamlining that would help. Not sure this can really be improved. 10. Separate .cfg file so that site dependent features are included in the CVS repository. 11. Many more I'm sure. regards, eric -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 From eric at scipy.org Fri Apr 12 15:14:45 2002 From: eric at scipy.org (eric) Date: Fri, 12 Apr 2002 15:14:45 -0400 Subject: [SciPy-dev] automatic test script for scipy References: <033501c1e247$44ef8db0$6b01a8c0@ericlaptop> Message-ID: <036601c1e256$4ef8af30$6b01a8c0@ericlaptop> Test reports should be much less verbose now -- about 20K or so when tests pass. Death in a Python makefile will be more verbose, but still less so than previously. It remains to be seen whether they are now too short for debugging when something in the build process goes wrong.
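The "report success briefly, keep full logs only on failure" behaviour Eric describes can be sketched in a few lines (the function and parameter names here are hypothetical, not taken from his script):

```python
def summarize_step(name, output, succeeded, tail_lines=20):
    """Condense one build step's report: a single line on success,
    the tail of the log on failure.

    Bounding the output kept for a failed step is what shrinks a
    700K everything-included report down to a few K when most steps pass.
    """
    if succeeded:
        return "%s: ok" % name
    tail = "\n".join(output.splitlines()[-tail_lines:])
    return "%s: FAILED\n%s" % (name, tail)

print(summarize_step("build-python", "long build log...", True))  # build-python: ok
```

Applying this per package (python, Numeric, f2py, scipy) gives exactly the trade-off Eric notes: tiny reports when tests pass, more detail retained only where a build dies.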
eric From jochen at unc.edu Fri Apr 12 17:18:06 2002 From: jochen at unc.edu (Jochen Küpper) Date: 12 Apr 2002 17:18:06 -0400 Subject: [SciPy-dev] weave 3.0.4 progress Message-ID: Hey, testing current scipy with gcc-3.0.4 gives only two failures. Some warnings about numeric_limits in weave, though:) Attached is a log. Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy.log.gz Type: application/x-gunzip Size: 2612 bytes Desc: not available URL: From pearu at scipy.org Fri Apr 12 17:57:26 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Fri, 12 Apr 2002 16:57:26 -0500 (CDT) Subject: [SciPy-dev] automatic test script for scipy In-Reply-To: <033501c1e247$44ef8db0$6b01a8c0@ericlaptop> Message-ID: Hi Eric, You have done a nice job. On Fri, 12 Apr 2002, eric wrote: > 4. > Atlas. Right now the script is hardcoded to download some atlas libraries I > built for RH on PIII. This is obviously not portable.
Automating the atlas > build is also hard because it requires user input. I think we could replace > config.c with our own config.c that just removed user input request and used the > default values. That wouldn't be hard. We'd also need to make config.c emit > the directory where it was going to put the library files in an easy to parse > way so that we'd know where to copy them from. Maybe the following hack is simpler:

cd /tmp/dir
rm -rf ATLAS  # must start from a clean src to avoid config problems
tar xzf /path/to/atlas3.3.14.tar.gz
cd ATLAS
python -c 'for i in range(100): print' | make  # everything is default
ARCH=`python -c 'import glob;print glob.glob("Make*UNKNOWN")[0][5:]'`
make install arch=$ARCH

that will build ATLAS libraries to directory lib/$ARCH. Further fixing liblapack.a should be simple. Other thing: it is not completely clear to me whether these automatic test hooks are only to be used from enthought machines (from a cron job) or should they be used by developers as well? Building everything from scratch can take hours on other machines. Pearu From eric at scipy.org Fri Apr 12 18:40:35 2002 From: eric at scipy.org (eric) Date: Fri, 12 Apr 2002 18:40:35 -0400 Subject: [SciPy-dev] automatic test script for scipy References: Message-ID: <038b01c1e273$10265ab0$6b01a8c0@ericlaptop> > > Hi Eric, > > You have done a nice job. Thanks! Vinay Sajip did a nice job writing the logging.py tool based on the PEP also. It certainly came in handy here. http://www.red-dove.com/python_logging.html > > On Fri, 12 Apr 2002, eric wrote: > > > 4. > > Atlas. Right now the script is hardcoded to download some atlas libraries I > > built for RH on PIII. This is obviously not portable.
We'd also need to make config.c emit > > the directory where it was going to put the library files in an easy-to-parse > > way so that we'd know where to copy them from. > > Maybe the following hack is simpler:

> cd /tmp/dir
> rm -rf ATLAS # must start from a clean src to avoid config problems
> tar xzf /path/to/atlas3.3.14.tar.gz
> cd ATLAS
> python -c 'for i in range(100): print' | make #everything is default
> ARCH=`python -c 'import glob;print glob.glob("Make*UNKNOWN")[0][5:]'`
> make install arch=$ARCH

That's a very good solution -- except... On some platforms -- like our RH 7.1 machine, there is one question where you have to answer "no" instead of yes (which is the default). That is because RH uses a 2.96 compiler which doesn't produce optimal code for ATLAS. Even when I specify make CC=kgcc and the config.c file is built with kgcc, the process still detects 2.96 and complains about it. We need to figure out a way around this. However, your fix will work fine on most machines, so I think it is a fine solution for now. I'll just have a "binary atlas install" class that we'll use here. Your fix for getting the ARCH is good also. > > that will build ATLAS libraries to directory lib/$ARCH. Further fixing > liblapack.a should be simple. Yes, I have the stuff to download and build the unoptimized lapack stuff in the code, so the ar -r stuff should be reasonably simple. > > Other thing: it is not completely clear to me whether these automatic > test hooks are only to be used from enthought machines (from a cron > job) or should they be used by developers as well? Building everything from > scratch can take hours on other machines. Please use them wherever you like. I think now that it is spewing out 20K or so of output, the mailing list will give us a reasonable feel for how scipy is doing on multiple platforms. I'm not sure the mailing list is the best format for this, but it was quick and dirty and gives everyone access to the data.
Later we can beautify this whole process and perhaps get rid of the mailing list. As for the time, if you're not building atlas and have all but the scipy snapshot in a local file repository, the build process should be less than an hour -- no? It is just building ATLAS that will cause machines to grind on for endless hours. Also, after the scripts have run once, the second run is much faster (no compiling). I'd like to set up some more scenarios such as testing against a machine's current installations also. This would be faster. It should also be pretty simple -- detect the python version, build anything it is missing into some tempdir (numeric, f2py, atlas, whatever), and then build scipy. It is all just logistics. Also, if others choose to run this, we may want to hack the scripts some so that the machine name and more diagnostics are returned. By the way, did you get this to run on your machine at all, Pearu? I'd be interested to learn what needs to be re-factored to get less specific to our network. eric From pearu at scipy.org Sat Apr 13 06:04:11 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sat, 13 Apr 2002 05:04:11 -0500 (CDT) Subject: [SciPy-dev] automatic test script for scipy In-Reply-To: <038b01c1e273$10265ab0$6b01a8c0@ericlaptop> Message-ID: Eric, On Fri, 12 Apr 2002, eric wrote: > > Maybe the following hack is simpler:

> > cd /tmp/dir
> > rm -rf ATLAS # must start from a clean src to avoid config problems
> > tar xzf /path/to/atlas3.3.14.tar.gz
> > cd ATLAS
> > python -c 'for i in range(100): print' | make #everything is default
> > ARCH=`python -c 'import glob;print glob.glob("Make*UNKNOWN")[0][5:]'`
> > make install arch=$ARCH

> That's a very good solution -- except... On some platforms -- like our RH 7.1 > machine, there is one question where you have to answer "no" instead of > yes (which is the default). That is because RH uses a 2.96 compiler which doesn't > produce optimal code for ATLAS.
Even when I specify make CC=kgcc and the > config.c file is built with kgcc, the process still detects 2.96 and complains > about it. We need to figure out a way around this. That should be easy. I started to use the following script to build my local ATLAS library:

-----------------------------------
ATLAS_VERS=3.3.14
ATLAS_SRC=atlas$ATLAS_VERS.tar.gz
TMPDIR=`mktemp -d`
cp -v ./$ATLAS_SRC $TMPDIR || exit 1
cd $TMPDIR
echo "Unpacking"
tar xzf atlas$ATLAS_VERS.tar.gz
cd ATLAS || exit 1
# Configure ATLAS:
python -c 'for a in [""]*4+["4"]+[""]*3+["/usr/bin/g77","-Wall \
-fno-second-underscore -fpic -O3 -funroll-loops -march=i686 \
-malign-double"]+[""]*10: print a' | make
ARCH=`python -c 'import glob;print glob.glob("Make*_*")[0][5:]'`
make install arch=$ARCH
-----------------------------------

where the 5th option "4" corresponds to setting PII and with the 9th option you can change your compiler. In the 10th option I have supplied some optimization flags but they are not necessary. This gives you an idea how to fix the compiler under RH. > > Other thing: it is not completely clear to me whether these automatic > > test hooks are only to be used from enthought machines (from a cron > > job) or should they be used by developers as well? Building everything from > > scratch can take hours on other machines. > > Please use them wherever you like. I think now that it is spewing out 20K or so > of output, the mailing list will give us a reasonable feel for how scipy is > doing on multiple platforms. I'm not sure the mailing list is the best format > for this, but it was quick and dirty and gives everyone access to the data. > Later we can beautify this whole process and perhaps get rid of the mailing > list. Mailing list is fine for me. Could you add the real version number of scipy to the subject line instead of just a snapshot? BTW, have you thought about making a scipy-cvs list? I sent you patches a while ago.
> I'd like to set up some more scenarios such as testing against a machine's > current installations also. This would be faster. It should also be pretty > simple -- detect the python version, build anything it is missing into some > tempdir (numeric, f2py, atlas, whatever), and then build scipy. It is all just > logistics. Great. > Also, if others choose to run this, we may want to hack the scripts some so that > the machine name and more diagnostics are returned. > > By the way, did you get this to run on your machine at all, Pearu? I'd be > interested to learn what needs to be re-factored to get less specific to our > network. No, I didn't try. It was a bit late here. Obviously

local_repository = "/home/shared/tarballs"
local_mail_server = "enthought.com"

are specific to your network but these are minor issues. Some questions arose for me, however: Is it correct that if local_repository contains sources for all required software (with the specified version numbers) then nothing is downloaded from the Internet? Another note: if the specified software is installed once to dst_dir, then why not keep it there instead of removing it after tests? This would avoid re-compilation if nothing is changed in the software. I guess dst_dir must then include the version numbers.
For example, if python is installed with the following command

  setup.py install --prefix=/dst_dir/Python-2.1.3

then subsequent installation commands for packages would be

  cd <pkg src dir>
  /dst_dir/Python-2.1.3/bin/python setup.py install
  cd <pkg src dir>
  /dst_dir/Python-2.1.3/bin/python setup.py install \
    --prefix=/dst_dir/Python-2.1.3/Numeric-<numpy_ver>

and testing would be executed in the following loop

  for py_ver in ['2.1.3',..]:
    for numpy_ver in ['18.3',..]:
      for atlas_ver in [...]:
        cd <scipy src dir>
        # Install
        ATLAS=/path/to/atlas-<atlas_ver> \
        PYTHONPATH=/dst_dir/Python-<py_ver>/Numeric-<numpy_ver>/lib/python2.1/site-packages \
        /dst_dir/Python-<py_ver>/bin/python setup.py install
        # Test
        PYTHONPATH=/dst_dir/Python-<py_ver>/Numeric-<numpy_ver>/lib/python2.1/site-packages \
        /dst_dir/Python-<py_ver>/bin/python -c "import scipy;scipy.test(1)"

But if all this takes too much time to implement, then we could leave it for SciPy-0.3 and now concentrate on getting SciPy-0.2 out. Current CVS seems to be quite stable and we should use it before it gets unstable again due to new contributions. I see that you want to make SciPy releases perfect (testing lots of platforms, various combinations of software packages, etc.). It is a very good goal. But I think a few initial releases can be a bit imperfect (incomplete in various parts like tests, docs, etc). ;-) Pearu From wagner.nils at vdi.de Sat Apr 13 14:26:53 2002 From: wagner.nils at vdi.de (My VDI Freemail) Date: Sat, 13 Apr 2002 20:26:53 +0200 Subject: [SciPy-dev] ImportError: cannot import name P_roots Message-ID: <200204131821.g3DILXZ14064@scipy.org> Hi, I have built and installed scipy via latest CVS. Everything works fine so far.

Python 2.2 (#1, Mar 26 2002, 15:46:04)
[GCC 2.95.3 20010315 (SuSE)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy import *
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.2/site-packages/scipy/__init__.py", line 57, in ?
    import optimize, integrate, signal, special, interpolate, cow, ga, cluster, weave
  File "/usr/lib/python2.2/site-packages/scipy/integrate/__init__.py", line 28, in ?
    from quadrature import *
  File "/usr/lib/python2.2/site-packages/scipy/integrate/quadrature.py", line 5, in ?
    from scipy.special.orthogonal import P_roots
ImportError: cannot import name P_roots
>>>

Any idea? Nils From eric at scipy.org Sat Apr 13 22:09:43 2002 From: eric at scipy.org (eric) Date: Sat, 13 Apr 2002 22:09:43 -0400 Subject: [SciPy-dev] fblas tests Message-ID: <042801c1e359$719a80f0$6b01a8c0@ericlaptop> Hey Pearu, I was looking over the behavior of the new fblas wrappers and think we should make some changes back to how the previous version worked (for reasons explained below). The old test suite is pretty extensive and covers many of the various calling approaches that can occur. We should definitely try to get all of its tests to pass, as they exercise the interface fairly well. The current blas wrappers sacrifice performance in an effort to make the functions more Pythonic in their behavior and also reduce the chance of operator error. However, I consider the fblas and cblas very low level and aimed at maximum performance. If people are using them, they should have access to the maximum speed that the underlying library can provide. This means there shouldn't be any extra copying if we can help it. The result may be a less "Pythonic" interface, but that is OK. Pythonic approaches already exist for most of the problems. Consider scopy. Someone wanting to use Pythonic approaches for making strided copies from one array to another will use:

>>> a[:8:4] = b[:6:3]

instead of:

>>> fblas.scopy(a,b,n=2,incx=4,incy=3)

The person choosing the second would do so only for speed. Also, they would expect the values of b to get changed in the operation since scopy moves portions of a into portions of b. The current wrapper doesn't do this.
It makes a copy of b, copies the values of a into it, and then returns this array. It leaves b unaltered.

>>> x = arange(8.,typecode=Float32)*10
>>> y = arange(6.,typecode=Float32)
>>> x
array([ 0., 10., 20., 30., 40., 50., 60., 70.])
>>> y
array([ 0., 1., 2., 3., 4., 5.],'f')
# An output array is created with the changes to y instead of changing y in place.
>>> fblas.scopy(x,y,n=2,incx=4,incy=3)
array([ 0., 1., 2., 40., 4., 5.],'f')
# This guy should have been changed in place
>>> y
array([ 0., 1., 2., 3., 4., 5.],'f')

I did a comparison of this function with the Pythonic approach for very large arrays and it was actually slower. I removed the "copy" from the "intent(in,copy,out) y" portion of the interface definition, and scopy becomes noticeably faster (a 1.5x or greater speed-up) over the indexed copy approach. It also injects its values directly into b, so it removes the extra memory allocation and copying. Here is the output from my test:

C:\home\ej\wrk\scipy\linalg\build\lib.win32-2.1\linalg>python copy_test.py
python: 0.0879670460911
scopy -- without copy to output: 0.0521141653478
scopy -- with copy to output: 0.157154051702

I've included the script below. Using this new approach, non-contiguous arrays passed into y will result in exceptions. That is OK though. I think the experts that will use these functions would rather force these functions to require contiguous input than have them handle the non-contiguous case at the expense of optimal performance. So, I'm gonna work my way through the interface and try to switch the behavior back to the old approach. I think it only requires removing the "copy" argument from most of the arguments. Let me know if you think of other things I should change. Also, let me know if there are other pitfalls I'm not thinking of here. thanks, eric

---------------------
import time
from scipy_base import *
import fblas
import scipy.linalg.fblas

its = 1
l = 250000.
#l = 10.
x_size = 8.*l
y_size = 6.*l

x = arange(x_size)*10.
x = x.astype(Float32)
n = len(x) / 4.
y = arange(y_size,typecode=Float32)

t1 = time.clock()
for i in range(its):
    y[::3] = x[::4]
t2 = time.clock()
print 'python: ', t2 - t1
r_python = y.copy() # should really be y, but interface is different than expected.
#-----------------------------------------------------------------------------
x = arange(x_size)*10.
x = x.astype(Float32)
n = len(x) / 4.
y = arange(y_size,typecode=Float32)

t1 = time.clock()
for i in range(its):
    z = fblas.scopy(x,y,n=n,incx=4,incy=3)
t2 = time.clock()
print 'scopy -- without copy to output: ', t2 - t1
r_nocopy = y.copy()
#-----------------------------------------------------------------------------
x = arange(x_size)*10.
x = x.astype(Float32)
n = len(x) / 4.
y = arange(y_size,typecode=Float32)

t1 = time.clock()
for i in range(its):
    z = scipy.linalg.fblas.scopy(x,y,n=n,incx=4,incy=3)
t2 = time.clock()
print 'scopy -- with copy to output: ', t2 - t1
r_copy = z.copy() # should really be y, but interface is different than expected.
#-----------------------------------------------------------------------------
#print 'y:', y
#print 'z:', z
#print 'nocopy:', r_nocopy
#print 'python:', r_python
#assert(alltrue(r_nocopy == r_python))

# This illustrates why the copy is needed for a "robust" interface.
#x = arange(8.*100000.)*10.
#x = x.astype(Float32)
#n = len(x) / 4
#yy = arange(12.,typecode=Float32)
#z = fblas.scopy(x,y[::2],n=2,incx=4,incy=3)
#print 'y:', yy
#print 'z:', z

-- Eric Jones Enthought, Inc.
[www.enthought.com and www.scipy.org] (512) 536-1057 From peterson at math.utwente.nl Sun Apr 14 01:39:46 2002 From: peterson at math.utwente.nl (Pearu Peterson) Date: Sun, 14 Apr 2002 07:39:46 +0200 (CEST) Subject: [SciPy-dev] Re: fblas tests In-Reply-To: <042801c1e359$719a80f0$6b01a8c0@ericlaptop> Message-ID: Hi Eric, On Sat, 13 Apr 2002, eric wrote: > I was looking over the behavior of the new fblas wrappers and think we should > make some changes back to how the previous version worked (for reasons explained > below). The old test suite is pretty extensive and covers many of the various > calling approaches that can occur. We should definitely try to get all of its > tests to pass, as they exercise the interface fairly well. I agree that fblas needs more extensive tests (as there were in linalg1). > The current blas wrappers sacrifice performance in an effort to make the > functions more Pythonic in their behavior and also reduce the chance of operator > error. However, I consider the fblas and cblas very low level and aimed at > maximum performance. If people are using them, they should have access to the > maximum speed that the underlying library can provide. This means there > shouldn't be any extra copying if we can help it. The result may be a less > "Pythonic" interface, but that is OK. Pythonic approaches already exist for > most of the problems. Consider scopy. Someone wanting to use Pythonic > approaches for making strided copies from one array to another will use: > > >>> a[:8:4] = b[:6:3] > > instead of: > > >>> fblas.scopy(a,b,n=2,incx=4,incy=3). > > The person choosing the second would do so only for speed. Also, they would > expect the values of b to get changed in the operation since scopy moves > portions of a into portions of b. The current wrapper doesn't do this.
So you should use fblas.scopy(a,b,n=2,incx=4,incy=3,overwrite_y=1)

> >>> x = arange(8.,typecode=Float32)*10
> >>> y = arange(6.,typecode=Float32)
> >>> x
> array([ 0., 10., 20., 30., 40., 50., 60., 70.])
> >>> y
> array([ 0., 1., 2., 3., 4., 5.],'f')
> # An output array is created with the changes to y instead of changing y in place.
> >>> fblas.scopy(x,y,n=2,incx=4,incy=3)
> array([ 0., 1., 2., 40., 4., 5.],'f')
> # This guy should have been changed in place
> >>> y
> array([ 0., 1., 2., 3., 4., 5.],'f')

With the overwrite_y option set you will have:

>>> from scipy import *
>>> x = arange(8.,typecode=Float32)*10
>>> y = arange(6.,typecode=Float32)
>>> x
array([ 0., 10., 20., 30., 40., 50., 60., 70.])
>>> y
array([ 0., 1., 2., 3., 4., 5.],'f')
>>> linalg.fblas.scopy(x,y,n=2,incx=4,incy=3,overwrite_y=1)
array([ 0., 1., 2., 40., 4., 5.],'f')
>>> y
array([ 0., 1., 2., 40., 4., 5.],'f')

> I did a comparison of this function with the Pythonic approach for very large > arrays and it was actually slower. I removed the "copy" from the > "intent(in,copy,out) y" portion of the interface definition, and scopy becomes No need for that. Use overwrite_y.

>>> print linalg.fblas.scopy.__doc__
scopy - Function signature:
  y = scopy(x,y,[n,offx,incx,offy,incy,overwrite_y])
Required arguments:
  x : input rank-1 array('f') with bounds (*)
  y : input rank-1 array('f') with bounds (*)
Optional arguments:
  n := (len(x)-offx)/abs(incx) input int
  offx := 0 input int
  incx := 1 input int
  overwrite_y := 0 input int
  offy := 0 input int
  incy := 1 input int
Return objects:
  y : rank-1 array('f') with bounds (*)

Note that using intent(copy) makes the default overwrite_y = 0 while using intent(overwrite) sets the default overwrite_y = 1. I would prefer intent(copy) and using overwrite_y = 1 explicitly.
Here is the output from my test: > > C:\home\ej\wrk\scipy\linalg\build\lib.win32-2.1\linalg>python copy_test.py > python: 0.0879670460911 > scopy -- without copy to output: 0.0521141653478 > scopy -- with copy to output: 0.157154051702 So, with various overwrite_y option values I get python: 0.12 scopy -- with default overwrite_y=0: 0.21 scopy -- with overwrite_y=1: 0.07 > I've included the script below. > > Using this new approach, non-contiguous arrays passed into y will result in > exceptions. That is OK though. I think the experts that will use these > functions would rather force these functions to require contiguous input than to > have them handle the non-contiguous case at the expense of optimal performance. > > So, I'm gonna work my way through the interface and try to switch the behavior > back to the old approach. I think it only requires removing the "copy" argument > from most of the arguments. Let me know if you think of other things I should > change. Also, let me know if there are other pitfalls I'm not thinking of here. Please, don't remove "copy" arguments. Use overwrite_y=1 instead. But if you insist exception, then let's change intent(copy) to intent(overwrite), that is, switching the defaults of overwrite_* options. That is fine with me. Pearu From wagner.nils at vdi.de Sun Apr 14 03:59:07 2002 From: wagner.nils at vdi.de (My VDI Freemail) Date: Sun, 14 Apr 2002 09:59:07 +0200 Subject: [SciPy-dev] wxPython Message-ID: <200204140753.g3E7rlZ21282@scipy.org> Hi, I have just build and installed wxPython on SuSE8.0. However there seems to be a problem ~/mysoft/wxPython-2.3.2.1/demo> python demo.py Traceback (most recent call last): File "demo.py", line 3, in ? import Main File "Main.py", line 15, in ? from wxPython.wx import * File "/usr/lib/python2.2/site-packages/wxPython/__init__.py", line 20, in ? import wxc ImportError: /usr/lib/python2.2/site-packages/wxPython/wxc.so: undefined symbol: SeekI__13wxInputStreamx10wxSeekMode What can I do ? 
Nils From pearu at scipy.org Sun Apr 14 06:24:14 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Sun, 14 Apr 2002 05:24:14 -0500 (CDT) Subject: [SciPy-dev] fblas tests In-Reply-To: <042801c1e359$719a80f0$6b01a8c0@ericlaptop> Message-ID: Hi Eric, On Sat, 13 Apr 2002, eric wrote: > I was looking over the behavior of the new fblas wrappers and think we should > make some changes back to how the previous version worked (for reasons explained > below). The old test suite is pretty extensive and covers many of the various > calling approaches that can occur. We should definitely try to get all of its > tests to pass, as they exercise the interface fairly well. I hope you don't want the old interface back just to get the old tests working quickly again. In this particular case, the interface tests should be changed, not the interfaces, I think. And the main reason is that the new interfaces are definitely better (and not just because I wrote them ;-). The new interfaces are better because a) ... they avoid side-effects (making the intent(inout) feature obsolete). In C or Fortran, intent(inout) type of side-effects are widely used and often the only way to get the job done. But this is not the case for Python. I think that we should adapt the interfaces to Python style as much as possible as they are mostly used by Python users, not C or Fortran programmers. b) ... unless power users request side-effects explicitly (the overwrite_* = 1 feature) for performance reasons. Note that the new interfaces do not sacrifice performance or memory consumption if they are used properly; in fact, they are more optimal compared to the old interfaces. Of course, the proper usage must be documented somewhere. c) ...
it is safe and simple for casual users to use them and get some speed-up compared to not using them at all (scopy is a rather extreme counter-example that shows the interface behaving poorly with default values, but that does not prove that all other interfaces are poor, because these default values are optimal for these other cases). The old interface could not do all that because f2py did not have the required features earlier. In the CVS log you also mention reverting gemv to the previous interface style. What do you mean? The task of gemv is

  y <- alpha*op(a)*x + beta*y

gemv old signature:
  gemv(trans,a,x,y,[m,n,alpha,lda,incx,beta,incy])

gemv current signature:
  y = gemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y])

Notes: 1) in old gemv, arguments m,n,lda are redundant. Why would you want to restore them? 2) trans in old gemv expects a character from the string 'ntc' while trans in new gemv expects an int from the list [0,1,2]. I see little difference in using either of them; in the latter case the interface code is a bit simpler. 3) In new gemv, alpha is required. I agree that it should be made optional with the default value 1. 4) New gemv has additional offx and offy arguments so that using gemv from Python covers all possible scenarios that one would have when using gemv directly from C or Fortran. 5) When changing the style here, you should do it also in all other wrappers of blas1, blas2, blas3, lapack routines. I can assure you, it is not a simple replace-str1-with-str2 job. To sum up, I would suggest the following signature for gemv:

  y = gemv(a,x,[alpha,beta,y,offx,incx,offy,incy,trans,overwrite_y])

that in the simplest case, y = gemv(a,x), corresponds to matrix multiplication of a matrix `a' with a vector `x'. And gradually using other optional arguments the task of gemv is extended to more specific cases. What do you think? If you have questions about the signatures and the new constructs in them, I am happy to explain.
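To make the proposed semantics concrete, here is a minimal sketch of what gemv computes, written in modern NumPy. This is illustrative only -- the function name mirrors the discussion, but the body is an assumption, not the actual f2py-generated wrapper, and the offx/incx/offy/incy and overwrite_y options are omitted:

```python
import numpy as np

def gemv(a, x, alpha=1.0, beta=0.0, y=None, trans=0):
    """Sketch of y <- alpha*op(a)*x + beta*y with the proposed defaults.

    trans selects op(a): 0 -> a, 1 -> a.T, 2 -> conj(a).T, mirroring the
    integer codes discussed above.
    """
    op_a = {0: a, 1: a.T, 2: a.conj().T}[trans]
    if y is None:
        # No y supplied: behave as a plain (scaled) matrix-vector product.
        y = np.zeros(op_a.shape[0], dtype=np.result_type(a, x))
    return alpha * op_a.dot(x) + beta * y

# Simplest case: y = gemv(a, x) is an ordinary matrix-vector multiply.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, 1.0])
print(gemv(a, x))           # [3. 7.]
print(gemv(a, x, trans=1))  # [4. 6.]
```

The optional arguments then gradually extend this simplest case toward the full BLAS call, which is the design Pearu describes.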
Regards, Pearu From eric at scipy.org Mon Apr 15 09:33:37 2002 From: eric at scipy.org (eric) Date: Mon, 15 Apr 2002 09:33:37 -0400 Subject: [SciPy-dev] fblas tests References: Message-ID: <050a01c1e482$260bbf30$6b01a8c0@ericlaptop> > > > I was looking over the behavior of the new fblas wrappers and think we should > > make some changes back to how the previous version worked (for reasons explained > > below). The old test suite is pretty extensive and covers many of the various > > calling approaches that can occur. We should definitely try to get all of its > > tests to pass, as they exercise the interface fairly well. > > I hope you don't want the old interface back just to get the old > tests working quickly again. In this particular case, the interface tests > should be changed, not the interfaces, I think. The old tests were pretty much correct, though I did find one error in a strided array test. I wasn't aware of the added overwrite keyword; adding it would have corrected all the old tests. Still, I feel these guys are explicitly low level enough that overwrite should be the default behaviour. If we want to put a higher level wrapper around them (such as matrix_multiply, or matmult), then that could default to (or even only allow) copy. > And the main reason is that the new interfaces are definitely better > (and not just because I wrote them ;-). > > The new interfaces are better because > > a) ... they avoid side-effects (making the intent(inout) feature > obsolete). > In C or Fortran, intent(inout) type of side-effects are widely used > and often the only way to get the job done. But this is not the case > for Python. I think that we should adapt the interfaces to Python > style as much as possible as they are mostly used by Python users, > not C or Fortran programmers. > > b) ... unless power users request side-effects explicitly (the > overwrite_* = 1 feature) for performance reasons.
> > Note that the new interfaces do not sacrifice performance or memory > consumption if they are used properly; in fact, they are more optimal > compared to the old interfaces. Of course, the proper usage must be > documented somewhere. I think "power users", or at least people that understand explicitly what they are doing, are the only ones who will ever touch these things. They have goofy names, and were designed for use in Fortran to copy data from one array to another. Preserving this behavior is pretty much the only way to get any useful, efficient work out of them. Otherwise using pure Python will get you reasonably close to the same speed. I've only looked at 5 of the functions so far, but the interfaces looked pretty much the same on all of them with the exception of the default "copy" behavior and 'a' not having a default value of 1 for ?axpy (I added it back in). If the others are different and improved, then that is great. > > c) ... it is safe and simple for casual users to use them and get > some speed-up compared to not using them at all (scopy is a rather > extreme counter-example that shows the interface behaving poorly with > default values, but that does not prove that all other interfaces are > poor, because these default values are optimal for these other cases). I guess I feel there shouldn't be a "casual" user of these routines. They should never have to know that gemm is a matrix multiply. We should have matmult or something like that that they call. > > The old interface could not do all that because f2py did not have the required > features earlier. > right. > > In the CVS log you also mention reverting gemv to the previous interface > style. What do you mean? The task of gemv is > > y <- alpha*op(a)*x + beta*y > > gemv old signature: > gemv(trans,a,x,y,[m,n,alpha,lda,incx,beta,incy]) > > gemv current signature: > y = gemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y]) > > Notes: > 1) in old gemv, arguments m,n,lda are redundant.
Why would you want to > restore them? The original purpose of these wrappers was to give experts writing efficient linear algebra routines, such as LAPACK, a tool for experimenting with the low-level BLAS routines from Python. I occasionally use BLAS, but never for such purposes, so I don't know the needs of these users. As a result, the original wrappers tried to follow the API as faithfully as possible, while adding keyword variables when they were helpful. It looks like m, lda, and n are now hard-coded to be the shape of a. I am not sure that linear algebra experts always want this behavior. That is why they were left as they were. I think these have been replaced in some sense by offx and offy, which aren't even a part of the Fortran interface. As a linear algebra non-expert, I wasn't confident enough to go through and change a long-standing interface. The current one may indeed be superior to the original, but I think we need to ask some longtime and expert users of BLAS if our interfaces limit the capabilities of the original functions in some way. Pearu, you may be an expert in the field. If so, reassure me that the interfaces are not limiting, and we'll move on. Otherwise, we should probably ask someone like Clint Whaley (if he is willing) to look them over and make comments before we change them. > > 2) trans in old gemv expects a character from the string 'ntc' while > trans in new gemv expects an int from the list [0,1,2]. > I see little difference in using either of them; in the latter case > the interface code is a bit simpler. This is fine. This is not the kind of change I objected to. > > 3) In new gemv, alpha is required. I agree that it should be made > optional with the default value 1. cool. > > 4) New gemv has additional offx and offy arguments so that using gemv > from Python covers all possible scenarios that one would have when > using gemv directly from C or Fortran. OK. If this is true, then maybe they are a good addition.
> > 5) When changing the style here, you should do it also in all > other wrappers of blas1, blas2, blas3, lapack routines. I can assure you, > it is not a simple replace-str1-with-str2 job. > > To sum up, I would suggest the following signature for gemv: > > y = gemv(a,x,[alpha,beta,y,offx,incx,offy,incy,trans,overwrite_y]) > > that in the simplest case, y = gemv(a,x), corresponds to > matrix multiplication of a matrix `a' with a vector `x'. > And gradually using other optional arguments the task of gemv is extended > to more specific cases. > > What do you think? I like overwrite as the default for the blas stuff. And the offx and offy still need some discussion. If there are any other heavy-duty blas users, please speak up now. thanks, eric From pearu at scipy.org Mon Apr 15 14:52:16 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 15 Apr 2002 13:52:16 -0500 (CDT) Subject: [SciPy-dev] fblas tests In-Reply-To: <050a01c1e482$260bbf30$6b01a8c0@ericlaptop> Message-ID: Hi, On Mon, 15 Apr 2002, eric wrote: > The old tests were pretty much correct, though I did find one error in a strided > array test. I wasn't aware of the added overwrite keyword; adding it would have > corrected all the old tests. Still, I feel these guys are explicitly low level > enough that overwrite should be the default behaviour. If we want to put a > higher level wrapper around them (such as matrix_multiply, or matmult), then that > could default to (or even only allow) copy. So you prefer living on the edge ;) Say, for a function foo we can have two signatures:

1) x = foo(x,overwrite_x=1)
   PLUS: it is convenient for a power user not to specify overwrite_x if an in-situ change of x is allowed.
   MINUS: Explicit is better than implicit. The above is implicit.
   MINUS: So, if a careless/tired/etc power user defines

     def bar(x):
         x1 = foo(x)
         return x1 + x

   then the result can be unpredictable: if foo increases its argument with type 'i' by one, then we have

     bar(1) -> 3
     bar(array(1,"i")) -> 4

2) x = foo(x,overwrite_x=0)
   MINUS: if an in-situ change of x is desired, then the user must specify the overwrite_x = 1 argument.
   PLUS: if an in-situ change of x is desired, then the user must specify the overwrite_x = 1 argument. Explicit is better than implicit.
   PLUS: A careless/tired/etc power user can safely define bar as above.

Purely because 'Explicit is better than implicit' I would prefer the default overwrite to be 0, even for low-level functions such as in fblas. It is more typing for the developer, but later when looking at the code, it is clear what the developer expected to happen. In-situ changes are not Pythonic and therefore such things should show up explicitly when used in Python. Even a C developer while using Python would not assume in-situ changes by default, just because it is not Pythonic, IMHO. Personally, I am not against intent(overwrite). But I just imagine that intent(copy) is safer for other developers. Either they use overwrite=1 if it is really needed or they don't need to bother about setting overwrite=0 if overwrite is not expected or desired. And note that using overwrite_x = 1 makes sense only if you don't care what the contents of x would be after the call. Such a situation can be useful only for temporary variables initially filled with input data. You should _not_ use overwrite_x = 1 to get the same effect as if the argument were intent(inout). Arguments that are changed in-situ and filled with output data should always be defined as intent(in,out), that is, x = foo(x) is better than foo(x) if x is going to be changed in-situ. Do you agree? > I think "power users", or at least people that understand explicitly what they > are doing, are the only ones who will ever touch these things.
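The foo/bar hazard above can be reproduced with plain Python objects. The following is an illustrative sketch only: this foo is hypothetical, standing in for any wrapper with an overwrite flag, and a list plays the role of a mutable array while an int plays the role of an immutable scalar:

```python
# Hypothetical stand-in for a wrapped routine that adds 1 to its argument.
# With overwrite_x=1 a mutable argument is modified in place, the way an
# intent(overwrite) wrapper would reuse the input buffer.
def foo(x, overwrite_x=0):
    if isinstance(x, list):
        if overwrite_x:
            for i in range(len(x)):
                x[i] += 1
            return x                 # same object, overwritten
        return [v + 1 for v in x]    # fresh copy, x untouched
    return x + 1                     # immutable scalar: always a fresh object

def bar(x):
    # A careless caller takes the in-place path (the overwrite-by-default
    # style under discussion)...
    x1 = foo(x, overwrite_x=1)
    # ...then reuses x, assuming it still holds the original value.
    if isinstance(x, list):
        return x1[0] + x[0]
    return x1 + x

print(bar(1))    # 3: an int cannot be overwritten, so x is still 1
print(bar([1]))  # 4: the list was overwritten, so x already holds 2
```

The two calls disagree exactly as in Pearu's `bar(1) -> 3` versus `bar(array(1,"i")) -> 4` example, which is the argument for keeping overwrite off by default.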
OK, let's assume only power users in what follows. But I think even power users should write clear and explicit code, in a style proper to the given language. > They have goofy names, and were designed for use in Fortran to copy > data from one array to another. Preserving this behavior is pretty > much the only way to get any useful, efficient work out of them. I disagree. The whole point of wrapping Fortran for Python is to make coding easier. This includes hiding technical details that can be established from the other arguments in a one-to-one fashion. This is possible for Python types but not for Fortran or C types (think of an array and its shape). So, coding is easier because of: 1) less typing, 2) less possibility to make errors, 3) fewer bugs, as there are some internal checks. So, I don't see why one would want to call a Fortran function from Python if it had, say, 20 arguments and required attending to all the technical details, while the work would be the same as using Fortran directly. I would not write Python code in this case; I would write it directly in Fortran. > Otherwise using pure Python will get you reasonably close to the same > speed. Not true, if the wrappers are used properly. The same holds for wrappers that expose everything: only with proper use does one get speedups. In the former case it takes less effort to learn proper use than in the latter. > > In CVS log you also mention reverting gemv to the previous interface > > style. What do you mean? The task of gemv is > > > > y <- alpha*op(a)*x + beta*y > > > > gemv old signature: > > gemv(trans,a,x,y,[m,n,alpha,lda,incx,beta,incy]) > > > > gemv current signature: > > y = gemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y]) > > > > Notes: > > 1) in old gemv, arguments m,n,lda are redundant. Why would you want to > > restore them?
> > The original purpose of these wrappers was to give experts writing efficient linear > algebra routines, such as LAPACK, a tool for experimenting with the low-level > BLAS routines from Python. I occasionally use BLAS, but never for such > purposes, so I don't know the needs of these users. As a result, the original > wrappers tried to follow the API as faithfully as possible, adding keyword > arguments where they were helpful. I see your point but I find it impractical. I don't see the point of experimenting with these low-level routines from Python if I am later going to translate the algorithm to Fortran. I would write the algorithm directly in Fortran if it is my goal to have it in Fortran; otherwise the work would be doubled. Note that developing an algorithm in Fortran is as efficient as in Python if using tools like f2py: all testing and I/O is done in Python, while the algorithm is exposed to Python with a single command: f2py -c ... I can imagine that such a development style is not classical and many developers hardly use it, but it is actually extremely efficient, both in development time and in the resulting software. > It looks like m, lda, and n are now hard coded to be the shape of a. I am not > sure that linear algebra experts always want this behavior. That is why they > were left as they were. Exposing array shapes to Python has a minus because it increases the wrapper code, which is already quite large. Even more so if it is not clear whether they are useful or not. I would suggest leaving them out, and if it turns out that such arguments would be useful to have, adding them later. It is better to extend the set of features than to have useless features lying around. > I think these have been replaced in some sense by offx and offy which > aren't even a part of the Fortran interface.
Actually, such things as calling foo(x(10)), where x(10) refers to a sub-array starting at index 10 (and not only the 10th element), are part of the Fortran language and widely used in Fortran codes. The equivalent C codelet would be foo(x+10), and the Python one is foo(x,offx=10). I don't know how else you can model the Fortran statement `x(10)' other than with an offx keyword argument. Using foo(x[10:]) from Python would be quite inefficient compared to foo(x,offx=10): just think what is happening behind the scenes in those two cases. > As a linear algebra non-expert, I wasn't confident enough to go > through and change a long standing interface. The current one may > indeed be superior to the original, but I think we need to ask some > longtime and expert users of BLAS if our interfaces limit the > capabilities of the original functions in some way. Pearu, you may be > an expert in the field. If so, re-assure me that the interfaces are > not limiting, and we'll move on. Otherwise, we should probably ask > someone like Clint Whaley (if he is willing) to look them over and > make comments before we change them. Well, see above, I have tried my best in assuring you. You seem to think that I don't care about performance or generality. They are very much my first priorities. Besides that, I work hard to get the simplest wrapper possible, even if it takes several iterations. Maybe that gives an impression that they cannot be efficient or general. But I would hate to produce complicated wrappers just so that they would look sophisticated ;) > > 4) New gemv has additional offx and offy arguments so that using gemv > > from Python covers all possible scenarios that one would have when > > using gemv directly from C or Fortran. > > OK. If this is true, then maybe they are a good addition. Yes, they are. See the foo(x,offx=10) example above. > I like overwrite as the default for the blas stuff. And the offx and offy still > need some discussion.
If there are any other heavy duty blas users, please > speak up now. Yes, please do. Pearu From nwagner at mecha.uni-stuttgart.de Tue Apr 16 12:14:16 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 16 Apr 2002 18:14:16 +0200 Subject: [SciPy-dev] Up-to-date Tutorial for Scipy ? Message-ID: <3CBC4DD7.67456832@mecha.uni-stuttgart.de> Hi, I wonder if there is any progress in writing a new tutorial for scipy? In order to benefit from the whole functional range of scipy it might be very valuable to have an up-to-date manual. Nils From hoel at germanlloyd.org Thu Apr 18 10:35:45 2002 From: hoel at germanlloyd.org (hoel at germanlloyd.org) Date: 18 Apr 2002 16:35:45 +0200 Subject: [SciPy-dev] SciPy on Solaris 7 using gcc 3.0.4 Message-ID: Hello, trying to install the latest scipy snapshot on Solaris 2.7 using gcc 3.0.4 I get: g77 -shared build/temp.solaris-2.7-sun4u-2.2/fortranobject.o build/temp.solaris-2.7-sun4u-2.2/fblasmodule.o -L/export/data/tmp/hoel/atlas3.3.15/lib/SunOS_SunUS4_3/ -L/export/data/tmp/hoel/atlas3.3.15/lib/SunOS_SunUS4_3/ -Lbuild/temp.solaris-2.7-sun4u-2.2 -lfblas -llapack -lf77blas -lcblas -latlas -lg2c -o build/lib.solaris-2.7-sun4u-2.2/scipy/linalg/fblas.so Text relocation remains referenced against symbol offset in file ATL_zupNBmm24_4_1_b1 0x838 /export/data/tmp/hoel/atlas3.3.15/lib/SunOS_SunUS4_3//libatlas.a(ATL_zupNBmm_b1.o) ATL_zupNBmm24_4_1_b1 0x7c4 /export/data/tmp/hoel/atlas3.3.15/lib/SunOS_SunUS4_3//libatlas.a(ATL_zupNBmm_b1.o) .... lots of similar lines. This error can be solved by adding the "-mimpure-text" flag to the link line (this has to be done for every link command linking *.a libraries into a "-shared" object).
The next problem comes when linking cephes.so: g77 -shared build/temp.solaris-2.7-sun4u-2.2/cephesmodule.o build/temp.solaris-2.7-sun4u-2.2/amos_wrappers.o build/temp.solaris-2.7-sun4u-2.2/specfun_wrappers.o build/temp.solaris-2.7-sun4u-2.2/toms_wrappers.o build/temp.solaris-2.7-sun4u-2.2/cdf_wrappers.o build/temp.solaris-2.7-sun4u-2.2/ufunc_extras.o -Lbuild/temp.solaris-2.7-sun4u-2.2 -lamos -ltoms -lc_misc -lcephes -lmach -lcdf -lspecfun -lg2c -o build/lib.solaris-2.7-sun4u-2.2/scipy/special/cephes.so ld: fatal: too many symbols require `small' PIC references: have 3366, maximum 2048 -- recompile some modules -K PIC. collect2: ld returned 1 exit status error: command 'g77' failed with exit status 1 I was able to get it down to something like have 25??, maximum 2048 -- recompile some modules -K PIC. when replacing switches = switches + ' -fpic ' with switches = switches + ' -fPIC ' in scipy_distutils/command/build_flib.py, but it still doesn't link. Greetings Berthold -- Dipl.-Ing. Berthold Höllmann __ Address: hoel at germanlloyd.org G / \ L Germanischer Lloyd phone: +49-40-36149-7374 -+----+- Vorsetzen 32/35 P.O.Box 111606 fax : +49-40-36149-7320 \__/ D-20459 Hamburg D-20416 Hamburg From tinu at email.ch Thu Apr 18 11:20:31 2002 From: tinu at email.ch (tinu) Date: Thu, 18 Apr 2002 17:20:31 +0200 Subject: [SciPy-dev] problem compiling CVS version Message-ID: <15550.58431.814970.864725@yak.ethz.ch> Dear Scipythonistas (is this correct?) I had problems compiling the recent CVS version, due to a small compiler (gcc version 2.95.3) parsing problem. Removing the "\" helped. (patch included) Thanks for the great work!
Martin -- Martin Lüthi VAW Glaciology, ETH Zürich, Switzerland mel: luthi at vaw.baug.ethz.ch Index: zeros.h =================================================================== RCS file: /home/cvsroot/world/scipy/optimize/Zeros/zeros.h,v retrieving revision 1.3 diff -c -r1.3 zeros.h *** zeros.h 2002/04/15 20:40:58 1.3 --- zeros.h 2002/04/18 15:36:42 *************** *** 14,21 **** } default_parameters; static double dminarg1,dminarg2; ! #define DMIN(a,b) (dminarg1=(a),dminarg2=(b),(dminarg1) < (dminarg2) ?\ ! (dminarg1) : (dminarg2)) #define SIGN(a) ((a) > 0.0 ? 1.0 : -1.0) #define ERROR(params,num,val) (params)->error_num=(num); return (val) --- 14,20 ---- } default_parameters; static double dminarg1,dminarg2; ! #define DMIN(a,b) (dminarg1=(a),dminarg2=(b),(dminarg1) < (dminarg2) ? (dminarg1) : (dminarg2)) #define SIGN(a) ((a) > 0.0 ? 1.0 : -1.0) #define ERROR(params,num,val) (params)->error_num=(num); return (val) From hoel at germanlloyd.org Thu Apr 18 11:57:49 2002 From: hoel at germanlloyd.org (hoel at germanlloyd.org) Date: 18 Apr 2002 17:57:49 +0200 Subject: [SciPy-dev] import scipy fails on linux Message-ID: Hello, I tried to install the latest scipy snapshot on a linux box. compilation went fine (other than on Solaris), but testing things failed: >python Python 2.2.1 (#1, Apr 18 2002, 17:26:54) [GCC 3.0.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy exceptions.ImportError: /usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/flapack.so: undefined symbol: sgesdd_ exceptions.ImportError: /usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/_flinalg.so: undefined symbol: dlaswp_ Traceback (most recent call last): File "", line 1, in ? File "/usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/__init__.py", line 42, in ? import special, io, linalg, stats, fftpack File "/usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/special/__init__.py", line 325, in ?
import orthogonal File "/usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/special/orthogonal.py", line 59, in ? from scipy.linalg import eig File "/usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/__init__.py", line 40, in ? from basic import * File "/usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/basic.py", line 17, in ? import calc_lwork ImportError: /usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so: undefined symbol: ieeeck_ >>> Greetings Berthold -- Dipl.-Ing. Berthold Höllmann __ Address: hoel at germanlloyd.org G / \ L Germanischer Lloyd phone: +49-40-36149-7374 -+----+- Vorsetzen 32/35 P.O.Box 111606 fax : +49-40-36149-7320 \__/ D-20459 Hamburg D-20416 Hamburg From tinu at email.ch Thu Apr 18 11:49:43 2002 From: tinu at email.ch (tinu) Date: Thu, 18 Apr 2002 17:49:43 +0200 Subject: [SciPy-dev] import scipy fails on linux In-Reply-To: References: Message-ID: <15550.60183.172541.180665@yak.ethz.ch> > I tried to install the latest scipy snapshot on a linux > box.
compilation went fine (other than on Solaris), but testing things > failed: Exactly the same problem here (SuSE 8, Python 2.2.1) -- Martin Lüthi VAW Glaciology, ETH Zürich, Switzerland mel: luthi at vaw.baug.ethz.ch From tinu at email.ch Thu Apr 18 11:53:23 2002 From: tinu at email.ch (tinu) Date: Thu, 18 Apr 2002 17:53:23 +0200 Subject: [SciPy-dev] import scipy fails on linux In-Reply-To: <15550.60183.172541.180665@yak.ethz.ch> References: <15550.60183.172541.180665@yak.ethz.ch> Message-ID: <15550.60403.421410.310712@yak.ethz.ch> looking somewhat deeper, I find that many symbols are undefined: tinu at yak:~> ldd -d /usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so libm.so.6 => /lib/libm.so.6 (0x40020000) libc.so.6 => /lib/libc.so.6 (0x40043000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000) undefined symbol: ieeeck_ (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: _Py_NoneStruct (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyFloat_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyString_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyComplex_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyLong_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyExc_TypeError (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyExc_AttributeError (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyCObject_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyInt_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyType_Type (/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) undefined symbol: PyExc_MemoryError
(/usr/local/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so) don't know what to do with that, though... Martin -- Martin Lüthi VAW Glaciology, ETH Zürich, Switzerland mel: luthi at vaw.baug.ethz.ch From pearu at cens.ioc.ee Thu Apr 18 13:39:14 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 18 Apr 2002 20:39:14 +0300 (EEST) Subject: [SciPy-dev] import scipy fails on linux In-Reply-To: Message-ID: On 18 Apr 2002 hoel at germanlloyd.org wrote: > Hello, > > I tried to install the latest scipy snapshot on a linux > box. compilation went fine (other than on Solaris), but testing things > failed: > > >python > Python 2.2.1 (#1, Apr 18 2002, 17:26:54) > [GCC 3.0.4] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > exceptions.ImportError: /usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/flapack.so: undefined symbol: sgesdd_ > exceptions.ImportError: /usr/local/fitools/py/linux/lib/python2.2/site-packages/scipy/linalg/_flinalg.so: undefined symbol: dlaswp_ This looks like an ATLAS installation problem: What ATLAS do you use? Do you have a complete lapack library? You must apply http://math-atlas.sourceforge.net/errata.html#completelp What is the output of python scipy_distutils/system_info.py ? Pearu From heiko at hhenkelmann.de Fri Apr 19 15:49:56 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Fri, 19 Apr 2002 21:49:56 +0200 Subject: [SciPy-dev] corrupted line ends Message-ID: <000b01c1e7db$61e77f60$08d99e3e@arrow> Hello there, all files in scipy\optimize\Zeros have corrupted line ends. See for example zeros.h: 00000000: 2f2a 2057 7269 7474 656e 2062 7920 4368 /* Written by Ch 00000010: 6172 6c65 7320 4861 7272 6973 2063 6861 arles Harris cha 00000020: 726c 6573 2e68 6172 7269 7340 7364 6c2e rles.harris at sdl. 00000030: 7573 752e 6564 7520 2a2f 0d0d 0a0d 0d0a usu.edu */......
00000040: 2f2a 204d 6f64 6966 6965 6420 746f 206e /* Modified to n 00000050: 6f74 2064 6570 656e 6420 6f6e 2050 7974 ot depend on Pyt 00000060: 686f 6e20 6576 6572 7977 6865 7265 2062 hon everywhere b 00000070: 7920 5472 6176 6973 204f 6c69 7068 616e y Travis Oliphan 00000080: 742e 0d0d 0a20 2a2f 0d0d 0a0d 0d0a 0d0d t.... */........ 00000090: 0a23 6966 6e64 6566 205a 4552 4f53 5f48 .#ifndef ZEROS_H 000000a0: 0d0d 0a23 6465 6669 6e65 205a 4552 4f53 ...#define ZEROS 000000b0: 5f48 0d0d 0a0d 0d0a 2364 6566 696e 6520 _H......#define 000000c0: 5a45 524f 535f 5041 5241 4d5f 4845 4144 ZEROS_PARAM_HEAD I'm checking out with WinCVS on a Windows ME box. All the files in the other directories seem to be OK. Heiko From al at ime.auc.dk Tue Apr 23 13:37:44 2002 From: al at ime.auc.dk (Anders Lyckegaard) Date: Tue, 23 Apr 2002 19:37:44 +0200 Subject: [SciPy-dev] BVP code Message-ID: <200204230747.JAA09747@dora.ime.auc.dk> Dear scipy-developers, This is my first post to this list. I have been working on a bit of code that implements a shooting routine for two point boundary value problems. It uses the SciPy routines odeint and fsolve, so I was wondering if the shooting routine would be of interest to the SciPy community. I am completely new to SciPy, and Scientific Computing with Python, so don't expect too much. I have only used SciPy for a few days now. I would also like to say that I think that SciPy is a great tool. It is nice to have a somewhat larger collection of high quality modules for Scientific computing instead of all the small modules that are floating around the internet.
Best regards Anders Lyckegaard -- Anders Lyckegaard, Ph.D. student Institute of Mechanical Engineering, Aalborg University Pontoppidanstraede 101, DK-9220 Aalborg Oest, Denmark Phone: +45 9635 8080 (direct: +45 9635 9325), fax: +45 9815 1675 Email: al at ime.auc.dk From dog at ERC.MsState.Edu Tue Apr 23 07:32:01 2002 From: dog at ERC.MsState.Edu (David O'Gwynn) Date: Tue, 23 Apr 2002 06:32:01 -0500 (CDT) Subject: [SciPy-dev] Backwards compatibility for weave Message-ID: In trying to use weave on the system where I work, I found that it will not work with Python 2.0, namely because of the use of the sys._getframe method. There might be other compatibility issues as well. Is there any kind of workaround for backwards compatibility? _____________ David O'Gwynn dog at erc.msstate.edu From pearu at cens.ioc.ee Tue Apr 23 08:35:40 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 23 Apr 2002 15:35:40 +0300 (EEST) Subject: [SciPy-dev] Backwards compatibility for weave In-Reply-To: Message-ID: On Tue, 23 Apr 2002, David O'Gwynn wrote: > In trying to use weave on the system where I work, I found that it will > not work with Python 2.0, namely because of the use of the sys._getframe > method. There might be other compatibility issues as well. > > Is there any kind of workaround for backwards compatibility? For the getframe issue there is a workaround. Look at the get_frame function in scipy_distutils/misc_utils.py; in principle weave could use it. I don't know about other (if any) compatibility issues, though. Pearu From eric at scipy.org Tue Apr 23 14:03:45 2002 From: eric at scipy.org (eric) Date: Tue, 23 Apr 2002 14:03:45 -0400 Subject: [SciPy-dev] BVP code References: <200204230747.JAA09747@dora.ime.auc.dk> Message-ID: <0bfe01c1eaf1$36615190$6b01a8c0@ericlaptop> Hey Anders, Welcome. I'm glad SciPy is proving useful to you. I'm not familiar with shooting routines. Can you give a little background on what they are and perhaps a reference?
SciPy will ideally evolve into a repository of "best in class" algorithms that are peer reviewed. If your algorithm is a general solver that fits these criteria, then it is likely of interest. Please send a few more details. thanks, eric ----- Original Message ----- From: "Anders Lyckegaard" To: Sent: Tuesday, April 23, 2002 1:37 PM Subject: [SciPy-dev] BVP code > Dear scipy-developers, > > This is my first post to this list. > > I have been working on a bit of code that implements a shooting routine for > two point boundary value problems. It uses the SciPy routines odeint and > fsolve, so I was wondering if the shooting routine would be of interest to > the SciPy community. > > I am completely new to SciPy, and Scientific Computing with Python, so don't > expect too much. I have only used SciPy for a few days now. > > I would also like to say that I think that SciPy is a great tool. It is nice > to have a somewhat larger collection of high quality modules for Scientific > computing instead of all the small modules that are floating around the > internet. > > > Best regards > > Anders Lyckegaard > > -- > Anders Lyckegaard, Ph.D. student > Institute of Mechanical Engineering, Aalborg University > Pontoppidanstraede 101, DK-9220 Aalborg Oest, Denmark > Phone: +45 9635 8080 (direct: +45 9635 9325), fax: +45 9815 1675 > Email: al at ime.auc.dk > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Tue Apr 23 14:16:53 2002 From: eric at scipy.org (eric) Date: Tue, 23 Apr 2002 14:16:53 -0400 Subject: [SciPy-dev] Backwards compatibility for weave References: Message-ID: <0c0401c1eaf3$0c852750$6b01a8c0@ericlaptop> Hey David, I haven't tested weave against Python 2.0. The _getframe workaround suggested by Pearu might help. Also, I think the standard inspect.py module has a stack() function that serves a similar purpose.
I'm not sure if inspect.py was part of 2.0 though. weave also needs distutils -- I think it was in 2.0, right? Other than that, I don't know of any other show stoppers. The code generator is all pure Python code that, other than the stack shenanigans, doesn't do anything too strange. Since it uses distutils, weave should build the extension modules with all the correct settings for 2.0 automatically. Let me know if there are some easy fixes. I'm loath to give up the _getframe call because it is much more efficient than other approaches. Extra checks and function calls within inline() or blitz() come at a high price because they slow down every call to a compiled function. However, we could supply a different version of the functions that work with 2.0 if the fixes are simple. eric ----- Original Message ----- From: "David O'Gwynn" To: Sent: Tuesday, April 23, 2002 7:32 AM Subject: [SciPy-dev] Backwards compatibility for weave > In trying to use weave on the system where I work, I found that it will > not work with Python 2.0, namely because of the use of the sys._getframe > method. There might be other compatibility issues as well. > > Is there any kind of workaround for backwards compatibility? > > _____________ > David O'Gwynn > dog at erc.msstate.edu > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From Chuck.Harris at sdl.usu.edu Tue Apr 23 15:22:20 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Tue, 23 Apr 2002 13:22:20 -0600 Subject: [SciPy-dev] BVP code Message-ID: Hi, > -----Original Message----- > From: eric [mailto:eric at scipy.org] > Sent: Tuesday, April 23, 2002 12:04 PM > To: scipy-dev at scipy.org > Subject: Re: [SciPy-dev] BVP code > > > Hey Anders, > > Welcome. I'm glad SciPy is proving useful to you. > > I'm not familiar with shooting routines. Can you give a > little background on > what they are and perhaps a reference?
SciPy will ideally > evolve into a > repository of "best in class" algorithms that are peer > reviewed. If your > algorithm is a general solver that fits these criteria, then > it is likely of > interest. Please send a few more details. > > thanks, > eric > I'm not in this loop, but shooting routines solve boundary value problems for ODE's. For example, vibrating string problems where separation of variables leads to a second order ODE with one parameter which must satisfy a boundary condition at each end of the string. Satisfy the boundary condition at one end, and then vary the parameter until the integrated ODE satisfies the other boundary condition. Integrating the ODE provides the 'shooting'. Chebychev methods can also provide quick and simple solutions to these types of problems and perhaps it would be nice to include a selection of these. Chuck From jochen at unc.edu Tue Apr 23 17:02:36 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 23 Apr 2002 17:02:36 -0400 Subject: [SciPy-dev] BVP code In-Reply-To: <200204230747.JAA09747@dora.ime.auc.dk> References: <200204230747.JAA09747@dora.ime.auc.dk> Message-ID: On Tue, 23 Apr 2002 19:37:44 +0200 Anders Lyckegaard wrote: Anders> I have been working on a bit of code that implements a Anders> shooting routine for two point boundary value problems. It Anders> uses the SciPy routines odeint and fsolve, so I was wondering Anders> if the shooting routine would be of interest to the SciPy Anders> community. It would. I actually was thinking of doing something similar in C, providing a Python interface, using GSL. Maybe I'll just take your code now :)) Anders> I am completely new to SciPy, and Scientific Computing with Anders> Python, so don't expect too much. I have only used SciPy for a Anders> few days now. Well, it definitely is necessary to get it robust for general usage... How do you vary the eigenvalue? How do you determine whether accuracy conditions are met?
How does it compare to Numerov-Cooley? On Tue, 23 Apr 2002 14:03:45 -0400 eric wrote: eric> background on what it is and perhaps a reference. You can look it up in Numerical Recipes, actually. (You know all the caveats:) Or just almost any book about ODE's, I guess. Cooley's paper (which is not what Anders is doing! but very useful nevertheless) is ,---- | @Article{11901, | author = {J. W. Cooley}, | title = {An Improved Eigenvalue Corrector Formula for Solving the | Schrödinger Equation for Central Fields}, | journal = {Math. Comput.}, | volume = {15}, | pages = {363}, | year = {1961}, | } `---- Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From Chuck.Harris at sdl.usu.edu Tue Apr 23 17:18:25 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Tue, 23 Apr 2002 15:18:25 -0600 Subject: [SciPy-dev] BVP code Message-ID: > ,---- > | @Article{11901, > | author = {J. W. Cooley}, > | title = {An Improved Eigenvalue Corrector Formula for Solving the > | Schrödinger Equation for Central Fields}, > | journal = {Math. Comput.}, > | volume = {15}, > | pages = {363}, > | year = {1961}, > | } > `---- Is that the one that shoots from the ends and meets in the middle and tries to match the slopes? I believe there is also a version that includes the electron spins, so the ODE is vector instead of scalar. With the great linear algebra stuff we have, some of these are neatly solved by projecting on a finite basis set --- say sines or chebychev polynomials --- and diagonalizing the resulting matrix.
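Chuck's projection-and-diagonalization idea can be illustrated with a minimal sketch; the finite-difference discretization and the small cyclic-Jacobi eigensolver below are illustrative stand-ins, not SciPy code (in practice one would call SciPy's optimized eigenvalue routines):

```python
import math

def jacobi_eigenvalues(a, sweeps=50):
    """Eigenvalues of a small symmetric matrix via cyclic Jacobi rotations."""
    n = len(a)
    a = [row[:] for row in a]  # work on a copy
    for _ in range(sweeps):
        off = sum(a[i][j] ** 2 for i in range(n) for j in range(i + 1, n))
        if off < 1e-18:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if a[p][q] == 0.0:
                    continue
                # Rotation angle that annihilates a[p][q].
                theta = 0.5 * math.atan2(2.0 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # rotate rows p and q
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):  # rotate columns p and q
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
    return sorted(a[i][i] for i in range(n))

# Vibrating string: -y'' = lambda * y on (0, pi) with y(0) = y(pi) = 0,
# discretized on n interior points; the exact eigenvalues are 1, 4, 9, ...
n = 20
h = math.pi / (n + 1)
a = [[(2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0) / h ** 2
      for j in range(n)] for i in range(n)]
evals = jacobi_eigenvalues(a)
print([round(v, 2) for v in evals[:3]])  # close to the exact 1, 4, 9
```

The finite-difference matrix here plays the role of the projected operator; with a sine or Chebyshev basis the matrix elements change but the diagonalization step is identical.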
Chuck From jochen at unc.edu Tue Apr 23 18:12:40 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 23 Apr 2002 18:12:40 -0400 Subject: [SciPy-dev] BVP code In-Reply-To: References: Message-ID: On Tue, 23 Apr 2002 15:18:25 -0600 Chuck Harris wrote: >> ,---- >> | @Article{11901, >> | author = {J. W. Cooley}, >> | title = {An Improved Eigenvalue Corrector Formula for Solving the >> | Schrödinger Equation for Central Fields}, >> | journal = {Math. Comput.}, >> | volume = {15}, >> | pages = {363}, >> | year = {1961}, >> | } >> `---- Chuck> Is that the one that shoots from the ends and meets in the Chuck> middle and tries to match the slopes? Yep Chuck> I believe there is also a version that includes the electron Chuck> spins, so the ODE is vector instead of scalar. IIRC that's discussed in the same paper... Chuck> With the great linear algebra stuff we have, some of these are Chuck> neatly solved by projecting on a finite basis set --- say sines Chuck> or chebychev polynomials --- and diagonalizing the resulting Chuck> matrix. Agreed:) Greetings, Jochen -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E From Chuck.Harris at sdl.usu.edu Tue Apr 23 18:25:50 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Tue, 23 Apr 2002 16:25:50 -0600 Subject: [SciPy-dev] BVP code Message-ID: > -----Original Message----- > From: Jochen Küpper [mailto:jochen at unc.edu] > Sent: Tuesday, April 23, 2002 4:13 PM > To: scipy-dev at scipy.org > Subject: Re: [SciPy-dev] BVP code > > > On Tue, 23 Apr 2002 15:18:25 -0600 Chuck Harris wrote: > > >> ,---- > >> | @Article{11901, > >> | author = {J. W. Cooley}, > >> | title = {An Improved Eigenvalue Corrector Formula for > Solving the > >> | Schrödinger Equation for Central Fields}, > >> | journal = {Math.
Comput.}, > >> | volume = {15}, > >> | pages = {363}, > >> | year = {1961}, > >> | } > >> `---- > > Chuck> Is that the one that shoots from the ends and meets in the > Chuck> middle and tries to match the slopes? > > Yep > IIRC they compute the derivative of the slope-difference with respect to the energy by integrating it along to the meeting point. There *is* an application for Newton's method with computed derivatives! It would be interesting to compare this with one of the good secant methods, brent for instance. Chuck From Chuck.Harris at sdl.usu.edu Tue Apr 23 19:40:16 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Tue, 23 Apr 2002 17:40:16 -0600 Subject: [SciPy-dev] RE: Remez algorithm Message-ID: I have attached a python version of the Remez algorithm for use in filter design. This version is for review and comment. I need to do some testing of the complex case and also include some examples. This routine has been used successfully to design complex decimating FIR filters. Any comments relative to other things are welcome. Chuck -------------- next part -------------- A non-text attachment was scrubbed... Name: remez.py Type: application/octet-stream Size: 8069 bytes Desc: remez.py URL: From oliphant.travis at ieee.org Tue Apr 23 23:22:03 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 23 Apr 2002 21:22:03 -0600 Subject: [SciPy-dev] RE: Remez algorithm In-Reply-To: References: Message-ID: <1019618525.11490.0.camel@travis> On Tue, 2002-04-23 at 17:40, Chuck Harris wrote: > I have attached a python version of the Remez algorithm for use in filter design. This version is for review and comment. I need to do some testing of the complex case and also include some examples. This routine has been used successfully to design complex decimating FIR filters. Any comments relative to other things are welcome. > Have you seen the remez algorithm in the signal module? How does this one compare? 
-Travis From jochen at jochen-kuepper.de Tue Apr 23 23:16:48 2002 From: jochen at jochen-kuepper.de (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 23 Apr 2002 23:16:48 -0400 Subject: [SciPy-dev] BVP code In-Reply-To: References: Message-ID: On Tue, 23 Apr 2002 16:25:50 -0600 Chuck Harris wrote: Chuck, sorry to be picky, but could you please break your lines somewhere around column 74? >> >> ,---- >> >> | @Article{11901, >> >> | author = {J. W. Cooley}, >> >> | title = {An Improved Eigenvalue Corrector Formula for Solving the >> >> | Schrödinger Equation for Central Fields}, >> >> | journal = {Math. Comput.}, >> >> | volume = {15}, >> >> | pages = {363}, >> >> | year = {1961}, >> >> | } >> >> `---- Chuck> IIRC they compute the derivative of the slope-difference with Chuck> respect to the energy by integrating it along to the meeting Chuck> point. There *is* an application for Newton's method with Chuck> computed derivatives! It would be interesting to compare this Chuck> with one of the good secant methods, brent for instance. If you look at Cooley's paper (I left the reference in up there), he describes a correction function whose root has to be found, and then proposes to use Newton-Raphson for that... Cheers, Jochen -- Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de Liberté, Égalité, Fraternité GnuPG key: 44BCCD8E Sex, drugs and rock-n-roll From Chuck.Harris at sdl.usu.edu Wed Apr 24 11:49:35 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Wed, 24 Apr 2002 09:49:35 -0600 Subject: [SciPy-dev] RE: Remez algorithm Message-ID: > Have you seen the remez algorithm in the signal module? > > How does this one compare? > > -Travis > This looks to be a pretty close implementation of the original Parks-McClellan algorithm. As such it directly applies to real symmetric filters.
The idea is that in the frequency domain the transfer function is a sum of terms of the form cos(2*pi*n*f), which may be rearranged into powers cos(2*pi*f)**n, so that the change of variable x <- cos(2*pi*f) yields a Vandermonde matrix. Lagrange interpolation can then be used to evaluate sums of the form a + b*cos(2*pi*1*f) + c*cos(2*pi*2*f) + ... passing through a specified set of points without ever having to solve for a, b, c, etc.

At the least, I feel that this should be generalized so as to accept transfer functions like sum( a[n]*exp(2*pi*1j*n*f) ), which also yield Vandermonde matrices after the change of variable x <- exp(2*pi*1j*f). That is, it would be nice to use Lagrange interpolation for points specified on the unit circle in the complex plane, which is why I sent along Neville's version of Lagrange interpolation; that should perhaps be redone in C for speed. So my first comment is that the routine should be adapted to use a general, good form of Lagrange interpolation, which should be included in scipy.

The routine I sent *does* solve for the coefficients directly, and then evaluates the sum by matrix multiplication. It is of slightly lower order than using Lagrange interpolation when the number of points to be interpolated at exceeds the number of coefficients (which is always), and is more general, in that it is not necessary to have matrices that could be reduced to Vandermonde form in principle. There is a generalization of Lagrange interpolation due to Muhlbach that applies to complete Chebychev systems, and such might be an interesting topic to pursue.

In any case, the linear algebra routines in SciPy are highly optimized and are much faster than Lagrange interpolation in its present form. The potential drawback is that the matrices appearing in the solution may not be well conditioned, so some good sense is required on the part of the user.
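[Archive note: the Neville scheme mentioned above is short enough to sketch inline. The function below is purely illustrative, not the routine Chuck attached; it uses plain Python arithmetic, so it works unchanged for complex nodes such as points on the unit circle.]

```python
def neville(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at x, using Neville's recursive scheme.
    No coefficients are ever solved for, which is the point Chuck
    makes above; works for real or complex nodes alike."""
    p = list(ys)
    n = len(p)
    for k in range(1, n):
        for i in range(n - k):
            # p[i] holds the interpolant on nodes i..i+k-1; combine it
            # with p[i+1] (nodes i+1..i+k) to get the one on i..i+k.
            p[i] = ((xs[i + k] - x) * p[i] + (x - xs[i]) * p[i + 1]) \
                   / (xs[i + k] - xs[i])
    return p[0]

# Interpolate f(x) = x**2 through three points and evaluate at x = 3
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0))  # 9.0
```

The same call with complex xs evaluates sums in exp(2*pi*1j*n*f) on the unit circle, the generalization argued for above.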
One could argue that the badly conditioned case reflects a certain indeterminacy in the coefficients that one wants to use, and as such reflects reality. On the other hand, Lagrange interpolation will produce pretty accurate interpolations whatever the user's choice of basis, and uses less space as no matrices have to be maintained.

The reason I wrote this routine is that the Remez package available in MatLab was also too restrictive and I needed something better.

Chuck

From al at ime.auc.dk Fri Apr 26 06:40:04 2002
From: al at ime.auc.dk (Anders Lyckegaard)
Date: Fri, 26 Apr 2002 12:40:04 +0200
Subject: [SciPy-dev] BVP code
In-Reply-To: <0bfe01c1eaf1$36615190$6b01a8c0@ericlaptop>
References: <200204230747.JAA09747@dora.ime.auc.dk> <0bfe01c1eaf1$36615190$6b01a8c0@ericlaptop>
Message-ID: <200204250849.KAA00816@dora.ime.auc.dk>

Hi all!

# A bit of background!

The shooting routine is for solving ordinary differential equations with boundary conditions given at two different points in time/space. In general, a system of n first-order equations

    y'(x) = f(x, y)

with the boundary conditions

    r(y(a), y(b)) = 0

(in the case of linear boundary conditions these can be written as A y(a) + B y(b) = c).

The shooting routine is provided with an initial guess for the vector y(a). It then uses an initial value problem solver to find y(b) (scipy.deint). Then it uses an iteration procedure to fit y(a) and y(b) to the boundary conditions (scipy.optimize.fsolve).

# Multi-point shooting

I have also implemented multi-point shooting. It is just a simple extension of simple shooting: you provide initial guesses for intermediate points and fit those also. This approach is not very well known, but it is supposed to have better convergence properties and better accuracy in the case of moderately stiff problems, see:

@Book{stoer-numericalanalysis,
  Author = {Stoer, J.
            and Bulirsch, R.},
  Title = {Introduction to {N}umerical {A}nalysis},
  Publisher = {Springer-Verlag, New York},
  isbn = {0-387-90420-4},
  year = 1980,
}

# Best in class?

I'm not sure if this is a best-in-class algorithm. Shooting is very widely used, and is recommended in Numerical Recipes as a good place to start. Stoer and Bulirsch show that multi-point shooting compares quite well to finite difference solutions, but that is a quite old reference, and finite difference approaches have evolved quite a lot in recent years. Cash and Wright have implemented some algorithms based on collocation and deferred corrections that might be worth considering: http://www.ma.ic.ac.uk/~jcash/BVP_software/readme.html

# Code

I have included a bit of code. It does not have any error checking and has not been optimized in any way, but it works with the example that I have included.

# Future plans

I plan to implement some continuation procedure, based on the arc-length method. This might be of interest also, but more about that later!

I hope I answered some of all your questions.

-- 
Anders Lyckegaard, Ph.D student
Institute of Mechanical Engineering, Aalborg University
Pontoppidanstraede 101, DK-9220 Aalborg Oest, Denmark
Phone: +45 9635 8080 (direct: +45 9635 9325), fax: +45 9815 1675
Email: al at ime.auc.dk

-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_shoot.py
Type: text/x-java
Size: 1323 bytes
Desc: not available
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: shoot.py
Type: text/x-java
Size: 2177 bytes
Desc: not available
URL: 

From Chuck.Harris at sdl.usu.edu Fri Apr 26 12:32:12 2002
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Fri, 26 Apr 2002 10:32:12 -0600
Subject: [SciPy-dev] RE: Remez algorithm
Message-ID: 

Hi Travis

> -----Original Message-----
> From: Travis Oliphant [mailto:oliphant.travis at ieee.org]
> Sent: Tuesday, April 23, 2002 9:22 PM
> To: scipy-dev at scipy.org
> Subject: Re: [SciPy-dev] RE: Remez algorithm
>
> Have you seen the remez algorithm in the signal module?
>
> How does this one compare?

OK, I wrote a bandpass filter driver for my routine and spent some time going through the sigtools version, comparing it to mine for somewhat realistic filter specs:

    taps = 90
    grid_density = 16
    bands = [.011, .15, .175, .239, .261, .500]
    desired = [0., 1., 0.]
    weights = [1e3, 1.0, 1e3]

Some remarks and results:

1) The sigtools version requires a density of 32 to get accurate coefficients; I suspect an off-by-one bug in its peak-finding routine. In any case, I have used 32 for the sigtools version and 16 for mine to get comparable results.

2) The sigtools version is significantly faster; the times for 100 iterations:

    sigtools -- 1.5 sec
    mine     -- 13 sec

3) The sigtools version will use less memory; in this problem:

    sigtools -- about 11 KiB
    mine     -- about 259 KiB

4) The sigtools version does not return useful ancillary data. For instance, it is useful to know just how good the fit was, so that a routine can use it to make tradeoffs: number of taps, out-of-band rejection vs. in-band ripple, etc.

5) The sigtools version restricts the band edges to the range 0--.5, which makes it difficult, sometimes impossible, to design filters with complex coefficients, a practical thing to do sometimes with rad-hard parts running $20K a pop. Mine goes out to 1.0.

6) The sigtools interface does not expose the function hooks available in the Parks-McClellan routine.
This makes it difficult to properly design cascaded filters, or to design filters with more interesting shapes than the three basic types available.

7) The code for the sigtools version would be heck to modify or use as the basis of some other routine. Its convergence criteria are not clear; it failed to converge on some problems, but I have no idea what that meant.

8) The sigtools version is limited to filter design.

Overall, I think that the sigtools version would work well for 99% of everyday filter design tasks; it's the remaining 1% that I wish to see covered, at least for my own uses.

Chuck
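[Archive note: the simple shooting scheme Anders describes in the BVP thread above can be sketched in a few lines of pure Python. The RK4 stepper and the secant iteration below are illustrative stand-ins for the initial value solver and scipy.optimize.fsolve his code uses; this is not his attached shoot.py, and all names here are made up for the example.]

```python
import math

def rk4(f, y0, a, b, n=200):
    """Integrate the system y' = f(x, y) from x=a to x=b with n
    classical fourth-order Runge-Kutta steps; y is a list (the
    state vector).  Returns the state at x=b."""
    h = (b - a) / n
    x, y = a, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
             for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def shoot(f, a, b, ya, yb, s0, s1, tol=1e-10, maxit=50):
    """Simple shooting for a scalar second-order BVP written as the
    first-order system [y, y']: guess the unknown slope s = y'(a),
    integrate the IVP, and drive the terminal mismatch y(b; s) - yb
    to zero with the secant method (standing in for fsolve)."""
    def mismatch(s):
        return rk4(f, [ya, s], a, b)[0] - yb
    f0, f1 = mismatch(s0), mismatch(s1)
    for _ in range(maxit):
        if abs(f1) < tol or f1 == f0:
            break
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, mismatch(s1)
    return s1

# y'' = -y with y(0) = 0, y(1) = 1; exact solution y = sin(x)/sin(1),
# so the slope the shooting iteration must find is y'(0) = 1/sin(1).
rhs = lambda x, y: [y[1], -y[0]]
slope = shoot(rhs, 0.0, 1.0, 0.0, 1.0, 0.5, 1.5)
```

Because the test problem is linear in the shooting parameter, the secant step lands on the answer almost immediately; the multi-point variant Anders mentions adds intermediate matching points to the same mismatch function.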