From heiko at hhenkelmann.de Fri Mar 1 14:58:00 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Fri, 1 Mar 2002 20:58:00 +0100 Subject: [SciPy-dev] CVS problem Message-ID: <004101c1c15b$67137e40$ced99e3e@arrow> We again have a problem with the CVS access: cvs -z7 update -P -d (in directory C:\home\henkelma\projects\python\scipy_clean\) ? ChangeLog cvs server: Updating . U __cvs_version__.py cvs server: Updating blas cvs server: cannot open .cvsignore: Too many open files cvs server: cannot open .cvswrappers: Too many open files cvs server: Updating blas/SRC cvs [server aborted]: cannot open CVS/Repository: Too many open files *****CVS exited normally with code 1***** Heiko From eric at scipy.org Fri Mar 1 14:18:52 2002 From: eric at scipy.org (eric) Date: Fri, 1 Mar 2002 14:18:52 -0500 Subject: [SciPy-dev] CVS problem References: <004101c1c15b$67137e40$ced99e3e@arrow> Message-ID: <0a3601c1c155$eff9c8f0$6b01a8c0@ericlaptop> System rebooted. Try again. We're still working on the other server. eric ----- Original Message ----- From: "Heiko Henkelmann" To: Sent: Friday, March 01, 2002 2:58 PM Subject: [SciPy-dev] CVS problem > > We again have a problem with the CVS access: > > cvs -z7 update -P -d (in directory > C:\home\henkelma\projects\python\scipy_clean\) > ? ChangeLog > cvs server: Updating . > U __cvs_version__.py > cvs server: Updating blas > cvs server: cannot open .cvsignore: Too many open files > cvs server: cannot open .cvswrappers: Too many open files > cvs server: Updating blas/SRC > cvs [server aborted]: cannot open CVS/Repository: Too many open files > > *****CVS exited normally with code 1***** > > > Heiko > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From heiko at hhenkelmann.de Fri Mar 1 15:40:28 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Fri, 1 Mar 2002 21:40:28 +0100 Subject: [SciPy-dev] CVS problem References: <004101c1c15b$67137e40$ced99e3e@arrow> <0a3601c1c155$eff9c8f0$6b01a8c0@ericlaptop> Message-ID: <00c801c1c161$52eae9c0$ced99e3e@arrow> It worked. Thanks. ----- Original Message ----- From: "eric" To: Sent: Friday, March 01, 2002 8:18 PM Subject: Re: [SciPy-dev] CVS problem > System rebooted. Try again. > > We're still working on the other server. > > eric > > ----- Original Message ----- > From: "Heiko Henkelmann" > To: > Sent: Friday, March 01, 2002 2:58 PM > Subject: [SciPy-dev] CVS problem > > > > > > We again have a problem with the CVS access: > > > > cvs -z7 update -P -d (in directory > > C:\home\henkelma\projects\python\scipy_clean\) > > ? ChangeLog > > cvs server: Updating . 
> > U __cvs_version__.py > > cvs server: Updating blas > > cvs server: cannot open .cvsignore: Too many open files > > cvs server: cannot open .cvswrappers: Too many open files > > cvs server: Updating blas/SRC > > cvs [server aborted]: cannot open CVS/Repository: Too many open files > > > > *****CVS exited normally with code 1***** > > > > > > Heiko > > > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From jmr at engineering.uiowa.edu Sat Mar 2 00:03:10 2002 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Fri, 01 Mar 2002 23:03:10 -0600 Subject: [SciPy-dev] compiled new debian packages Message-ID: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> I grabbed the CVS from Friday night, around 9 PM central time. I compile a new debian package for scipy. You can find it at http://people.debian.org/~jmr Let me know how the debian package works. - Joe P.S. I just returned from the SPIE Conf. on Medical Imaging (see www.spie.org) in San Diego. At least two groups showed demos that used python and scipy (my group plus one other group). Several other groups used numeric and python-vtk. -- Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From prabhu at aero.iitm.ernet.in Sat Mar 2 01:21:43 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sat, 2 Mar 2002 11:51:43 +0530 Subject: [SciPy-dev] compiled new debian packages In-Reply-To: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> References: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> Message-ID: <15488.28535.115756.223793@monster.linux.in> >>>>> "JR" == Joe Reinhardt writes: JR> I grabbed the CVS from Friday night, around 9 PM central time. JR> I compile a new debian package for scipy. You can find it at JR> http://people.debian.org/~jmr JR> Let me know how the debian package works. Cool. I still build it from sources but am anxious to know if this will get into sid/woody sometime? thanks, prabhu From jmr at engineering.uiowa.edu Sat Mar 2 09:10:48 2002 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Sat, 02 Mar 2002 08:10:48 -0600 Subject: [SciPy-dev] compiled new debian packages In-Reply-To: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> (jmr@engineering.uiowa.edu's message of "Fri, 01 Mar 2002 23:03:10 -0600") References: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> Message-ID: <87n0xrhxg7.fsf@phantom.ecn.uiowa.edu> jmr at engineering.uiowa.edu (Joe Reinhardt) writes: > I grabbed the CVS from Friday night, around 9 PM central time. I > compile a new debian package for scipy. Eigenvectors don't seem to work in the package I created. >>> linalg.eig(A, 1) dgeev:lwork=0 Traceback (most recent call last): File "", line 1, in ? File "debian/scipy/usr/lib/python2.1/site-packages/scipy/linalg/linear_algebra.py", line 440, in eig flapack.error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork I built scipy using f2py version 2.13.175-1212. Thanks, Joe -- Joseph M. Reinhardt, Ph.D. 
Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From jmr at engineering.uiowa.edu Sat Mar 2 09:06:50 2002 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Sat, 02 Mar 2002 08:06:50 -0600 Subject: [SciPy-dev] compiled new debian packages In-Reply-To: <15488.28535.115756.223793@monster.linux.in> (Prabhu Ramachandran's message of "Sat, 2 Mar 2002 11:51:43 +0530") References: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> <15488.28535.115756.223793@monster.linux.in> Message-ID: <87sn7jhxmt.fsf@phantom.ecn.uiowa.edu> Prabhu Ramachandran writes: > Cool. I still build it from sources but am anxious to know if this > will get into sid/woody sometime? I certainly hope to get it into the main distribution soon. However, scipy is still changing very quickly and I am unsure when I can grab a "stable" snapshot from CVS. I want to avoid packaging a snapshot that is in the middle of a series of big changes. - Joe -- Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From fperez at pizero.colorado.edu Sat Mar 2 11:16:54 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Sat, 2 Mar 2002 09:16:54 -0700 (MST) Subject: [SciPy-dev] bug in python 2.2/linux for numerical computations? Message-ID: Hi all, this is a repost of something I saw today on c.l.p, but which seems of interest to us. I tested it on a python 2.2 installation hand-built under Mandrake 8.1 (gcc 2.96), same problem. It doesn't happen under windows xp (py2.2 downloaded from python.org site). Cheers, f. Original post: Python 2.2 seriously crippled for numerical computation? From:Huaiyu Zhu Date:Saturday 02 March 2002 04:51:26 Groups:comp.lang.python There appears to be a serious bug in Python 2.2 that severely limits its usefulness for numerical computation: # Python 1.5.2 - 2.1 >>> 1e200**2 inf >>> 1e-200**2 0.0 # Python 2.2 >>> 1e-200**2 Traceback (most recent call last): File "", line 1, in ? OverflowError: (34, 'Numerical result out of range') >>> 1e200**2 Traceback (most recent call last): File "", line 1, in ? OverflowError: (34, 'Numerical result out of range') This produces the following serious effects: after hours of numerical computation, just as the error is converging to zero, the whole thing suddenly unravels. Note that try/except is completely useless for this purpose. I hope this is unintended behavior and that there is an easy fix. Have any of you experienced this? Huaiyu Tim Peter's response: > There appears to be a serious bug in Python 2.2 that severely limits its > usefulness for numerical computation: > > # Python 1.5.2 - 2.1 > > >>> 1e200**2 > inf A platform-dependent accident, there. > >>> 1e-200**2 > 0.0 > > # Python 2.2 > > >>> 1e-200**2 > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: (34, 'Numerical result out of range') That one is surprising and definitely not intended: it suggests your platform libm is setting errno to ERANGE for pow(1e-200, 2.0), or that your platform C headers define INFINITY but incorrectly, or that your platform C headers define HUGE_VAL but incorrectly, or that your platform C compiler generates bad code, or optimizes incorrectly, for negating and/or comparing against its definition of HUGE_VAL or INFINITY. Python intends silent underflow to 0 in this case, and I haven't heard of underflows raising OverflowError before. 
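A minimal sketch of a per-operation guard for affected builds, assuming (as described above) that only the ** path through the platform's pow() raises the spurious OverflowError while a plain float multiply still underflows silently to 0.0; the helper name safe_square is hypothetical:

def safe_square(x):
    # On an affected Python 2.2 build, x ** 2 can raise OverflowError for
    # very small x; plain multiplication bypasses pow() and underflows to 0.0.
    try:
        return x ** 2
    except OverflowError:
        return x * x

print safe_square(1e-200)   # 0.0, even where 1e-200 ** 2 raises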
Please file a bug report with full details about which operating system, Python version, compiler and C libraries you're using (then it's going to take a wizard with access to all that stuff to trace into it and determine the true cause). > >>> 1e200**2 > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: (34, 'Numerical result out of range') That one is intended; see http://sf.net/tracker/?group_id=5470&atid=105470&func=detail&aid=496104 for discussion. > This produces the following serious effects: after hours of numerical > computation, just as the error is converging to zero, the whole thing > suddenly unravels. It depends on how you write your code, of course. > Note that try/except is completely useless for this purpose. Ditto. If your platform C lets you get away with it, you may still be able to get an infinity out of 1e200 * 1e200. > I hope this is unintended behavior Half intended, half both unintended and never before reported. > and that there is an easy fix. Sorry, "no" to either. ------------------------------ Paul Dubois just posted this a second ago: I also see the underflow problem on my Linux box 2.4.2-2. This is certainly untenable. However, I am able to catch OverflowError in both cases. I had a user complain about this just yesterday, so I think it is a new behavior in Python 2.2 which I was just rolling out. A small Fortran test problem did not exhibit the underflow bug, and caught the overflow bug at COMPILE TIME (!). There are two states for the IEEE underflow: one in which the hardware sets it to zero, and the other in which the hardware signals the OS and you can tell the OS to set it to zero. There is no standard for the interface to this facility that I am aware of. (Usually I have had to figure out how to make sure the underflow was handled in hardware because the sheer cost of letting it turn into a system call was prohibitive.) I speculate that on machines where the OS call is the default that Python 2.2 is catching the signal when it should let it go by. I have not looked at this lately so something may have changed. You can use the kinds package that comes with Numeric to test for maximum and minimum exponents. kinds.default_float_kind.MAX_10_EXP (equal to 308 on my Linux box, for example) tells you how big an exponent a floating point number can have. MIN_10_EXP (-307 for me) is also there. Work around on your convergence test: instead of testing x**2 you might test log10(x) vs. a constant or some expression involving kinds.default_float_kind.MIN_10_EXP. From pearu at cens.ioc.ee Sat Mar 2 12:28:26 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 2 Mar 2002 19:28:26 +0200 (EET) Subject: [SciPy-dev] Progress with linalg2 Message-ID: Hi, I am working again with linalg2 and I have made some progress with it. I have almost finished testing solve() function, other functions will get out faster hopefully. Here are some timing results that compare the corresponding functions of scipy and Numeric: Solving system of linear equations ================================== | continuous | non-continuous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.11 | 1.70 | 1.10 | 1.85 (secs for 2000 calls) 100 | 1.65 | 3.02 | 1.68 | 4.47 (secs for 300 calls) 500 | 1.73 | 2.14 | 1.78 | 2.33 (secs for 4 calls) 1000 | 5.60 | 6.23 | 5.59 | 7.03 (secs for 2 calls) Notes: 1) `Numeric' refers to using LinearAlgebra.solve_linear_equations(). 2) `scipy' refers to using scipy.linalg.solve(). 
3) `size' is the number of equations. 4) Both continuous and non-continuous arrays were used in the tests. 5) Both Numeric and scipy use the same LAPACK routine dgesv from ATLAS-3.3.13. 6) The tests were run on PII-400MHz, 160MB RAM, Debian Woody with gcc-2.95.4, Python 2.2, Numeric 20.3, f2py-2.13.175-1218. Conclusions: 1) The corresponding Scipy function is faster in all tests. The difference gets smaller for larger tasks but it does not vanish. 2) Since both Scipy and Numeric functions use the same LAPACK routine, then these tests actually measure the efficency of the interfaces to LAPACK routines. In the Scipy case the interfaces are generated by f2py and in the Numeric case by a man. These results show that it makes sense to use automatically generated extension modules: one can always tune the code generator for producing a better code while hand-written extension modules will hardly get tuned in practice. 3) Note that there is almost no difference whether the input array to f2py generated extension module is contiguous or non-contiguous, these cases are efficently handled by the interface. While using the Numeric interface, the difference is quite noticable. Note also that in order to run these tests, one has to have f2py-2.13.175-1218 (in f2py CVS) or later because earlier versions of f2py leak memory. Here is how I run the tests (remember to cvs update): cd linalg2 python setup_linalg.py build --build-platlib=. python tests/test_basic.py Regards, Pearu From eric at scipy.org Sat Mar 2 18:27:12 2002 From: eric at scipy.org (eric) Date: Sat, 2 Mar 2002 18:27:12 -0500 Subject: [SciPy-dev] Progress with linalg2 References: Message-ID: <0bda01c1c241$c85cc3f0$6b01a8c0@ericlaptop> Hey Pearu, This looks great on several accounts. First congrats on the speed of your f2py interfaces. I've wondered in the past if what your doing is generally appicable and efficient, and your proving that it is. Good work. Second, it is nice to see to see a speed improvement on the linalg solve. Call me greedy, but I was actually expecting more -- I thought I remembered factors of 2-10 when comparing ATLAS to standard LAPACK on my machine. Perhaps this is because of the processor types. As I remember ATLAS runs faster on PIII than PII because of the SSE instruction set, and it really comes into its own on a P4 with the SSE2 instruction set. I'll be interested to see time comparisons for those machines, and also compared to Matlab, Octave, C, and other common tools. One other thing. Sorry I haven't had more time to put into this part of the linalg section of the project. I haven't managed to make time for it as I had hoped, and you've picked up the slack wonderfully. Thanks. I'll run tests this evening and report how they do on my PIII-850 laptop on W2K. thanks for all your good work, eric ----- Original Message ----- From: "Pearu Peterson" To: Sent: Saturday, March 02, 2002 12:28 PM Subject: [SciPy-dev] Progress with linalg2 > > Hi, > > I am working again with linalg2 and I have made some progress with it. > I have almost finished testing solve() function, other functions will > get out faster hopefully. 
> > Here are some timing results that compare the corresponding functions of > scipy and Numeric: > > Solving system of linear equations > ================================== > | continuous | non-continuous > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 1.11 | 1.70 | 1.10 | 1.85 (secs for 2000 calls) > 100 | 1.65 | 3.02 | 1.68 | 4.47 (secs for 300 calls) > 500 | 1.73 | 2.14 | 1.78 | 2.33 (secs for 4 calls) > 1000 | 5.60 | 6.23 | 5.59 | 7.03 (secs for 2 calls) > > Notes: > 1) `Numeric' refers to using LinearAlgebra.solve_linear_equations(). > 2) `scipy' refers to using scipy.linalg.solve(). > 3) `size' is the number of equations. > 4) Both continuous and non-continuous arrays were used in the tests. > 5) Both Numeric and scipy use the same LAPACK routine dgesv from > ATLAS-3.3.13. > 6) The tests were run on PII-400MHz, 160MB RAM, Debian Woody > with gcc-2.95.4, Python 2.2, Numeric 20.3, f2py-2.13.175-1218. > > Conclusions: > 1) The corresponding Scipy function is faster in all tests. > The difference gets smaller for larger tasks but it does not vanish. > > 2) Since both Scipy and Numeric functions use the same LAPACK > routine, then these tests actually measure the efficency of the interfaces > to LAPACK routines. In the Scipy case the interfaces are generated by > f2py and in the Numeric case by a man. These results show that it makes > sense to use automatically generated extension modules: one can always > tune the code generator for producing a better code while hand-written > extension modules will hardly get tuned in practice. > > 3) Note that there is almost no difference whether the input array to f2py > generated extension module is contiguous or non-contiguous, these > cases are efficently handled by the interface. While using the Numeric > interface, the difference is quite noticable. > > Note also that in order to run these tests, one has to have > f2py-2.13.175-1218 (in f2py CVS) or later because earlier versions of f2py > leak memory. Here is how I run the tests (remember to cvs update): > > cd linalg2 > python setup_linalg.py build --build-platlib=. > python tests/test_basic.py > > Regards, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Sun Mar 3 06:35:02 2002 From: eric at scipy.org (eric) Date: Sun, 3 Mar 2002 06:35:02 -0500 Subject: [SciPy-dev] Progress with linalg2 References: Message-ID: <0c2601c1c2a7$758aeb50$6b01a8c0@ericlaptop> Hey Pearu, 3 things: 1. My atlas is missing some routines that you have in generic_clapack.pyf. Specifically: clapack_xgetri clapack_xpotri clapack_xlauum clapack_xtrtri where x stands for the various types. Are these in your ATLAS? I have a fairly recent version, but perhaps I need to upgrade? 2. I get segfaults when trying to run the tests. It looks like they happen for both C and Fortran (I commented out contiguous cases to test Fortran). Do you think this is a windows specific issue? I am using the latest f2py CVS. 3. I like your _measure tests. I've long been wishing we had some set of benchmarks that we could test with something like: >>> import scipy >>> scipy.benchmark() This is a nice step in that direction. eric ----- Original Message ----- From: "Pearu Peterson" To: Sent: Saturday, March 02, 2002 12:28 PM Subject: [SciPy-dev] Progress with linalg2 > > Hi, > > I am working again with linalg2 and I have made some progress with it. 
> I have almost finished testing solve() function, other functions will > get out faster hopefully. > > Here are some timing results that compare the corresponding functions of > scipy and Numeric: > > Solving system of linear equations > ================================== > | continuous | non-continuous > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 1.11 | 1.70 | 1.10 | 1.85 (secs for 2000 calls) > 100 | 1.65 | 3.02 | 1.68 | 4.47 (secs for 300 calls) > 500 | 1.73 | 2.14 | 1.78 | 2.33 (secs for 4 calls) > 1000 | 5.60 | 6.23 | 5.59 | 7.03 (secs for 2 calls) > > Notes: > 1) `Numeric' refers to using LinearAlgebra.solve_linear_equations(). > 2) `scipy' refers to using scipy.linalg.solve(). > 3) `size' is the number of equations. > 4) Both continuous and non-continuous arrays were used in the tests. > 5) Both Numeric and scipy use the same LAPACK routine dgesv from > ATLAS-3.3.13. > 6) The tests were run on PII-400MHz, 160MB RAM, Debian Woody > with gcc-2.95.4, Python 2.2, Numeric 20.3, f2py-2.13.175-1218. > > Conclusions: > 1) The corresponding Scipy function is faster in all tests. > The difference gets smaller for larger tasks but it does not vanish. > > 2) Since both Scipy and Numeric functions use the same LAPACK > routine, then these tests actually measure the efficency of the interfaces > to LAPACK routines. In the Scipy case the interfaces are generated by > f2py and in the Numeric case by a man. These results show that it makes > sense to use automatically generated extension modules: one can always > tune the code generator for producing a better code while hand-written > extension modules will hardly get tuned in practice. > > 3) Note that there is almost no difference whether the input array to f2py > generated extension module is contiguous or non-contiguous, these > cases are efficently handled by the interface. While using the Numeric > interface, the difference is quite noticable. > > Note also that in order to run these tests, one has to have > f2py-2.13.175-1218 (in f2py CVS) or later because earlier versions of f2py > leak memory. Here is how I run the tests (remember to cvs update): > > cd linalg2 > python setup_linalg.py build --build-platlib=. > python tests/test_basic.py > > Regards, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Sun Mar 3 07:58:22 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 3 Mar 2002 14:58:22 +0200 (EET) Subject: [SciPy-dev] Progress with linalg2 In-Reply-To: <0c2601c1c2a7$758aeb50$6b01a8c0@ericlaptop> Message-ID: Hi Eric, On Sun, 3 Mar 2002, eric wrote: > 1. My atlas is missing some routines that you have in generic_clapack.pyf. > Specifically: > > clapack_xgetri > clapack_xpotri > clapack_xlauum > clapack_xtrtri > > where x stands for the various types. Are these in your ATLAS? I have a > fairly recent version, but perhaps I need to upgrade? Yes, my ATLAS have them. I am using ATLAS-3.3.13 and I didn't realize that it may contain functions not included in the stable ATLAS. Sorry about that. > 2. I get segfaults when trying to run the tests. It looks like they happen > for both C and Fortran (I commented out contiguous cases to test Fortran). > Do you think this is a windows specific issue? I am using the latest > f2py CVS. Try using only flapack. Because the segfaults may be caused by the 1. issue above. 
To be sure that solve() will not use clapack functions, you need to remove clapack.so file, commenting some test cases out may not be enough. > 3. I like your _measure tests. I've long been wishing we had some set of > benchmarks that we could test with something like: > > >>> import scipy > >>> scipy.benchmark() > > This is a nice step in that direction. So, shall we use benchmark_ or bench_ (instead of measure_) prefix for the names of corresponding member functions in test_*.py? Pearu From eric at scipy.org Sun Mar 3 07:01:14 2002 From: eric at scipy.org (eric) Date: Sun, 3 Mar 2002 07:01:14 -0500 Subject: [SciPy-dev] log plots References: <004101c1be3d$d630f620$4761e03e@arrow> <038501c1bef9$4f303ca0$6b01a8c0@ericlaptop> <001001c1bf0e$5f66bb20$1bd89e3e@arrow> Message-ID: <0c8301c1c2ab$20a5af90$6b01a8c0@ericlaptop> Hey Heiko, Thanks for the log plot patches to gplt. I changed the interface slightly, added support for the y axis, and checked them into the cvs. The following should work now. >>> from scipy import gplt >>> gplt.plot([1,2,3]) >>> gplt.logx() >>> gplt.logx('off') >>> gplt.logy() I noticed that grid lines disappeared on axis with log scale. Not sure why this happens. eric ----- Original Message ----- From: "Heiko Henkelmann" To: Sent: Tuesday, February 26, 2002 4:41 PM Subject: Re: [SciPy-dev] log plots > > > I added logscalex() and nologscalex() to gplt. > > Heiko > ----- Original Message ----- > From: "eric" > To: > Sent: Tuesday, February 26, 2002 8:10 PM > Subject: Re: [SciPy-dev] log plots > > > > > > > > > Hello There, > > > > > > are there any plans to include log plots in any of the plot modules in > the > > > future? Or did I miss anything in the current version? > > > > > > > They aren't there now, but they should show up within the next 6 months. > > Patches to the current version with this feature are welcome. > > > > eric > > > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-dev > > > From eric at scipy.org Sun Mar 3 07:12:39 2002 From: eric at scipy.org (eric) Date: Sun, 3 Mar 2002 07:12:39 -0500 Subject: [SciPy-dev] Progress with linalg2 References: Message-ID: <0cca01c1c2ac$b6fa6070$6b01a8c0@ericlaptop> > > Hi Eric, > > On Sun, 3 Mar 2002, eric wrote: > > > 1. My atlas is missing some routines that you have in generic_clapack.pyf. > > Specifically: > > > > clapack_xgetri > > clapack_xpotri > > clapack_xlauum > > clapack_xtrtri > > > > where x stands for the various types. Are these in your ATLAS? I have a > > fairly recent version, but perhaps I need to upgrade? > > Yes, my ATLAS have them. I am using ATLAS-3.3.13 and I didn't realize that > it may contain functions not included in the stable ATLAS. Sorry about > that. > > > 2. I get segfaults when trying to run the tests. It looks like they happen > > for both C and Fortran (I commented out contiguous cases to test Fortran). > > Do you think this is a windows specific issue? I am using the latest > > f2py CVS. > > Try using only flapack. Because the segfaults may be caused by the > 1. issue above. To be sure that solve() will not use clapack functions, > you need to remove clapack.so file, commenting some test cases out may not > be enough. I'll do more research here and report back. > > > 3. I like your _measure tests. 
I've long been wishing we had some set of > > benchmarks that we could test with something like: > > > > >>> import scipy > > >>> scipy.benchmark() > > > > This is a nice step in that direction. > > So, shall we use benchmark_ or bench_ (instead of measure_) prefix for the > names of corresponding member functions in test_*.py? I like "bench_". short, sweet, and obvious (well hopefully). Also, I guess we should start adding a "bench_suite" function to the test modules also. At some point, we need to sub-class the unit test class to provide a few timing routines so that: def bench_solve(self): self.start_timer() self.stop_timer() The test case should keep up with the timing information internally. A more sophisticated interface might be necessary, but this is a start. see ya, eric From eric at scipy.org Sun Mar 3 07:14:13 2002 From: eric at scipy.org (eric) Date: Sun, 3 Mar 2002 07:14:13 -0500 Subject: [SciPy-dev] compiled new debian packages References: <874rjzk1dd.fsf@phantom.ecn.uiowa.edu> Message-ID: <0cda01c1c2ac$eeb16090$6b01a8c0@ericlaptop> > P.S. I just returned from the SPIE Conf. on Medical Imaging (see > www.spie.org) in San Diego. At least two groups showed demos that > used python and scipy (my group plus one other group). Several other > groups used numeric and python-vtk. Very cool indeed. From heiko at hhenkelmann.de Sun Mar 3 14:46:33 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Sun, 3 Mar 2002 20:46:33 +0100 Subject: [SciPy-dev] log plots References: <004101c1be3d$d630f620$4761e03e@arrow> <038501c1bef9$4f303ca0$6b01a8c0@ericlaptop> <001001c1bf0e$5f66bb20$1bd89e3e@arrow> <0c8301c1c2ab$20a5af90$6b01a8c0@ericlaptop> Message-ID: <001f01c1c2ec$1faa2800$76d89e3e@arrow> Hello Eric, thank you for integrating the patches. The grid lines on the axis with logscale do not really disappear. Gnuplot is only using the grid for the major ticks (0.1,1.0,10.0,...). Heiko ----- Original Message ----- From: "eric" To: Sent: Sunday, March 03, 2002 1:01 PM Subject: Re: [SciPy-dev] log plots > Hey Heiko, > > Thanks for the log plot patches to gplt. I changed the interface slightly, > added support for the y axis, and checked them into the cvs. The following > should work now. > > >>> from scipy import gplt > >>> gplt.plot([1,2,3]) > >>> gplt.logx() > > >>> gplt.logx('off') > > >>> gplt.logy() > > > I noticed that grid lines disappeared on axis with log scale. Not sure why > this happens. > > eric > From pearu at cens.ioc.ee Sun Mar 3 18:32:11 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 4 Mar 2002 01:32:11 +0200 (EET) Subject: [SciPy-dev] Progress with linalg2 In-Reply-To: <0cca01c1c2ac$b6fa6070$6b01a8c0@ericlaptop> Message-ID: Hi, On Sun, 3 Mar 2002, eric wrote: > I like "bench_". short, sweet, and obvious (well hopefully). Also, I guess we > should start adding a "bench_suite" function to the test modules also. At some > point, we need to sub-class the unit test class to provide a few timing routines > so that: > > def bench_solve(self): > > self.start_timer() > > self.stop_timer() > > The test case should keep up with the timing information internally. A more > sophisticated interface might be necessary, but this is a start. In linalg2/tests/test_basic.py I have implemented BenchCase class for simplifying benchmarking, and as you will see in test_solve, its usage is really simple now. I would suggest that scipy_test will provide class ScipyTestCase(unittest.TestCase, BenchCase): ... that the test files should use as a base class. Feel free to choose better names. 
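A rough sketch of what such a base class could look like, combining the start_timer()/stop_timer() idea from eric's message with the BenchCase/ScipyTestCase names proposed above (the timing details are illustrative, not the actual linalg2 code):

import time
import unittest

class BenchCase:
    """Mix-in giving bench_* methods simple wall-clock timing."""
    def start_timer(self):
        self._t0 = time.time()
    def stop_timer(self):
        # Record and return the elapsed time for the current benchmark.
        self._elapsed = time.time() - self._t0
        return self._elapsed

class ScipyTestCase(unittest.TestCase, BenchCase):
    pass

class test_solve(ScipyTestCase):
    def bench_solve(self):
        self.start_timer()
        # ... call scipy.linalg.solve() repeatedly here ...
        print 'bench_solve: %.2f secs' % self.stop_timer()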
Another idea is about policy of running tests with different levels: if a lower level test fails for some reasons, then higher level tests will be all skipped. This is to avoid lots of error messages if something simple is wrong. What do you think? In case of interest, here are more test results that compare scipy vs Numeric. Now, scipy is much faster than Numeric (you'll need f2py>=2.13.175-1222): Solving system of linear equations ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.17 | 3.87 | 1.18 | 4.07 (secs for 2000 calls) 100 | 1.60 | 3.11 | 1.61 | 3.99 (secs for 300 calls) 500 | 1.64 | 2.12 | 1.64 | 2.38 (secs for 4 calls) 1000 | 5.64 | 6.65 | 5.62 | 7.19 (secs for 2 calls) . Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.50 | 5.94 | 1.49 | 6.15 (secs for 2000 calls) 100 | 4.37 | 9.23 | 4.42 | 10.14 (secs for 300 calls) 500 | 4.78 | 7.36 | 4.77 | 7.60 (secs for 4 calls) 1000 | 17.55 | 24.59 | 17.53 | 24.49 (secs for 2 calls) . ---------------------------------------------------------------------- Ran 13 tests in 207.334s Pearu From eric at scipy.org Sun Mar 3 17:47:36 2002 From: eric at scipy.org (eric) Date: Sun, 3 Mar 2002 17:47:36 -0500 Subject: [SciPy-dev] Progress with linalg2 References: Message-ID: <0d5001c1c305$6aac63e0$6b01a8c0@ericlaptop> > > Hi Eric, > > On Sun, 3 Mar 2002, eric wrote: > > > 1. My atlas is missing some routines that you have in generic_clapack.pyf. > > Specifically: > > > > clapack_xgetri > > clapack_xpotri > > clapack_xlauum > > clapack_xtrtri > > > > where x stands for the various types. Are these in your ATLAS? I have a > > fairly recent version, but perhaps I need to upgrade? > > Yes, my ATLAS have them. I am using ATLAS-3.3.13 and I didn't realize that > it may contain functions not included in the stable ATLAS. Sorry about > that. I downloaded the latest (3.3.14) and it built fine under cygwin. It sounds like this is the last one before a stable release for ATLAS, so the problem with missing functions is soon to go away. > > > 2. I get segfaults when trying to run the tests. It looks like they happen > > for both C and Fortran (I commented out contiguous cases to test Fortran). > > Do you think this is a windows specific issue? I am using the latest > > f2py CVS. > > Try using only flapack. Because the segfaults may be caused by the > 1. issue above. To be sure that solve() will not use clapack functions, > you need to remove clapack.so file, commenting some test cases out may not > be enough. Good news! After building with the new ATLAS, everything works fine. Here is the output on my machine: C:\home\ej\wrk\scipy\linalg2>python tests\test_basic.py .. Solving system of linear equations ================================== | continuous | non-continuous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 0.49 | 0.54 | 0.49 | 0.65 (secs for 2000 calls) 100 | 0.98 | 2.11 | 0.86 | 2.46 (secs for 300 calls) 500 | 1.08 | 3.13 | 1.06 | 3.28 (secs for 4 calls) 1000 | 3.56 | 28.32 | 3.59 | 28.39 (secs for 2 calls) . ---------------------------------------------------------------------- Ran 3 tests in 81.427s OK So, we get similar results for the scipy/ATLAS stuff. 
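For reference, a self-contained sketch of the kind of head-to-head timing reported in these tables (this is not the tests/test_basic.py driver itself; the random test system is illustrative, and the size/repeat pairs are taken from the table above):

import time
import Numeric
import RandomArray
import LinearAlgebra
from scipy import linalg

def bench_solve(size, repeat):
    # Diagonally dominant random system, so it is safely non-singular.
    a = RandomArray.random((size, size)) + size * Numeric.identity(size)
    b = RandomArray.random((size,))
    t0 = time.time()
    for i in range(repeat):
        linalg.solve(a, b)
    t_scipy = time.time() - t0
    t0 = time.time()
    for i in range(repeat):
        LinearAlgebra.solve_linear_equations(a, b)
    t_numeric = time.time() - t0
    return t_scipy, t_numeric

for size, repeat in [(20, 2000), (100, 300), (500, 4), (1000, 2)]:
    print size, bench_solve(size, repeat)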
It looks like the reason I remembered such a difference in scipy/Numeric is that I'm getting such lousy performance from Numeric's linpack_lite routines. I wonder why our Numeric performance is so different?? I'm running the latest 21.0b1 .exe on the Numeric website. Are you linking your Numeric against an optimized set of linpack routines? Anyway, it doesn't matter to much. The important thing is that linalg2 is working on W2K just fine. see ya, eric From oliphant.travis at ieee.org Sun Mar 3 21:46:50 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 3 Mar 2002 19:46:50 -0700 Subject: [SciPy-dev] Progress with linalg2 In-Reply-To: References: Message-ID: On Sunday 03 March 2002 04:32 pm, you wrote: > Hi, > > In case of interest, here are more test results that compare scipy vs > Numeric. Now, scipy is much faster than Numeric (you'll need > f2py>=2.13.175-1222): > > Solving system of linear equations > ================================== > > | contiguous | non-contiguous > > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 1.17 | 3.87 | 1.18 | 4.07 (secs for 2000 calls) > 100 | 1.60 | 3.11 | 1.61 | 3.99 (secs for 300 calls) > 500 | 1.64 | 2.12 | 1.64 | 2.38 (secs for 4 calls) > 1000 | 5.64 | 6.65 | 5.62 | 7.19 (secs for 2 calls) > . > Finding matrix inverse > ================================== > > | contiguous | non-contiguous > > ---------------------------------------------- > size | scipy | Numeric | scipy | Numeric > 20 | 1.50 | 5.94 | 1.49 | 6.15 (secs for 2000 calls) > 100 | 4.37 | 9.23 | 4.42 | 10.14 (secs for 300 calls) > 500 | 4.78 | 7.36 | 4.77 | 7.60 (secs for 4 calls) > 1000 | 17.55 | 24.59 | 17.53 | 24.49 (secs for 2 calls) > . Now, just to be clear these are only testing interfaces, correct? ATLAS is being used in both scipy and Numeric for these tests, right. It would be interesting to see what how a non-atlased Numeric compared. -Travis From pearu at cens.ioc.ee Mon Mar 4 04:13:44 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 4 Mar 2002 11:13:44 +0200 (EET) Subject: [SciPy-dev] Progress with linalg2 In-Reply-To: Message-ID: On Sun, 3 Mar 2002, Travis Oliphant wrote: > > Solving system of linear equations > > ================================== > > > > | contiguous | non-contiguous > > > > ---------------------------------------------- > > size | scipy | Numeric | scipy | Numeric > > 20 | 1.17 | 3.87 | 1.18 | 4.07 (secs for 2000 calls) > > 100 | 1.60 | 3.11 | 1.61 | 3.99 (secs for 300 calls) > > 500 | 1.64 | 2.12 | 1.64 | 2.38 (secs for 4 calls) > > 1000 | 5.64 | 6.65 | 5.62 | 7.19 (secs for 2 calls) > > . > > Finding matrix inverse > > ================================== > > > > | contiguous | non-contiguous > > > > ---------------------------------------------- > > size | scipy | Numeric | scipy | Numeric > > 20 | 1.50 | 5.94 | 1.49 | 6.15 (secs for 2000 calls) > > 100 | 4.37 | 9.23 | 4.42 | 10.14 (secs for 300 calls) > > 500 | 4.78 | 7.36 | 4.77 | 7.60 (secs for 4 calls) > > 1000 | 17.55 | 24.59 | 17.53 | 24.49 (secs for 2 calls) > > . > > Now, just to be clear these are only testing interfaces, correct? ATLAS is > being used in both scipy and Numeric for these tests, right. Correct. Right. > It would be interesting to see what how a non-atlased Numeric > compared. 
Here are the results for downgraded Numeric that uses the C lite versions of LAPACK from Numeric/Src: Solving system of linear equations ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.36 | 1.66 | 1.16 | 1.89 (secs for 2000 calls) 100 | 1.66 | 5.64 | 1.65 | 6.86 (secs for 300 calls) 500 | 1.92 | 10.02 | 1.67 | 10.10 (secs for 4 calls) 1000 | 5.69 | 49.10 | 5.68 | 49.89 (secs for 2 calls) . Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | Numeric | scipy | Numeric 20 | 1.44 | 3.22 | 1.43 | 3.48 (secs for 2000 calls) 100 | 4.25 | 17.63 | 4.17 | 18.43 (secs for 300 calls) 500 | 4.75 | 36.82 | 4.75 | 37.08 (secs for 4 calls) 1000 | 17.55 | 156.70 | 17.48 | 156.88 (secs for 2 calls) . ---------------------------------------------------------------------- Ran 13 tests in 644.039s Pearu From arnd.baecker at physik.uni-ulm.de Tue Mar 5 16:40:10 2002 From: arnd.baecker at physik.uni-ulm.de (arnd.baecker at physik.uni-ulm.de) Date: Tue, 5 Mar 2002 22:40:10 +0100 (MET) Subject: [SciPy-dev] installation In-Reply-To: Message-ID: Hi, I successfully managed to install scipy (under debian) I just started to have a closer look at python/Numeric/scipy and a couple of points 1.) Installation of current CVS Numeric, f2py2e, scipy: (scipy: cvs_version = (1, 18, 1377, 3080) ) Went really smooth (complete install including python 2.2b1, Numeric, ATLAS 3.2.1, fftw). Only one (minor) point: to install f2py2e it was necessary to copy the scipy_distutils from scipy to the f2py2e directory. 2.) scipy.text() lead to the following three errors: ====================================================================== ERROR: check_basic (test_basic1a.test_roots) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/tests/test_basic 1a.py", line 19, in check_basic assert_array_almost_equal(roots(a1),[2,2],11) File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/basic1a.py", lin e 52, in roots roots,dummy = eig(A) File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/linalg/linear_al gebra.py", line 440, in eig results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0]) error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork ====================================================================== ERROR: check_inverse (test_basic1a.test_roots) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/tests/test_basic 1a.py", line 25, in check_inverse assert_array_almost_equal(sort(roots(poly(a))),sort(a),5) File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/basic1a.py", lin e 52, in roots roots,dummy = eig(A) File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/linalg/linear_al gebra.py", line 440, in eig results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0]) error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork ====================================================================== ERROR: check_basic (test_handy.test_real_if_close) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/tests/test_handy 
.py", line 206, in check_basic a = randn(10) File "/home/abaecker/PYTHON/lib/python2.2/site-packages/scipy/basic1a.py", lin e 104, in randn return stats.standard_normal(size=args) AttributeError: 'module' object has no attribute 'standard_normal' Is any of these errors to be exected at present ? If no, what tests can I do or what further information do you need to find the problem (or what I did wrong .. ;) 3.) scipy.test(10): Ran 319 tests in 763.373s FAILED (errors=3) (same as under 2.) Arnd From jsw at cdc.noaa.gov Wed Mar 6 07:49:08 2002 From: jsw at cdc.noaa.gov (Jeff Whitaker) Date: Wed, 6 Mar 2002 05:49:08 -0700 (MST) Subject: [SciPy-dev] scipy on macos x works Message-ID: Hi: I've suceeded in getting scipy to work on MacOS X 10.1.3 (well mostly - it fails a few tests). Here are the patches I used to get it working. 1) f2py --- F2PY-2.13.175-1212/src/fortranobject.h.orig Mon Mar 4 20:12:46 2002 +++ F2PY-2.13.175-1212/src/fortranobject.h Mon Mar 4 20:16:36 2002 @@ -12,7 +12,9 @@ #undef F2PY_REPORT_ATEXIT #else #ifndef __WIN32__ +#ifndef __APPLE__ #define F2PY_REPORT_ATEXIT +#endif #endif #endif This prevents undefined symbol errors for "ftime" and "on_exit", which don't exist on MacOS X. 2) python 2.2 --- Python-2.2/Lib/distutils/util.py Sat Feb 23 21:23:17 2002 +++ Python-2.2/Lib/distutils/util.py Sat Feb 23 21:24:31 2002 @@ -41,6 +41,11 @@ # Try to distinguish various flavours of Unix (osname, host, release, version, machine) = os.uname() + + # On MacOS X, remove space from machine name + + if machine == "Power Macintosh": + machine = "PowerMacintosh" # Convert the OS name to lowercase and remove '/' characters # (to accommodate BSD/OS) If the temporary build directory has spaces in the name, the scipy install chokes. --- Python-2.2/configure.orig Sun Mar 3 13:02:32 2002 +++ Python-2.2/configure Tue Mar 5 04:38:52 2002 @@ -3174,13 +3174,10 @@ LDSHARED="$LDSHARED -undefined suppress" fi ;; Darwin/*) - LDSHARED='$(CC) $(LDFLAGS) -bundle' + LDSHARED='$(CC) $(LDFLAGS) -bundle -bundle_loader /sw/bin/python.exe' if test "$enable_framework" ; then # Link against the framework. All externals should be de fined. LDSHARED="$LDSHARED "'-framework $(PYTHONFRAMEWORK)' - else - # No framework. Ignore undefined symbols, assuming they come from Python - LDSHARED="$LDSHARED -flat_namespace -undefined suppress" fi ;; Linux*) LDSHARED="gcc -shared";; dgux*) LDSHARED="ld -G";; Replacing the LDSHARED flags "-flat_namespace -undefined suppress" with "-bundle_loader /bin/python." makes extension modules compile with "two-level namespaces". What exactly this means I can't tell you - this is a workaround for multiply defined symbol errors that occur when multiple "flat namespace" loadable modules that link the same static lib are loaded in python in Mac OS X. This fix was suggested to me on the pythonmac-sig list. 
3) scipy --- ./scipy/scipy_distutils/command/build_flib.py.orig Sat Feb 23 07:42:12 2002 +++ ./scipy/scipy_distutils/command/build_flib.py Sat Feb 23 13:26:32 2002 @@ -362,6 +362,9 @@ cmd = 'ar -cur %s %s' % (lib_file,objects) print cmd os.system(cmd) + cmd = 'ranlib %s ' % lib_file + print cmd + os.system(cmd) def build_library(self,library_name,source_list,module_dirs=None, temp_dir = ''): @@ -667,7 +670,7 @@ def get_linker_so(self): # win32 linking should be handled by standard linker - if sys.platform != 'win32': + if sys.platform != 'win32' and os.uname()[0] != 'Darwin': return [self.f77_compiler,'-shared'] def f90_compile(self,source_files,module_files,temp_dir=''): --- ./scipy/sparse/UMFPACK2.2/Makefile.orig Sat Feb 23 17:23:22 2002 +++ ./scipy/sparse/UMFPACK2.2/Makefile Sat Feb 23 17:23:37 2002 @@ -33,6 +33,7 @@ libumfpack.a: $(UMFD) $(UMFS) $(UMFC) $(UMFZ) $(HARWELL) ar -rv libumfpack.a $(UMFD) $(UMFS) $(UMFC) $(UMFZ) $(HARWELL) + ranlib libumfpack.a dmain.out: dmain in dmain < in > dmain.out --- ./scipy/special/amos/setup.py.orig Sat Feb 23 17:20:19 2002 +++ ./scipy/special/amos/setup.py Sat Feb 23 17:20:51 2002 @@ -30,6 +30,9 @@ cmd = 'ar -cr lib%s.a %s' % (library_name,objects) print cmd os.system(cmd) + cmd = 'ranlib lib%s.a' % library_name + print cmd + os.system(cmd) def build_library(self,library_name,source_list): object_list = map(lambda x: x[:-1] +'o',source_list) --- ./scipy/special/cephes/polmisc.c.orig Sat Feb 23 06:35:43 2002 +++ ./scipy/special/cephes/polmisc.c Sat Feb 23 06:36:00 2002 @@ -4,7 +4,9 @@ */ #include +#ifndef __APPLE__ #include +#endif #include "mconf.h" #ifndef ANSIPROT double atan2(), sqrt(), fabs(), sin(), cos(); --- ./scipy/special/cephes/polyn.c.orig Sat Feb 23 06:36:21 2002 +++ ./scipy/special/cephes/polyn.c Sat Feb 23 06:36:39 2002 @@ -65,7 +65,9 @@ #define NULL 0 #endif #include "mconf.h" +#ifndef __APPLE__ #include +#endif /* near pointer version of malloc() */ /* --- ./scipy/scipy_distutils/command/build_clib.py.orig Sat Feb 23 20:21:51 2002 +++ ./scipy/scipy_distutils/command/build_clib.py Sat Feb 23 20:24:48 2002 @@ -249,6 +249,9 @@ self.compiler.create_static_lib(objects, lib_name, output_dir=self.build_clib, debug=self.debug) + cmd = 'ranlib %s/lib%s.a' % (self.build_clib,lib_name) + print cmd + os.system(cmd) # for libraries --- scipy/special/cephes/mconf.h.orig Tue Mar 5 10:01:37 2002 +++ scipy/special/cephes/mconf.h Tue Mar 5 10:02:50 2002 @@ -100,7 +100,7 @@ /* Intel IEEE, low order words come first: */ -#define IBMPC 1 +/* #define IBMPC 1*/ /* Motorola IEEE, high order words come first * (Sun 680x0 workstation): @@ -113,10 +113,10 @@ * roundoff problems in pow.c: * (Sun SPARCstation) */ -/* #define UNK 1 */ +#define UNK 1 /* If you define UNK, then be sure to set BIGENDIAN properly. */ -#define BIGENDIAN 0 +#define BIGENDIAN 1 /* Define this `volatile' if your compiler thinks * that floating point arithmetic obeys the associative Basically, this patch adds the execution of "ranlib" to the build scripts, ifdef's out malloc.h includes (malloc.h doesn't exist on OS X), and tells cephes that OS X is big endian. You'll also need to install atlas, g77, fftw, numeric (linked to atlas), and optionally wxpython-wxgtk (wxpython-wxmac doesn't yet work). All of these packages are available through fink (http://fink.sf.net). I'm a fink developer and am the maintainer for most of the scientific packages (as well as python). I've just uploaded a package for scipy (based on a March 5 CVS snapshot). 
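Condensed into one place, the ar/ranlib pattern that the patches above add to several build scripts looks roughly like this (a sketch, not the patched scipy_distutils code itself; the helper name is hypothetical):

import os

def build_static_lib(lib_file, objects):
    # Build the archive, then index it with ranlib, which the Darwin
    # linker requires before the static library can be linked against.
    for cmd in ('ar -cur %s %s' % (lib_file, ' '.join(objects)),
                'ranlib %s' % lib_file):
        print cmd
        os.system(cmd)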
It would be great if these patches could be incorporated into f2py and scipy before the 0.2 release. Hopefull, python 2.2.1 will contain the necessary changes to LDSHARED, although I don't think it will remove the spaces from "osname". Maybe there is a way to work around this in scipy, but I couldn't figure it out. Here are the results of scipy.test() on my G4: Ran 262 tests in 9.809s FAILED (failures=1, errors=1) the messages include: !! FAILURE building test for scipy.basic /sw/src/root-scipy-20020305-1/sw/lib/python2.2/site-packages/scipy/basic1a.py:107: AttributeError: 'module' object has no attribute 'standard_normal' (in randn) FAILURE to import scipy.stats.distributions :0: AttributeError: 'module' object has no attribute 'distributions' (in ?) FAILURE to import scipy.stats.rv :0: AttributeError: 'module' object has no attribute 'rv' (in ?) FAILURE to import scipy.stats.rv2 :0: AttributeError: 'module' object has no attribute 'rv2' (in ?) FAILURE to import scipy.stats.stats :0: AttributeError: 'module' object has no attribute 'stats' (in ?) ====================================================================== ERROR: check_basic (test_handy.test_real_if_close) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/src/root-scipy-20020305-1/sw/lib/python2.2/site-packages/scipy/tests/test_handy.py", line 206, in check_basic a = randn(10) File "/sw/src/root-scipy-20020305-1/sw/lib/python2.2/site-packages/scipy/basic1a.py", line 107, in randn AttributeError: 'module' object has no attribute 'standard_normal' ====================================================================== FAIL: check_basic (test_basic1a.test_roots) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/src/root-scipy-20020305-1/sw/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line 19, in check_basic assert_array_almost_equal(roots(a1),[2,2],11) File "/sw/src/root-scipy-20020305-1/sw/lib/python2.2/site-packages/scipy_test/scipy_test.py", line 313, in assert_array_almost_equal AssertionError: Arrays are not almost equal: Do people see these failures on linux, or is it something I should try to fix for OS X? Cheers, -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/CDC R/CDC1 Email : jsw at cdc.noaa.gov 325 Broadway Web : www.cdc.noaa.gov/~jsw Boulder, CO, USA 80303-3328 Office : Skaggs Research Cntr 1D-124 From arnd.baecker at physik.uni-ulm.de Wed Mar 6 08:48:27 2002 From: arnd.baecker at physik.uni-ulm.de (arnd.baecker at physik.uni-ulm.de) Date: Wed, 6 Mar 2002 14:48:27 +0100 (MET) Subject: [SciPy-dev] scipy on macos x works In-Reply-To: Message-ID: Hi, of the errors you mentioned I only get (under linux, cvs_version = (1, 18, 1377, 3080)) the error for test_handy.test_real_if_close (AttributeError: 'module' object has no attribute 'standard_normal'). In addition I have two different further ones (see my previous mail), in check_basic and check_inverse (test_basic1a.test_roots), both related to eig, which are presumably irrelevant to you. 
Arnd From fperez at pizero.colorado.edu Wed Mar 6 17:42:48 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Wed, 6 Mar 2002 15:42:48 -0700 (MST) Subject: [SciPy-dev] weave - type_factories change? Message-ID: Hi all, the following code used to work with weave a few weeks ago: #----------------------------------------------------------------------------- # Returning a scalar quantity computed from a Numeric array. def trace(mat): """Return the trace of a matrix. 
""" nrow,ncol = mat.shape code = \ """ double tr=0.0; for(int i=0;i References: Message-ID: <15494.58637.355512.75712@monster.linux.in> >>>>> "FP" == Fernando Perez writes: FP> Now the type_factories call doesn't work, and FP> blitz_type_factories doesn't seem to exist in blitz anymore. I FP> couldn't find what the new syntax is, could someone enlighten FP> me please? just add: from scipy.weave import converters And call inline like so: return weave.inline(code,['mat','nrow','ncol'], type_converters = converters.blitz) prabhu From peterson at math.utwente.nl Thu Mar 7 04:43:41 2002 From: peterson at math.utwente.nl (Pearu Peterson) Date: Thu, 7 Mar 2002 10:43:41 +0100 (CET) Subject: [SciPy-dev] linalg fixes Message-ID: Hi! I have updated linalg (only in CVS) to work again with the 'breaking' f2py. Before using it real applications, please check that the corresponding functions return correct results. If you find that results are incorrect, try the same function with arrays transposed. I would suspect that incorrect transposition may be source for these possible bugs. Note that linalg is fixed only temporarily (in the sense that I will not add any new features there) and eventually linalg2 will replace linalg. Regards, Pearu From pearu at scipy.org Thu Mar 7 04:56:39 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Thu, 7 Mar 2002 03:56:39 -0600 (CST) Subject: [SciPy-dev] what is going on with scipy.org IP? Message-ID: I find the following weird: [pearu at www pearu]$ host scipy.org scipy.org has address 209.163.223.165 scipy.org mail is handled (pri=20) by mx2.mail.twtelecom.net scipy.org mail is handled (pri=5) by scipy.org scipy.org mail is handled (pri=20) by mx1.mail.twtelecom.net [pearu at www pearu]$ host 209.163.223.165 165.223.163.209.IN-ADDR.ARPA domain name pointer westermandesign.com [pearu at www pearu]$ host westermandesign.com westermandesign.com has address 66.9.82.135 westermandesign.com mail is handled (pri=0) by westermandesign.com [pearu at www pearu]$ host 66.9.82.135 Host not found, try again. Is scipy.org being exploited? Pearu From travis at scipy.org Thu Mar 7 09:06:57 2002 From: travis at scipy.org (Travis N. Vaught) Date: Thu, 7 Mar 2002 08:06:57 -0600 Subject: [SciPy-dev] what is going on with scipy.org IP? In-Reply-To: Message-ID: Prabhu pointed out this IP wierdness a few weeks ago. The T1 provider explained that there was not a problem with the DNS record, but possibly a vestigial setting in the global DNS. I do not understand this explanation--he said there were no problems with the behavior, since the domain mapping was right. I might have spoken to the 'wrong' person, though. Our new T1 (on site at Enthought) is due next Thursday. After it is up, we'll move to the new server, on a new IP. So (after some outages) we should be secure and stable at that time. Travis > -----Original Message----- > From: scipy-dev-admin at scipy.org [mailto:scipy-dev-admin at scipy.org]On > Behalf Of pearu at scipy.org > Sent: Thursday, March 07, 2002 3:57 AM > To: scipy-dev at scipy.org > Subject: [SciPy-dev] what is going on with scipy.org IP? 
> > > > I find the following weird: > > [pearu at www pearu]$ host scipy.org > scipy.org has address 209.163.223.165 > scipy.org mail is handled (pri=20) by mx2.mail.twtelecom.net > scipy.org mail is handled (pri=5) by scipy.org > scipy.org mail is handled (pri=20) by mx1.mail.twtelecom.net > > [pearu at www pearu]$ host 209.163.223.165 > 165.223.163.209.IN-ADDR.ARPA domain name pointer westermandesign.com > > [pearu at www pearu]$ host westermandesign.com > westermandesign.com has address 66.9.82.135 > westermandesign.com mail is handled (pri=0) by westermandesign.com > > [pearu at www pearu]$ host 66.9.82.135 > Host not found, try again. > > Is scipy.org being exploited? > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From eric at scipy.org Thu Mar 7 12:06:39 2002 From: eric at scipy.org (eric) Date: Thu, 7 Mar 2002 12:06:39 -0500 Subject: [SciPy-dev] out for a week Message-ID: <02ad01c1c5fa$726ee290$6b01a8c0@ericlaptop> Hey group, I'll be out for the next week or so returning Monday the 18th. Hopefully I'll be able to check email periodically, but that remains to be seen. Talk to you in a week! eric -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 From oliphant at ee.byu.edu Thu Mar 7 11:50:06 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 7 Mar 2002 11:50:06 -0500 (EST) Subject: [SciPy-dev] Re: f2py and linalg (version 1) In-Reply-To: Message-ID: O.K. So, now I see that inspite of the error message. ev(a) works if a is Not Contiguous ===== but ev(A) does not work if it is contiguous ==== This seems to have something to do with intent(inout) I know, we aren't supposed to use this anymore. So, is behavior broken if the variable is defined this way? -Travis Is this the same behavior on your system? From peterson at math.utwente.nl Thu Mar 7 13:52:42 2002 From: peterson at math.utwente.nl (Pearu Peterson) Date: Thu, 7 Mar 2002 19:52:42 +0100 (CET) Subject: [SciPy-dev] Re: f2py and linalg (version 1) In-Reply-To: Message-ID: On Thu, 7 Mar 2002, Travis Oliphant wrote: > > O.K. > > So, now I see that inspite of the error message. > > ev(a) works if a is Not Contiguous > ===== > but > > ev(A) does not work if it is contiguous > ==== > > This seems to have something to do with > > intent(inout) > > I know, we aren't supposed to use this anymore. So, is behavior broken if > the variable is defined this way? > > -Travis > > > Is this the same behavior on your system? Yes, the error messages from old interface+new f2py are totally misleading. Pearu From wagner.nils at vdi.de Thu Mar 7 16:58:05 2002 From: wagner.nils at vdi.de (My VDI Freemail) Date: Thu, 07 Mar 2002 22:58:05 +0100 Subject: [SciPy-dev] Fwd: [SciPy-user] Re: Problems with expm, expm2 Message-ID: <200203072152.g27Lq2627869@scipy.org> ------ Forwarded message ------- From: My VDI Freemail Reply-To: scipy-user at scipy.orgTo: "Travis Oliphant" Cc: scipy-user at scipy.net Date: Thu, 07 Mar 2002 20:56:18 +0100 ------------------- > > Dear Travis, > > > > I have tried several standard test problems (see references) . > > Unfortunately, I never get the correct result for expm and expm2. > > This is most likely a problem with the linear algebra interface still. > > Most likely the solve is not working (could you test that on your system)? > > I have tested these routines and am pretty sure that they work.. > > I used the routines from Matrix Computations. 
> > > -Travis > > Hi Travis, I have used a symmetric test matrix. It works fine. Therefore it seems to be a problem of non-symmetric matrices. What do you think ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: testerg.dat Type: application/octet-stream Size: 1840 bytes Desc: not available URL: From travis at scipy.org Sun Mar 10 15:15:04 2002 From: travis at scipy.org (Travis N. Vaught) Date: Sun, 10 Mar 2002 14:15:04 -0600 Subject: [SciPy-dev] FW: building SciPy from CVS Message-ID: Carl, This is of general interest to the dev list, so (hopefully with your permission) I've forwarded it along. Fernando subscribes to the dev-list as well as others who may have some help. Unfortunately, I am not familiar with libcdf--hopefully someone else will be. TV -----Original Message----- From: Carl... Sent: Sunday, March 10, 2002 10:25 AM To: travis at enthought.com Subject: building SciPy from CVS Travis, I tried to build SciPy on Mandrake 8.1 with Python2.2 from the CVS repository. However, I get an error message saying that it cannot find libcdf. Do you know where I might find libcdf? I can find a netcdf in my Mandrake distribution; is this the right thing? It might be useful if fperez could update his CVS build instruction page to include references to CDF. http://www.scipy.org/Members/fperez/PerezCVSBuild.htm I would send this message to fperez, except that I couldn't find his email address, and the page send to send comments to you ;-) Thanks, Carl From jhauser at ifm.uni-kiel.de Sun Mar 10 16:03:43 2002 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Sun, 10 Mar 2002 22:03:43 +0100 Subject: [SciPy-dev] FW: building SciPy from CVS In-Reply-To: References: Message-ID: <20020310220343.0632e679.jhauser@ifm.uni-kiel.de> > Travis, > > I tried to build SciPy on Mandrake 8.1 with Python2.2 from > the CVS repository. However, I get an error message saying > that it cannot find libcdf. Do you know where I might find > libcdf? I can find a netcdf in my Mandrake distribution; is > this the right thing? It might be useful if fperez could update > his CVS build instruction page to include references to CDF. > It's probably not the netcdf library. But I also have no libcdf installed. __Janko From jhauser at ifm.uni-kiel.de Sun Mar 10 16:11:37 2002 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Sun, 10 Mar 2002 22:11:37 +0100 Subject: [SciPy-dev] FW: building SciPy from CVS In-Reply-To: References: Message-ID: <20020310221137.2ff8b851.jhauser@ifm.uni-kiel.de> > > I tried to build SciPy on Mandrake 8.1 with Python2.2 from > the CVS repository. However, I get an error message saying > that it cannot find libcdf. Do you know where I might find > libcdf? Actually it should be build by the build process from the source in $SCIPY/special/cdflib. __Janko From fperez at pizero.colorado.edu Mon Mar 11 15:53:56 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Mon, 11 Mar 2002 13:53:56 -0700 (MST) Subject: [SciPy-dev] FW: building SciPy from CVS In-Reply-To: Message-ID: On Sun, 10 Mar 2002, Travis N. Vaught wrote: > I tried to build SciPy on Mandrake 8.1 with Python2.2 from > the CVS repository. However, I get an error message saying > that it cannot find libcdf. Do you know where I might find > libcdf? I can find a netcdf in my Mandrake distribution; is > this the right thing? It might be useful if fperez could update > his CVS build instruction page to include references to CDF. 
> > http://www.scipy.org/Members/fperez/PerezCVSBuild.htm > > I would send this message to fperez, except that I couldn't > find his email address, and the page send to send comments > to you ;-) Sorry but I don't really have anything to say about this. Hopefully Pearu's comments were useful. I'll update the above mentioned page once the new scipy site comes up so I don't have to bug the enthought guys every time I need to edit something, I'm sure they're busy enough as it is. Cheers, f. From p.Collard at i-net.paiko.gr Wed Mar 13 13:35:36 2002 From: p.Collard at i-net.paiko.gr (p.Collard at i-net.paiko.gr) Date: Wed, 13 Mar 2002 15:35:36 -0300 (BRT) Subject: [SciPy-dev] Minimize your phone expenses Message-ID: <1016044590.0169378903@lemnos.geo.auth.gr> An HTML attachment was scrubbed... URL: From Brenda8367j31 at msn.com Thu Mar 14 13:52:58 2002 From: Brenda8367j31 at msn.com (Brenda8367j31 at msn.com) Date: Thu, 14 Mar 2002 12:52:58 -0600 Subject: [SciPy-dev] Content Management 7331xxQI7-57l11 Message-ID: <007e17a72d4a$7873e1b7$1bd62ee7@ugwbuf> An HTML attachment was scrubbed... URL: From nwagner at mecha.uni-stuttgart.de Thu Mar 14 13:18:53 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Mar 2002 19:18:53 +0100 Subject: [SciPy-dev] [Fwd: [SciPy-user] Availability of linalg2] Message-ID: <3C90E98D.FEB8763E@mecha.uni-stuttgart.de> -------------- next part -------------- An embedded message was scrubbed... From: Nils Wagner Subject: [SciPy-user] Availability of linalg2 Date: Thu, 14 Mar 2002 19:02:38 +0100 Size: 2533 URL: From z1z2 at sympatico.ca Sun Mar 17 15:25:48 2002 From: z1z2 at sympatico.ca (MG Publishing) Date: Sun, 17 Mar 2002 15:25:48 -0500 Subject: [SciPy-dev] Available; Subsidies, Grants, Loans, Financing. Message-ID: <200203172020.g2HKKp631653@scipy.org> MG PUBLISHING 916 Ste-Adele Blvd. Ste-Adele, Qc Canada J8B 2N2 PRESS RELEASE CANADIAN SUBSIDY DIRECTORY 2002 Legal deposit-National Library of Canada : ISBN 2-922-870-02-02 M.G. Publishing is offering to the public a revised edition of the Canadian Subsidy Directory, a guide containing 2867 direct and indirect financial subsidies, grants and loans offered by government departments and agencies, foundations, associations and organizations. In this new 2002 edition all programs are well described. The Canadian Subsidy Directory is the most comprehensive tool to start up a business, improve existent activities, set up a business plan, or obtain assistance from experts in fields such as: Industry, transport, agriculture, communications, municipal infrastructure, education, import-export, labor, construction and renovation, the service sector, hi-tech industries, research and development, joint ventures, arts, cinema, theatre, music and recording industry, the self employed, contests, and new talents. Assistance from and for foundations and associations, guidance to prepare a business plan, market surveys, computers, and much more! Retail price.....$ 49.95 plus taxes, shipping and handling. To obtain a copy of the Canadian Subsidy Directory please contact one the following resellers. Canadian Business Ressource Center: (250) 381-4822 Fureteur bookstore: (450) 465-5597, fax: (credit cards only) (450) 465-8144 From a.schmolck at gmx.net Mon Mar 18 09:47:59 2002 From: a.schmolck at gmx.net (A.Schmolck) Date: 18 Mar 2002 14:47:59 +0000 Subject: [SciPy-dev] Re: [Numpy-discussion] adding a .M attribute to the array. 
In-Reply-To: References: Message-ID: [Sorry about the crossposting, but it also seemed relevant to both scipy and numpy...] Huaiyu Zhu writes: [...] > I'd like to hear any suggestions on how to proceed. My own favorite would > be to have separate array and matrix classes with easy but explicit > conversions between them. Without conversions, arrays and matrices would > be completely independent semantically. In other words, I'm mostly in > favor of Konrad Hinsen's position, with the addition of using ~ operators > for elementwise operations for matrix-like classes. The PEP itself also > discussed ideas of extending the meaning of ~ to other parts of Python for > elementwise operations on aggregate types, but my impressions of people's > impressions is that it has a better chance without that part. > Well, from my impression of the previous discussions, the situation (both for numpy and scipy) seems to boil down to me as follows: Either `array` currently is too much of a matrix, or too little: Linear algebra functionality is currently exclusively provided by `array` and libraries that operate on and return `array`s, but the computational and notational efficiency leaves to be desired (compared to e.g. Matlab) in some areas, importantly matrix multiplications (which are up to 40 times slower) and really awkward to write (and much more importantly, decipher afterwards). So I think what one should really do is discuss the advantages and disadvantages of the two possible ways out of this situation, namely providing: 1) a new (efficient) `matrix` class/type (and appropriate libraries that operate on it) [The Matrix class that comes with Numeric is more some syntactic sugar wrapper -- AFAIK it's not use as a return type or argument in any of the functions that only make sense for arrays that are matrices]. 2) the additional functionality that is needed for linear algebra in `array` and the libraries that operate on it. (see [1] below for what I feel is currently missing and could be done either in way 1) or 2)) I think it might be helpful to investigate these "macro"-issues before one gets bogged down in discussions about operators (I admit these are not entirely unrelated -- given that one of the reasons for the creation of a Matrix type would be that '*' is already taken in 'array's and there is no way to add a new operator without modifying the python core -- just for the record and ignoring my own advice, _iff_ there is a chance of getting '~*' into the language, I'd rather have '*' do the same for both matrices and arrays). My impression is that the best path also very much depends on the what the feature aspirations and divisions of labor of numpy/numarray and scipy are going to be. For example, scipy is really aimed at scientific users, which need performance, and are willing to buy it with inconvenience (like the necessity to install other libraries on one's machine, most prominently atlas and blas). The `array` type and the functions in `Numeric`, on the other hand, potentially target a much wider community -- the efficient storage and indexing facilities (rich comparisons, strides, the take, choose etc. functions) make it highly useful for code that is not necessarily numeric, (as an example I'm currently using it for feature selection algorithms, without doing any numerical computations on the arrays). 
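[As a small illustration of the indexing facilities mentioned above (take, choose, rich comparisons and friends), the following self-contained sketch uses only functions that exist in Numeric; the data values are invented purely for the example, in the spirit of the feature-selection use described above.]

    from Numeric import array, greater, compress, take

    scores = array([0.1, 0.9, 0.4, 0.7])     # per-feature scores (made-up data)
    labels = array([10, 11, 12, 13])         # ids associated with the features

    mask = greater(scores, 0.5)              # elementwise comparison -> array of 0s and 1s
    selected = compress(mask, labels)        # keep only the ids where mask is nonzero
    reordered = take(scores, [3, 1, 0])      # gather elements by an explicit index list

    print(selected)                          # [11 13]
    print(reordered)                         # roughly [ 0.7  0.9  0.1]

Nothing numerical beyond the comparison is involved, which is the point being made above.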
So maybe (a subset of) numpy should make it into the python core (or an as yet `non-existent sumo-distribution`) [BTW, I also wonder whether the python-core array module could be superseded/merged with numpy's `array`? One potential show stopper seems to be that it is e.g. `pop`able]. In such a scenario, where numpy remains relatively general (and might even aim at incorporation into the core), it would be a no-no to bloat it with too much code aimed at improving efficiency (calling blas when possible, sparse storage etc.). On the other hand people who want to do serious numerical work will need this -- and the scipy community already requires atlas etc. and targets a more specialized audience. Under this consideration it might be an attractive solution do incorporate good matrix functionality (and possibly other improvements for hard core number crunchers) in scipy only (or at least limit the efficient _implementation_ of matrices to scipy, providing at only a pure python class or so in numpy). I'm not suggesting, BTW, to necessarily put all of [1] into a single class -- it seems sensible to have a couple of subclasses (for masked, sparse representations etc.) to `matrix` (maybe the parent-class should even be a relatively na?ve Numpy implementation, with the good stuff as subclasses in scipy...). In any event, creating a new matrix class/type would also mean that matrix functionality in libraries should use and return this class (existing libraries should presumably largely still operate on arrays for backwards-compatibily (or both -- after a typecheck), and some matrix operations are so useful that it makes sense to provide array versions for them (e.g. dot) -- but on the whole it makes little sense to have a computationally and space efficient matrix type if one has to cast it around all the time). A `matrix` class is more specialized than an `array` and since the operations one will often do on it are consequently more limited, I think it should provide most important functionality as methods (rather than as external functions; see [2] for a list of suggestions). Approach 1) on the other hand would have the advantage that the current interface would stay pretty much the same, and as long as 2D arrays can just be regarded as matrices, there is no absolutely compelling reason not to stuff everything into array (at least the scipy-version thereof). Another important question to ask before deciding what to change how and if, is obviously how many people in the scipy/numpy community do lots of linear algebra (and how many deflectors from matlab etc. one could hope to win if one spiced things up a bit for them...), but I would suppose there must be quite a few (but I'm certainly biased ;). Unfortunately, I've really got to do some work again now, but before I return to number-crunching I'd like to say that I'd be happy to help with the implementation of a matrix class/type in python (I guess a .py-prototype would be helpful to start with, but ultimately a (subclassable) C(++)-type will be called for, at least in scipy). 
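[To make option 1) above concrete, here is a toy .py-prototype of a separate matrix class: '*' spells the matrix product and .T the transpose, which is one of the notations floated in this thread. It is not the Matrix.py shipped with Numeric and not a proposed API; every name below is illustrative only.]

    from Numeric import array, dot, transpose

    class Matrix:
        """Toy 2-D matrix wrapper; illustrative only."""

        def __init__(self, data):
            self.a = array(data)
            assert len(self.a.shape) == 2, "matrices are 2-D"

        def __mul__(self, other):
            # '*' as true matrix multiplication, one of the options discussed
            if isinstance(other, Matrix):
                return Matrix(dot(self.a, other.a))
            return Matrix(self.a * other)        # scalar case: elementwise scaling

        def __getattr__(self, name):
            if name == 'T':                      # A.T as the transpose
                return Matrix(transpose(self.a))
            raise AttributeError(name)

        def __repr__(self):
            return 'Matrix(%s)' % repr(self.a)

    A = Matrix([[1., 2.], [3., 4.]])
    B = Matrix([[5., 6.], [7., 8.]])
    print(A * B.T)                               # dot(A, transpose(B)), written compactly

A real implementation would of course need __rmul__, __add__, proper shape checks and so on; the sketch is only meant to show how much of the notational problem a thin wrapper already removes.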
--alex Footnotes: [1] The required improvements for serious linear algebra seem to be: - optional use (atlas) blas routines for real and complex matrix, matrix `dot`s if atlas is available on the build machine (see http://www.scipy.org/Members/aschmolck for a patch -- it produces speedups of more than factor 40 for big matrices; I'd be willing to provide an equivalent patch for the scipy distribution if there is interest) - making sure that no unnecessary copies are created (e.g. when transposing a matrix to use it in `dot` -- AFAIK although the transpose itself only creates a new view, using it for dot results in a copy (but I might be wrong here )) - allowing more space efficient storage forms for special cases (e.g. sparse matrices, upper triangular etc.). IO libraries that can save and load such representations are also needed (methods and static methods might be a good choice to keep things transparent to the user). - providing a convinient and above all legible notation for common matrix operations (better than `dot(tranpose(A),B)` etc. -- possibilities include A * B.T or A ~* B.T or A * B ** T (by overloding __rpow__ as suggested in a previous post)) - (in the case of a new `matrix` class): indexing functionality (e.g. `where`, `choose` etc. should be available without having to cast, e.g. for the common case that I want to set everything under a certain threshold to 0., I don't want to have to cast my sparse matrix to an array etc.) [2] What should a matrix class contain? - a dot operator (certainly eventually, but if there is a good chance to get ~* into python, maybe '*' should remain unimplemented till this can be decided) - most or all of what scipy's linalg module does - possibly IO, (reading as a static method) - indexing (the like of take, choose etc. (some should maybe be functions or static methods)) -- Alexander Schmolck Postgraduate Research Student Department of Computer Science University of Exeter A.Schmolck at gmx.net http://www.dcs.ex.ac.uk/people/aschmolc/ From hinsen at cnrs-orleans.fr Mon Mar 18 10:57:04 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: 18 Mar 2002 16:57:04 +0100 Subject: [SciPy-dev] Re: [Numpy-discussion] adding a .M attribute to the array. In-Reply-To: References: Message-ID: a.schmolck at gmx.net (A.Schmolck) writes: > Linear algebra functionality is currently exclusively provided by `array` and > libraries that operate on and return `array`s, but the computational and > notational efficiency leaves to be desired (compared to e.g. Matlab) in some > areas, importantly matrix multiplications (which are up to 40 times slower) > and really awkward to write (and much more importantly, decipher afterwards). Computational and notational efficiency are rather well separated, fortunately. Both the current dot function and an hypothetical matrix multiply operator could be implemented in straightforward C code or using a high-performance library such as Atlas. In fact, this should even be an installation choice in my opinion, as installing Atlas isn't trivial on all machines (e.g. with some gcc versions), and I consider it important for fundamental libraries that they work everywhere easily, even if not optimally. > My impression is that the best path also very much depends on the what the > feature aspirations and divisions of labor of numpy/numarray and scipy are > going to be. 
For example, scipy is really aimed at scientific users, which > need performance, and are willing to buy it with inconvenience (like the I see the main difference in distribution philosophy. NumPy is an add-on package to Python, which is in turn used by other add-on packages in a modular way. SciPy is rather a monolithic super-distribution for scientific users. Personally I strongly favour the modular package approach, and in fact I haven't installed SciPy on my system for that reason, although I would be interested in some of its components. > algorithms, without doing any numerical computations on the arrays). So maybe > (a subset of) numpy should make it into the python core (or an as yet This has been discussed already, and it might well happen one day, but not with the current NumPy implementation. Numarray looks like a much better candidate, but isn't ready yet. > In such a scenario, where numpy remains relatively general (and > might even aim at incorporation into the core), it would be a no-no > to bloat it with too much code aimed at improving efficiency > (calling blas when possible, sparse storage etc.). On the other hand The same approach as for XML could be used: a slim-line version in the standard distribution that could be replaced by a high-efficiency extended version for those who care. > attractive solution do incorporate good matrix functionality (and > possibly other improvements for hard core number crunchers) in scipy > only (or at least limit the efficient _implementation_ of matrices > to scipy, providing at only a pure python class or so in numpy). I'm I'd love to have efficient matrices without having to install the whole SciPy package! Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From pearu at cens.ioc.ee Mon Mar 18 15:31:51 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 18 Mar 2002 22:31:51 +0200 (EET) Subject: [SciPy-dev] Re: [Numpy-discussion] adding a .M attribute to the array. In-Reply-To: Message-ID: Off topic warning On 18 Mar 2002, Konrad Hinsen wrote: > I see the main difference in distribution philosophy. NumPy is an > add-on package to Python, which is in turn used by other add-on > packages in a modular way. SciPy is rather a monolithic > super-distribution for scientific users. > > Personally I strongly favour the modular package approach, and in fact > I haven't installed SciPy on my system for that reason, although I > would be interested in some of its components. Me too. In what I have contributed to SciPy, I have tried to follow this modularity approach. Modularity is also important property from the development point of view: it minimizes possible interference with other unreleated modules and their bugs. What I am trying to say here is that SciPy can (and should?, +1 from me) provide its components separately, though, currently only few of its components seem to be available in that way without some changes. Pearu From a.schmolck at gmx.net Mon Mar 18 17:25:23 2002 From: a.schmolck at gmx.net (A.Schmolck) Date: 18 Mar 2002 22:25:23 +0000 Subject: [SciPy-dev] Re: [Numpy-discussion] adding a .M attribute to the array. 
In-Reply-To: References: Message-ID: Konrad Hinsen writes: > Computational and notational efficiency are rather well separated, > fortunately. Both the current dot function and an hypothetical matrix Yes, the only thing they have in common is that both are currently unsatisfactory (for matrix operations) in numpy, at least for my needs. Although I've solved my most pressing performance problems by patching Numeric [1], I'm obviously interested in a more official solution (i.e. one that is maintained by others :) [...] [order changed by me] > a.schmolck at gmx.net (A.Schmolck) writes: > > My impression is that the best path also very much depends on the what the > > feature aspirations and divisions of labor of numpy/numarray and scipy are ^^^^^^^ Darn, I made a confusing mistake -- this should read _future_. > > going to be. For example, scipy is really aimed at scientific users, which > > need performance, and are willing to buy it with inconvenience (like the > > I see the main difference in distribution philosophy. NumPy is an > add-on package to Python, which is in turn used by other add-on > packages in a modular way. SciPy is rather a monolithic > super-distribution for scientific users. > > Personally I strongly favour the modular package approach, and in fact > I haven't installed SciPy on my system for that reason, although I > would be interested in some of its components. [...] > The same approach as for XML could be used: a slim-line version in the > standard distribution that could be replaced by a high-efficiency > extended version for those who care. [...] I personally agree with all your above points -- if you have a look at our "dotblas"-patch mentioned earlier (see [1]), you will find that it aims to do provide that -- have dot run anywhere without a hassle but run (much) faster if the user is willing to install atlas. My main concern was that the argument should shift away a bit from syntactic and implementation details to what audiences and what needs numpy/numarray and are supposed to address and, in this light, how to best strike the balance between convinience for users and maitainers, speed and bloat, generality and efficiency etc. As an example, adding the dotblas patch [1] to Numeric is, I think more convinient for the users (granting a few assumptions (like that it actually works :) for the sake of the argument) -- it gives users that have atlas better-performance and those who don't won't (or at least shouldn't) notice. It is however inconvinient for the maintainers. Whether one should bother including it in this or some other way depends, among the obvious question of whether there is a better way to achieve what it does for both groups (like creating a dedicated Matrix class), also on what numpy is really supposed to achieve. I'm not entirely clear on that. For example I don't know how many numpy users deeply care about their matrix multiplications for big (1000x1000) matrices being 40 times faster. The monolithic approach is not entirely without its charms (remember python's "batteries included" jinggle)? Apart from convinience factors it also has the not unconsiderable advantage that people use _one_ standard module for a certain thing -- rather than 20 different solutions. This certainly helps to improve code quality. 
Not least because someone goes through the trouble of deciding what merrit's inclusion in the "Big Thing", possibly urging changes but at least almost certainly taking more time for evalutation than an indivdual programmer who just wants to get a certain job done. It also makes life easier for module writers -- they can rely on certain stuff being around (and don't have to reinvent the wheel, another potential improvement to code quality). As such it makes live easier for maintainers, as does the scipy commandment that you have to install atlas/lapack, full-stop (and if it doesn't run on your machine -- well at least it works fast for some people and that might well be better than working slow for everyone in this context). So, I think what's good really depends on what you're aiming at, that's why I'd like to know what users and developers think about these matters. My points regarding scipy and numpy/numarray were just one attempt at interpreting what these respective libraries try to/should/could attempt to be or become. Now, not being a developer for either of them (I've only submitted a few minor patches to scipy), I'm not in a particular good position to venture such interpretations, but I hoped that it would provoke other and more knowledgeable people to share their opinions and insights on this matter (as indeed you did). > I'd love to have efficient matrices without having to install the > whole SciPy package! Welcome to the linear algebra lobby group ;) yep, that would be nice but my impression was that the scipy folks are currently more concerned about performance issues than the numpy/numarray folks and I could live with either package providing what I want. Ideally , I'd like to see a slim core numarray, without any frills (and more streamlined to behave like standard python containers (e.g. indexing and type/casts behavior)) for the python core, something more enabled and efficient for numerics (including matrices!) as a seperate package (like the XML example you quote). And then maybe a bigger pre-bundled collection of (ideally rather modular) numerical libraries for really hard-core scientific users (maybe in the spirit of xemacs-packages and sumo-tar-balls -- no bloat if you don't need it, plenty of features in an instant if you do). Anyway, is there at least general agreement that there should be some new and wonderful matrix class (plus supporting libraries) somewhere (rather than souping up array)? 
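[The "slim-line version in the standard distribution, replaced by a high-efficiency extended version" idea quoted above is easy to express in code. In the sketch below the module name _dotblas is hypothetical -- it stands for whatever the dotblas patch mentioned in this thread would install -- and only Numeric.dot itself is an existing API.]

    import Numeric

    try:
        from _dotblas import dot      # ATLAS/BLAS-backed matrix product, if installed
    except ImportError:
        dot = Numeric.dot             # portable fallback that ships with Numeric

    a = Numeric.ones((200, 200), 'd')
    b = Numeric.ones((200, 200), 'd')
    c = dot(a, b)                     # same call either way; only the speed differs

Callers never see which implementation they got, which is exactly the XML-style arrangement referred to above.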
alex Footnotes: [1] patch for faster dot product in Numeric http://www.scipy.org/Members/aschmolck -- Alexander Schmolck Postgraduate Research Student Department of Computer Science University of Exeter A.Schmolck at gmx.net http://www.dcs.ex.ac.uk/people/aschmolc/ From jochen at unc.edu Mon Mar 18 19:29:30 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 18 Mar 2002 19:29:30 -0500 Subject: [SciPy-dev] cephes build problem Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Latest cvs, RedHat-7.0, gcc-3.0.4 gives me the following error: ,---- | building 'scipy.special.cephes' extension | error: file '/home/jochen/source/numeric/scipy/special/specfun_wrappers.c' does not exist `---- Greetings, Jochen - -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6-cygwin-fcn-1 (Cygwin) Comment: Processed by Mailcrypt and GnuPG iEYEARECAAYFAjyWhmsACgkQiJ/aUUS8zY7SCQCfdb8jjBhtISHcqjqatHRUCRJr i40AnjA0PSjGTtcI/FaqzlIZaOCKT6Zl =9dmZ -----END PGP SIGNATURE----- From hinsen at cnrs-orleans.fr Tue Mar 19 06:05:50 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: 19 Mar 2002 12:05:50 +0100 Subject: [SciPy-dev] Re: [Numpy-discussion] adding a .M attribute to the array. In-Reply-To: References: Message-ID: a.schmolck at gmx.net (A.Schmolck) writes: > > > feature aspirations and divisions of labor of numpy/numarray and scipy are > ^^^^^^^ > Darn, I made a confusing mistake -- this should read _future_. Or perhaps __future__ ;-) > I personally agree with all your above points -- if you have a look at our > "dotblas"-patch mentioned earlier (see [1]), you will find that it aims to do And I didn't know even about this... > It is however inconvinient for the maintainers. Whether one should bother > including it in this or some other way depends, among the obvious question of There could be two teams, one maintaining a standard portable implementation, and another one taking care of optimization add-ons. >From the user's point of view, what matters most is a single entry-point for finding everything that is available. > The monolithic approach is not entirely without its charms (remember > python's "batteries included" jinggle)? Apart from convinience Sure, but... That's the standard library. Everybody has it, in identical form, and its consistency and portability is taken care off by the Python development team. There can be only *one* standard library that works like this. I see no problem either with providing a larger integrated distribution for specific user communities. But such distribution and packaging strategies should be distinct from development projects. If I can get a certain package only as part of a juge distribution that I can't or don't want to install, then that package is effectively lost for me. Worse, if one package comes with its personalized version of another package (SciPy with NumPy), then I end up having to worry about internal conflicts within my installation. On the other hand, package interdependencies are a big problem in the Open Source community at large, and I have personally been bitten more than once by an incompatible change in NumPy that broke my modules. But I don't see any other solution than better communication between development teams. Konrad. 
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From prabhu at aero.iitm.ernet.in Wed Mar 20 13:21:47 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Wed, 20 Mar 2002 23:51:47 +0530 Subject: [SciPy-dev] Re: [Numpy-discussion] adding a .M attribute to the array. In-Reply-To: References: Message-ID: <15512.54075.312312.299325@monster.linux.in> hi, I'm sorry I havent been following the discussion too closely and this post might be completely unrelated. >>>>> "AS" == A Schmolck writes: AS> Ideally , I'd like to see a slim core numarray, without any AS> frills (and more streamlined to behave like standard python AS> containers (e.g. indexing and type/casts behavior)) for the AS> python core, something more enabled and efficient for numerics AS> (including matrices!) as a seperate package (like the XML AS> example you quote). And then maybe a bigger pre-bundled AS> collection of (ideally rather modular) numerical libraries for AS> really hard-core scientific users (maybe in the spirit of AS> xemacs-packages and sumo-tar-balls -- no bloat if you don't AS> need it, plenty of features in an instant if you do). AS> Anyway, is there at least general agreement that there should AS> be some new and wonderful matrix class (plus supporting AS> libraries) somewhere (rather than souping up array)? Ideally, I'd like something that also has a reasonably easy to use interface from C/C++. The idea is that it should be easy (and natural) for someone to use the same library from C/C++ when performance was desired. This would be really nice and very useful. prabhu From benjamin.sauthier at snecma.fr Fri Mar 22 10:15:19 2002 From: benjamin.sauthier at snecma.fr (benjamin.sauthier at snecma.fr) Date: Fri, 22 Mar 2002 16:15:19 +0100 Subject: [SciPy-dev] compilation bug Message-ID: Hello, I compile Scipy 0.1 on an SGI OCTANE with OS 6.5 I run setup.py install 1)-================================ when compiling f77 files there is a crash there is a bug in the file build_flib.py the "-03" option for the f77 compiler is incorrect it is in fact -O3. 
2)======================= compiling scipy.signal the compiler crash here is a part of the log file: building 'mach' library ar -cur build/temp.irix64-6.5-2.0/libmach.a build/temp.irix64-6.5-2.0/d1mach.o build/temp.irix64-6.5-2.0/i1mach.o build/temp.irix64-6.5-2.0/r1mach.o build/temp.irix64-6.5-2.0/xerror.o running build_ext skipping 'scipy.cluster._vq' extension (up-to-date) skipping 'scipy.io.numpyio' extension (up-to-date) skipping 'scipy.signal.sigtools' extension (up-to-date) building 'scipy.signal.spline' extension gcc -g -O2 -Wall -Wstrict-prototypes -shared -INumerical/Include -I/appl/ykl/aerothermique_externe/Applicatifs/Developpement/s086781/include/python2.0 -c signal/splinemodule.c -o build/temp.irix64-6.5-2.0/signal/splinemodule.o signal/splinemodule.c:20: warning: function declaration isn't a prototype signal/splinemodule.c: In function `cspline2d': signal/splinemodule.c:77: warning: implicit declaration of function `S_cubic_spline2D' signal/splinemodule.c:81: warning: implicit declaration of function `D_cubic_spline2D' signal/splinemodule.c:56: warning: `retval' might be used uninitialized in this function signal/splinemodule.c: In function `qspline2d': signal/splinemodule.c:137: warning: implicit declaration of function `S_quadratic_spline2D' signal/splinemodule.c:141: warning: implicit declaration of function `D_quadratic_spline2D' signal/splinemodule.c:114: warning: `retval' might be used uninitialized in this function signal/splinemodule.c: In function `FIRsepsym2d': signal/splinemodule.c:196: warning: implicit declaration of function `S_separable_2Dconvolve_mirror' signal/splinemodule.c:204: warning: implicit declaration of function `D_separable_2Dconvolve_mirror' signal/splinemodule.c:213: warning: implicit declaration of function `C_separable_2Dconvolve_mirror' signal/splinemodule.c:221: warning: implicit declaration of function `Z_separable_2Dconvolve_mirror' signal/splinemodule.c: In function `IIRsymorder1': signal/splinemodule.c:308: warning: implicit declaration of function `S_IIR_forback1' signal/splinemodule.c:319: warning: implicit declaration of function `D_IIR_forback1' signal/splinemodule.c:330: warning: implicit declaration of function `C_IIR_forback1' signal/splinemodule.c:332: Unable to access real part of complex value in a hard register on this target error: command 'gcc' failed with exit status 1 3)=========================== when all is compiled (I got rid of signal and fft, maybe it's dumb) if from pythhon prompt I want to import fastumath (Numeric 20.1.1) import fastumath ImportError: 4025998:python: rld: Fatal Error: unresolvable symbol in /usr/lib32/libfortran.so: __dshiftl4 If you see what is the problem ... I would appreciate some help. Thanks. Sincerely Ben France ... ... From pearu at cens.ioc.ee Fri Mar 22 10:47:50 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Mar 2002 17:47:50 +0200 (EET) Subject: [SciPy-dev] compilation bug In-Reply-To: Message-ID: On Fri, 22 Mar 2002 benjamin.sauthier at snecma.fr wrote: > Hello, > I compile Scipy 0.1 on an SGI OCTANE with OS 6.5 Thanks for the bug report. Have you tried the CVS version of SciPy-0.2? It fixes many bugs, some of them you just reported. I also believe that nobody will fix bugs in SciPy-0.1. Regards, Pearu From prabhu at aero.iitm.ernet.in Fri Mar 22 22:16:03 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sat, 23 Mar 2002 08:46:03 +0530 Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. 
Message-ID: <15515.62323.831176.215244@monster.linux.in> hi, First, the good news. A while back I reported a bug with severe bug with Python + f2py under Debian Linux that Pearu Peterson investigated. He filed a bug with the Python folks and found that the bug was in the Debian packages. He then filed a bug with the Debian developers. I just noticed yesterday that Woody has updated python packages that fix this problem. Here is the report. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=135461 The f2py tests and some of my modules all work for me (they used to segfault). I am having some difficulties with the new build process. I'm using the atlas libs from Debian GNU/Linux (woody). When I try to do a python setup.py build I get an error saying atlas libs cant be found. They are very much installed. I looked at the code and it seems that the code looks for liblapack, libf77blas, libcblas, libatlas all in either /usr/lib/atlas or /usr/local/lib/atlas (or a user specified directory). Under debian these libraries all exist but are not in /usr/lib/atlas. The cblas, f77blas and atlas libs are in /usr/lib the others in /usr/lib/atlas. This fools the system_info.py script into thinking that there aren't any useful atlas libs and hence the build does not proceed. I worked around the problem by creating links to these libraries in /usr/lib/atlas. After I built scipy the tests seem to run but I get the following error: In [1]: import scipy /usr/local/lib/python2.1/site-packages/scipy/integrate/vode.so: undefined symbol: f2py_report_on_exit I dont know why this happens. When I run the tests I get 4 failures. But it does run and does not segfault anymore. :) In [2]: scipy.test() [snip] ====================================================================== FAIL: check_default_cols (test_misc.test_mean) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 53, in check_default_cols assert_array_equal(val,desired) File "/usr/local/lib/python2.1/site-packages/scipy_test/scipy_test.py", line 301, in assert_array_equal assert alltrue(ravel(reduced)),\ AssertionError: Arrays are not equal: ====================================================================== FAIL: check_qnan (test_misc.test_isnan) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 112, in check_qnan assert(isnan(log(-1.)) == 1) AssertionError ====================================================================== ====================================================================== FAIL: check_qnan (test_misc.test_isfinite) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 132, in check_qnan assert(isfinite(log(-1.)) == 0) AssertionError ====================================================================== FAIL: check_qnan (test_misc.test_isinf) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 157, in check_qnan assert(isnan(log(-1.)) == 1) AssertionError ---------------------------------------------------------------------- Ran 262 tests in 3.281s FAILED (failures=4) Thanks, prabhu From eric at scipy.org Sat Mar 23 
00:47:08 2002 From: eric at scipy.org (eric) Date: Sat, 23 Mar 2002 00:47:08 -0500 Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. References: <15515.62323.831176.215244@monster.linux.in> Message-ID: <020501c1d22e$2be04090$777ba8c0@ericlaptop> Hey Prabhu, [snip stuff about fixing Debian Python] Thanks Prabhu for riding herd on this one. > > After I built scipy the tests seem to run but I get the following > error: > > In [1]: import scipy > /usr/local/lib/python2.1/site-packages/scipy/integrate/vode.so: undefined symbol: f2py_report_on_exit I just got this when building on RH 7.1 today also. You can get rid of it by re-compiling with F2PY_REPORT_ATEXIT_DISABLE defined. I got rid of it by edit my f2py2e/src/fortranobject.h header to have disable as the default. Pearu, this seems to cause problems on a lot of platforms. Any chance you can change the reporting stuff to default to off in f2py? > > I dont know why this happens. When I run the tests I get 4 failures. > But it does run and does not segfault anymore. :) > > In [2]: scipy.test() > [snip] > ====================================================================== > FAIL: check_default_cols (test_misc.test_mean) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 53, in check_default_cols > assert_array_equal(val,desired) > File "/usr/local/lib/python2.1/site-packages/scipy_test/scipy_test.py", line 301, in assert_array_equal > assert alltrue(ravel(reduced)),\ > AssertionError: > Arrays are not equal: > > ====================================================================== > FAIL: check_qnan (test_misc.test_isnan) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 112, in check_qnan > assert(isnan(log(-1.)) == 1) > AssertionError > ====================================================================== > > ====================================================================== > FAIL: check_qnan (test_misc.test_isfinite) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 132, in check_qnan > assert(isfinite(log(-1.)) == 0) > AssertionError > ====================================================================== > FAIL: check_qnan (test_misc.test_isinf) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/lib/python2.1/site-packages/scipy/tests/test_misc.py", line 157, in check_qnan > assert(isnan(log(-1.)) == 1) > AssertionError > ---------------------------------------------------------------------- > Ran 262 tests in 3.281s > > FAILED (failures=4) > I get the exact same errors on RH7.1. The NaN stuff is still flaky. We get it working on one platform and it breaks on another. Not sure about the first failure -- it looks like that one should pass. eric From pearu at cens.ioc.ee Sat Mar 23 02:31:01 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Mar 2002 09:31:01 +0200 (EET) Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. 
In-Reply-To: <020501c1d22e$2be04090$777ba8c0@ericlaptop> Message-ID: On Sat, 23 Mar 2002, eric wrote: > > After I built scipy the tests seem to run but I get the following > > error: > > > > In [1]: import scipy > > /usr/local/lib/python2.1/site-packages/scipy/integrate/vode.so: undefined > symbol: f2py_report_on_exit > > I just got this when building on RH 7.1 today also. You can get rid of it by > re-compiling with F2PY_REPORT_ATEXIT_DISABLE defined. > > I got rid of it by edit my f2py2e/src/fortranobject.h header to have disable as > the default. > > Pearu, this seems to cause problems on a lot of platforms. Any chance you can > change the reporting stuff to default to off in f2py? Yes, I was thinking the same thing. It was from the start a kind of debugging/tuning feature and I have now the information that I needed from that. So, I'll turn it off from the next f2py snapshot. Pearu From pearu at cens.ioc.ee Sat Mar 23 03:54:10 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Mar 2002 10:54:10 +0200 (EET) Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. In-Reply-To: <15515.62323.831176.215244@monster.linux.in> Message-ID: On Sat, 23 Mar 2002, Prabhu Ramachandran wrote: > I get an error saying atlas libs cant be found. They are very much > installed. I looked at the code and it seems that the code looks for > liblapack, libf77blas, libcblas, libatlas all in either /usr/lib/atlas > or /usr/local/lib/atlas (or a user specified directory). Under debian > these libraries all exist but are not in /usr/lib/atlas. The cblas, > f77blas and atlas libs are in /usr/lib the others in /usr/lib/atlas. > This fools the system_info.py script into thinking that there aren't > any useful atlas libs and hence the build does not proceed. I worked > around the problem by creating links to these libraries in > /usr/lib/atlas. Writing configure tool can be tricky. Different systems, different distributions of the same system, different users of the same distribution may install software in endless ways (different locations, having only or both system-wide and user-defined setups, etc). Making one user happy probably means making another unhappy etc. Anyway, I have added some additional hooks so that atlas libraries are discovered also on Debian Woody. Prabhu, can you undo your work around and test it with default woody setup? Just running system_info.py should be enough to see the effect. Regardless what I said above I think we should try to get scipy to build on most widely used distributions. Although I can predict some problems with linalg2 that implements wrappers to ATLAS routines that are available only in unstable ATLAS releases, and system distributors are sometimes very slow updating even stable releases... Regards, Pearu From prabhu at aero.iitm.ernet.in Sat Mar 23 05:17:31 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sat, 23 Mar 2002 15:47:31 +0530 Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. In-Reply-To: References: <15515.62323.831176.215244@monster.linux.in> Message-ID: <15516.22075.325571.204750@monster.linux.in> >>>>> "PP" == Pearu Peterson writes: PP> Writing configure tool can be tricky. Different systems, PP> different distributions of the same system, different users of PP> the same distribution may install software in endless ways PP> (different locations, having only or both system-wide and PP> user-defined setups, etc). Making one user happy probably PP> means making another unhappy etc. Absolutely! 
My sympathies for you. :) PP> Anyway, I have added some additional hooks so that atlas PP> libraries are discovered also on Debian Woody. Prabhu, can PP> you undo your work around and test it with default woody PP> setup? Just running system_info.py should be enough to see the PP> effect. Yes, it works great now. I just updated f2py and am rebuilding scipy after a setup.py clean. I saw that you've turned the atexit off by default now and that works fine with one of my test modules. I'm sure the rest of the build will go fine. Thanks! prabhu From prabhu at aero.iitm.ernet.in Sat Mar 23 05:19:37 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sat, 23 Mar 2002 15:49:37 +0530 Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. In-Reply-To: <020501c1d22e$2be04090$777ba8c0@ericlaptop> References: <15515.62323.831176.215244@monster.linux.in> <020501c1d22e$2be04090$777ba8c0@ericlaptop> Message-ID: <15516.22201.615091.711723@monster.linux.in> >>>>> "EJ" == eric writes: [scipy test failures ...] EJ> I get the exact same errors on RH7.1. The NaN stuff is still EJ> flaky. We get it working on one platform and it breaks on EJ> another. Not sure about the first failure -- it looks like EJ> that one should pass. I guess this must be another tricky issue. Anyway most of the stuff seems okay for now. thanks, prabhu From oliphant.travis at ieee.org Sun Mar 24 05:36:02 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Mar 2002 03:36:02 -0700 Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: References: Message-ID: > Hi Travis, > > On Sat, 23 Mar 2002, Travis Oliphant wrote: > > I'm trying to help you with linalg2 in my spare time, but I'm having a > > time > > That's great. I have been quite short with my spare time lately.. > > > getting what's there to work. I think linalg is important and it's not > > worth my time to try and fix the problems in linalg caused by the f2py > > changes. > > > > Currently I get an error on import. > > > > exceptions.ImportError: > > /usr/lib/python2.1/site-packages/scipy/linalg/clapack.so: undefined > > symbol: clapack_sgetri > > > > What version of ATLAS do you need? > > I think I have the latest stable release. Do I need the unstable > > release? > > Yes, I am using ATLAS-3.3.14. It seems from scanning atlas mailing > lists that the ATLAS team is close to releasing a new stable release. > On the other hand, it is possible to separate wrappers to new routines in > ATLAS so that one can use linalg2 also with the current stable release. > So, the question is whether to require latest "unstable" (that looks > quite stable to me) ATLAS or to introduce hooks to linalg2 that later > become obsolete. I would prefer the first approch provided that new > stable ATLAS will be released soon. What do you think? > If we phase in linalg2, then this idea will probably work. > While you start testing with linalg2 and find unimplemented features, > could we make a hotlist of what are the most important functions that need > to be implemented ASAP, like the ones that are used in other > scipy modules. I could then releatively quickly implement the required > wrappers. For example, I'll try to wrap the svd algorithm today. Anything > else? There aren't that many functions implemented in linear algebra that require wrappings. Schur decomposition, QR decomposition, Cholesky, and SVD decomposition would pretty much do it. 
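[For reference, the four factorizations listed above are the ones a user-level linalg would eventually expose. The sketch below only shows how such wrappers are conventionally called; the function names, the return tuples, and the use of scipy.linalg2 are assumptions for illustration -- as the next message notes, only the SVD wrapper was committed at this point, and the others were still to be done.]

    from Numeric import array
    import scipy.linalg2

    a = array([[4., 2.],
               [2., 3.]])                        # small symmetric positive definite example

    u, s, vt = scipy.linalg2.svd(a)              # singular value decomposition
    q, r = scipy.linalg2.qr(a)                   # QR factorization
    l = scipy.linalg2.cholesky(a)                # Cholesky factor (valid since a is SPD)
    t, z = scipy.linalg2.schur(a)                # Schur decomposition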
From pearu at cens.ioc.ee Sun Mar 24 09:48:08 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 24 Mar 2002 16:48:08 +0200 (EET) Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: Message-ID: On Sun, 24 Mar 2002, Travis Oliphant wrote: > There aren't that many functions implemented in linear algebra that require > wrappings. Schur decomposition, QR decomposition, Cholesky, and SVD > decomposition would pretty much do it. I have implemented SVD now, it is in SciPy CVS. Travis, I noticed that you reorganized blas wrappers in linalg2. As a result many Fortran BLAS 2 routines are lost. Did you forget to commit some files where you put the signatures of these routines? Pearu From heiko at hhenkelmann.de Sun Mar 24 16:38:32 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Sun, 24 Mar 2002 22:38:32 +0100 Subject: [SciPy-dev] accuracy problem in filterdesign.py Message-ID: <000b01c1d37c$4a64a5c0$e5db9e3e@arrow> Hello, while writing a test driver for a minimum phase calculation routine I came across the following problem. It is causing asymmetriesin the output of freqz. In line 107 and 109 in filter_design.py the follwoing is happening: >>> N=512 >>> lastpoint=2*pi >>> w1=arange(0,lastpoint,lastpoint/N) >>> w2=arange(0,N)*(lastpoint/N) >>> lastpoint-w1[511]-w1[1] -6.3546390371982397e-014 >>> lastpoint-w2[511]-w2[1] 4.0245584642661925e-016 >>> w1[511] 6.2709134608765646 >>> w2[511] 6.2709134608765007 >>> w2[511]-w1[511] -6.3948846218409017e-014 >>> It appears that arange for floating point values is accumulating an error during summation. Do you think it would be worthwhile to tweak filter_design.py to work around this problem? Heiko From pearu at cens.ioc.ee Sun Mar 24 17:24:15 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Mar 2002 00:24:15 +0200 (EET) Subject: [SciPy-dev] accuracy problem in filterdesign.py In-Reply-To: <000b01c1d37c$4a64a5c0$e5db9e3e@arrow> Message-ID: Hi Heiko, On Sun, 24 Mar 2002, Heiko Henkelmann wrote: > while writing a test driver for a minimum phase calculation routine I came > across the following problem. It is causing asymmetriesin the output of > freqz. In line 107 and 109 in filter_design.py the follwoing is happening: > > > >>> N=512 > >>> lastpoint=2*pi > >>> w1=arange(0,lastpoint,lastpoint/N) > >>> w2=arange(0,N)*(lastpoint/N) > >>> lastpoint-w1[511]-w1[1] > -6.3546390371982397e-014 > >>> lastpoint-w2[511]-w2[1] > 4.0245584642661925e-016 > >>> w1[511] > 6.2709134608765646 > >>> w2[511] > 6.2709134608765007 > >>> w2[511]-w1[511] > -6.3948846218409017e-014 > >>> What Numeric are you using? Platform? Compiler? Because I don't see these rounding errors on Linux, Numeric 20.2.1, gcc-2.95.4: Python 2.1.2 (#1, Mar 16 2002, 00:56:55) [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 Type "copyright", "credits" or "license" for more information. >>> from Numeric import * >>> N=512 >>> lastpoint=2*pi >>> w1=arange(0,lastpoint,lastpoint/N) >>> w2=arange(0,N)*(lastpoint/N) >>> lastpoint-w1[511]-w1[1] 4.0245584642661925e-16 >>> lastpoint-w2[511]-w2[1] 4.0245584642661925e-16 >>> w1[511] 6.2709134608765007 >>> w2[511] 6.2709134608765007 >>> w2[511]-w1[511] 0.0 If I recall correctly, then gcc uses longer floats when doing internal float operations and may be that's why I don't have these errors. > It appears that arange for floating point values is accumulating an error > during summation. Do you think it would be worthwhile to tweak > filter_design.py to work around this problem? Could you bug report this to Numeric people? 
The corresponding code fragment of arange is value = start; for (i=0; i < length; i++) { dbl_descr->cast[type](&value, 0, rptr, 0, 1); value += step; rptr += elsize; } and suggest the following fix: for (i=0; i < length; ++i) { value = start + i * step; dbl_descr->cast[type](&value, 0, rptr, 0, 1); rptr += elsize; } Regards, Pearu From oliphant.travis at ieee.org Sun Mar 24 19:08:10 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Mar 2002 17:08:10 -0700 Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: References: Message-ID: On Sunday 24 March 2002 07:48 am, you wrote: > On Sun, 24 Mar 2002, Travis Oliphant wrote: > > There aren't that many functions implemented in linear algebra that > > require wrappings. Schur decomposition, QR decomposition, Cholesky, and > > SVD decomposition would pretty much do it. > > I have implemented SVD now, it is in SciPy CVS. > > Travis, I noticed that you reorganized blas wrappers in linalg2. As a > result many Fortran BLAS 2 routines are lost. Did you forget to commit > some files where you put the signatures of these routines? > Nothing should be lost. I just renamed the files (e.g. from fblas to fblas2) so that they could be installed along with linalg. Otherwise distutils writes linalg2/fblas over linalg/fblas or vice-versa depending on which one was installed first. It's possible I forgot to commit a file. For example, do the fblas.pyf (now fblas2.pyf) files need to be committed or are they automatically generated from the generic_fblas.pyf files? Can you be more specific about what you think is "lost." -Travis From oliphant.travis at ieee.org Sun Mar 24 19:58:53 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Mar 2002 17:58:53 -0700 Subject: [SciPy-dev] accuracy problem in filterdesign.py In-Reply-To: <000b01c1d37c$4a64a5c0$e5db9e3e@arrow> References: <000b01c1d37c$4a64a5c0$e5db9e3e@arrow> Message-ID: On Sunday 24 March 2002 02:38 pm, you wrote: > Hello, > > while writing a test driver for a minimum phase calculation routine I came > across the following problem. It is causing asymmetriesin the output of > > >>> N=512 > >>> lastpoint=2*pi > >>> w1=arange(0,lastpoint,lastpoint/N) > >>> w2=arange(0,N)*(lastpoint/N) > >>> lastpoint-w1[511]-w1[1] > > -6.3546390371982397e-014 > > >>> lastpoint-w2[511]-w2[1] > > 4.0245584642661925e-016 > > >>> w1[511] > > 6.2709134608765646 > > >>> w2[511] > > 6.2709134608765007 > > >>> w2[511]-w1[511] > > -6.3948846218409017e-014 > I just fixed this in Numeric. The arange in Numeric used to increment the value by the step amount. It now computes the value using value = start + i*step which fixes the problem. Thanks for pointing this out. From pearu at cens.ioc.ee Mon Mar 25 03:41:11 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Mar 2002 10:41:11 +0200 (EET) Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: Message-ID: On Sun, 24 Mar 2002, Travis Oliphant wrote: > Nothing should be lost. I just renamed the files (e.g. from fblas to fblas2) > so that they could be installed along with linalg. Otherwise distutils > writes linalg2/fblas over linalg/fblas or vice-versa depending on which one > was installed first. > > It's possible I forgot to commit a file. For example, do the fblas.pyf (now > fblas2.pyf) files need to be committed or are they automatically generated > from the generic_fblas.pyf files? > > Can you be more specific about what you think is "lost." 
See http://scipy.net/cgi-bin/viewcvs.cgi/scipy/linalg2/generic_fblas2.pyf.diff?r1=1.2&r2=1.3&sortby=date Previously there were files generic_fblas.pyf <- this includes what follows generic_fblas1.pyf <- signatures for blas 1 routines generic_fblas2.pyf <- signatures for blas 2 routines But then, I suspect, you did mv generic_fblas.pyf generic_fblas2.pyf so that you overwrite an existing file when renaming and now we have generic_fblas1.pyf generic_fblas2.pyf As a result one file is lost that contained wrappers to BLAS 2 routines gemv,hemv,symv,trmv. I think there was no need to rename all these files. Use the following def process_all(): # process the standard files. for name in ['fblas2','cblas2','clapack2','flapack2']: generate_interface(name,'generic_%s.pyf'%(name[:-1]),name+'.pyf') in interface_gen.py. Regards, Pearu From oliphant.travis at ieee.org Mon Mar 25 11:26:08 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 09:26:08 -0700 Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: References: Message-ID: > As a result one file is lost that contained wrappers to BLAS 2 routines > gemv,hemv,symv,trmv. > I recovered the old generic_fblas2.pyf file from CVS and placed this (along with the include directive in the new generic_fblas2.pyf file). Nothing is lost with CVS :-) Perhaps we could just use interface_gen2 to create these with a different number as Pearu suggested. I'm willing for that to happen. -Travis From chaugan at visi.com Mon Mar 25 10:48:56 2002 From: chaugan at visi.com (Carl Haugan) Date: 25 Mar 2002 10:48:56 -0500 Subject: [SciPy-dev] how to debug: undefined symbol: clapack_sgetrf Message-ID: <1017071337.2618.8.camel@localhost.localdomain> Hello, I'm trying to install scipy (loaded from the CVS tree) onto a Mandrake 8.2 system. I think I've followed the instructions at: http://www.scipy.org/Members/fperez/PerezCVSBuild.htm I get the following error: Python 2.2 (#1, Feb 24 2002, 16:21:58) [GCC 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)] on linux-i386 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.2/site-packages/scipy/__init__.py", line 78, in ? modules2all(__all__, _level1, globals()) File "/usr/lib/python2.2/site-packages/scipy/__init__.py", line 48, in modules2all exec("import %s" % name, gldict) File "", line 1, in ? File "/usr/lib/python2.2/site-packages/scipy/linalg/__init__.py", line 41, in ? scipy.modules2all(__all__, _modules, globals()) File "/usr/lib/python2.2/site-packages/scipy/__init__.py", line 48, in modules2all exec("import %s" % name, gldict) File "", line 1, in ? ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgetrf How should I go about figuring out what went wrong? Thanks, -- --------------------------------- Carl Haugan chaugan at visi.com --------------------------------- From oliphant.travis at ieee.org Mon Mar 25 12:11:03 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 10:11:03 -0700 Subject: [SciPy-dev] Linalg In-Reply-To: References: Message-ID: I'm not sure what happened, but now linalg does not work anymore (and neither does linalg2). Just yesterday, linalg was working, but today it's giving me errors about the first argument having to be contiguous. >>> linalg.inv(a) array_from_pyobj:intent(inout) array must be contiguous and with a proper type and size. Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.1/site-packages/scipy/linalg/linear_algebra.py", line 150, in inv results = getrf(a) flapack.error: failed in converting 1st argument `a' of flapack.dgetrf to C/Fortran array This sounds like an f2py issue, but I thought I had it working yesterday with the new f2py. linalg2 is giving me outputs that are completely wrong. >>> import scipy.linalg2 >>> scipy.linalg2.inv(a) array([[ 0. +nanj, 0. +nanj, 0. +nanj], [ 0. +nanj, 0. +nanj, 0. +nanj], [ 0. +nanj, 0. +nanj, 0. +nanj]],'F') Does this work for anybody else? From pearu at cens.ioc.ee Mon Mar 25 12:14:47 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Mar 2002 19:14:47 +0200 (EET) Subject: [SciPy-dev] how to debug: undefined symbol: clapack_sgetrf In-Reply-To: <1017071337.2618.8.camel@localhost.localdomain> Message-ID: On 25 Mar 2002, Carl Haugan wrote: > Hello, > I'm trying to install scipy (loaded from the CVS tree) onto a Mandrake > 8.2 system. I think I've followed the instructions at: > http://www.scipy.org/Members/fperez/PerezCVSBuild.htm > > I get the following error: > > exec("import %s" % name, gldict) > File "", line 1, in ? > ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/clapack.so: > undefined symbol: clapack_sgetrf > > How should I go about figuring out what went wrong? Seems like ATLAS is not discovered properly. What is the output of python scipy_distutils/system_info.py ? What ATLAS you are using? Note that the latest SciPy CVS tree requires also the latest unstable version of ATLAS (>3.3.14) because linalg2 is being merged into the SciPy installation tree right now and linalg2 wraps some routines that are not present in the current stable ATLAS release. The instructions in PerezCVSBuild.htm should be updated accordingly. Regards, Pearu From pearu at cens.ioc.ee Mon Mar 25 12:41:42 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Mar 2002 19:41:42 +0200 (EET) Subject: [SciPy-dev] Linalg In-Reply-To: Message-ID: On Mon, 25 Mar 2002, Travis Oliphant wrote: > >>> linalg.inv(a) > array_from_pyobj:intent(inout) array must be contiguous and with a proper > type and size. > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib/python2.1/site-packages/scipy/linalg/linear_algebra.py", > line 150, in inv > results = getrf(a) > flapack.error: failed in converting 1st argument `a' of flapack.dgetrf to > C/Fortran array > > This sounds like an f2py issue, but I thought I had it working yesterday with > the new f2py. Nothing changed in f2py lately, so I don't think that it is an f2py issue. > linalg2 is giving me outputs that are completely wrong. > > >>> import scipy.linalg2 > >>> scipy.linalg2.inv(a) > array([[ 0. +nanj, 0. +nanj, 0. +nanj], > [ 0. +nanj, 0. +nanj, 0. +nanj], > [ 0. +nanj, 0. +nanj, 0. +nanj]],'F') > > Does this work for anybody else? Yes, latest CVS works fine for me: Python 2.2 (#7, Jan 28 2002, 13:08:12) [GCC 3.0.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy.linalg >>> import scipy.linalg2 >>> scipy.linalg.inv([[1,2],[3,4]]) array([[-2. , 1. ], [ 1.5, -0.5]]) >>> scipy.linalg2.inv([[1,2],[3,4]]) array([[-2. , 1. ], [ 1.5, -0.5]]) >>> scipy.__version__ '0.2.0-alpha-38.3279' May be your ATLAS installation is broken? Are you using the unstable version of ATLAS? What system_info.py shows to you? 
Pearu From pearu at cens.ioc.ee Mon Mar 25 12:46:13 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Mar 2002 19:46:13 +0200 (EET) Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: Message-ID: On Mon, 25 Mar 2002, Travis Oliphant wrote: > Perhaps we could just use interface_gen2 to create these with a different > number as Pearu suggested. I'm willing for that to happen. I don't exactly understand what you are proposing here but I can fix this blas 2 issue, say, in an hour (unless you'll get it first). Pearu From oliphant.travis at ieee.org Mon Mar 25 12:59:56 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 10:59:56 -0700 Subject: [SciPy-dev] how to debug: undefined symbol: clapack_sgetrf In-Reply-To: References: Message-ID: On Monday 25 March 2002 10:14 am, you wrote: > On 25 Mar 2002, Carl Haugan wrote: > > Hello, > > I'm trying to install scipy (loaded from the CVS tree) onto a Mandrake > > 8.2 system. I think I've followed the instructions at: > > http://www.scipy.org/Members/fperez/PerezCVSBuild.htm > > > > I get the following error: > > > > exec("import %s" % name, gldict) > > File "", line 1, in ? > > ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/clapack.so: > > undefined symbol: clapack_sgetrf > > > > How should I go about figuring out what went wrong? > > Seems like ATLAS is not discovered properly. What is the output of > > python scipy_distutils/system_info.py > > ? What ATLAS you are using? > > Note that the latest SciPy CVS tree requires also the latest unstable > version of ATLAS (>3.3.14) because linalg2 is being merged into the SciPy > installation tree right now and linalg2 wraps some routines that are > not present in the current stable ATLAS release. Technically, it only requires it if you want linalg2 to work (linalg2 is not imported by default, so you can safely ignore it if you don't want to bother with the latest ALTAS). -Travis From oliphant.travis at ieee.org Mon Mar 25 13:12:47 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 11:12:47 -0700 Subject: [SciPy-dev] Re: Linalg2 In-Reply-To: References: Message-ID: On Monday 25 March 2002 10:46 am, you wrote: > On Mon, 25 Mar 2002, Travis Oliphant wrote: > > Perhaps we could just use interface_gen2 to create these with a different > > number as Pearu suggested. I'm willing for that to happen. > > I don't exactly understand what you are proposing here but I can fix this > blas 2 issue, say, in an hour (unless you'll get it first). > I'll leave it to you. -Travis From cookedm at physics.mcmaster.ca Mon Mar 25 13:47:08 2002 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 25 Mar 2002 13:47:08 -0500 Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. In-Reply-To: (Pearu Peterson's message of "Sat, 23 Mar 2002 10:54:10 +0200 (EET)") References: Message-ID: At some point, Pearu Peterson wrote: > On Sat, 23 Mar 2002, Prabhu Ramachandran wrote: > >> I get an error saying atlas libs cant be found. They are very much >> installed. I looked at the code and it seems that the code looks for >> liblapack, libf77blas, libcblas, libatlas all in either /usr/lib/atlas >> or /usr/local/lib/atlas (or a user specified directory). Under debian >> these libraries all exist but are not in /usr/lib/atlas. The cblas, >> f77blas and atlas libs are in /usr/lib the others in /usr/lib/atlas. >> This fools the system_info.py script into thinking that there aren't >> any useful atlas libs and hence the build does not proceed. 
I worked >> around the problem by creating links to these libraries in >> /usr/lib/atlas. > > Writing configure tool can be tricky. Different systems, different > distributions of the same system, different users of the same > distribution may install software in endless ways (different > locations, having only or both system-wide and user-defined setups, > etc). Making one user happy probably means making another unhappy etc. > > Anyway, I have added some additional hooks so that atlas libraries are > discovered also on Debian Woody. Note that atlas in Debian Sid (unstable) can be different. There are processor-specific atlas packages which install the libraries in /usr/lib/{3dnow,sse,sse2}. It looks like the code for discovering the libraries always looks for a library path of the form prefix+'lib'+something. It seems to me that it should do something like libprefix+something, allowing libprefix to be specified separately. I decided to rewrite system_info.py to use a configuration file for where to look. I've eschewed a sophisticated automated search that can fail with no simple way to override, for a simple automated search and user-settable paths. My version reads a site.cfg (which is in the same directory as system_info.py) which is in a .INI format (read with ConfigParser). I can now compile scipy with no troubles :-) Hope you can use this. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke |cookedm at physics.mcmaster.ca -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: system_info.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: site.cfg URL: From pearu at scipy.org Mon Mar 25 13:58:00 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 25 Mar 2002 12:58:00 -0600 (CST) Subject: [SciPy-dev] Building on Woody(Debian): good news, bad news. In-Reply-To: Message-ID: On Mon, 25 Mar 2002, David M. Cooke wrote: > I decided to rewrite system_info.py to use a configuration file for > where to look. I've eschewed a sophisticated automated search that can > fail with no simple way to override, for a simple automated search and > user-settable paths. My version reads a site.cfg (which is in the same > directory as system_info.py) which is in a .INI format (read with > ConfigParser). > > I can now compile scipy with no troubles :-) Hope you can use this. Thanks! I'll study your changes later and see how to merge them with the current CVS system_info.py. But from a quick look on your changes, they look really good. Thanks, Pearu From pearu at cens.ioc.ee Mon Mar 25 15:41:06 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Mar 2002 22:41:06 +0200 (EET) Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: Message-ID: Hi, I have fixed the confusion with linalg2 now. I have tested it to work both when building it as a submodule of scipy and as a standalone package. But what I don't understand, is why http://scipy.net/cgi-bin/viewcvs.cgi/scipy/linalg2/ is not showing generic_flapack.pyf file (it appears only in Attic directory). Nevertheless, I can checkout this file from two different machines as a normal file. I'll take that this is either a viewcvs issue or a feature of cvs. And I am sure that someone will notice if there will be problems with checking out this file.. 
;) Regards, Pearu From oliphant.travis at ieee.org Mon Mar 25 16:59:05 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 14:59:05 -0700 Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: References: Message-ID: On Monday 25 March 2002 01:41 pm, you wrote: > Hi, > > I have fixed the confusion with linalg2 now. I have tested it to work > both when building it as a submodule of scipy and as a standalone > package. > > But what I don't understand, is why > > http://scipy.net/cgi-bin/viewcvs.cgi/scipy/linalg2/ > > is not showing generic_flapack.pyf file (it appears only in Attic > directory). > Nevertheless, I can checkout this file from two different machines as a > normal file. I'll take that this is either a viewcvs issue or a feature > of cvs. And I am sure that someone will notice if there will be problems > with checking out this file.. ;) > I'm getting the following error on build (with a brand-new checked out version of SciPy). exceptions.ImportError: No module named _flinalg Traceback (most recent call last): File "setup.py", line 128, in ? install_package() File "setup.py", line 94, in install_package config.extend([get_package_config(x,parent_package)for x in standard_packages]) File "setup.py", line 46, in get_package_config config = mod.configuration(parent) File "linalg2/setup_linalg2.py", line 21, in configuration from linalg2.interface_gen import generate_interface File "/home/travis/SciPy/scipy/linalg2/__init__.py", line 31, in ? exec 'import %s as _mod' % (_mod_name) File "", line 1, in ? File "/home/travis/SciPy/scipy/linalg2/basic.py", line 11, in ? import calc_lwork ImportError: No module named calc_lwork -Travis From oliphant.travis at ieee.org Mon Mar 25 17:04:45 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 15:04:45 -0700 Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: References: Message-ID: On Monday 25 March 2002 01:41 pm, you wrote: > Hi, > Pearu, I noticed you changed the name of interface_gen2.py back to interface_gen.py. Did you fix the problem with distutils confusing linalg/interface_gen.py with linalg2/interface_gen.py ? I cannot test this currently because build is broken on my machine --- after an all-day delete everything and update again to try and fix whatever was wrong before. Thanks for your help, -Travis From pearu at cens.ioc.ee Mon Mar 25 17:20:00 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 26 Mar 2002 00:20:00 +0200 (EET) Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: Message-ID: On Mon, 25 Mar 2002, Travis Oliphant wrote: > On Monday 25 March 2002 01:41 pm, you wrote: > > Hi, > > > > Pearu, > > I noticed you changed the name of interface_gen2.py back to interface_gen.py. > > Did you fix the problem with distutils confusing linalg/interface_gen.py > with linalg2/interface_gen.py ? Well, I thought I fixed it by importing linalg2.interface_gen but obviously it fails (it worked for me when I built linalg2 locally, if I remove local build, then I get the import error). I'll try to figure out how to fix this without renaming interface_gen.py. If we would be forced to rename local files then it would indicate a bad design, I think. Two completely different packages may have files with the same names and there should be no naming conflicts. 
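As an aside, the collision described above comes from Python's module cache: once a module named interface_gen is in sys.modules, a second bare import of the same name from another sub-package is a no-op. A minimal sketch with hypothetical directory names, which also shows why the reload() fix that appears a little further down in this thread works:

import sys

sys.path.insert(0, 'linalg')     # hypothetical: first sub-package directory
import interface_gen             # loads linalg/interface_gen.py

sys.path.insert(0, 'linalg2')    # hypothetical: second sub-package directory
import interface_gen             # no-op: sys.modules still holds the linalg copy

reload(interface_gen)            # re-searches sys.path, now picks up linalg2's file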
Talk to you soon, Pearu From oliphant.travis at ieee.org Mon Mar 25 17:37:59 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 15:37:59 -0700 Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: References: Message-ID: On Monday 25 March 2002 03:20 pm, you wrote: > On Mon, 25 Mar 2002, Travis Oliphant wrote: > > On Monday 25 March 2002 01:41 pm, you wrote: > > > Hi, > > > > Pearu, > > > > I noticed you changed the name of interface_gen2.py back to > > interface_gen.py. > > > > Did you fix the problem with distutils confusing linalg/interface_gen.py > > with linalg2/interface_gen.py ? > > Well, I thought I fixed it by importing linalg2.interface_gen but > obviously it fails (it worked for me when I built linalg2 locally, if I > remove local build, then I get the import error). > > I'll try to figure out how to fix this without renaming interface_gen.py. > If we would be forced to rename local files then it would indicate a bad > design, I think. Two completely different packages may have files with > the same names and there should be no naming conflicts. > I have a semi-hack which works. I just put a try-except around import (the setup doesn't need basic.py) in the __init__.py file for linalg2. This seems to work. Finally, it looks like my installation is working again. Thanks for your help, -Travis From pearu at scipy.org Mon Mar 25 17:28:14 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 25 Mar 2002 16:28:14 -0600 (CST) Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: Message-ID: On Mon, 25 Mar 2002, Travis Oliphant wrote: > I have a semi-hack which works. > > I just put a try-except around import (the setup doesn't need basic.py) in > the __init__.py file for linalg2. > > This seems to work. > > Finally, it looks like my installation is working again. Thanks for your > help, Hold on a bit before commiting, I think I have a non-hack solution: In get_package_config there is a hack that inserts/removes a path in order to import setup_*. I'll try to get the same result without this hack by using temporarily-changing-directory technique... Pearu From pearu at scipy.org Mon Mar 25 17:58:50 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Mon, 25 Mar 2002 16:58:50 -0600 (CST) Subject: linalg2 is back (Re: [SciPy-dev] Re: Linalg2) In-Reply-To: Message-ID: On Mon, 25 Mar 2002 pearu at scipy.org wrote: > On Mon, 25 Mar 2002, Travis Oliphant wrote: > > > I have a semi-hack which works. > > > > I just put a try-except around import (the setup doesn't need basic.py) in > > the __init__.py file for linalg2. > > > > This seems to work. > > > > Finally, it looks like my installation is working again. Thanks for your > > help, > > Hold on a bit before commiting, I think I have a non-hack solution: > In get_package_config there is a hack that inserts/removes a path > in order to import setup_*. I'll try to get the same result without this > hack by using temporarily-changing-directory technique... Ok, I have a somewhat nicer fix now: setup_linalg2.py (also setup_linalg.py for completeness) must contain import interface_gen reload(interface_gen) generate_interface = interface_gen.generate_interface and all these try-except hacks are unnecessary. Travis, can you verify that? Pearu From oliphant.travis at ieee.org Mon Mar 25 20:41:03 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Mar 2002 18:41:03 -0700 Subject: [SciPy-dev] Linalg2 working and isnan questions? 
In-Reply-To: 
References: 
Message-ID: 

Pearu and others,

Linalg2 works for me (along with linalg), yeah... I've been able to move
qr over to linalg2 (and tests pass).

On another subject:

I noticed before that some people were saying isnan tests failed.
Could you describe that problem again?

I got failures on the test

isnan(log(-1))==1

as well, but only because the scipy.log function returns complex values
for negative numbers now. fastumath.log still does it the old way for any
die-hards.

-Travis

From nwagner at mecha.uni-stuttgart.de Tue Mar 26 04:31:47 2002
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Tue, 26 Mar 2002 10:31:47 +0100
Subject: [SciPy-dev] ATLAS > 3.3.13
Message-ID: <3CA04003.BDAC5A15@mecha.uni-stuttgart.de>

Hi,

The latest stable version of ATLAS is 3.2.1.
3.3.5-3.3.14 are Developer (unstable) versions.

Does it make sense to install an unstable version?

Nils

From pearu at cens.ioc.ee Tue Mar 26 03:30:36 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 26 Mar 2002 10:30:36 +0200 (EET)
Subject: [SciPy-dev] Some concerns on Scipy development
Message-ID: 

Hi,

I have some concerns about how different Scipy modules are dependent on
each other through a global scipy namespace, though there is often little
need for this dependency. Namely, most of the current modules import scipy,
and as a result one can say that effectively all modules import each other.
Such a policy is not good for the following reasons:

1) It is generally good programming style if a module imports only those
resources that it actually uses. This helps, for example, in localizing
and fixing bugs at a particular module level, which is much easier than
doing it at the global scipy level.
Testing routines should be particularly careful not to import irrelevant
modules, especially if these are also under testing.

For a particular example, many modules import MLab or scipy just to
access only the resources in Numeric. I would remove all such imports in
favour of importing Numeric directly.

2) It has been wished (see the numpy-discussion list) that scipy wouldn't
be such a big monolithic environment. Instead, it should be possible to
install Scipy subpackages as standalone packages if that is possible in
principle. It has the advantage that other projects could use parts of
Scipy code without actually building the whole scipy.

Or is this the Scipy policy to force installing/using either the whole
scipy package or nothing from it? I think that then some experts will not
use scipy at all (also I find it less stimulating to contribute to scipy).

3) The idea of different levels in SciPy is not working if the lowest
level modules import everything. It would be less confusing to drop the
idea of leveling from the start -- I am not proposing to actually go for it.
Instead, as I see it, Scipy should be a rather thin interface between a
user and the modules that it provides, with some additional (high level)
convenience functions (that may import "everything"). Lower level modules
should not depend on each other unless there is a good reason for that.

I wonder how other Scipy developers feel about these issues?
Do you think that my concerns are somehow relevant or not?
Is it acceptable to try gradually turning lower level modules independent
of each other where possible?
Thanks, Pearu From pearu at scipy.org Tue Mar 26 03:30:36 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Tue, 26 Mar 2002 02:30:36 -0600 (CST) Subject: [SciPy-dev] Re: [SciPy-user] ATLAS > 3.3.13 In-Reply-To: <3CA04003.BDAC5A15@mecha.uni-stuttgart.de> Message-ID: On Tue, 26 Mar 2002, Nils Wagner wrote: > Hi, > > The latest stable version of ATLAS is 3.2.1. > 3.3.5- 3.3.14 are Developer (unstable) versions. > > Does it make sense to install an unstable version ? In case of ATLAS, yes, I think so. The latest stable 3.2.1 is not quite stable as you need to apply some patches. And the latest unstable 3.3.>13 seems to be quite stable to me. There is a rumor going on that ATLAS team will release a new stable release soon. Pearu From oliphant at ee.byu.edu Tue Mar 26 02:00:31 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Mar 2002 02:00:31 -0500 (EST) Subject: [SciPy-dev] ATLAS > 3.3.13 In-Reply-To: <3CA04003.BDAC5A15@mecha.uni-stuttgart.de> Message-ID: > Hi, > > The latest stable version of ATLAS is 3.2.1. > 3.3.5- 3.3.14 are Developer (unstable) versions. > > Does it make sense to install an unstable version ? Well, SciPy in the CVS tree is also "unstable" It depends on how "cutting edge" you want to be. Pearu has noted that there appears to be some gearing up for a release in the ATLAS mailing lists which would indicate that the "unstable" release is about to become "stable." I'm using 3.3.13 and it appears to work fine. -Travis From oliphant at ee.byu.edu Tue Mar 26 02:16:21 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Mar 2002 02:16:21 -0500 (EST) Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: Message-ID: On Tue, 26 Mar 2002, Pearu Peterson wrote: > > Hi, > > I have some concerns about how different Scipy modules are dependent on > each other by a global scipy namespace, though there is often little need > for this dependency. Namely, most of the current modules import scipy and > as result one can say that effectively all modules import each other. Such > a policy is not good because of the following reasons: > > 1) It is generally a good style of programming if a module imports only > thouse resources that it actually uses. This helps, for example, in > localizing and fixing bugs in a particular module level, which will be > much easier when doing it in a global scipy level. > Testing routines should be particullary careful not importing > irrelevant modules, escpecially, if these are also under testing. > The point with SciPy is to bring together a lot of different modules in one umbrella to make it easier to use. My vision is that SciPy provides a basic set of functionality that lives under the SciPy namespace (this functionality encompasses at least Numeric and MLab and other basic functions). Modules (e.g. scipy.module_name) could in principle be distributed separately, but most likely will be written to depend on basic (and extended) scipy functionality. I think we can still re-factor, of course, but there is a reason and purpose for the design as it exists. > For a particular example, many modules import MLab or scipy just to > access only the resources in Numeric. I would remove all such imports in > favour of importing Numeric directly. But, I see SciPy subsuming Numeric's function calls. In other words, a person shouldn't have to remember whether a certain expression they use regularly comes (originally) from Numeric or SciPy. 
No package in SciPy should use RandomArray, or LinearAlgebra, or FFT (ideally) as these are all supported in SciPy. > > 2) There has been wished (see numpy-discussion list) that scipy wouldn't > be such a big monolitic environment. Instead, it should be possible to > install Scipy subpackages as standalone packages if that is possible in > principle. It has an advantage that other projects could use parts of > Scipy codes without actually building the whole scipy. It is hoped that this will be possible. That is the point of "levels." The basic package will likely have to be installed in order to get other modules. I just don't think it's worth the effort to satisfy one dissonant user who will probably never like SciPy no matter what we do. > > Or is this the Scipy policy to force installing/using either the whole > scipy package or nothing from it? I think that then some experts will not > use scipy at all (also I find it less stimulating to contribute to > scipy). No, but, the user should have to install some "base package." > > 3) The idea of different levels in SciPy is not working if the lowest > level modules import everything. It would be less confusing to drop the > idea of leveling from start -- I am not proposing to acctually go for it. > Instead, as I see it, Scipy should be rather thin interface between an > user and modules that it provides, with some additional (high > level) convinience functions (that may import "everything"). Lower level > modules should not depend each other unless there is a good reason for > that. First, lower levels don't depend on functions in higher levels, as far as I know (If they did the code wouldn't import), so I'm not sure where that statement comes from. I'm not really sure how your overall idea is different from what we are doing, other than the fact that I think Numeric is subsumed by SciPy (there are future compatibility issues for this choise as well with Numeric in a state of flux). The level idea is just a loose association to try and capture different dependencies. It solves the problem of importing things in the right order in the __init__ file. It also helps to see which packages can be "installed separately." > > I wonder how other Scipy developers feel about these issues? > Do you think that my concerns are somehow relevant or not? > Is it acceptable trying gradually to turn lower level modules to be > independent from each other if possible? If you have particular suggestions please state them. I'm always willing to consider re-factoring code. -Travis From oliphant at ee.byu.edu Tue Mar 26 02:41:51 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Mar 2002 02:41:51 -0500 (EST) Subject: [SciPy-dev] Test cases for expm Message-ID: The test cases Nils posted recently work fine for me. As expected expm2 does not work well (nor does funm) for the badly conditioned matrix obtained during that computation. expm and expm3 agree and look good. Thanks for the tests. If you are still having trouble, consider removing the build directory and starting again. The error Nils showed was fixed only a few hours ago, and so you should update your CVS copy. -Travis From nwagner at mecha.uni-stuttgart.de Tue Mar 26 06:20:06 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 26 Mar 2002 12:20:06 +0100 Subject: [SciPy-dev] Re: [SciPy-user] Test cases for expm References: Message-ID: <3CA05966.FD2FC933@mecha.uni-stuttgart.de> Travis Oliphant schrieb: > > The test cases Nils posted recently work fine for me. 
> > As expected expm2 does not work well (nor does funm) for the badly > conditioned matrix obtained during that computation. > > expm and expm3 agree and look good. > > Thanks for the tests. > > If you are still having trouble, consider removing the build directory and > starting again. > > The error Nils showed was fixed only a few hours ago, and so you should > update your CVS copy. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user Travis, I have updated both my CVS copy of scipy and ATLAS atlas3.2.1 --> atlas3.3.13 This is the result of python exptest.py Traceback (most recent call last): File "exptest.py", line 1, in ? from scipy import * File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line 72, in ? names2all(__all__, _level0, globals()) File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line 37, in names2all exec("import %s" % name, gldict) File "", line 1, in ? File "/usr/local/lib/python2.1/site-packages/scipy/basic.py", line 25, in ? cast = {Numeric.Character: toChar, AttributeError: 'Numeric' module has no attribute 'Character' Any idea ? Nils . From kern at caltech.edu Tue Mar 26 05:17:42 2002 From: kern at caltech.edu (Robert Kern) Date: Tue, 26 Mar 2002 02:17:42 -0800 Subject: [SciPy-dev] Re: [SciPy-user] Test cases for expm In-Reply-To: <3CA05966.FD2FC933@mecha.uni-stuttgart.de> References: <3CA05966.FD2FC933@mecha.uni-stuttgart.de> Message-ID: <20020326101742.GA27996@taliesen.caltech.edu> On Tue, Mar 26, 2002 at 12:20:06PM +0100, Nils Wagner wrote: [snip] > Travis, > > I have updated both my CVS copy of scipy and ATLAS atlas3.2.1 --> > atlas3.3.13 > > This is the result of python exptest.py > > Traceback (most recent call last): > File "exptest.py", line 1, in ? > from scipy import * > File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line > 72, in ? > names2all(__all__, _level0, globals()) > File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line > 37, in names2all > exec("import %s" % name, gldict) > File "", line 1, in ? > File "/usr/local/lib/python2.1/site-packages/scipy/basic.py", line 25, > in ? > cast = {Numeric.Character: toChar, > AttributeError: 'Numeric' module has no attribute 'Character' > > Any idea ? Just ran into that myself. Upgrade to Numeric 21.0 > Nils -- Robert Kern Ruddock House President kern at caltech.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pearu at scipy.org Tue Mar 26 06:14:17 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Tue, 26 Mar 2002 05:14:17 -0600 (CST) Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: Message-ID: Hi Travis, On Tue, 26 Mar 2002, Travis Oliphant wrote: > On Tue, 26 Mar 2002, Pearu Peterson wrote: > > > > 1) It is generally a good style of programming if a module imports only > > thouse resources that it actually uses. This helps, for example, in > > localizing and fixing bugs in a particular module level, which will be > > much easier when doing it in a global scipy level. > > Testing routines should be particullary careful not importing > > irrelevant modules, escpecially, if these are also under testing. > > > > The point with SciPy is to bring together a lot of different modules in > one umbrella to make it easier to use. I agree. 
> My vision is that SciPy provides a basic set of functionality that lives > under the SciPy namespace (this functionality encompasses at least > Numeric and MLab and other basic functions). Ok. > Modules (e.g. scipy.module_name) could in principle be distributed > separately, but most likely will be written to depend on basic (and > extended) scipy functionality. The "but" part will probably the killer for the first part. My point is to avoid this from the very beginning when introducing a new module. Later the dependency on basic scipy may become necessity, but may be not: I believe that the later may be the case especially for wrappers to external libraries. > > For a particular example, many modules import MLab or scipy just to > > access only the resources in Numeric. I would remove all such imports in > > favour of importing Numeric directly. > > But, I see SciPy subsuming Numeric's function calls. In other words, a > person shouldn't have to remember whether a certain expression they use > regularly comes (originally) from Numeric or SciPy. I agree with that as an end-user when importing scipy. But as a developer, I am looking the code and it will not be clear where this or that function comes from. And I am sure that also users would appreciate if it would be clearer. I presume that SciPy will dependent on Numeric for a long time. > > No package in SciPy should use RandomArray, or LinearAlgebra, or FFT > (ideally) as these are all supported in SciPy. I agree with that. > > 2) There has been wished (see numpy-discussion list) that scipy wouldn't > > be such a big monolitic environment. Instead, it should be possible to > > install Scipy subpackages as standalone packages if that is possible in > > principle. It has an advantage that other projects could use parts of > > Scipy codes without actually building the whole scipy. > > It is hoped that this will be possible. That is the point of "levels." > > The basic package will likely have to be installed in order to get other > modules. I just don't think it's worth the effort to satisfy one > dissonant user who will probably never like SciPy no matter what we do. For example, linalg2 needed only scipy_distutils (and that comes with f2py) and nothing from scipy basic level, to be fully functional and useful. Until it imported scipy... > > > Or is this the Scipy policy to force installing/using either the whole > > scipy package or nothing from it? I think that then some experts will not > > use scipy at all (also I find it less stimulating to contribute to > > scipy). > > No, but, the user should have to install some "base package." Which currently means the whole scipy. Sure, one can remove unnecessay modules from scipy.__init__, but then I would not call the resulting installation as a `scipy' package, though it will share the name. By installing scipy.module_name as a standalone module I mean that one makes a change to directory module_name and executes there ./setup_module_name.py install and module_name will be installed as module_name, not scipy.module_name. Also, testing scipy.module_name becomes difficult if module_name imports scipy: one has to install scipy before it is possible to run tests in scipy/module_name directory. Such a waste of time... not to speak about difficulty to repair possible bugs... > > 3) The idea of different levels in SciPy is not working if the lowest > > level modules import everything. It would be less confusing to drop the > > idea of leveling from start -- I am not proposing to acctually go for it. 
> > Instead, as I see it, Scipy should be rather thin interface between an > > user and modules that it provides, with some additional (high > > level) convinience functions (that may import "everything"). Lower level > > modules should not depend each other unless there is a good reason for > > that. > > First, lower levels don't depend on functions in higher levels, as far as > I know (If they did the code wouldn't import), so I'm not sure where that > statement comes from. Sure. But the lower levels import the higher levels (that is included in import scipy statement). To me strictly only vice-versa makes sense. > I'm not really sure how your overall idea is different from what we are > doing, other than the fact that I think Numeric is subsumed by SciPy > (there are future compatibility issues for this choise as well with > Numeric in a state of flux). > > The level idea is just a loose association to try and capture > different dependencies. It solves the problem of importing things in > the right order in the __init__ file. It also helps to see which packages > can be "installed separately." I appreciate the idea. I believe that the roots of problems of importing things in a possibly wrong order come from the fact that submodules try to import everything -- I have several times been faced with this issue in different projects. I have worked out a possible way to avoid these problems with an effect that all submodules have everything and yet they import very little. I don't want to suggest this approach to SciPy right now though. But to get an idea, see http://cens.ioc.ee/cgi-bin/cvsweb/python/soliton/hirota/symb/ > > I wonder how other Scipy developers feel about these issues? > > Do you think that my concerns are somehow relevant or not? > > Is it acceptable trying gradually to turn lower level modules to be > > independent from each other if possible? > > If you have particular suggestions please state them. I'm always willing > to consider re-factoring code. In particular, I am currently interested in linalg2 and developing it without the need to rebuild the whole scipy too often, already building linalg2 alone takes some precious time. Note that you don't need to bother with that provided that if you don't mind if I can change some things in your commits to linalg2. Regards, Pearu From prabhu at aero.iitm.ernet.in Tue Mar 26 06:09:12 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Tue, 26 Mar 2002 16:39:12 +0530 Subject: [SciPy-dev] Linalg2 working and isnan questions? In-Reply-To: References: Message-ID: <15520.22232.89140.81151@monster.linux.in> >>>>> "TO" == Travis Oliphant writes: TO> I noticed before that some people were saying isnan tests TO> failed. TO> Could you describe that problem again. TO> I got failures on the test TO> isnan(log(-1))==1 TO> as well but only because the scipy.log function returns TO> complex values for negative numbers now. fastumath.log still TO> does it the old way for any die-hards. My original report of this is here: http://www.scipy.net/pipermail/scipy-dev/2002-March/000664.html I havent updated my cvs tree since so its status quo here. prabhu From k.roscoe at freeuk.com Tue Mar 26 10:29:31 2002 From: k.roscoe at freeuk.com (keith roscoe) Date: Tue, 26 Mar 2002 15:29:31 +0000 Subject: [SciPy-dev] Weave/blitz element index type-error (?) 
Message-ID: 

Dear all,

When trying to directly access the elements of a 2D Numeric array
(passed to weave.inline as a blitz array), I get a compilation failure
when using type 'unsigned int' for the element index. However, using
type 'int' for the index compiles and executes perfectly.

Blitz doesn't have this limitation when used in a stand-alone C++
program, which leads me to believe it may be a bug.

The weave compilation doesn't fail for 1D arrays, though, so I'm hoping
that it's my understanding that's at fault.

At the end of the message is some code that exhibits this behaviour.
Change the value of _break_compilation to 1 to make the compilation fail.

Regards,

Keith

My setup:

Python: 2.2
Numeric: 20.3
weave: CVS (affects 0.2.3 also)

# -------------- Start of problematic code ----

import Numeric
import weave

"""
Demonstrates weave compilation failure when indexing type is
unsigned for 2D arrays.
Toggle '_break_compilation' flag to show bug.

Working output (_break_compilation == 0)
----------------------------------------

Before weave:
[[0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]]

After weave:
[[0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 9 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]]

Fails to compile for _break_compilation == 1
"""

# set to 0 for working code, set to 1 to break it.
_break_compilation = 0


def main():
    a = Numeric.zeros((5, 5))

    if _break_compilation:
        index_type = "unsigned int"
    else:
        index_type = "int"

    print "Before weave:"
    print a

    code = \
    """
    typedef %s index_type;

    const index_type middle = 2;

    a(middle, middle) = 9;
    """ % index_type

    weave.inline(code, ['a',],
                 type_converters = weave.converters.blitz)

    print "\nAfter weave:"
    print a


if __name__ == '__main__':
    main()

From jochen at unc.edu Tue Mar 26 11:37:37 2002
From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=)
Date: 26 Mar 2002 11:37:37 -0500
Subject: [SciPy-dev] Some concerns on Scipy development
In-Reply-To: 
References: 
Message-ID: 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi,

some concerns regarding the discussion of the umbrella mode
vs. separability of individual modules:

To my mind there is a big "problem" with using scipy currently: it's
in too much flux. While there is great progress, and that is really
appreciated in view of a great product at some later time, it makes it
hard to use for now. (So I decided not to put any "scipy" effort into
code I write for now.)

If you guys can keep that work up (and it looks like that), just go
ahead and come back with a great product. This is a long way to go,
though, and you should consider whether you want people to be able to
use scipy and get involved in its development along the way. If so, one
way would be to get individual modules "stable" and let them be used
without getting broken by every other scipy change (commit?).

Btw: I have a separate integrate module independent of scipy. [1]
I am looking to make a standalone linalg2... (Shouldn't be too hard
from what I read from Pearu.)

Greetings,
Jochen

Footnotes:
[1] I stated that it was written by Travis and put in the scipy
LICENSE. Anything else I have to consider to distribute it?

- --
Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de
Liberté, égalité, Fraternité
GnuPG key: 44BCCD8E Sex, drugs and rock-n-roll -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6-cygwin-fcn-1 (Cygwin) Comment: Processed by Mailcrypt and GnuPG iEYEARECAAYFAjygo9MACgkQiJ/aUUS8zY5i4QCdGl/E8fO2ewI5n8SNPbVLgoT8 LW4An1mFVlK9k+74/TKHZyXPyHvTAeS9 =2M/8 -----END PGP SIGNATURE----- From eric at scipy.org Tue Mar 26 10:39:51 2002 From: eric at scipy.org (eric) Date: Tue, 26 Mar 2002 10:39:51 -0500 Subject: [SciPy-dev] Weave/blitz element index type-error (?) References: Message-ID: <034c01c1d4dc$79503dc0$6b01a8c0@ericlaptop> Hey Keith, >From my quick test, it looks like the version of Blitz that weave currently uses (blitz-20001213) doesn't actually support this. I included my little C++ test program below. My experience with newer Blitz versions was never to pleasant -- when I found one that worked on Windows and Linux, I went with it. However, I haven't played with the most recent versions -- perhaps they are better? What version of Blitz are you using? Are you having good luck with it? I'm happy to upgrade the weave package if there is a "drop in solution". In fact, rumor has it that some of the newer Blitz versions are set up to compile in about half the time for many situations, so upgrading would have this added benefit. eric ------ test command an program -------- C:\temp>g++.exe -O2 -w -Wstrict-prototypes -IC:\Python21\weave\blitz-20001213 -c C:\temp\wt.cxx #include "stdio.h" #include "blitz/array.h" int main() { blitz::Array a = blitz::Array(5,5); blitz::TinyVector _Na = a.shape(); typedef unsigned int index_type; const index_type middle = 2; a(middle, middle) = 9; printf("middle: %d\n",a(middle,middle)); return 0; } ----- Original Message ----- From: "keith roscoe" To: Sent: Tuesday, March 26, 2002 10:29 AM Subject: [SciPy-dev] Weave/blitz element index type-error (?) > Dear all, > > When trying to directly access the elements of a 2D Numeric array ( > passed to weave.inline as a blitz array), I get a compilation failure > when using type 'unsigned int' for the element index. However, using > type 'int' for the index compiles and executes perfectly. > > Blitz doesn't have this limitation when used in a stand-alone C++ > program, which leads me to believe it may be a bug. > > But the weave compilation doesn't fail for 1D arrays though, so I'm > hoping that it's my understanding that's at fault. > > At the end of the message is some code that exhibits this behaviour. > Change the value of _break_compilation to 1 to make the compilation > fail. > > Regards, > > Keith > > > My setup: > > Python: 2.2 > Numeric: 20.3 > weave: CVS (affects 0.2.3 also) > > > > # -------------- Start of problematic code ---- > > import Numeric > import weave > > """ > Demonstrates weave compilation failure when indexing type is > unsigned for 2D arrays. > Toggle '_break_compilation' flag to show bug. > > Working output (_break_compilation == 0) > --------------------------------------- > > Before weave: > [[0 0 0 0 0] > [0 0 0 0 0] > [0 0 0 0 0] > [0 0 0 0 0] > [0 0 0 0 0]] > > After weave: > [[0 0 0 0 0] > [0 0 0 0 0] > [0 0 9 0 0] > [0 0 0 0 0] > [0 0 0 0 0]] > > Fails to compile for _break_compilation != 1 > > """ > > # set to 0 for working code, set to 1 to break it. 
> _break_compilation = 0 > > > def main(): > a = Numeric.zeros((5, 5)) > > if _break_compilation: > index_type = "unsigned int" > else: > index_type = "int" > > print "Before weave:" > print a > > code = \ > """ > typedef %s index_type; > > const index_type middle = 2; > > a(middle, middle) = 9; > """ % index_type > > weave.inline(code, ['a',], > type_converters = weave.converters.blitz) > > print "\nAfter weave:" > print a > > > if __name__ == '__main__': > main() > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From k.roscoe at freeuk.com Tue Mar 26 11:48:15 2002 From: k.roscoe at freeuk.com (keith roscoe) Date: Tue, 26 Mar 2002 16:48:15 +0000 Subject: [SciPy-dev] Re: Scipy-dev -- confirmation of subscription -- request 570689 Message-ID: From k.roscoe at freeuk.com Tue Mar 26 11:53:53 2002 From: k.roscoe at freeuk.com (keith roscoe) Date: Tue, 26 Mar 2002 16:53:53 +0000 Subject: [SciPy-dev] Scipy-dev -- confirmation of subscription -- request 570689 Message-ID: confirm 570689 From magnus at thinkware.se Tue Mar 26 12:09:35 2002 From: magnus at thinkware.se (Magnus =?iso-8859-1?Q?Lyck=E5?=) Date: Tue, 26 Mar 2002 18:09:35 +0100 Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: References: Message-ID: <5.1.0.14.0.20020326180233.02affd48@mail.irrblosset.se> At 02:16 2002-03-26 -0500, Travis Oliphant wrote: >The point with SciPy is to bring together a lot of different modules in >one umbrella to make it easier to use. It's my experience that looser coupling between components makes them both easier to use and (most of all) easier to develop and maintain. In the short run it might be easy to assimilate all in a borg-like manner, but in the long run it leads to maintenance problems if coupling is tighter than it needs to be. >My vision is that SciPy provides a basic set of functionality that lives >under the SciPy namespace (this functionality encompasses at least >Numeric and MLab and other basic functions). I fail to see why placing more functionality in one single namespace simplifies life for the users. -- Magnus Lyck?, Thinkware AB ?lvans v?g 99, SE-907 50 UME? tel: 070-582 80 65, fax: 070-612 80 65 http://www.thinkware.se/ mailto:magnus at thinkware.se From k.roscoe at freeuk.com Tue Mar 26 12:29:45 2002 From: k.roscoe at freeuk.com (keith roscoe) Date: Tue, 26 Mar 2002 17:29:45 +0000 Subject: [SciPy-dev] Weave/blitz element index type-error (?) Message-ID: Eric, > From my quick test, it looks like the version of Blitz that weave currently uses > (blitz-20001213) doesn't actually support this. I included my little C++ test > program below. Thanks for this. I looks like I went off half cocked and there was a mistake with my C++ test script. I'll look into the newer versions of blitz though and if any of them support this I'll mail you back. Thanks for you time. Keith ps. Apologies for spamming the list with confirmation messages. I'm having a few teething problems with my new mail client. (Hence the odd formatting of this message!) From k.roscoe at freeuk.com Tue Mar 26 12:47:13 2002 From: k.roscoe at freeuk.com (keith roscoe) Date: Tue, 26 Mar 2002 17:47:13 +0000 Subject: [SciPy-dev] Weave/blitz element index type-error (?) Message-ID: > I'll look into the newer versions of blitz though and if any of them > support this > I'll mail you back. 
I've just been reading the blitz mailing lists (should've done this first in hindsight), and someone there asks the same question about using unsigned quantities for array indices. The upshot is that blitz doesn't support them because it has support for negative indices. Sorry for wasting your time Eric, still it's a good thing to know I suppose. Keith From oliphant at ee.byu.edu Tue Mar 26 11:26:56 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Mar 2002 11:26:56 -0500 (EST) Subject: [SciPy-dev] Re: [SciPy-user] Test cases for expm In-Reply-To: <3CA05966.FD2FC933@mecha.uni-stuttgart.de> Message-ID: On Tue, 26 Mar 2002, Nils Wagner wrote: > Travis Oliphant schrieb: > > > > The test cases Nils posted recently work fine for me. > > > > As expected expm2 does not work well (nor does funm) for the badly > > conditioned matrix obtained during that computation. > > > > expm and expm3 agree and look good. > > > > Thanks for the tests. > > > > If you are still having trouble, consider removing the build directory and > > starting again. > > > > The error Nils showed was fixed only a few hours ago, and so you should > > update your CVS copy. > > > > -Travis > > > > I have updated both my CVS copy of scipy and ATLAS atlas3.2.1 --> > atlas3.3.13 > > This is the result of python exptest.py > > Traceback (most recent call last): > File "exptest.py", line 1, in ? > from scipy import * > File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line > 72, in ? > names2all(__all__, _level0, globals()) > File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line > 37, in names2all > exec("import %s" % name, gldict) > File "", line 1, in ? > File "/usr/local/lib/python2.1/site-packages/scipy/basic.py", line 25, > in ? > cast = {Numeric.Character: toChar, > AttributeError: 'Numeric' module has no attribute 'Character' > > Any idea ? Try: import Numeric print Numeric.Character If that doesn't work, you have an unusual Numeric. Which version? This error shouldn't occur. -Travis From jochen at unc.edu Tue Mar 26 13:29:23 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 26 Mar 2002 13:29:23 -0500 Subject: [SciPy-dev] Re: [SciPy-user] Test cases for expm In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tue, 26 Mar 2002 11:26:56 -0500 (EST) Travis Oliphant wrote: Travis> On Tue, 26 Mar 2002, Nils Wagner wrote: >> This is the result of python exptest.py >> >> Traceback (most recent call last): >> File "exptest.py", line 1, in ? >> from scipy import * >> File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line >> 72, in ? >> names2all(__all__, _level0, globals()) >> File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line >> 37, in names2all >> exec("import %s" % name, gldict) >> File "", line 1, in ? >> File "/usr/local/lib/python2.1/site-packages/scipy/basic.py", line 25, >> in ? >> cast = {Numeric.Character: toChar, >> AttributeError: 'Numeric' module has no attribute 'Character' >> >> Any idea ? Travis> If that doesn't work, you have an unusual Numeric. Which Travis> version? See NumPy bug [ #512223 ]. This was fixed in 21.0. 
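As an aside, a quick pre-flight check for this before importing scipy (assuming only that Numeric itself imports) is:

import Numeric
print Numeric.__version__            # needs to be 21.0 or newer
print hasattr(Numeric, 'Character')  # 0 on releases affected by bug #512223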
Greetings, Jochen - -- University of North Carolina phone: +1-919-962-4403 Department of Chemistry phone: +1-919-962-1579 Venable Hall CB#3290 (Kenan C148) fax: +1-919-843-6041 Chapel Hill, NC 27599, USA GnuPG key: 44BCCD8E -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6-cygwin-fcn-1 (Cygwin) Comment: Processed by Mailcrypt and GnuPG iEYEARECAAYFAjygvgQACgkQiJ/aUUS8zY40UQCgtdt3SNGY+g2tBJW0QAs0SeVV bhwAn2TPHSduUjXr5qTpjyaq/5+i1ZzG =0BB/ -----END PGP SIGNATURE----- From oliphant at ee.byu.edu Tue Mar 26 11:42:47 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Mar 2002 11:42:47 -0500 (EST) Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: Message-ID: > > To my mind there is a big "problem" with using scipy currently: It's > in too much flux. While there is great progress and that is really > appreciated in view of a great product at some later time, it makes it > hard to use for now. (So that I decided not to put any "scipy"-effort > into code I write for now.) > The biggest problems was with linalg, and it was related to the fact that Pearu changed f2py for the better, but did so withouth changing the interface to linalg (who can blame him we all do this in our "spare" time). I disagree with this. I've been using the current CVS copy of SciPy regularly for my work and my teaching and had no real problem until this linalg thing came up. I think you are taking the exception for the rule. Regarding the umbrella verses modules approach. SciPy is an umbrella package. I used to distribute lots of modules separately. I'm not going back to that approach. For me it's too much trouble to make sure they work together. SciPy ensures that they do work together. If others want to grab pieces of SciPy and make them available cleanly, then they naturally have that right. There are too many things to do to get SciPy to where we want it to worry about distributing all the modules separately. I won't be upset if other people make them available, but I doubt I'll spend much time worrying about it. Sorry. I'd like to be helpful, but I've just got to draw the line somewhere. > If you guys can keep that work up (and it looks like that), just go > ahead and come back with a great product. This is a long way to go, > though, Why do you think that. How "far" is it for you. What's missing. With the linalg fixes and the "stats" fixes it's a lot farther than you think. What packages are people so opposed to that they don't want to install all of SciPy. I've never heard anybody discuss that. We instead speak hypothetically. Let's talk about the real problems if they exist. From rlytle at tqs.com Tue Mar 26 14:01:53 2002 From: rlytle at tqs.com (Lytle, Robert TQO) Date: Tue, 26 Mar 2002 11:01:53 -0800 Subject: [SciPy-dev] Latest weave still doesn't work on FreeBSD without a hack\ large arrays crash my system Message-ID: <81E1D2E15CCBD311A74700A0C9E1CC8E043F9488@chunky.tqs.com> It can't find libstdc++. So I added the path and library name to one of the files. Later when I get home from work I can make a diff. But here is the data, so it should be a no-brainer. FreeBSD puts libstdc++ in /usr/lib. Also, I have no problems using weave.blitz() for 50x50x50 FDTD cells. But when I go to 100x100x100 cells I run out of memory and swap. Why does the size of the array affect the compilation? Isn't an array just a pointer to a block of memory? Rob. 
From jochen at unc.edu Tue Mar 26 14:38:28 2002 From: jochen at unc.edu (Jochen =?iso-8859-1?q?K=FCpper?=) Date: 26 Mar 2002 14:38:28 -0500 Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tue, 26 Mar 2002 11:42:47 -0500 (EST) Travis Oliphant wrote: >> To my mind there is a big "problem" with using scipy currently: It's >> in too much flux. While there is great progress and that is really >> appreciated in view of a great product at some later time, it makes it >> hard to use for now. (So that I decided not to put any "scipy"-effort >> into code I write for now.) Travis> The biggest problems was with linalg, and it was related to the fact that Travis> Pearu changed f2py for the better, but did so withouth changing the Travis> interface to linalg (who can blame him we all do this in our "spare" Travis> time). I have not and will not blame anybody. The worst thing I would do is to ignore it. (And that shouldn't bother you too much.) Travis> I disagree with this. I've been using the current CVS copy of SciPy Travis> regularly for my work and my teaching and had no real problem until this Travis> linalg thing came up. Well, I have been following the scipy development for a while and have tried to build scipy fairly regularly. And since there are no "releases" cvs is the only way to follow it's progress. Sometimes it builds, soemtimes it doesn't. Then you have to see whether it works. (And more often than not it hasn't worked for me.) Initially I tried to dig into the problems ("long" ago), but lately I am just waiting for a new release. Then there are these issues with ATLAS. We have a full ATLAS named libblas.a in a path where every linker finds it, not so scipy. It isn't even happy with liblas.a, it needs all these atlas, cblas, f77blas stuff. Travis> Regarding the umbrella verses modules approach. SciPy is an Travis> umbrella package. I used to distribute lots of modules Travis> separately. I'm not going back to that approach. For me it's Travis> too much trouble to make sure they work together. SciPy Travis> ensures that they do work together. I can see this. I am actually using quite some code written by you (Thanks!). But I have to write code now that works. I have just tried to use scipy a few days ago. All the sudden I had wierd errors where "array / scalar" wouldn't work. Reproducing it in a small example of course didn't work either, so I just skipped scipy again. Yep, I should probably spend some more time to figure out what's going on and send a bug-report, but I am sorry I don't have the time currently. Travis> If others want to grab pieces of SciPy and make them available Travis> cleanly, then they naturally have that right. Sure. I actually have some of them. And I really just see them as a interim solution up to the time scipy is more nature -- whenever that is. Travis> I won't be upset if other people make them available, but I Travis> doubt I'll spend much time worrying about it. Sorry. I'd Travis> like to be helpful, but I've just got to draw the line Travis> somewhere. Of course. I can just support that. Put your time into scipy, I'd seriously say. Travis> Why do you think that. How "far" is it for you. What's Travis> missing. Stability. Portability, in a second step. Documentation. Functionality? None for now. Travis> With the linalg fixes and the "stats" fixes it's a lot farther Travis> than you think. Maybe it is. 
Travis> What packages are people so opposed to that they don't want to
Travis> install all of SciPy? I've never heard anybody discuss that.

I have no problems installing all of scipy, besides having problems with
the installation process itself. Some problems are mentioned above.

Greetings,
Jochen
- --
Einigkeit und Recht und Freiheit          http://www.Jochen-Kuepper.de
Liberté, Égalité, Fraternité                        GnuPG key: 44BCCD8E
Sex, drugs and rock-n-roll
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6-cygwin-fcn-1 (Cygwin)
Comment: Processed by Mailcrypt and GnuPG

iEYEARECAAYFAjygzjYACgkQiJ/aUUS8zY4W7ACbBkW45mcEk/gwyDJyO4hNbYxK
Hr8An3lVYV8FmwgqHxT8ShXKndVK8Wbw
=rrhu
-----END PGP SIGNATURE-----

From eric at scipy.org  Tue Mar 26 14:09:41 2002
From: eric at scipy.org (eric)
Date: Tue, 26 Mar 2002 14:09:41 -0500
Subject: [SciPy-dev] Latest weave still doesn't work on FreeBSD without a hack\ large arrays crash my system
References: <81E1D2E15CCBD311A74700A0C9E1CC8E043F9488@chunky.tqs.com>
Message-ID: <03cb01c1d4f9$c8a53520$6b01a8c0@ericlaptop>

Hey Rob,

> It can't find libstdc++. So I added the path and library name to one of
> the files. Later when I get home from work I can make a diff.
>
> But here is the data, so it should be a no-brainer. FreeBSD puts libstdc++
> in /usr/lib.

I don't get this. weave no longer explicitly specifies stdc++. It uses g++
instead whenever gcc is detected as the compiler. g++ should automatically
know where stdc++ is. When you use "verbose=2" as an argument to blitz(),
what is the output?

> Also, I have no problems using weave.blitz() for 50x50x50 FDTD cells. But
> when I go to 100x100x100 cells I run out of memory and swap. Why does the
> size of the array affect the compilation? Isn't an array just a pointer to
> a block of memory?

This doesn't make sense to me either. The same code should be generated for
both. In fact, after the FDTD equation(s) have been compiled once, they
shouldn't have to be compiled again -- it should detect the already
compiled version and simply start cranking away.

One thought. Perhaps the 100x100x100 arrays are eating up a significant
fraction of your memory. I don't know what formulation you are using, but
if this is 3D, you have 6 material arrays, and you're using double
precision, then the memory usage for the FDTD alone is:

    100*100*100/1e6 * (6 + 3) * 8 = 96 MB

If your machine is somewhat limited, this plus the memory that g++ uses
when compiling blitz++ code (memory intensive) may be stressing your
machine to its limits. This doesn't seem that plausible, but I can't think
of another explanation.

Can you send me a snippet that exhibits this problem?

thanks,
eric

From rlytle at tqs.com  Tue Mar 26 18:25:03 2002
From: rlytle at tqs.com (Lytle, Robert TQO)
Date: Tue, 26 Mar 2002 15:25:03 -0800
Subject: [SciPy-dev] Hi Eric
Message-ID: <81E1D2E15CCBD311A74700A0C9E1CC8E043F995C@chunky.tqs.com>

The stdc++ thing doesn't bother me as much as the large array problem. I
use blitz() three times to wrap the field update equations in FDTD.
Typically the compile uses up 200 MB of RAM and 200 MB of swap, so it's
pretty slow. But afterwards it runs like a bat out of hell. But when I
upped the matrix size it suddenly needed more memory for the compile, and
crashed. Without blitz() this project consumes 400 MB, since it has many
other matrices for the UPML.

Now here at work, I have a 1.8 GHz P4 with 768 MB of RAM. I tried out
blitz() with MinGW and I get stack overflows. Unlike gcc, it can't handle
such large compiles.

Rob.
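Eric's back-of-envelope figure above generalizes easily; the array count is the assumption that varies with the FDTD formulation (field components plus however many material/UPML arrays are kept). A tiny helper like the following, with illustrative array counts only, makes the scaling from 50^3 to 100^3 cells concrete.

    def grid_memory_mb(cells_per_side, n_arrays, bytes_per_element=8):
        """Rough footprint, in MB, of the 3-D arrays alone (double precision)."""
        return cells_per_side ** 3 * n_arrays * bytes_per_element / 1e6

    # Illustrative counts only -- the real number depends on the formulation.
    for n_arrays in (9, 12, 50):
        print(n_arrays, "arrays at 100^3 cells:", grid_memory_mb(100, n_arrays), "MB")
        print(n_arrays, "arrays at  50^3 cells:", grid_memory_mb(50, n_arrays), "MB")

Halving the linear size cuts the footprint by a factor of eight, which is why dropping from 100^3 to roughly 80^3 cells (about half the memory) is enough to rescue Rob's machine later in the thread.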
From eric at scipy.org Tue Mar 26 17:52:20 2002 From: eric at scipy.org (eric) Date: Tue, 26 Mar 2002 17:52:20 -0500 Subject: [SciPy-dev] Some concerns on Scipy development Message-ID: <04bd01c1d518$e34b7870$6b01a8c0@ericlaptop> Hey Pearu, I've have thought about this a little lately also. There is a philosophical difference to packaging among the scientific developers. Some wish for small single purpose and stand alone packages that are installed one by one. Others wish for a single "standard library" of scientific tools that, once installed, is a one stop shop for a large number of scientific algorithms. There are benefits to both. However, I come squarely down in the second camp. A monolithic package is easier to install for end users, and it solves compatibility issues (such as SciPy changing the behavior of Numeric in some places). I believe the existence of such a package is required before there can be a mass conversion of engineers and scientist to Python as their tool of choice for daily tasks. This is the goal of SciPy. That said, the monolithic nature does pose some problems occasionally complicating development and following the CVS as Jochen has pointed out. Also, some may want to use some modules (weave, integrate, linalg) outside of scipy. This is useful in cases where you want to minimize install size or use something like py2exe (which currently doesn't work with SciPy). We should facilitate separating out packages when it is convenient, but not when it requires duplication of code or a lot of extra work. Perhaps re-organizing the architecture can make it convenient more often and come closer to making both camps happy. I agree that, when possible, it is nice to develop packages independent of SciPy -- that is how weave was developed. Later it was folded into SciPy, but it still runs separately. The new build structure (with separate setup_xxx.py files for each sub-package) implemented several months ago was developed, in part, to facilitate this sort of thing. Weave was easy in this respect because it doesn't need many numeric capabilities. So, I think this is a worthy goal for *some* of the modules (notably the ones people are discussing such as integrate, linalg, etc), with one caveat. These modules need access to some functions provided by scipy and will need to import at least one extra module. Scanning linalg, the needed functions are amax, amin, triu, etc. and a handful of functions subsumed from Numeric as well as some constants from scipy.limits. I consider it a bad idea to replicate these functions across multiple modules because of the maintenance issues associated with duplicate code. I don't want to go down that path. However, one thought is to make the idea of "levels" more explicit. We could define a package called "scipy_lite" or "scipy_level0" that would subsume Numeric and add the helper functions that are often used. It would not reference other scipy modules. This package would live in the scipy development tree, but would install as a separate package. So scipy_level0 would sit next to scipy in the site-packages directory. scipy_level0 would be easy to build without major dependencies -- much like Numeric. It would hold fastumath and maybe a few other extension modules, but it would be predominantly python code. The linalg, integrate, etc modules would import scipy_level0 instead of scipy. This way, people only have to port scipy_level0 instead of the whole of scipy if they want to use integrate in their package. 
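A sketch of the layering Eric proposes above, purely hypothetical: "scipy_level0"/"scipy_base" did not exist at the time of writing, and the helper names used here (triu, limits.double_epsilon) are simply ones mentioned in this thread, not a shipped API. The point is only that a sub-package such as linalg would import the light base layer rather than all of scipy.

    # Hypothetical: depends only on the proposed base layer, not on scipy itself.
    import Numeric
    from scipy_base import triu, limits     # assumed light-weight layer next to scipy

    def is_upper_triangular(a, tol=None):
        """True if `a` has no significant entries below its diagonal."""
        if tol is None:
            tol = limits.double_epsilon      # machine constant named in this thread
        below = Numeric.absolute(a - triu(a))
        return Numeric.maximum.reduce(Numeric.ravel(below)) <= tol

Nothing here needs fft, svd, or the rest of scipy, which is exactly the property that would let such a module be installed or ported on its own.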
I can't imagine much dissent about this approach by the people wanting
single packages. If you're willing to install Numeric and linalg on your
own, then you should be willing to install the scipy_level0 package.
Installing scipy_level0 outside the scipy package has some precedent since
we've already done this with scipy_distutils and scipy_test. I'd rather
leave it as a sub-directory of scipy, but pulling it out is necessary
because of the way that python handles (or doesn't handle...) imports
within packages -- that is, if we want to make it easy to use sub-modules
of scipy separately.

So this is what the site-packages view of scipy would be:

    site-packages
        scipy_distutils
        scipy_test
        scipy_level0
            subsumes and customizes Numeric
            handy.py
            misc.py
            scimath.py
            Matrix.py (?)
            fastumath.so (pyd)
            etc.
        scipy
            subsumes scipy_base
            everything else

In regards to higher-level modules that use fft, svd, and other complex
algorithms, they are just gonna have to import scipy.

This requires some discussion before we make the change. It's also gonna
require someone to step up and implement the change -- though it probably
isn't a major effort.

eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From eric at scipy.org  Tue Mar 26 17:57:47 2002
From: eric at scipy.org (eric)
Date: Tue, 26 Mar 2002 17:57:47 -0500
Subject: [SciPy-dev] Hi Eric
References: <81E1D2E15CCBD311A74700A0C9E1CC8E043F995C@chunky.tqs.com>
Message-ID: <04c501c1d519$a605ef30$6b01a8c0@ericlaptop>

> The stdc++ thing doesn't bother me as much as the large array problem. I
> use blitz() three times to wrap the field update equations in FDTD.
> Typically the compile uses up 200 MB of RAM and 200 MB of swap, so it's
> pretty slow. But afterwards it runs like a bat out of hell. But when I
> upped the matrix size it suddenly needed more memory for the compile, and
> crashed. Without blitz() this project consumes 400 MB, since it has many
> other matrices for the UPML.
>
> Now here at work, I have a 1.8 GHz P4 with 768 MB of RAM. I tried out
> blitz() with MinGW and I get stack overflows. Unlike gcc, it can't handle
> such large compiles.

As I said, this doesn't make sense to me. The same code is (or should be)
generated in both situations. If you use the verbose=2 argument to blitz
for the 50x50x50 and 100x100x100 cases, you can see the name of the .cxx
file created for the two situations. Diff these and they should be the
same. If they aren't, please send me the difference. Also, a python snippet
showing the problem will be a big help.

thanks,
eric

From rob at pythonemproject.com  Tue Mar 26 20:42:45 2002
From: rob at pythonemproject.com (rob)
Date: Tue, 26 Mar 2002 17:42:45 -0800
Subject: [SciPy-dev] I'm doing some more experimenting
Message-ID: <3CA12395.661E4104@pythonemproject.com>

It turns out that what is happening is that blitz() is creating some
overhead which makes my memory consumption higher. That is why a
100x100x100 won't work on my laptop with blitz, while it works with plain
Numpy. I dropped it down to 80x80x80, and now everything is OK. I will try
to get an exact amount for the extra memory overhead using top for
80x80x80. Right now top says 188 MB for the blitz version. I will have to
make another file to use plain python. It is definitely true that the exact
same blitz libraries work with the larger arrays, until I hit my swap
limit.

Rob.
--
-----------------------------
The Numeric Python EM Project
www.pythonemproject.com

From rob at pythonemproject.com  Tue Mar 26 21:23:11 2002
From: rob at pythonemproject.com (rob)
Date: Tue, 26 Mar 2002 18:23:11 -0800
Subject: [SciPy-dev] More experimenting continues
Message-ID: <3CA12D0F.98B78271@pythonemproject.com>

No, it turns out that an 80x80x80 sim uses the same memory with blitz() as
with plain Numpy. I will try your debugging suggestion.

Rob.

--
-----------------------------
The Numeric Python EM Project
www.pythonemproject.com

From eric at scipy.org  Tue Mar 26 22:08:18 2002
From: eric at scipy.org (eric)
Date: Tue, 26 Mar 2002 22:08:18 -0500
Subject: [SciPy-dev] preparing for release 0.2!
Message-ID: <053401c1d53c$a52453e0$6b01a8c0@ericlaptop>

Hello all,

Travis Oliphant and I had a talk today, and it sounds like SciPy 0.2 is
about baked and ready to come out of the oven. The main things we wanted to
add for this release were linalg and a cleaned-up stats module. Both are
pretty close to completion thanks to a ton of work by Pearu Peterson and
Travis O. Seeing this is the case, I'd like to shoot for a 0.2 release
candidate by Friday, April 5th.

Here's a partial list of the things I know of before release:

1. Re-factor the scipy_lite module (pending discussion).
2. Fix floating point NaN issues on multiple platforms (cephes module).
3. Have several stats people review the stats module.
4. Add lu_factor and lu_solve (qr_factor and qr_solve, etc.?) pairs to linalg.
5. Clean up weave documentation to match the current implementation.
6. Update install instructions.
7. Test on multiple platforms.

Feel free to add items -- this is meant as a starter list. Please don't put
things like "document SciPy" or "complete set of unit tests." These aren't
priorities for this release. If pydoc versions of the documentation get in,
that's great. If they don't, that is OK also. I'm most interested in
getting something out there so that people have pre-builds for 2.2 (on
Windows) and (reasonably) stable tarballs for Unix.

I guess Pearu and Travis O. are the main guys to add other "todo" items.
Also, we should coordinate a 2-4 hour period for one or two days next week
on ICQ or IRC to work as a group on final clean-up. Right now, I'd say
Monday and Thursday are the best for me. Will that work for y'all? Pearu,
what time of day works best for you?

I'm excited about getting this one out, because it is getting close to
having a "functionally complete" core now. The next revision we can
concentrate on docs and testing (everyone's favorite, I know...).

see ya,
eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From tjlahey at mud.cgl.uwaterloo.ca  Tue Mar 26 23:26:54 2002
From: tjlahey at mud.cgl.uwaterloo.ca (Tim Lahey)
Date: Tue, 26 Mar 2002 23:26:54 -0500
Subject: [SciPy-dev] Problems with CVS Scipy on Solaris
Message-ID: 

Hi,

I checked out Scipy from CVS and had a number of difficulties in compiling.

1. Is there a way of using the vendor LAPACK/BLAS instead of ATLAS? For the
   cases I'm concerned about, the Sun BLAS outperforms ATLAS.
2. I compiled ATLAS and managed to get it past the system_info check. But
   when I get to the X11 check, it says NOT FOUND, then I get a traceback.
2a. Changing the directory list in the x11_info doesn't seem to work.
2b. I still couldn't get it to pick up the header and library files when I
    added the right directory.

The tests were done on a Sun Blade 100 running Solaris 8.
Cheers,
Tim

---
Tim Lahey
PhD Student - Systems Design Engineering
tjlahey at cgl.uwaterloo.ca

From pearu at scipy.org  Wed Mar 27 04:39:41 2002
From: pearu at scipy.org (pearu at scipy.org)
Date: Wed, 27 Mar 2002 03:39:41 -0600 (CST)
Subject: [SciPy-dev] Some concerns on Scipy development
In-Reply-To: 
Message-ID: 

Hi,

On Tue, 26 Mar 2002, Travis Oliphant wrote:

> The biggest problem was with linalg, and it was related to the fact that
> Pearu changed f2py for the better, but did so without changing the
> interface to linalg (who can blame him? we all do this in our "spare"
> time).

I was very much aware that my changes would break any interface that
contains multi-dimensional arrays. Normally one would expect some
transition period when a program introduces an incompatible feature. I
could not afford that for f2py because of my time limitations now and in
the future (recall that f2py is still a one-man project run without any
external support). The issues that the new f2py solves were just too
important, so I suppressed any hesitations about backward compatibility
and fixed the long-standing confusion with the differences in how data are
stored in multi-dimensional arrays in FORTRAN and in Numeric Python.

I cannot take any blame for not fixing the interface to linalg (which I
actually did, in linalg2 -- what else would you call it?) or any other
interface. It is the job of the interface writers; after all, the new
features were introduced to ease their jobs and to avoid constant queries
about data storage issues from them to me and from the users to them and
me.

And in case you have not noticed, f2py, as LGPL software, comes with "NO
WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK."

Regards,
	Pearu

From pearu at scipy.org  Wed Mar 27 08:59:17 2002
From: pearu at scipy.org (pearu at scipy.org)
Date: Wed, 27 Mar 2002 07:59:17 -0600 (CST)
Subject: [SciPy-dev] Some concerns on Scipy development
In-Reply-To: <04bd01c1d518$e34b7870$6b01a8c0@ericlaptop>
Message-ID: 

Hi Eric and Travis.

On Tue, 26 Mar 2002, eric wrote:

> I've have thought about this a little lately also. There is a philosophical
> difference to packaging among the scientific developers. Some wish for small
> single purpose and stand alone packages that are installed one by one. Others
> wish for a single "standard library" of scientific tools that, once installed,
> is a one stop shop for a large number of scientific algorithms. There are
> benefits to both. However, I come squarely down in the second camp. A
> monolithic package is easier to install for end users, and it solves
> compatibility issues (such as SciPy changing the behavior of Numeric in some
> places). I believe the existence of such a package is required before there can
> be a mass conversion of engineers and scientist to Python as their tool of
> choice for daily tasks. This is the goal of SciPy.

I have been at peace with this goal of SciPy for a long time. In my
concerns I was not trying to propose to change this general goal in any
way. Instead, I was concerned about the internal structure of SciPy and
wanted to see if we could ease SciPy development and make it more robust
for the future.

One efficient way to achieve that would be to require that internal
modules in SciPy be as independent as possible. A good measure for this
independence is that a particular module can be installed as a standalone.
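The storage-order point Pearu makes in his reply to Travis above is easy to see concretely. The snippet below uses present-day numpy only as an illustration (the thread itself predates numpy, but Numeric stores arrays in the same C, row-major order); the mismatch with Fortran's column-major layout is what the new f2py now handles for the wrapper writer.

    import numpy as np   # modern numpy, used purely for illustration

    a = np.array([[1, 2, 3],
                  [4, 5, 6]])

    print(a.ravel(order='C'))        # [1 2 3 4 5 6] -- C/Numeric memory order
    print(a.ravel(order='F'))        # [1 4 2 5 3 6] -- order a Fortran routine expects

    f = np.asfortranarray(a)         # explicit column-major copy for Fortran code
    print(f.flags['F_CONTIGUOUS'])   # True

Handing the row-major buffer straight to a Fortran LAPACK routine effectively gives it the transpose, which is the confusion the interface layer has to hide -- either by copying, as above, or by reinterpreting the result.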
Note that I am not proposing this because I would like to use these modules as standalone modules myself (or any other party), but only to strengthen SciPy by making it more robust internally. By doing this, it does not mean that the main goal of SciPy is somehow threatened, it will be still a monolithic package for end-users. Just its internal structure will be modular and less sensitive to adding new modules or reviewing some if needed in future. Now about the question whether SciPy parts can be completely independent? I think this can be never achived in principle nor it is desired, but it is a good ideal to follow *whenever* it is possible (and not just a nice thing to do as you say) and, indeed, can be practical for other projects, and all that for the sake of SciPy own success. > So, I think this is a worthy goal for *some* of the modules (notably the ones > people are discussing such as integrate, linalg, etc), with one caveat. These > modules need access to some functions provided by scipy and will need to import > at least one extra module. Scanning linalg, the needed functions are amax, > amin, triu, etc. and a handful of functions subsumed from Numeric as well as > some constants from scipy.limits. I consider it a bad idea to replicate these > functions across multiple modules because of the maintenance issues associated > with duplicate code. I don't want to go down that path. Me neither. However your statement that these modules necessarily need access to scipy functions, is a bit exaggerated. In general, there are several ways how the same functionality can be implemented, and it is my experience that linalg2 can be implemented without the scipy dependence and that also without replicating any code. In fact, using high-level scipy convinience functions in linalg2 that is supposed to provide highly efficient and yet to be user-friendly (yes, both goals can achived at the same time!) algorithms, is not good because scipy functions just are inefficient due to their general purpose feature and the initial wins in performance are lost. Therefore low level modules like linalg, integrate, etc must be carefully implemented even if it takes more time and seemingly direct Python hooks could be applied. > So this is what the site-packages view of scipy would be: > > site-packages > scipy_distutils > scipy_test > scipy_level0 > subsumes and customizes Numeric > handy.py > misc.py > scimath.py > Matrix.py (?) > fastumath.so (pyd) > etc. > scipy > subsume scipy_base > everything else This looks like a positive plan to me. Any other candidates for naming scipy_level0? It reflects too much the internals of SciPy but will contain very useful general purpose functions, I assume, to be useful more widely. How about scipy_base? Another idea would be then to move scipy_test inside scipy_base (and dropping its scipy_ prefix). Since scipy_base would be mostly pure Python, it should be feasible. (Later, be not surprised if I will question the naming of handy.py and misc.py, but I am not ready for that yet ...;-) > In regards to higher level modules that use fft, svd, and other complex > algorithms, they are just gonna have to import scipy. +2 > This requires some discussion before we make the change. It's also gonna > require > someone to step up and implement the change -- though it probably isn't a major > effort. It may be a good idea to release 0.2 before such a change. If it works out nicely, then 0.3 could follow quickly. 
Regards, Pearu From magnus at thinkware.se Wed Mar 27 09:41:14 2002 From: magnus at thinkware.se (Magnus =?iso-8859-1?Q?Lyck=E5?=) Date: Wed, 27 Mar 2002 15:41:14 +0100 Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: References: <04bd01c1d518$e34b7870$6b01a8c0@ericlaptop> Message-ID: <5.1.0.14.0.20020327152517.02ab13f0@mail.irrblosset.se> At 07:59 2002-03-27 -0600, Pearu wrote: >One efficient way to achive that would be to require that internal modules >in SciPy would be as independent as possible. A good measure for this >independence is that a particular module can be installed as a standalone. I think this is very sound thinking. We always run into situations sooner or later when we need to chage some parts of a system. The more dependence between modules we have, the more pain from such a change. "O What a tangled web we weave..." >Note that I am not proposing this because I would like to use these >modules as standalone modules myself (or any other party), but only to >strengthen SciPy by making it more robust internally. Right. But if it's also possible to accomodate those who for some reason can't or won't use the entire thing, it's still an additional benefit. I've only used the plotting parts of SciPy so far, and I'm likely to make simple applications in the future, and distribute them using py2exe or the McMillan installer. I prefer such apps to be as small as possible, and I wouldn't see any benefit in bundling code my users don't (or at least shouldn't) need. -- Magnus Lycka, Thinkware AB Alvans vag 99, SE-907 50 UMEA, SWEDEN phone: int+46 70 582 80 65, fax: int+46 70 612 80 65 http://www.thinkware.se/ mailto:magnus at thinkware.se From rob at pythonemproject.com Wed Mar 27 10:12:23 2002 From: rob at pythonemproject.com (rob) Date: Wed, 27 Mar 2002 07:12:23 -0800 Subject: [SciPy-dev] More info on 100x100x100 FDTD's with Weave Message-ID: <3CA1E156.3AF35529@pythonemproject.com> The more info is that there is no more info that makes sense. Verbose=2 gave me nothing. When I switch from 80 cubed to 100 cubed region, as soon as the program hits the first Weave statement, my box starts paging with no CPU activity. It finally maxes out at 300 Meg swap and kills the app. No verbose debugging info to be found. Plain Numpy works fine with this sim at 300Meg+ memory. It does require a lot of swap on my HD, but there is plenty left to run Netscape, X11, etc. I have posted the program on my web site. Hit the "Whats New" button. Its called UFOblitz.py since I am doing a quasi-static analysis of those HV flying craft that were recently on Slashdot. If someone else can test this program and prove my results I will be happy. Otherwise this might be a FreeBSD problem. Just run the program normally, then run again after changing the "cells/meter" variable to 100. (Its in a 1M cubed sim space) Thanks, Rob. -- ----------------------------- The Numeric Python EM Project www.pythonemproject.com From eric at scipy.org Wed Mar 27 11:53:46 2002 From: eric at scipy.org (eric) Date: Wed, 27 Mar 2002 11:53:46 -0500 Subject: [SciPy-dev] Some concerns on Scipy development References: Message-ID: <05ee01c1d5af$f666a640$6b01a8c0@ericlaptop> Hey Pearu, > > > I've have thought about this a little lately also. There is a philosophical > > difference to packaging among the scientific developers. Some wish for small > > single purpose and stand alone packages that are installed one by one. 
Others > > wish for a single "standard library" of scientific tools that, once installed, > > is a one stop shop for a large number of scientific algorithms. There are > > benefits to both. However, I come squarely down in the second camp. A > > monolithic package is easier to install for end users, and it solves > > compatibility issues (such as SciPy changing the behavior of Numeric in some > > places). I believe the existence of such a package is required before there can > > be a mass conversion of engineers and scientist to Python as their tool of > > choice for daily tasks. This is the goal of SciPy. > > I have been in peace with this goal of SciPy for a long time. In my > concers I was not trying to propose to change this general goal in any > way. Instead, I was concerned on the internal structure of SciPy and to > see if we could ease the SciPy development and make it more robust for the > future. Right. I didn't think you were -- I just wanted to note the differnce of opinions on this and explain where SciPy fit in the picture. > > One efficient way to achive that would be to require that internal modules > in SciPy would be as independent as possible. A good measure for this > independence is that a particular module can be installed as a standalone. I agree and think your suggestion to move as far as possible this direction is good. But, I also don't think dependence on a single package is to much of a price to pay. There is already some difference in scipy_base/scipy_lite (whatever it is called) and Numeric's behavior. We need to import this instead of Numeric directly to insure current and future linalg, etc. modules comply with the expected behavior in SciPy. Also, scipy_base has many convenience functions that will be helpful in other places. > Note that I am not proposing this because I would like to use these > modules as standalone modules myself (or any other party), but only to > strengthen SciPy by making it more robust internally. I'm actually am a consumer in this case. I'd would like to use modules outside of SciPy on occasion, and want to make it as easy as possible within the SciPy framework. Witness weave. It seems like the scipy_base concept accomplishes this. If your willing to inlcude Numeric as a requirement, adding scipy_base shouldn't be an issue. > > By doing this, it does not mean that the main goal of SciPy is > somehow threatened, it will be still a monolithic package for end-users. > Just its internal structure will be modular and less sensitive to adding > new modules or reviewing some if needed in future. Again, I agree -- I think we are on the same page. > > Now about the question whether SciPy parts can be completely independent? > I think this can be never achived in principle nor it is desired, > but it is a good ideal to follow *whenever* it is possible (and not > just a nice thing to do as you say) and, indeed, can be practical for > other projects, and all that for the sake of SciPy own success. > > > > > So, I think this is a worthy goal for *some* of the modules (notably the ones > > people are discussing such as integrate, linalg, etc), with one caveat. These > > modules need access to some functions provided by scipy and will need to import > > at least one extra module. Scanning linalg, the needed functions are amax, > > amin, triu, etc. and a handful of functions subsumed from Numeric as well as > > some constants from scipy.limits. 
I consider it a bad idea to replicate these > > functions across multiple modules because of the maintenance issues associated > > with duplicate code. I don't want to go down that path. > > Me neither. However your statement that these modules necessarily need > access to scipy functions, is a bit exaggerated. > In general, there are several ways how the same functionality can be > implemented, and it is my experience that linalg2 can be implemented > without the scipy dependence and that also without replicating any > code. This may be the case. Please let us know what you have in mind. Travis has implemented a lot of stuff that uses functions that are currently in scipy and will be in scipy_lite. The linalg interfaces to solve, expm, etc. may not currently be the most efficient, but, by all reports, they are working pretty well and address many problems. I'm sure we will need to rework the interface some -- I personally see the need for an lu_factor and lu_solve method that are thinly layered over getrf and getrs for efficiency. I'm sure there are other places that linear algebra gurus could point out. Waiting for the perfect interface though, makes people like Jochen who is waiting on a (somewhat) stable release continue to wait. If the only problem is efficiency, I say we get a release based on the current interface out there, and solve the efficiency issues in the next release. One other note. I do not see the interface of a 0.2 package set in stone. Users are considered "early adopters." If there is good reason to change the interface between 0.2 and 0.3 then we should do it. When we get up in the .6 or .7 range, then we should be more careful about changes. But for now, like f2py, the changes are OK. Perhaps we should start a thread discussing the SciPy linear algebra interface. Would this be helpful? > In fact, using high-level scipy convinience functions in linalg2 > that is supposed to provide highly efficient and yet to be user-friendly > (yes, both goals can achived at the same time!) algorithms, is not good > because scipy functions just are inefficient due to their general > purpose feature and the initial wins in performance are lost. Some can be made efficient. Some will be less so. I'm more worried about getting a working version out that (hopefully) can be made efficient in the future than I am in optimizing it right now. If we want to make changes to linalg, lets discuss specifics. > > Therefore low level modules like linalg, integrate, etc must be carefully > implemented even if it takes more time and seemingly direct Python hooks > could be applied. > > > So this is what the site-packages view of scipy would be: > > > > site-packages > > scipy_distutils > > scipy_test > > scipy_level0 > > subsumes and customizes Numeric > > handy.py > > misc.py > > scimath.py > > Matrix.py (?) > > fastumath.so (pyd) > > etc. > > scipy > > subsume scipy_base > > everything else > > This looks like a positive plan to me. > > Any other candidates for naming scipy_level0? It reflects too much > the internals of SciPy but will contain very useful general purpose > functions, I assume, to be useful more widely. > How about scipy_base? scipy_base is fine with me. > Another idea would be then to move scipy_test inside scipy_base (and > dropping its scipy_ prefix). Since scipy_base would be mostly pure Python, > it should be feasible. Good idea. The current "packagization" of scipy_test was a complete hack to get around limitations in distutils. scipy_base is a much better home for it. 
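The factor/solve pairing Eric asks for above is thin by design: one call to LAPACK *getrf to factor, then a cheap *getrs solve per right-hand side. For reference, this is roughly the shape the interface later took in scipy.linalg (shown here with modern scipy/numpy; the exact names and signatures in 2002 were still being decided):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve   # later scipy.linalg API

    a = np.array([[3., 1.],
                  [1., 2.]])
    lu, piv = lu_factor(a)                 # single *getrf factorization

    for rhs in (np.array([9., 8.]), np.array([1., 0.])):
        x = lu_solve((lu, piv), rhs)       # *getrs solve, reusing the factors
        print(x, np.allclose(a.dot(x), rhs))

Amortizing one factorization over many right-hand sides is the efficiency argument made in the message above.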
> (Later, be not surprised if I will question the naming of handy.py and > misc.py, but I am not ready for that yet ...;-) Funny you should mention that. misc.py was my utility module. handy.py was Travis O.'s. We both thought they should be merged into an appropriately named module in the move to scipy_base. Pick a name. > > In regards to higher level modules that use fft, svd, and other complex > > algorithms, they are just gonna have to import scipy. > > +2 > > > This requires some discussion before we make the change. It's also gonna > > require > > someone to step up and implement the change -- though it probably isn't a major > > effort. > > It may be a good idea to release 0.2 before such a change. If it works out > nicely, then 0.3 could follow quickly. We could do that. I think the change isn't that difficult. Travis O. has already structured the code in a way that is pretty much equivalent to the scipy_base idea. His level0 functions/modules can be moved over into to scipy_base plus fastumath, limits, scipy_test (others?). Creating scipy_base now solves the problem of where to put fastumath which doesn't have a good home. The issue that needs more thought is the NaN functions. They should also go over there, but they are part of cephes, and the entire "special" package should not be moved (I don't think...). Needs the most thought. After making the scipy_base package, the find/replaces need to be done in appropriate modules. I'd lean toward trying to get the scipy_base idea in this release. If it looks like to much disruption though, we'll push it to 0.3. Perhaps April 5th is to ambitious to fit all this in. I'd like to try though. eric From prabhu at aero.iitm.ernet.in Wed Mar 27 12:59:06 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Wed, 27 Mar 2002 23:29:06 +0530 Subject: [SciPy-dev] Some concerns on Scipy development In-Reply-To: References: <04bd01c1d518$e34b7870$6b01a8c0@ericlaptop> Message-ID: <15522.2154.437552.798933@monster.linux.in> >>>>> "PP" == pearu writes: >> So this is what the site-packages view of scipy would be: >> >> site-packages scipy_distutils scipy_test scipy_level0 subsumes >> and customizes Numeric handy.py misc.py scimath.py Matrix.py >> (?) fastumath.so (pyd) etc. scipy subsume scipy_base >> everything else PP> This looks like a positive plan to me. PP> Any other candidates for naming scipy_level0? It reflects too PP> much the internals of SciPy but will contain very useful PP> general purpose functions, I assume, to be useful more widely. PP> How about scipy_base? FWIW, scipy_base sounds much better. prabhu From pearu at scipy.org Wed Mar 27 15:22:28 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 27 Mar 2002 14:22:28 -0600 (CST) Subject: [SciPy-dev] misc, hand, and friends In-Reply-To: <05ee01c1d5af$f666a640$6b01a8c0@ericlaptop> Message-ID: Hi, On Wed, 27 Mar 2002, eric wrote: > > (Later, be not surprised if I will question the naming of handy.py and > > misc.py, but I am not ready for that yet ...;-) > > Funny you should mention that. misc.py was my utility module. handy.py was > Travis O.'s. We both thought they should be merged into an appropriately named > module in the move to scipy_base. Pick a name. I think we should pick many names here... Some months back I made a quick reference card on scipy functions and their dependencies for my self. It follows below. Note that it aims not to be complete or updated but to give some prespective. 
A quick look on this map shows to me a rather high scattering of relative functions in different modules. In the following messages I shall be more specific to give some starting ideas on re-factoring this stuff. Please, feel free to draw your own conclusions so that overlapping ideas can be collected and applied. Pearu ----------------------------------- scipy: __init__: import Numeric,os,sys,fastumath,string from helpmod import help, source from Matrix import Matrix as Mat defines: Inf,inf,NaN,nan somenames2all,names2all,modules2all,objects2all helpmod: import inspect,types,sys,os defines: split_line,makenamedict,help,source handy: import Numeric,types,cPickle,sys,scipy,re from Numeric import * from fastumath import * defines: ScalarType nd_grid,grid concatenator,r_,c_ index_exp disp,logspace,linspace fix,mod,fftshift,ifftshift,fftfreq,cont_ft,r1array r2array,who,objsave,objload,isscalar,toeplitz, hankel,real_if_close,sort_complex,poly,polyint, polyder,polyval,polyadd,polysub,polymul,polydiv, deconvolve,poly1d,select misc: import scipy.special from types import IntType,ComplexType defines: real,imag,iscomplex,isreal,array_iscomplex,array_isreal isposinf,isneginf,nan_to_num,logn,log2,lena, histogram,trim_zeros,atleast_1d,atleast_2d,atleast_3d, vstack,hstack,column_stack,dstack,replace_zero_by_x_arrays, array_split,split,hsplit,vsplit,dsplit x_array_kind,x_array_precision,x_array_type x_common_type limits: import Numeric defines: toChar,toInt8,toInt16,toInt32,toFloat32,toFloat64 epsilon,tiny float_epsilon,float_tiny,float_min,float_max, float_precision,float_resolution double_epsilon,double_tiny double_min,double_max,double_precision,double_resolution data_store: import dumb_shelve import string,os defines: load,save,create_module,create_shelf dumb_shelve: from shelve import Shelf import zlib from cStringIO import StringIO import cPickle defines: DbfilenameShelf,open dumbdbm_patched: defines: open basic: from Numeric import * import Matrix,copy from handy import isscalar, mod from fastumath import * defines: eye,tri,diag,fliplr,flipud,rot90,tril,triu,amax,amin ptp,mean,median,std,cumsum,prod,cumprod,diff cov,corrcoef,squeeze,sinc,angle,unwrap,allMat basic1a: import Numeric,fastumath,types,handy from scipy import diag,special,r1array,hstack from scipy.linalg import eig import scipy.stats as stats from Numeric import * from fastumath import * defines: find_non_zero,roots,factorial,comb,rand,randn ... ------------------------------------------- From pearu at scipy.org Wed Mar 27 16:21:23 2002 From: pearu at scipy.org (pearu at scipy.org) Date: Wed, 27 Mar 2002 15:21:23 -0600 (CST) Subject: [SciPy-dev] misc, hand, and friends In-Reply-To: Message-ID: Hi again, Below I just outline some ideas on cleaning scipy up and without taking into account actual implementation details of these functions. The analysis is very raw and the suggestions may only show possible directions. Currently I find quite difficult to decide where a particular function should go because the overall structure really needs refactored and an overall purpose of each part should be summarized. Can someone layout a more detailed vision of scipy parts and structure? I am a bit lost right now, may be due to the late hour here ... Pearu On Wed, 27 Mar 2002 pearu at scipy.org wrote: > ----------------------------------- > scipy: > __init__: > defines: > Inf,inf,NaN,nan > somenames2all,names2all,modules2all,objects2all Eric mentioned the problem with NaN stuff. I have not looked into it yet.. 
> helpmod: > defines: > split_line,makenamedict,help,source There is also helper.py that defines help2,help3,etc. Merge helper.py and helpmod.py. > handy: > defines: > ScalarType > nd_grid,grid > concatenator,r_,c_ > index_exp these functions could be factored somewhere. > disp find a better place.. > logspace,linspace dito > fix,mod,fftshift,ifftshift,fftfreq,cont_ft, fft stuff could go into a yet not existing transform module. I have in mind implementing other transforms as well like hilbert, etc. that is based on fft. > r1array > r2array find or make a place .. > who find or make a place .. > objsave,objload related to data_store?? > isscalar, > toeplitz, > hankel should go into Matrix > real_if_close,sort_complex, find or make a place > poly,polyint, > polyder,polyval,polyadd,polysub,polymul,polydiv, > deconvolve,poly1d,select collect poly,polyint,... into a separate polynomial module? > misc: > defines: > real,imag Put into scimath.py? > iscomplex,isreal,array_iscomplex,array_isreal > isposinf,isneginf find or make a place > nan_to_num,logn,log2 put into scimath.py? > lena ?? to something with it > histogram,trim_zeros,atleast_1d,atleast_2d,atleast_3d, > vstack,hstack,column_stack,dstack,replace_zero_by_x_arrays, Looks like a stuff for a separate module. Any relation to basic.py? > array_split,split,hsplit,vsplit,dsplit find or make a place > x_array_kind,x_array_precision,x_array_type > x_common_type find a place > limits: > defines: > toChar,toInt8,toInt16,toInt32,toFloat32,toFloat64 > epsilon,tiny > float_epsilon,float_tiny,float_min,float_max, > float_precision,float_resolution > double_epsilon,double_tiny > double_min,double_max,double_precision,double_resolution That looks ok to me. > data_store: > import dumb_shelve > import string,os > defines: > load,save,create_module,create_shelf > dumb_shelve: > from shelve import Shelf > import zlib > from cStringIO import StringIO > import cPickle > defines: > DbfilenameShelf,open > dumbdbm_patched: > defines: > open Merge data_store, dumb_shelve, dumbdbm_patched to reduce number of files. Or make a separete package if this stuff will be extended. > basic: > defines: > eye,tri,diag,fliplr,flipud,rot90,tril,triu,amax,amin > ptp,mean,median,std,cumsum,prod,cumprod,diff > cov,corrcoef,squeeze,sinc,angle,unwrap,allMat looks like MLab > basic1a: > defines: > find_non_zero,roots,factorial,comb,rand,randn From rob at pythonemproject.com Thu Mar 28 10:38:34 2002 From: rob at pythonemproject.com (rob) Date: Thu, 28 Mar 2002 07:38:34 -0800 Subject: [SciPy-dev] Is the SciPy website down? Message-ID: <3CA338F9.66AB65ED@pythonemproject.com> I was going to reinstall Weave and send you the verbose output, in the case where the compiler can't find stdc++. But your site is dead here. Rob. -- ----------------------------- The Numeric Python EM Project www.pythonemproject.com From prabhu at aero.iitm.ernet.in Thu Mar 28 03:08:15 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 28 Mar 2002 13:38:15 +0530 Subject: [SciPy-dev] Some thoughts on packaging SciPy Message-ID: <15522.53103.341877.908017@monster.linux.in> hi, While I believe that its nice to have everything in one big package, I dont think that it fits the bill all the time. For instance I think weave, all the plot utilities, gui_thread and scipy_distutils and others do not belong in the scipy namespace. They are useful in scopes far beyond scipy. This is surely obvious to you all but I'd like to belabour it. 
scipy is of use to scientists and engineers, but even we do not do
scientific programming all the time. Many modules have use outside the
scientific world. These should not be locked into scipy. OTOH, anything
that is related to science/numerics should be part of scipy. It only makes
sense.

Consider plt. I'm sure it's possible that a sys admin would like to use plt
to plot stuff (maybe log files?). There is nothing scipy-specific about
wanting to plot data. Similarly with weave and maybe others. These are
modules that should not be part of the scipy package. However, they should
definitely be bundled as part of the scipy distribution. Hence the
distinction between the scipy python package/namespace and the scipy
distribution. I think they are different things and the distinction needs
to be noted.

However, as I write this I realize that one viewpoint would be, "Why don't
you just install the whole of scipy and just use what you want?" The
problem with this approach is that it's not possible to install scipy
without other requirements. If someone wants just plt, they'd have to
figure out the ATLAS-related issues or any others that will arise in the
future (and these will not be easy issues).

I think there is only one way of dealing with this. We need a tool like
CPAN or ciphon. Or maybe not even that. Something like this might suffice:

(1) Each module has a set of dependencies which it advertises in some file.
(2) setup.py has a config file (or command-line switches) that lets you
    select what packages you want installed.
(3) setup.py also does not install anything that fails dependencies.

For instance, if someone wants plt alone, they select plt. plt ropes in
gui_thread and requires wxPython. If wxPython is available, plt and
gui_thread are installed; if not, they are not installed. So the users can
pick and choose what they want and install things with the least hassle.
Once this is done it really does not matter if the modules are installed
inside the scipy python package/namespace or outside it.

The idea is somewhat like rpms and debs. There is one big source package
from which smaller components are installable separately. I think this is
definitely implementable but don't know how hard it would be to do.

My apologies for not contributing any code that addresses these things. I
thought that the least I could do is contribute ideas.

prabhu

From travis at scipy.org  Thu Mar 28 11:38:15 2002
From: travis at scipy.org (Travis N. Vaught)
Date: Thu, 28 Mar 2002 10:38:15 -0600
Subject: [SciPy-dev] Is the SciPy website down?
In-Reply-To: <3CA338F9.66AB65ED@pythonemproject.com>
Message-ID: 

Yes, Zope is down -- working on it as we speak.

TV

> -----Original Message-----
> From: scipy-dev-admin at scipy.org [mailto:scipy-dev-admin at scipy.org]On
> Behalf Of rob
> Sent: Thursday, March 28, 2002 9:39 AM
> To: scipy-dev at scipy.net
> Subject: [SciPy-dev] Is the SciPy website down?
>
>
> I was going to reinstall Weave and send you the verbose output, in the
> case where the compiler can't find stdc++. But your site is dead here.
> Rob.
> --
> -----------------------------
> The Numeric Python EM Project
>
> www.pythonemproject.com
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
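Prabhu's dependency-gated installation idea above (items 1-3 in his message) could be prototyped with very little machinery. Everything in the sketch below is hypothetical: the dependency table, the package names, and the selection logic are illustrative only and correspond to nothing that shipped in scipy's setup.py at the time.

    import importlib

    # (1) each optional sub-package advertises its Python-level dependencies
    DEPS = {
        'plt':        ['wxPython'],      # plotting needs wxPython
        'gui_thread': ['wxPython'],
        'linalg':     [],                # needs ATLAS/LAPACK at build time instead
    }

    def deps_ok(name):
        """(3) skip a sub-package whose dependencies cannot be imported."""
        for mod in DEPS[name]:
            try:
                importlib.import_module(mod)
            except ImportError:
                print("skipping %s: %s not found" % (name, mod))
                return False
        return True

    # (2) the user picks sub-packages via a config file or command-line switches
    selected = ['plt', 'gui_thread', 'linalg']
    to_build = [pkg for pkg in selected if deps_ok(pkg)]
    print("building:", to_build)

A real implementation would hang this off scipy_distutils rather than plain prints, but the pick-and-choose behaviour Prabhu describes needs no more than a table like this.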