From nwagner at mecha.uni-stuttgart.de Fri Mar 4 03:38:36 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 04 Mar 2005 09:38:36 +0100 Subject: [SciPy-dev] PyTrilinos - Python interface to Trilinos libraries Message-ID: <42281E8C.60800@mecha.uni-stuttgart.de>

Hi all, A new release of Trilinos is available. It includes a Python interface to the Trilinos libraries. http://software.sandia.gov/trilinos/release_5.0_notes.html Regards, Nils

From nikolai.hlubek at mailbox.tu-dresden.de Fri Mar 4 05:23:51 2005 From: nikolai.hlubek at mailbox.tu-dresden.de (Nikolai Hlubek) Date: Fri, 04 Mar 2005 10:23:51 +0000 Subject: [SciPy-dev] LiveDocs for offline viewing Message-ID: <1109931831l.17623l.0l@ptpcp11>

Hi everyone, I recently discovered the livedocs for scipy and quite like them, since they allow better browsing than the ones created by pydoc, mostly because of the better arranged and linked index pages. I've tried to start a local version of twistd to serve the documentation from my install of scipy, but most of the links are missing. Only scipy_base matches the version available online. So for scipy I only get the following, which is much less than what is available at http://oliphant.ee.byu.edu:81/scipy/

SciPy --- A scientific computing package for Python
===================================================

Available subpackages
---------------------

scipy.cluster --- Vector Quantization / Kmeans
scipy.cow --- Cluster Of Workstations
scipy.fftpack --- Discrete Fourier Transform algorithms
scipy.ga --- Genetic Algorithms
scipy.gplt --- Plotting with gnuplot
scipy.integrate --- Integration routines
scipy.interpolate --- Interpolation Tools
scipy.io --- Data input and output
scipy.linalg --- Linear algebra routines
scipy.optimize --- Optimization Tools
scipy.plt --- Plotting with wxPython
scipy.signal --- Signal Processing Tools
scipy.sparse --- Sparse matrix
scipy.special --- Special Functions
scipy.stats --- Statistical Functions
scipy.xplt --- Plotting routines based on Gist
scipy.xxx --- Scipy example module for developers
weave --- C/C++ integration

Of these, only weave is linked. Does anyone have an idea what this problem is related to? Thanks for your help, Nikolai

From pearu at scipy.org Fri Mar 4 05:29:47 2005 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 4 Mar 2005 04:29:47 -0600 (CST) Subject: [SciPy-dev] LiveDocs for offline viewing In-Reply-To: <1109931831l.17623l.0l@ptpcp11> References: <1109931831l.17623l.0l@ptpcp11> Message-ID:

On Fri, 4 Mar 2005, Nikolai Hlubek wrote: > Hi everyone, > > I recently discovered the livedocs for scipy > and quite like them, since they allow better > browsing than the ones created by pydoc, > mostly because of the better arranged and linked > index pages. > > I've tried to start a local version of twistd to > serve the documentation from my install of scipy, > but most of the links are missing. > Only scipy_base matches the version available online.
> 
> So for scipy I only get the following, which is much less
> than what is available at http://oliphant.ee.byu.edu:81/scipy/
> 
> SciPy --- A scientific computing package for Python
> ===================================================
> 
> Available subpackages
> ---------------------
> 
> scipy.cluster --- Vector Quantization / Kmeans
> scipy.cow --- Cluster Of Workstations
> scipy.fftpack --- Discrete Fourier Transform algorithms
> scipy.ga --- Genetic Algorithms
> scipy.gplt --- Plotting with gnuplot
> scipy.integrate --- Integration routines
> scipy.interpolate --- Interpolation Tools
> scipy.io --- Data input and output
> scipy.linalg --- Linear algebra routines
> scipy.optimize --- Optimization Tools
> scipy.plt --- Plotting with wxPython
> scipy.signal --- Signal Processing Tools
> scipy.sparse --- Sparse matrix
> scipy.special --- Special Functions
> scipy.stats --- Statistical Functions
> scipy.xplt --- Plotting routines based on Gist
> scipy.xxx --- Scipy example module for developers
> weave --- C/C++ integration
> 
> Of these, only weave is linked. Does anyone have an idea what > this problem is related to?

It looks like you have some older scipy version installed. I believe that the livedocs represent the CVS version of scipy. Pearu

From nikolai.hlubek at mailbox.tu-dresden.de Fri Mar 4 11:15:24 2005 From: nikolai.hlubek at mailbox.tu-dresden.de (Nikolai Hlubek) Date: Fri, 04 Mar 2005 16:15:24 +0000 Subject: [SciPy-dev] LiveDocs for offline viewing In-Reply-To: (from pearu@scipy.org on Fri Mar 4 11:29:47 2005) References: <1109931831l.17623l.0l@ptpcp11> Message-ID: <1109952924l.17623l.6l@ptpcp11>

On 03/04/05 11:29:47, Pearu Peterson wrote: > It looks like you have some older scipy version installed. > I believe that the livedocs represent the CVS version of scipy. > > Pearu

Hi, OK, that got me partly started (I've got version '0.3.3_303.4579' now). I've got the documentation server running now and changed the title page a bit, to fit better with the rest of the layout. (You can have a look at it if you like at http://141.30.17.31:81/) But I don't get all of the help files, e.g. http://141.30.17.31:81/scipy/xplt/histogram is empty. It should have some data, as http://oliphant.ee.byu.edu:81/scipy/xplt/histogram is not empty. Also, some of the python source files are included: http://localhost:81/scipy_test/testing/ScipyTestCase/__source__ while others aren't: http://localhost:81/scipy/fftpack/fft/__source__ Any idea what might still be wrong? Best regards, Nikolai

From pearu at cens.ioc.ee Tue Mar 8 17:56:24 2005 From: pearu at cens.ioc.ee (pearu at cens.ioc.ee) Date: Wed, 9 Mar 2005 00:56:24 +0200 (EET) Subject: [SciPy-dev] Re: scipy.special for numarray advice In-Reply-To: <1109945927.19608.172.camel@halloween.stsci.edu> Message-ID:

Hi Todd, PS: I am sending this thread also to the scipy-dev list, I hope you don't mind. I just committed `build_ext --backends` support to scipy_distutils CVS. At the moment I have tested it only on the scipy_distutils/tests/f2py_ext example. For example, try

    cd scipy_distutils/tests/f2py_ext
    python setup.py build_src build_ext --backends=numarray,numeric build
    python tests/test_fib2.py   # default array backend is numarray
                                # as it appears first in --backends list.
    python tests/test_fib2.py --numarray
    python tests/test_fib2.py --numeric

after you have installed the updated scipy_distutils. Note that scipy_distutils/command/build_src.py does not use the scipy_base/numerix.py array backend detection mechanism, as it does not work quite right.
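To make the generated-proxy idea concrete, here is a minimal sketch of what a foo.py produced by --backends could look like. This is an illustration only, not the actual scipy_distutils output: the _na/_nc subpackage names and the foo module come from the messages quoted below, and the selection logic (backend order from --backends, the --numarray/--numeric flags, and the NUMARRAY/NUMERIC env. variables) simply mirrors the behaviour described in this thread.

    # foo.py -- sketch of an auto-generated proxy module (hypothetical).
    # Picks an array backend at import time and re-exports the matching
    # extension module.
    import os, sys

    _backends = ['numarray', 'numeric']   # order taken from --backends
    _chosen = _backends[0]                # default: first backend listed
    for _b in _backends:
        # honor a --numarray/--numeric flag or NUMARRAY=1/NUMERIC=1 env. variable
        if ('--' + _b) in sys.argv or os.environ.get(_b.upper()):
            _chosen = _b
            break

    if _chosen == 'numarray':
        from _na.foo import *
    else:
        from _nc.foo import *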
For example, the numerix.py mechanism should remove --numarray or --numeric from sys.argv once the array backend is established, but does not. Also, to keep the --backends feature general, instead of detecting the NUMERIX env. variable, the uppercased backend name env. variable is checked. For example, the last two commands above would be equivalent to

    NUMARRAY=1 python tests/test_fib2.py
    NUMERIC=1 python tests/test_fib2.py

I am still not sure whether this would be the way to choose the array backend package. Other suggestions are welcome. Let me know if there is any trouble with using the --backends feature while building scipy modules for different array backends. Regards, Pearu

On Fri, 4 Mar 2005, Todd Miller wrote: > Fantastic! This sounds just right. I think your idea about the > longhand version of --backends=numeric,numarray and the resulting > non-intrusive extensibility is a good one, both from the point of view > of clarity and future growth. > > Regards, > Todd > > On Thu, 2005-03-03 at 20:21, pearu at cens.ioc.ee wrote: > > Hi Todd, > > > > Just a quick note on what I started to work on. I have introduced a > > --backends option to the build_ext command that takes a comma-separated list > > of the following words: nc, na (and in the future, n3 for Numeric3). > > > > For example, if --backends=nc,na is used and some setup.py file defines > > Extension('foo',..) then scipy_distutils replaces this with two Extension > > instances, Extension('_nc.foo',..) and Extension('_na.foo',..), and > > creates a foo.py file that will import either _nc.foo or _na.foo. > > > > This approach has the advantage that there is no need to modify existing > > setup.py files or f2py2e. In addition, the order of words in > > --backends may define the default backend, and one can build extension > > modules for an arbitrary number of array backends. Of course, the sources of > > extension modules must be aware of the -DNUMERIC, -DNUMARRAY (and in the future, > > -DNUMERIC3) cpp macros. > > > > I hope that I can finish --backends support by the end of this week. It is > > almost working on my local tree; I only have to figure out the proper > > content for the foo.py file and make sure that all extension sources are > > compiled to different locations for different backends (this should avoid > > the need to introduce all these nc_* and na_* files). > > > > [Words nc,na,n3 could be replaced with numeric,numarray,numeric3, > > respectively, if that would be clearer. Actually, the corresponding uppercased > > words could then define cpp macros, and in principle one could > > introduce new array backends without the need to modify > > scipy_distutils.] > > > > The above approach assumes that only one array backend is used in one > > python session. It would be possible to choose the proper backend also at > > runtime, based on the types of arguments, but I'd leave this feature for the > > future because it's a non-trivial task. However, note that implementing > > such an extension would require only figuring out the proper contents for > > the foo.py file. > > > > Regards, > > Pearu > > > > On Thu, 3 Mar 2005, Todd Miller wrote: > > > > > We're still working at STScI on porting SciPy to numarray. Obviously > > > if Numeric3 succeeds, all this is moot, but in the interim we're > > > proceeding. > > > > > > We're starting by porting scipy.special to numarray, so I've been > > > thinking about how to build scipy extensions for numarray.
> > > My initial thoughts were to duplicate what we did for matplotlib and > > > scipy_base so that scipy could be built to simultaneously support either > > > Numeric or numarray; this might potentially double the size of the > > > binary but makes distribution and switching back and forth easier. The > > > other option is to use the current scheme used with f2py: build for > > > numarray or Numeric but not both. > > > > > > My current hand-implemented porting scheme is roughly like this: > > > > > > 1. Re-do the modules for extensions using the Numeric ufunc API and use > > > the numarray N-ary ufunc API. This is fairly simple table-driven > > > Python code now that numarray has N-ary ufunc support. This always has > > > to be done by hand because the Ufunc APIs are different. > > > > > > 2. Re-compile f2py .pyf files for both numarray and Numeric by renaming > > > and copying to na_*.pyf and nc_*.pyf. Internally, the .pyf module > > > names get prepended with na_ or nc_ but that's it. This could be > > > completely automated in f2py or the distutils. I've been thinking about > > > a "numarray certified" switch which tells f2py to build for both. > > > > > > 3. For both ufunc API and f2py modules, create Python proxy modules > > > named after the current Numeric module. In the proxy, do something > > > like:
> > >
> > > if scipy_base.numerix.which[0] == "numeric":
> > >     from nc_specfun import *
> > > else:
> > >     from na_specfun import *
> > >
> > > where nc_specfun is a Numeric version of the extension, and na_specfun > > > is the numarray version. The Python proxies have a minor disadvantage > > > that they do not effectively override the existing .so modules; those > > > have to be removed because they have "priority" over the same-named .py > > > at import time. > > > > > > 4. Modify setup_special.py to build both versions of the extensions, one > > > with -DNUMARRAY=1, generate numarray wrapper code, etc. > > > > > > One obstacle is that fortranobject.o gets built and cached for either > > > Numeric or numarray so that it's difficult to build both kinds of > > > extensions with a single setup. Nevertheless, with some shenanigans > > > it's already easily possible to build scipy.special for both. I think > > > one hack to make this work easier (rather than figuring out where > > > fortranobject.o is) is to have f2py temporarily copy fortranobject.c to > > > na_fortranobject.c and nc_fortranobject.c. > > > > > > So, I'm writing to see if you have any top-of-the-head comments or > > > suggestions about this. I understand if you have limited time or > > > interest at this point but thought I should still ask. > > > > > > Do you have an opinion on the "single binary for both array packages vs. > > > build for either/or" choice? > > > > > > Do you have any suggestions on how to get the distutils or f2py to build > > > extensions for both Numeric and numarray? > > > > > > Regards, > > > Todd > > > > > > > -- > >

From oliphant at ee.byu.edu Wed Mar 9 02:32:33 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Mar 2005 00:32:33 -0700 Subject: [SciPy-dev] Future directions for SciPy in light of meeting at Berkeley Message-ID: <422EA691.9080404@ee.byu.edu>

I wanted to send an update to this list regarding the meeting at Berkeley that I attended. A lot of good discussions took place at the meeting that should stimulate larger feedback. Personally, I had far more to discuss before I had to leave, and so I hope that the discussions can continue.
I was looking to try and understand why, with an increasing number of scientific users of Python, relatively few people actually seem to want to contribute to scipy regularly, or even become active developers. There are lots of people who seem to identify problems (though very often vague ones), but not many who seem able (either through time or interest constraints) to actually contribute to code, documentation, or infrastructure. Scipy is an open source project and relies on the self-selection process of open source contributors. It would seem that while the scipy conference demonstrates a continuing and even increasing use of Python for scientific computing, not as many of these users are scipy devotees. Why? I think the answers come down to a few issues which I will attempt to answer with proposals.

1) Plotting -- scipy's plotting wasn't good enough (we knew that) and the promised solution (chaco) took too long to emerge as a simple replacement. While the elements were all there for chaco to work, very few people knew that, and nobody stepped up to take chaco to the level that matplotlib, for example, has reached in terms of cross-gui applicability and user-interface usability.

Proposal: Incorporate matplotlib as part of the scipy framework (replacing plt). Chaco is not there anymore, and the other two plotting solutions could stay as backward-compatible but non-progressing solutions. I have not talked to John about this, though I would like to. I think if some other packaging issues are addressed we might be able to get John to agree.

2) Installation problems -- I'm not completely clear on what the "installation problems" really are. I hear people talk about them, but Pearu has made significant strides to improve installation, so I'm not sure what precise issues remain. Yes, installing ATLAS can be a pain, but scipy doesn't require it. Yes, fortran support can be a pain, but if you use g77 then it isn't a big deal. The reality, though, is that there is this perception of installation trouble, and it must be based on something. Let's find out what it is. Please speak up, users of the world!!!!

Proposal (just an idea to start discussion): Subdivide scipy into several super packages that install cleanly but can also be installed separately. Implement a CPAN-or-yum-like repository and query system for installing scientific packages.

Base package: scipy_core -- this super package should be easy to install (no Fortran) and should essentially be old Numeric. It was discussed at Berkeley that very likely Numeric3 should just be included here. I think this package should also include plotting, weave, scipy_distutils, and even f2py. Some of these could live in dual namespaces (i.e. both weave and scipy.weave are available on install).

scipy.traits
scipy.weave (weave)
scipy.plt (matplotlib)
scipy.numeric (Numeric3 -- uses atlas when installed later)
scipy.f2py
scipy.distutils
scipy.fft
scipy.linalg? (include something like lapack-lite for basic but slow functionality; installation of an improved package replaces this with atlas usage)
scipy.stats
scipy.util (everything else currently in scipy_core)
scipy.testing (testing facilities)

Each of these should be a separate package installable and distributable separately (though there may be co-dependencies, so that scipy.plt would have to be distributed with scipy).

Libraries (each separately installable):

scipy.lib -- there should be several sub-packages that could live under here.
This is simply raw code with basic wrappers (kind of like a /usr/lib):

scipy.lib.lapack -- installation also updates narray and linalg (hooks to do that)
scipy.lib.blas -- installation updates narray and linalg
scipy.lib.quadpack
etc...

Extra sub-packages: named in a hierarchy to be determined and probably each dependent on a variety of scipy-sub-packages.

I haven't fleshed this thing out yet, as you can tell. I'm mainly talking publicly to spur discussion. The basic idea is that we should force ourselves to distribute scipy in separate packages. This would force us to implement a yum-or-CPAN-like package repository, so that we define the interface as to how an additional module could be developed by someone, even maintained separately (with a different license), and simply inserted into an intelligent point under the scipy infrastructure. It would also allow installation/compilation issues to be handled on a more per-module basis, so that difficult ones could be noted. I think this would also help interested people get some of the enthought stuff put into the scipy hierarchy as well.

Thoughts and comments (and even half-working code) welcomed and encouraged...

-Travis O.

From pearu at cens.ioc.ee Wed Mar 9 04:50:09 2005 From: pearu at cens.ioc.ee (pearu at cens.ioc.ee) Date: Wed, 9 Mar 2005 11:50:09 +0200 (EET) Subject: [SciPy-dev] Re: [Numpy-discussion] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <422EBC9C.1060503@ims.u-tokyo.ac.jp> Message-ID:

On Wed, 9 Mar 2005, Michiel Jan Laurens de Hoon wrote: > Travis Oliphant wrote: > > Proposal (just an idea to start discussion): > > > > Subdivide scipy into several super packages that install cleanly but can > > also be installed separately. Implement a CPAN-or-yum-like repository > > and query system for installing scientific packages. > > Yes! If SciPy could become a kind of scientific CPAN for python from > which users can download the packages they need, it would be a real > improvement. In the end, the meaning of SciPy would evolve into "the website > where you can download scientific packages for python" rather than "a > python package for scientific computing", and the SciPy developers might > not feel OK with that.

Personally, I would be OK with that. SciPy as a "download site" does not exclude it from also providing a "scipy package" as it does now. I am all in favor of refactoring current scipy modules as much as possible. Pearu

From verveer at embl-heidelberg.de Wed Mar 9 05:56:54 2005 From: verveer at embl-heidelberg.de (Peter Verveer) Date: Wed, 9 Mar 2005 11:56:54 +0100 Subject: [SciPy-dev] Re: [Numpy-discussion] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <422EA691.9080404@ee.byu.edu> References: <422EA691.9080404@ee.byu.edu> Message-ID: <6c9c1490f05fc0812a640a1897574857@embl-heidelberg.de>

> Proposal (just an idea to start discussion): > > Subdivide scipy into several super packages that install cleanly but > can also be installed separately. Implement a CPAN-or-yum-like > repository and query system for installing scientific packages.

+1, I would be far more inclined to contribute if we could agree on such a structure.

> Extra sub-packages: named in a hierarchy to be determined and probably > each dependent on a variety of scipy-sub-packages. > > I haven't fleshed this thing out yet as you can tell. I'm mainly > talking publicly to spur discussion. The basic idea is that we should > force ourselves to distribute scipy in separate packages.
This would > force us to implement a yum-or-CPAN-like package repository, so that > we define the interface as to how an additional module could be > developed by someone, even maintained separately (with a different > license), and simply inserted into an intelligent point under the > scipy infrastructure.

Two comments:

1) We should consider the issue of licenses. For instance: the python wrappers for GSL and FFTW probably need to be GPL-licensed. These packages definitely need to be part of a repository. There needs to be some kind of a category for such packages, as their license is more restrictive.

2) If there is going to be a repository structure, it should provide for packages that can be installed independently of a scipy hierarchy. Packages that only require a dependency on the Numeric core should not require scipy_core. That makes sense if Numeric3 ever gets into the core Python. Such packages could (and probably should) also live in a dual scipy namespace.

Peter

From prabhu_r at users.sf.net Wed Mar 9 06:24:42 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 9 Mar 2005 16:54:42 +0530 Subject: [SciPy-dev] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <422EA691.9080404@ee.byu.edu> References: <422EA691.9080404@ee.byu.edu> Message-ID: <16942.56570.375971.565270@monster.linux.in>

Hi Travis, >>>>> "TO" == Travis Oliphant writes: TO> I was looking to try and understand why, with an increasing TO> number of scientific users of Python, relatively few people TO> actually seem to want to contribute to scipy regularly, or even TO> become active developers. There are lots of people who seem TO> to identify problems (though very often vague ones), but not TO> many who seem able (either through time or interest TO> constraints) to actually contribute to code, documentation, or TO> infrastructure.

I think there are two issues here.

1. Finding developers. Unfortunately, I'm as clueless as anyone else. It looks to me that most folks who are capable of contributing are already occupied with other projects. The rest use scipy and are quite happy with it (except for the occasional problem). Others are either heavily invested in other solutions, or don't have the skill or time to contribute. I also think that there are a fair number of users who use scipy at some level or another but are quiet about it and don't have a chance to contribute. From what I can tell, the intersection of the set of people who possess good computing skills and also pursue numerical work from Python is still very small compared to other fields.

2. Packaging issues. More on this later.

[...] TO> I think the answers come down to a few issues which I will TO> attempt to answer with proposals. TO> 1) Plotting -- scipy's plotting wasn't good enough (we knew

I am not sure what this has to do with scipy's utility. Do you mean to say that you'd like to have people starting to use scipy to plot things and then hope that they contribute back to scipy's numeric algorithms? If all they did was to use scipy for plotting, the only contributions would be towards plotting. If you only mean this as a convenience, then this seems like a packaging issue and not related to scipy.

Plotting is one part of the puzzle. You don't seem to mention any deficiencies with respect to numerical algorithms. This seems to suggest that apart from things like packaging and docs, the numeric side is pretty solid.
Let me take this to an extreme: if plotting is deemed a part of scipy's core, then how about f2py? It is definitely core functionality. So why not make f2py part of scipy? How about g77, g95, and gcc? The only direction this looks to be headed is to make a SciPy OS (== Enthon?). I think we are mixing packaging along with other issues here. To make it clear, I am not against incorporating matplotlib in scipy. I just think that the argument for its inclusion is not clear to me.

[...] TO> 2) Installation problems -- I'm not completely clear on what TO> the TO> "installation problems" really are. I hear people talk about [...] TO> Proposal (just an idea to start discussion): TO> Subdivide scipy into several super packages that install TO> cleanly but can also be installed separately. Implement a TO> CPAN-or-yum-like repository and query system for installing TO> scientific packages.

What does this have to do with scipy per se? This is more like a user convenience issue.

[scipy-sub-packages] TO> I haven't fleshed this thing out yet as you can tell. I'm TO> mainly talking publicly to spur discussion. The basic idea is TO> that we should force ourselves to distribute scipy in separate TO> packages. This would force us to implement a yum-or-CPAN-like TO> package repository, so that we define the interface as to how TO> an additional module could be developed by someone, even TO> maintained separately (with a different license), and simply TO> inserted into an intelligent point under the scipy TO> infrastructure.

This is in general a good idea, but one that goes far beyond scipy itself. Joe Cooper mentioned that he had ideas on how to really do this in a cross-platform way. Many of us eagerly await his solution. :)

regards, prabhu

From gruben at bigpond.net.au Wed Mar 9 07:12:39 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Wed, 09 Mar 2005 23:12:39 +1100 Subject: [SciPy-dev] Re: Future directions In-Reply-To: <20050309073315.2825F3EB7C@www.scipy.com> References: <20050309073315.2825F3EB7C@www.scipy.com> Message-ID: <422EE837.5090500@bigpond.net.au>

> It would seem that while the scipy conference demonstrates a continuing > and even increasing use of Python for scientific computing, not as many > of these users are scipy devotees. Why? > > I think the answers come down to a few issues which I will attempt to > answer with proposals. > > 1) Plotting -- scipy's plotting wasn't good enough (we knew that) and > the promised solution (chaco) took too long to emerge as a simple > replacement. While the elements were all there for chaco to work, very > few people knew that and nobody stepped up to take chaco to the level > that matplotlib, for example, has reached in terms of cross-gui > applicability and user-interface usability.

I found plt and gplt too limiting from early on and quickly moved to gnuplot.py. Matplotlib would be a nice choice, mainly due to its active development, clean interface and good documentation. I haven't been keeping up - is Chaco dead? That would be a shame. Python is still missing a cross-platform GUI-interactive plotting package. A long time ago, I toyed with implementing errorbars in Chaco, but found it too unapproachable, mainly due to lack of documentation.

> 2) Installation problems -- I'm not completely clear on what the > "installation problems" really are. I hear people talk about them, but > Pearu has made significant strides to improve installation, so I'm not > sure what precise issues remain.
Yes, installing ATLAS can > be a pain, but scipy doesn't require it. Yes, fortran support can be a pain, but > if you use g77 then it isn't a big deal. The reality, though, is that > there is this perception of installation trouble and it must be based on > something. Let's find out what it is. Please speak up, users of the > world!!!!

They may be thinking of what it used to be like - things have improved, so it may be a case of re-education or patience.

> Thoughts and comments (and even half-working code) welcomed and > encouraged... > > -Travis O.

One thing missing from your list about lack of uptake is lack of decent pdf-based documentation à la Numeric or Numarray or even the matplotlib docs. I know this was discussed a little while back so it will happen, but I personally think it is a hurdle for people wanting to know exactly what Scipy contains, and it could be the main reason for lower-than-expected uptake. One day I'll get my ErrorVal module cleaned up enough to re-propose it for inclusion in Scipy :-) but I'm studying honours physics at the moment, so I have no life or time.

Gary

From nwagner at mecha.uni-stuttgart.de Wed Mar 9 08:10:26 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Mar 2005 14:10:26 +0100 Subject: [SciPy-dev] Re: Future directions In-Reply-To: <422EE837.5090500@bigpond.net.au> References: <20050309073315.2825F3EB7C@www.scipy.com> <422EE837.5090500@bigpond.net.au> Message-ID: <422EF5C2.8010109@mecha.uni-stuttgart.de>

Gary Ruben wrote: >> It would seem that while the scipy conference demonstrates a >> continuing and even increasing use of Python for scientific >> computing, not as many of these users are scipy devotees. Why? >> >> I think the answers come down to a few issues which I will attempt to >> answer with proposals. >> >> 1) Plotting -- scipy's plotting wasn't good enough (we knew that) and >> the promised solution (chaco) took too long to emerge as a simple >> replacement. While the elements were all there for chaco to work, >> very few people knew that and nobody stepped up to take chaco to the >> level that matplotlib, for example, has reached in terms of cross-gui >> applicability and user-interface usability. > > > I found plt and gplt too limiting from early on and quickly moved to > gnuplot.py. > Matplotlib would be a nice choice, mainly due to its active > development, clean interface and good documentation. > I haven't been keeping up - is Chaco dead? That would be a shame. > Python is still missing a cross-platform GUI-interactive plotting > package. A long time ago, I toyed with implementing errorbars in > Chaco, but found it too unapproachable, mainly due to lack of > documentation. > >> 2) Installation problems -- I'm not completely clear on what the >> "installation problems" really are. I hear people talk about them, >> but Pearu has made significant strides to improve installation, so >> I'm not sure what precise issues remain. Yes, installing ATLAS can >> be a pain, but scipy doesn't require it. Yes, fortran support can be >> a pain, but if you use g77 then it isn't a big deal. The reality, >> though, is that there is this perception of installation trouble and >> it must be based on something. Let's find out what it is. Please >> speak up, users of the >> world!!!! > > > They may be thinking of what it used to be like - things have > improved, so it may be a case of re-education or patience. > >> Thoughts and comments (and even half-working code) welcomed and >> encouraged... >> >> -Travis O.
> > One thing missing from your list about lack of uptake is lack of > decent pdf-based documentation à la Numeric or Numarray or even the > matplotlib docs. I know this was discussed a little while back so it > will happen, but I personally think it is a hurdle for people wanting > to know exactly what Scipy contains, and it could be the main reason for > lower-than-expected uptake.

I quite agree with you. Of course there are some sources of documentation, but a central source like a complete, pdf-based manual would simplify matters. Moreover, scipy could benefit from supporting features which are rarely available in other numerical packages (e.g. sparse matrices and large-scale eigenproblems, to name a few). (I am aware of the fact that there is no all-in-one device suitable for every purpose.) Last but not least, some redundancy in the development of numerical packages is visible (at least from my point of view). Competition is helpful, but why is there no effort to focus on a few well-developed packages? BTW, the installation of scipy is not at all the problem. Many thanks to all the developers of scipy. Nils

> One day I'll get my ErrorVal module cleaned up enough to re-propose it > for inclusion in Scipy :-) but I'm studying honours physics at the > moment, so I have no life or time. > > Gary > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev

From aisaac at american.edu Wed Mar 9 08:49:32 2005 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 9 Mar 2005 08:49:32 -0500 (Eastern Standard Time) Subject: Re[2]: [SciPy-dev] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <16942.56570.375971.565270@monster.linux.in> References: <422EA691.9080404@ee.byu.edu><16942.56570.375971.565270@monster.linux.in> Message-ID:

On Wed, 9 Mar 2005, Prabhu Ramachandran apparently wrote: > What does this have to do with scipy per se? This is more > like a user convenience issue.

I think the proposal is: development effort is a function of community size, and community size is a function of convenience as well as functionality. This seems right to me.

Cheers, Alan Isaac

From prabhu_r at users.sf.net Wed Mar 9 11:52:42 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 9 Mar 2005 22:22:42 +0530 Subject: [SciPy-user] Re[2]: [SciPy-dev] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: References: <422EA691.9080404@ee.byu.edu> <16942.56570.375971.565270@monster.linux.in> Message-ID: <16943.10714.85815.666793@monster.linux.in>

>>>>> "AI" == Alan G Isaac writes: AI> On Wed, 9 Mar 2005, Prabhu Ramachandran apparently wrote: >> What does this have to do with scipy per se? This is more like >> a user convenience issue. AI> I think the proposal is: development effort is a function of AI> community size, and community size is a function of AI> convenience as well as functionality.

To put it bluntly, I don't believe that someone who can't install scipy today is really capable of contributing code to scipy. I seriously doubt claims that scipy is scary or hard to install today. Therefore, the real problem does not appear to be convenience, and IMHO neither is functionality the problem. My only point is this: I think Travis and Pearu have been doing a great job!
I'd rather see them working on things like Numeric3 and core scipy functionality than spending time worrying about packaging, including other new packages, and making things more comfortable for the user (especially when these things are already taken care of). Anyway, Joe's post about ASP's role is spot on! Thanks Joe. More on that thread.

cheers, prabhu

From prabhu_r at users.sf.net Wed Mar 9 12:22:56 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 9 Mar 2005 22:52:56 +0530 Subject: [SciPy-dev] Re: [SciPy-user] Re: Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <200503091542.j29Fg7nX021779@oobleck.astro.cornell.edu> References: <20050309112636.24F99334FE@sc8-sf-spam1.sourceforge.net> <200503091542.j29Fg7nX021779@oobleck.astro.cornell.edu> Message-ID: <16943.12528.791748.115592@monster.linux.in>

Hi Joe, >>>>> "JH" == Joe Harrington writes: JH> SciPy will reach the open-source jumping-off point when an JH> outsider has the following experience: They google, find us, JH> visit us, learn what they'll be getting, install it trivially, JH> and read a tutorial that in less than 15 minutes has them JH> plotting their own data. In that process, which will take JH> less than 45 minutes total, they must also gain confidence in JH> the solidity and longevity of the software and find a JH> supportive community. We don't meet all the elements of this JH> test now. Once we do, people will be ready to jump on and JH> work the open-source magic.

Do you already have the top-level document that guides the new user through this 15-minute drill? If you don't, would it help if I took a stab at it? I often find that a top-level document of this form helps us fill in the blanks. I'm not promising much, just an overall first page with a bunch of internal links to other wiki pages that will eventually be filled in over time by everyone. If you think this would be of any use, please let me know off-list. Thanks.

[...] JH> knew where the repository was. It's: JH> http://www.enthought.com/python/fedora/$releasever We need JH> someone to write installation notes for each package

I think it's a good first start to just stick in stubs for each of these.

cheers, prabhu

From stephen.walton at csun.edu Wed Mar 9 12:33:19 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 09 Mar 2005 09:33:19 -0800 Subject: [SciPy-dev] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <422EA691.9080404@ee.byu.edu> References: <422EA691.9080404@ee.byu.edu> Message-ID: <422F335F.8060107@csun.edu>

I only have a little to contribute at this point:

> Proposal: > Incorporate matplotlib as part of the scipy framework (replacing plt).

While this is an admirable goal, I personally find scipy and matplotlib easy to install separately. The only difficulty (of course!) is the numarray/numeric split, so I have to be sure that I select numerix as Numeric in my .matplotlibrc file before typing 'ipython -pylab -p scipy', which actually works really well.

> 2) Installation problems -- I'm not completely clear on what the > "installation problems" really are.

scipy and matplotlib are both very easy to install. Using ATLAS is the biggest pain, as Travis says, and one can do without it. Now that a simple 'scipy setup.py bdist_rpm' seems to work reliably, I for one am happy. I think splitting scipy up into multiple subpackages isn't such a good idea.
Perhaps I'm in the minority, but I find CPAN counter-intuitive, hard to use, and hard to keep track of in an RPM-based environment. Any large package is going to include a lot of stuff most people don't need, but like a NY Times ad used to say, "You might not read it all, but isn't it nice to know it's all there?"

I can tell you why I'm not contributing much code to the effort, at least in one recent instance. Since I'm still getting core dumps when I try to use optimize.leastsq with a defined Jacobian function, I dove into _minpackmodule.c and its associated routines last night. I'm at sea. I know enough Python to be dangerous, used LMDER from Fortran extensively while doing my Ph.D., and am pretty good at C, but am completely unfamiliar with the Python-C API. So I don't even know how to begin tracking the problem down.

Finally, as I mentioned at SciPy04, our particular physics department is at an undergraduate institution (no Ph.D. program), so we mainly produce majors who stop at the B.S. or M.S. degree. Their job market seems to want MATLAB skills, not Python, at the moment, so that's what the faculty are learning and teaching to their students. Many of them/us simply don't have the time to learn Python on top of that. Though, when I showed some colleagues how trivial it was, using Python, to trim some unwanted bits out of data files they had, I think I converted them.

From jdhunter at ace.bsd.uchicago.edu Wed Mar 9 12:31:06 2005 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed, 09 Mar 2005 11:31:06 -0600 Subject: [SciPy-dev] Re: [SciPy-user] Re: Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <16943.12528.791748.115592@monster.linux.in> (Prabhu Ramachandran's message of "Wed, 9 Mar 2005 22:52:56 +0530") References: <20050309112636.24F99334FE@sc8-sf-spam1.sourceforge.net> <200503091542.j29Fg7nX021779@oobleck.astro.cornell.edu> <16943.12528.791748.115592@monster.linux.in> Message-ID:

>>>>> "Prabhu" == Prabhu Ramachandran writes: Prabhu> Do you already have the top-level document that guides the Prabhu> new user through this 15-minute drill? If you don't, Prabhu> would it help if I took a stab at it? I often find that a Prabhu> top-level document of this form helps us fill in the Prabhu> blanks.

Fernando and I have been working on such a beast. We took the plunge after agreeing to teach an all-day seminar on scientific computing in python, and typed up a course document starting with an introduction to python that used scientific examples to illustrate python concepts (eg complex numbers to introduce attributes) and will eventually cover numstar, matplotlib, ipython, scipy, VTK, mayavi, traits, wrapping and some GUI stuff. We have draft chapters for the intro, ipython, matplotlib and VTK so far.
It's too rough and incomplete now for distribution, but we hope to have something useful in insert-time-interval-here :-)

JDH

From jh at oobleck.astro.cornell.edu Wed Mar 9 13:39:31 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 9 Mar 2005 13:39:31 -0500 Subject: [SciPy-dev] Re: [SciPy-user] Re: Future directions for SciPy in light of meeting at Berkeley In-Reply-To: (message from John Hunter on Wed, 09 Mar 2005 11:31:06 -0600) References: <20050309112636.24F99334FE@sc8-sf-spam1.sourceforge.net> <200503091542.j29Fg7nX021779@oobleck.astro.cornell.edu> Message-ID: <200503091839.j29IdVPm023307@oobleck.astro.cornell.edu>

I think it would be great if both of you took a stab at it. The doc will be necessarily short, and we can eventually combine the best elements into one doc. The project agreed by straw poll to use LyX, or you can use latex, which LyX reads/produces. It does math well, and from it we can easily make all formats (html and pdf being the main ones). If you don't want to write in LyX, you can use whatever you want and someone will convert it. Before you start, it makes sense to have a brief discussion of what it should contain. My suggestion is in the ASP proposal:

4. They read the "Getting Started" document, which is about 20 pages and contains:

a. A brief (1/2 page) description of SciPy's key characteristics from a new user's point of view

b. A get-your-feet-wet tutorial, for example: make a sine wave and a parabola, add them, and plot, read some ASCII data and plot it, do a Fourier transform of the data and plot it, read an image and display it, make a second image, add it to the first, and display it, and extract an image section, manipulate it, and display it.

c. The basic (non-programming) Python syntax, including array creation, access, and manipulation.

d. The key elements of Python's interaction with the operating system (e.g., the most important environment variables, how to tell it where routines are stored, etc.).

e. A statement on the current state of the software, that there is flux in graphics and the underlying array package, but that the functionality we teach in the docs will not (likely) go away.
f. Sources of further information and support, both in the downloaded package and on the web.

Remember that all this must fit in 15 minutes of interactive reading time, so references to descriptions elsewhere are best for the items that aren't leading the user through python interactions.

--jh--

From jdhunter at ace.bsd.uchicago.edu Wed Mar 9 13:44:30 2005 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed, 09 Mar 2005 12:44:30 -0600 Subject: [SciPy-dev] Re: [SciPy-user] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <422EA691.9080404@ee.byu.edu> (Travis Oliphant's message of "Wed, 09 Mar 2005 00:32:33 -0700") References: <422EA691.9080404@ee.byu.edu> Message-ID:

>>>>> "Travis" == Travis Oliphant writes: Travis> It would seem that while the scipy conference demonstrates Travis> a continuing and even increasing use of Python for Travis> scientific computing, not as many of these users are scipy Travis> devotees. Why?

Hi Travis, I like a lot of your proposal, and I want to throw a couple of additional ideas into the mix. There are two ideas about what scipy is: a collection of scientific algorithms and a general-purpose scientific computing environment. On the first front, scipy has been a great success; on the second, less so. I think the following would be crucial to make such an effort a success (some of these are just restatements of your ideas with additional comments):

* Easy to install:
  - it would probably be important to have a fault-tolerant install so that even if a component fails, the parts that don't depend on that can continue. Matthew Knepley's build system might be an important tool to make this work right for source installs, rather than trying to push distutils too hard.

* A package repository and a way of specifying dependencies between the packages that allows automated recursive downloads à la apt-get, yum, etc.... So basically we have to come up with a package manager, and probably one that supports src as well as binary installs. Everyone knows this is a significant problem in python, and we're in a good place to tackle it in that we have experience distributing complex packages across platforms which are a mixture of python/C/C++/FORTRAN, so if we can make it work, it will probably work for all of python. I think we would want contributions from people who do packaging on OSX and win32, eg Bob Ippolito, Joe Cooper, Robert Kern, and others.

* Transparent support for Numeric, numarray and Numeric3 built into a compatibility layer, eg something like matplotlib.numerix, which enables the user to be shielded from past and future changes in the array package. If you and the numarray developers can agree on that interface, that is an important start, because no matter how much success you have with Numeric3, Numeric 23.x and numarray will be in the wild for some time to come. Having all the major players come together and agree on a core interface layer would be a win. In practice, it works well in matplotlib.numerix.

* Buy-in from the developers of all the major packages that people want and need, to have the CVS / SVN live on a single site which also has mailing lists etc. I think this is a possibility, actually; I'm open to it at least.
* Good tutorial, printable documentation, perhaps following a "dive into python" model with a "just-in-time" model of teaching the language; ie, task-oriented.

A question I think should be addressed is whether scipy is the right vehicle for this aggregation. I know this has been a long-standing goal of yours and appreciate your efforts to continue to make it happen. But there is a lot of residual belief that scipy is hard to install, and this is founded partly in an old memory that refuses, sometimes irrationally, to die, and partly in people's continued difficulties. If we make a grand effort to unify into a coherent whole, we might be better off with a new name that doesn't carry the difficult-to-install connotation. And easy-to-install should be our #1 priority.

Another reason to consider a neutral name is that it wouldn't scare off a lot of people who want to use these tools but don't consider themselves to be scientists. In matplotlib, there are people who just want to make bar and pie charts, and in talks I've given many people are very happy when I tell them that I am interested in providing plotting capabilities outside the realm of scientific plotting.

This is obviously a lot to bite off, but it could be made viable with some dedicated effort; python is like that. Another concern I have, though, is that it seems to duplicate a lot of the enthought effort to build a scientific python bundle -- they do a great job already for win32, and I think enthought editions for linux and OSX are in the works. The advantage of your approach is that it is modular rather than monolithic. To really make this work, I think enthought would need to be on board with it. Eg mayavi2 and traits2 are both natural candidates for inclusion into this beast, but both live in the enthought subversion tree. Much of what you describe seems to be parallel to the enthought python, which also provides scipy, numeric, ipython, mayavi, plotting, and so on.

I am hesitant to get too involved in the packaging game -- it's really hard and would take a lot of work. We might be better off each making little focused pieces, and let packagers (pythonmac, fink, yum, debian, enthought, ...) do what they do well. Not totally opposed, mind you, just hesitant....

JDH

From loredo at astro.cornell.edu Wed Mar 9 14:42:43 2005 From: loredo at astro.cornell.edu (Tom Loredo) Date: Wed, 9 Mar 2005 14:42:43 -0500 (EST) Subject: [SciPy-dev] Re: Future directions for SciPy in light of meeting at Berkeley Message-ID: <200503091942.j29JghU17395@laplace.astro.cornell.edu>

Hi folks-

Prabhu wrote: > Anyway, Joe's post about ASP's role is spot on!

I agree with this sentiment. I especially agree with Joe's prioritization, in particular, that work on his pt. 1 (resolving the Numeric/numarray split) should be highest priority. I don't agree that not many people know about scipy or, more generally, Python as a tool for scientific computing. My experience when I raise the topic with colleagues is that they have heard of it, but when they investigated, either they found the situation confusing (e.g., the Numeric/numarray split; bewildering variety of platform-dependent plotting options) or they found the software deficient (either hard to install or buggy). There has been great progress on the latter problems in the last year especially, but the fact is that scipy has been around for a *few* years, and many people have opinions of scipy/Python based on year-old experiences.
(Also, I feel that those portraying the installation issue as completely solved are likely not using scipy and matplotlib across multiple platforms.)

I do not have the expertise to help with the #1 issue, and I'm sure many scipy users feel similarly. But as Joe emphasized, there are other ways to contribute. He mentioned documentation, and I agree that's a weak spot that I suspect many folks could contribute to. Another way is to develop scipy add-ons, but to not release them until they are well-documented and reasonably mature (at least, not release them as anything but beta software). I realize this is counter to what seems to have become the typical open source model of developing "out in the open," releasing code with warts and all and letting it evolve entirely in public view. But I think something can be said for trying to avoid making a bad first impression.

This is the path I'm pursuing myself, with work on a statistical inference package. I have a body of code written, but I'm not going to release it widely until it is largely well-documented and well-exercised. I've re-written a core piece of it three times now, as my own experiments have revealed inadequacies in the design. I hate to think what the potential users would have thought if either they discovered the problems themselves in their own usage, or found that the next version of the code changed interfaces in order to be more general and robust.

My point here is that "bleeding edge" users have a tendency to portray tools as more ready for public consumption than they may actually be. When potential new users discover obstacles that aren't present with other tools, a bad taste is left in their mouths and they are reluctant to come back for a second try. They almost feel lied to---"It shouldn't be this hard; why didn't they warn me??!!" I'm advocating something like "truth in advertising." We need to accurately describe what the user experience will be (on various platforms), and let people know up front what obstacles they may encounter, and that the obstacles are being addressed. Then they can say either, "Wow, all that capability looks worth the possible trouble, let me dive in," or "Wow, all that capability is appealing, but I'm not up for the possible trouble; I'll check back in 6 months." What we want to avoid is, "Darn, this should be as easy as it is with Matlab, but I've just wasted 2 hours trying to figure XXX out--I'm going back to Matlab!"

Finally, I think another way to contribute to adoption of scipy is to take seriously Claerbout's notion of "really reproducible research." As Buckheit and Donoho have described it, "When we publish articles containing figures which were generated by computer, we also publish the complete software environment which generates the figures.... "An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures." [See http://www.stat.washington.edu/jaw/jaw.research.reproducible.html and http://sepwww.stanford.edu/research/redoc/ for more on this notion.]

I'm sure many of you have published papers with results computed with Python/scipy. Like me, you probably did most of these with one-off scripts or modules that were not written for public consumption.
What if each of us who publishes technical results using Python were to neaten up and document the code for just a few nontrivial published figures, and post the results online? Perhaps scipy.org could itself serve as a clearing house for at least a select set of such "reproducible research documents." What better advocacy for scipy as a research tool could we offer than being able to say: "Go to scipy.org, and you'll find scripts for 100 published results in 10 different disciplines---and you can pick up right where they left off."

Anyway, that's one of my criteria for the public release of the package I'm working on. While I'll soon be releasing some of it to a group of volunteer beta testers, I won't call public attention to it until its main parts are well-documented and the package includes a sample of scripts that reproduce calculations in at least a few published, peer-reviewed research papers. This needn't be a criterion for every scipy tool or package, but I think a few packages with this characteristic will make a very good impression. I bet some of you could achieve this with your code more quickly than I can with mine! Go to it!

Cheers, Tom Loredo

From aisaac at american.edu Wed Mar 9 17:36:11 2005 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 9 Mar 2005 17:36:11 -0500 (Eastern Standard Time) Subject: [SciPy-user] Re[2]: [SciPy-dev] Future directions for SciPy in light of meeting at Berkeley In-Reply-To: <16943.10714.85815.666793@monster.linux.in> References: <422EA691.9080404@ee.byu.edu><16942.56570.375971.565270@monster.linux.in><16943.10714.85815.666793@monster.linux.in> Message-ID:

>> I think the proposal is: development effort is a function >> of community size, and community size is a function of >> convenience as well as functionality.

On Wed, 9 Mar 2005 22:22:42 +0530 Prabhu Ramachandran apparently wrote: > To put it bluntly, I don't believe that someone who can't install > scipy today is really capable of contributing code to scipy. I > seriously doubt claims that scipy is scary or hard to > install today.

I agree, but that is beside the point. The community does not consist only of developers. Even bringing in students, e.g. from engineering, biology, and economics, matters for this. Or so I claim.

On Wed, 09 Mar 2005, Stephen Walton apparently wrote: > Finally, as I mentioned at SciPy04, our particular physics > department is at an undergraduate institution (no Ph.D. > program), so we mainly produce majors who stop at the B.S. > or M.S. degree. Their job market seems to want MATLAB > skills, not Python, at the moment, so that's what the > faculty are learning and teaching to their students.

Yes! And market-desired skills are also a function of community size as well as functionality.

On Wed, 9 Mar 2005, konrad.hinsen at laposte.net apparently wrote: > I can install SciPy, but given that > most of my code is written with the ultimate goal of being published > and used by people with less technical experience, I need to take those > people into account when choosing packages to build on.

Yes indeed!
Cheers,
Alan Isaac

From oliphant at ee.byu.edu  Wed Mar 9 22:21:46 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 09 Mar 2005 20:21:46 -0700
Subject: [SciPy-dev] Current thoughts on future directions
In-Reply-To: <6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
Message-ID: <422FBD4A.3030708@ee.byu.edu>

I had a lengthy discussion with Eric today and clarified some things in my mind about the future directions of scipy.  The following is basically what we have decided.  We are still interested in input so don't think the issues are closed, but I'm just giving people an idea of my (and Eric's as far as I understand it) thinking on scipy.

1) There will be a scipy_core package which will be essentially what Numeric has always been (plus a few easy to install extras already in current scipy_core).  It will likely contain the functionality of (the names and placements will be similar to current scipy_core):

Numeric3 (actually called ndarray or narray or numstar or numerix or something....)
fft (based on c-only code -- no fortran dependency)
linalg (a lite version -- no fortran or ATLAS dependency)
stats (a lite version --- no fortran dependency)
special (only c-code --- no fortran dependency)
weave
f2py? (still need to ask Pearu about this)
scipy_distutils and testing
matrix and polynomial classes

...others...?

We will push to make this an easy-to-install effective replacement for Numeric and hopefully for numarray users as well.  Therefore community input and assistance will be particularly important.

2) The rest of scipy will be a package (or a series of packages) of algorithms.  We will not try to do plotting as part of scipy.  The current plotting in scipy will be supported for a time, but users will be weaned off to other packages: matplotlib, pygist (for xplt -- and I will work to get any improvements for xplt into pygist itself), gnuplot, etc.

3) Having everything under a scipy namespace is not necessary, nor worth worrying about at this point.

My scipy-related focus over the next 5-6 months will be to get scipy_core to the point that most can agree it effectively replaces the basic tools of Numeric and numarray.

-Travis

From eric at enthought.com  Wed Mar 9 23:41:45 2005
From: eric at enthought.com (eric jones)
Date: Wed, 09 Mar 2005 22:41:45 -0600
Subject: [SciPy-dev] Re: [SciPy-user] Current thoughts on future directions
In-Reply-To: <422FBD4A.3030708@ee.byu.edu>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com> <422FBD4A.3030708@ee.byu.edu>
Message-ID: <422FD009.4020706@enthought.com>

Hey Travis,

It sounds like the Berkeley meeting went well.  I am glad that the Numeric3 project is going well and looks like it has a good chance to unify the Numeric/Numarray communities.  I really appreciate you putting in so much effort into its implementation.  I also appreciate all the work Perry, Todd, and the others at StSci have done building Numarray.  We've all learned a ton from it.

Most of the plans sound right to me (several questions/comments below).  Much of SciPy has been structured in this way already, but we really have never worked to make the core useful as a stand-alone package.  Supporting lite and full versions of fft, linalg, and stats sounds potentially painful, but also worthwhile given the circumstances.  Now:

1. How much of stats do we lose from removing fortran dependencies?
2. I do question whether weave should really be in this core?  I think it was in scipy_core before because it was needed to build some of scipy.
3. Now that I think about it, I also wonder if f2py should really be there -- especially since we are explicitly removing any fortran dependencies from the core.
4. I think keeping scipy an algorithms library and leaving plotting to other libraries is a good plan.  At one point, the setup_xplt.py file was more than 1000 lines.  It is much cleaner now, but dealing with X11, etc. does take maintenance work.  Removing these libraries from scipy would decrease the maintenance effort and leave the plotting to matplotlib, chaco, and others.
5. I think having all the generic algorithm packages (signal, ga, stats, etc. -- basically all the packages that are there now) under the scipy namespace is a good idea.  It prevents worry about colliding with other people's packages.  However, I think domain specific libraries (such as astropy) should be in their own namespace and shouldn't be in scipy.

thanks,
eric

Travis Oliphant wrote:

> I had a lengthy discussion with Eric today and clarified some things
> in my mind about the future directions of scipy.  The following is
> basically what we have decided.  We are still interested in input so
> don't think the issues are closed, but I'm just giving people an idea
> of my (and Eric's as far as I understand it) thinking on scipy.
>
> 1) There will be a scipy_core package which will be essentially what
> Numeric has always been (plus a few easy to install extras already in
> current scipy_core).  It will likely contain the functionality of
> (the names and placements will be similar to current scipy_core):
> Numeric3 (actually called ndarray or narray or numstar or numerix or
> something....)
> fft (based on c-only code -- no fortran dependency)
> linalg (a lite version -- no fortran or ATLAS dependency)
> stats (a lite version --- no fortran dependency)
> special (only c-code --- no fortran dependency)
> weave
> f2py? (still need to ask Pearu about this)
> scipy_distutils and testing
> matrix and polynomial classes
>
> ...others...?
>
> We will push to make this an easy-to-install effective replacement for
> Numeric and hopefully for numarray users as well.  Therefore
> community input and assistance will be particularly important.
>
> 2) The rest of scipy will be a package (or a series of packages) of
> algorithms.  We will not try to do plotting as part of scipy.  The
> current plotting in scipy will be supported for a time, but users will
> be weaned off to other packages: matplotlib, pygist (for xplt -- and
> I will work to get any improvements for xplt into pygist itself),
> gnuplot, etc.
>
> 3) Having everything under a scipy namespace is not necessary, nor
> worth worrying about at this point.
>
> My scipy-related focus over the next 5-6 months will be to get
> scipy_core to the point that most can agree it effectively replaces
> the basic tools of Numeric and numarray.
>
>
> -Travis
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user

From prabhu_r at users.sf.net  Thu Mar 10 02:53:47 2005
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Thu, 10 Mar 2005 13:23:47 +0530
Subject: [SciPy-user] Re[2]: [SciPy-dev] Future directions for SciPy in
	light of meeting at Berkeley
In-Reply-To: 
References: <422EA691.9080404@ee.byu.edu>
	<16942.56570.375971.565270@monster.linux.in>
	<16943.10714.85815.666793@monster.linux.in>
Message-ID: <16943.64779.881661.847849@monster.linux.in>

Hi,

>>>>> "AI" == Alan G Isaac writes:

    >> To put it bluntly, I don't believe that someone who can't
    >> install scipy today is really capable of contributing code to
    >> scipy.  I seriously doubt claims that scipy is scary or hard to
    >> install today.

    AI> I agree, but that is beside the point.  The community does not
    AI> consist only of developers.  Even bringing in students,
    AI> e.g. from engineering, biology, and economics, matters for
    AI> this.  Or so I claim.

Well, I am glad I got so many people excited by the blunt words.  I think everyone wants something done but there are still too few people who want to do anything about it.  With this I take leave from this thread.  I'd rather do something positive than spend time arguing this or that over here.

cheers,
prabhu

From pearu at scipy.org  Thu Mar 10 03:49:01 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Thu, 10 Mar 2005 02:49:01 -0600 (CST)
Subject: [SciPy-dev] Re: [SciPy-user] Current thoughts on future directions
In-Reply-To: <422FD009.4020706@enthought.com>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <422FD009.4020706@enthought.com>
Message-ID: 

Hi,

To clarify a few technical details:

On Wed, 9 Mar 2005, eric jones wrote:

> 1. How much of stats do we lose from removing fortran dependencies?
> 2. I do question whether weave should really be in this core?  I think it was in
> scipy_core before because it was needed to build some of scipy.

At the moment scipy does not contain modules that need weave.

> 3. Now that I think about it, I also wonder if f2py should really be there --
> especially since we are explicitly removing any fortran dependencies from the
> core.

f2py is not a fortran-only tool.  In scipy it has been used to wrap also C codes (fft, atlas) and imho f2py should be used more so whenever possible.

> Travis Oliphant wrote:
>
>> 1) There will be a scipy_core package which will be essentially what
>> Numeric has always been (plus a few easy to install extras already in
>> current scipy_core).  It will likely contain the functionality of (the
>> names and placements will be similar to current scipy_core):
>> Numeric3 (actually called ndarray or narray or numstar or numerix or
>> something....)
>> fft (based on c-only code -- no fortran dependency)

Hmm, what would be the default underlying fft library here?  Currently in scipy it is Fortran fftpack.  And when fftw is available, it is used instead.

>> linalg (a lite version -- no fortran or ATLAS dependency)

Again, what would be the underlying linear algebra library here?  Numeric uses an f2c version of the lite lapack library.  Shall we do the same but wrapping the c codes with f2py rather than by hand?  f2c might be useful also in other cases to reduce fortran dependency, but only when it is critical to ease the scipy_core installation.
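
For concreteness, the build for a small f2py-wrapped C routine might look roughly like the sketch below.  The module and file names are invented; it assumes that scipy_distutils' build_src feeds .pyf signature files through f2py (as in the f2py_ext test example) and that the hand-written csum.pyf declares the C function with intent(c):

    # setup.py -- hypothetical sketch of wrapping a C function with f2py
    from scipy_distutils.core import setup, Extension

    # build_src runs f2py on the signature file; the generated wrapper
    # is then compiled and linked together with the plain C source.
    ext = Extension('csum', sources=['csum.pyf', 'csum.c'])

    setup(name='csum', version='0.1', ext_modules=[ext])

    # build with:  python setup.py build_src build_ext --inplace
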
>> stats (a lite version --- no fortran dependency)
>> special (only c-code --- no fortran dependency)
>> weave
>> f2py? (still need to ask Pearu about this)

I am not against it, it actually would simplify many things (for scipy users it provides one less dependency to worry about, f2py bug fixes and new features are immediately available, etc).  And I can always ship f2py as standalone for non-scipy users.

>> scipy_distutils and testing
>> matrix and polynomial classes
>>
>> ...others...?

There are a few pure python modules (ppimport, machar, pexec, ..) in scipy_base that I have heard are used as very useful standalone modules.

>> We will push to make this an easy-to-install effective replacement for
>> Numeric and hopefully for numarray users as well.  Therefore community
>> input and assistance will be particularly important.
>>
>> 2) The rest of scipy will be a package (or a series of packages) of
>> algorithms.  We will not try to do plotting as part of scipy.  The current
>> plotting in scipy will be supported for a time, but users will be weaned
>> off to other packages: matplotlib, pygist (for xplt -- and I will work to
>> get any improvements for xplt into pygist itself), gnuplot, etc.

+1 for not doing plotting in scipy.

Pearu

From prabhu_r at users.sf.net  Thu Mar 10 04:31:25 2005
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Thu, 10 Mar 2005 15:01:25 +0530
Subject: [SciPy-dev] Re: [SciPy-user] Re: Future directions for SciPy in
	light of meeting at Berkeley
In-Reply-To: <200503091839.j29IdVPm023307@oobleck.astro.cornell.edu>
References: <20050309112636.24F99334FE@sc8-sf-spam1.sourceforge.net>
	<200503091542.j29Fg7nX021779@oobleck.astro.cornell.edu>
	<200503091839.j29IdVPm023307@oobleck.astro.cornell.edu>
Message-ID: <16944.5101.855647.853172@monster.linux.in>

>>>>> "JH" == Joe Harrington writes:

    JH> I think it would be great if both of you took a stab at it.
    JH> The doc will be necessarily short, and we can eventually
    JH> combine the best elements into one doc.  The project agreed by

OK, I'll coordinate with John and Fernando (offlist) on this.  Thanks for the tips also.  Will get back to you when we have something.

cheers,
prabhu

From perry at stsci.edu  Thu Mar 10 10:01:55 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 10 Mar 2005 10:01:55 -0500
Subject: [SciPy-dev] Re: [SciPy-user] Current thoughts on future directions
In-Reply-To: <422FD009.4020706@enthought.com>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <422FD009.4020706@enthought.com>
Message-ID: 

On Mar 9, 2005, at 11:41 PM, eric jones wrote:
>
> 2. I do question whether weave should really be in this core?  I think it
> was in scipy_core before because it was needed to build some of scipy.
> 3. Now that I think about it, I also wonder if f2py should really be
> there -- especially since we are explicitly removing any fortran
> dependencies from the core.

It would seem to me that so long as:

1) both these tools have very general usefulness (and I think they do), and
2) are not installation problems (I don't believe they are since they themselves don't require any compilation of Fortran, C++ or whatever--am I wrong on that?)

That they are perfectly fine to go into the core.  In fact, if they are used by any of the extra packages, they should be in the core to eliminate the extra step in the installation of those packages.
Perry

From perry at stsci.edu  Thu Mar 10 10:28:48 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 10 Mar 2005 10:28:48 -0500
Subject: [SciPy-dev] Notes from meeting with Guido regarding inclusion of
	array package in Python core
Message-ID: 

On March 7th Travis Oliphant and Perry Greenfield met Guido and Paul Dubois to discuss some issues regarding the inclusion of an array package within core Python.  The following represents thoughts and conclusions regarding our meeting with Guido.  They in no way represent the order of discussion with Guido, and some of the points we raise weren't actually mentioned during the meeting, but instead were spurred by subsequent discussion after the meeting with Guido.

1) Including an array package in the Python core.

To start, before the meeting we both agreed that we did not think that this itself was a high priority in itself.  Rather we both felt that the most important issue was making arrays an acceptable and widely supported interchange format (it may not be apparent to some that this does not require arrays be in the core; more on that later).  In discussing the desirability of including arrays in the core with Guido, we quickly came to the conclusion that not only was it not important, but that in the near term (the next couple years and possibly much longer) it was a bad thing to do.  This is primarily because it would mean that updates to the array package would wait on Python releases, potentially delaying important bug fixes, performance enhancements, or new capabilities greatly.  Neither of us envisions any scenario regarding array packages, whether that be Numeric3 or numarray, where we would consider it to be something that would not *greatly* benefit from decoupling its release needs from that of Python (it's also true that it possibly introduces complications for Python releases if they need to synch with array schedules, but being inconsiderate louts, we don't care much about that).  And when one considers that the move to multicore and 64-bit processors will introduce the need for significant changes in the internals to take advantage of these capabilities, it is unlikely we will see a quiescent, maintenance-level state for an array package for some time.  In short, this issue is a distraction at the moment and will only sap energy from what needs to be done to unify the array packages.

So what about supporting arrays as an interchange format?  There are a number of possibilities to consider, none of which require inclusion of arrays into the core.  It is possible for 3rd party extensions to optionally support arrays as an interchange format through one of the following mechanisms:

a) So long as the extension package has access to the necessary array include files, it can build the extension to use the arrays as a format without actually having the array package installed.  The include files alone could be included into the core (Guido has previously been receptive to doing this, though at this meeting he didn't seem quite as receptive, instead suggesting the next option) or could be packaged with the extension (we would prefer the former to reduce the possibilities of many copies of include files).  The extension could then be successfully compiled without actually having the array package present.  The extension would, when requested to use arrays, see if it could import the array package; if not, then all use of arrays would result in exceptions.  The advantage of this approach is that it does not require that arrays be installed before the extension is built for arrays to be supported.  It could be built, and then later the array package could be installed and no rebuilding would be necessary.
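
At the Python level, the runtime side of (a) amounts to something like this sketch (invented names; a real extension would do the equivalent in C with PyImport_ImportModule):

    # try the array package once at import time
    try:
        import Numeric
        HAVE_ARRAYS = True
    except ImportError:
        HAVE_ARRAYS = False

    def result_as_array(data, shape):
        """Wrap the extension's raw result (a binary string here) in an
        array, or complain if no array package is installed."""
        if not HAVE_ARRAYS:
            raise RuntimeError("array support requested, but no array "
                               "package is installed")
        a = Numeric.fromstring(data, Numeric.Float)
        a.shape = shape
        return a
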
b) One could modify the extension build process to see if the package is installed and the include files are available; if so, it is built with the support, otherwise not.  The advantage of this approach is that it doesn't require the include files be included with the core or be bundled with the extension, thus avoiding any potential version mismatches.  The disadvantage is that later adding the array package requires the extension to be rebuilt, and it results in a more complex build process (more things to go wrong).

c) One could provide the support at the Python level by instead relying on the use of buffer objects by the extension at the C level, thus avoiding any dependence on the array C api.  So long as the extension has the ability to return buffer objects containing the putative array data to the Python level and the necessary meta information (in this case, the shape, type, and other info, e.g., byteswapping, necessary to properly interpret the array) to Python, the extension can provide its own functions or methods to convert these buffer objects into arrays without copying of the data in the buffer object.  The extension can try to import the array package, and if it is present, provide arrays as a data format using this scheme.  In many respects this is the most attractive approach.  It has no dependencies on include files, build order, etc.  This approach led to the suggestion that Python develop a buffer object that could contain meta information, and a way of supporting community conventions (e.g., a name attribute indicating which convention was being used) to facilitate the interchange of any sort of binary data, not just arrays.  We also concluded that it would be nice to be able to create buffer objects from Python with malloced memory (currently one can only create buffer objects from other objects that already have memory allocated; there is no way of creating newly allocated, writable memory from Python within a buffer object; one can create a buffer object from a string, but it is not writable).  Nevertheless, if an extension is written in C, none of these changes are necessary to make use of this mechanism for interchange purposes now.  This is the approach we recommend trying.  The obvious case to apply it to is PIL as a test case.  We should do this ourselves and offer it as a patch to PIL.  Other obvious cases are to support image interchange for GUIs (e.g., wxPython) and OpenGL.
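
A rough Python-level sketch of the consumer side of (c), with an invented metadata convention (note that from pure Python today the buffer contents must be copied out through a string; a C extension, or the improved buffer object discussed below, would avoid that copy):

    def as_array(buf, meta):
        """buf:  a buffer object handed back by the extension;
        meta: a dict such as {'shape': (2, 3), 'typecode': 'd'} --
        purely an illustrative convention, not an existing API."""
        try:
            import Numeric
        except ImportError:
            return buf                  # raw buffer is still usable
        data = buf[:]                   # buffer -> string (this copies)
        a = Numeric.fromstring(data, meta['typecode'])
        a.shape = meta['shape']
        return a
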
2) Scalar support, rank-0 and related.

Travis and I agreed (we certainly seek comments on this conclusion; we may have forgotten about key arguments arguing for one of the different approaches) that the desirability of using rank-0 arrays as return values from single element indexing depends on other factors, most importantly Python's support for scalars in various aspects.  This is a multifaceted issue that will need to be determined by considering all the facets simultaneously.  The following tries to list the pro's and con's previously discussed for returning scalars (two cases previously discussed) or rank-0 arrays (input welcomed).

a) return only existing Python scalar types (cast upwards except for long long and long double based types)

Pros:
- What users probably expect (except matlab users!)
- No performance hit in subsequent scalar expressions
- faster indexing performance (?)

Cons:
- Doesn't support array attributes, numeric behaviors
- What do you return for long long and long double?  No matter what is done, you will either lose precision or lose consistency.  Or you create a few new Python scalar types for the unrepresentable types?  But, with subclassing in C the effort to create a few scalar types is very close to the effort to create many.

b) create new Python scalar types and return those (one for each basic array type)

Pros:
- Exactly what numeric users expect in representation
- No performance hit in subsequent scalar expressions
- faster indexing performance
- Scalars have the same methods and attributes as arrays

Cons:
- Might require great political energy to eventually get the arraytype with all of its scalartype-children into the Python core.  This is really an unknown, though, since if the arrayobject is in the standard module and not in the types module, then people may not care (a new type is essentially a new-style class and there are many, many classes in the Python standard library).  A good scientific-packaging solution that decreases the desirability of putting the arrayobject into the core would help alleviate this problem as well.
- By itself it doesn't address different numeric behaviors for the "still-present" Python scalars throughout Python.

c) return rank-0 array

Pros:
- supports all array behaviors, particularly with regard to numerical processing, particularly with regard to ieee exception handling (a matter of some controversy, some would like it also to be len()=1 and support [0] index, which strictly speaking rank-0 arrays should not support)

Cons:
- Performance hit on all scalar operations (e.g., if one then does many loops over what appears to be a pure scalar expression, use of rank-0 will be much slower than Python scalars, since use of arrays incurs significant overhead).
- Doesn't eliminate the fact that one can still run into different numerical behavior involving operations between Python scalars.
- Still necessary to write code that must deal with Python scalars "leaking" into code as inputs to functions.
- Can't currently be used to index sequences (so not completely usable in place of scalars)

Out of this came two potential needs (the first isn't strictly necessary if approach a is taken, but could help smooth use of all integer types as indexes if approach b is taken):

If rank-0 arrays are returned, then Guido was very receptive to supporting a special method, __index__, which would allow any Python object to be used as an index to a sequence or mapping object.  Calling this would return a value that would be suitable as an index if the object was not itself suitable directly.  Thus rank-0 arrays would have this method called to convert their internal integer value into a Python integer.  There are some details about how this would work at the C level that need to be worked out.  This would allow rank-0 integer arrays to be used as indices.  To be useful, it would be necessary to get this into the core as quickly as possible (if there are C API issues that have lingering solutions that won't be solved right away, then a greatly delayed implementation in Python would make this less than useful).
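
In today's terms the proposal would let something like the following work (a sketch of the idea only; no current Python has __index__):

    class Rank0Int:
        """Toy stand-in for a rank-0 integer array."""
        def __init__(self, value):
            self.value = value
        def __index__(self):
            # proposed hook: hand back a real Python int for indexing
            return int(self.value)

    seq = ['a', 'b', 'c']
    i = Rank0Int(2)
    # Under the proposal, seq[i] would call i.__index__() and yield 'c';
    # today the same line raises a TypeError.
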
We talked at some length about whether it was possible to change Python's numeric behavior for scalars, namely support for configurable handling of numeric exceptions in the way numarray does it (and Numeric3 as well).  In short, not much was resolved.

Guido didn't much like the stack approach to the exception handling mode.  His argument (a reasonable one) was that even if the stack allowed pushing and popping modes, it was fragile for two reasons.  If one called other functions in other modules that were previously written without knowledge that the mode could be changed, those functions presumed the previous behavior and thus could be broken with a mode change (though we suppose that just puts the burden on the caller to guard all external calls with restores to default behavior; even so, many won't do that, leading to spurious bug reports that may annoy maintainers to no end through no fault of their own).  He also felt that some termination conditions may cause missed pops leading to incorrect modes.  He suggested studying decimal's use of context to see if it could be used as a model.  Overall he seemed to think that setting the mode on a module basis was a better approach.  Travis and I wondered how that could be implemented (it seems to imply that the exception handling needs to know what module or namespace is being executed in order to determine the mode).  So some more thought is needed regarding this.
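
For reference, the decimal model he pointed to looks like this in Python 2.4 (this is standard library behavior, shown as a touchstone rather than a proposal for arrays):

    import decimal

    ctx = decimal.getcontext()                # per-thread context object
    ctx.traps[decimal.DivisionByZero] = 0     # stop trapping the signal
    print decimal.Decimal(1) / decimal.Decimal(0)   # prints: Infinity
    ctx.traps[decimal.DivisionByZero] = 1     # trap again: now it raises
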
The difficulty of proposing such changes and getting them accepted is likely to be considerable.  But Travis had a brilliant idea (some may see this as evil but I think it has great merit).  Nothing prevents a C extension from hijacking the existing Python scalar objects' behaviors.  Once a reference is obtained to an integer, float or complex value, one can replace the table of operations on those objects with whatever code one wishes.  In this way an array package could (optionally) change the behavior of Python scalars.  We could then test the behavior of proposed changes quite easily, distribute that behavior quite easily in the community, and ultimately see if there are really any problems without expending any political energy to get it accepted.  Seeing that it really worked (without "forking" Python either) would place us in a much stronger position to have the new behaviors incorporated into the core.  Even then, it may never prove necessary if scalar behavior can be so customized by the array package.  This holds out the potential of making scalar/array behavior much more consistent.  Doing this may allow option a) as the ultimate solution, i.e., no changes needed to Python at all (as such), no rank-0 arrays.  This will be studied further.  One possible issue is that adding the necessary machinery to make numeric scalar processing consistent with that of the array package may introduce significant performance penalties (what is negligible overhead for arrays may not be for scalars).

One last comment is that it is unlikely that any choice in this area prevents the need for added helper functions to the array package to assist in writing code that works well with scalars and arrays.  There are likely a number of such issues.  A common approach is to wrap all unknown objects with "asarray".  This works reasonably well but doesn't handle the following case: if you wish to write a function that will accept arrays or scalars, in principle it would be nice to return scalars if all that was supplied were scalars.  So functions to help determine what the output type should be based on the inputs would be helpful, for example to distinguish between a rank-0 array input (or a rank-1, length-1 array) and an actual scalar, if asarray happens to map these to the same thing, so that the function can properly return a scalar when that is what was originally input.  Other such tools may help writing code that allows the main body to treat all objects as arrays without needing checks for scalars.

Other miscellaneous comments: the old use of where() may be deprecated and only the "nonzero" interpretation will be kept.  A new function will be defined to replace the old usage of where (we deem that regular expression search and replaces should work pretty well to make changes in almost all old code).  With the use of buffer objects, tostring methods are likely to be deprecated.

Python PEPs needed
===================

From the discussions it was clear that at least two Python PEPs need to be written and implemented, but that these needed to wait until the unification of the arrayobject takes place.

PEP 1: Insertion of an __index__ special method and an as_index slot (perhaps in the as_sequence methods) in the C-level typeobject into Python.

PEP 2: Improvements on the buffer object and buffer builtin method so that buffer objects can be Python-tracked wrappers around allocated memory that extension packages can use and share.  Two extensions are considered so far: 1) the buffer objects have a meta attribute so that meta information can be passed around in a unified manner, and 2) the buffer builtin should take an integer giving the size of the writeable buffer object to create.

From jh at oobleck.astro.cornell.edu  Thu Mar 10 11:45:50 2005
From: jh at oobleck.astro.cornell.edu (Joe Harrington)
Date: Thu, 10 Mar 2005 11:45:50 -0500
Subject: [SciPy-dev] Re: Notes from meeting with Guido regarding inclusion
	of array package in Python core
In-Reply-To: <20050310153125.D7F6088827@sc8-sf-spam1.sourceforge.net>
	(numpy-discussion-request@lists.sourceforge.net)
References: <20050310153125.D7F6088827@sc8-sf-spam1.sourceforge.net>
Message-ID: <200503101645.j2AGjopf019350@oobleck.astro.cornell.edu>

It never rains, but it pours!  Thanks for talking with Guido and hammering out these issues and options.

You are of course right that the release schedule issue is enough to keep us out of Python core for the time being (and matplotlib out of scipy, according to JDH at SciPy04, for the same reason).  However, I think we should still strongly work to put it there eventually.  For now, this means keeping it "acceptable", and communicating with Guido often to get his feedback and let him know what we are doing.

There are three reasons I see for this.  First, having it core-acceptable makes it clear to potential users that this is standard, stable, well-thought-out stuff.  Second, it will mean that numerical behavior and plain python behavior will be as close as possible, so it will be easiest to switch between the two.  Third, if we don't strive for acceptability, we will likely run into a problem in the future when something we depend on is deprecated or changed.  No doubt this will happen anyway, but it will be worse if we aren't tight with Guido.  Conversely, if we *are* tight with Guido, he is likely to be aware of our concerns and take them into account when making decisions about Python core.

--jh--

From swisher at enthought.com  Thu Mar 10 12:03:37 2005
From: swisher at enthought.com (Janet M. Swisher)
Date: Thu, 10 Mar 2005 11:03:37 -0600
Subject: [SciPy-dev] Re: [SciPy-user] Re: Future directions for SciPy in
	light of meeting at Berkeley
In-Reply-To: 
References: <20050309112636.24F99334FE@sc8-sf-spam1.sourceforge.net>
	<200503091542.j29Fg7nX021779@oobleck.astro.cornell.edu>
	<16943.12528.791748.115592@monster.linux.in>
Message-ID: <42307DE9.9060902@enthought.com>

John Hunter wrote:

>    Prabhu> Do you already have the top level document that guides the
>    Prabhu> new user through this 15 minute drill?  If you don't,
>    Prabhu> would it help if I took a stab at it?  I often find that a
>    Prabhu> top level document of this form helps us fill in the
>    Prabhu> blanks.
>
> Fernando and I have been working on such a beast.
>
> We have draft chapters for the intro, ipython, matplotlib and VTK so far.
>
> It's too rough and incomplete now for distribution, but we hope to
> have something useful in insert-time-interval-here :-)

If you'd like, I'd be happy to do "developmental" editing --- in other words, review your outline and make suggestions about structure, content coverage, etc. (leaving aside copy-editing until later).  Feel free to contact me off-list.

--
Janet Swisher --- Senior Technical Writer
Enthought, Inc.  http://www.enthought.com

From stephen.walton at csun.edu  Thu Mar 10 12:33:42 2005
From: stephen.walton at csun.edu (Stephen Walton)
Date: Thu, 10 Mar 2005 09:33:42 -0800
Subject: [SciPy-dev] Re: [Numpy-discussion] Current thoughts on future
	directions
In-Reply-To: <422FBD4A.3030708@ee.byu.edu>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com> <422FBD4A.3030708@ee.byu.edu>
Message-ID: <423084F6.7020804@csun.edu>

Can I put in a good word for Fortran?  Not the language itself, but the available packages for it.  I've always thought that one of the really good things about Scipy was the effort put into getting all those powerful, well tested, robust Fortran routines from Netlib inside Scipy.  Without them, it seems to me that folks who just install the new scipy_base are going to re-invent a lot of wheels.  Is it really that hard to install g77 on non-Linux platforms?

Steve Walton

From oliphant at ee.byu.edu  Thu Mar 10 12:50:39 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 10 Mar 2005 10:50:39 -0700
Subject: [SciPy-dev] Re: [Numpy-discussion] Current thoughts on future
	directions
In-Reply-To: <423084F6.7020804@csun.edu>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <423084F6.7020804@csun.edu>
Message-ID: <423088EF.5060807@ee.byu.edu>

Stephen Walton wrote:

> Can I put in a good word for Fortran?  Not the language itself, but
> the available packages for it.  I've always thought that one of the
> really good things about Scipy was the effort put into getting all
> those powerful, well tested, robust Fortran routines from Netlib
> inside Scipy.  Without them, it seems to me that folks who just
> install the new scipy_base are going to re-invent a lot of wheels.

I didn't emphasize it, but of course there will still be the rest of scipy available wrapping all the fortran stuff (and more).  This will be advertised at the site where people download scipy_core.  If they don't need it they won't download it, but they will be aware of its existence.  It will just be a separate package (or set of packages).  In other words we go from today's Numeric + scipy to tomorrow's scipy_core + scipy.
So, this really isn't different from the current experience for scipy users.

>
> Is it really that hard to install g77 on non-Linux platforms?

I think what Konrad (and I know of at least 3 others) are saying is that it is hard enough that he doesn't want to require it for users of his packages, especially when his package doesn't need it.  I think the argument has merit.

The current strategy doesn't really change anything for current scipy users.  But, it does make it possible for more people to start using scipy_core and have a common package that nearly all of the community can get behind.

-Travis

From charles.harris at sdl.usu.edu  Thu Mar 10 14:41:58 2005
From: charles.harris at sdl.usu.edu (Charles R Harris)
Date: Thu, 10 Mar 2005 12:41:58 -0700
Subject: [SciPy-dev] Notes from meeting with Guido regarding inclusion of
	array package in Python core
In-Reply-To: 
References: 
Message-ID: <1110483718.5622.19.camel@E011704>

Hi All,

On Thu, 2005-03-10 at 10:28 -0500, Perry Greenfield wrote:
> c) One could provide the support at the Python level by instead relying
> on the use of buffer objects by the extension at the C level, thus
> avoiding any dependence on the array C api.  So long as the extension
> has the ability to return buffer objects containing the putative array
> data to the Python level and the necessary meta information (in this
> case, the shape, type, and other info, e.g., byteswapping, necessary to
> properly interpret the array) to Python, the extension can provide its
> own functions or methods to convert these buffer objects into arrays
> without copying of the data in the buffer object.  The extension can try
> to import the array package, and if it is present, provide arrays as a
> data format using this scheme.  In many respects this is the most
> attractive approach.  It has no dependencies on include files, build
> order, etc.  This approach led to the suggestion that Python develop a
> buffer object that could contain meta information, and a way of
> supporting community conventions (e.g., a name attribute indicating
> which convention was being used) to facilitate the interchange of any
> sort of binary data, not just arrays.  We also concluded that it would
> be nice to be able to create buffer objects from Python with malloced
> memory (currently one can only create buffer objects from other objects
> that already have memory allocated; there is no way of creating newly
> allocated, writable memory from Python within a buffer object; one can
> create a buffer object from a string, but it is not writable).

I like this idea.  It also provides a nice interface for writing classes in python that use c (c++) routines, where the buffer can be used to store state information.  I needed something like this for a Mersenne Twister class and ended up using c++ with boost/python.

> 2) Scalar support, rank-0 and related.  Travis and I agreed (we
> certainly seek comments on this conclusion; we may have forgotten about
> key arguments arguing for one of the different approaches) that the
> desirability of using rank-0 arrays as return values from single
> element indexing depends on other factors, most importantly Python's
> support for scalars in various aspects.  This is a multifaceted issue
> that will need to be determined by considering all the facets
> simultaneously.  The following tries to list the pro's and con's
> previously discussed for returning scalars (two cases previously
> discussed) or rank-0 arrays (input welcomed).
I agree that this is a difficult problem and it is hard to make a decision.  I think the simplest solution is to return rank-0 arrays.  Once people get used to this it should seem natural and obviate all that checking that currently goes on.  Overhead is a concern, but I suspect that the heavy computational lifting will be done by the array routines, not python code.  However, it is important to make sure it is easy to write extensions so things can be prototyped in python, then made efficient with an extension.  One further problem with this approach is that python types need to be cast to suitable types for the array.  When working with, say, a float array and multiplying by a python 1.0, it would be undesirable to return the result as a double.  Using rank-0 arrays preserves the type info, but for python scalars perhaps we need some simple typecast functions so that one can write Int16(pythonScalar) and get the appropriate scalar to use in a numeric routine.  Perhaps a special Numeric3 scalar type would be appropriate and address the efficiency problems.

One other glitch I see with this approach is that numerix functions like sin() currently take the place of the same functions in the math module.  It would be nice if the different behavior -- if there is such -- could be made transparent.

> From the discussions it was clear that at least two Python PEPs need to
> be written and implemented, but that these needed to wait until the
> unification of the arrayobject takes place.
>
> PEP 1: Insertion of an __index__ special method and an as_index slot
> (perhaps in the as_sequence methods) in the C-level typeobject into
> Python.
>
> PEP 2: Improvements on the buffer object and buffer builtin method so
> that buffer objects can be Python-tracked wrappers around allocated
> memory that extension packages can use and share.  Two extensions are
> considered so far.  1) The buffer objects have a meta attribute so that
> meta information can be passed around in a unified manner and 2) The
> buffer builtin should take an integer giving the size of writeable
> buffer object to create.

Thanks for pursuing this.  In the long run it will make python a more serviceable language for numerical work.

Chuck

From charles.harris at sdl.usu.edu  Thu Mar 10 14:56:20 2005
From: charles.harris at sdl.usu.edu (Charles R Harris)
Date: Thu, 10 Mar 2005 12:56:20 -0700
Subject: [SciPy-dev] Current thoughts on future directions
In-Reply-To: <422FBD4A.3030708@ee.byu.edu>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<422FBD4A.3030708@ee.byu.edu>
Message-ID: <1110484580.5622.25.camel@E011704>

On Wed, 2005-03-09 at 20:21 -0700, Travis Oliphant wrote:
> matrix and polynomial classes

But I don't like the order in which the polynomial coefficients are stored.  Oh well.

>
> My scipy-related focus over the next 5-6 months will be to get
> scipy_core to the point that most can agree it effectively replaces the
> basic tools of Numeric and numarray.

Thanks for this.  The split between numarray and Numeric has certainly made life difficult.
-Chuck

From perry at stsci.edu  Thu Mar 10 15:21:16 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 10 Mar 2005 15:21:16 -0500
Subject: [SciPy-dev] Re: [Numpy-discussion] Notes from meeting with Guido
	regarding inclusion of array package in Python core
In-Reply-To: <4230D60A.9050108@noaa.gov>
References: <4230D60A.9050108@noaa.gov>
Message-ID: <747d91fa83ebcbcfc71fb15dc54bce5b@stsci.edu>

On Mar 10, 2005, at 6:19 PM, Chris Barker wrote:
>
>> a) So long as the extension package has access to the necessary array
>> include files, it can build the extension to use the arrays as a
>> format without actually having the array package installed.  The
>> extension would, when requested to use arrays, see if it could
>> import the array package; if not, then all use of arrays would result
>> in exceptions.
>
> I'm not sure this is even necessary.  In fact, in the above example,
> what would most likely happen is that the **Helper functions would
> check to see if the input object was an array, and then fork the code
> if it were.  An array couldn't be passed in unless the package were
> there, so there would be no need for checking imports or raising
> exceptions.

So what would the helper function do if the argument was an array?  You mean use the sequence protocol?  Yes, I suppose that is always a fallback (but presumes that the original code to deal with such things is present; figuring out that a sequence satisfies array constraints can be a bit involved, especially at the C level)

>> It could be built, and then later the array package could be
>> installed and no rebuilding would be necessary.
>
> That is a great feature.
>
> I'm concerned about the inclusion of all the headers in either the
> core or with the package, as that would lock you to a different
> upgrade cycle than the main numerix upgrade cycle.  It's my experience
> that Numeric has not been binary compatible across versions.

Hmmm, I thought it had been.  It does make it much harder to change the api and structure layouts once in, but I thought that had been pretty stable.

>> b) One could modify the extension build process to see if the package
>> is installed and the include files are available; if so, it is built
>> with the support, otherwise not.  The disadvantage is that later adding
>> the array package requires the extension to be rebuilt

> This is a very big deal as most users on Windows and OS-X (and maybe
> even Linux) don't build packages themselves.
>
> A while back this was discussed on this very list, and it seemed like
> there was some idea about including not the whole numerix header
> package, but just the code for PyArray_Check or an equivalent.  This
> would allow code to check if an input object was an array, and do
> something special if it was.  That array-specific code would only get
> run if an array was passed in, so you'd know numerix was installed at
> run time.  This would require Numerix to be installed at build time,
> but it would be optional at run time.  I like this, because anyone
> capable of building wxPython (it can be tricky) is capable of
> installing Numeric, but folks that are using binaries don't need to
> know anything about it.
>
> This would only really work for extensions that use arrays, but don't
> create them.  We'd still have the version mismatch problem too.

Yes, at the binary level.
Perry

From perry at stsci.edu  Thu Mar 10 15:29:31 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 10 Mar 2005 15:29:31 -0500
Subject: [SciPy-dev] Notes from meeting with Guido regarding inclusion of
	array package in Python core
In-Reply-To: <1110483718.5622.19.camel@E011704>
References: <1110483718.5622.19.camel@E011704>
Message-ID: <0dcd3e327a7881fed401a85c71262c1e@stsci.edu>

On Mar 10, 2005, at 2:41 PM, Charles R Harris wrote:
>
> I agree that this is a difficult problem and it is hard to make a
> decision.  I think the simplest solution is to return rank-0 arrays.
> Once people get used to this it should seem natural and obviate all that
> checking that currently goes on.  Overhead is a concern, but I suspect
> that the heavy computational lifting will be done by the array routines,
> not python code.  However, it is important to make sure it is easy to
> write extensions so things can be prototyped in python, then made
> efficient with an extension.  One further problem with this approach is
> that python types need to be cast to suitable types for the array.  When
> working with, say, a float array and multiplying by a python 1.0, it
> would be undesirable to return the result as a double.  Using rank-0
> arrays preserves the type info, but for python scalars perhaps we need
> some simple typecast functions so that one can write Int16(pythonScalar)
> and get the appropriate scalar to use in a numeric routine.  Perhaps a
> special Numeric3 scalar type would be appropriate and address the
> efficiency problems.

Actually, the coercion issue is already handled by the fact that either numarray or Numeric3 arrays don't allow scalars to force coercion of arrays unless the scalar is of a "higher" kind than arrays (e.g., float scalar with int array).  So multiplying an Int16 array with the constant 3 won't change the result to Int32.
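
Concretely, the rule looks like this in numarray (a sketch; the result types shown are what the no-upcast rule implies rather than verified output):

    import numarray

    a = numarray.array([1, 2, 3], type=numarray.Int16)

    print (a * 3).type()      # Int16   -- a Python int doesn't upcast
    print (a * 3.0).type()    # Float64 -- a float scalar is a "higher
                              #            kind", so coercion happens
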
Perry

From charles.harris at sdl.usu.edu  Thu Mar 10 20:39:25 2005
From: charles.harris at sdl.usu.edu (Charles R Harris)
Date: Thu, 10 Mar 2005 18:39:25 -0700
Subject: [SciPy-dev] formatting code in wiki pages
Message-ID: <1110505165.5622.55.camel@E011704>

Hi all,

I am trying to put some c++ code on a wiki page and having a heck of a time.  The zwiki software is supposed to recognize structured text, but it fails to recognize the :: token for literal code snippets.  Likewise the <code> tag is unrecognized.  The <pre> tag works, sorta, but all
those <,>,& etc have special html meanings. Can anyone help me out?



From perry at stsci.edu  Thu Mar 10 22:13:22 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 10 Mar 2005 22:13:22 -0500
Subject: [SciPy-dev] formatting code in wiki pages
In-Reply-To: <1110505165.5622.55.camel@E011704>
Message-ID: 

Did you make sure the code was indented (all the lines)?
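
In structured text, a paragraph ending in :: followed by a uniformly indented block is what marks a literal; roughly like this (a sketch of the convention, not tested against that zwiki installation):

    Here is the snippet::

        for (int i = 0; i < n; ++i) {
            sum += a[i];
        }

If any line of the snippet loses its indentation, the parser falls out of the literal block, which would explain the behavior you're seeing.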

> -----Original Message-----
> From: scipy-dev-bounces at scipy.net [mailto:scipy-dev-bounces at scipy.net]On
> Behalf Of Charles R Harris
> Sent: Thursday, March 10, 2005 8:39 PM
> To: scipy-dev at scipy.net
> Subject: [SciPy-dev] formatting code in wiki pages
> 
> 
> Hi all,
> 
> I am trying to put some c++ code on a wiki page and having a heck of a
> time. The zwiki software is supposed to recognize structured text, but
> it fails to recognize the :: token for literal code snippets. Likewise
> the <code> tag is unrecognized. The <pre> tag works, sorta, but all
> those <,>,& etc have special html meanings. Can anyone help me out?
> 
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
> 



From oliphant at ee.byu.edu  Thu Mar 10 22:54:07 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 10 Mar 2005 20:54:07 -0700
Subject: [SciPy-dev] Re: [Numpy-discussion] Re: [SciPy-user] Current
 thoughts on future directions
In-Reply-To: <42310D7D.3000009@ims.u-tokyo.ac.jp>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <422FD009.4020706@enthought.com>
	
	<42310D7D.3000009@ims.u-tokyo.ac.jp>
Message-ID: <4231165F.1040908@ee.byu.edu>

Michiel Jan Laurens de Hoon wrote:

> Perry Greenfield wrote:
>
>> On Mar 9, 2005, at 11:41 PM, eric jones wrote:
>>
>>> 2. I do question whether weave should really be in this core?  I think it 
>>> was in scipy_core before because it was needed to build some of scipy.
>>> 3. Now that I think about it, I also wonder if f2py should really be 
>>> there -- especially since we are explicitly removing any fortran 
>>> dependencies from the core.
>>
>>
>>
>> It would seem to me that so long as:
>>
>> 1) both these tools have very general usefulness (and I think they 
>> do), and
>> 2) are not installation problems (I don't believe they are since they 
>> themselves don't require any compilation of Fortran, C++ or 
>> whatever--am I wrong on that?)
>>
>> That they are perfectly fine to go into the core. In fact, if they 
>> are used by any of the extra packages, they should be in the core to 
>> eliminate the extra step in the installation of those packages.
>>
> -0.
> 1) In der Beschraenkung zeigt sich der Meister. In other words, avoid 
> software bloat.
> 2) f2py is a Fortran-Python interface generator, once the interface is 
> created there is no need for the generator.
> 3) I'm sure f2py is useful, but I doubt that it has very general 
> usefulness. There are lots of other useful Python packages, but we're 
> not including them in scipy-core either.
> 4) f2py and weave don't fit in well with the rest of scipy-core, which 
> is mainly standard numerical algorithms.


I'm of the opinion that f2py and weave should go into the core. 

1) Neither one requires Fortran and both install very, very easily.
2) These packages are fairly small but provide huge utility ---  
inlining fortran or C code is an easy way to speed up Python. People who 
don't "need it" will never realize it's there.
3) Building the rest of scipy will need at least f2py already installed 
and it would simplify the process.
4) Enthought packages (to be released in the future and of interest to 
scientists) rely on weave.  Why not make that process easier with a 
single initial install? 
5) It would encourage improvements of weave and f2py from the entire 
community.  
6) The developers of f2py and weave are both scipy developers and so it 
would make sense for their code that forms
a foundation for other work to go into scipy_core. 

-Travis





From prabhu_r at users.sf.net  Fri Mar 11 03:28:59 2005
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Fri, 11 Mar 2005 13:58:59 +0530
Subject: [SciPy-dev] Re: [Numpy-discussion] Re: [SciPy-user] Current
	thoughts	on future directions
In-Reply-To: <4231165F.1040908@ee.byu.edu>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <422FD009.4020706@enthought.com>
	
	<42310D7D.3000009@ims.u-tokyo.ac.jp> <4231165F.1040908@ee.byu.edu>
Message-ID: <16945.22219.772480.154332@monster.linux.in>

>>>>> "TO" == Travis Oliphant  writes:

    TO> I'm of the opinion that f2py and weave should go into the
    TO> core.

If you are looking for feedback, I'd say +2 for that.

regards,
prabhu



From stephen.walton at csun.edu  Mon Mar 14 18:11:41 2005
From: stephen.walton at csun.edu (Stephen Walton)
Date: Mon, 14 Mar 2005 15:11:41 -0800
Subject: [SciPy-dev] Re: [Numpy-discussion] Current thoughts on future
	directions
In-Reply-To: <4231076E.6090507@ims.u-tokyo.ac.jp>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu>
	<423084F6.7020804@csun.edu> <4231076E.6090507@ims.u-tokyo.ac.jp>
Message-ID: <42361A2D.2030708@csun.edu>

Michiel Jan Laurens de Hoon wrote:

> I agree that Netlib should be in SciPy. But why should Netlib be in 
> scipy_base?

It should not, and I'm sorry if my original message made it sound like I 
was advocating for that.  I was mainly advocating for f2py to be in 
scipy_base.



From stephen.walton at csun.edu  Mon Mar 14 18:16:09 2005
From: stephen.walton at csun.edu (Stephen Walton)
Date: Mon, 14 Mar 2005 15:16:09 -0800
Subject: [SciPy-dev] Re: [Numpy-discussion] Current thoughts on future
	directions
In-Reply-To: 
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <423084F6.7020804@csun.edu>
	
Message-ID: <42361B39.6090300@csun.edu>

konrad.hinsen at laposte.net wrote:

> On Mar 10, 2005, at 18:33, Stephen Walton wrote:
>
>>
>> Is it really that hard to install g77 on non-Linux platforms?
>
>
> It takes some careful reading of the instructions...

I have to admit that I hadn't remembered the pain I went through 
building gcc/g77 on HP-UX until Konrad reminded me.  He's right, it's 
_really_ hard.  That said, when I was on HP-UX I never used g77, I used 
HP's Fortran and ANSI C compilers.  So, when we talk about Fortran in 
Scipy (and I've redirected this message to scipy-dev), for non-Linux 
Unix platforms I think we mean supporting the vendor-provided 
compilers.  And, since Enthought can't keep an in-house copy of every 
variant, that means we'll be dependent on users of that platform to 
report what doesn't work, and maybe even try to fix it.  The scripts for 
supporting various compilers in Scipy are simple and easy to 
understand.  I've messed with the Absoft one myself.



From Fernando.Perez at colorado.edu  Tue Mar 15 20:46:46 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Tue, 15 Mar 2005 18:46:46 -0700
Subject: [SciPy-dev] Re: [Numpy-discussion] Re: [SciPy-user] Current
 thoughts on future directions
In-Reply-To: <42310D7D.3000009@ims.u-tokyo.ac.jp>
References: <422EA691.9080404@ee.byu.edu> <422F335F.8060107@csun.edu>
	<6916ec732f2e70d1789cc0f480f82e7f@redivi.com>
	<422FBD4A.3030708@ee.byu.edu> <422FD009.4020706@enthought.com>
	
	<42310D7D.3000009@ims.u-tokyo.ac.jp>
Message-ID: <42379006.4070501@colorado.edu>

Michiel Jan Laurens de Hoon wrote:
> Perry Greenfield wrote:

[weave & f2py in the core]

>>That they are perfectly fine to go into the core. In fact, if they are 
>>used by any of the extra packages, they should be in the core to 
>>eliminate the extra step in the installation of those packages.
>>
> 
> -0.
> 1) In der Beschraenkung zeigt sich der Meister. In other words, avoid 
> software bloat.
> 2) f2py is a Fortran-Python interface generator, once the interface is 
> created there is no need for the generator.
> 3) I'm sure f2py is useful, but I doubt that it has very general 
> usefulness. There are lots of other useful Python packages, but we're 
> not including them in scipy-core either.
> 4) f2py and weave don't fit in well with the rest of scipy-core, which 
> is mainly standard numerical algorithms.

I'd like to argue that these two tools are actually critically important in 
the core of a python-for-scientific-computing toolkit, at its most basic 
layer.  The reason is that python's dynamic runtime type checking makes it 
impossible to write efficient loop-based code, as we all know.  And it is not 
always feasible to write all algorithms in terms of Numeric vector operations: 
sometimes you just need to write an indexed loop.

At this point, the standard python answer is 'go write an extension module'. 
While writing extension modules by hand, from scratch, is not all that hard, 
it certainly presents a significant barrier for less experienced programmers. 
  And yet both weave and f2py make it incredibly easy to get working compiled 
array code in no time at all.  I say this from direct experience, having 
pointed colleagues to weave and f2py for this very problem.  After handing 
them some notes I have to get started, they've come back saying "I can't 
believe it was that easy: in a few minutes I had sped up the loop I needed 
with a bit of C, and now I can continue working on the problem I'm interested 
in".  I know for a fact that if I'd told them to write a full extension module 
by hand, the result would have been quite different.
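
The pattern involved is tiny; something like this sketch (function and data invented for illustration; it assumes scipy 0.3-era weave, importable as scipy.weave, with the blitz type converters and Numeric installed):

    import Numeric
    from scipy import weave
    from scipy.weave import converters

    def moving_sum(a, width, out):
        """Sliding-window sum, with the inner loop done in C++."""
        n = len(a)
        code = """
               for (int i = 0; i < n - width + 1; ++i) {
                   double s = 0.0;
                   for (int j = 0; j < width; ++j)
                       s += a(i + j);
                   out(i) = s;
               }
               """
        weave.inline(code, ['a', 'n', 'width', 'out'],
                     type_converters=converters.blitz)

    a = Numeric.arange(10.0)
    out = Numeric.zeros(8, Numeric.Float)
    moving_sum(a, 3, out)       # out -> [ 3.  6.  9. ... 24.]

The first call pays the compile cost (the result is cached on disk); later calls dispatch straight to the compiled code.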

The reality is that, in scientific work, you are likely to run into this 
problem at a very early stage, much more so than for other kinds of python 
usage.  For this reason, it is important that the basic toolset provides a 
clean solution from the start.

At least that's been my experience.

Regards,

f



From gruben at bigpond.net.au  Wed Mar 16 05:21:12 2005
From: gruben at bigpond.net.au (Gary Ruben)
Date: Wed, 16 Mar 2005 21:21:12 +1100
Subject: [SciPy-dev] Possible bug
Message-ID: <42380898.80108@bigpond.net.au>

I recently tried installing Fernando Perez's IPython under win98 and 
experienced a small problem, which he suggests might be a scipy problem.

Typing
import scipy
help(scipy)

or

help
scipy

in a Win98 DOS window running IPython starts displaying the help, but 
the DOS window freezes once a screenful of help is displayed.

If I run IPython from the IPython_shell.py shortcut and look at the
running processes, I note two:
Winoldap
python    <- DOS shell

If I then type
help
scipy

I notice that the DOS window title changes from Python to GNUPLO~1 and
some gnuplot processes start up:
Winoldap
GNUPLO~1  <- DOS shell
Gnuplot_helper
Win32popenwin9x
Wgnuplot

The DOS window freezes up.
I don't know why gnuplot is starting.
The help> prompt system in IPython seems to work in other cases, so the 
problem seems scipy-specific. It's not a showstopper for me by any means.

Gary R.



From arnd.baecker at web.de  Thu Mar 24 16:26:08 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 24 Mar 2005 22:26:08 +0100 (CET)
Subject: [SciPy-dev] check_syevr_irange/check_syevr_vrange ?
Message-ID: 


Hi,

to figure out a question related to a recent snapshot for windows
by Joe Cooper I installed on my debian sarge machine
a pretty recent scipy from CVS (see below).
For scipy.test(1) there are 4 errors, all related
to check_syevr_irange/check_syevr_vrange

I am not worried too much about the errors, as the results
seem pretty close.
Still I thought it would be better to report those here.

Best,

Arnd



Python 2.3.5 (#1, Mar 22 2005, 11:11:34)
Type "copyright", "credits" or "license" for more information.

IPython 0.6.13_cvs -- An enhanced Interactive Python.
?       -> Introduction to IPython's features.
%magic  -> Information about IPython's 'magic' % functions.
help    -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]:import scipy
numerix Numeric 23.8
In [2]:scipy.__version__
Out[2]:'0.3.3_303.4584'
In [3]:scipy.test(1)


======================================================================
FAIL: check_heevr_irange_high
(scipy.lib.lapack.test_lapack.test_flapack_complex)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 80, in check_heevr_irange_high
    def check_heevr_irange_high(self):
self.check_syevr_irange(sym='he',irange=[1,2])
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 68, in check_syevr_irange
    assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i])
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy_test/testing.py",
line 740, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 33.3333333333%):
        Array 1: [ 3.6929266+0.j  4.0951085+0.j  7.3420534+0.j]
        Array 2: [ 3.6929276+0.j  4.0951101+0.j  7.3420528+0.j]


======================================================================
FAIL: check_heevr_vrange
(scipy.lib.lapack.test_lapack.test_flapack_complex)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 102, in check_heevr_vrange
    def check_heevr_vrange(self): self.check_syevr_vrange(sym='he')
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 94, in check_syevr_vrange
    assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i])
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy_test/testing.py",
line 740, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 33.3333333333%):
        Array 1: [ 3.6929266+0.j  4.0951085+0.j  7.3420534+0.j]
        Array 2: [ 3.6929276+0.j  4.0951101+0.j  7.3420528+0.j]

======================================================================
FAIL: check_syevr_irange_high
(scipy.lib.lapack.test_lapack.test_flapack_float)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 74, in check_syevr_irange_high
    def check_syevr_irange_high(self):
self.check_syevr_irange(irange=[1,2])
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 68, in check_syevr_irange
    assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i])
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy_test/testing.py",
line 740, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 33.3333333333%):
        Array 1: [ 3.6929266  4.0951085  7.3420534]
        Array 2: [ 3.6929276  4.0951101  7.3420528]


======================================================================
FAIL: check_syevr_vrange (scipy.lib.lapack.test_lapack.test_flapack_float)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py",
line 94, in check_syevr_vrange
    assert_array_almost_equal(dot(a,v[:,i]),w[i]*v[:,i])
  File
"/home/abaecker/NBB/INSTALL_SOFT/TEST_NUMERIC3/InstDir/lib/python2.3/site-packages/scipy_test/testing.py",
line 740, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 33.3333333333%):
        Array 1: [ 3.6929266  4.0951085  7.3420534]
        Array 2: [ 3.6929276  4.0951101  7.3420528]
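
For illustration, a hedged check (not from the report) of just how close 
the arrays above are, using the `decimal` argument of 
assert_array_almost_equal from scipy_test.testing:

    # the values differ by at most ~1.6e-6, which misses the default
    # decimal=6 tolerance (|x - y| < 0.5e-6) but is well within decimal=5;
    # about what single-precision eigensolvers can deliver
    from scipy_test.testing import assert_array_almost_equal

    w1 = [3.6929266, 4.0951085, 7.3420534]
    w2 = [3.6929276, 4.0951101, 7.3420528]

    assert_array_almost_equal(w1, w2, decimal=5)    # passes
    # assert_array_almost_equal(w1, w2, decimal=6)  # fails, as above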


If I should send the full output of
  python scipy_core/scipy_distutils/system_info.py
or any further info, just let me know...



From oliphant at ee.byu.edu  Thu Mar 24 19:36:40 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 24 Mar 2005 17:36:40 -0700
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: <4242BA03.5050204@ims.u-tokyo.ac.jp>
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp>
Message-ID: <42435D18.809@ee.byu.edu>

Michiel Jan Laurens de Hoon wrote:

> Arnd's comment raises the question of how to try out or contribute to 
> Numeric3 if the code base is changing from day to day. It may be a 
> good idea to set up some division of labor, so we can contribute to 
> Numeric3 without getting in each other's way. For example, I'd be 
> interested in working on setup.py and putting different parts of 
> Numeric3/scipy_base together.
>

Michiel, you are free to work on setup.py all you want :-)

Putting the parts of scipy_base together is a good idea.   Exactly how 
to structure this is going to require some thought and need to be 
coordinated with current scipy.

I want a package that is as easy to install as current Numeric (so the 
default will have something like lapack_lite). 

But, this should not handicap nor ignore a speed-conscious user who 
wants to install ATLAS or take advantage of vendor-supplied libraries.

There should be a way to replace functionality that is clean and does 
not require editing setup.py files.

Anybody with good ideas about how to do this well is welcome to speak 
up.   

Perhaps, the easiest thing to do is to keep the basic Numeric structure 
(with C-based easy-to-install additions) and call it scipylite  (with 
backwards compatibility provided for Numeric, LinearAlgebra, 
RandomArray, and MLab names).  This also installs the namespace scipy 
which has a little intelligence in it to determine whether atlas and 
fortran capabilities are installed or not.

Then, provide a scipyatlas package that can be installed to take 
advantage of atlas and vendor-supplied lapack/blas.

Then, a scipyfortran package that can be installed if you have a fortran 
compiler which provides the functionality provided by fortran libraries. 

So, there are three divisions here. 
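
For illustration, a hedged sketch (not Travis's code; the package names 
follow the proposal above, the probing logic is made up) of the kind of 
"little intelligence" the scipy namespace could carry:

    # scipy/__init__.py sketch: probe for the optional accelerated
    # packages and fall back to the lite functionality when absent
    try:
        import scipyatlas           # ATLAS / vendor blas+lapack wrappers
        HAVE_ATLAS = True
    except ImportError:
        HAVE_ATLAS = False          # fall back to lapack_lite

    try:
        import scipyfortran         # fortran-built extras
        HAVE_FORTRAN = True
    except ImportError:
        HAVE_FORTRAN = False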

Feedback and criticisms encouraged and welcomed.....

-Travis



From pearu at scipy.org  Fri Mar 25 02:56:55 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 25 Mar 2005 01:56:55 -0600 (CST)
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: <42435D18.809@ee.byu.edu>
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp> <42435D18.809@ee.byu.edu>
Message-ID: 



On Thu, 24 Mar 2005, Travis Oliphant wrote:

> Michiel Jan Laurens de Hoon wrote:
>
>> Arnd's comment raises the question of how to try out or contribute to 
>> Numeric3 if the code base is changing from day to day. It may be a good 
>> idea to set up some division of labor, so we can contribute to Numeric3 
>> without getting in each other's way. For example, I'd be interested in 
>> working on setup.py and putting different parts of Numeric3/scipy_base 
>> together.
>> 
>
> Michiel, you are free to work on setup.py all you want :-)
>
> Putting the parts of scipy_base together is a good idea.   Exactly how to 
> structure this is going to require some thought and need to be coordinated 
> with current scipy.
>
> I want a package that is as easy to install as current Numeric (so the 
> default will have something like lapack_lite). 
> But, this should not handicap nor ignore a speed-conscious user who wants to 
> install ATLAS or take advantage of vendor-supplied libraries.
>
> There should be a way to replace functionality that is clean and does not 
> require editing setup.py files.
>
> Anybody with good ideas about how to do this well is welcome to speak up. 
> Perhaps, the easiest thing to do is to keep the basic Numeric structure (with 
> C-based easy-to-install additions) and call it scipylite  (with backwards 
> compatibility provided for Numeric, LinearAlgebra, RandomArray, and MLab 
> names).  This also installs the namespace scipy which has a little 
> intelligence in it to determine if you have altas and fortran capabilities 
> installed or not.
>
> Then, provide a scipyatlas package that can be installed to take advantage of 
> atlas and vendor-supplied lapack/blas.
>
> Then, a scipyfortran package that can be installed if you have a fortran 
> compiler which provides the functionality provided by fortran libraries. 
> So, there are three divisions here.

Hmm, introducing scipylite, scipyatlas, and scipyfortran packages 
does not sound like a good idea. The usage of atlas, fortran blas/lapack, or 
vendor-based blas/lapack libraries is an implementation detail and should 
not be reflected in the scipy_base package structure. Such an 
approach is not suitable for writing portable Numeric3-based applications 
or packages. For example, if a developer uses the scipyfortran package in a 
package, it immediately reduces the number of potential users for that 
package.

I got the impression from earlier threads that scipy_distutils will be 
included in scipy_base. So, I am proposing to use the scipy_distutils tools 
and our scipy experience to deal with this issue; scipy.lib.lapack
would be a good working prototype here.

Ideally, scipy_base should provide a complete interface to LAPACK 
routines, but not immediately, of course. Now, depending on the 
availability of compilers and resources on a particular computer, the 
following would happen:
1) No Fortran compiler, no lapack libraries in the system, only a C compiler 
is available --- f2c generated lite-lapack C sources are used to build the 
lapack extension module; wrappers to lapack routines for which there are 
no f2c generated sources are disabled by f2py's `only:` feature (see the 
command sketch after this list). The lite-lapack C sources come with the 
scipy_base sources.
2) No Fortran compiler, system has lapack libraries (atlas or Accelerate 
or vecLib), C compiler is available --- the system lapack library will be 
used and a complete lapack extension module can be built.
3) Fortran and C compilers are available, no lapack libraries in the system 
--- Fortran lite-lapack sources are used to build the lapack extension 
module; the lite-lapack Fortran sources come with the scipy_base sources. 
Similar to case (1), some wrappers are disabled.
4-..) other combinations are possible and users can choose their favorite 
approach.
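
For illustration, a hedged sketch of the f2py `only:` feature mentioned in 
case (1), as used from the command line (module, file, and routine names 
are illustrative, not from this message):

    # wrap only the routines that have f2c-generated fallbacks
    f2py -c -m flapack lite_lapack.f only: ssyev dsyev :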

The availability of system resources can be checked using 
scipy_distutils.system_info.get_info. Checking the availability of a 
Fortran compiler should be done in a configuration step and only when a 
user specifically asks for it; by default we should assume that a Fortran 
compiler is not available. The same should also apply to atlas/lapack/blas 
libraries: by default, the f2c generated lite-lapack C sources will be 
used. In this way, users who only need Numeric3 array capabilities will 
avoid the troubles that can show up when trying to exploit every possible 
resource for speed on an arbitrary computer.
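
For illustration, a hedged sketch of the kind of dispatch get_info makes 
possible in a setup script; the info key is real, but the file names and 
fallback logic are illustrative, not from this message:

    from scipy_distutils.system_info import get_info

    lapack_info = get_info('lapack')  # empty (false) if no system lapack

    if lapack_info:
        # case (2): build the complete wrapper against the system library
        sources = ['flapack.pyf']
        build_kwargs = lapack_info
    else:
        # case (1): fall back to the bundled f2c-generated lite-lapack C
        # sources; missing wrappers are dropped via f2py's `only:` feature
        sources = ['clapack.pyf', 'lite_lapack.c']
        build_kwargs = {}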

Btw, I would suggest using `scipy.<name>` instead of
`scipy<name>` or `scipy_<name>` for naming packages.

Pearu



From oliphant at ee.byu.edu  Fri Mar 25 03:39:05 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 25 Mar 2005 01:39:05 -0700
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: 
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp> <42435D18.809@ee.byu.edu>
	
Message-ID: <4243CE29.80304@ee.byu.edu>

> For example, if a developer uses scipyfortran package in a package, it 
> immediately reduces the number of potential users for that package.

While I'm not in love with my suggestion and would prefer to see better 
ones put forward, wouldn't any system that uses routines which are not 
available unless a fortran-compiled package is installed be a problem?  I 
was just proposing not to "hide" this from the developer but to make it 
explicit.

What do you propose to do in those situations?  I was just proposing to 
put such routines in a separate hierarchy so the developer is aware he is 
using something that requires fortran.  I actually think that it's 
somewhat of a non-issue myself, and feel that people who don't have 
fortran compilers will look for binaries anyway.

>
> I got an impression from earlier threads that scipy_distutils will be 
> included in scipy_base. So, I am proposing to use the scipy_distutils 
> tools and our scipy experience for dealing with this issue, 
> scipy.lib.lapack
> would be a good working prototype here.
>
> Ideally, scipy_base should provide a complete interface to LAPACK 
> routines, but not immediately, of course. Now, depending on the 
> availability of compilers and resources in a particular computer, the 
> following would happen:
> 1) No Fortran compiler, no lapack libraries in the system, only C 
> compiler is available --- f2c generated lite-lapack C sources are used 
> to build lapack extension module; wrappers to lapack routines, for 
> which there are no f2c generated sources, are disabled by f2py `only:` 
> feature.
> lite-lapack C sources come with scipy_base sources.
> 2) No Fortran compiler, system has lapack libraries (atlas or 
> Accelerate or vecLib), C compiler is available --- system lapack 
> library will be used and a complete lapack extension module can be built.
> 3) Fortran and C compiler are available, no lapack libraries in the 
> system --- Fortran lite-lapack sources are used to build lapack 
> extension module;
> lite-lapack Fortran sources come with scipy_base sources. Similar to 
> the case (1), some wrappers are disabled.
> 4-..) other combinations are possible and users can choose their 
> favorite approach.

Great. Sounds like Pearu has some good ideas here.  I nominate Pearu to 
take the lead here.

Michiel sounds like he wants to keep the Numeric, RandomArray, and 
LinearAlgebra naming conventions forever.  I want them to be more 
coordinated, as scipy is doing with scipy.linalg, scipy.stats, and 
scipy_base (I agree scipy.base is better).

What are the opinions of others on this point?  Of course the names 
Numeric, RandomArray, and LinearAlgebra will still work, but I think 
they should be deprecated in favor of a better overall design for 
numerical packages.

What do others think?





From pearu at scipy.org  Fri Mar 25 04:21:39 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 25 Mar 2005 03:21:39 -0600 (CST)
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: <4243CE29.80304@ee.byu.edu>
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp> <42435D18.809@ee.byu.edu>
	
	<4243CE29.80304@ee.byu.edu>
Message-ID: 



On Fri, 25 Mar 2005, Travis Oliphant wrote:

>> For example, if a developer uses scipyfortran package in a package, it 
>> immediately reduces the number of potential users for that package.
>
> While I'm not in love with my suggestion and would prefer to see better ones 
> put forward,  wouldn't any system that uses routines not available unless you 
> have a fortran-compiled package installed be a problem?  I was just proposing 
> not "hiding" this from the developer but making it explicit.
>
> What do you propose to do for those situations?  I was just proposing putting 
> them in a separate hierarchy so the developer is aware he is using something 
> that requires fortran.    I actually think that it's somewhat of a non-issue 
> myself, and feel that people who don't have fortran compilers will look for 
> binaries anyway.

Such a situation can be avoided if a package is extended with new 
wrappers in parallel for all backend cases. For example, when adding a new 
interface to a lapack routine, both the Fortran and the f2c versions of 
the corresponding routine must be added to the scipy_base sources.
Pearu



From pearu at scipy.org  Fri Mar 25 04:58:01 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 25 Mar 2005 03:58:01 -0600 (CST)
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: <4243D4A5.9050004@ims.u-tokyo.ac.jp>
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp> <42435D18.809@ee.byu.edu>
	<4243D4A5.9050004@ims.u-tokyo.ac.jp>
Message-ID: 



On Fri, 25 Mar 2005, Michiel Jan Laurens de Hoon wrote:

> Pearu Peterson wrote:
>> I got an impression from earlier threads that scipy_distutils will be 
>> included in scipy_base. So, I am proposing to use the scipy_distutils tools and 
>> our scipy experience for dealing with this issue, scipy.lib.lapack
>> would be a good working prototype here.
>
> Have you tried integrating scipy_distutils with Python's distutils? My guess 
> is that Python's distutils can benefit from what is in scipy_distutils, 
> particularly the parts dealing with C compilers. A clean integration will 
> also prevent duplicated code, avoids Pearu having to keep scipy_distutils up 
> to date with Python's distutils, and will enlarge the number of potential 
> users. Having two distutils packages seems to be too much of a good thing.

No, I have not.  Though a year or so ago there was a discussion about this 
on the distutils list, mainly about adding Fortran compiler support to 
distutils.  At the time I didn't have the resources to push scipy_distutils 
features into distutils, and even less so now.  So, one can think of 
scipy_distutils as an extension to distutils, though it also includes a 
few bug fixes for older distutils.

On the other hand, since Scipy supports Python starting at 2.2, it cannot 
rely much on new features added to the distutils of later Python versions. 
Instead, if such features happen to be useful for Scipy, they are 
backported to Python 2.2 by implementing them in scipy_distutils. 
"Luckily", there are not many such features, as scipy_distutils has 
evolved with very useful new features much more quickly than distutils.

But, for Numeric3, scipy.distutils would be a perfect place to clean up 
scipy_distutils a bit, e.g. removing some obsolete features and assuming 
that Numeric3 will support Python 2.3 and up.  After that, integrating 
scipy_distutils features into the standard distutils can be made less 
painful, if someone decides to do that.

Pearu



From pearu at scipy.org  Mon Mar 28 04:23:28 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Mon, 28 Mar 2005 03:23:28 -0600 (CST)
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: <42464443.8050402@ims.u-tokyo.ac.jp>
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp> <42435D18.809@ee.byu.edu>
	<4243D4A5.9050004@ims.u-tokyo.ac.jp>
	<42464443.8050402@ims.u-tokyo.ac.jp>
Message-ID: 



On Sun, 27 Mar 2005, Michiel Jan Laurens de Hoon wrote:

> Having a separate scipy_distutils that fixes some bugs in Python's distutils 
> is a design mistake in SciPy that we should not repeat in Numeric3. Not that 
> I don't think the code in scipy_distutils is not useful -- I think it would 
> be very useful. But the fact that it is not integrated with the existing 
> Python distutils makes me wonder if this package really has been thought out 
> that well.

I don't think that fixing Python's distutils bugs was part of the 
scipy_distutils design.  Whenever we found a bug, its fix was added to 
scipy_distutils and also reported to the distutils bug tracker.  The main 
reason for adding bug fixes to scipy_distutils was to continue the work on 
scipy instead of waiting for the next distutils release (i.e. Python 
release); nor could we expect SciPy users to use the CVS version of 
Python's distutils.  Also, SciPy was meant to support Python 2.1 and up, 
so the bug fixes remained relevant even after the bugs were fixed in the 
Python 2.2 or 2.3 distutils.  So much for the history...

> As far as I can tell, scipy_distutils now fulfills four functions:
> 1) Bug fixes for Python's distutils for older Python versions. As Numeric3 
> will require Python 2.3 or up, these are no longer relevant.
> 2) Bug fixes for current Python's distutils. These should be integrated with 
> Python's distutils. Writing your own package instead of contributing to 
> Python gives you bad karma.
> 3) Fortran support. Very useful, and I'd like to see them in Python's 
> distutils. Another option would be to put this in SciPy.fortran or something 
> similar. But since Python's distutils already has a language= option for C++ 
> and Objective-C, the cleanest way would be to add this to Python's distutils 
> and enable language="fortran".
> 4) Stuff particular to SciPy, for example finding Atlas/Lapack/Blas 
> libraries. These we can decide on a case-by-case basis if it's useful for 
> Numeric3.
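
For illustration of point (3) above, a hedged sketch of what a 
language="fortran" option might look like; this is not an existing 
distutils feature, just the suggestion spelled out with made-up names:

    from distutils.core import setup, Extension

    # hypothetical: stock distutils does not know how to compile Fortran,
    # so language='fortran' would extend the existing language='c++' /
    # 'objc' mechanism
    ext = Extension('fib', sources=['fib.f'], language='fortran')
    setup(name='fib', ext_modules=[ext])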

Plus I would add the scipy_distutils ability to build sources on the fly 
(the build_src command). That's a very fundamental feature, useful 
whenever swig or f2py is used, or when sources are built from templates or 
generated dynamically during the build process.
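
For illustration, a hedged sketch of what build_src enables, loosely 
following the scipy_distutils/tests/f2py_ext example (module and file 
names are illustrative):

    from scipy_distutils.core import setup, Extension

    # build_src notices the .pyf signature file and runs f2py on it on
    # the fly; build_ext then compiles the generated C together with the
    # Fortran source
    ext = Extension('fib2', sources=['fib2.pyf', 'fib1.f'])

    setup(name='f2py_example', ext_modules=[ext])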

Btw, I have started scipy_core clean up. The plan is to create the 
following package tree under Numeric3 source tree:

   scipy.distutils - contains cpuinfo, exec_command, system_info, etc
   scipy.distutils.fcompiler - contains Fortran compiler support
   scipy.distutils.command - contains build_src and config_compiler
     commands plus few enhancements to build_ext, build_clib, etc
     commands
   scipy.base - useful modules from scipy_base
   scipy.testing - enhancements to the unittest module; actually,
     current scipy_test contains one useful module (testing.py) that
     could also go under scipy.base, letting us get rid of scipy.testing
   scipy.weave -
   scipy.f2py  - not sure yet how to incorporate the f2py2e or weave
     sources here. Initially people are assumed to download them into the
     Numeric3/scipy/ directory, but in the future their sources could be
     added to the Numeric3 repository. For Numeric3, f2py and weave are
     optional.
   scipy.lib.lapack - wrappers to Atlas/Lapack libraries, by default
     f2c generated wrappers are used as in current Numeric.

For backwards compatibility, there will be
   Packages/{FFT,MA,RNG,dotblas+packages from numarray}/
and
   Lib/{LinearAlgebra,..}.py
under Numeric3 that will use modules from scipy.

Pearu



From oliphant at ee.byu.edu  Mon Mar 28 04:37:54 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Mon, 28 Mar 2005 02:37:54 -0700
Subject: [SciPy-dev] Re: [Numpy-discussion] Trying out Numeric3
In-Reply-To: 
References: <423A6F69.8020803@ims.u-tokyo.ac.jp>
	
	<4242BA03.5050204@ims.u-tokyo.ac.jp> <42435D18.809@ee.byu.edu>
	<4243D4A5.9050004@ims.u-tokyo.ac.jp>	<42464443.8050402@ims.u-tokyo.ac.jp>
	
Message-ID: <4247D072.3090406@ee.byu.edu>


> Plus I would add the scipy_distutils ability to build sources on-fly 
> feature (build_src command). That's a very fundamental feature useful
> whenever swig or f2py is used, or when building sources from templates 
> or dynamically during a build process.


I'd like to use this feature in Numeric3 (which has code-generation). 

>
> Btw, I have started scipy_core clean up. The plan is to create the 
> following package tree under Numeric3 source tree:

This is great news.  I'm thrilled to have Pearu's help in doing this.  
He understands a lot of these issues very well.  I'm sure he will be 
open to suggestions. 

>
>   scipy.distutils - contains cpuinfo, exec_command, system_info, etc
>   scipy.distutils.fcompiler - contains Fortran compiler support
>   scipy.distutils.command - contains build_src and config_compiler
>     commands plus few enhancements to build_ext, build_clib, etc
>     commands

>   scipy.base - useful modules from scipy_base
>   scipy.testing - enhancements to unittest module, actually
>     current scipy_test contains one useful module (testing.py) that
>     could also go under scipy.base and so getting rid of scipy.testing
>   scipy.weave -
>   scipy.f2py  - not sure yet how to incorporate f2py2e or weave sources
>     here. As a first instance people are assumed to download them to
>     Numeric3/scipy/ directory but in future their sources could be added
>     to Numeric3 repository. For Numeric3 f2py and weave are optional.
>   scipy.lib.lapack - wrappers to Atlas/Lapack libraries, by default
>     f2c generated wrappers are used as in current Numeric.
>
> For backwards compatibility, there will be
>   Packages/{FFT,MA,RNG,dotblas+packages from numarray}/
> and
>   Lib/{LinearAlgebra,..}.py
> under Numeric3 that will use modules from scipy.

This looks like a good break down.  Where will the ndarray object and 
the ufunc code go in this breakdown?  In scipy.base?

-Travis




From aisaac at american.edu  Wed Mar 30 01:10:27 2005
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 30 Mar 2005 01:10:27 -0500 (Eastern Standard Time)
Subject: [SciPy-dev] cvxopt
Message-ID: 

Why isn't cvxopt integrated with SciPy?
Here is the author's answer:
http://www.ee.ucla.edu/~vandenbe/cvxopt/cvxopt-faq

fwiw,
Alan Isaac