From Robert_Pickle at brown.edu Thu Nov 1 12:32:30 2007 From: Robert_Pickle at brown.edu (Robert Pickle) Date: Thu, 01 Nov 2007 12:32:30 -0400 Subject: [SciPy-dev] disutils related bug when building scipy via svn Message-ID: <4729FF9E.7020705@brown.edu> hello, i'm having some trouble building scipy from subversion. here's the relevant output: building library "statlib" sources Traceback (most recent call last): File "setup.py", line 92, in setup_package() File "setup.py", line 84, in setup_package configuration=configuration ) File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", line 176, in setup return old_setup(**new_attr) File "distutils/core.py", line 151, in setup File "distutils/dist.py", line 974, in run_commands File "distutils/dist.py", line 994, in run_command File "/usr/lib/python2.5/site-packages/numpy/distutils/command/install.py", line 16, in run r = old_install.run(self) File "distutils/command/install.py", line 506, in run File "/usr/lib/python2.5/cmd.py", line 333, in run_command del help[cmd] File "distutils/dist.py", line 994, in run_command File "distutils/command/build.py", line 113, in run File "/usr/lib/python2.5/cmd.py", line 333, in run_command del help[cmd] File "distutils/dist.py", line 994, in run_command File "/usr/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 130, in run self.build_sources() File "/usr/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 144, in build_sources self.check_extensions_list(self.extensions) File "/usr/lib/python2.5/distutils/command/build_ext.py", line 316, in check_extensions_list (ext_name, build_info) = ext TypeError: iteration over non-sequence -------- i've tried looking elsewhere for this problem and all i've found is a discussion on pyrex/cython that doesn't really provide an actual solution. anyone else come across this or know what i should do? this is well beyond my grasp. thanks, robert From nwagner at iam.uni-stuttgart.de Mon Nov 5 02:14:13 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 05 Nov 2007 08:14:13 +0100 Subject: [SciPy-dev] ImportError: No module named distutils.ccompiler Message-ID: Hi all, I cannot install the latest svn version of numpy (r4380). I am using python2.3 on x86_64. python setup.py install --prefix=$HOME/local Running from numpy source directory. Traceback (most recent call last): File "setup.py", line 90, in ? setup_package() File "setup.py", line 62, in setup_package from numpy.distutils.core import setup File "/data/home/nwagner/svn/numpy/numpy/distutils/__init__.py", line 6, in ? import ccompiler File "/data/home/nwagner/svn/numpy/numpy/distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * ImportError: No module named distutils.ccompiler Nils From robert.kern at gmail.com Mon Nov 5 11:05:53 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 05 Nov 2007 10:05:53 -0600 Subject: [SciPy-dev] ImportError: No module named distutils.ccompiler In-Reply-To: References: Message-ID: <472F3F61.9000905@gmail.com> Nils Wagner wrote: > Hi all, > > I cannot install the latest svn version of numpy (r4380). > I am using python2.3 on x86_64. > > python setup.py install --prefix=$HOME/local > Running from numpy source directory. > Traceback (most recent call last): > File "setup.py", line 90, in ? > setup_package() > File "setup.py", line 62, in setup_package > from numpy.distutils.core import setup > File > "/data/home/nwagner/svn/numpy/numpy/distutils/__init__.py", > line 6, in ? 
> import ccompiler File "/data/home/nwagner/svn/numpy/numpy/distutils/ccompiler.py", line 6, in ? > from distutils.ccompiler import * > ImportError: No module named distutils.ccompiler Do you have the appropriate python-dev/python-devel package installed? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Tue Nov 6 16:07:32 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 6 Nov 2007 16:07:32 -0500 Subject: [SciPy-dev] From F90 to F77 Message-ID: <200711061607.32265.pgmdevlist@gmail.com> All, A few months ago, I wrote a small scipy package calling some Fortran 90 code. Everything went fine until we tried to compile the package with g77, where the compiler choked on the slices and FLOOR that were present in the code. Rewriting the F90 code to get rid of the slices was easy, but I wouldn't want to mess with FLOOR. What would be the best approach to have access to FLOOR during the build ? Thanks a lot for any comment/suggestions. P. From jtravs at gmail.com Tue Nov 6 16:26:18 2007 From: jtravs at gmail.com (John Travers) Date: Tue, 06 Nov 2007 21:26:18 +0000 Subject: [SciPy-dev] From F90 to F77 In-Reply-To: <200711061607.32265.pgmdevlist@gmail.com> References: <200711061607.32265.pgmdevlist@gmail.com> Message-ID: <1194384378.10965.2.camel@marvin> On Tue, 2007-11-06 at 16:07 -0500, Pierre GM wrote: > All, > > A few months ago, I wrote a small scipy package calling some Fortran 90 code. > Everything went fine until we tried to compile the package with g77, where > the compiler choked on the slices and FLOOR that were present in the code. > Rewriting the F90 code to get rid of the slices was easy, but I wouldn't want > to mess with FLOOR. > > What would be the best approach to have access to FLOOR during the build ? > Why not just use gfortran? g77 has not been maintained for several years, whereas gfortran is under very active development and should compile all f95/f90/f77 code and even have compatibility with many of g77's bugs (err, features). Best regards, John From pgmdevlist at gmail.com Tue Nov 6 16:40:41 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 6 Nov 2007 16:40:41 -0500 Subject: [SciPy-dev] From F90 to F77 In-Reply-To: <1194384378.10965.2.camel@marvin> References: <200711061607.32265.pgmdevlist@gmail.com> <1194384378.10965.2.camel@marvin> Message-ID: <200711061640.41259.pgmdevlist@gmail.com> > Why not just use gfortran? Well, I use gfortran myself, but the colleague who's trying to install that particular package still relies on g77. Seems that as blas was compiled with g77, g77 is the compiler called by default during the setup... So, is there a way to force one particular executable to be used ? Preferably in a way transparent to the user ? From david at ar.media.kyoto-u.ac.jp Tue Nov 6 21:58:21 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 07 Nov 2007 11:58:21 +0900 Subject: [SciPy-dev] From F90 to F77 In-Reply-To: <1194384378.10965.2.camel@marvin> References: <200711061607.32265.pgmdevlist@gmail.com> <1194384378.10965.2.camel@marvin> Message-ID: <473129CD.60705@ar.media.kyoto-u.ac.jp> John Travers wrote: > On Tue, 2007-11-06 at 16:07 -0500, Pierre GM wrote: > >> All, >> >> A few months ago, I wrote a small scipy package calling some Fortran 90 code.
>> Everything went fine until we tried to compile the package with g77, where >> the compiler choked on the slices and FLOOR that were present in the code. >> Rewriting the F90 code to get rid of the slices was easy, but I wouldn't want >> to mess with FLOOR. >> >> What would be the best approach to have access to FLOOR during the build ? >> >> > > Why not just use gfortran? g77 has not been maintained for several years, > whereas gfortran is under very active development and should compile all > f95/f90/f77 code and even have compatibility with many of g77's bugs (err, > features). > g77 is still used as the default fortran compiler in major distributions (ubuntu and debian, for example), and g77 is not ABI compatible with gfortran without a lot of effort. cheers, David From millman at berkeley.edu Wed Nov 7 19:21:02 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 7 Nov 2007 16:21:02 -0800 Subject: [SciPy-dev] ANN: NumPy 1.0.4 Message-ID: I'm pleased to announce the release of NumPy 1.0.4. NumPy is the fundamental package needed for scientific computing with Python. It contains: * a powerful N-dimensional array object * sophisticated (broadcasting) functions * basic linear algebra functions * basic Fourier transforms * sophisticated random number capabilities * tools for integrating Fortran code. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. This is largely a bug fix release with one notable improvement: - The NumPy and SciPy developers have decided to adopt the Python naming convention for classes. So as of this release, TestCase classes may now be prefixed with either 'test' or 'Test'. This will allow us to write TestCase classes using the CapCase words, while still accepting the old style names. For information, please see the release notes: http://sourceforge.net/project/shownotes.php?release_id=552568&group_id=1369 Thank you to everybody who contributed to the recent release. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthieu.brucher at gmail.com Thu Nov 8 03:21:41 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 8 Nov 2007 09:21:41 +0100 Subject: [SciPy-dev] ANN: NumPy 1.0.4 In-Reply-To: References: Message-ID: Excellent ! I just tried it on a Windows box (the Python 2.5 package), it works without any changes (besides the changes in NumpyTest). Matthieu 2007/11/8, Jarrod Millman : > > I'm pleased to announce the release of NumPy 1.0.4. > > NumPy is the fundamental package needed for scientific computing with > Python. It contains: > > * a powerful N-dimensional array object > * sophisticated (broadcasting) functions > * basic linear algebra functions > * basic Fourier transforms > * sophisticated random number capabilities > * tools for integrating Fortran code. > > Besides its obvious scientific uses, NumPy can also be used as an > efficient multi-dimensional container of generic data. Arbitrary > data-types can be defined. This allows NumPy to seamlessly and > speedily integrate with a wide variety of databases. > > This is largely a bug fix release with one notable improvement: > > - The NumPy and SciPy developers have decided to adopt the Python > naming convention for classes.
So as of this release, TestCase > classes may now be prefixed with either 'test' or 'Test'. This will > allow us to write TestCase classes using the CapCase words, while > still accepting the old style names. > > For information, please see the release notes: > > http://sourceforge.net/project/shownotes.php?release_id=552568&group_id=1369 > > Thank you to everybody who contributed to the recent release. > > Enjoy, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Tue Nov 13 01:09:20 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 12 Nov 2007 22:09:20 -0800 Subject: [SciPy-dev] I removed some duplicate code from weave Message-ID: Hey, I just noticed that dumb_shelve.py and dumbdbm_patched.py were in both scipy.io and scipy.weave. Since I assume that this was just an oversight, I went ahead and cleaned it up. First, I made sure that the newest code was in scipy.io: http://projects.scipy.org/scipy/scipy/changeset/3521 http://projects.scipy.org/scipy/scipy/changeset/3522 And then I removed the version in scipy.weave: http://projects.scipy.org/scipy/scipy/changeset/3524 Please let me know if this change causes any major problems. Cheers, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthieu.brucher at gmail.com Tue Nov 13 02:03:58 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 13 Nov 2007 08:03:58 +0100 Subject: [SciPy-dev] weave not supporting other compilers than gnu ? In-Reply-To: References: Message-ID: Hi, I raise this question from the dead, in case someone now has an answer ;) 2007/10/10, Matthieu Brucher : > > Hi, > > I'm trying to get a simple code to work with weave, the blitz converter > and ICC, but it seems that only the gnu headers are included in the > repository. This means that weave cannot be used with anything other than > GCC, which is not what is said in the tutorial (BTW, it is outdated, and the > examples are as well :() > Is there a reason why those headers are not available ? > > Matthieu > Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Nov 13 15:28:31 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 13 Nov 2007 13:28:31 -0700 Subject: [SciPy-dev] Bug in weave, current SVN. Message-ID: Hi all, a bug recently popped up in weave, I was wondering if anyone would mind the fix attached. I can commit it, but since I don't regularly touch the codebase, I prefer to check for approval from the regulars. The problem was lines like: code = arg_strings.join("") which are backwards: join() must be used as string.join(list), not list.join(string).
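To illustrate, a quick interpreter sketch (the arg_strings value here is hypothetical):

    >>> arg_strings = ['int a', 'float b']
    >>> arg_strings.join("")
    Traceback (most recent call last):
      ...
    AttributeError: 'list' object has no attribute 'join'
    >>> "".join(arg_strings)
    'int afloat b'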
In addition, I added minimal docstrings and rewrote the three little offending routines as list comprehensions, which are faster and (to me) more readable. I can also just commit the bare fix without any 'improvements' if there are objections. Thanks! f -------------- next part -------------- A non-text attachment was scrubbed... Name: weave_fix.diff Type: text/x-diff Size: 1537 bytes Desc: not available URL: From millman at berkeley.edu Tue Nov 13 16:37:51 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 13 Nov 2007 13:37:51 -0800 Subject: [SciPy-dev] Bug in weave, current SVN. In-Reply-To: References: Message-ID: Whoops, my bad. I will fix this later today. Sorry, Jarrod On Nov 13, 2007 12:28 PM, Fernando Perez wrote: > Hi all, > > a bug recently popped up in weave, I was wondering if anyone would > mind the fix attached. I can commit it, but since I don't regularly > touch the codebase, I prefer to check for approval from the regulars. > > The problem was lines like: > > code = arg_strings.join("") > > which are backwards: join() must be used as string.join(list), not > list.join(string). In addition, I added minimal docstrings and > rewrote the three little offending routines as list comprehensions, > which are faster and (to me) more readable. I can also just commit > the bare fix without any 'improvements' if there are objections. > > Thanks! > > f > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fperez.net at gmail.com Tue Nov 13 17:09:23 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 13 Nov 2007 15:09:23 -0700 Subject: [SciPy-dev] Bug in weave, current SVN. In-Reply-To: References: Message-ID: On Nov 13, 2007 2:37 PM, Jarrod Millman wrote: > Whoops, my bad. I will fix this later today. Great, thanks. Cheers, f From nwagner at iam.uni-stuttgart.de Wed Nov 14 15:56:04 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Nov 2007 21:56:04 +0100 Subject: [SciPy-dev] sandbox packages under Windows XP Message-ID: Hi all, I have installed numpy, scipy and pylab under Windows XP. Is there a chance to include the sandbox packages arpack and lobpcg into the next binary installers ? If not, how can I install the bleeding-edge versions (svn) of numpy, scipy and matplotlib under Windows XP ? So far I have installed Mingw and Cygwin. Is it also possible to install UMFPACK under Windows XP? I would like to use UMFPACK instead of SuperLU. Any hint would be appreciated. Nils From david at ar.media.kyoto-u.ac.jp Wed Nov 14 22:46:28 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 15 Nov 2007 12:46:28 +0900 Subject: [SciPy-dev] sandbox packages under Windows XP In-Reply-To: References: Message-ID: <473BC114.6060300@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > Hi all, > > I have installed numpy, scipy and pylab under Windows XP. > Is there a chance to include the sandbox packages arpack > and lobpcg into the next binary installers ? > > If not, how can I install the bleeding-edge versions (svn) > of numpy, scipy and matplotlib under Windows XP ? > > So far I have installed Mingw and Cygwin. You don't need cygwin per se.
To build numpy and scipy, you need: - a C compiler - a Fortran compiler - BLAS/LAPACK So with mingw, you get 1 and 2 (be sure to select g77 in mingw; it is not selected by default). You still need BLAS/LAPACK, which is not difficult to build with mingw (use the linux makefiles; I find cygwin to be helpful here to compile those, though). If you are willing to use cygwin, you may want to use garnumpy: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.4.tbz2 This is just a set of makefiles and rules to build numpy, scipy and all the dependencies (including umfpack, some sandboxed packages, BLAS, LAPACK and ATLAS; I don't think ATLAS would work on cygwin, though). Think about it as a global configuration so that everything is compiled with the same rules. To use it under cygwin, you will need one modification. In gar.cc.mk, add -mno-cygwin, and remove -fPIC F77_COMMON += -mno-cygwin F77_OPTIMS += -O3 -funroll-all-loops F77_COMMON += $(GFORTRAN_FLAGS) CFLAGS_COMMON += -mno-cygwin Once you do this, just do: make garchive # download all the tarballs: numpy, scipy, ipython, fftw3, etc... cd platform && make numpy # build numpy and all dependencies: BLAS, LAPACK cd platform && make scipy # build scipy, umfpack, etc.... For sandboxed packages, you should use the variable SCIPYSANDPKG in gar.conf.mk (there is one example in the comment). This cannot be used to build things from SVN, though. cheers, David From gregg at renesys.com Fri Nov 16 11:36:00 2007 From: gregg at renesys.com (Gregg Lind) Date: Fri, 16 Nov 2007 10:36:00 -0600 Subject: [SciPy-dev] Offer to help with Scipy.sparse Message-ID: <473DC6F0.3080501@renesys.com> Hello Scipy folk, I have been authorized to contribute changes to Scipy.sparse to do a lot of tidying of the code, including error message clarification, augmenting and improving the dok_matrix class, and a number of other small changes. I work with scipy.sparse all the time, so have them lying around in a separate fork, and I would like to mainline them. How do I submit them? Thanks, Gregg Lind, M.S. Data Engineer Renesys Corp. From wnbell at gmail.com Fri Nov 16 16:30:56 2007 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 16 Nov 2007 15:30:56 -0600 Subject: [SciPy-dev] Offer to help with Scipy.sparse In-Reply-To: <473DC6F0.3080501@renesys.com> References: <473DC6F0.3080501@renesys.com> Message-ID: On Nov 16, 2007 10:36 AM, Gregg Lind wrote: > Hello Scipy folk, > > I have been authorized to contribute changes to Scipy.sparse to do a lot > of tidying of the code, including error message clarification, > augmenting and improving the dok_matrix class, and a number of other > small changes. I work with scipy.sparse all the time, so have them > lying around in a separate fork, and I would like to mainline them. How > do I submit them? Hi Gregg, by "authorized" do you mean that you have SVN commit privileges? If not, then one of the admins[1] will need to set you up first. I also contribute code to scipy.sparse (mainly to csr/csc/coo_matrix and sparsetools) so I know your improvements will be appreciated. [1] try Jeff Strunk -- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Mon Nov 19 10:36:40 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 19 Nov 2007 16:36:40 +0100 Subject: [SciPy-dev] styleguide for info.py files? Message-ID: <4741AD88.10503@ntc.zcu.cz> Hi, Is there a recommended way of writing the info.py files in numpy/scipy trees?
It might be good to follow similar convention to the docstrings, as it would make an automatic generation of 'printed' documentation easier. I ask because I have created a short script (nothing fancy) for scanning the docstrings and creating a LaTeX-typeset pdf. An example can be found at http://ui505p06-mbs.ntc.zcu.cz/sfe/Documentation#Terms Now I need to do something similar for the SciPy modules I have contributed (yes, end of year and evaluation time...). The minimal change to the infos required is to mark sections by colons (:A section:). r. From robert.kern at gmail.com Mon Nov 19 10:53:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 Nov 2007 09:53:16 -0600 Subject: [SciPy-dev] styleguide for info.py files? In-Reply-To: <4741AD88.10503@ntc.zcu.cz> References: <4741AD88.10503@ntc.zcu.cz> Message-ID: <4741B16C.2090507@gmail.com> Robert Cimrman wrote: > Hi, > > Is there a recommended way of writing the info.py files in numpy/scipy > trees? It might be good to follow similar convention to the docstrings, > as it would make an automatic generation of 'printed' documentation easier. The docstrings in the info.py files provide the package-level docstrings, so they should be formatted like the other docstrings, too, yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From eads at soe.ucsc.edu Mon Nov 19 23:14:34 2007 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 19 Nov 2007 21:14:34 -0700 Subject: [SciPy-dev] Hierarchical clustering package Message-ID: <47425F2A.4000509@soe.ucsc.edu> Hello, I developed a hierarchical clustering package, which I offer under the terms of the BSD License. It supports agglomerative clustering, plotting of dendrograms, flat cluster formation, computation of a few cluster statistics, and computing distances between vectors using a variety of distance metrics. The source code is available for your perusal at http://scipy-cluster.googlecode.com/svn/trunk/ and the API Documentation, http://www.soe.ucsc.edu/~eads/cluster.html . The interface is similar to the interface used in MATLAB's Statistics Toolbox to ease conversion of old MATLAB programs to Python. Eventually, I'd like to integrate it into Scipy (hence, naming my SVN repository scipy-cluster). A few things: * matplotlib is optional: the only function that requires matplotlib support is dendrogram. However, I've abstracted the code so that ImportError exceptions are caught when importing matplotlib. This enables the package to be imported without any import errors. When an attempt is made to plot when matplotlib is unavailable, an exception is then thrown indicating that graphical rendering is not supported. The caller can pass a no_plot parameter to have the coordinates of the plot elements calculated without any rendering done in matplotlib. * Most of the algorithms are written in C. Half-way through the development of the package, I mistakenly strided arrays using the dimensions field of the PyArrayObject, and not the strides field. On occasion, this causes erroneous behavior when the array passed refers to a base array. As a work-around until I get around to rewriting the code to use proper striding, if an array's base is non-null, it is copied prior to being passed to a C function. 
I should also note that the task of changing the code to use proper striding might be difficult since, in many places in the code, I assume for efficiency and code expressibility/readability sake, array elements are stored side-by-side in the array's underlying buffer. * When this project started, I used dtype='int32' when declaring integer Numpy arrays, and then referred to them with an int* pointer in C. I realize this might pose a problem on different architectures, 64-bit for example. When declaring a numpy array with dtype='int', does Numpy use the host's compiler's sizeof(int) to determine the size of the ints to allocate for the array? If so, I might as well change all the instances of dtype='int32' to dtype='int' in my code. I can't really justify why I originally did this but I know the code as it stands works on my 32-bit Intel+gcc. * I wrote the API documentation without any use of markup (epydoc or the like). This is so that help(function) still provides human-readable documentation. I noticed some discussion about reformatting Scipy's docstrings to use the epydoc mark-up. A few weeks ago, I tried contacting the author of epydoc to ask whether epydoc supports rendering mark-up into human-readable ASCII text, and whether there were any plans to enable such renderings when invoking python's help command. He has yet to respond. I'm reluctant to use epydoc until I have assurance that console-based help with human-readable text rendering is supported. Thoughts on this? * The tests I'm writing require some data files to run. What is the convention for storing and retrieving data files when running a Scipy regression test? Presumably the test programs should be able to find the data files without regard to whether the data files are stored in /usr/share or in the src directory. One solution is to embed the data in the testing programs themselves but this is messy, and I'd like to know if there is a better solution. * The vector quantization/k-means Scipy package is already called cluster so there is a naming conflict. If the hierarchical clustering package is integrated into Scipy, I could rename it "agglom" or "hierarchical", and have it sit in the cluster package directory. Cheers, Damian Eads http://www.soe.ucsc.edu/~eads From david at ar.media.kyoto-u.ac.jp Tue Nov 20 06:24:17 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 20 Nov 2007 20:24:17 +0900 Subject: [SciPy-dev] Hierarchical clustering package In-Reply-To: <47425F2A.4000509@soe.ucsc.edu> References: <47425F2A.4000509@soe.ucsc.edu> Message-ID: <4742C3E1.3060705@ar.media.kyoto-u.ac.jp> Damian Eads wrote: > Hello, > > I developed a hierarchical clustering package, which I offer under the > terms of the BSD License. It supports agglomerative clustering, plotting > of dendrograms, flat cluster formation, computation of a few cluster > statistics, and computing distances between vectors using a variety of > distance metrics. The source code is available for your perusal at > http://scipy-cluster.googlecode.com/svn/trunk/ and the API > Documentation, http://www.soe.ucsc.edu/~eads/cluster.html . The > interface is similar to the interface used in MATLAB's Statistics > Toolbox to ease conversion of old MATLAB programs to Python. Eventually, > I'd like to integrate it into Scipy (hence, naming my SVN repository > scipy-cluster). Hi Damian, This looks great. 
I have a couple of questions: - do you think it would be possible to split the package for the reusable parts (in particular, the distance matrices: scipy.cluster, and a few other packages could reuse those). - do you have some examples ? I don't know what the opinion of others is on this, but maybe this package could be added to scikits (there is already a scikits.learn package for ML-related algorithms, ANN, EM for mixtures of Gaussians, and SVM) ? > > A few things: > > * matplotlib is optional: the only function that requires matplotlib > support is dendrogram. However, I've abstracted the code so that > ImportError exceptions are caught when importing matplotlib. This > enables the package to be imported without any import errors. When an > attempt is made to plot when matplotlib is unavailable, an exception is > then thrown indicating that graphical rendering is not supported. The > caller can pass a no_plot parameter to have the coordinates of the plot > elements calculated without any rendering done in matplotlib. I think this is one way of doing it (I am doing the same for my own packages, at least). > > * The tests I'm writing require some data files to run. What is the > convention for storing and retrieving data files when running a Scipy > regression test? Presumably the test programs should be able to find the > data files without regard to whether the data files are stored in > /usr/share or in the src directory. One solution is to embed the data in > the testing programs themselves but this is messy, and I'd like to know > if there is a better solution. The convention is to have the datasets in the package. I am not sure I understand why it is messy: is it not good to have self-contained regression tests ? David From eads at soe.ucsc.edu Fri Nov 23 22:12:14 2007 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 23 Nov 2007 20:12:14 -0700 Subject: [SciPy-dev] Hierarchical clustering package In-Reply-To: <4742C3E1.3060705@ar.media.kyoto-u.ac.jp> References: <47425F2A.4000509@soe.ucsc.edu> <4742C3E1.3060705@ar.media.kyoto-u.ac.jp> Message-ID: <4747968E.9050308@soe.ucsc.edu> Hi David, Sorry for the late response. Thanksgiving festivities, holiday shopping, and my day job have all gotten in the way. David Cournapeau wrote: > Damian Eads wrote: >> Hello, >> >> I developed a hierarchical clustering package, which I offer under the >> terms of the BSD License. It supports agglomerative clustering, plotting >> of dendrograms, flat cluster formation, computation of a few cluster >> statistics, and computing distances between vectors using a variety of >> distance metrics. The source code is available for your perusal at >> http://scipy-cluster.googlecode.com/svn/trunk/ and the API >> Documentation, http://www.soe.ucsc.edu/~eads/cluster.html . The >> interface is similar to the interface used in MATLAB's Statistics >> Toolbox to ease conversion of old MATLAB programs to Python. Eventually, >> I'd like to integrate it into Scipy (hence, naming my SVN repository >> scipy-cluster). > Hi Damian, > > This looks great. I have a couple of questions: > > - do you think it would be possible to split the package for the > reusable parts (in particular, the distance matrices: scipy.cluster, and > a few other packages could reuse those). The distance functions are fairly self-contained so I don't see why not. In fact, one would only need to move the *python* distance functions from the hcluster.py file to the appropriate module file.
Alternatively, the __init__.py file in scipy/cluster can import the distance functions into the scipy.cluster package without importing the other hierarchical clustering functions. > - do you have some examples ? I do, it is available at http://www.soe.ucsc.edu/~eads/iris.html. The hierarchical clustering examples in the MATLAB Statistics Toolbox documentation should work as well. > I don't know what the opinion of others is on this, but maybe this > package could be added to scikits (there is already a scikits.learn > package for ML-related algorithms, ANN, EM for mixtures of Gaussians, > and SVM) ? I'm fine with maintaining it as a separate package (hcluster) on my website unless others find that including it in Scipy would be useful. >> * The tests I'm writing require some data files to run. What is the >> convention for storing and retrieving data files when running a Scipy >> regression test? Presumably the test programs should be able to find the >> data files without regard to whether the data files are stored in >> /usr/share or in the src directory. One solution is to embed the data in >> the testing programs themselves but this is messy, and I'd like to know >> if there is a better solution. > > The convention is to have the datasets in the package. I am not sure I > understand why it is messy: is it not good to have self-contained regression > tests ? I am not making an argument against self-contained tests. Rather, I am simply stating that putting the data in Python programs is a bit messy, especially when the data are large. It gives unnecessary lines for SVN diff to process when a program containing a data set is changed, which is not as likely when the data sets are stored in separate text files. If the Scipy convention is to put testing data in test programs then so be it. What's the convention? Java has a facility for loading resources like images, text and data files, which are all loaded in a similar way classes are loaded. Does Python have a similar facility? Cheers, Damian From matthieu.brucher at gmail.com Sat Nov 24 04:58:21 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 24 Nov 2007 10:58:21 +0100 Subject: [SciPy-dev] Hierarchical clustering package In-Reply-To: <4747968E.9050308@soe.ucsc.edu> References: <47425F2A.4000509@soe.ucsc.edu> <4742C3E1.3060705@ar.media.kyoto-u.ac.jp> <4747968E.9050308@soe.ucsc.edu> Message-ID: > > > I don't know what the opinion of others is on this, but maybe this > > package could be added to scikits (there is already a scikits.learn > > package for ML-related algorithms, ANN, EM for mixtures of Gaussians, > > and SVM) ? > > I'm fine with maintaining it as a separate package (hcluster) on my > website unless others find that including it in Scipy would be useful. I think this package would fit great in a scikit (the learn one for instance) but not in scipy directly. The cluster package stays in scipy for compatibility reasons IIRC. > The convention is to have the datasets in the package. I am not sure I > understand why it is messy: is it not good to have self-contained regression > tests ? I am not making an argument against self-contained tests. Rather, I am > simply stating that putting the data in Python programs is a bit messy, > especially when the data are large. It gives unnecessary lines for SVN > diff to process when a program containing a data set is changed, which > is not as likely when the data sets are stored in separate text files.
> If the Scipy convention is to put testing data in test programs then so be it. What's the convention? You put them in separate files. For instance, you can pickle them and save them, and finally load them when you process the test. David proposed an answer for a dataset interface, but I don't know where it stands now. Java has a facility for loading resources like images, text and data > files, which are all loaded in a similar way classes are loaded. Does > Python have a similar facility? Scipy does (the IO module) to some extent. But you do not need to load images, text and data files, only preprocessed data ;) Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Nov 26 01:36:10 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 26 Nov 2007 15:36:10 +0900 Subject: [SciPy-dev] tests, relative import: what's best ? Message-ID: <474A695A.2060502@ar.media.kyoto-u.ac.jp> Hi, I have a question regarding tests and import in numpy/scipy in general. When implementing tests, the package to be tested has to be imported, and there are two possibilities on how to do it (both are used in scipy). Taking a hypothetical package foo in scipy as an example: 1 : using set_package_path /restore_path facilities of numpy.testing, and do a "relative" import set_package_path() from foo import bar restore_path() This means that you can test the package foo without building the whole scipy source tree. This is for example used in scipy.linalg tests. It also corresponds to DISTUTILS.txt doc in numpy/doc. 2 : using set_package_path / restore_path facilities, and do an "absolute" import: set_package_path() from scipy.foo import bar restore_path() In this case, I don't understand why set_package_path is used ? And also, this means that you cannot test the subpackage by itself, you need to install the whole scipy tree. This is for example used in scipy.signal tests. Are there any advantages to 2 ? If not, would it be a good idea to convert every package to convention 1 ? cheers, David From robert.kern at gmail.com Mon Nov 26 05:13:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 26 Nov 2007 04:13:36 -0600 Subject: [SciPy-dev] tests, relative import: what's best ? In-Reply-To: <474A695A.2060502@ar.media.kyoto-u.ac.jp> References: <474A695A.2060502@ar.media.kyoto-u.ac.jp> Message-ID: <474A9C50.7080803@gmail.com> David Cournapeau wrote: > Hi, > > I have a question regarding tests and import in numpy/scipy in > general. When implementing tests, the package to be tested has to be > imported, and there are two possibilities on how to do it (both are used > in scipy). Taking a hypothetical package foo in scipy as an example: > > 1 : using set_package_path /restore_path facilities of > numpy.testing, and do a "relative" import > > set_package_path() > from foo import bar > restore_path() > > This means that you can test the package foo without building the whole > scipy source tree. This is for example used in scipy.linalg tests. It > also corresponds to DISTUTILS.txt doc in numpy/doc. > > 2 : using set_package_path / restore_path facilities, and do an > "absolute" import: > > set_package_path() > from scipy.foo import bar > restore_path() > > In this case, I don't understand why set_package_path is used ?
And > also, this means that you cannot test the subpackage by itself, you need > to install the whole scipy tree. This is for example used in > scipy.signal tests. > > Are there any advantages to 2 ? If not, would it be a good idea to > convert every package to convention 1 ? If we can, I'd prefer that we simply make build_src/build_ext --inplace work for numpy and scipy and then remove set_package_path() and restore_path(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon Nov 26 05:51:34 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 26 Nov 2007 19:51:34 +0900 Subject: [SciPy-dev] tests, relative import: what's best ? In-Reply-To: <474A9C50.7080803@gmail.com> References: <474A695A.2060502@ar.media.kyoto-u.ac.jp> <474A9C50.7080803@gmail.com> Message-ID: <474AA536.904@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: >> Hi, >> >> I have a question regarding tests and import in numpy/scipy in >> general. When implementing tests, the package to be tested has to be >> imported, and there are two possibilities on how to do it (both are used >> in scipy). Taking a hypothetical package foo in scipy as an example: >> >> 1 : using set_package_path /restore_path facilities of >> numpy.testing, and do a "relative" import >> >> set_package_path() >> from foo import bar >> restore_path() >> >> This means that you can test the package foo without building the whole >> scipy source tree. This is for example used in scipy.linalg tests. It >> also corresponds to DISTUTILS.txt doc in numpy/doc. >> >> 2 : using set_package_path / restore_path facilities, and do an >> "absolute" import: >> >> set_package_path() >> from scipy.foo import bar >> restore_path() >> >> In this case, I don't understand why set_package_path is used ? And >> also, this means that you cannot test the subpackage by itself, you need >> to install the whole scipy tree. This is for example used in >> scipy.signal tests. >> >> Are there any advantages to 2 ? If not, would it be a good idea to >> convert every package to convention 1 ? > > If we can, I'd prefer that we simply make build_src/build_ext --inplace work for > numpy and scipy and then remove set_package_path() and restore_path(). I didn't know about this option of distutils. I've just tried it on some packages, it seems to just put the extensions and python files outside the usual build directory ? This way, we should follow the first method, but without messing sys.path ? What does not work with numpy/scipy related to --inplace ? David From robert.kern at gmail.com Mon Nov 26 11:22:54 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 26 Nov 2007 10:22:54 -0600 Subject: [SciPy-dev] tests, relative import: what's best ? In-Reply-To: <474AA536.904@ar.media.kyoto-u.ac.jp> References: <474A695A.2060502@ar.media.kyoto-u.ac.jp> <474A9C50.7080803@gmail.com> <474AA536.904@ar.media.kyoto-u.ac.jp> Message-ID: <474AF2DE.5060603@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> If we can, I'd prefer that we simply make build_src/build_ext --inplace work for >> numpy and scipy and then remove set_package_path() and restore_path(). > I didn't know about this option of distutils. I've just tried it on some > packages, it seems to just put the extensions and python files outside > the usual build directory ? Yes.
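(For readers unfamiliar with the option, a minimal sketch of such an in-place build, assuming a standard numpy.distutils-based setup.py and running from the package's source directory:

    python setup.py build_src build_ext --inplace

The compiled extension modules are then placed next to the Python sources, so the package can be imported straight from the source tree.)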
> This way, we should follow the first method, > but without messing sys.path ? No, the absolute imports. > What does not work with numpy/scipy related to --inplace ? numpy doesn't put the generated headers (__multiarray_api.h and __ufunc_api.h) in the right place, so it isn't usable for building scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Laurence.Viry at imag.fr Mon Nov 26 11:31:36 2007 From: Laurence.Viry at imag.fr (Laurence Viry) Date: Mon, 26 Nov 2007 17:31:36 +0100 Subject: [SciPy-dev] [Fwd: [SciPy-user] install numpy/scipy with Intel Compilers] Message-ID: <474AF4E8.1020509@imag.fr> -- Laurence Viry CRIP UJF - Projet MIRAGE ( http://mirage.imag.fr ) Laboratoire de Modélisation et Calcul - IMAG tel: 04 76 63 56 34 fax: 04 76 63 12 63 e-mail: Laurence.Viry at imag.fr -------------- next part -------------- An embedded message was scrubbed... From: Laurence Viry Subject: [SciPy-user] install numpy/scipy with Intel Compilers Date: Fri, 23 Nov 2007 19:16:50 +0100 Size: 5873 URL: From couge.chen at gmail.com Mon Nov 26 13:21:55 2007 From: couge.chen at gmail.com (Couge Chen) Date: Mon, 26 Nov 2007 13:21:55 -0500 Subject: [SciPy-dev] Problem about scipy.integrate.odeint Message-ID: <2a05ece60711261021u64fec3b6od78ca1932ce1f26c@mail.gmail.com> Hi all, I met a weird problem when using scipy.integrate.odeint(). It is as follows: from scipy import * y=integrate.odeint(lambda y,t:sin(y)*y,y0=random.rand(N),t=arange(0,100)) Here, I need to solve a field equation and have to set the large number N. If N>= 73300 or N<=70400, it works well. However, if N is within the range(70400,73300), an error message "MemoryError" shows up. I am very confused about that. My python version is "Python2.4.3", scipy version is "scipy 0.5.0.2033". By the way, my computer's basic configuration is CoreDuo 1.83GHz of CPU, 1GB of RAM, and WinXP. Can anyone help me? Thanks a lot! Couge From anand.prabhakar.patil at gmail.com Tue Nov 27 11:32:45 2007 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 27 Nov 2007 08:32:45 -0800 Subject: [SciPy-dev] Sparse BLAS in scipy? Message-ID: <2bc7a5a50711270832u26d51e40q76ad6964abadf8c1@mail.gmail.com> Hi all, Apologies for cross-posting- this one seems to have fallen through the cracks on scipy-user, maybe this is a better place to send it. Is there a set of python-callable sparse BLAS out there yet? I haven't found one, and not for lack of Googling. I'm willing to work on swigging a library if the need exists, but would like some guidance from someone with more experience with these things. First, it looks like there are three implementations that would be good for this purpose: - NIST sparse BLAS - SparseLib++ (superseded by above?) - Boost's ublas with sparse template parameters. Questions: - Do I need to do this, or are wrappers already available? - If so, which library would be best? I lean toward Boost, just because it's so broadly templatized that scripting a wrapper for all the sparse-sparse and sparse-dense versions should be relatively easy. - What should the calling conventions from Python be like? - Any other pointers? (things I should know about numpy.i, for example). Anand From wnbell at gmail.com Tue Nov 27 12:28:50 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 27 Nov 2007 11:28:50 -0600 Subject: [SciPy-dev] Sparse BLAS in scipy?
In-Reply-To: <2bc7a5a50711270832u26d51e40q76ad6964abadf8c1@mail.gmail.com> References: <2bc7a5a50711270832u26d51e40q76ad6964abadf8c1@mail.gmail.com> Message-ID: On Nov 27, 2007 10:32 AM, Anand Patil wrote: > Apologies for cross-posting- this one seems to have fallen through the > cracks on scipy-user, maybe this is a better place to send it. > > Is there a set of python-callable sparse BLAS out there yet? I haven't > found one, and not for lack of Googling. I'm willing to work on > swigging a library if the need exists, but would like some guidance > from someone with more experience with these things. > > First, it looks like there are three implementations that would be > good for this purpose: > > - NIST sparse BLAS > - SparseLib++ (superseded by above?) > - Boost's ublas with sparse template parameters. > > Questions: > - Do I need to do this, or are wrappers already available? > - If so, which library would be best? I lean toward Boost, just > because it's so broadly templatized that scripting a wrapper for all > the sparse-sparse and sparse-dense versions should be relatively > easy. > - What should the calling conventions from Python be like? > - Any other pointers? (things I should know about numpy.i, for example). Have you tried scipy.sparse? http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse -- Nathan Bell wnbell at gmail.com From anand.prabhakar.patil at gmail.com Tue Nov 27 15:02:23 2007 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 27 Nov 2007 12:02:23 -0800 Subject: [SciPy-dev] Scipy-dev Digest, Vol 49, Issue 14 In-Reply-To: References: Message-ID: <2bc7a5a50711271202t81b6ce3oe107c14467382ede@mail.gmail.com> > > Apologies for cross-posting- this one seems to have fallen through the > > cracks on scipy-user, maybe this is a better place to send it. > > > > Is there a set of python-callable sparse BLAS out there yet? I haven't > > found one, and not for lack of Googling. I'm willing to work on > > swigging a library if the need exists, but would like some guidance > > from someone with more experience with these things. > > > > First, it looks like there are three implementations that would be > > good for this purpose: > > > > - NIST sparse BLAS > > - SparseLib++ (superseded by above?) > > - Boost's ublas with sparse template parameters. > > > > Questions: > > - Do I need to do this, or are wrappers already available? > > - If so, which library would be best? I lean toward Boost, just > > because it's so broadly templatized that scripting a wrapper for all > > the sparse-sparse and sparse-dense versions should be relatively > > easy. > > - What should the calling conventions from Python be like? > > - Any other pointers? (things I should know about numpy.i, for example). > > Have you tried scipy.sparse? > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse > Of course, and it's great for general sparse matrices but I didn't see anything in there for triangular matrices. I didn't see anything in linsolve either, though quite a bit could no doubt be extracted from SuperLU. Also, I just like working with the BLAS rather than higher-level interfaces when it's code that I'm going to reuse, so as to have more control over when things get overwritten as opposed to copied, etc. Anand -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wnbell at gmail.com Tue Nov 27 16:52:33 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 27 Nov 2007 15:52:33 -0600 Subject: [SciPy-dev] Scipy-dev Digest, Vol 49, Issue 14 In-Reply-To: <2bc7a5a50711271202t81b6ce3oe107c14467382ede@mail.gmail.com> References: <2bc7a5a50711271202t81b6ce3oe107c14467382ede@mail.gmail.com> Message-ID: On Nov 27, 2007 2:02 PM, Anand Patil wrote: > Of course, and it's great for general sparse matrices but I didn't see > anything in there for triangular matrices. I didn't see anything in linsolve > either, though quite a bit could no doubt be extracted from SuperLU. Do you just want to backsolve with a triangular matrix? If so, that could be added to sparsetools without much trouble. You'd still need to decide where to expose the functionality, and how to handle potential errors (e.g. zero diagonals), but the backend would be simple. > Also, I just like working with the BLAS rather than higher-level interfaces > when it's code that I'm going to reuse, so as to have more control over when > things get overwritten as opposed to copied, etc. Funny you should mention that :) I was planning to change sparsetools to be more BLAS-like where possible. Currently the library returns newly allocated memory for most operations instead of accepting a preallocated array. As you suggest, mimicking BLAS's y = a*A*x + b*y instead of the current y = A*x is helpful in some situations. The "level 3" operations such as matrix matrix products are trickier since you can't anticipate the memory cost of the output in advance. I'm going to see if there's any benefit to breaking these operations up into two pass algorithms (like the SMMP algorithm does). Now that most compilers support OpenMP, I'd also like to parallelize matrix vector multiplication and perhaps other operations. Block CSR/CSC matrices, diagonal matrices, and some simple benchmarking are other items on the todo list. Anyway, if you'd like to improve sparsetools I'd greatly appreciate your help and advice. -- Nathan Bell wnbell at gmail.com From anand.prabhakar.patil at gmail.com Tue Nov 27 17:35:47 2007 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 27 Nov 2007 14:35:47 -0800 Subject: [SciPy-dev] Scipy-dev Digest, Vol 49, Issue 14 In-Reply-To: References: <2bc7a5a50711271202t81b6ce3oe107c14467382ede@mail.gmail.com> Message-ID: <2bc7a5a50711271435l24e974d4i574fdf110c6e6aca@mail.gmail.com> On Nov 27, 2007 1:52 PM, Nathan Bell wrote: Do you just want to backsolve with a triangular matrix? If so, that > could be added to sparsetools without much trouble. You'd still need > to decide where to expose the functionality, and how to handle > potential errors (e.g. zero diagonals), but the backend would be > simple. > That and the analogous forward multiplication with a triangular matrix are the two things I really need right now, yeah. The backend would be simple, yet painful; upper/lower, transpose and side arguments, in addition to accounting for all the possible combinations of matrix formats, and the strides when 'B' is dense, would make for a huge number of bug opportunities. That's what made me think wrapping a library would be less work than writing even 'two' routines from scratch. > The "level 3" operations such as matrix matrix products are trickier > since you can't anticipate the memory cost of the output in advance. > I'm going to see if there's any benefit to breaking these operations > up into two pass algorithms (like the SMMP algorithm does). Good point, I hadn't thought of that.
BLAS-style triangular and symmetric multiplies would save time even so, of course. Now that most compilers support OpenMP, I'd also like to parallelize > matrix vector multiplication and perhaps other operations. That would be very cool. Too bad it's too early for Fortress... > Anyway, if you'd like to improve sparsetools I'd greatly appreciate > your help and advice. Yeah, if I'm going to code this triangular stuff anyway I may as well do it in such a way that's useful to other people. Any thoughts on wrapping a library vs. writing from scratch? Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Wed Nov 28 05:29:20 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 28 Nov 2007 11:29:20 +0100 Subject: [SciPy-dev] documentation generator based on pyparsing Message-ID: <474D4300.6020904@ntc.zcu.cz> Hi, At http://scipy.org/Generate_Documentation you can find a very small documentation generator for NumPy/SciPy modules based on pyparsing package (by Paul McGuire). I am not sure if this belongs where I put it, so feel free to (re)move the page as needed. I hope it might be interesting for you. r. From wnbell at gmail.com Wed Nov 28 19:06:07 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 28 Nov 2007 18:06:07 -0600 Subject: [SciPy-dev] Scipy-dev Digest, Vol 49, Issue 14 In-Reply-To: <2bc7a5a50711271435l24e974d4i574fdf110c6e6aca@mail.gmail.com> References: <2bc7a5a50711271202t81b6ce3oe107c14467382ede@mail.gmail.com> <2bc7a5a50711271435l24e974d4i574fdf110c6e6aca@mail.gmail.com> Message-ID: On Nov 27, 2007 4:35 PM, Anand Patil wrote: > That and the analogous forward multiplication with a triangular matrix are > the two things I really need right now, yeah. The backend would be simple, > yet painful; upper/lower, transpose and side arguments, in addition to > accounting for all the possible combinations of matrix formats, and the > strides when 'B' is dense, would make for a huge number of bug > opportunities. That's what made me think wrapping a library would be less > work than writing even 'two' routines from scratch. I guess I don't fully understand what you need here. What I imagined was two methods for forward/backward solves with lower/upper triangular matrices in CSR format. BTW, for such matrices, isn't this equivalent to a Gauss-Seidel[1] sweep in the appropriate direction? If so, then we might just move something like [2] to scipy.linsolve. Also, are you sure the sparse LU solvers wouldn't do this for you anyway? [1] http://en.wikipedia.org/wiki/Gauss-Seidel_method [2] http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sandbox/multigrid/multigridtools/relaxation.h > Yeah, if I'm going to code this triangular stuff anyway I may as well do it > in such a way that's useful to other people. > > Any thoughts on wrapping a library vs. writing from scratch? About a year ago I implemented sparsetools from scratch. I looked at the same packages you listed before and came to the conclusion that the amount of code needed was rather small and that writing the library to fit well with SciPy was worth the effort. The current bindings automatically upcast their arguments to the appropriate type, which simplifies the Python side a bit. Special functions to sort the column/row indices of a CSR/CSC matrix may not exist in other libraries. Other libraries may not support numpy's integer or complex types out of the box.
I didn't trust that other libraries had efficient implementations of level 3 operations between CSR/CSC types, conversions among the formats, etc. Ultimately, I think making the backend code amenable to SWIG and scipy.sparse saved more effort than starting with an existing library. -- Nathan Bell wnbell at gmail.com From anand.prabhakar.patil at gmail.com Wed Nov 28 20:55:42 2007 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Wed, 28 Nov 2007 17:55:42 -0800 Subject: [SciPy-dev] Sparse BLAS in scipy Message-ID: <2bc7a5a50711281755s670db34br466d899609af53bc@mail.gmail.com> On Nov 28, 2007 4:06 PM, Nathan Bell wrote: > On Nov 27, 2007 4:35 PM, Anand Patil > wrote: > > That and the analogous forward multiplication with a triangular matrix > are > > the two things I really need right now, yeah. The backend would be > simple, > > yet painful; upper/lower, transpose and side arguments, in addition to > > accounting for all the possible combinations of matrix formats, and the > > strides when 'B' is dense, would make for a huge number of bug > > opportunities. That's what made me think wrapping a library would be > less > > work than writing even 'two' routines from scratch. > > I guess I don't fully understand what you need here. What I imagined > was two methods for forward/backward solves with lower/upper > triangular matrices in CSR format. > I was thinking the triangular matrices could be any format, and the 'b' matrix/vector could be any format, in which case the multiplicity of functions rivals that of the ordinary multiplications involving sparse matrices. BTW, for such matrices, isn't this equivalent to a Gauss-Seidel[1] > sweep in the appropriate direction? If so, then we might just move > something like [2] to scipy.linsolve. > I can't see any difference... it would be nice to have the L3 version too, though, and in that case b could be in any format. Also, are you sure the sparse LU solvers wouldn't do this for you anyway? > Nope, will have to look more closely at those. > About a year ago I implemented sparsetools from scratch. I looked at > the same packages you listed before and came to the conclusion that > the amount of code needed was rather small and that writing the > library to fit well with SciPy was worth the effort. > > Ultimately, I think making the backend code amenable to SWIG and > scipy.sparse saved more effort than starting with an existing library. OK, if you've done the research already and come to that conclusion I should just do it! Regardless of the backend (LU, gauss-seidel or from scratch) here's what I'm proposing for the Python interface: B=trimult(A, B, uplo='U', side='L', inplace=True) B=trisolve(A, B, uplo, side, inplace) A is triangular, B is rectangular or a vector. We could trim the uplo and side arguments if there were 'triangular matrix' classes that kept track of whether they're upper or lower, but that would be more trouble than it's worth, no? I'm thinking if inplace=True, B will only be overwritten if it's possible to recycle the memory efficiently, so the only safe usage is to use the return value. How does this sound? Anand -------------- next part -------------- An HTML attachment was scrubbed...
From anand.prabhakar.patil at gmail.com  Wed Nov 28 22:01:25 2007
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Wed, 28 Nov 2007 19:01:25 -0800
Subject: [SciPy-dev] Sparse BLAS in scipy
In-Reply-To: <2bc7a5a50711281755s670db34br466d899609af53bc@mail.gmail.com>
References: <2bc7a5a50711281755s670db34br466d899609af53bc@mail.gmail.com>
Message-ID: <2bc7a5a50711281901i6fbfd265yd9ee253b6e521469@mail.gmail.com>

On Nov 28, 2007 5:55 PM, Anand Patil wrote:
> OK, if you've done the research already and come to that conclusion, I
> should just do it! Regardless of the backend (LU, Gauss-Seidel or from
> scratch), here's what I'm proposing for the Python interface:
>
>     B = trimult(A, B, uplo='U', side='L', inplace=True)
>     B = trisolve(A, B, uplo, side, inplace)

I just sat down to write the 'trimult' routines and realized there's no need: the general sparse matrix-matrix multiplication saves the work anyway! Sorry for being slow. I'll just work on the 'trisolve' one.

From ondrej at certik.cz  Thu Nov 29 05:15:55 2007
From: ondrej at certik.cz (Ondrej Certik)
Date: Thu, 29 Nov 2007 11:15:55 +0100
Subject: [SciPy-dev] documentation generator based on pyparsing
In-Reply-To: <474D4300.6020904@ntc.zcu.cz>
References: <474D4300.6020904@ntc.zcu.cz>
Message-ID: <85b5c3130711290215q48fdc86eh8e0d834320842ce8@mail.gmail.com>

On Nov 28, 2007 11:29 AM, Robert Cimrman wrote:
> Hi,
>
> At http://scipy.org/Generate_Documentation you can find a very small
> documentation generator for NumPy/SciPy modules, based on the pyparsing
> package (by Paul McGuire). I am not sure whether it belongs where I put
> it, so feel free to (re)move the page as needed. I hope it might be
> interesting for you.

Thanks for sharing this.

Ondrej

From jh at physics.ucf.edu  Fri Nov 30 15:11:42 2007
From: jh at physics.ucf.edu (Joe Harrington)
Date: Fri, 30 Nov 2007 15:11:42 -0500
Subject: [SciPy-dev] what the heck is an egg ;-)
Message-ID: <1196453502.6047.400.camel@glup.physics.ucf.edu>

A lot of the add-on software for numpy/scipy is distributed using novel Python install processes (eggs, setup.py) rather than tarballs or the preferred OS-native installers (dpkg, rpm, etc.). I'm sure these are described, perhaps even well, in other places, but since scipy.org is our portal, I think it would be good to have a few-line description of each method on the Download page and a link to more detailed descriptions elsewhere (or on subsidiary pages). An example of installing a package many will want, like mayavi2, would be great.

In particular, many sysadmins (who might be considering a user's request for an install and know nothing about Python) get nervous when package managers other than the native one for the OS start mucking around in the system directories, and they are hesitant to use something like eggs. Some statements describing what these tools do and where they put their files would be good (like a guarantee that they only touch certain directories). How to update and how to completely remove a package would also be good to cover. Is there a way to have them check periodically for updates? Of course, a statement near the top explaining why these methods are used rather than the native OS installers would help a lot.
Thanks,

--jh--

From matthieu.brucher at gmail.com  Fri Nov 30 15:21:42 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 30 Nov 2007 21:21:42 +0100
Subject: [SciPy-dev] what the heck is an egg ;-)
In-Reply-To: <1196453502.6047.400.camel@glup.physics.ucf.edu>
References: <1196453502.6047.400.camel@glup.physics.ucf.edu>
Message-ID:

Hi,

Eggs are built packages: a single file that bundles a package's Python code and, if needed, its compiled binaries. They are made with setuptools, which will more or less replace distutils. When you install setuptools, you get a new script, easy_install.py, that can be run with the name of the package you want to install as a parameter. It then looks on http://pypi.python.org/ by default to find a matching package and its dependencies. A kind of CPAN, but for Python.

Matthieu

2007/11/30, Joe Harrington :
>
> A lot of the add-on software for numpy/scipy is distributed using novel
> Python install processes (eggs, setup.py) rather than tarballs or the
> preferred OS-native installers (dpkg, rpm, etc.). I'm sure these are
> described, perhaps even well, in other places, but since scipy.org is
> our portal, I think it would be good to have a few-line description of
> each method on the Download page and a link to more detailed
> descriptions elsewhere (or on subsidiary pages). An example of
> installing a package many will want, like mayavi2, would be great.
>
> In particular, many sysadmins (who might be considering a user's request
> for an install and know nothing about Python) get nervous when package
> managers other than the native one for the OS start mucking around in
> the system directories, and they are hesitant to use something like
> eggs. Some statements describing what these tools do and where they put
> their files would be good (like a guarantee that they only touch certain
> directories). How to update and how to completely remove a package would
> also be good to cover. Is there a way to have them check periodically
> for updates? Of course, a statement near the top explaining why these
> methods are used rather than the native OS installers would help a lot.
>
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

--
French PhD student
Website : http://miles.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
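Regarding the "where do they put stuff" worry: setuptools also installs a module, pkg_resources, that can report what is installed and where, from Python itself. A small example (numpy is just an arbitrary installed package name here, and the exact output depends on your setup):

    # Ask setuptools' pkg_resources what is installed and where.
    import pkg_resources

    dist = pkg_resources.get_distribution('numpy')  # any installed name
    print(dist.project_name)  # canonical package name
    print(dist.version)       # installed version string
    print(dist.location)      # directory (or egg) the package lives in

    # List every distribution setuptools knows about on this system.
    for d in pkg_resources.working_set:
        print('%s %s (%s)' % (d.project_name, d.version, d.location))

This doesn't answer the update-checking question, but it at least tells a nervous sysadmin exactly which directories a given egg occupies.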