From oliphant at ee.byu.edu Wed Sep 1 10:54:40 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 01 Sep 2004 08:54:40 -0600 Subject: [SciPy-user] IMPORTANT BUG in scipy... rescue me !! In-Reply-To: <1335.196.30.178.181.1093986170.squirrel@glue.jjj.sun.ac.za> References: <24676.1093685033@www3.gmx.net> <31830.1093697422@www13.gmx.net><41330520.3000202@novagrid.com> <41342C48.7020100@novagrid.com> <1335.196.30.178.181.1093986170.squirrel@glue.jjj.sun.ac.za> Message-ID: <4135E2B0.1010802@ee.byu.edu> Brett G. Olivier wrote: >Hi > >I get the following (PIII, Win2k, Python 2.3.2) these are the N values >where the error begins. > >SciPy Numeric >typecode='D' typecode='D' >N=258144 FAIL N=258140 FAIL > >typecode='F' typecode='F' >N=258140 FAIL N=258140 FAIL > >scipy.__version__ = '0.3.1_283.4226' >Numeric.__version__ = '23.3' >Both compiled with MinGW GCC 3.3.3 > >Hope this helps >Brett > > > I've changed some things in CVS to address this problem. I think the problem is due to calling a function that returns a structure (Py_complex) that was compiled with one compiler (MSVC) from another compiler (mingw32). This came up before but I thought we addressed the problem --- apparently a few slipped through. I've placed all needed complex functions in fastumath so that Python functions previously defined are not called. This seemed to fix the problem. I would like feedback from others however... Thanks, -Travis O. From bgoli at sun.ac.za Wed Sep 1 16:56:42 2004 From: bgoli at sun.ac.za (Brett G. Olivier) Date: Wed, 1 Sep 2004 22:56:42 +0200 (SAST) Subject: [SciPy-user] IMPORTANT BUG in scipy... rescue me !! 
In-Reply-To: <4135E2B0.1010802@ee.byu.edu> References: <24676.1093685033@www3.gmx.net><31830.1093697422@www13.gmx.net><41330520.3000202@novagrid.com><41342C48.7020100@novagrid.com><1335.196.30.178.181.1093986170.squirrel@glue.jjj.sun.ac.za> <4135E2B0.1010802@ee.byu.edu> Message-ID: <1194.196.30.178.168.1094072202.squirrel@glue.jjj.sun.ac.za> Hi Thanks, using scipy.__version__ = '0.3.1_284.4247' works for me as well with both types D, F and +-*/ operations. Regards Brett > I've changed some things in CVS to address this problem. > > I think the problem is due to calling a function that returns a > structure (Py_complex) that was compiled with one compiler (MSVC) from > another compiler (mingw32). This came up before but I thought we > addressed the problem --- apparently a few slipped through. > > I've placed all needed complex functions in fastumath so that Python > functions previously defined are not called. This seemed to fix the > problem. > > I would like feedback from others however... > > Thanks, > > -Travis O. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- Brett G. Olivier (bgoli at sun dot ac dot za ) Triple-J Group for Molecular Cell Physiology, Stellenbosch University http://glue.jjj.sun.ac.za/~bgoli Tel +27-21-8085871 Fax +27-21-8085863 Mobile +27-82-7329306 From david.grant at telus.net Wed Sep 1 20:39:13 2004 From: david.grant at telus.net (David Grant) Date: Wed, 01 Sep 2004 17:39:13 -0700 Subject: [SciPy-user] matplotlib slow, biggles problem as well Message-ID: <41366BB1.5020301@telus.net> I'm asking here because the biggles mailing lists are never busy and I know some people here use biggles. I'm trying to get the latest biggles for Python 2.3 to work on my Windows machine. I downloaded the 3 required *.dll's. Where do I put them? I tried c:\WINNT\system32, and I tried d:\python23\ and I tried d:\python23\DLLS but nothing works.
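The DLL question above comes down to where the interpreter (and, on Windows, the loader) actually look for files. As a rough cross-platform illustration of how to check this, here is a small sketch; the `find_file` helper and the DLL name are hypothetical, not part of biggles:

```python
import os
import sys

def find_file(name, extra_dirs=()):
    """Return the first path to `name` found in extra_dirs or on sys.path.

    Hypothetical helper for illustration; returns None if nothing matches.
    """
    candidates = list(extra_dirs) + [p for p in sys.path if os.path.isdir(p)]
    for d in candidates:
        path = os.path.join(d, name)
        if os.path.isfile(path):
            return path
    return None

# E.g. check whether a support DLL is visible from this interpreter
# ("biggles1.dll" is a made-up name):
print(find_file("biggles1.dll", extra_dirs=[sys.prefix]))
```

If the helper prints None for a DLL you have installed, the directory it sits in is simply not one the interpreter searches.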
Please, anyone, help if you can; I'd like to get this working ASAP. By the way, my problem is that I was using matplotlib, but when trying to plot a 300 point array, it chokes up. I could downsample it, but why should I have to? Mathematica doesn't have problems plotting arrays like this. matplotlib doesn't crash necessarily, it just goes extremely slow. I was using plot(V,'ro'). Maybe I need to configure some options? Or maybe it's just an inherent problem with tk or gtk or whatever. Is there another plotting program I could try which would work for sure? -- David J. Grant http://www.davidgrant.ca:81 -------------- next part -------------- A non-text attachment was scrubbed... Name: david.grant.vcf Type: text/x-vcard Size: 200 bytes Desc: not available URL: From elcorto at gmx.net Thu Sep 2 05:46:49 2004 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 2 Sep 2004 11:46:49 +0200 (MEST) Subject: [SciPy-user] matplotlib slow, biggles problem as well References: <41366BB1.5020301@telus.net> Message-ID: <14557.1094118409@www39.gmx.net> > I'm asking here because the biggles mailing lists are never busy and I know > some people here use biggles. > > I'm trying to get the latest biggles for Python 2.3 to work on my > Windows machine. I downloaded the 3 required *.dll's. Where do I put > them? I tried c:\WINNT\system32, and I tried d:\python23\ and I tried > d:\python23\DLLS but nothing works. Please, anyone, help if you can; I'd > like to get this working ASAP. I put the .dll's in a directory where I know for sure that Python searches for modules etc. So I was able to do import biggles without errors. But plotting something crashes the whole Python (i.e. the interpreter program). This "works" :) with both PythonWin and IPython. > > By the way, my problem is that I was using matplotlib, but when trying > to plot a 300 point array, it chokes up. I could downsample it, but why > should I have to? Mathematica doesn't have problems plotting arrays > like this.
matplotlib doesn't crash necessarily, it just goes extremely > slow. I was using plot(V,'ro'). Maybe I need to configure some > options? Or maybe it's just an inherent problem with tk or gtk or > whatever. Is there another plotting program I could try which would > work for sure? > Matplotlib is _very_ slow on my WinXP box as well (tkagg backend). If you love Matlab syntax you should try scipy.xplt, which is very fast (but crashes on my box _sometimes_ for no reason, also killing the whole interpreter, as does biggles above). If you aren't addicted to Matlab syntax too much you should definitely try gnuplot and the Gnuplot module (http://gnuplot-py.sourceforge.net/). It is almost as fast as if you were using gnuplot directly. Very nice. > -- > David J. Grant > http://www.davidgrant.ca:81 > > bye steve -- If you can't beat your computer at chess, try kickboxing. -- Super-cheap DSL rates + WLAN router for EUR 0.-* Switch to GMX now and save http://www.gmx.net/de/go/dsl From cuma at tyszkiewicz.info Thu Sep 2 07:10:26 2004 From: cuma at tyszkiewicz.info (Cuma Tyszkiewicz) Date: Thu, 2 Sep 2004 13:10:26 +0200 Subject: [SciPy-user] how to use scaLAPACK Message-ID: <200409021310.26961.cuma@tyszkiewicz.info> Hello Is it possible to call scaLAPACK routines from SciPy? Best Regards Cuma From arnd.baecker at web.de Thu Sep 2 07:44:00 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 2 Sep 2004 13:44:00 +0200 (CEST) Subject: [SciPy-user] SciPy '04 BoF: Making Python Attractive to General Scientists In-Reply-To: <200408302211.i7UMB8AI003276@oobleck.astro.cornell.edu> References: <200408302211.i7UMB8AI003276@oobleck.astro.cornell.edu> Message-ID: On Mon, 30 Aug 2004, Joe Harrington wrote: > Making Python Attractive to General Scientists > BoF session at SciPy '04 > > Joe Harrington, Cornell > Perry Greenfield, STScI [...] > Please join us for a BoF on Thursday night.
Hi, as I won't be able to come to SciPy '04, I would like to add a few remarks on this: Python in general: a syntax checker which does not run the code. Documentation: I think that for a package like Numeric/SciPy the documentation should also allow for mathematical formulae. I think that ReSt, http://docutils.sourceforge.net/rst.html would be a good candidate. There is an experimental LaTeX preprocessor, http://docutils.sourceforge.net/sandbox/cben/rolehack/README.html Maybe MathML would be another option? Using a new command like `ghelp` a graphical output window could display all this nicely. More generally, a good help browser would be great (documancer, http://documancer.sourceforge.net/ looks most promising and I use it a lot already now!) In particular, for scientific use the display of LaTeX formulae (or MathML?) would be necessary. ((documancer uses wxMozilla, so it should be possible to do that; the above proposed `ghelp` could actually invoke documancer to display the help entry.)) The Maple/Mathematica or Matlab help systems seem to be pretty good examples for something like this. Specifically I think that - more extensive documentation is needed, IMHO. In particular, each command should have at _least_ one example. Moreover, technical/algorithmic background information (with formulae) would be helpful. - There should be an easy way for user-contributed documentation and code snippets (which will/could be integrated into scipy in an automatic (?) way). ((When trying out a command I tend to write a short example to test whether I understood the description correctly. If many users contributed their mini-examples using a well-defined and simple procedure, that could accumulate quite quickly ...)) Best, Arnd P.S.: and of course the `SciPyWorkBench', http://www.scipy.net/pipermail/ipython-user/2004-May/000298.html might be one important thing to many users.
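Arnd's point that each command should have at least one example can be made mechanical with doctests: examples embedded in a docstring remain runnable, so the documentation cannot silently drift out of date. A minimal sketch (the `sinc` function is only an illustration, not a SciPy API):

```python
import doctest
import math

def sinc(x):
    """Normalized sinc function, sin(pi*x)/(pi*x).

    Examples
    --------
    >>> sinc(0.0)
    1.0
    >>> round(sinc(0.5), 5)
    0.63662
    """
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Run the docstring examples mechanically; a failure means the
# documentation and the code have diverged.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(sinc, "sinc"):
    runner.run(test)
print("doctest failures:", runner.failures)
```

The same idea scales to whole modules via `doctest.testmod()`.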
From Giovanni.Samaey at cs.kuleuven.ac.be Thu Sep 2 08:33:48 2004 From: Giovanni.Samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Thu, 02 Sep 2004 14:33:48 +0200 Subject: ***[Possible UCE]*** Re: [SciPy-user] odeint -- segmentation fault In-Reply-To: <4134F97E.4000203@ee.byu.edu> References: <41337FB3.1050107@cs.kuleuven.ac.be> <413392C0.7020200@ucsd.edu> <4134C31D.1070602@cs.kuleuven.ac.be> <4134F97E.4000203@ee.byu.edu> Message-ID: <4137132C.2090504@cs.kuleuven.ac.be> Travis E. Oliphant wrote: > I think I've fixed this problem. The problem seems to be that lsoda > (written in Fortran) made heavy use of common blocks which made it > difficult to use in a re-entrant fashion (your dydt function itself > calls odeint). This is what I suspected to be the problem (the re-entrant use). I am really thankful for this fix; it helped me a lot. Do you suspect that deeper recursions could cause problems? (Just wondering, I don't need this for now.) > ODEPACK provided a function to save and restore the common block which > I sprinkled throughout the original Fortran code so that the > functions called (and jacobians) could use odeint without ruining the > needed common block for the rest of the algorithm. > > Your test script now works on my system and the fixes are in CVS. Thank you. The test script works here too now; and my "real" code works too! One question concerning the CVS version of scipy. If I do regular updates, do I have to rebuild scipy completely, or does the build script take care of checking what changed? And can I break running code by updating regularly?
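The re-entrant pattern discussed here, a right-hand-side function that itself calls `odeint`, can be sketched as follows, assuming a SciPy that includes Travis's save-and-restore fix; the equations are arbitrary illustrations chosen so the answer is known in closed form:

```python
import numpy as np
from scipy.integrate import odeint

def inner_rhs(z, t):
    # dz/dt = -z, so z(t) = exp(-t)
    return -z

def outer_rhs(y, t):
    # This right-hand side itself calls odeint -- the re-entrant use
    # that used to clobber lsoda's common blocks.
    if t == 0.0:
        z = 1.0  # avoid a degenerate inner integration over [0, 0]
    else:
        z = odeint(inner_rhs, [1.0], [0.0, t])[-1, 0]  # ~ exp(-t)
    return -z * y

t = np.linspace(0.0, 5.0, 11)
y = odeint(outer_rhs, [1.0], t)
# Analytically dy/dt = -exp(-t)*y gives y(t) = exp(exp(-t) - 1).
print(y[-1, 0])
```

Each nested call restarts lsoda with its own state, which is exactly what the common-block save/restore makes safe.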
Best, Giovanni > > I hope this helps > > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Giovanni Samaey http://www.cs.kuleuven.ac.be/~giovanni/ Katholieke Universiteit Leuven email: giovanni at cs.kuleuven.ac.be Departement Computerwetenschappen phone: +32-16-327081 Celestijnenlaan 200A, B-3001 Heverlee, Belgium fax: +32-16-327996 Office: A04.36 From s.e.murdock at soton.ac.uk Thu Sep 2 15:22:24 2004 From: s.e.murdock at soton.ac.uk (Stuart Murdock) Date: Thu, 02 Sep 2004 19:22:24 +0000 Subject: [SciPy-user] First few eigenvectors of a numarray array Message-ID: <413772F0.9010801@soton.ac.uk> Hi I have a 10000 by 10000 square numarray array which I have obtained using numarray. I need to obtain the first 10 eigenvalues and eigenvectors of this so I don't want to have to calculate all eigenvectors of the matrix. Is anyone aware of any pythonic packages which can calculate, let's say, the 10 eigenvectors corresponding to the top 10 eigenvalues of a numarray array? There are a few functions which calculate all of the eigenvectors e.g. eig eigenvectors but I only want to calculate the top few. Thanks Stuart -- Stuart Murdock Ph.D, Research Fellow, Dept. of Chemistry / E-Science, University of Southampton, Highfield, Southampton, SO17 1BJ, United Kingdom http://www.biosimgrid.org From david.grant at telus.net Thu Sep 2 15:35:25 2004 From: david.grant at telus.net (David Grant) Date: Thu, 02 Sep 2004 12:35:25 -0700 Subject: [SciPy-user] First few eigenvectors of a numarray array In-Reply-To: <413772F0.9010801@soton.ac.uk> References: <413772F0.9010801@soton.ac.uk> Message-ID: <413775FD.6090109@telus.net> When you say "top 10" do you mean the eigenvalues with the largest or smallest values? I would also be interested in knowing if there is any mathematical way of doing this.
Sometimes, for example in molecular simulations, you only want to calculate the ground state energy and you don't care about the rest... Dave Stuart Murdock wrote: > Hi > > I have a 10000 by 10000 square numarray array which I have obtained > using numarray. I need to obtain the first 10 eigenvalues and > eigenvectors of this so I dont want to have to calculate all > eigenvectors of the matrix. Is anyone aware of any pythonic packages > which > can calculate, lets say, the 10 eigenvectors corresponding to the top > 10 eigenvalues of a numarray array. > > There are a few functions which calculate all of the eigenvectors e.g. > > eig > eigenvectors > > but I only want to calculate the top few. > > Thanks > > Stuart > -- David J. Grant http://www.davidgrant.ca:81 -------------- next part -------------- A non-text attachment was scrubbed... Name: david.grant.vcf Type: text/x-vcard Size: 200 bytes Desc: not available URL: From s.e.murdock at soton.ac.uk Thu Sep 2 15:50:57 2004 From: s.e.murdock at soton.ac.uk (Stuart Murdock) Date: Thu, 02 Sep 2004 19:50:57 +0000 Subject: [SciPy-user] First few eigenvectors of a numarray array References: <413772F0.9010801@soton.ac.uk> <413775FD.6090109@telus.net> Message-ID: <413779A1.6090106@soton.ac.uk> Hi Dave David Grant wrote: > When you say "top 10" do you mean the eigenvalues with the largest or > smallest values? > Primarily I would be concerned with the eigenvectors associated with the largest eigenvalues as presently I am interested in Principal Component Analysis of biomolecular simulation trajectories. There are ways to approximate the top eigenvector / eigenvalue, then get the next and so on but I was wondering if there were any packages with those types of algorithms already implemented. Thanks Stuart > I would also be interested in knowing if there is any mathematical way > of doing this.
Sometimes for example in molecular simulations you > only want to calculate the ground state energy, for example, and you > don't care about the rest... > > Dave > > > Stuart Murdock wrote: > >> Hi >> >> I have a 10000 by 10000 square numarray array which I have obtained >> using numarray. I need to obtain the first 10 eigenvalues and >> eigenvectors of this so I dont want to have to calculate all >> eigenvectors of the matrix. Is anyone aware of any pythonic packages >> which >> can calculate, lets say, the 10 eigenvectors corresponding to the top >> 10 eigenvalues of a numarray array. >> >> There are a few functions which calculate all of the eigenvectors e.g. >> >> eig >> eigenvectors >> >> but I only want to calculate the top few. >> >> Thanks >> >> Stuart >> > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -- Stuart Murdock Ph.D, Research Fellow, Dept. of Chemistry / E-Science, University of Southampton, Highfield, Southampton, SO17 1BJ, United Kingdom http://www.biosimgrid.org From gazzar at email.com Fri Sep 3 00:18:02 2004 From: gazzar at email.com (Gary Ruben) Date: Fri, 03 Sep 2004 14:18:02 +1000 Subject: [SciPy-user] First few eigenvectors of a numarray array Message-ID: <20040903041802.36943164002@ws1-4.us4.outblaze.com> Hi Stuart, It may well not be relevant to your needs, but since you mentioned you were doing PCA, you might want to look at the MDP package which was announced on this list a few days ago. http://mdp-toolkit.sourceforge.net/ Gary ----- Original Message ----- From: Stuart Murdock Date: Thu, 02 Sep 2004 19:50:57 +0000 To: SciPy Users List Subject: Re: [SciPy-user] First few eigenvectors of a numarray array > Hi Dave > > David Grant wrote: > > > When you say "top 10" do you mean the eigenvalues with the largest or > > smallest values? 
> > > > Primarily I would be concerned with the eigenvectors associated with the > largest eigenvalues as presently > I am interested in Principal Component Analysis of biomolecular > simulation trajectories. There are ways to approximate > the top eigenvector / eigenvalue, then get the next and so on but I was > wondering if there were any packages with those > types of algorithms already implemented. > > Thanks > > Stuart > > > I would also be interested in knowing if there is any mathematical way > > of doing this. Sometimes for example in molecular simulations you > > only want to calculate the ground state energy, for example, and you > > don't care about the rest... > > > > Dave > > > > > > Stuart Murdock wrote: > > > >> Hi > >> > >> I have a 10000 by 10000 square numarray array which I have obtained > >> using numarray. I need to obtain the first 10 eigenvalues and > >> eigenvectors of this so I dont want to have to calculate all > >> eigenvectors of the matrix. Is anyone aware of any pythonic packages > >> which > >> can calculate, lets say, the 10 eigenvectors corresponding to the top > >> 10 eigenvalues of a numarray array. > >> > >> There are a few functions which calculate all of the eigenvectors e.g. > >> > >> eig > >> eigenvectors > >> > >> but I only want to calculate the top few. > >> > >> Thanks > >> > >> Stuart > >> > > > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.net > >http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > -- > > Stuart Murdock Ph.D, > Research Fellow, > Dept. 
of Chemistry / E-Science, > University of Southampton, > Highfield, Southampton, > SO17 1BJ, United Kingdom > > http://www.biosimgrid.org > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- ___________________________________________________________ Sign-up for Ads Free at Mail.com http://promo.mail.com/adsfreejump.htm From zunzun at zunzun.com Fri Sep 3 10:29:32 2004 From: zunzun at zunzun.com (James R. Phillips) Date: Fri, 3 Sep 2004 10:29:32 -400 Subject: [SciPy-user] scipy.stats.medianscore missing from SciPy Message-ID: <41387fcca04556.85124172@mercury.sabren.com> The scipy.stats module no longer contains the medianscore method. Is there a reason for the removal? The internal documentation at the top of the stats.py file says it should be there. James Phillips http://zunzun.com From p.berkes at biologie.hu-berlin.de Fri Sep 3 11:07:06 2004 From: p.berkes at biologie.hu-berlin.de (p.berkes at biologie.hu-berlin.de) Date: Fri, 3 Sep 2004 17:07:06 +0200 (CEST) Subject: [SciPy-user] First few eigenvectors of a numarray array In-Reply-To: <20040903041802.36943164002@ws1-4.us4.outblaze.com> Message-ID: Hi Stuart, in fact MDP contains a function (mdp.utils.symeig), which provides an interface to some LAPACK routines that are able to compute only the first eigenvectors of a matrix, given that it is symmetrical positive definite (which should be the case, since you're doing PCA). Pietro. On Fri, 3 Sep 2004, Gary Ruben wrote: > Hi Stuart, > > It may well not be relevant to your needs, but since you mentioned you were doing PCA, you might want to look at the MDP package which was announced on this list a few days ago. 
> http://mdp-toolkit.sourceforge.net/ > > Gary > > ----- Original Message ----- > From: Stuart Murdock > Date: Thu, 02 Sep 2004 19:50:57 +0000 > To: SciPy Users List > Subject: Re: [SciPy-user] First few eigenvectors of a numarray array > > > Hi Dave > > > > David Grant wrote: > > > > > When you say "top 10" do you mean the eigenvalues with the largest or > > > smallest values? > > > > > > > Primarily I would be concerned with the eigenvectors associated with the > > largest eigenvalues as presently > > I am interested in Principal Component Analysis of biomolecular > > simulation trajectories. There are ways to approximate > > the top eigenvector / eigenvalue, then get the next and so on but I was > > wondering if there were any packages with those > > types of algorithms already implemented. > > > > Thanks > > > > Stuart > > > > > I would also be interested in knowing if there is any mathematical way > > > of doing this. Sometimes for example in molecular simulations you > > > only want to calculate the ground state energy, for example, and you > > > don't care about the rest... > > > > > > Dave > > > > > > > > > Stuart Murdock wrote: > > > > > >> Hi > > >> > > >> I have a 10000 by 10000 square numarray array which I have obtained > > >> using numarray. I need to obtain the first 10 eigenvalues and > > >> eigenvectors of this so I dont want to have to calculate all > > >> eigenvectors of the matrix. Is anyone aware of any pythonic packages > > >> which > > >> can calculate, lets say, the 10 eigenvectors corresponding to the top > > >> 10 eigenvalues of a numarray array. > > >> > > >> There are a few functions which calculate all of the eigenvectors e.g. > > >> > > >> eig > > >> eigenvectors > > >> > > >> but I only want to calculate the top few. 
> > >> > > >> Thanks > > >> > > >> Stuart > > >> > > > > > > > > >_______________________________________________ > > >SciPy-user mailing list > > >SciPy-user at scipy.net > > >http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > > > > > -- > > > > Stuart Murdock Ph.D, > > Research Fellow, > > Dept. of Chemistry / E-Science, > > University of Southampton, > > Highfield, Southampton, > > SO17 1BJ, United Kingdom > > > > http://www.biosimgrid.org > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > From s.e.murdock at soton.ac.uk Fri Sep 3 14:13:50 2004 From: s.e.murdock at soton.ac.uk (Stuart Murdock) Date: Fri, 03 Sep 2004 18:13:50 +0000 Subject: [SciPy-user] First few eigenvectors of a numarray array References: Message-ID: <4138B45E.70804@soton.ac.uk> Hi Pietro, Gary p.berkes at biologie.hu-berlin.de wrote: >Hi Stuart, > >in fact MDP contains a function (mdp.utils.symeig), which provides >an interface to some LAPACK routines that are able to compute only >the first eigenvectors of a matrix, given that it is symmetrical >positive definite (which should be the case, since you're doing >PCA). > > Thanks This looks like the kind of thing I am after. Brilliant !!! Stuart >Pietro. > > >On Fri, 3 Sep 2004, Gary Ruben wrote: > > > >>Hi Stuart, >> >>It may well not be relevant to your needs, but since you mentioned you were doing PCA, you might want to look at the MDP package which was announced on this list a few days ago. >>http://mdp-toolkit.sourceforge.net/ >> >>Gary >> >>----- Original Message ----- >>From: Stuart Murdock >>Date: Thu, 02 Sep 2004 19:50:57 +0000 >>To: SciPy Users List >>Subject: Re: [SciPy-user] First few eigenvectors of a numarray array >> >> >> >>>Hi Dave >>> >>>David Grant wrote: >>> >>> >>> >>>>When you say "top 10" do you mean the eigenvalues with the largest or >>>>smallest values? 
>>>> >>>> >>>> >>>Primarily I would be concerned with the eigenvectors associated with the >>>largest eigenvalues as presently >>>I am interested in Principal Component Analysis of biomolecular >>>simulation trajectories. There are ways to approximate >>>the top eigenvector / eigenvalue, then get the next and so on but I was >>>wondering if there were any packages with those >>>types of algorithms already implemented. >>> >>>Thanks >>> >>>Stuart >>> >>> >>> >>>>I would also be interested in knowing if there is any mathematical way >>>>of doing this. Sometimes for example in molecular simulations you >>>>only want to calculate the ground state energy, for example, and you >>>>don't care about the rest... >>>> >>>>Dave >>>> >>>> >>>>Stuart Murdock wrote: >>>> >>>> >>>> >>>>>Hi >>>>> >>>>>I have a 10000 by 10000 square numarray array which I have obtained >>>>>using numarray. I need to obtain the first 10 eigenvalues and >>>>>eigenvectors of this so I dont want to have to calculate all >>>>>eigenvectors of the matrix. Is anyone aware of any pythonic packages >>>>>which >>>>>can calculate, lets say, the 10 eigenvectors corresponding to the top >>>>>10 eigenvalues of a numarray array. >>>>> >>>>>There are a few functions which calculate all of the eigenvectors e.g. >>>>> >>>>>eig >>>>>eigenvectors >>>>> >>>>>but I only want to calculate the top few. >>>>> >>>>>Thanks >>>>> >>>>>Stuart >>>>> >>>>> >>>>> >>>>_______________________________________________ >>>>SciPy-user mailing list >>>>SciPy-user at scipy.net >>>>http://www.scipy.net/mailman/listinfo/scipy-user >>>> >>>> >>>> >>>> >>>-- >>> >>>Stuart Murdock Ph.D, >>>Research Fellow, >>>Dept. 
of Chemistry / E-Science, >>>University of Southampton, >>>Highfield, Southampton, >>>SO17 1BJ, United Kingdom >>> >>>http://www.biosimgrid.org >>> >>> >>> >>>_______________________________________________ >>>SciPy-user mailing list >>>SciPy-user at scipy.net >>>http://www.scipy.net/mailman/listinfo/scipy-user >>> >>> >> >> > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > >. > > > -- Stuart Murdock Ph.D, Research Fellow, Dept. of Chemistry / E-Science, University of Southampton, Highfield, Southampton, SO17 1BJ, United Kingdom http://www.biosimgrid.org From zunzun at zunzun.com Fri Sep 3 15:00:47 2004 From: zunzun at zunzun.com (James R. Phillips) Date: Fri, 3 Sep 2004 15:00:47 -0400 Subject: [SciPy-user] scipy.stats.samplevar documentation incorrect? Message-ID: <4138bf5fc76289.53874235@mercury.sabren.com> In the scipy.stats module, the documentation for the method samplevar is identical to that for the method samplestd. I think it ought to read, "sample variance" rather than "sample standard deviation". James Phillips http://zunzun.com From zunzun at zunzun.com Fri Sep 3 15:05:03 2004 From: zunzun at zunzun.com (James R. Phillips) Date: Fri, 3 Sep 2004 15:05:03 -0400 Subject: [SciPy-user] scipy.stats tmax and tmin method signatures Message-ID: <4138c05f8a5084.31043064@mercury.sabren.com> scipy.stats.tmin() has a default lower limit of None, whereas scipy.stats.tmax() does not have a default upper limit of None. I think they should both be None, is that correct?
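On the top-few-eigenvectors thread above: besides the `symeig` interface Pietro mentions, later SciPy releases expose ARPACK as `scipy.sparse.linalg.eigsh`, which computes only the k largest (or smallest) eigenpairs of a symmetric matrix without ever forming the full decomposition. A sketch, assuming such a SciPy is available (this API postdates the thread):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 200))
sym = a @ a.T  # symmetric positive semi-definite, as in PCA

# Only the ten eigenpairs with the largest (algebraic) eigenvalues are
# computed; the other 190 are never formed.
vals, vecs = eigsh(sym, k=10, which="LA")
print(vals.shape, vecs.shape)  # (10,) (200, 10)
```

For a dense 10000x10000 covariance matrix this Lanczos-style approach touches the matrix only through matrix-vector products, which is what makes the top-few computation cheap.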
James Phillips http://zunzun.com From jdhunter at ace.bsd.uchicago.edu Fri Sep 3 14:26:20 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 03 Sep 2004 13:26:20 -0500 Subject: [SciPy-user] matplotlib slow, biggles problem as well In-Reply-To: <41366BB1.5020301@telus.net> (David Grant's message of "Wed, 01 Sep 2004 17:39:13 -0700") References: <41366BB1.5020301@telus.net> Message-ID: >>>>> "David" == David Grant writes: David> By the way, my problem is that I was using matplotlib, but David> when trying to plot a 300 point array, it chokes up. I David> could downsample it, but why should I have to? Mathematica David> doesn't have problems plotting arrays like this. David> matpoltlib doesn't crash necessarily, just goes exremely David> slow. I was using plot(V,'ro'). Maybe I need to configure David> some options? Or maybe it's just an inherent problem with David> tk or gtk or whatever. Is there another plotting program I David> could try which would work for sure? I routinely do 512x512 arrays with imshow in matplotlib on linux and win32 with no performance problems on a modern p4. For 2D arrays, imshow is much faster than pcolor for large arrays. See the image_demo* examples at http://matplotlib.sf.net/examples For 1D arrays with plot and scatter, my system handles 20,000-50,000 (and larger) length arrays efficiently (order of 1 second to plot). If you are having trouble with 300 point arrays, something is probably wrong with your computer or matplotlib setup. Make sure you read the FAQ http://matplotlib.sf.net/faq.html#SLOW which explains the most common configuration problems - the main problem comes from trying to use numarray with a matplotlib compiled for numeric, and vice-versa. If you keep having troubles, please post a script, your platform info, your python / matplotlib / numeric version info to the matplotlib list. 
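John's numbers are straightforward to try with the non-interactive Agg backend, which needs no display; a modern matplotlib is assumed, and the array sizes match those mentioned in his reply:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; no GUI backend involved
import matplotlib.pyplot as plt
import numpy as np

# A 512x512 array via imshow, which is much faster than pcolor for
# large 2-D arrays.
img = np.random.rand(512, 512)
fig, ax = plt.subplots()
ax.imshow(img)
fig.savefig("imshow_demo.png")

# A 20,000-point 1-D plot in the plot(V, 'ro') style from the thread.
v = np.sin(np.linspace(0.0, 60.0, 20000))
fig2, ax2 = plt.subplots()
ax2.plot(v, "ro", markersize=1)
fig2.savefig("plot_demo.png")
plt.close("all")
```

If plots of this size are slow, the backend or numeric-library mismatch John points to in the FAQ is the usual culprit, not the data volume.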
Cheers, JDH From secchi at sssup.it Sat Sep 4 09:25:15 2004 From: secchi at sssup.it (Angelo Secchi) Date: Sat, 4 Sep 2004 15:25:15 +0200 Subject: [SciPy-user] Installing problem: newbie with a new machine Message-ID: <20040904152515.489c772b.secchi@sssup.it> Hi, I have a working scipy installation on a 32bit Suse machine and I'm trying to install it on a dual opteron machine with gentoo. I obtain this error error: Command "/usr/x86_64-pc-linux-gnu/gcc-bin/3.3/g77 -shared build/temp.linux-x86_64-2.3/build/src/build/src/fblasmodule.o build/temp.linux-x86_64-2.3/build/src/fortranobject.o build/temp.linux-x86_64-2.3/Lib/linalg/src/fblaswrap.o -L/usr/local/Linux_HAMMER64SSE2_2/lib/ -Lbuild/temp.linux-x86_64-2.3 -llapack -lptf77blas -lptcblas-latlas -lg2c -o build/lib.linux-x86_64-2.3/scipy/linalg/fblas.so" failed with exit status 1 Any suggestion? Thanks a. From cookedm at physics.mcmaster.ca Sun Sep 5 23:11:44 2004 From: cookedm at physics.mcmaster.ca (David M.Cooke) Date: Sun, 5 Sep 2004 23:11:44 -0400 Subject: [SciPy-user] Installing problem: newbie with a new machine In-Reply-To: <20040904152515.489c772b.secchi@sssup.it> References: <20040904152515.489c772b.secchi@sssup.it> Message-ID: <7B11D318-FFB2-11D8-84BB-000A95BE5E0A@physics.mcmaster.ca> On Sep 4, 2004, at 09:25, Angelo Secchi wrote: > Hi, > I have a working scipy installation on a 32bit Suse machine and I'm > trying to install it on a dual opteron machine with gentoo. I obtain > this error "trying to install" -> I presume from scratch? Not copying the binaries (I'm just a little confused by your use of "have a working installation and trying to install it"). I presume you're trying to make a 64-bit version for the opteron? 
> error: Command "/usr/x86_64-pc-linux-gnu/gcc-bin/3.3/g77 -shared > build/temp.linux-x86_64-2.3/build/src/build/src/fblasmodule.o > build/temp.linux-x86_64-2.3/build/src/fortranobject.o > build/temp.linux-x86_64-2.3/Lib/linalg/src/fblaswrap.o > -L/usr/local/Linux_HAMMER64SSE2_2/lib/ -Lbuild/temp.linux-x86_64-2.3 > -llapack -lptf77blas -lptcblas-latlas -lg2c -o > build/lib.linux-x86_64-2.3/scipy/linalg/fblas.so" failed with exit > status 1 There's not a lot here to go on. I'm not familiar with gentoo, but check if there is a flag for emerge to show all the steps it takes while compiling (-v or something). The message above just shows what command failed, but not the more informative error message that g77 probably printed out. My guess (because this is a common problem :-) is that you're missing part of LAPACK (you need the full version, as well as the optimized bits that ATLAS provides). |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From rkern at ucsd.edu Mon Sep 6 04:08:31 2004 From: rkern at ucsd.edu (Robert Kern) Date: Mon, 06 Sep 2004 01:08:31 -0700 Subject: [SciPy-user] Installing problem: newbie with a new machine In-Reply-To: <7B11D318-FFB2-11D8-84BB-000A95BE5E0A@physics.mcmaster.ca> References: <20040904152515.489c772b.secchi@sssup.it> <7B11D318-FFB2-11D8-84BB-000A95BE5E0A@physics.mcmaster.ca> Message-ID: <413C1AFF.7000904@ucsd.edu> David M.Cooke wrote: [snip] > My guess (because this is a common problem :-) is that you're missing > part of LAPACK (you need the full version, as well as the optimized > bits that ATLAS provides). C.f. http://math-atlas.sourceforge.net/errata.html#completelp -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From Jeremy.Martin at noaa.gov Mon Sep 6 22:12:32 2004 From: Jeremy.Martin at noaa.gov (Jeremy Martin) Date: Tue, 07 Sep 2004 02:12:32 GMT Subject: [SciPy-user] Gplt questions Message-ID: <88d4b86b4a.86b4a88d4b@noaa.gov crmsg1.crh.noaa.gov> List, I've got two questions/problems that I was wondering if anyone can help me with. 1) Is there any way to have gplt run in the background and save to a png or gif file without the output displaying on the screen? 2) I've noticed that when I save a graph to an image file, the icons, colors, etc. do not match what was originally displayed on the screen. Does anyone know what causes this behavior... or, more importantly, are there any workarounds for it? Thanks! Jeremy Martin From aisaac at american.edu Mon Sep 6 22:46:45 2004 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Sep 2004 22:46:45 -0400 (Eastern Daylight Time) Subject: [SciPy-user] Gplt questions In-Reply-To: <88d4b86b4a.86b4a88d4b@noaa.gov crmsg1.crh.noaa.gov> References: <88d4b86b4a.86b4a88d4b@noaa.gov crmsg1.crh.noaa.gov> Message-ID: On Tue, 07 Sep 2004, Jeremy Martin apparently wrote: > 2) I've noticed that when I save a graph to an image file, the icons, > colors, etc. do not match what was originally displayed on the screen. > Does anyone know what causes this behavior... or, more importantly, are > there any workarounds for it? This is really a gnuplot question. From the gnuplot help topics you'll want: terminal, linestyle, test. Roughly: that's just not how gnuplot works. But: display onscreen is ambiguous: how about using ghostscript as your viewer? hth, Alan Isaac From Jeremy.Martin at noaa.gov Tue Sep 7 00:21:55 2004 From: Jeremy.Martin at noaa.gov (Jeremy Martin) Date: Tue, 07 Sep 2004 04:21:55 GMT Subject: [SciPy-user] RE: Gplt Questions Message-ID: <877ca8ab18.8ab18877ca@noaa.gov crmsg1.crh.noaa.gov> Thanks for the reply about looking more into gnuplot for guidance, but I think I should clarify my questions a tad.
First, I'm writing a wxPython (v 2.4) based program on Windows XP. I guess my first question was if it was possible to redirect to the NULL device using the output command so that the graph will not display on screen... and I can retrieve the image file at a later time. I'm still not sure if I understand why the point types would be completely different on the PNG image than what is displayed while the graph is drawn. Are you saying that maybe it's Windows having trouble displaying the image? Thanks for helping me on this, Jeremy Martin From jdhunter at ace.bsd.uchicago.edu Tue Sep 7 06:57:57 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 07 Sep 2004 05:57:57 -0500 Subject: [SciPy-user] RE: Gplt Questions In-Reply-To: <877ca8ab18.8ab18877ca@noaa.gov crmsg1.crh.noaa.gov> ("Jeremy Martin"'s message of "Tue, 07 Sep 2004 04:21:55 GMT") References: <877ca8ab18.8ab18877ca@noaa.gov crmsg1.crh.noaa.gov> Message-ID: >>>>> "Jeremy" == Jeremy Martin writes: Jeremy> I'm still not sure if I understand why the point types Jeremy> would be completely different on the PNG image than what is Jeremy> displayed while the graph is drawn. Are you saying that Jeremy> maybe it's Windows having trouble displaying the image? If it is important to you to have identical images in your wx window and hard-copy PNG, you may want to take a look at matplotlib with the wxagg backend. This uses the same image renderer ("agg") to draw to a wx canvas and dump to PNG. With the exception of changes in resolution (you may want your hardcopy to be a higher resolution than your screen copy), the images will otherwise be identical. matplotlib with the wxagg backend will work "out of the box" with the enthought edition of python, or any python install that has wxpython + numeric or numarray.
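[Editor's note: a minimal script-side sketch of the off-screen rendering John describes, using the pure Agg backend so nothing appears on screen. The figure contents and filename are invented for illustration, and the calls shown are the modern matplotlib pyplot API rather than the 2004 interface.]

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")              # off-screen raster renderer; no window is opened
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], "o-")
ax.set_title("rendered off-screen")

out_path = os.path.join(tempfile.gettempdir(), "offscreen_demo.png")
fig.savefig(out_path, dpi=100)     # same Agg renderer a WXAgg canvas would use
print(out_path)
```

Because the PNG is produced by the same renderer that would paint the interactive canvas, markers and colors in the file match the on-screen plot, which is exactly the consistency gnuplot's terminal-dependent model cannot promise.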
After installing, edit c:\python23\share\matplotlib\.matplotlibrc and set in that file

backend : WXAgg

The src distribution *.zip has a lot of examples in the examples subdirectory, which can also be found at http://matplotlib.sf.net/examples. JDH From heiland at indiana.edu Tue Sep 7 08:06:02 2004 From: heiland at indiana.edu (Randy Heiland) Date: Tue, 7 Sep 2004 07:06:02 -0500 Subject: [SciPy-user] Disney pkg mentioned at SciPy04? Message-ID: <000c01c494d3$0bd78320$c686fea9@escher> Can someone tell me the name of the Disney graphics/modeling (open source?) package that was mentioned (by Eric Jones, I think)? Thanks, Randy From jdhunter at ace.bsd.uchicago.edu Tue Sep 7 07:17:57 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 07 Sep 2004 06:17:57 -0500 Subject: [SciPy-user] Disney pkg mentioned at SciPy04? In-Reply-To: <000c01c494d3$0bd78320$c686fea9@escher> ("Randy Heiland"'s message of "Tue, 7 Sep 2004 07:06:02 -0500") References: <000c01c494d3$0bd78320$c686fea9@escher> Message-ID: >>>>> "Randy" == Randy Heiland writes: Randy> Can someone tell me the name of the Disney Randy> graphics/modeling (open source?) package that was mentioned Randy> (by Eric Jones, I think)? Panda3D From heiland at indiana.edu Tue Sep 7 08:17:30 2004 From: heiland at indiana.edu (Randy Heiland) Date: Tue, 7 Sep 2004 07:17:30 -0500 Subject: [SciPy-user] Disney pkg mentioned at SciPy04?
In-Reply-To: Message-ID: <000d01c494d4$a613a760$c686fea9@escher> Yep, that was it. My note-taking had stopped at that point I guess. Thanks John! --Randy > -----Original Message----- > From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] > On Behalf Of John Hunter > Sent: Tuesday, September 07, 2004 6:18 AM > To: SciPy Users List > Cc: scipy-user at scipy.org > Subject: Re: [SciPy-user] Disney pkg mentioned at SciPy04? > > >>>>> "Randy" == Randy Heiland writes: > > Randy> Can someone tell me the name of the Disney > Randy> graphics/modeling (open source?) package that was mentioned > Randy> (by Eric Jones, I think)? > > Panda3D > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From eric at enthought.com Wed Sep 8 01:16:55 2004 From: eric at enthought.com (eric jones) Date: Tue, 07 Sep 2004 22:16:55 -0700 Subject: [SciPy-user] Disney pkg mentioned at SciPy04? In-Reply-To: <000d01c494d4$a613a760$c686fea9@escher> References: <000d01c494d4$a613a760$c686fea9@escher> Message-ID: <413E95C7.7090207@enthought.com> Hey Randy, I saw this paper at PyCon. Here is a reference to the paper. http://www.python.org/pycon/dc2004/papers/29/ I specifically mentioned its ability to reload python modules on the fly and automatically update any "live" objects in the system to the new class definitions that were just reloaded. This can streamline development of GUI frameworks (and others) quite a bit. We haven't gotten this into Envisage yet, but want to. If you search the paper referenced above for a section called "Redefining methods" you'll find a brief description of the feature. I just looked through the source code, and the magic appears to happen in the "Finder" module here: \panda\direct\src\showbase\Finder.py It looks like "\panda\direct\src\fsm\State.py" might also play a role.
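[Editor's note: the core of the trick Eric describes — pointing live instances at a freshly loaded class definition — can be sketched in a few lines of plain Python. The class and function names here are invented for illustration; Panda3D's Finder module does considerably more, including actually reloading the source module first.]

```python
import gc

class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

# A new version of the class, standing in for what a module reload
# would produce.
class GreeterV2:
    def greet(self):
        return "hello, v2"

def rebind_instances(old_cls, new_cls):
    """Point every live instance of old_cls at new_cls."""
    for obj in gc.get_objects():
        if type(obj) is old_cls:
            obj.__class__ = new_cls

rebind_instances(Greeter, GreeterV2)
print(g.greet())  # the pre-existing instance now resolves methods on GreeterV2
```

After the rebind, method lookups on old instances hit the new class body, which is what makes edit-and-continue style development of a running GUI possible.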
I'll see if I can figure out the minimum code that needs to be extracted so that it can be used elsewhere... I've copied Shalin Shodhan on the mail as he authored the PyCon paper. thanks, eric Randy Heiland wrote: >Yep, that was it. My note-taking had stopped at that point I guess. >Thanks John! >--Randy > > >>-----Original Message----- >>From: scipy-user-bounces at scipy.net >> >> >[mailto:scipy-user-bounces at scipy.net] > > >>On Behalf Of John Hunter >>Sent: Tuesday, September 07, 2004 6:18 AM >>To: SciPy Users List >>Cc: scipy-user at scipy.org >>Subject: Re: [SciPy-user] Disney pkg mentioned at SciPy04? >> >> >> >>>>>>>"Randy" == Randy Heiland writes: >>>>>>> >>>>>>> >> Randy> Can someone tell me the name of the Disney >> Randy> graphics/modeling (open source?) package that was mentioned >> Randy> (by Eric Jones, I think)? >> >>Panda3D >> >> >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From Fernando.Perez at colorado.edu Tue Sep 7 01:21:44 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 06 Sep 2004 23:21:44 -0600 Subject: [SciPy-user] RE: Gplt Questions In-Reply-To: <877ca8ab18.8ab18877ca@noaa.gov crmsg1.crh.noaa.gov> References: <877ca8ab18.8ab18877ca@noaa.gov crmsg1.crh.noaa.gov> Message-ID: <413D4568.20602@colorado.edu> Jeremy Martin wrote: > THanks for the reply about looking more into gnuplot for guidance but > I think I should clarify my questions a tad. > > First im writing a wxPython(v 2.4) based program on Windows XP. I > guess my first question was if it was possible to redirect to the NULL > device using the output command so that the graph will not display on > screen...and I can retrieve the image file at a later time.
To be quite honest, I'd recommend that for scientific plotting with python, you look into matplotlib. While I am a heavy user of scipy's many good features, I think plotting is not its strongest point. Matplotlib is a modern, excellent plotting library, directly usable as a WX widget, and which can produce on or off-screen plots in most formats you are likely to need. While it may still have a few rough edges, it is under active development and the remaining (very minor) issues will be quickly ironed out, I think. > Im still not sure if I understand why the point types would be > completly different on the png Image than what is displayed while the > graph is drawn. Are you saying that maybe its Windows having trouble > displaying the image? Gnuplot's rendering model is terminal-dependent, and gnuplot makes no attempt to have the plots look the same across terminals. This is explained in detail in gnuplot's documentation. Best, f From Fernando.Perez at colorado.edu Tue Sep 7 01:28:47 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 06 Sep 2004 23:28:47 -0600 Subject: [SciPy-user] Python & Scientific computing links? Message-ID: <413D470F.90300@colorado.edu> Hi all, at the excellent BOF session run by Joe Harrington at scipy'04, one of the things that came out was the need for more organized and centralized documentation about using Python for various aspects of scientific computing. An effort is under way to tackle this problem, but I'll let Joe report in full on the whole idea in due time. As my tiny contribution to this, I offered to collect a simple page of links and short summaries of any project out on the web which is reasonably connected to scientific computing. Obviously I have my own bookmarks list which I'll use as a starting point, but I'm asking here for all of you to simply mail me your favorite list of links on this topic. Feel free to add extra info which may help me, but it's OK if it's just a list of URLs. 
I'll collect them, check that at least things are active, and write a short summary for each project (probably copy/paste from their own pages). In a couple of weeks, I'll mail this page to Joe so it can go into the relevant area of the Scipy.org website, where it will most likely end up as a Wiki page. From that point on, the community can continue to improve it, correct my mistakes, and keep it updated as new projects of interest appear or old ones vanish. Best, f. From graphicsguy at cmu.edu Wed Sep 8 15:52:31 2004 From: graphicsguy at cmu.edu (Shalin Parag Shodhan) Date: Wed, 8 Sep 2004 15:52:31 -0400 (EDT) Subject: [SciPy-user] Disney pkg mentioned at SciPy04? In-Reply-To: <413E95C7.7090207@enthought.com> References: <000d01c494d4$a613a760$c686fea9@escher> <413E95C7.7090207@enthought.com> Message-ID: <59372.159.153.4.91.1094673151.squirrel@159.153.4.91> Hi, Thanks for your interest in Panda3D. The dynamic re-binding works with Emacs and I believe the folks at Disney make modifications to emacs scripts to make this happen with Python and Panda. I'm CC'ing David Rose on this email. He will be able to answer any specific questions better than I. CMU's Panda team is also CC'd in case you have more general Panda questions. Cheers! Shalin > Hey Randy, > > I saw this paper at PyCon. Here is a reference to the paper. > > *http://www.python.org/pycon/dc2004/papers/29/ > > I specifically mentioned its ability to reload python modules on the fly > and automatically update any "live" objects in the system to the new class > definitions that were just reloaded. This can streamline development of > GUI frameworks (and others) quite a bit. We haven't gotten this into > Envisage yet, but want to. > > If you search the paper referenced above for a section called "Redefining > methods" you'll find a brief description of the feature.
> > I just looked through the source code, and the magic appears to happen in > the "Finder" module here: > > \panda\direct\src\showbase\Finder.py > > It looks like "\panda\direct\src\fsm\State.py" might also play a role. > > I'll see if I can figure out the minimum code that needs to be extracted > so that it can be used elsewhere... I've copied Shalin Shodhan on the > mail as he authoered the PyCon paper. > > thanks, eric > > * > > Randy Heiland wrote: > >> Yep, that was it. My note-taking had stopped at that point I guess. >> Thanks John! --Randy >> >> >> >>> -----Original Message----- From: scipy-user-bounces at scipy.net >>> >>> >> [mailto:scipy-user-bounces at scipy.net] >> >> >>> On Behalf Of John Hunter Sent: Tuesday, September 07, 2004 6:18 AM To: >>> SciPy Users List Cc: scipy-user at scipy.org Subject: Re: [SciPy-user] >>> Disney pkg mentioned at SciPy04? >>> >>> >>> >>>>>>>> "Randy" == Randy Heiland writes: >>>>>>>> >>>>>>>> >>> Randy> Can someone tell me the name of the Disney Randy> >>> graphics/modeling (open source?) package that was mentioned Randy> (by >>> Eric Jones, I think)? >>> >>> Panda3D >>> >>> >>> >>> _______________________________________________ SciPy-user mailing >>> list SciPy-user at scipy.net >>> http://www.scipy.net/mailman/listinfo/scipy-user >>> >>> >>> >> >> >> _______________________________________________ SciPy-user mailing list >> SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user >> >> > > > From quendor at nandor.net Fri Sep 10 11:07:13 2004 From: quendor at nandor.net (quendor at nandor.net) Date: Fri, 10 Sep 2004 10:07:13 -0500 Subject: [SciPy-user] ATLAS and LPACK Message-ID: <18CB3968-033B-11D9-975B-000A95A52B4A@nandor.net> I have followed the excellent instructions on installing scipy on OS X provided by Christopher Fonnesbeck. Everything seems to work as expected. 
My only question is if I need to install ATLAS and LAPACK separately (from source) or are they already included if I have previously installed the Developer Tools? Thanks From rkern at ucsd.edu Fri Sep 10 19:25:17 2004 From: rkern at ucsd.edu (Robert Kern) Date: Fri, 10 Sep 2004 16:25:17 -0700 Subject: [SciPy-user] ATLAS and LPACK In-Reply-To: <18CB3968-033B-11D9-975B-000A95A52B4A@nandor.net> References: <18CB3968-033B-11D9-975B-000A95A52B4A@nandor.net> Message-ID: <414237DD.7030302@ucsd.edu> quendor at nandor.net wrote: > I have followed the excellent instructions on installing scipy on OS X > provided by Christopher Fonnesbeck. Everything seems to work as expected. > > My only question is if I need to install ATLAS and LAPACK separately > (from source) or are they already included if I have previously > installed the Developer Tools? You need to install separately from source. I would also mention that the BLAS portion of the vecLib framework that comes with your OS *is* ATLAS, but it will be a little incomplete compared to self-built ATLAS and LAPACK. Namely, the handful of LAPACK functions for which ATLAS includes row-major versions and optimizes are missing. Possibly the row-major version of the BLAS functions are also missing, but I haven't checked. > Thanks -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die."
I did read the well-written instructions on the scipy site, but I still have questions. I am using a new installation of Fedora core 2, and it has BLAS and LAPACK installed. However, it does not seem to have ATLAS installed, or at least neither I nor the python install script for scipy could find it. So I assume I need to install ATLAS. But I have questions. 1) Should I remove the installed versions of BLAS and LAPACK and then do a complete new install by following the directions on the scipy web site? It seems I need to either do that or do item 2 below. But if I do do a new install, I do not know what value to use for the architecture name in the following line when installing ATLAS:

Enter Architecture name (ARCH) [Linux_P4SSE2]:

I am using an AMD Athlon(tm) XP 2600+ machine. I am not using SELinux. 2) I did download the ATLAS binaries from the scipy site and could install them, leaving my current installations of LAPACK and BLAS intact. But I don't know if I should use the atlas3.6.0_Linux_ATHLON.tgz or the Linux_Athlon.tar.gz files. I have all the developer software installed on my Fedora box. Regardless, all that are in the zipped files are a few *.a files. I assume I would need to follow the directions given:

cd $ATLAS_SRC/lib/$ARCH
cp liblapack.a liblapack_orig.a   # make a backup
mkdir tmp; cd tmp
ar x ../liblapack.a
cp $LAPACK ../liblapack.a
ar r ../liblapack.a *.o
cd ..; rm -rf tmp

The first four lines of this worked fine. However, I do not know what the correct value for $LAPACK should be or how to set it. I tried:

# export LAPACK_SRC=$BUILD_DIR/src/LAPACK
# cp $LAPACK ../liblapack.a

but received:

cp: missing destination file

If anyone could shed light on how to proceed, I would appreciate it. John __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo!
Mail has the best spam protection around http://mail.yahoo.com From quendor at nandor.net Sun Sep 12 13:06:17 2004 From: quendor at nandor.net (quendor at nandor.net) Date: Sun, 12 Sep 2004 12:06:17 -0500 Subject: [SciPy-user] ATLAS and LPACK Message-ID: <0FC41801-04DE-11D9-8116-000A95A52B4A@nandor.net> Thanks Robert Kern. I have compiled and installed ATLAS from source. I am attempting to do the same with LAPACK, but am struggling a little with the make.inc file. It requires that the compiler and the compiler options be specified. I am using the Fortran compiler 'g77'. I have never used g77 and have not dug into the possible compiler options. Obviously, I would like to take advantage of available optimizations. BTW, I am using a Mac, OS X. If someone has done this before, I would appreciate any suggestions. Thanks From nwagner at mecha.uni-stuttgart.de Mon Sep 13 04:19:04 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 13 Sep 2004 10:19:04 +0200 Subject: [SciPy-user] ATLAS and LPACK In-Reply-To: <0FC41801-04DE-11D9-8116-000A95A52B4A@nandor.net> References: <0FC41801-04DE-11D9-8116-000A95A52B4A@nandor.net> Message-ID: <414557F8.3030604@mecha.uni-stuttgart.de> quendor at nandor.net wrote: > Thanks Robert Kern. I have compiled and installed ATLAS from source. I > am attempting to do the same with LAPACK, but am struggling a little > with the make.inc file. It requires that the compiler and the compiler > options be specified. I am using the Fortran compiler 'g77'. I have > never used g77 and have not dug into the possible compiler options. > Obviously, I would like to take advantage of available optimizations. > BTW, I am using a Mac, OS X. If someone has done this before, I would > appreciate any suggestions.
> > Thanks

Here comes a short example for make.inc. HTH, Nils

####################################################################
#  LAPACK make include file.                                       #
#  LAPACK, Version 3.0                                             #
#  June 30, 1999                                                   #
####################################################################
#
SHELL = /bin/sh
#
#  The machine (platform) identifier to append to the library names
#
PLAT = _LINUX
#
#  Modify the FORTRAN and OPTS definitions to refer to the
#  compiler and desired compiler options for your machine.  NOOPT
#  refers to the compiler options desired when NO OPTIMIZATION is
#  selected.  Define LOADER and LOADOPTS to refer to the loader and
#  desired load options for your machine.
#
FORTRAN  = g77
OPTS     = -funroll-all-loops -fno-f2c -O3
DRVOPTS  = $(OPTS)
NOOPT    =
LOADER   = g77
LOADOPTS =
#
#  The archiver and the flag(s) to use when building archive (library)
#  If you system has no ranlib, set RANLIB = echo.
#
ARCH     = ar
ARCHFLAGS= cr
RANLIB   = ranlib
#
#  The location of the libraries to which you will link.  (The
#  machine-specific, optimized BLAS library should be used whenever
#  possible.)
#
BLASLIB   = ../../blas$(PLAT).a
LAPACKLIB = lapack$(PLAT).a
TMGLIB    = tmglib$(PLAT).a
EIGSRCLIB = eigsrc$(PLAT).a
LINSRCLIB = linsrc$(PLAT).a

>------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From oliphant at ee.byu.edu Mon Sep 13 13:21:23 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Mon, 13 Sep 2004 11:21:23 -0600 Subject: [SciPy-user] Conference presentations Message-ID: <4145D713.4080009@ee.byu.edu> Hi all, The SciPy 2004 conference was a great success. I personally enjoyed seeing all attendees and finding out about the activity that has been occurring with Python and Science.
As promised, all of the presentations that were submitted to abstracts at scipy.org are now available on-line under the conference-schedule page. The link is http://www.scipy.org/wikis/scipy04/ConferenceSchedule If anyone who hasn't submitted their presentation would like to, you still can. As I was only able to attend the first day, I cannot comment on the entire conference. However, what I saw was very encouraging. There continues to be a great amount of work being done in using Python for Scientific Computing and the remaining problem seems to be how to get the word out and increase the user base. Many thanks are due to the presenters and the conference sponsors: *The National Biomedical Computation Resource* (NBCR, UCSD, San Diego, CA) The mission of the National Biomedical Computation Resource at the University of California San Diego and partners at The Scripps Research Institute and Washington University is to conduct, catalyze, and advance biomedical research by harnessing, developing and deploying forefront computational, information, and grid technologies. NBCR is supported by the National Institutes of Health (NIH) through a National Center for Research Resources centers grant (P 41 RR08605). *The Center for Advanced Computing Research* (CACR, CalTech, Pasadena, CA) CACR is dedicated to the pursuit of excellence in the field of high-performance computing, communication, and data engineering. Major activities include carrying out large-scale scientific and engineering applications on parallel supercomputers and coordinating collaborative research projects on high-speed network technologies, distributed computing and database methodologies, and related topics. Our goal is to help further the state of the art in scientific computing. *Enthought, Inc.* (Austin, TX) Enthought, Inc. provides business and scientific computing solutions through software development, consulting and training.
Best regards to all, -Travis Oliphant Brigham Young University 459 CB Provo, UT 84602 oliphant.travis at ieee.org From heiland at indiana.edu Mon Sep 13 13:32:25 2004 From: heiland at indiana.edu (Randy Heiland) Date: Mon, 13 Sep 2004 12:32:25 -0500 Subject: [SciPy-user] Conference presentations In-Reply-To: <4145D713.4080009@ee.byu.edu> Message-ID: <002701c499b7$a2f35f20$c686fea9@escher> Travis, For whatever reason, this page is really garbled under IE. E.g., the "News" column on the right is overlaid onto the paper links, and I don't even see the Friday papers. --Randy > -----Original Message----- > From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] > On Behalf Of Travis E. Oliphant > Sent: Monday, September 13, 2004 12:21 PM > To: SciPy Users List; python-list at python.org; numpy- > discussion at lists.sourceforge.net > Subject: [SciPy-user] Conference presentations > > > Hi all, > > The SciPy 2004 conference was a great success. I personally enjoyed > seeing all attendees and finding out about the activity that has been > occurring with Python and Science. > > As promised, all of the presentations that were submitted to > abstracts at scipy.org are now available on-line under the > conference-schedule page. > > The link is > > http://www.scipy.org/wikis/scipy04/ConferenceSchedule > > If anyone who hasn't submitted their presentation would like to, you > still can. As I was only able to attend the first day, I cannot comment > on the entire conference. However, what I saw was very encouraging. > There continues to be a great amount of work being done in using Python > for Scientific Computing and the remaining problems seems to be how to > get the word out and increase the user base.
> > Many thanks are due to the presenters and the conference sponsors: > > *The National Biomedical Computation Resource* > (NBCR, UCSD, San Diego, CA) > The mission of the National Biomedical Computation Resource at the > University of California San Diego and partners at The Scripps Research > Institute and Washington University is to conduct, catalyze, and advance > biomedical research by harnessing, developing and deploying forefront > computational, information, and grid technologies. NBCR is supported by > _National Institutes of Health (NIH) _ through a _National Center for > Research Resources_ centers grant (P 41 RR08605). > > *The Center for Advanced Computing Research* > (CACR, CalTech > , Pasadena, CA) > CACR is dedicated to the pursuit of excellence in the field of > high-performance computing, communication, and data engineering. Major > activities include carrying out large-scale scientific and engineering > applications on parallel supercomputers and coordinating collaborative > research projects on high-speed network technologies, distributed > computing and database methodologies, and related topics. Our goal is to > help further the state of the art in scientific computing. > > *Enthought, Inc.* (Austin, TX) > Enthought, Inc. provides business and scientific computing solutions > through software development, consulting and training. > > > Best regards to all, > > -Travis Oliphant > Brigham Young University > 459 CB > Provo, UT 84602 > oliphant.travis at ieee.org > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From oliphant at ee.byu.edu Mon Sep 13 13:39:04 2004 From: oliphant at ee.byu.edu (Travis E. 
Oliphant) Date: Mon, 13 Sep 2004 11:39:04 -0600 Subject: [SciPy-user] Conference presentations In-Reply-To: <002701c499b7$a2f35f20$c686fea9@escher> References: <002701c499b7$a2f35f20$c686fea9@escher> Message-ID: <4145DB38.20702@ee.byu.edu> Randy Heiland wrote: > Travis, For whatever reason, this page is really garbled under IE. > E.g., the "News" column on the right is overlayed onto the paper links, > and I don't even see the Friday papers. > > --Randy I don't know what the problem is but I suspect it is PLONE. I can only suggest using Mozilla as a viewer. If I knew how to fix it, I would. Any suggestions? -Travis From stephen.walton at csun.edu Mon Sep 13 16:34:17 2004 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon, 13 Sep 2004 13:34:17 -0700 Subject: [SciPy-user] installing Atlas on Fedora core 2, newbie In-Reply-To: <20040912080251.38494.qmail@web51708.mail.yahoo.com> References: <20040912080251.38494.qmail@web51708.mail.yahoo.com> Message-ID: <1095107657.27830.68.camel@sunspot.csun.edu> On Sun, 2004-09-12 at 01:02, J wrote: > I am using a new installation of Fedora core 2, and it > has Blas and Lapack installed. These are from the blas-3.0 and lapack-3.0 RPMs which come with FC2. They are incomplete and probably not optimized, and the only thing on FC2 which should depend on them is octave. I'd think you can safely delete them (sudo rpm -e blas lapack). > I do not know what value to use for the > architecture name in the following line when > installing atlas: > > Enter Architecture name (ARCH) [Linux_P4SSE2]: The default is in brackets, and is what you get if you just hit Enter here. It will work. > 2) I did download the atlas binaries from the scipy > site and could install them, leaving my current > installations of Lapack and Blas intact. Yes you could, and the steps you listed are not needed. Though it perhaps isn't clear, just create a subdirectory /usr/local/lib/atlas and unpack one of the precompiled ATLAS binaries into it. 
SciPy should work fine with this arrangement. -- Stephen Walton Dept. of Physics & Astronomy, Cal State Northridge From s.e.murdock at soton.ac.uk Tue Sep 14 06:33:50 2004 From: s.e.murdock at soton.ac.uk (Stuart Murdock) Date: Tue, 14 Sep 2004 10:33:50 +0000 Subject: [SciPy-user] Latex from docstrings Message-ID: <4146C90E.9030107@soton.ac.uk> Hi I have lots of Python code which is well commented (docstrings) and I want to create a reference manual for this code using LaTeX. I need to include the Python docstrings in my LaTeX document. The trouble is that most of the document generators (pydoc etc.) produce the documentation in a way which I find difficult to include in LaTeX, i.e. the documentation generators produce PDF or HTML quite easily. Are there any documentation generators out there which produce LaTeX files, or can anyone see a nice solution to my problem? The extract_doc.py package seems to be on the case but they don't seem to have implemented the LaTeX flag yet. http://www.rexx.com/~dkuhlman/extract_doc.html

-l, --latex    Generate LaTeX for the Python LaTeX documentation system. Not yet implemented.

Thanks for your help Stuart -- Stuart Murdock Ph.D, Research Fellow, Dept. of Chemistry / E-Science, University of Southampton, Highfield, Southampton, SO17 1BJ, United Kingdom http://www.biosimgrid.org From baddlci at yahoo.com Sun Sep 12 15:18:53 2004 From: baddlci at yahoo.com (Bob Minvielle) Date: Sun, 12 Sep 2004 12:18:53 -0700 (PDT) Subject: [SciPy-user] Re: matrixmultiply and pxq times qxr matrices Message-ID: <20040912191853.60770.qmail@web51306.mail.yahoo.com> Hello, I have what is probably a silly question, but I cannot find the solution... I am trying to multiply two matrices in Python using the numarray package and it is not working. To be more concise, it is not working when I do it in a script, but it does work interactively...
for example: electron:~ number9$ python Python 2.3 (#1, Sep 13 2003, 00:49:11) [GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * >>> a = ([[0.5, 0.3, 0.6, 0.4],[0.5, 0.4, 0.6, 0.5]]) >>> b = ([0.9, 0.99, 0.98, 0.98]) >>> innerproduct(a,b) array([ 1.727, 1.924]) >>> c = array([[0.5, 0.3, 0.6, 0.4],[0.5, 0.4, 0.6, 0.5]]) >>> d = array([0.9, 0.99, 0.98, 0.98]) >>> innerproduct(c,d) array([ 1.727, 1.924]) >>> print c [[ 0.5 0.3 0.6 0.4] [ 0.5 0.4 0.6 0.5]] >>> print d [ 0.9 0.99 0.98 0.98] Ok, that looks great, but when I do this: deltawtsL2 = rate * transpose(matrixmultiply(actoutsL1, tderivsBenefL2)) Where a print actoutsL1 yields: [[ 0.5 0.31002552 0.59868766 0.40131234] [ 0.5 0.40131234 0.59868766 0.5 ]] and a print tderivsBenefL2 yields: [-0.54437211 0.44655051 0.43416131 -0.55163754] I get an error: Traceback (most recent call last): File "backprop.py", line 59, in ? deltawtsL2 = rate * transpose(matrixmultiply(actoutsL1, tderivsBenefL2)) ValueError: innerproduct: last sequence dimensions must match. I have printed out the values from within the script right before I call deltawtsL2 = and the values above are what it gives me. I have (as noted above) tried this by hand and it works consistently. Is the print statement not generating what is really in the array? The arrays that I am trying to multiply come about from other functions, but the input to those functions are generated with the var = array([....]) method. Any suggestions as to where I should look now would be greatly appreciated. Thank you. ===== Bob (number9) baddlci at yahoo.com __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo!
Mail has the best spam protection around http://mail.yahoo.com From cdaniels at nandor.net Fri Sep 10 09:28:03 2004 From: cdaniels at nandor.net (C.Bryan Daniels) Date: Fri, 10 Sep 2004 08:28:03 -0500 Subject: [SciPy-user] ATLAS and LAPACK Message-ID: <3E59B12B-032D-11D9-975B-000A95A52B4A@nandor.net> I have followed the excellent instructions on installing scipy on OS X provided by Christopher Fonnesbeck. Everything seems to work as expected. My only question is if I need to install ATLAS and LAPACK separately (from source) or are they already included if I have previously installed the Developer Tools? Thanks From oliphant at enthought.com Mon Sep 13 13:10:37 2004 From: oliphant at enthought.com (Travis Oliphant) Date: Mon, 13 Sep 2004 11:10:37 -0600 Subject: [SciPy-user] Conference presentations Message-ID: <4145D48D.1000106@enthought.com> Hi all, The SciPy 2004 conference was a great success. I personally enjoyed seeing all attendees and finding out about the activity that has been occurring with Python and Science. As promised, all of the presentations that were submitted to abstracts at scipy.org are now available on-line under the conference-schedule page. The link is http://www.scipy.org/wikis/scipy04/ConferenceSchedule If anyone who hasn't submitted their presentation would like to, you still can. As I was only able to attend the first day, I cannot comment on the entire conference. However, what I saw was very encouraging. There continues to be a great amount of work being done in using Python for Scientific Computing and the remaining problem seems to be how to get the word out and increase the user base.
Many thanks are due to the presenters and the conference sponsors: *The National Biomedical Computation Resource* (NBCR, UCSD, San Diego, CA) The mission of the National Biomedical Computation Resource at the University of California San Diego and partners at The Scripps Research Institute and Washington University is to conduct, catalyze, and advance biomedical research by harnessing, developing and deploying forefront computational, information, and grid technologies. NBCR is supported by the National Institutes of Health (NIH) through a National Center for Research Resources centers grant (P 41 RR08605). *The Center for Advanced Computing Research* (CACR, Caltech, Pasadena, CA) CACR is dedicated to the pursuit of excellence in the field of high-performance computing, communication, and data engineering. Major activities include carrying out large-scale scientific and engineering applications on parallel supercomputers and coordinating collaborative research projects on high-speed network technologies, distributed computing and database methodologies, and related topics. Our goal is to help further the state of the art in scientific computing. *Enthought, Inc.* (Austin, TX) Enthought, Inc. provides business and scientific computing solutions through software development, consulting and training. Best regards to all, -Travis Oliphant Brigham Young University 459 CB Provo, UT 84602 oliphant.travis at ieee.org From dd55 at cornell.edu Wed Sep 15 16:33:48 2004 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 15 Sep 2004 16:33:48 -0400 (EDT) Subject: [SciPy-user] installing numeric, newbie In-Reply-To: <1095107657.27830.68.camel@sunspot.csun.edu> References: <20040912080251.38494.qmail@web51708.mail.yahoo.com> <1095107657.27830.68.camel@sunspot.csun.edu> Message-ID: <32799.65.110.147.198.1095280428.squirrel@65.110.147.198> Hi, I am trying to compile numeric 23.3 on a gentoo-linux machine.
I have successfully installed atlas 3.4.2, made the required edits to point setup.py to my libraries (/usr/lib/atlas, /usr/include), but am getting an error when I try to build numeric. It looks like it's not finding the libraries I just mentioned, but I am sure they are appropriately addressed. Could anyone offer a suggestion? I am running out of ideas. Thank you, Darren running install [clip] building 'lapack_lite' extension gcc -fno-strict-aliasing -DNDEBUG -fPIC -I/usr/include -IInclude -IPackages/FFT/Include -IPackages/RNG/Include -I/usr/include/python2.3 -c Src/lapack_litemodule.c -o build/temp.linux-i686-2.3/Src/lapack_litemodule.o gcc -pthread -shared build/temp.linux-i686-2.3/Src/lapack_litemodule.o -L/usr/lib/atlas -llapack -lcblas -lf77blas -latlas -o build/lib.linux-i686-2.3/lapack_lite.so /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4/../../../../i686-pc-linux-gnu/bin/ld: cannot find -llapack collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 From jdhunter at ace.bsd.uchicago.edu Wed Sep 15 21:49:35 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed, 15 Sep 2004 20:49:35 -0500 Subject: [SciPy-user] installing numeric, newbie In-Reply-To: <32799.65.110.147.198.1095280428.squirrel@65.110.147.198> ("Darren Dale"'s message of "Wed, 15 Sep 2004 16:33:48 -0400 (EDT)") References: <20040912080251.38494.qmail@web51708.mail.yahoo.com> <1095107657.27830.68.camel@sunspot.csun.edu> <32799.65.110.147.198.1095280428.squirrel@65.110.147.198> Message-ID: >>>>> "Darren" == Darren Dale writes: Darren> cannot find -llapack collect2: ld returned 1 exit status Darren> error: command 'gcc' failed with exit status 1 Where exactly on your system is liblapack.* ? You need to make sure it is in your library path (LD_LIBRARY_PATH) or specify the directory with a -L flag, or put a symlink to it in /usr/lib. Sometimes, but rarely, it can help to do a > sudo /sbin/ldconfig after installing a new library.
JDH From dd55 at cornell.edu Thu Sep 16 00:54:52 2004 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 16 Sep 2004 00:54:52 -0400 (EDT) Subject: [SciPy-user] installing numeric, newbie In-Reply-To: References: <20040912080251.38494.qmail@web51708.mail.yahoo.com> <1095107657.27830.68.camel@sunspot.csun.edu> <32799.65.110.147.198.1095280428.squirrel@65.110.147.198> Message-ID: <32900.65.110.147.198.1095310492.squirrel@65.110.147.198> >>>>>> "Darren" == Darren Dale writes: > Darren> cannot find -llapack collect2: ld returned 1 exit status > Darren> error: command 'gcc' failed with exit status 1 > > Where exactly on your system is liblapack.* ? You need to make sure > it is in your library path (LD_LIBRARY_PATH) or specify the directory > with a -L flag, or put a symlink to it in /usr/lib. Sometimes, but > rarely, it can help to do a > >> sudo /sbin/ldconfig > > after installing a new library. > liblapack.a, libcblas.a, libatlas.a and libf77blas.a live in /usr/lib. cblas.h and clapack.h are in /usr/include, and libg2c.a is in /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4 I tried "export LD_LIBRARY_PATH=/usr/lib" at the command line, and /sbin/ldconfig. The right directories are being flagged -L during the build, but the problem persists. Here is the relevant part of setup.py: library_dirs_list = ['/usr/lib','/usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4'] #fails when = ['/usr/lib'] too libraries_list = ['liblapack', 'libcblas', 'libf77blas', 'libatlas', 'libg2c'] use_dotblas = 1 include_dirs = ['/usr/include'] Slightly off topic, LD_LIBRARY_PATH did not exist, should I add it to
/etc/profile? From elidich at netvision.net.il Thu Sep 16 05:26:20 2004 From: elidich at netvision.net.il (Eli Dichterman) Date: Thu, 16 Sep 2004 11:26:20 +0200 Subject: [SciPy-user] Latex from docstrings In-Reply-To: <4146C90E.9030107@soton.ac.uk> References: <4146C90E.9030107@soton.ac.uk> Message-ID: <41495C3C.3000700@netvision.net.il> Have not tried it, but the filter suggested at http://i31www.ira.uka.de/~baas/pydoxy/ may enable the use of the excellent doxygen tool http://www.stack.nl/~dimitri/doxygen/ Eli Stuart Murdock wrote: > Hi > > I have lots of Python code which is well commented (docstrings) and I > want to create a reference manual for this code using latex. > > I need to include the Python docstrings in my latex document. The > trouble is that most of the document generators produce (pydoc etc.) > the documentation in a way which I find difficult to include in latex > i.e. the documentation generators produce pdf or html quite > easily. Are there any documentation generators out there which produce > latex files, or can anyone see a nice solution to my problem? > > The extract_doc.py package seems to be on the case but they don't seem > to have implemented the latex flag yet. > > http://www.rexx.com/~dkuhlman/extract_doc.html > > *-l, *--latex** > Generate LaTeX for the Python LaTeX documentation system. Not yet > implemented. > > > > Thanks for your help > > Stuart > From pearu at cens.ioc.ee Thu Sep 16 09:37:42 2004 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 16 Sep 2004 16:37:42 +0300 (EEST) Subject: [SciPy-user] installing numeric, newbie In-Reply-To: <32900.65.110.147.198.1095310492.squirrel@65.110.147.198> Message-ID: On Thu, 16 Sep 2004, Darren Dale wrote: > liblapack.a, libcblas.a, libatlas.a and libf77blas.a live in /usr/lib. > cblas.h and clapack.h are in /usr/include, and libg2c.a is in > /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4 > > I tried "export LD_LIBRARY_PATH=/usr/lib" at the command line, and > /sbin/ldconfig.
The right directories are being flagged -L during the > build, but the problem persists. Here is the relevant part of setup.py: > > library_dirs_list = ['/usr/lib','/usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4'] There is no need to specify /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4 when using gcc as a linker. > #fails when = ['/usr/lib'] too > libraries_list = ['liblapack', 'libcblas', 'libf77blas', 'libatlas', > 'libg2c'] That should read libraries_list = ['lapack', 'cblas', 'f77blas', 'atlas', 'g2c'] > use_dotblas = 1 > include_dirs = ['/usr/include'] > > Slightly off topic, LD_LIBRARY_PATH did not exist, should I add it to > /etc/profile? You can, but it shouldn't be necessary when linking against static libraries. Pearu From tfogal at apollo.sr.unh.edu Thu Sep 16 09:41:21 2004 From: tfogal at apollo.sr.unh.edu (tom fogal) Date: Thu, 16 Sep 2004 09:41:21 -0400 Subject: [SciPy-user] installing numeric, newbie In-Reply-To: Your message of "Thu, 16 Sep 2004 00:54:52 EDT." <32900.65.110.147.198.1095310492.squirrel@65.110.147.198> References: <20040912080251.38494.qmail@web51708.mail.yahoo.com> <1095107657.27830.68.camel@sunspot.csun.edu> <32799.65.110.147.198.1095280428.squirrel@65.110.147.198> <32900.65.110.147.198.1095310492.squirrel@65.110.147.198> Message-ID: <200409161341.i8GDfLUS009362@apollo.sr.unh.edu> Before I get into my email, I noted earlier that you were using gentoo. Why not just 'emerge -v numeric'? It should take care of dependencies automatically. You might need to do some tweaking to your USE flags to enable optional dependencies that you want. <32900.65.110.147.198.1095310492.squirrel at 65.110.147.198>"Darren Dale" writes: >>>>>>> "Darren" == Darren Dale writes: >> Darren> cannot find -llapack collect2: ld returned 1 exit status >> Darren> error: command 'gcc' failed with exit status 1 >> >> Where exactly on your system is liblapack.* ?
You need to make sure >> it is in your library path (LD_LIBRARY_PATH) or specify the directory >> with a -L flag, or put a symlink to it in /usr/lib. Sometimes, but >> rarely, it can help to do a >> >>> sudo /sbin/ldconfig >> >> after installing a new library. This must always be done when installing a new library. A lot of the time it is transparent since 'make install' will run it for you, and distributions usually run it in startup scripts. >liblapack.a, libcblas.a, libatlas.a and libf77blas.a live in /usr/lib. >cblas.h and clapack.h are in /usr/include, and libg2c.a is in >/usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.4 /usr/lib must be listed in /etc/ld.so.conf for ldconfig to pick up on it. I can't imagine it's not listed there, though... >Slightly off topic, LD_LIBRARY_PATH did not exist, should I add it to >/etc/profile? It is normal for this environment variable not to exist. I believe LD_LIBRARY_PATH is only for finding libraries at runtime, so would not affect a compile... don't quote me on that though. My only guess is that the libraries in question are not readable by the user running setup.py. Try an 'ldd /usr/lib/liblapack.a' (and the rest of the libraries), I believe it will give an error message if you don't have the permissions it needs, which happen to correspond to the permissions needed for compiling software with it. Good luck, -tom From loredo at astro.cornell.edu Thu Sep 16 16:20:13 2004 From: loredo at astro.cornell.edu (Tom Loredo) Date: Thu, 16 Sep 2004 16:20:13 -0400 (EDT) Subject: [SciPy-user] installing numeric, newbie Message-ID: <200409162020.i8GKKDe16183@laplace.astro.cornell.edu> FWIW, I had no luck installing Numeric 23.3 on my Mac OS X machine (under either Jaguar or Panther). I haven't had any problems with installing Numeric for a *long* time; something substantial must have changed in 23.3. I am currently using 23.1 which is still available from sourceforge and which installs with no fuss.
-Tom Loredo From andrew.fant at tufts.edu Thu Sep 16 16:52:43 2004 From: andrew.fant at tufts.edu (Andrew Fant) Date: Thu, 16 Sep 2004 16:52:43 -0400 Subject: [SciPy-user] installing numeric, newbie In-Reply-To: <200409161341.i8GDfLUS009362@apollo.sr.unh.edu> References: <20040912080251.38494.qmail@web51708.mail.yahoo.com> <1095107657.27830.68.camel@sunspot.csun.edu> <32799.65.110.147.198.1095280428.squirrel@65.110.147.198> <32900.65.110.147.198.1095310492.squirrel@65.110.147.198> <200409161341.i8GDfLUS009362@apollo.sr.unh.edu> Message-ID: <98080000.1095367963@flux.usg.tufts.edu> On the topic of Gentoo and SciPy I have a related question. Gentoo is in the process of virtualizing blas and lapack so that they always appear to be /usr/lib/libblas.{so|a} and /usr/lib/liblapack.{so|a}, no matter whether they are the reference implementation or the atlas optimized version. The good news is that there is an ebuild in testing for SciPy that does successfully build it on Gentoo. The problem is that it appears that setup.py in SciPy uses the library name to identify the capabilities of the library. Because the library found is libblas.so, there is no threading and no attempt to take full advantage of the atlas implementation. Is there any way to force setup.py to use the atlas interface? There are blas-config and lapack-config programs that work like gnome-config which can provide the includes and library search directories for the active version of blas and lapack, but I am not sure that my python-fu is up to patching the setup.py file myself. I'd appreciate any hints and suggestions that other people have. If there are other gentoo types reading this, it's bug #51926 in bugzilla. Thanks, Andy P.S. The Numeric and f2py ebuilds work just fine for building SciPy on Gentoo, fwiw. --On Thursday, September 16, 2004 09:41:21 -0400 tom fogal wrote: > Before I get into my email, I noted earlier that you were using gentoo. > Why not just 'emerge -v numeric'?
It should take care of dependencies > automatically. You might need to do some tweaking to your USE flags to > enable optional dependencies that you want. From nwagner at mecha.uni-stuttgart.de Sun Sep 19 04:14:00 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sun, 19 Sep 2004 10:14:00 +0200 Subject: [SciPy-user] scipy.test() failures latest cvs Message-ID: File "/usr/local/lib/python2.3/site-packages/scipy/stats/distributions.py", line 3803 mu = 1/(exp(lambda_))-1) ^ SyntaxError: invalid syntax Ran 846 tests in 2.744s FAILED (errors=91) From joe at enthought.com Mon Sep 20 20:09:23 2004 From: joe at enthought.com (Joe Cooper) Date: Mon, 20 Sep 2004 19:09:23 -0500 Subject: [SciPy-user] SciPy server upgrades Message-ID: <414F7133.4090304@enthought.com> Hi all, Just a heads up: The SciPy server will be undergoing some routine maintenance this evening. Extended downtime will be avoided if possible, but I expect the website, and possibly the whole server, to be down for a few minutes at least once or twice tonight. Thanks! From prabhu at aero.iitm.ernet.in Sun Sep 19 06:41:05 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sun, 19 Sep 2004 16:11:05 +0530 Subject: [SciPy-user] Performance Python with Weave article updated Message-ID: <16717.25153.598977.19108@monster.linux.in> Hi folks, Just to let you know that I've updated the article on weave compared with other options here: http://www.scipy.org/documentation/weave/weaveperformance.html The examples now work with weave from SciPy-0.3. I've also added a comparison with Psyco and Pyrex. The tarball has also been updated and includes all the sources along with a trivial setup.py to build the f2py module and the Pyrex module. 
cheers, prabhu From pearu at cens.ioc.ee Tue Sep 21 04:02:17 2004 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 21 Sep 2004 11:02:17 +0300 (EEST) Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <16717.25153.598977.19108@monster.linux.in> Message-ID: Hi Prabhu, On Sun, 19 Sep 2004, Prabhu Ramachandran wrote: > Just to let you know that I've updated the article on weave compared > with other options here: > > http://www.scipy.org/documentation/weave/weaveperformance.html > > The examples now work with weave from SciPy-0.3. I've also added a > comparison with Psyco and Pyrex. The tarball has also been updated > and includes all the sources along with a trivial setup.py to build > the f2py module and the Pyrex module. I was a bit surprised at the bad performance in the fortran test case. So, I took a closer look into laplace.py and found there a bug in the fortranTimeStep() method; the line u, err = flaplace.timestep(g.u, g.dx, g.dy) should read g.u, err = flaplace.timestep(g.u, g.dx, g.dy) With this corrected code the performance in the fortran case increased two times: $ python laplace.py Doing 100 iterations on a 500x500 grid numeric took 5.06 seconds blitz took 1.91 seconds inline took 0.8 seconds fastinline took 0.54 seconds fortran took 0.59 seconds pyrex took 0.44 seconds slow (1 iteration) took 2.72 seconds 100 iterations should take about 272.000000 seconds slow with Psyco (1 iteration) took 1.81 seconds 100 iterations should take about 181.000000 seconds Best, Pearu From maschke at molgen.mpg.de Tue Sep 21 09:03:31 2004 From: maschke at molgen.mpg.de (Elisabeth Maschke-Dutz) Date: Tue, 21 Sep 2004 15:03:31 +0200 Subject: [SciPy-user] using integrate.odeint and optimize.fsolve in threadsafe mode Message-ID: <415026A3.3090006@molgen.mpg.de> Hello, we are using integrate.odeint and optimize.fsolve in a product of the Zope environment enabled for multi-threading support and we have the problem that processes which use scipy at
the same time are resulting in a crash of the Zope server. We wonder if there is a way to enable the mentioned routines for use in multithreaded applications and how to do this. We would appreciate it if anyone can give us advice or refer us to some further information. Thanks in advance Elisabeth email: maschke at molgen.mpg.de -------------------------------------------------- From oliphant at ee.byu.edu Tue Sep 21 15:34:44 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Tue, 21 Sep 2004 13:34:44 -0600 Subject: [SciPy-user] using integrate.odeint and optimize.fsolve in threadsafe mode In-Reply-To: <415026A3.3090006@molgen.mpg.de> References: <415026A3.3090006@molgen.mpg.de> Message-ID: <41508254.308@ee.byu.edu> Elisabeth Maschke-Dutz wrote: > Hello, > we are using integrate.odeint and optimize.fsolve in a product of the > Zope environment enabled for > multi-threading support and we have the problem that processes which > uses scipy at the same > time are resulting in a crash of the Zope server. We wonder if there is > a way to enable the > mentioned routines for the use in multithreaded applications and how to > do this. > We would appreciate if anyone can give us an advice or refer us to some > further information. > Thanks in advance > Elisabeth If appropriate locks were put into place before the underlying fortran code were called, then those codes could be made threadsafe. I'm not exactly sure how to do that without taking the time to study threadsafe programming in more detail, but it seems possible, in principle. Perhaps someone with more threading experience could chime in?
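[Editor's note: until such locks exist inside the wrappers, one stopgap at the Python level is to serialize every call into the non-reentrant routines behind a single lock. A minimal sketch — `slow_solver` here is a hypothetical stand-in for a routine like `integrate.odeint`, not scipy's actual API:]

```python
import threading

def slow_solver(x):
    # Hypothetical stand-in for a non-reentrant routine such as
    # scipy.integrate.odeint; in real code this would be the scipy call.
    return x * 2

_solver_lock = threading.Lock()

def safe_solver(x):
    """Allow only one thread at a time into the non-reentrant code."""
    _solver_lock.acquire()
    try:
        return slow_solver(x)
    finally:
        _solver_lock.release()
```

This trades throughput for safety: threads queue up behind the lock, but the underlying Fortran code is never entered concurrently.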
-Travis From prabhu at aero.iitm.ernet.in Tue Sep 21 21:49:19 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Wed, 22 Sep 2004 07:19:19 +0530 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: References: <16717.25153.598977.19108@monster.linux.in> Message-ID: <16720.55839.373051.666677@monster.linux.in> >>>>> "PP" == Pearu Peterson writes: PP> I was a bit surprised on the bad performance in the fortran PP> test case. So, I took a closer look into laplace.py and found PP> there a bug in fortranTimeStep() method; the line PP> u, err = flaplace.timestep(g.u, g.dx, g.dy) PP> should read PP> g.u, err = flaplace.timestep(g.u, g.dx, g.dy) PP> With this corrected code the performance in the fortran case PP> increased two times: Aha! Thanks! I noticed slower times but did not investigate carefully because the code was unchanged from the last time. I also upgraded to sarge and thought it was something to do with the newer version of g77. After you pointed out the error, I initially thought the reason for the twofold improvement was a convergence related issue. However, I recall that I had specifically tailored the test case so that all of the methods do indeed run 100 iterations. So, it appears the cost is in copying and converting the Numeric array from a C-contiguous one to a Fortran-contiguous one. With the old code, this had to be done on each iteration, doubling the time. Has something changed in f2py in this regard? When I put up the article the first time, the old (buggy) code did perform well. cheers, prabhu From falted at pytables.org Wed Sep 22 08:29:46 2004 From: falted at pytables.org (Francesc Alted) Date: Wed, 22 Sep 2004 14:29:46 +0200 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <16717.25153.598977.19108@monster.linux.in> References: <16717.25153.598977.19108@monster.linux.in> Message-ID: <200409221429.46953.falted@pytables.org> Hi Prabhu, Very interesting comparison indeed.
By the way, I've managed to create a Pyrex version of your small benchmark that is a little simpler than yours (it does not need to expose the Numeric array structure) and that works for both Numeric and numarray. The times taken by this version are very similar to your version (for both numarray and Numeric). I'm attaching the code at the end of this message. Incidentally, here you have the figures for my own platform (Pentium IV @ 2GHz): $ python laplace.py Doing 100 iterations on a 500x500 grid numeric took 8.49 seconds blitz took 2.96 seconds inline took 0.9 seconds fastinline took 0.6 seconds fortran took 0.83 seconds pyrex took 0.6 seconds slow (1 iteration) took 3.13 seconds 100 iterations should take about 313.000000 seconds slow with Psyco (1 iteration) took 2.19 seconds 100 iterations should take about 219.000000 seconds Cheers, -- Francesc Alted # Pyrex sources to compute the laplacian. # # Author: Prabhu Ramachandran # Modified by F. Alted for yet another way to access Numeric/numarray objects # Some helper routines from the Python API cdef extern from "Python.h": int PyObject_AsReadBuffer(object, void **rbuf, int *len) int PyObject_AsWriteBuffer(object, void **rbuf, int *len) cdef extern from "numarray/libnumarray.h": # The numarray initialization function void import_libnumarray() # The Numeric API requires this function to be called before # using any Numeric facilities in an extension module.
import_libnumarray() cdef extern from "math.h": double sqrt(double x) def pyrexTimeStep(object u, double dx, double dy): cdef int nx, ny cdef double dx2, dy2, dnr_inv, err cdef double *elem cdef int i, j cdef double *uc, *uu, *ud, *ul, *ur cdef double diff, tmp cdef int buflen cdef void *data if u.typecode() <> "d": raise TypeError("Double array required") if len(u.shape) <> 2: raise ValueError("2 dimensional array required") nx = u.shape[0] ny = u.shape[1] dx2, dy2 = dx**2, dy**2 dnr_inv = 0.5/(dx2 + dy2) # Get the pointer to the buffer data area if hasattr(u, "__class__"): # numarray case if PyObject_AsReadBuffer(u._data, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") else: # Numeric case if PyObject_AsReadBuffer(u, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") elem = data err = 0.0 for i from 1 <= i < nx-1: uc = elem + i*ny + 1 ur = elem + i*ny + 2 ul = elem + i*ny uu = elem + (i+1)*ny + 1 ud = elem + (i-1)*ny + 1 for j from 1 <= j < ny-1: tmp = uc[0] uc[0] = ((ul[0] + ur[0])*dy2 + (uu[0] + ud[0])*dx2)*dnr_inv diff = uc[0] - tmp err = err + diff*diff uc = uc + 1; ur = ur + 1; ul = ul + 1 uu = uu + 1; ud = ud + 1 return sqrt(err) From p.berkes at biologie.hu-berlin.de Wed Sep 22 10:35:58 2004 From: p.berkes at biologie.hu-berlin.de (p.berkes at biologie.hu-berlin.de) Date: Wed, 22 Sep 2004 16:35:58 +0200 (CEST) Subject: [SciPy-user] scipy.sign Message-ID: Using the last CVS scipy_base module, I get a problem with scipy.sign after a call to scipy.alter_numeric: Python 2.3.4 (#1, Jul 30 2004, 12:17:30) [GCC 3.2 20020903 (Red Hat Linux 8.0 3.2-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__version__ '0.3.0_266.4239' >>> scipy.scipy_base_version.scipy_base_version '0.3.1_20.301' >>> scipy.sign(-1) -1 >>> scipy.alter_numeric() >>> scipy.sign(-1) 255 which is the wrong answer. Is anybody able to reproduce this bug? Pietro. 
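[Editor's note: 255 is exactly what -1 looks like when a signed byte is reinterpreted as an unsigned one (0xFF), which suggests the altered sign ufunc is storing its result in an unsigned 8-bit type. A quick stdlib illustration of that reinterpretation:]

```python
import struct

# Pack -1 as a signed 8-bit integer, then read the same byte back
# as an unsigned 8-bit integer: the bit pattern 0xFF becomes 255.
packed = struct.pack('b', -1)
(as_unsigned,) = struct.unpack('B', packed)
print(as_unsigned)  # 255
```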
From falted at pytables.org Wed Sep 22 11:22:32 2004 From: falted at pytables.org (Francesc Alted) Date: Wed, 22 Sep 2004 17:22:32 +0200 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <200409221429.46953.falted@pytables.org> References: <16717.25153.598977.19108@monster.linux.in> <200409221429.46953.falted@pytables.org> Message-ID: <200409221722.32354.falted@pytables.org> Hi again, I was thinking about why the numarray headers were needed in the code that I sent earlier, and I've ended up with another version. This one is simpler, with no need to import Numeric or numarray libraries or headers. It just recognizes the object (Numeric or numarray) and accesses its data buffer. Regarding compilation, you don't even need to have Numeric and/or numarray installed to compile the extension, so the code is very portable to any platform having Numeric and/or numarray and besides, you don't have to bother in case of an ABI change on any of these. As you can see, although I've used PyObject_AsReadBuffer() to get the buffer data area, nothing forbids me to write in this area as well (in the end, elem is a C pointer). So you can use this trick to modify data as well. Maybe this is a hack, but hey! it "just works". And more importantly, it reveals how powerful Pyrex can be by effectively mixing Python and C calls. This can ease the transition between Numeric and numarray apps a lot. In fact, it would be very easy to write apps that support both Numeric and numarray without any conflict and, as I said before, without worrying about library headers or ABI changes. Enjoy! -- Francesc Alted # Pyrex sources to compute the laplacian. # Some of this code is taken from numeric_demo.pyx # # Author: Prabhu Ramachandran # Modified by F.
Alted for yet another way to access Numeric/numarray objects # Some helper routines from the Python API cdef extern from "Python.h": int PyObject_AsReadBuffer(object, void **rbuf, int *len) int PyObject_AsWriteBuffer(object, void **rbuf, int *len) cdef extern from "math.h": double sqrt(double x) def pyrexTimeStep(object u, double dx, double dy): cdef int nx, ny cdef double dx2, dy2, dnr_inv, err cdef double *elem cdef int i, j cdef double *uc, *uu, *ud, *ul, *ur cdef double diff, tmp cdef int buflen cdef void *data if u.typecode() <> "d": raise TypeError("Double array required") if len(u.shape) <> 2: raise ValueError("2 dimensional array required") nx = u.shape[0] ny = u.shape[1] dx2, dy2 = dx**2, dy**2 dnr_inv = 0.5/(dx2 + dy2) # Get the pointer to the buffer data area if hasattr(u, "__class__"): # numarray case if PyObject_AsReadBuffer(u._data, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") else: # Numeric case if PyObject_AsReadBuffer(u, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") elem = data err = 0.0 for i from 1 <= i < nx-1: uc = elem + i*ny + 1 ur = elem + i*ny + 2 ul = elem + i*ny uu = elem + (i+1)*ny + 1 ud = elem + (i-1)*ny + 1 for j from 1 <= j < ny-1: tmp = uc[0] uc[0] = ((ul[0] + ur[0])*dy2 + (uu[0] + ud[0])*dx2)*dnr_inv diff = uc[0] - tmp err = err + diff*diff uc = uc + 1; ur = ur + 1; ul = ul + 1 uu = uu + 1; ud = ud + 1 return sqrt(err) From josegomez at gmx.net Wed Sep 22 13:06:23 2004 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Wed, 22 Sep 2004 19:06:23 +0200 (MEST) Subject: [SciPy-user] Reading laaaarge arrays Message-ID: <9739.1095872783@www28.gmx.net> Hi! I have a list of essentially (x,y) data (two columns, in glorious ASCII), that I want to read into a scipy array. The most obvious thing to do is to use io.read_array(), but the files are quite large (up to 0.5Gb), and thus, read_array is very slow. 
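[Editor's note: as a rough illustration of the brute-force alternative (not the C helper being asked about), a stdlib-only reader that splits and converts each line by hand can already beat a general-purpose parser on plain two-column data; in practice the resulting lists would then be converted to arrays:]

```python
def read_two_columns(path):
    """Read whitespace-separated (x, y) pairs from an ASCII file."""
    xs, ys = [], []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if not fields:
                continue  # skip blank lines
            xs.append(float(fields[0]))
            ys.append(float(fields[1]))
    return xs, ys
```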
I remember someone mentioning this before, and suggesting a little function written in C, and usable from within scipy to do this efficiently, but I can't find it in the archives or in my mailbox. Can anyone suggest a way to do this? So far, I've been trying to read in a 72Mb file for 17 minutes with io.read_array(), and it's still going!!! Many thanks, Jose From Fernando.Perez at colorado.edu Wed Sep 22 13:32:41 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 22 Sep 2004 11:32:41 -0600 Subject: [SciPy-user] scipy.sign In-Reply-To: References: Message-ID: <4151B739.8020605@colorado.edu> p.berkes at biologie.hu-berlin.de wrote: > Using the last CVS scipy_base module, I get a problem with scipy.sign > after a call to scipy.alter_numeric: > > Python 2.3.4 (#1, Jul 30 2004, 12:17:30) > [GCC 3.2 20020903 (Red Hat Linux 8.0 3.2-7)] on linux2 > Type "help", "copyright", "credits" or "license" for more > information. > >>> import scipy > >>> scipy.__version__ > '0.3.0_266.4239' > >>> scipy.scipy_base_version.scipy_base_version > '0.3.1_20.301' > >>> scipy.sign(-1) > -1 > >>> scipy.alter_numeric() > >>> scipy.sign(-1) > 255 > > which is the wrong answer. Is anybody able to reproduce this bug?
No problems here:

In [1]: import scipy

In [2]: scipy.sign(-1)
Out[2]: -1

In [3]: scipy.alter_numeric()

In [4]: scipy.sign(-1)
Out[4]: -1

In [5]: scipy.__version__
Out[5]: '0.3.1_281.4212'

In [6]: scipy.scipy_base_version.scipy_base_version
Out[6]: '0.3.1_20.297'

Cheers, f From prabhu at aero.iitm.ernet.in Wed Sep 22 13:41:09 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Wed, 22 Sep 2004 23:11:09 +0530 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <200409221429.46953.falted@pytables.org> References: <16717.25153.598977.19108@monster.linux.in> <200409221429.46953.falted@pytables.org> Message-ID: <16721.47413.834479.330771@monster.linux.in> Hi Francesc, >>>>> "FA" == Francesc Alted writes: FA> Hi Prabhu, Very interesting comparison indeed. Thanks. FA> By the way, I've managed to create a Pyrex version of your FA> small benchmark that is a little simpler than yours (it does FA> not need to expose the Numeric array structure) and that works FA> for both Numeric and numarray. The times taken by this version FA> are very similar to your version (for both numarray and FA> Numeric). Cool! What I am really interested in, is to see if there is a way to manipulate the arrays in a more elegant fashion. All this pointer arithmetic is fine, but ultimately you'd like something that is easy to write. I can't find anything simpler than the slower inline code. I'm sure, that with some effort one can build an extension class around the numeric array inside Pyrex to expose the array as a nice 2D Numeric-ish array so the array computations can be more elegant. It might then be a good idea to abstract that interface into a simple Pyrex library that folks can use to write simple, elegant code to write their high-performance routines with. FA> I'm attaching the code at the end of this message. Thanks!
Do you want me to include this and/or your subsequent version in the laplace.tar.gz as src/pyx_lap1.pyx with a note about it in the README? cheers, prabhu From Fernando.Perez at colorado.edu Wed Sep 22 13:44:52 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 22 Sep 2004 11:44:52 -0600 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <16721.47413.834479.330771@monster.linux.in> References: <16717.25153.598977.19108@monster.linux.in> <200409221429.46953.falted@pytables.org> <16721.47413.834479.330771@monster.linux.in> Message-ID: <4151BA14.90600@colorado.edu> Prabhu Ramachandran wrote: > What I am really interested in, is to see if there is a way to > manipulate the arrays in a more elegant fashion. All this pointer > arithmetic is fine, but ultimately you'd like something that is easy > to write. I can't find anything simpler than the slower inline code. > I'm sure, that with some effort one can build an extension class > around the numeric array inside Pyrex to expose the array as a nice 2D > Numeric-ish array so the array computations can be more elegant. > > It might then be a good idea to abstract that interface into a simple > Pyrex library that folks can use to write simple, elegant code to > write their high-performance routines with. +1000 :) This would be _really_ useful: a simple interface for common indexing of arrays with 1-4 indices (the realistic usage cases in most scientific computing). The boilerplate gets written _once_, and from then on we all benefit. 
I can't do it myself, but I'll cheer all you want :) Best, f From prabhu at aero.iitm.ernet.in Wed Sep 22 13:48:12 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Wed, 22 Sep 2004 23:18:12 +0530 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <200409221722.32354.falted@pytables.org> References: <16717.25153.598977.19108@monster.linux.in> <200409221429.46953.falted@pytables.org> <200409221722.32354.falted@pytables.org> Message-ID: <16721.47836.558705.737232@monster.linux.in> >>>>> "FA" == Francesc Alted writes: FA> Hi again, I was thinking on why the numarray headers do exist FA> on the code that I've sent earlier and I've ended with another FA> version. This is simpler, without any need to importing FA> Numeric or numarray libraries or headers. This one just FA> recognize the object (Numeric or numarray) and access to its FA> data buffer. Neat. Thanks. [...] FA> Maybe this is a hack, but hey! it "just work". And more FA> importantly, it reveals how powerful Pyrex can be by FA> effectively mixing Python and C calls. This can easy a lot FA> the transition between Numeric and numarray apps. In fact, it FA> would be very easy to write apps that support both Numeric and FA> numarray without any conflict and as I said before without FA> worrying about library headers or ABI changes. Pyrex is really cool but I still think that as far as ease of use goes, weave is hard to beat. No extension, no new syntax (apart from C/C++ of course), no need to worry about conversion of data (typecasting etc.) and whatnot. Plus weave works with C++ libraries, has been extended to work with things like wxWindows, VTK and arbitrary SWIG'd libraries. The one thing I'm not sure you can do is define a new extension type. I don't believe that there is a technical issue there. Only lack of time. This is not to take anything away from Pyrex. I'm excited about it too. Its just that there are certain areas where weave really fits well. 
cheers, prabhu From oliphant at ee.byu.edu Wed Sep 22 14:23:40 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Wed, 22 Sep 2004 12:23:40 -0600 Subject: [SciPy-user] Reading laaaarge arrays In-Reply-To: <9739.1095872783@www28.gmx.net> References: <9739.1095872783@www28.gmx.net> Message-ID: <4151C32C.5040808@ee.byu.edu> Jose Luis Gomez Dans wrote: > Hi! > I have a list of essentially (x,y) data (two columns, > in glorious ASCII), that I want to read into a scipy > array. The most obvious thing to do is to use > io.read_array(), but the files are quite large (up to > 0.5Gb), and thus, read_array is very slow. > > I remember someone mentioning this before, and > suggesting a little function written in C, and usable > from within scipy to do this efficiently, but I can't > find it in the archives or in my mailbox. > > Can anyone suggest a way to do this? So far, I've > been trying to read in a 72Mb file for 17 minutes > with io.read_array(), and it's still going!!! > I've started writing a basic version of scanf for exactly this purpose, but I'm not done yet. For now, my suggestion for such large data is to use weave and sscanf. io.read_array is quite fancy and not meant for reading simple but very, very large files. -Travis From david.grant at telus.net Wed Sep 22 19:44:46 2004 From: david.grant at telus.net (David Grant) Date: Wed, 22 Sep 2004 16:44:46 -0700 Subject: [SciPy-user] multiplying matrices Message-ID: <41520E6E.8000201@telus.net> Is there any way to multiply a whole bunch of matrices together easily? Ideally, the * operator would be for matrix multiplication as opposed to element-by-element multiplication. I've been doing this:

listofmatricies = [blah, blah, blah, ....]
return reduce(matrixmultiply, listofmatricies)

Seems to work, but it would be nice to be able to do it in one line as in matlab. -- David J. Grant
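The reduce idiom above generalizes to a one-line helper. A minimal plain-Python sketch — mmult and matmul are illustrative names, not scipy functions; with Numeric one would pass Numeric.matrixmultiply to reduce instead of the hand-rolled matmul below:

```python
from functools import reduce  # a builtin in the Python of this era

def matmul(A, B):
    # Stand-in for Numeric.matrixmultiply, operating on lists of lists:
    # zip(*B) iterates over the columns of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mmult(*matrices):
    # Multiply any number of matrices left to right in a single call.
    return reduce(matmul, matrices)

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

result = mmult(A, I, B)  # same product as matmul(A, B)
```

The call site then really is one line, matching the Matlab-style usage asked for.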
From falted at pytables.org Thu Sep 23 03:20:17 2004 From: falted at pytables.org (Francesc Alted) Date: Thu, 23 Sep 2004 09:20:17 +0200 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <16721.47836.558705.737232@monster.linux.in> References: <16717.25153.598977.19108@monster.linux.in> <200409221722.32354.falted@pytables.org> <16721.47836.558705.737232@monster.linux.in> Message-ID: <200409230920.17431.falted@pytables.org> On Wednesday 22 September 2004 19:48, Prabhu Ramachandran wrote: > This is not to take anything away from Pyrex. I'm excited about it > too. Its just that there are certain areas where weave really fits > well. I see. Well, in any case it's good to see many alternatives for creating fast code that works well with Python and C libraries. For me (and maybe for many programmers, especially people that just want to access the data and metadata of Numeric/numarray objects), Pyrex is just great. But I recognize that weave/SWIG and others have their own strong points.
[I've made extensions using only Pyrex, so my opinion is most probably quite biased here] By the way, you can do with the code I've sent whatever you want, even including it in a publicly accessible tar ball :) Cheers, -- Francesc Alted From prabhu at aero.iitm.ernet.in Thu Sep 23 04:06:37 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 23 Sep 2004 13:36:37 +0530 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <200409230920.17431.falted@pytables.org> References: <16717.25153.598977.19108@monster.linux.in> <200409221722.32354.falted@pytables.org> <16721.47836.558705.737232@monster.linux.in> <200409230920.17431.falted@pytables.org> Message-ID: <16722.33805.979827.965218@monster.linux.in> >>>>> "FA" == Francesc Alted writes: FA> A Dimecres 22 Setembre 2004 19:48, Prabhu Ramachandran va FA> escriure: >> This is not to take anything away from Pyrex. I'm excited >> about it too. Its just that there are certain areas where >> weave really fits well. FA> I see. Well, in any case it's good to see many FA> alternatives for creating fast code that works well with FA> Python and C libraries. For me (and maybe for many ^^^ Add C++ to that. AFAIK, Pyrex does not *yet* work with C++ code unless you provide a C API to the C++ library. FA> programmers, specially people that just want to access the FA> data and metadata of Numeric/numarray objects), Pyrex is just FA> great. But I recognize that weave/SWIG and others have its own FA> string points. Sure. FA> By the way, you can do with the code I've sent whatever you FA> want, even including it in a publicly accessible tar ball :) Thanks. 
cheers, prabhu From falted at pytables.org Thu Sep 23 04:23:32 2004 From: falted at pytables.org (Francesc Alted) Date: Thu, 23 Sep 2004 10:23:32 +0200 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <16722.33805.979827.965218@monster.linux.in> References: <16717.25153.598977.19108@monster.linux.in> <200409230920.17431.falted@pytables.org> <16722.33805.979827.965218@monster.linux.in> Message-ID: <200409231023.32303.falted@pytables.org> On Thursday 23 September 2004 10:06, Prabhu Ramachandran wrote: > Add C++ to that. AFAIK, Pyrex does not *yet* work with C++ code > unless you provide a C API to the C++ library. Yes, you are right. And it is not clear at all that C++ support will be implemented anytime soon. Cheers, -- Francesc Alted From nwagner at mecha.uni-stuttgart.de Thu Sep 23 05:28:43 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 23 Sep 2004 11:28:43 +0200 Subject: [SciPy-user] Usage of block matrices in scipy Message-ID: <4152974B.2000900@mecha.uni-stuttgart.de> Hi all, How can I introduce block matrices in scipy; given matrices A and B of appropriate order, I would like to build matrices of the form

C = [A B]

or

C = [A; B]

BTW, I cannot find functions for the Kronecker product and the vec operator in scipy. Is it planned to add these features? Nils From p.berkes at biologie.hu-berlin.de Thu Sep 23 08:18:26 2004 From: p.berkes at biologie.hu-berlin.de (p.berkes at biologie.hu-berlin.de) Date: Thu, 23 Sep 2004 14:18:26 +0200 (CEST) Subject: [SciPy-user] scipy casting behavior and scipy.sign error Message-ID: I noticed an inconsistent casting behavior of scipy when adding or subtracting zero-dimensional arrays. For example, the result of a byte plus a long is an integer, while a long plus a byte is a Python scalar (i.e. not an array anymore). Does this have an explanation?
>>> import scipy
>>> scipy.scipy_base_version.scipy_base_version
'0.3.1_20.301'
>>> scipy.Numeric.__version__
'23.3'
>>> b = scipy.asarray(1,"b")
>>> l = scipy.asarray(1,"l")
>>> (b+l).typecode()
'i'
>>> (l+b).typecode()
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'int' object has no attribute 'typecode'
>>> (b-l).typecode()
'i'
>>> (-l+b).typecode()
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'int' object has no attribute 'typecode'

Moreover, this behavior changes after a call to alter_numeric, causing an error in scipy.sign and possibly other functions (make sure you have the latest CVS version of scipy_base, since otherwise the alter_numeric function does not change the casting defaults of Numeric).

>>> scipy.sign(-1)
-1
>>> scipy.alter_numeric()
>>> (-l+b).typecode()
'b'
>>> scipy.sign(-1)
255

Thank you for any comment, Pietro. From nwagner at mecha.uni-stuttgart.de Thu Sep 23 08:52:10 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 23 Sep 2004 14:52:10 +0200 Subject: [SciPy-user] Solve continuous-time Lyapunov equations with scipy ? Message-ID: <4152C6FA.3040704@mecha.uni-stuttgart.de> Dear experts, Has someone written a function for the solution of continuous-time Lyapunov equations ? I am aware of the Control and Systems library SLICOT http://www.win.tue.nl/niconet/ The routine SB03MD.f is responsible for this task. Has someone written a wrapper for this Fortran routine ? Pearu, is it possible to include this library in scipy ? Any pointer would be appreciated.
Regards, Nils From aisaac at american.edu Thu Sep 23 09:11:55 2004 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 23 Sep 2004 09:11:55 -0400 (Eastern Daylight Time) Subject: [SciPy-user] Usage of block matrices in scipy In-Reply-To: <4152974B.2000900@mecha.uni-stuttgart.de> References: <4152974B.2000900@mecha.uni-stuttgart.de> Message-ID: On Thu, 23 Sep 2004, Nils Wagner apparently wrote: > How can I introduce block matrices in scipy; > given matrices A and B of appropraite order, I would like to build matrices > of the form > C = [A B] > or > C = [A; B] Do you need the blocks to be identifiable after building? If not, just use concatenate. > BTW, I cannot find functions for the Kronecker product Look in the most recent numarray for code. It will work in Numeric. > and the vec operator Try reshape. hth, Alan Isaac PS Here's a Kronecker product: http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2003-September/002138.html From nwagner at mecha.uni-stuttgart.de Thu Sep 23 09:23:00 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 23 Sep 2004 15:23:00 +0200 Subject: [SciPy-user] Usage of block matrices in scipy In-Reply-To: References: <4152974B.2000900@mecha.uni-stuttgart.de> Message-ID: <4152CE34.2000901@mecha.uni-stuttgart.de> Alan G Isaac wrote: >On Thu, 23 Sep 2004, Nils Wagner apparently wrote: > > >>How can I introduce block matrices in scipy; >>given matrices A and B of appropraite order, I would like to build matrices >>of the form >>C = [A B] >>or >>C = [A; B] >> >> > >Do you need the blocks to be identifiable after building? >If not, just use concatenate. > > > >>BTW, I cannot find functions for the Kronecker product >> >> > >Look in the most recent numarray for code. >It will work in Numeric. > > > >>and the vec operator >> >> > >Try reshape. > >hth, >Alan Isaac > >PS Here's a Kronecker product: > http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2003-September/002138.html > > > > Thank you for your reply. 
I have used r_[A, B] and c_[A,B] respectively. BTW, it would be nice to have both Kronecker product and vec-operator as s t a n d a r d functions in scipy. Maybe Pearu or Travis can manage that... Nils >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -- Dr.-Ing. Nils Wagner Institut A für Mechanik Universität Stuttgart Pfaffenwaldring 9 D-70550 Stuttgart Tel.: (+49) 0711 685 6262 Fax.: (+49) 0711 685 6282 E-mail: nwagner at mecha.uni-stuttgart.de From gpajer at rider.edu Thu Sep 23 09:28:56 2004 From: gpajer at rider.edu (Gary Pajer) Date: Thu, 23 Sep 2004 09:28:56 -0400 Subject: [SciPy-user] Usage of block matrices in scipy In-Reply-To: References: <4152974B.2000900@mecha.uni-stuttgart.de> Message-ID: <4152CF98.8070700@rider.edu> Alan G Isaac wrote: >On Thu, 23 Sep 2004, Nils Wagner apparently wrote: > > >>How can I introduce block matrices in scipy; >>given matrices A and B of appropraite order, I would like to build matrices >>of the form >>C = [A B] >>or >>C = [A; B] >> >> > >Do you need the blocks to be identifiable after building? >If not, just use concatenate. > > see also the concatenate abbreviations: c = r_[a,b] and c = c_[a,b] -g From chris at fisher.forestry.uga.edu Thu Sep 23 11:12:16 2004 From: chris at fisher.forestry.uga.edu (Christopher Fonnesbeck) Date: Thu, 23 Sep 2004 11:12:16 -0400 Subject: [SciPy-user] linalg error in cvs update Message-ID: I just updated my cvs install on a redhat machine, and got the following error from the linalg package:

  File "/usr/lib/python2.3/site-packages/PyMC/KalmanFiltering.py", line 25, in ?
    from scipy.linalg import cholesky as chol
  File "/usr/lib/python2.3/site-packages/scipy_base/ppimport.py", line 303, in __getattr__
    module = self._ppimport_importer()
  File "/usr/lib/python2.3/site-packages/scipy_base/ppimport.py", line 262, in _ppimport_importer
    raise PPImportError,\
scipy_base.ppimport.PPImportError: Traceback (most recent call last):
  File "/usr/lib/python2.3/site-packages/scipy_base/ppimport.py", line 273, in _ppimport_importer
    module = __import__(name,None,None,['*'])
  File "/usr/lib/python2.3/site-packages/scipy/linalg/__init__.py", line 8, in ?
    from basic import *
  File "/usr/lib/python2.3/site-packages/scipy/linalg/basic.py", line 222, in ?
    import decomp
  File "/usr/lib/python2.3/site-packages/scipy/linalg/decomp.py", line 15, in ?
    from blas import get_blas_funcs
  File "/usr/lib/python2.3/site-packages/scipy/linalg/blas.py", line 13, in ?
    import fblas
ImportError: /usr/lib/python2.3/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_

I have not seen this one before. Any ideas? Thanks, Chris Fonnesbeck From pearu at scipy.org Thu Sep 23 11:38:16 2004 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 23 Sep 2004 10:38:16 -0500 (CDT) Subject: [SciPy-user] linalg error in cvs update In-Reply-To: References: Message-ID: On Thu, 23 Sep 2004, Christopher Fonnesbeck wrote: > I just updated my cvs install on a redhat machine, and got the following > error from the linalg package: > > ... > File "/usr/lib/python2.3/site-packages/scipy/linalg/blas.py", line 13, in ? > import fblas > ImportError: /usr/lib/python2.3/site-packages/scipy/linalg/fblas.so: > undefined symbol: srotmg_ > > I have not seen this one before. Any ideas? Quote from INSTALL.txt:

"""
BLAS sources shipped with LAPACK are incomplete
-----------------------------------------------
Some distributions (e.g.
Redhat Linux 7.1) provide BLAS libraries that are built from such incomplete sources and therefore cause import errors like::

    ImportError: .../fblas.so: undefined symbol: srotmg_

Fix: Use ATLAS or the official release of BLAS libraries.
"""

HTH, Pearu From WILLIAM.GRIFFIN at asu.edu Thu Sep 23 11:29:35 2004 From: WILLIAM.GRIFFIN at asu.edu (William Griffin) Date: Thu, 23 Sep 2004 08:29:35 -0700 Subject: [SciPy-user] matrix construction Message-ID: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> I'm trying to build an adjacency matrix (52*52) from an imported ASCII list. I want to be able to construct and read the list as two columns [i] [j], and use a count method to obtain the tally cell values for the occurrences and include the count value in the [i,j] cell; or else, bring in the list as three columns [i][j][tallies], and then simply place the [tallies] value in the [i,j] cell. I prefer the former two-column approach because it involves one less step. I've made several attempts at generating a general algorithm, but it has not been very successful. I was wondering if anyone else has had to do this, or can point me to any code that might be adapted for this. Thanks, Bill Griffin From mike_lists at yahoo.com.au Thu Sep 23 11:39:46 2004 From: mike_lists at yahoo.com.au (Michael Sorich) Date: Fri, 24 Sep 2004 01:39:46 +1000 (EST) Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <4151BA14.90600@colorado.edu> Message-ID: <20040923153946.95190.qmail@web53608.mail.yahoo.com> My 2 cents worth. I am a biologist who learnt python to facilitate my research. I have found it so useful that I have not been able to justify the time to learn more than a smattering of C/C++. I find that NumPy is great for most cases where plain python would be too slow, but it is not suitable for all situations. Writing the occasional small extension using numeric arrays is a real pain and doesn't seem to fit with the simplicity of python in general.
One of the big reasons that I like weave is the simplicity of the blitz arrays. The inline code usually looks very similar to the python code I prototype. This allows people like me (with very limited experience in C/C++ or fortran) to quickly and simply speed up nested loops involving manipulation of arrays. My thanks to those who wrote Weave. I think pyrex is a wonderful concept. However, most of the code I want to speed up involves lookup/manipulation of arrays (in ways that NumPy is not suited to), which is still complicated in pyrex. I think it would be great if one could index numeric arrays in pyrex (or with any extension method) in the same manner as one does in python. Michael

--- Fernando Perez wrote:
> Prabhu Ramachandran wrote:
>
> > What I am really interested in, is to see if there is a way to
> > manipulate the arrays in a more elegant fashion. All this pointer
> > arithmetic is fine, but ultimately you'd like something that is easy
> > to write. I can't find anything simpler than the slower inline code.
> > I'm sure, that with some effort one can build an extension class
> > around the numeric array inside Pyrex to expose the array as a nice 2D
> > Numeric-ish array so the array computations can be more elegant.
> >
> > It might then be a good idea to abstract that interface into a simple
> > Pyrex library that folks can use to write simple, elegant code to
> > write their high-performance routines with.
>
> +1000 :)
>
> This would be _really_ useful: a simple interface for common indexing of
> arrays with 1-4 indices (the realistic usage cases in most scientific
> computing). The boilerplate gets written _once_, and from then on we all
> benefit. I can't do it myself, but I'll cheer all you want :)
>
> Best,
>
> f
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
From oliphant at ee.byu.edu Thu Sep 23 15:53:45 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Thu, 23 Sep 2004 13:53:45 -0600 Subject: [SciPy-user] Usage of block matrices in scipy In-Reply-To: <4152974B.2000900@mecha.uni-stuttgart.de> References: <4152974B.2000900@mecha.uni-stuttgart.de> Message-ID: <415329C9.70507@ee.byu.edu> Nils Wagner wrote: > Hi all, > > How can I introduce block matrices in scipy; > given matrices A and B of appropraite order, I would like to build matrices > of the form > > C = [A B] > > or > > C = [A; B] >>> info(bmat) # build matrix > > BTW, I cannot find functions for the Kronecker product and the vec > operator in scipy. Yes we would like these. I have some fortran code for computing Kronecker products, but it is not yet integrated into SciPy. I'm not as familiar with the "vec" operator. -Travis From oliphant at ee.byu.edu Thu Sep 23 15:55:44 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Thu, 23 Sep 2004 13:55:44 -0600 Subject: [SciPy-user] Solve continuous-time Lyapunov equations with scipy ? In-Reply-To: <4152C6FA.3040704@mecha.uni-stuttgart.de> References: <4152C6FA.3040704@mecha.uni-stuttgart.de> Message-ID: <41532A40.8060904@ee.byu.edu> Nils Wagner wrote: > Dear experts, > > Has someone written a function for the solution > of continuous-time Lyapunov equations ? > > I am aware of the Control and Systems library SLICOT > http://www.win.tue.nl/niconet/ > > The routine SB03MD.f is responsible for this task. > Has someone written a wrapper for this Fortran routine ? > Nils, It might be a good time for you to learn f2py. You could do this task quite easily with f2py in a couple of hours (depending on how difficult the library is). Then, when you have completed the nice Python interface, you could submit the .py file (if any), the .pyf file and the Fortran code to SciPy. Best regards, -Travis O.
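Until such a SLICOT wrapper exists, small Lyapunov equations can also be solved directly with the Kronecker-product/vec identity touched on elsewhere in this thread: vec(A X + X A^T) = (I kron A + A kron I) vec(X). The plain-Python sketch below assumes that identity; every name in it (kron, vec, solve, lyap) is an illustrative helper, not an existing scipy function, and a real implementation would call Numeric/LAPACK rather than hand-rolled elimination:

```python
def kron(A, B):
    # Kronecker product of two matrices given as lists of lists.
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def vec(X):
    # Column-stacking vec operator: vec(X)[i + j*n] = X[i][j].
    n = len(X)
    return [X[i][j] for j in range(n) for i in range(n)]

def solve(M, b):
    # Gaussian elimination with partial pivoting (enough for a sketch).
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def lyap(A, Q):
    # Solve A X + X A^T + Q = 0 via (I kron A + A kron I) vec(X) = -vec(Q).
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    IA, AI = kron(I, A), kron(A, I)
    M = [[IA[i][j] + AI[i][j] for j in range(n * n)] for i in range(n * n)]
    x = solve(M, [-q for q in vec(Q)])
    # Un-vec (column-major) back into an n x n matrix.
    return [[x[i + j * n] for j in range(n)] for i in range(n)]

# For a stable diagonal A the answer can be checked by hand:
A = [[-1.0, 0.0], [0.0, -2.0]]
Q = [[2.0, 0.0], [0.0, 4.0]]
X = lyap(A, Q)  # numerically the 2x2 identity for this A and Q
```

This n^2-by-n^2 linear system is only practical for small n; the SB03MD-style Bartels-Stewart approach is what a production routine would use.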
From oliphant at ee.byu.edu Thu Sep 23 18:52:48 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Thu, 23 Sep 2004 16:52:48 -0600 Subject: [SciPy-user] scipy casting behavior and scipy.sign error In-Reply-To: References: Message-ID: <415353C0.3020308@ee.byu.edu> p.berkes at biologie.hu-berlin.de wrote: > I noticed an inconsistent casting behavior of scipy when adding or > subtracting zero-dimensional arrays. For example, the result of a byte > plus a long is an integer, while a long plus a byte is a Python scalar > (i.e. not an array anymore). Does this have a explanation? > > >>>>import scipy >>>>scipy.scipy_base_version.scipy_base_version > > '0.3.1_20.301' > >>>>scipy.Numeric.__version__ > > '23.3' Thank you for your report. Numeric has had some of these casting issues for a long time. We can fix them with scipy.alter_numeric when we catch them if we think it is important. BTW, there are still some casting issues in alter_numeric that have not been ironed out. Thank you for your feedback. -Travis O. > >>>>b = scipy.asarray(1,"b") >>>>l = scipy.asarray(1,"l") >>>>(b+l).typecode() > > 'i' > >>>>(l+b).typecode() > > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'int' object has no attribute 'typecode' > >>>>(b-l).typecode() > > 'i' > >>>>(-l+b).typecode() > > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'int' object has no attribute 'typecode' > > Moreover, this behavior changes after a call to alter_numeric, causing an > error in scipy.sign and possibly other functions (make sure you have the > latest CVS version of scipy_base, since otherwise the alter_numeric > function does not change the casting defaults of Numeric). > > >>>>scipy.sign(-1) >>>>-1 >>>>scipy.alter_numeric() >>>>(-l+b).typecode() > > 'b' > >>>>scipy.sign(-1) >>>>255 > > > Thank you for any comment, > Pietro. 
> > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From prabhu at aero.iitm.ernet.in Thu Sep 23 22:02:37 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Fri, 24 Sep 2004 07:32:37 +0530 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <20040923153946.95190.qmail@web53608.mail.yahoo.com> References: <4151BA14.90600@colorado.edu> <20040923153946.95190.qmail@web53608.mail.yahoo.com> Message-ID: <16723.32829.842771.670537@monster.linux.in> >>>>> "MS" == Michael Sorich writes: MS> My 2 cents worth. MS> I am a biologist who learnt python to facilitate my MS> research. I have found it so useful that I have not been able MS> to justify the time to learn more than a smattering of MS> C/C++. I find that NumPy is great for most cases where plain [...] Thanks for your 2 cents. :) I guess you'd have got the prize for the quote of the week: "I am a biologist who learnt python to facilitate my research. I have found it so useful that I have not been able to justify the time to learn more than a smattering of C/C++." regards, prabhu From Bob.Cowdery at CGI-Europe.com Fri Sep 24 05:05:01 2004 From: Bob.Cowdery at CGI-Europe.com (Bob.Cowdery at CGI-Europe.com) Date: Fri, 24 Sep 2004 10:05:01 +0100 Subject: [SciPy-user] Performance Python with Weave article updated Message-ID: <9B40DB5FBF7D2048AD79C2A91F862A517730FF@STEVAPPS05.Stevenage.CGI-Europe.com> Thanks for a very interesting article. I have been trying to work out the best way to optimise some signal processing code. I am experimenting on just one function which is a demodulator and has nested loops that I don't think can be optimised out with numeric - the result, it runs very slowly in python. The attached code (yes I know it's a bit of a mess but its just for testing) has the function as plain python, using weave and using pyrex. 
Initially I thought the results were bogus because of the massive differences but I can't see any errors in my code that would account for the differences. The code does manipulate and use a lot of numeric arrays. Pyrex is only about twice as fast and Weave a staggering 200+ times faster. I would really prefer to use Pyrex as I want to precompile but would prefer not to have a separate C compilation linked to Pyrex, which is the combination I have not tried yet. Logic would tell me that that mode should be the same as Weave except any differences in the extension code generated. Here are the results (time in ms):

import test
test.pythontest()
762.090153916
test.pyrextest()
315.408801957
test.inlinetest()
file changed
5260.41745529
test.inlinetest()
3.00652736591

Regards Bob _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user *** Confidentiality Notice *** Proprietary/Confidential Information belonging to CGI Group Inc. and its affiliates may be contained in this message. If you are not a recipient indicated or intended in this message (or responsible for delivery of this message to such person), or you think for any reason that this message may have been addressed to you in error, you may not use or copy or deliver this message to anyone else. In such case, you should destroy this message and are asked to notify the sender by reply email.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.py
Type: application/octet-stream
Size: 7442 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sigblocks_ext.pyx Type: application/octet-stream Size: 3153 bytes Desc: not available URL: From Bob.Cowdery at CGI-Europe.com Fri Sep 24 09:23:39 2004 From: Bob.Cowdery at CGI-Europe.com (Bob.Cowdery at CGI-Europe.com) Date: Fri, 24 Sep 2004 14:23:39 +0100 Subject: [SciPy-user] Performance Python with Weave article updated Message-ID: <9B40DB5FBF7D2048AD79C2A91F862A51773100@STEVAPPS05.Stevenage.CGI-Europe.com> Ok I see the error of my ways, I should have read the article more closely!! I need to expose each array using ArrayType - I guess that will make all the difference to the Pyrex time... Bob ======================================== -----Original Message----- From: Cowdery, Bob [UK] Sent: 24 September 2004 10:05 To: 'SciPy Users List' Subject: RE: [SciPy-user] Performance Python with Weave article updated Thanks for a very interesting article. I have been trying to work out the best way to optimise some signal processing code. I am experimenting on just one function which is a demodulator and has nested loops that I don't think can be optimised out with numeric - the result, it runs very slowly in python. The attached code (yes I know it's a bit of a mess but its just for testing) has the function as plain python, using weave and using pyrex. Initially I thought the results were bogus because of the massive differences but I can't see any errors in my code that would account for the differences. The code does manipulate and use a lot of numeric arrays. Pyrex is only about twice as fast and Weave a staggering 200+ times faster. I would really prefer to use Pyrex as I want to precompile but would prefer not to have a separate C compilation linked to Pyrex, which is the combination I have not tried yet. Logic would tell me that that mode should be the same as Weave except any differences in the extension code generated. 
Here are the results (time in ms):

import test
test.pythontest()
762.090153916
test.pyrextest()
315.408801957
test.inlinetest()
file changed
5260.41745529
test.inlinetest()
3.00652736591

Regards Bob

_______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user

-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/ms-tnef Size: 3539 bytes Desc: not available URL: From prabhu at aero.iitm.ernet.in Fri Sep 24 10:08:24 2004 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Fri, 24 Sep 2004 19:38:24 +0530 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <9B40DB5FBF7D2048AD79C2A91F862A51773100@STEVAPPS05.Stevenage.CGI-Europe.com> References: <9B40DB5FBF7D2048AD79C2A91F862A51773100@STEVAPPS05.Stevenage.CGI-Europe.com> Message-ID: <16724.10840.376302.309612@monster.linux.in>

>>>>> "BC" == Bob Cowdery writes:

BC> Ok I see the error of my ways, I should have read the article
BC> more closely!! I need to expose each array using ArrayType -
BC> I guess that will make all the difference to the Pyrex time...

Yes, it will make all the difference. Just FYI, the latest laplace.tar.gz has Francesc's Pyrex version also. You should also consider using weave's ext_tools.
Look inside the weave examples/ directory and look at these examples:

$ grep "import ext_tool" *
fibonacci.py:import ext_tools
increment_example.py:#from weave import ext_tools
increment_example.py:import ext_tools
ramp2.py:from weave import ext_tools
vq.py: import ext_tools

All of these show you how you can build extension modules from within Python. Easiest to see is increment_example.py. You can create a module (where you want to create it) and then populate it with a bunch of functions implemented in C, build it and use it. It's just a little more code than usual. You might want to give it a shot. Also read this: http://www.scipy.org/documentation/weave/weaveusersguide.html#extension_modules cheers, prabhu From falted at pytables.org Fri Sep 24 12:41:36 2004 From: falted at pytables.org (Francesc Alted) Date: Fri, 24 Sep 2004 18:41:36 +0200 Subject: [SciPy-user] Performance Python with Weave article updated In-Reply-To: <9B40DB5FBF7D2048AD79C2A91F862A51773100@STEVAPPS05.Stevenage.CGI-Europe.com> References: <9B40DB5FBF7D2048AD79C2A91F862A51773100@STEVAPPS05.Stevenage.CGI-Europe.com> Message-ID: <200409241841.36749.falted@pytables.org>

On Friday 24 September 2004 15:23, Bob.Cowdery at CGI-Europe.com wrote:
> Ok I see the error of my ways, I should have read the article more closely!!
> I need to expose each array using ArrayType - I guess that will make all the
> difference to the Pyrex time...

In fact, as Prabhu has already said, it is not really necessary to expose the Array type in Pyrex. Anyway, I looked into your code and saw that this is the kind of code where Pyrex can really shine. So I put my hands to work, and this is what I got:

1.- I've converted all the arithmetic in which Numeric arrays were involved and substituted them with C pointers. As all the arrays involved were one-dimensional, I was able to keep the indexes (Pyrex can do that for one-dimensional pointers, not for the general multidimensional case).
For example, the lines:

for tapup from 0 <= tapup <= ln-1:
    ISum = ISum + filter_phasing[tapdwn] * I_Delay[delay_ptr]
    QSum = QSum + filter_phasing[tapup] * Q_Delay[delay_ptr]

have been replaced by:

cdef int buflen
cdef void *data
cdef double *p_I_Delay
cdef double *p_Q_Delay
cdef double *p_filter_phasing
if PyObject_AsWriteBuffer(I_Delay, &data, &buflen) <> 0:
    raise RuntimeError("Error getting the array data buffer")
p_I_Delay = data
if PyObject_AsWriteBuffer(Q_Delay, &data, &buflen) <> 0:
    raise RuntimeError("Error getting the array data buffer")
p_Q_Delay = data
if PyObject_AsWriteBuffer(filter_phasing, &data, &buflen) <> 0:
    raise RuntimeError("Error getting the array data buffer")
p_filter_phasing = data

for tapup from 0 <= tapup <= ln-1:
    ISum = ISum + p_filter_phasing[tapdwn] * p_I_Delay[delay_ptr]
    QSum = QSum + p_filter_phasing[tapup] * p_Q_Delay[delay_ptr]

and so on and so forth for the other vector operations. That first optimization gave a speed-up of 6.2x over the original Python code. You can find the complete code for this step in the attachment named src/sigblocks_ext.pyx.versio1

2.- Then, I declared the type of the variables in the main function. If you don't do that, all the variables are considered Python objects, and access to their values is very expensive. This optimization gave an additional speed-up of 3.7x, for a total speed-up of 27.7x. The code is in the attachment: sigblocks_ext.pyx.versio2

3.- I've removed all the Python calls in the loops, namely abs() and int(). abs() has been replaced by the fabs() C call, while int() has been removed completely (why convert a double to an int and then assign it back to a double? If you really want to do that, perhaps a C call to roundf would be better).
Also, I've removed the line:

if(fabs(usb) > peak): self.m_peak = fabs(usb)

and replaced it by:

cdef double m_peak
if(fabs(usb) > peak): m_peak = fabs(usb)

and return this maximum at the end:

return m_peak

This optimization gave an additional speed-up of 3.7x (yes, again), for a total speed-up of 161x. The code of this final version is in the attachment: sigblocks_ext.pyx

Well, this is not exactly the more-than-200x speed-up that you have achieved with inline weave, but very close. Perhaps the code might be optimized still further, although I believe the gains would be minimal. Well, I think I have ended up with a good exercise for my next seminar about Python and scientific computing :)

Cheers,

-- Francesc Alted

-------------- next part -------------- A non-text attachment was scrubbed... Name: sigblocks_ext.pyx Type: text/x-csrc Size: 4408 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sigblocks_ext.pyx.versio2 Type: text/x-csrc Size: 4280 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: sigblocks_ext.pyx.versio1 Type: text/x-csrc Size: 4219 bytes Desc: not available URL: From stephen.walton at csun.edu Fri Sep 24 16:24:08 2004 From: stephen.walton at csun.edu (Stephen Walton) Date: Fri, 24 Sep 2004 13:24:08 -0700 Subject: [SciPy-user] matrix construction In-Reply-To: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> Message-ID: <1096057448.3331.43.camel@sunspot.csun.edu> On Thu, 2004-09-23 at 08:29, William Griffin wrote: > I'm trying to build an adjacency matrix (52*52) Maybe this is so obvious that I'm missing what you're trying to do: from string import * from numarray import * A=zeros((52,52)) # adjacency matrix file=open('data') for line in file.readlines(): w = split(line) m = atoi(w[0]) n = atoi(w[1]) A[m, n] += 1 I've gotten in the habit of using m,n instead of i,j, by the way because of i and j being the sqrt(-1) in MATLAB and Numeric/numarray, respectively. Now, does anyone have code to draw a connectivity graph from an adjacency matrix like this? 
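On the graph-drawing question: one dependency-light route is to emit Graphviz DOT text from the adjacency matrix and feed the result to the external `dot` tool for rendering. A sketch in modern Python (the `adjacency_to_dot` helper and the tiny matrix are illustrative, not from this thread):

```python
def adjacency_to_dot(A, name="G"):
    """Emit Graphviz DOT text for a directed graph given a square
    adjacency matrix (nested lists); an edge is drawn wherever
    A[m][n] is nonzero, with the count shown as the edge label."""
    lines = ["digraph %s {" % name]
    for m, row in enumerate(A):
        for n, count in enumerate(row):
            if count:
                lines.append('    %d -> %d [label="%d"];' % (m, n, count))
    lines.append("}")
    return "\n".join(lines)

# Tiny example: node 0 -> 1 seen twice, node 1 -> 2 seen once.
A = [[0, 2, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(adjacency_to_dot(A))
```

The printed text can be piped through `dot -Tpng` to get an actual picture; nothing beyond the standard library is needed to generate it.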
From Fernando.Perez at colorado.edu Fri Sep 24 16:36:30 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 24 Sep 2004 14:36:30 -0600 Subject: [SciPy-user] matrix construction In-Reply-To: <1096057448.3331.43.camel@sunspot.csun.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> Message-ID: <4154854E.3010802@colorado.edu> Stephen Walton schrieb: > On Thu, 2004-09-23 at 08:29, William Griffin wrote: > >>I'm trying to build an adjacency matrix (52*52) > > > Maybe this is so obvious that I'm missing what you're trying to do: > > from string import * > from numarray import * > A=zeros((52,52)) # adjacency matrix > file=open('data') > for line in file.readlines(): > w = split(line) > m = atoi(w[0]) > n = atoi(w[1]) > A[m, n] += 1 Just a slightly more 'pythonic' approach, for the benefit of the OP (there's nothing wrong with your solution, this is just for educational purposes): import Numeric as N a=N.zeros((52,52)) for line in open('data'): m,n=map(int,line.split()) a[m,n] += 1 Less temporaries, these days files are their own iterators, the builtin int() type does the conversions fine, and string objects have their own split() methods. 
This has the advantage of trivially generalizing to arbitrary dimensions if we forgo explicit m,n indexing: dims = 4 size = 52 a = N.zeros((size,)*dims) for line in open('data'): a[map(int,line.split())] += 1 Best, f From golux at comcast.net Fri Sep 24 17:09:09 2004 From: golux at comcast.net (Stephen Waterbury) Date: Fri, 24 Sep 2004 17:09:09 -0400 Subject: [SciPy-user] matrix construction In-Reply-To: <4154854E.3010802@colorado.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> <4154854E.3010802@colorado.edu> Message-ID: <41548CF5.8000705@comcast.net> Fernando Perez wrote: > Just a slightly more 'pythonic' approach, for the benefit of the OP > (there's nothing wrong with your solution, this is just for educational > purposes): > > import Numeric as N > a=N.zeros((52,52)) > for line in open('data'): > m,n=map(int,line.split()) > a[m,n] += 1 > > Less temporaries, these days files are their own iterators ... ... or more au courant: for line in file('data') -- per the Python library docs: "file() is new in Python 2.2. The older built-in open() is an alias for file()." From ransom at physics.mcgill.ca Fri Sep 24 17:28:54 2004 From: ransom at physics.mcgill.ca (Scott Ransom) Date: Fri, 24 Sep 2004 17:28:54 -0400 Subject: [SciPy-user] matrix construction In-Reply-To: <1096057448.3331.43.camel@sunspot.csun.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> Message-ID: <20040924212854.GA24535@spock.physics.mcgill.ca> On Fri, Sep 24, 2004 at 01:24:08PM -0700, Stephen Walton wrote: ... > I've gotten in the habit of using m,n instead of i,j, by the way because > of i and j being the sqrt(-1) in MATLAB and Numeric/numarray, > respectively. Sorry for the off-topic nature of this post...but my solution for this was using "ii" and "jj". 
They also have the huge advantage that you can search for them and not find every i and j in every other word in the program. Now I'll let someone else answer your real question! ;-) Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From Fernando.Perez at colorado.edu Fri Sep 24 18:13:33 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 24 Sep 2004 16:13:33 -0600 Subject: [SciPy-user] matrix construction In-Reply-To: <41548CF5.8000705@comcast.net> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> <4154854E.3010802@colorado.edu> <41548CF5.8000705@comcast.net> Message-ID: <41549C0D.2060400@colorado.edu> Stephen Waterbury schrieb: > Fernando Perez wrote: >>Less temporaries, these days files are their own iterators ... > > > ... or more au courant: > > for line in file('data') > > -- per the Python library docs: > > "file() is new in Python 2.2. The older built-in open() is an alias > for file()." Well, this is getting off-topic, but I used open() deliberately. There was a recent thread on python-dev where Guido himself clarified that he intended open() to remain the 'file-open' idiom of choice, with file() being possibly reserved in the future for other things. I don't have a reference handy, but after this I've gone back to open() (after having switched to file() for a while, based on the docs you point to). Part of that discussion clarified that the docs were a bit misleading on this point, they may have been fixed for 2.4, I don't know. 
So open() it is :) Best, f From josh8912 at yahoo.com Fri Sep 24 21:12:39 2004 From: josh8912 at yahoo.com (J) Date: Fri, 24 Sep 2004 18:12:39 -0700 (PDT) Subject: [SciPy-user] matrix construction In-Reply-To: <1096057448.3331.43.camel@sunspot.csun.edu> Message-ID: <20040925011239.25209.qmail@web51709.mail.yahoo.com> Hello Stephen: I found something that might be of use. The Matlab file "FindConnectedComponents.m" listed at http://www.cs.tau.ac.il/~borens/courses/ml/code.html might be what you are looking for. It takes an adjacency matrix as input and finds the connected components. Best, John --- Stephen Walton wrote: > On Thu, 2004-09-23 at 08:29, William Griffin wrote: > > I'm trying to build an adjacency matrix (52*52) > > Maybe this is so obvious that I'm missing what > you're trying to do: > > from string import * > from numarray import * > A=zeros((52,52)) # adjacency matrix > file=open('data') > for line in file.readlines(): > w = split(line) > m = atoi(w[0]) > n = atoi(w[1]) > A[m, n] += 1 > > I've gotten in the habit of using m,n instead of > i,j, by the way because > of i and j being the sqrt(-1) in MATLAB and > Numeric/numarray, > respectively. > > Now, does anyone have code to draw a connectivity > graph from an > adjacency matrix like this? > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user >
From quendor at nandor.net Sat Sep 25 10:22:14 2004 From: quendor at nandor.net (quendor at nandor.net) Date: Sat, 25 Sep 2004 09:22:14 -0500 Subject: [SciPy-user] Most Basic of Matrix In-Reply-To: <4154854E.3010802@colorado.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> <4154854E.3010802@colorado.edu> Message-ID: <4C11C0AA-0EFE-11D9-AAFC-000A95A52B4A@nandor.net>

Forgive me in advance for what will probably be the lamest of questions. I am new to scipy. I can select an element of a matrix, but seem unable to change the element. Am I using the wrong API?

When I run the following code:

#!/usr/bin/python
from scipy import *

A = mat([[1,2],[3,4]])
print A[1,1]
A[1,1] = 100

I get the following output:

4
Traceback (most recent call last):
  File "./test.py", line 6, in ?
    A[1,1] = 100
  File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/Numeric/Matrix.py", line 183, in __setitem__
    value = value[0,0]
IndexError: invalid index

From stephen.walton at csun.edu Sat Sep 25 12:32:33 2004 From: stephen.walton at csun.edu (Stephen Walton) Date: Sat, 25 Sep 2004 09:32:33 -0700 Subject: [SciPy-user] matrix construction In-Reply-To: <20040925011239.25209.qmail@web51709.mail.yahoo.com> References: <20040925011239.25209.qmail@web51709.mail.yahoo.com> Message-ID: <1096129952.3084.3.camel@localhost.localdomain> On Fri, 2004-09-24 at 18:12, J wrote: > Hello Stephen: > I found something that might be of use. The Matlab > file "FindConnectedComponents.m" listed at > http://www.cs.tau.ac.il/~borens/courses/ml/code.html Ah, thanks, Josh. I actually wrote my own code to do this based on the same algorithm. But I was actually looking for code to make a graphic representation of the adjacency matrix.
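For reference, the connected-components step that FindConnectedComponents.m performs is short in pure Python: a breadth-first search over a symmetric adjacency matrix. A modern stdlib sketch (not the Matlab code cited above; names are illustrative):

```python
from collections import deque

def connected_components(A):
    """Return the connected components (sorted lists of node indices)
    of the undirected graph whose adjacency matrix is A, where a
    nonzero A[u][v] or A[v][u] means an edge between u and v."""
    n = len(A)
    seen = [False] * n
    components = []
    for start in range(n):
        if seen[start]:
            continue
        comp, queue = [], deque([start])
        seen[start] = True
        while queue:                      # standard BFS from 'start'
            u = queue.popleft()
            comp.append(u)
            for v in range(n):
                if (A[u][v] or A[v][u]) and not seen[v]:
                    seen[v] = True
                    queue.append(v)
        components.append(sorted(comp))
    return components

# Two disconnected edges: {0,1} and {2,3}.
A = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
print(connected_components(A))  # -> [[0, 1], [2, 3]]
```

BFS touches each cell of the matrix once per component scan, so this runs in O(n^2) for an n-node dense adjacency matrix, fine at n=52.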
From rkern at ucsd.edu Sat Sep 25 16:07:06 2004 From: rkern at ucsd.edu (Robert Kern) Date: Sat, 25 Sep 2004 13:07:06 -0700 Subject: [SciPy-user] Most Basic of Matrix In-Reply-To: <4C11C0AA-0EFE-11D9-AAFC-000A95A52B4A@nandor.net> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> <4154854E.3010802@colorado.edu> <4C11C0AA-0EFE-11D9-AAFC-000A95A52B4A@nandor.net> Message-ID: <4155CFEA.50405@ucsd.edu> quendor at nandor.net wrote: > I forgive in advance what will probably be the lamest of questions. I > am new to scipy. I can select an element of a matrix, but seem unable > to change the element. Am I using the wrong API? > > When I run the following code: > > #!/usr/bin/python > from scipy import * > > A = mat([[1,2],[3,4]]) > print A[1,1] > A[1,1] = 100 This works fine for me on OS X 10.3, Python 2.3, SciPy CVS, and Numeric 23.3. What versions of SciPy and Numeric are you using? -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pearu at cens.ioc.ee Sat Sep 25 17:00:20 2004 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 26 Sep 2004 00:00:20 +0300 (EEST) Subject: [SciPy-user] ANN: F2PY - Fortran to Python Interface Generator Message-ID: F2PY - Fortran to Python Interface Generator -------------------------------------------- I am pleased to announce the eight public release of F2PY, version 2.43.239_1806. The purpose of the F2PY project is to provide the connection between Python and Fortran programming languages. For more information, see http://cens.ioc.ee/projects/f2py2e/ Download: http://cens.ioc.ee/projects/f2py2e/2.x/F2PY-2-latest.tar.gz http://cens.ioc.ee/projects/f2py2e/2.x/F2PY-2-latest.win32.exe http://cens.ioc.ee/projects/f2py2e/2.x/scipy_distutils-latest.tar.gz http://cens.ioc.ee/projects/f2py2e/2.x/scipy_distutils-latest.win32.exe What's new? 
------------ * Added support for ``ENTRY`` statement. * New attributes: ``intent(callback)`` to support non-external Python calls from Fortran; ``intent(inplace)`` to support in-situ changes, including typecode and contiguouness changes, of array arguments. * Added support for ``ALLOCATABLE`` string arrays. * New command line switches: --compiler and --include_paths. * Numerous bugs are fixed. Support for ``PARAMETER``s has been improved considerably. * Documentation updates. Pyfort and F2PY comparison. Projects using F2PY, users feedback, etc. * Support for Numarray 1.1 (thanks to Todd Miller). * Win32 installers for F2PY and the latest scipy_distutils are provided. Enjoy, Pearu Peterson ---------------

F2PY 2.43.239_1806 - The Fortran to Python Interface Generator (25-Sep-04) From tom at kornack.com Mon Sep 27 23:58:33 2004 From: tom at kornack.com (Tom Kornack) Date: Mon, 27 Sep 2004 23:58:33 -0400 Subject: [SciPy-user] Calling C code (Newbie Question) Message-ID: Hello: I am attempting to perform a Lomb Periodogram (using the Scargle method) on a rather large data set. It's a way of getting a power spectrum for non-uniformly sampled data. I have the program written in C and it works well. I have converted the program to pure Python and it's too slow. It's still slow when I convert the for loops into various Numeric constructs. So I'm looking for a way to call a C program from python and pass big Numeric arrays or a similar format to and from it. (without, hopefully, copying them in memory). I tried weave, but I noticed that you can't weave in whole programs. I suppose you can't weave in anything with a def statement, right? I would deeply appreciate any guidance. I feel as though this must be a very common program for scientists and I did try to find explicit examples of embedded C code, but to no avail. If you want to read on, see below. Thank you very much. Tom Kornack -=- If you would like to see just what kind of computation is being done, the C and python codes can be found here: http://listera.org/pub/lomb/ The main loops look like this, where om and time are arrays: for i in range(numf): s2[i] = sum( sin(2.*om[i]*time) ) c2[i] = sum( cos(2.*om[i]*time) ) and I tried to speed things up by doing: s2 = sum( sin(2. * outerproduct(om,time) ), 1) c2 = sum( cos(2. * outerproduct(om,time) ), 1) but found that was too memory intensive and still slow. So that's why I wanted to do something purely in C. 
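For concreteness, the two loops above amount to the following computation, shown here as a stdlib-only sketch with tiny illustrative arrays (the real code uses Numeric arrays of around a million points, which is exactly why it is slow in pure Python):

```python
import math

# Toy stand-ins for the angular-frequency and sample-time arrays.
om = [0.5, 1.0, 2.0]
time = [0.0, 0.1, 0.2, 0.3]

# Eq. (6) of the Press & Rybicki formulation: sums of sin/cos of
# twice the phase, one pair of sums per trial frequency.
numf = len(om)
s2 = [0.0] * numf
c2 = [0.0] * numf
for i in range(numf):
    s2[i] = sum(math.sin(2.0 * om[i] * t) for t in time)
    c2[i] = sum(math.cos(2.0 * om[i] * t) for t in time)

print(s2, c2)
```

The cost is one sin and one cos per (frequency, sample) pair, i.e. O(numf * n0) transcendental calls, which is the part worth pushing into C.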
http://kornack.com Fundamental Symmetries Lab, Princeton University 609-716-7259 (h), 609-933-2186 (m), 609-258-0702 (w), 609-258-1625 (f) Thomas Kornack, 157 North Post Road, Princeton Junction, NJ 08550-5009 From rkern at ucsd.edu Tue Sep 28 01:09:01 2004 From: rkern at ucsd.edu (Robert Kern) Date: Mon, 27 Sep 2004 22:09:01 -0700 Subject: [SciPy-user] Calling C code (Newbie Question) In-Reply-To: References: Message-ID: <4158F1ED.7060503@ucsd.edu> Tom Kornack wrote: [snip] > If you would like to see just what kind of computation is being done, > the C and python codes can be found here: > > http://listera.org/pub/lomb/ > > The main loops look like this, where om and time are arrays: > > for i in range(numf): > s2[i] = sum( sin(2.*om[i]*time) ) > c2[i] = sum( cos(2.*om[i]*time) ) > > and I tried to speed things up by doing: > > s2 = sum( sin(2. * outerproduct(om,time) ), 1) > c2 = sum( cos(2. * outerproduct(om,time) ), 1) > > but found that was too memory intensive and still slow. So that's why I > wanted to do something purely in C. If you want to do things in C, look into the SciPy package called weave. It allows you to inline C code. If you are reasonably good at FORTRAN, I've found that writing a small FORTRAN routine and wrapping it with F2PY is often easier than using weave when dealing with Numeric arrays. Another benefit of using FORTRAN this way is that one can easily use LAPACK or BLAS subroutines. Another thing you might want to try in pure Numeric is using dot() as follows: Your code: sh = sum( cn*sin(outerproduct(om,time) ), 1) My code: sh = dot(sin(outerproduct(time, om)), cn) That is what I did when I implemented a version of the Lomb-Scargle periodogram. If you have ATLAS or some other optimized BLAS installed, you can install Numeric such that dot() uses BLAS. You may still have memory problems, though. To reduce time at the expense of memory, you can compute phase=outerproduct(time,om) once and call sin() and cos() on phase. 
cossin = power.outer(exp(1j*time), om) projection = dot(cossin, cn) ch = projection.real sh = projection.imag -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From tom at kornack.com Tue Sep 28 01:27:29 2004 From: tom at kornack.com (Tom Kornack) Date: Tue, 28 Sep 2004 01:27:29 -0400 Subject: [SciPy-user] Calling C code (Newbie Question) Message-ID: <17306757-110F-11D9-A57B-000A95DA14CE@kornack.com> Hi Robert: > Your code: > sh = sum( cn*sin(outerproduct(om,time) ), 1) > > My code: > sh = dot(sin(outerproduct(time, om)), cn) Thanks for your suggestions. Using dot() is better than sum(), however, the outerproduct() alone gives me malloc errors and I have 2 GB memory. I mean, it's a huge matrix that gets created when I have a million points. That's why I wanted to use C. The better question for this list would be: how do I Weave this in C? > for i in range(numf): > s2[i] = sum( sin(2.*om[i]*time) ) > c2[i] = sum( cos(2.*om[i]*time) ) Sorry for the rather basic question. Tom From eric at enthought.com Tue Sep 28 01:53:22 2004 From: eric at enthought.com (eric jones) Date: Tue, 28 Sep 2004 00:53:22 -0500 Subject: [SciPy-user] Calling C code (Newbie Question) In-Reply-To: <17306757-110F-11D9-A57B-000A95DA14CE@kornack.com> References: <17306757-110F-11D9-A57B-000A95DA14CE@kornack.com> Message-ID: <4158FC52.4070001@enthought.com> Hey Tom, Tom Kornack wrote: > Hi Robert: > >> Your code: >> sh = sum( cn*sin(outerproduct(om,time) ), 1) >> >> My code: >> sh = dot(sin(outerproduct(time, om)), cn) > > > Thanks for your suggestions. Using dot() is better than sum(), > however, the outerproduct() alone gives me malloc errors and I have 2 > GB memory. I mean, it's a huge matrix that gets created when I have a > million points. That's why I wanted to use C. > > The better question for this list would be: how do I Weave this in C? 
>
I inserted the following replacements into your example app:

>> for i in range(numf):
>> s2[i] = sum( sin(2.*om[i]*time) )
>> c2[i] = sum( cos(2.*om[i]*time) )

import weave
code = """
for (int i=0; i < Nom[0]; i++)
{
    s2[i] = 0.0;
    c2[i] = 0.0;
    float om_2 = 2.0*om[i];
    for (int j=0; j < Ntime[0]; j++)
    {
        float ang = om_2*time[j];
        s2[i] += sin(ang);
        c2[i] += cos(ang);
    }
}
"""
weave.inline(code, ['om','s2','c2','time'])

and a little later in your code:

#for i in range(numf):
#    sh[i] = sum( cn*sin(om[i]*time) )
#    ch[i] = sum( cn*cos(om[i]*time) )
code = """
for (int i=0; i < Nom[0]; i++)
{
    sh[i] = 0.0;
    ch[i] = 0.0;
    for (int j=0; j < Ntime[0]; j++)
    {
        float ang = om[i]*time[j];
        sh[i] += cn[j]*sin(ang);
        ch[i] += cn[j]*cos(ang);
    }
}
"""
weave.inline(code, ['om','sh','ch','time','cn'])

[Note: Check the accuracy of the calculations as this was just a quick example.]

I've attached the edited example file that I used for testing. It has two calls to your function in main() to get rid of any caching effects from the first run of a weave.inline() function. I got a speedup of about 1.75 when running your function with multiple=0. How much faster is the pure C version? I played around a little, and it looks like the sin() and cos() calls are the limiting factor here. For N=4000, the function runs in about 3.95 seconds on my machine. It is an order of magnitude faster if I comment out all the inner-loop trig calls with something like:

sh[i] += cn[j]*ang; //sin(ang);
ch[i] += cn[j]*ang; //cos(ang);

This suggests that even a pure C version using the same approach isn't going to be much faster, since it too will be limited by the speed of the sin() and cos() calls.

>
> Sorry for the rather basic question.

Not at all.

eric

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: ex.py URL: From ariciputi at pito.com Tue Sep 28 03:36:50 2004 From: ariciputi at pito.com (Andrea Riciputi) Date: Tue, 28 Sep 2004 09:36:50 +0200 Subject: [SciPy-user] Calling C code (Newbie Question) In-Reply-To: <4158FC52.4070001@enthought.com> References: <17306757-110F-11D9-A57B-000A95DA14CE@kornack.com> <4158FC52.4070001@enthought.com> Message-ID: <28E45C0C-1121-11D9-BD50-000A95C0BC68@pito.com>

Just two notes:

1) You can look at SWIG (http://www.swig.org/) as an alternative to Weave. It lets you wrap an entire C library in your Python code, not only code chunks as Weave does.

2) As stated by Eric, I think the limiting factors are the trig functions. Look at Numerical Recipes section 5.5. I've found that the algorithm suggested there to calculate the sin and cos functions can speed up loops a lot.

HTH, Andrea.

On 28 Sep 2004, at 07:53, eric jones wrote: > Hey Tom, > > Tom Kornack wrote: > >> Hi Robert: >> >>> Your code: >>> sh = sum( cn*sin(outerproduct(om,time) ), 1) >>> >>> My code: >>> sh = dot(sin(outerproduct(time, om)), cn) >> >> >> Thanks for your suggestions. Using dot() is better than sum(), >> however, the outerproduct() alone gives me malloc errors and I have 2 >> GB memory. I mean, it's a huge matrix that gets created when I have a >> million points. That's why I wanted to use C. >> >> The better question for this list would be: how do I Weave this in C?
>> > I inserted the following replacements into your example app: > >>> for i in range(numf): >>> s2[i] = sum( sin(2.*om[i]*time) ) >>> c2[i] = sum( cos(2.*om[i]*time) ) >> > > import weave > code = """ > for (int i=0; i < Nom[0]; i++) > { > s2[i]=0.0; > c2[i]=0.0; > float om_2 = 2.0*om[i]; > for (int j=0; j < Ntime[0]; j++) > { > float ang = om_2*time[j]; > s2[i] += sin(ang); > c2[i] += cos(ang); > } } > """ > weave.inline(code,['om','s2','c2','time']) > > and a little later in your code: > > #for i in range(numf): # sh[i] = sum( > cn*sin(om[i]*time) ) > # ch[i] = sum( cn*cos(om[i]*time) ) > code = """ > for (int i=0; i < Nom[0]; i++) > { > sh[i] = 0.0; > ch[i] = 0.0; > for (int j=0; j < Ntime[0]; j++) > { > float ang om[i]*time[j]; > sh[i] += cn[j]*sin(ang); > ch[i] += cn[j]*cos(ang); > } } > """ > weave.inline(code,['om','sh','ch','time','cn']) > > [Note: Check the accuracy of the calculations as this was just a quick > example.] > > I've attached the edited example file that I used for testing. It has > two calls to your function in the main() to get rid of any caching > affects from the first run or a weave.inline() function. > I got a speedup of about 1.75 when running your function with > multiple=0. How much faster is the pure C version? I played around a > little, and it looks like the sin() and cos() are the limiting factors > here. For N=4000, the function runs in about 3.95 seconds on my > machine. It is an order of magnitude faster if I comment out all the > inner loop trig calls with something like: > > sh[i] += cn[j]*ang; //sin(ang); > ch[i] += cn[j]*ang //cos(ang); > > This suggest that even the pure C version that uses the same approach > isn't going to be much faster since it also will be limited byt the > speed of the sin() and cos() calls. > >> >> Sorry for the rather basic question. > > Not at all. > > eric > > #! 
/usr/bin/env python > # -*- coding: UTF-8 -*- > > from __future__ import division > from Numeric import * > from scipy.stats import std, mean, norm > > #from Scientific.Statistics import standardDeviation > from RandomArray import normal > from MLab import mean > > def lomb(time, signal, freqin=[], fap=0.01, multiple=0, noise=0, > verbosity=2): > # > # NAME: > # lomb > # > # > # PURPOSE: > # Compute the lomb-scargle periodogram of an unevenly sampled > # lightcurve > # > # > # CATEGORY: > # time series analysis > # > # > # CALLING SEQUENCE: > # psd, freq = > scargle(time,signal,fmin=fmin,fmax=fmax,numf=numf,pmin=pmin,pmax=pmax, > # omega=omega,fap=fap,signi=signi > # > # > # INPUTS: > # time: The times at which the time series was measured > # rate: the corresponding count rates > # > # > # OPTIONAL INPUTS: > # fmin,fmax: minimum and maximum frequency (NOT ANGULAR FREQ!) > # to be used (has precedence over pmin,pmax) > # pmin,pmax: minimum and maximum PERIOD to be used > # omega: angular frequencies for which the PSD values are > # desired > # fap : false alarm probability desired > # (see Scargle et al., p. 840, and signi > # keyword). Default equal to 0.01 (99% significance) > # noise: for the normalization of the periodogram and the > # compute of the white noise simulations. If not set, equal > to > # the variance of the original lc. > # multiple: number of white noise simulations for the FAP > # power level. Default equal to 0 (i.e., no simulations). 
> # numf: number of independent frequencies > # > # > # KEYWORD PARAMETERS: > # verbosity: print out debugging information if set > # > # OUTPUTS: > # psd : the psd-values corresponding to omega > # freq : frequency of PSD > # > # > # OPTIONAL OUTPUTS: > # signi : power threshold corresponding to the given > # false alarm probabilities fap and according to the > # desired number of independent frequencies > # simsigni : power threshold corresponding to the given > # false alarm probabilities fap according to white > # noise simulations > # psdpeaksort : array with the maximum peak pro each > simulation > # > # PROCEDURE: > # The Lomb Scargle PSD is computed according to the > # definitions given by Scargle, 1982, ApJ, 263, 835, and Horne > # and Baliunas, 1986, MNRAS, 302, 757. Beware of patterns and > # clustered data points as the Horne results break down in > # this case! Read and understand the papers and this > # code before using it! For the fast algorithm read W.H. Press > # and G.B. Rybicki 1989, ApJ 338, 277. > # > # > # EXAMPLE: > # > # > # > # MODIFICATION HISTORY: > # Version 1.0, 1997, Joern Wilms IAAT > # Version 1.1, 1998.09.23, JW: Do not normalize if variance > is 0 > # (for computation of LSP of window function...) > # Version 1.2, 1999.01.07, JW: force numf to be int > # Version 1.3, 1999.08.05, JW: added omega keyword > # Version 1.4, 1999.08 > # KP: significance levels > # JW: pmin,pmax keywords > # Version 1.5, 1999.08.27, JW: compute the significance levels > # from the horne number of independent frequencies, and > not from > # numf > # Version 1.6, 2000.07.27, SS and SB: added fast algorithm > and FAP > # according to white noise lc simulations. 
> # Version 1.7, 2000.07.28 JW: added debug keyword, sped up > # simulations by factor of four (use /slow to get old > # behavior of the simulations) > # Version 2.0 2004.09.01, Thomas Kornack rewritten in Python > > if verbosity>1: print('Starting Lomb (standard)...') > > # defaults > if noise == 0: noise = sqrt(std(signal)) > > # make times manageable (Scargle periodogram is time-shift > invariant) > time = time-time[0] > > # number of independent frequencies > # (Horne and Baliunas, eq. 13) > n0 = len(time) > horne = long(-6.362+1.193*n0+0.00098*n0**2.) > if (horne < 0): horne=5 > numf = horne > > # min.freq is 1/T > fmin = 1./max(time) > > # max. freq: approx. to Nyquist frequency > fmax = n0 / (2.*max(time)) > > # if omega is not given, compute it > if (len(freqin) > 0): > om = freqin*2*pi > numf = len(om) > else: > om = 2.*pi*(fmin+(fmax-fmin)*arange(numf)/(numf-1.)) > > # False Alarm Probability according to Numf > signi = -log( 1. - ((1.-fap)**(1./horne)) ) > > if verbosity>1: print('Setting up periodogram...') > > # Periodogram > # Ref.: W.H. Press and G.B. Rybicki, 1989, ApJ 338, 277 > > # Eq. (6); s2, c2 > # Do we REALLY need these to be doubles?! > s2 = zeros(numf, typecode='f') # , typecode = 'd' > c2 = zeros(numf, typecode ='f') # , typecode = 'd' > #for i in range(numf): > # s2[i] = sum( sin(2.*om[i]*time) ) > # c2[i] = sum( cos(2.*om[i]*time) ) > > import weave > code = """ > for (int i=0; i < Nom[0]; i++) > { > s2[i]=0.0; > c2[i]=0.0; > float om_2 = 2.0*om[i]; > for (int j=0; j < Ntime[0]; j++) > { > float ang = om_2*time[j]; > s2[i] += sin(ang); > c2[i] += cos(ang); > } > } > """ > weave.inline(code,['om','s2','c2','time']) > > # Eq. (2): Definition -> tan(2omtau) > # --- tan(2omtau) = s2 / c2 > omtau = arctan(s2/c2)/2 > > # cos(tau), sin(tau) > cosomtau = cos(omtau) > sinomtau = sin(omtau) > > # Eq. 
(7); sum(cos(t-tau)**2) and sum(sin(t-tau)**2) > tmp = c2*cos(2.*omtau) + s2*sin(2.*omtau) > tc2 = 0.5*(n0+tmp) # sum(cos(t-tau)**2) > ts2 = 0.5*(n0-tmp) # sum(sin(t-tau)**2) > > # clean up > tmp = 0. > omtau= 0. > s2 = 0. > t2 = 0. > > # computing the periodogram for the original lc > > # Subtract mean from data > cn = signal - mean(signal) > > # Eq. (5); sh and ch > sh = zeros(numf, typecode='f') > ch = zeros(numf, typecode='f') > > if verbosity>1: print('Looping...') > print 'multiple:', multiple > if (multiple > 0): > sisi=zeros([n0,numf], typecode='f') > coco=zeros([n0,numf], typecode='f') > for i in range(numf): > sisi[:,i]=sin(om[i]*time) > coco[:,i]=cos(om[i]*time) > > sh[i]=sum(cn*sisi[:,i]) > ch[i]=sum(cn*coco[:,i]) > else: > #for i in range(numf): > # sh[i] = sum( cn*sin(om[i]*time) ) > # ch[i] = sum( cn*cos(om[i]*time) ) > > code = """ > for (int i=0; i < Nom[0]; i++) > { > sh[i] = 0.0; > ch[i] = 0.0; > for (int j=0; j < Ntime[0]; j++) > { > float ang = om[i]*time[j]; > sh[i] += cn[j]*sin(ang); > ch[i] += cn[j]*cos(ang); > } > } > """ > weave.inline(code,['om','sh','ch','time','cn']) > > # Eq. (3) > px = (ch*cosomtau + sh*sinomtau)**2 / tc2 + (sh*cosomtau - > ch*sinomtau)**2 / ts2 > > # correct normalization > psd = 0.5*px/(noise**2) > > if verbosity>1: print('Running Simulations...') > # --- RUN SIMULATIONS for multiple > 0 > simsigni=[] > psdpeaksort=[] > if multiple > 0: > if (multiple*min(fap) < 10): > print('WARNING: Number of iterations (multiple keyword) > not large enough for false alarm probability requested (need > multiple*FAP > 10 )') > > psdpeak = zeros(multiple, typecode='f') > for m in range(multiple): > if ((m+1)%100 == 0) and verbosity>0: > print('...working on %ith simulation. (%3i%% Done)' % > (m, 100.*m/multiple)) > > # white noise simulation > cn = normal(0,noise,n0) > cn = cn-mean(cn) # force OBSERVED count rate to zero > > # Eq. (5); sh and ch > for i in range(numf): > sh[i]=sum(cn*sisi[:,i]) > ch[i]=sum(cn*coco[:,i]) > > # Eq.
(3); computing the periodogram for each simulation > psdpeak[m] = max ( (ch*cosomtau + sh*sinomtau)**2 / tc2 + > (sh*cosomtau - ch*sinomtau)**2 / ts2 ) > > # False Alarm Probability according to simulations > if len(psdpeak) != 0: > idx = argsort(psdpeak) # indices that sort psdpeak, not the sorted values > # correct normalization > psdpeaksort = 0.5 * psdpeak[idx]/(noise**2) > simsigni = psdpeaksort[int((1-fap)*(multiple-1))] > > freq = om/(2.*pi) > > if verbosity>1: print('Done...') > > return (psd, freq, signi, simsigni, psdpeaksort) > > > if __name__ == '__main__': > print('Testing Lomb-Scargle Periodogram with white noise...') > freq = 10. # Hz - Sample frequency > time = 400. #seconds > noisedata = normal(0,5,int(round(freq*time))) > noisetime = arange(0,time,1/freq) > var = { 'x': noisetime, 'y': noisedata, 'ylabel': 'Amplitude', > 'xlabel':'Time (s)' } > > #from CPTAnalyze import findAverageSampleTime > N=4000 > dt = 1.0 #findAverageSampleTime(var,0) > maxlogx = log(1/(2*dt)) # max frequency is the sampling rate > minlogx = log(1/(max(var['x'])-min(var['x']))) #min frequency is > 1/T > frequencies = exp(arange(N, typecode = > Float)/(N-1.)*(maxlogx-minlogx)+minlogx) > psd, freqs, signi, simsigni, psdpeaks = lomb(var['x'], > var['y'],freqin=frequencies) > #lomb(var['x'], var['y'],freqin=frequencies) > import time > t1 = time.clock() > psd, freqs, signi, simsigni, psdpeaks = lomb(var['x'], > var['y'],freqin=frequencies) > #lomb(var['x'], var['y'],freqin=frequencies) > t2 = time.clock() > print t2 - t1 > print freqs[:5], freqs[-5:] > print psd[:5], psd[-5:] > > """ > import Gnuplot > plotobj = Gnuplot.Gnuplot() > plotobj.title('Testing Lomb-Scargle Periodogram') > plotobj.xlabel('Frequency (Hz)') > plotobj.ylabel('Power Spectral Density (arb/Hz^1/2)') > plotobj('set logscale xy') > plotobj.plot(Gnuplot.Data(freqs,psd, with = 'l 4 0')) > """_______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From hoel at gl-group.com Tue
Sep 28 04:52:00 2004 From: hoel at gl-group.com (=?iso-8859-15?Q?Berthold_H=F6llmann?=) Date: Tue, 28 Sep 2004 10:52:00 +0200 Subject: [SciPy-user] Include path when compiling Fortran code Message-ID: Hello, I opened issue 188 on the scipy issue tracker, but would like to ask here if anyone has a workaround for my problem. We have a quite complex project with some Fortran modules in different subdirectories. Trying to include a file fails when compiling a library with the INTEL Fortran compiler. No include paths are added to the command line. The Issue has a small tar file attached to provide a testcase. Kind regards Berthold Höllmann -- Germanischer Lloyd AG CAE Development Vorsetzen 35 20459 Hamburg Phone: +49(0)40 36149-7374 Fax: +49(0)40 36149-7320 e-mail: hoel at gl-group.com Internet: http://www.gl-group.com This e-mail contains confidential information for the exclusive attention of the intended addressee. Any access of third parties to this e-mail is unauthorised. Any use of this e-mail by unintended recipients such as copying, distribution, disclosure etc. is prohibited and may be unlawful. When addressed to our clients the content of this e-mail is subject to the General Terms and Conditions of GL's Group of Companies applicable at the date of this e-mail. GL's Group of Companies does not warrant and/or guarantee that this message at the moment of receipt is authentic, correct and its communication free of errors, interruption etc. From falted at pytables.org Tue Sep 28 03:32:15 2004 From: falted at pytables.org (Francesc Alted) Date: Tue, 28 Sep 2004 09:32:15 +0200 Subject: [SciPy-user] Calling C code (Newbie Question) In-Reply-To: References: Message-ID: <200409280932.15720.falted@pytables.org> A Dimarts 28 Setembre 2004 05:58, Tom Kornack va escriure: > I feel as > though this must be a very common program for scientists and I did try > to find explicit examples of embedded C code, but to no avail.
You may find interesting this excellent comparison of different methods to embed C code into your python program in: http://www.scipy.org/documentation/weave/weaveperformance.html Cheers, -- Francesc Alted From aisaac at american.edu Mon Sep 27 14:25:46 2004 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 27 Sep 2004 14:25:46 -0400 (Eastern Daylight Time) Subject: [SciPy-user] matrix construction In-Reply-To: <4154854E.3010802@colorado.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu><1096057448.3331.43.camel@sunspot.csun.edu><4154854E.3010802@colorado.edu> Message-ID: On Fri, 24 Sep 2004, Fernando Perez apparently wrote: > dims = 4 > size = 52 > a = N.zeros((size,)*dims) > for line in open('data'): > a[map(int,line.split())] += 1 This list seems to accept that nonprogrammers are using SciPy, so hopefully my question is not too OT: Is the fate of the implicit file descriptor determinate (e.g., because it was not explicitly bound)? This example was quite useful, btw. Thank you, Alan Isaac From rkern at ucsd.edu Tue Sep 28 12:54:03 2004 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 28 Sep 2004 09:54:03 -0700 Subject: [SciPy-user] matrix construction In-Reply-To: References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu><1096057448.3331.43.camel@sunspot.csun.edu><4154854E.3010802@colorado.edu> Message-ID: <4159972B.8080202@ucsd.edu> Alan G Isaac wrote: > On Fri, 24 Sep 2004, Fernando Perez apparently wrote: > >>dims = 4 >>size = 52 > > >>a = N.zeros((size,)*dims) >>for line in open('data'): >> a[map(int,line.split())] += 1 > > > > This list seems to accept that nonprogrammers > are using SciPy, so hopefully my question > is not too OT: > > Is the fate of the implicit file descriptor > determinate (e.g., because it was not explicitly > bound)? IIRC, the file object has only one reference while the loop is running since it was not assigned a name. 
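That single live reference can be observed directly in CPython with sys.getrefcount (an illustrative sketch; the exact count is a CPython implementation detail):

```python
import sys

x = object()            # exactly one live reference: the name "x"
# getrefcount reports one more than the number of live references,
# because passing x as the argument adds a temporary one.
print(sys.getrefcount(x))   # typically 2: the binding plus the argument
```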
Once the loop is over, the reference count drops to 0 and the object ought to be garbage collected at which time it ought to be closed. The latter is not guaranteed, however, so it is recommended practice to assign the object to a name and explicitly close the object after you are done with it. On the other hand, Python usually works as I described it, so I do things like the example when I'm in the interactive interpreter and take the extra precaution in actual code. > This example was quite useful, btw. > > Thank you, > Alan Isaac -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Tue Sep 28 13:47:58 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 28 Sep 2004 11:47:58 -0600 Subject: [SciPy-user] matrix construction In-Reply-To: <4159972B.8080202@ucsd.edu> References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu><1096057448.3331.43.camel@sunspot.csun.edu><4154854E.3010802@colorado.edu> <4159972B.8080202@ucsd.edu> Message-ID: <4159A3CE.8050202@colorado.edu> Robert Kern schrieb: > Alan G Isaac wrote: > >>On Fri, 24 Sep 2004, Fernando Perez apparently wrote: >> >> >>>dims = 4 >>>size = 52 >> >> >>>a = N.zeros((size,)*dims) >>>for line in open('data'): >>> a[map(int,line.split())] += 1 >> >> >> >>This list seems to accept that nonprogrammers >>are using SciPy, so hopefully my question >>is not too OT: >> >>Is the fate of the implicit file descriptor >>determinate (e.g., because it was not explicitly >>bound)? > > > IIRC, the file object has only one reference while the loop is running > since it was not assigned a name. Once the loop is over, the reference > count drops to 0 and the object ought to be garbage collected at which > time it ought to be closed. 
The latter is not guaranteed, however, so it > is recommended practice to assign the object to a name and explicitly > close the object after you are done with it. On the other hand, Python > usually works as I described it, so I do things like the example when > I'm in the interactive interpreter and take the extra precaution in > actual code. This is totally correct. In general, if I'm only reading data I even do things like the above in real code, since there is no problem with a dangling file descriptor (that I can think of). But if you are _writing_ to a file, then you should definitely always call the close() method manually (which requires binding it to a name) instead of relying on it being closed when going out of scope. Relying on such behavior can lead to data loss, since the closing is _not_ guaranteed to happen (I think jython and CPython differ in this respect, for example). Cheers, f From oliphant at ee.byu.edu Tue Sep 28 14:12:48 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 28 Sep 2004 12:12:48 -0600 Subject: [SciPy-user] Re: [SciPy-dev] MCMC, Kalman Filtering, AI for SciPy? In-Reply-To: <4159A668.6080001@colorado.edu> References: <4156440A.1040204@ee.byu.edu> <41566CBD.70005@enthought.com> <415855AD.4060701@sdl.usu.edu> <4158A23F.6080207@sdl.usu.edu> <1A1C914C-1177-11D9-8495-000A95B68E50@stsci.edu> <4159A668.6080001@colorado.edu> Message-ID: <4159A9A0.8070804@ee.byu.edu> > I don't think the library names should be mercilessly duplicated > everywhere, but something like this can really help in a few places. > Obviously, a good, searchable, cross-referenced documentation is a key > part of the solution to this problem, as you mention. As far as searching goes, scipy.info started something like this a while back. The functionality was lost a bit when the delayed import mechanism was used (but it could be modified to work).
The idea was that scipy.info() would start looking for in all the documentation it could find and then would print what it found. It still works on things that are in the namespace of scipy but not subpackages. try scipy.info("fft") for example It is rudimentary, but shows an idea that could be pursued. We should also think about a help-browser system that could pop up (in a different process I think), when the user enters a help command. This is what maple and matlab do effectively. -Travis From Fernando.Perez at colorado.edu Tue Sep 28 14:49:30 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 28 Sep 2004 12:49:30 -0600 Subject: [SciPy-user] Re: [SciPy-dev] MCMC, Kalman Filtering, AI for SciPy? In-Reply-To: <4159A9A0.8070804@ee.byu.edu> References: <4156440A.1040204@ee.byu.edu> <41566CBD.70005@enthought.com> <415855AD.4060701@sdl.usu.edu> <4158A23F.6080207@sdl.usu.edu> <1A1C914C-1177-11D9-8495-000A95B68E50@stsci.edu> <4159A668.6080001@colorado.edu> <4159A9A0.8070804@ee.byu.edu> Message-ID: <4159B23A.4090909@colorado.edu> Travis Oliphant schrieb: >>I don't think the library names should be mercilessly duplicated >>everywhere, but something like this can really help in a few places. >>Obviously, a good, searchable, cross-referenced documentation is a key >>part of the soultion to this problem, as you mention. > > > As far as searching goes. scipy.info started something like this a > while back. The functionality was lost a bit when the delayed import > mechanism was used (but it could be modified to work). > > The idea was that scipy.info() would start looking for > in all the documentation it could find and then would print what it > found. It still works on things that are in the namespace of scipy but > not subpackages. > > try > > scipy.info("fft") > > for example > > It is rudimentary, but shows an idea that could be pursued. Very nice, and I think a useful tool to have. 
Note however that somehow scipy messes up the plain python help system: planck[python]> python Python 2.3.3 (#1, May 7 2004, 10:31:40) [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> help('for') Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/site.py", line 309, in __call__ return pydoc.help(*args, **kwds) File "scipy_base/ppimport.py", line 381, in _scipy_pydoc_help_call File "scipy_base/ppimport.py", line 366, in _ppresolve_ignore_failure File "scipy_base/ppimport.py", line 335, in ppresolve File "scipy_base/ppimport.py", line 340, in ppresolve File "scipy_base/ppimport.py", line 202, in ppimport File "scipy_base/ppimport.py", line 273, in _ppimport_importer ImportError: No module named for Typing help('for') in a session where scipy has NOT been imported works correctly. I think the info() functionality is great, but we should be careful not to break the default help() which normal python users may be used to already. > We should also think about a help-browser system that could pop up (in a > different process I think), when the user enters a help command. > > This is what maple and matlab do effectively. Well, I keep hearing about .chm files, so I went ahead and did a bit of testing on this front. It turns out that in my yum config, gnochm is available as a one-step install for a CHM gnome-based reader for linux. I then googled for a second and found this: http://home.comcast.net/~tim.one/ where a Python23.chm file is available with all the docs for python 2.3. I opened that with gnochm and it worked really very well, with searching and good navigation. 
So Linux users have these easy options for viewing this format: http://gnochm.sourceforge.net/ (happens to be written in python :) http://xchm.sourceforge.net/ konqueror: yes, in KDE (works in 3.3), konqueror can read .chm files directly, though it doesn't support all the fancier search features of the others. For Mac OSX, xchm apparently works, though it seems not to be finked yet. I'm not exactly known for being a microsoft fan, but this is a case where I think using the CHM format is not a bad idea: enthougt already ships windows installers with chm files, it's actually a really convenient format for organizing documentation, and there's a good solution for every platform. It would just be a matter of making the .chm bundle from the enthought edition available as a standalone package for all platforms, and I think the 'graphical doc browser' problem is taken care of. I hope it's clear that this is in _addition_ to the scipy.info() functionality, which I think is very useful on its own. Just my $.02 Cheers, f From oliphant at ee.byu.edu Tue Sep 28 15:01:14 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 28 Sep 2004 13:01:14 -0600 Subject: [SciPy-user] Re: [SciPy-dev] MCMC, Kalman Filtering, AI for SciPy? In-Reply-To: <4159B23A.4090909@colorado.edu> References: <4156440A.1040204@ee.byu.edu> <41566CBD.70005@enthought.com> <415855AD.4060701@sdl.usu.edu> <4158A23F.6080207@sdl.usu.edu> <1A1C914C-1177-11D9-8495-000A95B68E50@stsci.edu> <4159A668.6080001@colorado.edu> <4159A9A0.8070804@ee.byu.edu> <4159B23A.4090909@colorado.edu> Message-ID: <4159B4FA.6020900@ee.byu.edu> Fernando Perez wrote: > Travis Oliphant schrieb: > >>> I don't think the library names should be mercilessly duplicated >>> everywhere, but something like this can really help in a few >>> places. Obviously, a good, searchable, cross-referenced >>> documentation is a key part of the soultion to this problem, as you >>> mention. >> >> >> >> As far as searching goes. 
scipy.info started something like this a >> while back. The functionality was lost a bit when the delayed import >> mechanism was used (but it could be modified to work). >> The idea was that scipy.info() would start looking for >> in all the documentation it could find and then would print >> what it found. It still works on things that are in the namespace >> of scipy but not subpackages. >> >> try >> >> scipy.info("fft") >> >> for example >> >> It is rudimentary, but shows an idea that could be pursued. > > > Very nice, and I think a useful tool to have. Note however that > somehow scipy messes up the plain python help system: > > planck[python]> python > Python 2.3.3 (#1, May 7 2004, 10:31:40) > [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> help('for') > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib/python2.3/site.py", line 309, in __call__ > return pydoc.help(*args, **kwds) > File "scipy_base/ppimport.py", line 381, in _scipy_pydoc_help_call > File "scipy_base/ppimport.py", line 366, in _ppresolve_ignore_failure > File "scipy_base/ppimport.py", line 335, in ppresolve > File "scipy_base/ppimport.py", line 340, in ppresolve > File "scipy_base/ppimport.py", line 202, in ppimport > File "scipy_base/ppimport.py", line 273, in _ppimport_importer > ImportError: No module named for > This is the delayed import mechanism again (not info). The fancyness hurts some tools that rely on introspection. I'm not sure if the problem can be resolved as I very much want the delayed import functionality. > > Well, I keep hearing about .chm files, so I went ahead and did a bit > of testing on this front. It turns out that in my yum config, gnochm > is available as a one-step install for a CHM gnome-based reader for > linux. 
I then googled for a second and found this: > > http://home.comcast.net/~tim.one/ > > where a Python23.chm file is available with all the docs for python > 2.3. I opened that with gnochm andit worked really very well, with > searching and good navigation. I rather like the .chm file concept myself, and there are readers on multiple platforms, so it's not a bad way to go. So, let's add some help functionality that opens a .chm file in a new window (if display is available). -Travis From Fernando.Perez at colorado.edu Tue Sep 28 15:15:51 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 28 Sep 2004 13:15:51 -0600 Subject: [SciPy-user] Re: [SciPy-dev] MCMC, Kalman Filtering, AI for SciPy? In-Reply-To: <4159B4FA.6020900@ee.byu.edu> References: <4156440A.1040204@ee.byu.edu> <41566CBD.70005@enthought.com> <415855AD.4060701@sdl.usu.edu> <4158A23F.6080207@sdl.usu.edu> <1A1C914C-1177-11D9-8495-000A95B68E50@stsci.edu> <4159A668.6080001@colorado.edu> <4159A9A0.8070804@ee.byu.edu> <4159B23A.4090909@colorado.edu> <4159B4FA.6020900@ee.byu.edu> Message-ID: <4159B867.1000705@colorado.edu> Travis Oliphant schrieb: > This is the delayed import mechanism again (not info). The fancyness > hurts some tools that rely on introspection. I'm not sure if the > problem can be resolved as I very much want the delayed import > functionality. I realize that the delayed import functionality is pretty critical, but I think that breaking help() is kind of bad :) Unfortunately I know that the delayed import system plays a lot of fancy tricks to get things done, and I don't know the code well enough to offer a fix myself. But hopefully this can be fixed at some point, I'm willing to believe it's possible to somehow work things out. One option might be for scipy, upon import, to inject its own help() wrapper into __builtins__. 
This wrapper could then detect whether it's being called with a string, in which case it could try to avoid the delayed import things which break the normal help() calls, or if it's called with an object name, case in which throwing a NameError for undefined things is OK. I know this is only a hand-wavy sketch, but I just am not quite willing to swallow that a solution is 'impossible', knowing the flexibility of python. But since I can't really offer code right now, I won't complain either ;) Best, f From pearu at scipy.org Tue Sep 28 15:44:33 2004 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 28 Sep 2004 14:44:33 -0500 (CDT) Subject: [SciPy-user] Re: [SciPy-dev] MCMC, Kalman Filtering, AI for SciPy? In-Reply-To: <4159B867.1000705@colorado.edu> References: <4156440A.1040204@ee.byu.edu> <41566CBD.70005@enthought.com> <415855AD.4060701@sdl.usu.edu> <4158A23F.6080207@sdl.usu.edu><4159A668.6080001@colorado.edu> <4159A9A0.8070804@ee.byu.edu> <4159B23A.4090909@colorado.edu> <4159B4FA.6020900@ee.byu.edu> <4159B867.1000705@colorado.edu> Message-ID: On Tue, 28 Sep 2004, Fernando Perez wrote: > Travis Oliphant schrieb: > >> This is the delayed import mechanism again (not info). The fancyness >> hurts some tools that rely on introspection. I'm not sure if the problem >> can be resolved as I very much want the delayed import functionality. > > I realize that the delayed import functionality is pretty critical, but I > think that breaking help() is kind of bad :) Unfortunately I know that the > delayed import system plays a lot of fancy tricks to get things done, and I > don't know the code well enough to offer a fix myself. > > But hopefully this can be fixed at some point, I'm willing to believe it's > possible to somehow work things out. One option might be for scipy, upon > import, to inject its own help() wrapper into __builtins__. 
This wrapper > could then detect whether it's being called with a string, in which case it > could try to avoid the delayed import things which break the normal help() > calls, or if it's called with an object name, case in which throwing a > NameError for undefined things is OK. I know this is only a hand-wavy > sketch, but I just am not quite willing to swallow that a solution is > 'impossible', knowing the flexibility of python. But since I can't really > offer code right now, I won't complain either ;) This issue is now fixed in CVS. Btw, ppimport hooks actually work very similar to what you describe above ;-) Pearu From Fernando.Perez at colorado.edu Tue Sep 28 15:27:03 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 28 Sep 2004 13:27:03 -0600 Subject: [SciPy-user] Re: [SciPy-dev] MCMC, Kalman Filtering, AI for SciPy? In-Reply-To: References: <4156440A.1040204@ee.byu.edu> <41566CBD.70005@enthought.com> <415855AD.4060701@sdl.usu.edu> <4158A23F.6080207@sdl.usu.edu><4159A668.6080001@colorado.edu> <4159A9A0.8070804@ee.byu.edu> <4159B23A.4090909@colorado.edu> <4159B4FA.6020900@ee.byu.edu> <4159B867.1000705@colorado.edu> Message-ID: <4159BB07.1090103@colorado.edu> Pearu Peterson schrieb: > On Tue, 28 Sep 2004, Fernando Perez wrote: >>But hopefully this can be fixed at some point, I'm willing to believe it's >>possible to somehow work things out. One option might be for scipy, upon >>import, to inject its own help() wrapper into __builtins__. This wrapper >>could then detect whether it's being called with a string, in which case it >>could try to avoid the delayed import things which break the normal help() >>calls, or if it's called with an object name, case in which throwing a >>NameError for undefined things is OK. I know this is only a hand-wavy >>sketch, but I just am not quite willing to swallow that a solution is >>'impossible', knowing the flexibility of python. 
But since I can't really >>offer code right now, I won't complain either ;) > > > This issue is now fixed in CVS. Btw, ppimport hooks actually work very > similar to what you describe above ;-) Wow. How's that for speedy service... I guess it was worth complaining a bit :) Best, and many thanks! f From al.danial at ngc.com Tue Sep 28 18:25:54 2004 From: al.danial at ngc.com (Danial, Al (IIS)) Date: Tue, 28 Sep 2004 15:25:54 -0700 Subject: [SciPy-user] trying to use weave to access image data in C Message-ID: I'm inspired by recent posts on using weave to include C code within a Python program. Does anyone have an example of how one would access the bytes of an image (as loaded with imread) within a weave C inline? Here's a rough idea of what I'm trying to do #!/usr/bin/python from scipy import imread, weave img = imread('image.pcx') rows, cols, colors = img.shape code = """ for (int j=0; j < cols; j++) { for (int i=0; i < rows; i++) { printf("img[%3d][%3d]=%3d %3d %3d\n", i, j, img[i][j][0], img[i][j][1], img[i][j][2]); } } """ weave.inline(code,['img','rows','cols']) Of course I'm interested in actually doing some math on the R,G,B values at each pixel, not just printing the values out. At the moment I'm doing the loops over each pixel (indices i and j in the C code above) in Python and it is pretty slow. 1. How should I declare the img variable in the C code so that it correctly binds to the Python variable of the same name? 2. How would I step through the img variable in the C code? For example would the green value at pixel i,j be referenced as img[i][j][2] or img[i + j*rows][2] or img[(i + j*rows)*2] ? 3. This is the bonus round: if I modify the img variable in the C code how can I get the modified value back to Python? I'm studying http://www.scipy.org/documentation/weave/weaveusersguide.html#inline_numeric_argument_conversion which explains the process scalar integers well. Don't know how to extend that to the array type returned by imread though. 
-- Al From Fernando.Perez at colorado.edu Tue Sep 28 19:14:34 2004 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 28 Sep 2004 17:14:34 -0600 Subject: [SciPy-user] trying to use weave to access image data in C In-Reply-To: References: Message-ID: <4159F05A.9070300@colorado.edu> Danial, Al (IIS) schrieb: > I'm inspired by recent posts on using weave to include C code within > a Python program. Does anyone have an example of how one would access > the bytes of an image (as loaded with imread) within a weave C inline? Sorry that I don't have time right now to give you a detailed answer, but if you look here: http://amath.colorado.edu/faculty/fperez/python/ you'll find some weave example code which may get you going. Best, f From Bob.Cowdery at CGI-Europe.com Wed Sep 29 06:27:32 2004 From: Bob.Cowdery at CGI-Europe.com (Bob.Cowdery at CGI-Europe.com) Date: Wed, 29 Sep 2004 11:27:32 +0100 Subject: [SciPy-user] Performance Python with Weave article updated Message-ID: <9B40DB5FBF7D2048AD79C2A91F862A51773104@STEVAPPS05.Stevenage.CGI-Europe.com> Hi Francesc, Thank you very much for your work. Apologies for the delay in replying; I have been laid low with a bug for a few days. It is very interesting to me to see the different ways to do things. I also worked on my code last week and my approach was very similar I think but I copied the techniques I had seen. I have the advantage of course of being able to test my results in my software receiver. The version I have attached runs in a similar time to the weave example although I didn't check actual figures. I am interested in two things, the relative merits of using the ArrayType and the technique that you used. Some arrays are output values and I found that ArrayType did change the source array which was what I wanted. Also I found that when I tried my code for real it didn't work and I found that the ++ operator did not increment, so the code ran fast but wasn't executing correctly.
I notice you still use ++, is there some way to make this work in Pyrex? I just attached the whole file, less the blurb, that will contain my extensions, there are two functions in there at present. Regards Bob -----Original Message----- From: Francesc Alted [mailto:falted at pytables.org] Sent: 24 September 2004 17:42 To: scipy-user at scipy.net Cc: Bob.Cowdery at CGI-Europe.com Subject: Re: [SciPy-user] Performance Python with Weave article updated A Divendres 24 Setembre 2004 15:23, Bob.Cowdery at CGI-Europe.com va escriure: > Ok I see the error of my ways, I should have read the article more > closely!! I need to expose each array using ArrayType - I guess that > will make all the difference to the Pyrex time... In fact, as Prabhu has already said it is not really necessary to expose the Array type in Pyrex. Anyway, I looked into your code, and I saw that this is the kind of code where Pyrex can really shine. So, I put my hands to work and this is what I got: 1.- I've converted all the arithmetic where Numeric arrays were implied and substituted by pointers in C. As all the arrays implied were unidimensional, I was able to keep the indexes (Pyrex can do that for one-dimensional pointers, not for the general multidimensional case).
For example, the lines: for tapup from 0 <= tapup <= ln-1: ISum = ISum + filter_phasing[tapdwn] * I_Delay[delay_ptr] QSum = QSum + filter_phasing[tapup] * Q_Delay[delay_ptr] have been replaced by: cdef int buflen cdef void *data cdef double *p_I_Delay cdef double *p_Q_Delay cdef double *p_filter_phasing if PyObject_AsWriteBuffer(I_Delay, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") p_I_Delay = data if PyObject_AsWriteBuffer(Q_Delay, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") p_Q_Delay = data if PyObject_AsWriteBuffer(filter_phasing, &data, &buflen) <> 0: raise RuntimeError("Error getting the array data buffer") p_filter_phasing = data for tapup from 0 <= tapup <= ln-1: ISum = ISum + p_filter_phasing[tapdwn] * p_I_Delay[delay_ptr] QSum = QSum + p_filter_phasing[tapup] * p_Q_Delay[delay_ptr] and so on and so forth for the other vector operations. That first optimization gave a speed-up of 6.2x over the original python code. You can find the complete code for this step in the attachment named as: src/sigblocks_ext.pyx.versio1 2.- Then, I declared the type of variables in the main function. If you don't do that, all the variables are considered python objects, and access to their values is very expensive. This optimization gave an additional speed-up of 3.7x for a total speed-up of 27.7x. The code is in the attachment: sigblocks_ext.pyx.versio2 3.- I've removed all the python calls in the loops, namely abs() and int(). abs() has been replaced by the fabs() C call, while int() has been removed completely (why you try to convert a double to an int and then assign again to a double? if you want to do that, perhaps a C call to roundf would be better).
Also, I've removed the line:

    if(fabs(usb) > peak): self.m_peak = fabs(usb)

and replaced it by:

    cdef double m_peak
    if(fabs(usb) > peak): m_peak = fabs(usb)

and return this maximum at the end:

    return m_peak

This optimization gave an additional speed-up of 3.7x (yes, again) for a total speed-up of 161x. The code of this final version is in the attachment: sigblocks_ext.pyx

Well, this is not exactly the more-than-200x speed-up that you achieved with inline weave, but it is very close. Perhaps the code could be optimized still further, although I believe the gains would be minimal. Well, I think I have ended up with a good exercise for my next seminar about Python and scientific computing :)

Cheers,
-- Francesc Alted

*** Confidentiality Notice *** Proprietary/Confidential Information belonging to CGI Group Inc. and its affiliates may be contained in this message. If you are not a recipient indicated or intended in this message (or responsible for delivery of this message to such person), or you think for any reason that this message may have been addressed to you in error, you may not use or copy or deliver this message to anyone else. In such case, you should destroy this message and are asked to notify the sender by reply email.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Copy of sigblocks_ext.pyx
Type: application/octet-stream
Size: 8326 bytes
Desc: not available
URL:

From quendor at nandor.net Wed Sep 29 10:30:54 2004
From: quendor at nandor.net (quendor at nandor.net)
Date: Wed, 29 Sep 2004 09:30:54 -0500
Subject: [SciPy-user] Most Basic of Matrix
In-Reply-To: <4155CFEA.50405@ucsd.edu>
References: <7507F3645A49E841AEF5818BCB2918A81FF64D@ex7.asurite.ad.asu.edu> <1096057448.3331.43.camel@sunspot.csun.edu> <4154854E.3010802@colorado.edu> <4C11C0AA-0EFE-11D9-AAFC-000A95A52B4A@nandor.net> <4155CFEA.50405@ucsd.edu>
Message-ID: <2B76112E-1224-11D9-A9CB-000A95A52B4A@nandor.net>

Robert,

I had been using Numeric 23.0. I re-installed from source and encountered the same problem. I then installed Numeric 23.3 and the problem was gone. Must have been a bug in 23.0.

Thanks

On Sep 25, 2004, at 3:07 PM, Robert Kern wrote:

> quendor at nandor.net wrote:
>> Forgive in advance what will probably be the lamest of questions. I
>> am new to scipy. I can select an element of a matrix, but seem
>> unable to change the element. Am I using the wrong API?
>> When I run the following code:
>> #!/usr/bin/python
>> from scipy import *
>> A = mat([[1,2],[3,4]])
>> print A[1,1]
>> A[1,1] = 100
>
> This works fine for me on OS X 10.3, Python 2.3, SciPy CVS, and
> Numeric 23.3. What versions of SciPy and Numeric are you using?
>
> --
> Robert Kern
> rkern at ucsd.edu

From prabhu at aero.iitm.ernet.in Wed Sep 29 14:16:45 2004
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Wed, 29 Sep 2004 23:46:45 +0530
Subject: [SciPy-user] trying to use weave to access image data in C
In-Reply-To:
References:
Message-ID: <16730.64525.331874.994162@monster.linux.in>

>>>>> "AD" == Al Danial writes:

AD> I'm inspired by recent posts on using weave to include C code
AD> within a Python program.
AD> Does anyone have an example of how
AD> one would access the bytes of an image (as loaded with imread)
AD> within a weave C inline?

I've attached two files that illustrate two things:

1. How to do what you wanted. This is the first attachment.
2. How to create an extension module using weave's ext_tools for this. This is the second attachment and shows how you can easily build full-fledged extension modules with ext_tools.

AD> 1. How should I declare the img variable in the C code so that
AD> it correctly binds to the Python variable of the same name?

It's magic! You don't have to do anything. weave figures out the types by inspecting the locals() from where you called it.

AD> 2. How would I step through the img variable in the C code?
AD> For example would the green value at pixel i,j be
AD> referenced as
AD> img[i][j][2]
AD> or
AD> img[i + j*rows][2]
AD> or
AD> img[(i + j*rows)*2]
AD> ?

It depends on how you weave the magic. If you use converters.blitz you can simply use img(i,j,k). If you don't use the blitz converter you will only get a flattened array (i.e. a 1D array) from Numeric and will need to do the pointer arithmetic appropriately to access the elements.

AD> 3. This is the bonus round: if I modify the img variable in
AD> the C code how can I get the modified value back to Python?

If you set u(i, j, k) = 0 (or use pointers), you are modifying the contents of the array directly, and nothing more is necessary on your side. The modified img array will be what you see in Python.

cheers,
prabhu

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: t.py
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: t1.py
URL:

From s.e.murdock at soton.ac.uk Wed Sep 29 14:45:02 2004
From: s.e.murdock at soton.ac.uk (Stuart Murdock)
Date: Wed, 29 Sep 2004 19:45:02 +0100
Subject: [SciPy-user] Geometry package
Message-ID: <415B02AE.8080902@soton.ac.uk>

Hi

I am looking for a package a bit like Konrad Hinsen's Scientific.Geometry, except I want it to be able to make things like planes, cylinders and general geometrical shapes from the points that I have. Then I want to be able to calculate angles and overlaps with other geometrical shapes in a Python environment. Is there anything else out there?

Thanks

Stuart

--
Stuart Murdock Ph.D, Research Fellow,
Dept. of Chemistry / E-Science, University of Southampton,
Highfield, Southampton, SO17 1BJ, United Kingdom
http://www.biosimgrid.org

From prabhu at aero.iitm.ernet.in Wed Sep 29 15:32:31 2004
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Thu, 30 Sep 2004 01:02:31 +0530
Subject: [SciPy-user] trying to use weave to access image data in C
In-Reply-To: <16730.64525.331874.994162@monster.linux.in>
References: <16730.64525.331874.994162@monster.linux.in>
Message-ID: <16731.3535.270141.666382@monster.linux.in>

>>>>> "PR" == Prabhu Ramachandran writes:

AD> 2. How would I step through the img variable in the C code?
AD> For example would the green value at pixel i,j be referenced
AD> as img[i][j][2] or img[i + j*rows][2] or img[(i + j*rows)*2]?

PR> Depends on how you weave the magic. If you use
PR> converters.blitz you can simply use img(i,j,k). If you don't
PR> use the blitz converter you will only get a flattened array
PR> (i.e. a 1D array) from Numeric and will need to do pointer
PR> arithmetic appropriately to access the elements.

I've attached yet another variant that does the array access directly using pointer arithmetic. This does not use blitz arrays and builds faster. I've also swapped the row and cols names to be consistent with row-major notation and storage.
cheers,
prabhu

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: t2.py
URL:

From chris at fisher.forestry.uga.edu Wed Sep 29 23:00:52 2004
From: chris at fisher.forestry.uga.edu (Christopher Fonnesbeck)
Date: Wed, 29 Sep 2004 23:00:52 -0400
Subject: [SciPy-user] window re-use in scipy.gplt
Message-ID:

Is there any way of forcing multiple plots to re-use the same window in scipy.gplt? It seems to happen automatically on Windows, but on UNIX (Linux and Mac) each plot generates its own window, which makes for a cluttered screen when you generate a sequence of dozens of plots (like I do).

Thanks for any info.

Chris
--
Christopher J. Fonnesbeck ( f o n n e s b e c k @ m a c . c o m )
Georgia Cooperative Fish & Wildlife Research Unit
University of Georgia, Athens GA

From nwagner at mecha.uni-stuttgart.de Thu Sep 30 02:56:58 2004
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Thu, 30 Sep 2004 08:56:58 +0200
Subject: [SciPy-user] Problems with python setup.py build
Message-ID: <415BAE3A.7060207@mecha.uni-stuttgart.de>

Hi all,

I am using the latest f2py and scipy from CVS. I cannot build scipy. What could be the reason?

gcc: build/src/Lib/interpolate/dfitpackmodule.c
build/src/Lib/interpolate/dfitpackmodule.c:407:27: macro "CHECKARRAY" requires 3 arguments, but only 1 given
build/src/Lib/interpolate/dfitpackmodule.c: In function `f2py_rout_dfitpack_splev':
build/src/Lib/interpolate/dfitpackmodule.c:407: error: `CHECKARRAY' undeclared (first use in this function)
build/src/Lib/interpolate/dfitpackmodule.c:407: error: (Each undeclared identifier is reported only once
build/src/Lib/interpolate/dfitpackmodule.c:407: error: for each function it appears in.)
build/src/Lib/interpolate/dfitpackmodule.c:407: warning: left-hand operand of comma expression has no effect
build/src/Lib/interpolate/dfitpackmodule.c:407: warning: left-hand operand of comma expression has no effect
build/src/Lib/interpolate/dfitpackmodule.c:407: error: parse error before ')' token
build/src/Lib/interpolate/dfitpackmodule.c:581:62: macro "CHECKARRAY" passed 8 arguments, but takes just 3

Nils

From pearu at scipy.org Thu Sep 30 07:15:49 2004
From: pearu at scipy.org (Pearu Peterson)
Date: Thu, 30 Sep 2004 06:15:49 -0500 (CDT)
Subject: [SciPy-user] Problems with python setup.py build
In-Reply-To: <415BAE3A.7060207@mecha.uni-stuttgart.de>
References: <415BAE3A.7060207@mecha.uni-stuttgart.de>
Message-ID:

On Thu, 30 Sep 2004, Nils Wagner wrote:

> Hi all,
>
> I am using the latest f2py and scipy from CVS. I cannot build scipy. What
> could be the reason?
>
> gcc: build/src/Lib/interpolate/dfitpackmodule.c
> build/src/Lib/interpolate/dfitpackmodule.c:407:27: macro "CHECKARRAY"
> requires 3 arguments, but only 1 given

That's due to a bug in f2py 2.43.239_1813.
Update f2py and scipy from CVS, remove the build directory, and rerun `python setup.py build`.

Thanks,
Pearu

________________________

Pearu,

The problem is still there:

    f2py -v
    2.43.239_1814

Nils

> _______________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user

--
Dr.-Ing. Nils Wagner
Institut A für Mechanik
Universität Stuttgart
Pfaffenwaldring 9
D-70550 Stuttgart
Tel.: (+49) 0711 685 6262
Fax.: (+49) 0711 685 6282
E-mail: nwagner at mecha.uni-stuttgart.de

From pearu at scipy.org Thu Sep 30 07:35:29 2004
From: pearu at scipy.org (Pearu Peterson)
Date: Thu, 30 Sep 2004 06:35:29 -0500 (CDT)
Subject: [SciPy-user] Problems with python setup.py build
In-Reply-To: <415BE9C6.2080102@mecha.uni-stuttgart.de>
References: <415BAE3A.7060207@mecha.uni-stuttgart.de> <415BE9C6.2080102@mecha.uni-stuttgart.de>
Message-ID:

On Thu, 30 Sep 2004, Nils Wagner wrote:

> The problem is still there
>
> f2py -v
> 2.43.239_1814

Did you remove the build directory? It has to be done manually in order to get rid of the f2py-generated extension module sources that were incorrectly generated with f2py 2.43.239_1813.

Pearu

From nwagner at mecha.uni-stuttgart.de Thu Sep 30 07:31:17 2004
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Thu, 30 Sep 2004 13:31:17 +0200
Subject: [SciPy-user] Problems with python setup.py build
In-Reply-To:
References: <415BAE3A.7060207@mecha.uni-stuttgart.de> <415BE9C6.2080102@mecha.uni-stuttgart.de>
Message-ID: <415BEE85.8050208@mecha.uni-stuttgart.de>

Pearu Peterson wrote:
>
> On Thu, 30 Sep 2004, Nils Wagner wrote:
>
>> The problem is still there
>>
>> f2py -v
>> 2.43.239_1814
>
> Did you remove the build directory? It has to be done manually in
> order to get rid of f2py generated extension module sources that are
> incorrectly generated with f2py 2.43.239_1813.
>
> Pearu

Oops, I have missed that...
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user

--
Dr.-Ing. Nils Wagner
Institut A für Mechanik
Universität Stuttgart
Pfaffenwaldring 9
D-70550 Stuttgart
Tel.: (+49) 0711 685 6262
Fax.: (+49) 0711 685 6282
E-mail: nwagner at mecha.uni-stuttgart.de