From dgalant at zahav.net.il Mon Oct 1 02:48:32 2007 From: dgalant at zahav.net.il (dgalant) Date: Mon, 1 Oct 2007 08:48:32 +0200 Subject: [SciPy-user] Finding all the roots in an interval Message-ID: I can't help but to jump in with an old and probably obscure reference to a paper BUSH JONES, W. G. WALLER and ARNOLD FELDMAN, "Root isolation using function values" (Google the title) which gives a rather neat solution to this problem. No, it is not foolproof, but it would be successful for the problem you have stated. Not Python, but a Fortran program is given. David Galant dgalant at zahav.net.il From gael.varoquaux at normalesup.org Mon Oct 1 03:12:13 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 1 Oct 2007 09:12:13 +0200 Subject: [SciPy-user] All the roots of a function in an interval In-Reply-To: References: <20070929173805.GB4826@clipper.ens.fr> <20070929183623.GD4826@clipper.ens.fr> Message-ID: <20071001071213.GB18924@clipper.ens.fr> Hi Anne, Thanks a lot for your reply, enlightening as usual. On Sat, Sep 29, 2007 at 03:51:30PM -0400, Anne Archibald wrote: > The easiest and most reliable method here is to simply sample the > function at equispaced points the nearest distance apart they can be; > then every sign change gives you an interval bracketing a root. I did exactly as you suggested. It is indeed the best solution. Unfortunately it is slowish, and cannot be well implemented with arrays. Here is the code I got to, just in case it can be of some use for somebody later on:

def find_x(m_K, m_Rb, delta_x):
    """ Returns the 2D array of x for which:
            m_K * sin(k_Rb * x) - m_Rb * sin(k_K * (x + delta_x)) == 0
        with delta_x a 1D array.
    """
    delta_x = c_[delta_x]
    f = lambda x, delta_x: m_K*sin(k_Rb*x) - m_Rb*sin(k_K*(x + delta_x))
    interval = linspace(-5*Delta_x, 5*Delta_x, 150)
    scan = f(interval, delta_x)
    sign_changes_mask = (scan[:, 1:]*scan[:, :-1]) < 0
    min_num_x = (sign_changes_mask.sum(axis=-1)).min()
    x = empty((len(delta_x), min_num_x))
    indexes = arange(len(interval))
    for index, this_delta_x in enumerate(delta_x.flat):
        #this_x = _find_x(m_K, m_Rb, this_delta_x)
        # Keep only the min_num_x innermost solutions:
        sign_changes_index = indexes[sign_changes_mask[index, :]]
        sign_changes_index = sign_changes_index[
            int((len(sign_changes_index)-min_num_x)/2):
            int((len(sign_changes_index)+min_num_x)/2)]
        this_x = []
        for column_index in sign_changes_index:
            a = interval[column_index]
            b = interval[column_index + 1]
            this_x.append(optimize.brentq(f, a, b, args=(this_delta_x,)))
        x[index, :] = this_x
    return x

One of the difficulties was that, depending on my parameter "delta_x", the number of roots changed, so I couldn't easily fit this in an array. The good news is that the further away these roots were from 0, the less important they were in the calculation (I have a gaussian prefactor), so I just dump the extra ones, and thus build an array out of a fixed number of roots, which speeds up the calculations I do on these roots a lot. This code has served me well over the weekend, and produced good results. I think I just figured out a way to address my problem with a totally different approach, so no more root finding! For those interested, it is an inverse problem for which I have an analytical model, and since the root finding is expensive, and the model gets harder and harder to derive analytically as I refine it, the brute-force approach of calculating the forward problem for all values of my only unknown parameter and matching it to the data scales better than using the analytical model to do the inverse problem. I don't know if I am clear, but if not just forget it.
Anyway, thanks a lot for your help, this mailing list is a gold mine ! Gaël From gael.varoquaux at normalesup.org Mon Oct 1 03:22:56 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 1 Oct 2007 09:22:56 +0200 Subject: [SciPy-user] Finding all the roots in an interval In-Reply-To: References: Message-ID: <20071001072256.GC18924@clipper.ens.fr> On Mon, Oct 01, 2007 at 08:48:32AM +0200, dgalant wrote: > I can't help but to jump in with an old and probably obscure > reference to a paper > BUSH JONES, W. G. WALLER and ARNOLD FELDMAN, > "Root isolation using function values" It's probably the solution to the problem I was having. Unfortunately I can't read the paper, as being in a physics department I don't have access to this publication. It is interesting to see that it is quoted in a paper on maximum likelihood parameter estimation, which is exactly the context in which I stumbled on this problem. As I stated in my answer to Anne, I might totally shift my strategy and no longer have to find these roots, because both the analytic calculations and the numerical root finding get much more complicated as I add more physics to my problem. Thanks for the reference, though, it is useful to know where this problem has been discussed. Cheers, Gaël From berthe.loic at gmail.com Mon Oct 1 05:49:30 2007 From: berthe.loic at gmail.com (LB) Date: Mon, 01 Oct 2007 02:49:30 -0700 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> Message-ID: <1191232170.557019.18810@d55g2000hsg.googlegroups.com> > I think histogram has had this weird behavior since the numeric era and a > lot of code may break if we fix it. Basically, histogram discards the lower > than range values as outliers but puts the higher than range values into the > last bin.
I think this should be clearly explained in the doc string. The current doc string says "Values outside of this range are allocated to the closest bin". This is wrong, can lead to bugs, and should be fixed. numpy.histogram's behavior still seems weird to me, and I don't see why values lower than the range should always be discarded as outliers. If the real problem is consistency with older versions from the numeric era, what about adding a new keyword to the function, say "discard", which could be used to decide what to do with values outside the range:
- 'low' => values lower than the range are discarded, values higher are added to the last bin
- 'up' => values higher than the range are discarded, values lower are added to the first bin
- 'out' => values out of the range are discarded
- None => values outside of this range are allocated to the closest bin
For compatibility reasons, a default value of 'low' could be used. > I'm generally using my own histograming routines, I could send them your way > if you're interested. Thanks, I will check the code you've put in the sandbox at home. -- LB From massimo.sandal at unibo.it Mon Oct 1 10:51:58 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 01 Oct 2007 16:51:58 +0200 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> Message-ID: <4701098E.1060805@unibo.it> David Huard ha scritto: > Hi LB, > > I think histogram has had this weird behavior since the numeric era and > a lot of code may break if we fix it. > Basically, histogram discards the > lower than range values as outliers but puts the higher than range > values into the last bin. > > I'm generally using my own histograming routines, I could send them your > way if you're interested.
I'm not a SciPy developer so my weight on the discussion is next to zero, but I personally think that other people's code breaking is not a good reason to keep a buggy function. We don't want code that relies on bugs as if they were features: we're not Microsoft, are we? ;) People should be informed before updating, of course, and maybe an oldhistogram() function could be maintained to allow for easy updating, but if histogram() behaviour is so flawed that people are forced to rewrite their own routines, please fix it. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From david.huard at gmail.com Mon Oct 1 10:57:07 2007 From: david.huard at gmail.com (David Huard) Date: Mon, 1 Oct 2007 10:57:07 -0400 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <1191232170.557019.18810@d55g2000hsg.googlegroups.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> <1191232170.557019.18810@d55g2000hsg.googlegroups.com> Message-ID: <91cf711d0710010757m65737d7xe281fe2b4d8930b0@mail.gmail.com> 2007/10/1, LB : > > > I think histogram has had this weird behavior since the numeric era and > a > > lot of code may break if we fix it. Basically, histogram discards the > lower > > than range values as outliers but puts the higher than range values into > the > > last bin. > I think this should be clearly explained in the doc string. The > current doc string says > "Values outside of this range are allocated to the closest bin". > This is wrong, can leads to bug and should be fixed. You're right. In fact, it said so at some point but it seems it has been edited out.
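[For readers of the archive: the `discard` keyword proposed in this thread never had a reference implementation posted. A sketch of the proposed semantics as a standalone wrapper around `numpy.histogram` follows; `histogram_discard` is a hypothetical helper, not part of numpy.]

```python
import numpy as np

def histogram_discard(a, bins=10, range=None, discard='low'):
    """Histogram with explicit out-of-range handling.

    discard='low'  : drop values below the range, lump high ones into the last bin
    discard='up'   : drop values above the range, lump low ones into the first bin
    discard='out'  : drop values outside the range on both sides
    discard=None   : clip every value into the closest bin
    """
    a = np.asarray(a, dtype=float).ravel()
    lo, hi = (a.min(), a.max()) if range is None else range
    edges = np.linspace(lo, hi, bins + 1)
    if discard == 'low':
        a = np.clip(a[a >= lo], lo, hi)   # high outliers land in the last bin
    elif discard == 'up':
        a = np.clip(a[a <= hi], lo, hi)   # low outliers land in the first bin
    elif discard == 'out':
        a = a[(a >= lo) & (a <= hi)]
    elif discard is None:
        a = np.clip(a, lo, hi)
    counts, _ = np.histogram(a, bins=edges)
    return counts, edges
```

For example, with data `[-1, 0.5, 1.5, 2.5, 10]` and `range=(0, 3)`, `discard='low'` drops the -1 and counts the 10 in the last bin, while `discard='out'` drops both.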
numpy.histogram's behavior seems still weirds to me, and I don't see > why values > lower than range should always be discarded as outliers. > If the real problem is cosistency with older versions from the numeric > era, > what about adding a new keyword to the function, says "discard", which > could be > used to decide what to do with values outside the range : > - 'low' => values lower than the range are discarded, values > higher are added to the last bin > - 'up' => values higher than the range are discarded, values > lower > are added to the first bin > - 'out' => values out of the range are discarded > - None => values outside of this range are allocated to the > closest > bin > > For compatibility reason, a default value of 'low' could be used. Good idea. Better yet would be to raise a deprecation warning and change the function in the next or second next release, and ideally replace it with something written in C to speed things up. The final decision is up to someone other than me, though. Cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From timmichelsen at gmx-topmail.de Mon Oct 1 12:45:47 2007 From: timmichelsen at gmx-topmail.de (Tim) Date: Mon, 1 Oct 2007 16:45:47 +0000 (UTC) Subject: [SciPy-user] plotting a surface Message-ID: Hello, I have some code written in IDL but would very much like to switch to Python for the next projects because the syntax seems friendlier to me. One question though: How do I create surface plots in matplotlib or any other plotting module? Many tutorials state that this isn't possible, so far. Is that right? tutorials with remarks about surface plotting: 1) http://xweb.geos.ed.ac.uk/~hcp/notes_python.pdf cf. An Introduction to matplotlib 3.1 Surface plots Matplotlib has no equivalent of R's persp or IDL's surface. This section left blank. 2) matplotlib does not yet have equivalents for all IDL 2-D plotting capability (e.g., surface plots) cf.
http://dae.iaa.csic.es/~eperez/Python-IDL-comparison.pdf "Appendix B: Why would I switch from IDL to Python (or not)?" page 3 3) no real equivalent python command in http://cfa-www.harvard.edu/~jbattat/computer/python/science/idl-numpy.html Thanks in advance for your comments on that! Tim From aisaac at american.edu Mon Oct 1 13:28:31 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 1 Oct 2007 13:28:31 -0400 Subject: [SciPy-user] plotting a surface In-Reply-To: References: Message-ID: http://www.american.edu/econ/notes/soft.htm#pyplot fwiw, Alan Isaac From aisaac at american.edu Mon Oct 1 14:09:48 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 1 Oct 2007 14:09:48 -0400 Subject: [SciPy-user] Hodrick-Prescott filter Message-ID: I'm looking for a Hodrick-Prescott filter. http://en.wikipedia.org/wiki/Hodrick-Prescott_filter#External_links Is there one in SciPy, or can someone post one? Thank you, Alan Isaac From afraser at lanl.gov Mon Oct 1 14:47:09 2007 From: afraser at lanl.gov (Andy Fraser) Date: Mon, 01 Oct 2007 12:47:09 -0600 Subject: [SciPy-user] plotting a surface In-Reply-To: (timmichelsen@gmx-topmail.de's message of "Mon, 1 Oct 2007 16:45:47 +0000 (UTC)") References: Message-ID: <87ejgeu0ya.fsf@lanl.gov> >>>>> "Tim" == Tim writes: Tim> [...] One question though: Tim> How do I conceive surface plots in matplotlib or any other Tim> plotting module? [...] My solution is to:
1. Write the data to a file.
2. Open a pipe to gnuplot.
3. Write gnuplot commands including "splot" to the pipe.
It isn't pretty, but it works, and it gives me access to anything that gnuplot can do.
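[For readers of the archive: the three steps above can be sketched in Python. This is a minimal sketch; `gnuplot_surface_script` and the file names are made up for illustration, and gnuplot is only invoked if it is actually installed.]

```python
import shutil
import subprocess
import numpy as np

def gnuplot_surface_script(z, datafile="surface.dat"):
    """Step 1: write a 2-D array as a whitespace-separated grid of
    z-values; return the gnuplot commands (step 3) that plot it."""
    z = np.asarray(z)
    with open(datafile, "w") as f:
        for row in z:
            f.write(" ".join("%g" % v for v in row) + "\n")
    # 'matrix' tells gnuplot the data file is a grid of z-values
    return ("set term png\nset output 'surface.png'\n"
            "splot '%s' matrix with lines\n" % datafile)

x = np.linspace(-2, 2, 30)
script = gnuplot_surface_script(np.exp(-(x[:, None]**2 + x[None, :]**2)))
if shutil.which("gnuplot"):
    # Step 2: open a pipe to gnuplot and feed it the commands
    subprocess.run(["gnuplot"], input=script, text=True, check=True)
```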
Andy From prabhu at aero.iitb.ac.in Mon Oct 1 15:22:34 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Tue, 2 Oct 2007 00:52:34 +0530 Subject: [SciPy-user] plotting a surface In-Reply-To: References: Message-ID: <18177.18683.310.616004@monster.iitb.ac.in> >>>>> "Alan" == Alan G Isaac writes: Alan> http://www.american.edu/econ/notes/soft.htm#pyplot fwiw, Alan> Alan Isaac You might want to try mayavi's mlab: https://svn.enthought.com/enthought/attachment/wiki/MayaVi/mlab.pdf?format=raw cheers, prabhu From amcmorl at gmail.com Mon Oct 1 15:35:05 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Tue, 2 Oct 2007 07:35:05 +1200 Subject: [SciPy-user] plotting a surface In-Reply-To: References: Message-ID: On 02/10/2007, Tim wrote: > Hello, > I have some code written in IDL but would very much like to switch to IDl > for the next projects because the synatx seems friendlier to me. > > One question though: > > How do I conceive surface plots in matplotlib or any other plotting module? Another possibility: Mayavi2 (see http://code.enthought.com/mayavi2/ and http://www.scipy.org/Cookbook/MayaVi) is an excellent visualisation package that includes a surface function (http://www.scipy.org/Cookbook/MayaVi/Surf) and can be run from the command-line in a manner similar to matplotlib. I switched from IDL a couple of years ago, and have never looked back. Angus. -- AJC McMorland, PhD Student Physiology, University of Auckland From aisaac at american.edu Mon Oct 1 17:58:18 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 1 Oct 2007 17:58:18 -0400 Subject: [SciPy-user] recursive least squares Message-ID: Recursive least squares http://en.wikipedia.org/wiki/Recursive_least_squares is often used as a parameter instability diagnostic in time series applications. Is it available in SciPy?
Thank you, Alan Isaac From elfnor at gmail.com Mon Oct 1 18:52:50 2007 From: elfnor at gmail.com (elfnor) Date: Mon, 1 Oct 2007 22:52:50 +0000 (UTC) Subject: [SciPy-user] can't import scipy.io Message-ID: I have the enthon 1.0.0 edition of python 2.4 installed and just tried to upgrade to numpy 1.0.3.1 and scipy 0.6.0 using the windows installers (.exe) on the scipy.org site I now get an error RuntimeError: module compiled against version 90907 of C-API but this version of numpy is 1000009 when I try to import scipy.io cheers From robert.kern at gmail.com Mon Oct 1 19:04:28 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 01 Oct 2007 18:04:28 -0500 Subject: [SciPy-user] can't import scipy.io In-Reply-To: References: Message-ID: <47017CFC.9000606@gmail.com> elfnor wrote: > I have the enthon 1.0.0 edition of python 2.4 installed and just tried to > upgrade to numpy 1.0.3.1 and scipy 0.6.0 using the windows installers (.exe) on > the scipy.org site > > I now get an error > RuntimeError: module compiled against version 90907 of C-API but this version of > numpy is 1000009 > > when I try to import scipy.io Presumably you have leftover files from the Enthon scipy. Delete Python24/Lib/site-packages/scipy/ and re-install scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From hasslerjc at comcast.net Mon Oct 1 19:29:33 2007 From: hasslerjc at comcast.net (John Hassler) Date: Mon, 01 Oct 2007 19:29:33 -0400 Subject: [SciPy-user] recursive least squares In-Reply-To: References: Message-ID: <470182DD.1090703@comcast.net> The basic function is very easy to write. (It can get much more complex, of course.) Here is a Matlab version from when I taught a course in self-tuning control a dozen years ago. If I can find a little time this evening, I'll translate it into Python, although it's probably obvious. 
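[For readers of the archive: a sketch of what that NumPy translation might look like, based on the Matlab `rolsf` posted with this thread. The name and call pattern are carried over from the Matlab original; this is not a SciPy function.]

```python
import numpy as np

def rolsf(x, y, p, th, lam):
    """Recursive ordinary least squares for the single-output case,
    including the forgetting factor lam.

    x: (n,) input, y: scalar output, p: (n, n) covariance,
    th: (n, 1) parameter estimate.  Returns the updated (th, p).
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    a = np.dot(p, x)
    g = 1.0 / (float(np.dot(x.T, a)) + lam)
    k = g * a                         # gain vector
    e = y - float(np.dot(x.T, th))    # prediction error
    th = th + k * e
    p = (p - g * np.dot(a, a.T)) / lam
    return th, p
```

Feeding it noise-free samples of, say, y = 2*x + 3 (with regressor vector [x, 1], a large initial covariance, and lam = 1) recovers the two coefficients.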
john

function [th,p] = rolsf(x,y,p,th,lam)
% function [th,p] = rolsf(x,y,p,th,lam)
% Recursive ordinary least squares for single output case,
% including the forgetting factor, lambda.
% Enter with x = input, y = output, p = covariance, th = estimate,
% lam = forgetting factor
%
a=p*x;
g=1/(x'*a+lam);
k=g*a;
e=y-x'*th;
th=th+k*e;
p=(p-g*a*a')/lam;

Alan G Isaac wrote: > Recursive least squares > http://en.wikipedia.org/wiki/Recursive_least_squares > is often used as a parameter instability diagnostic > in time series applications. > > Is it available in SciPy? > > Thank you, > Alan Isaac > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From elfnor at gmail.com Mon Oct 1 20:12:33 2007 From: elfnor at gmail.com (elfnor) Date: Tue, 2 Oct 2007 00:12:33 +0000 (UTC) Subject: [SciPy-user] can't import scipy.io References: <47017CFC.9000606@gmail.com> Message-ID: That worked. Thanks Robert Kern gmail.com> writes: From sleibman at gmail.com Mon Oct 1 21:26:34 2007 From: sleibman at gmail.com (Steve Leibman) Date: Mon, 1 Oct 2007 21:26:34 -0400 Subject: [SciPy-user] How to get past the dfftpack build error "suffix or operands invalid for `movd'" Message-ID: While attempting to build scipy on an Intel core 2 duo box with fedora core 5, I ran into a problem during the compilation of dfftpack. I saw a few other people post this complaint to various lists, but no published solutions, so this email is intended to show the workaround that worked for me. I have not tried to reproduce it in a standalone copy of fftpack from netlib, though presumably the bug is somewhere in there.
The symptoms:
============
/tmp/ccCnF9WU.s: Assembler messages:
/tmp/ccCnF9WU.s:599: Error: suffix or operands invalid for `movd'
/tmp/ccCnF9WU.s:2982: Error: suffix or operands invalid for `movd'
/tmp/ccCnF9WU.s: Assembler messages:
/tmp/ccCnF9WU.s:599: Error: suffix or operands invalid for `movd'
/tmp/ccCnF9WU.s:2982: Error: suffix or operands invalid for `movd'
error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 -funroll-loops -march=pentium3 -mmmx -msse2 -msse -fomit-frame-pointer -malign-double -c -c Lib/fftpack/dfftpack/zfftf1.f -o build/temp.linux-i686-2.4/Lib/fftpack/dfftpack/zfftf1.o" failed with exit status 1

The solution:
==========
The real solution is to figure out why the generated assembly has a screwy instruction in it. In the meantime... despite the fact that my machine supports the sse2 instructions, I was able to work around the issue by getting rid of the -msse2 flag on the compile line in order to make it work (if you have this problem, run the g77 compile line both with and without "-msse2", to see whether this solution will work for you). The problem is that there's no convenient way to forcefully remove the "-msse2" option... it isn't written into anything civilized like a Makefile. Instead, I temporarily modified my python numpy installation, changing the _has_sse2() function in site-packages/numpy/distutils/cpuinfo.py, such that it always returns False. Pretty nasty, but it worked for me. -- Steve Leibman sleibman + gmail + com From pearu at cens.ioc.ee Tue Oct 2 03:48:42 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 02 Oct 2007 09:48:42 +0200 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: <46FC6AF9.3030008@ar.media.kyoto-u.ac.jp> References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> <46FC6AF9.3030008@ar.media.kyoto-u.ac.jp> Message-ID: <4701F7DA.50302@cens.ioc.ee> I have ported the veclib support from scipy.linalg to scipy.lib.blas.
I cannot test it myself. So, anyone on OSX who had problems with scipy.lib, could you execute the following commands:

cd svn/scipy/scipy/lib/blas
python setup.py build
python tests/test_fblas.py

to see if tests pass or if there are any compiler errors? Pearu David Cournapeau wrote: > Steve Lianoglou wrote: >> On Sep 27, 2007, at 2:24 PM, Tom Loredo wrote: >> >>> All you have to do is start Python, import scipy, and run >>> scipy.test() *multiple times*: >>> >> >> Ran the test 20 times ... no segfault, only one error in the >> check_dot function (as mentioned by Christopher) >> > The error cause bad memory access, so the error, while reproducible > sometimes, does not crash all the time: it is really configuration > dependant. > > Before the problem is fixed, a temporary would be to disable scipy.lib, > since you are not likely to need it: just comment the line which import > lib in scipy/setup.py inside scipy sources: > > config.add_subpackage('lib') -> #config.add_subpackage('lib') > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jh at physics.ucf.edu Tue Oct 2 04:56:56 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Tue, 02 Oct 2007 04:56:56 -0400 Subject: [SciPy-user] plotting a surface Message-ID: <1191315416.6370.780.camel@glup.physics.ucf.edu> Or, you could just do it with matplotlib... http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/ This was the first hit on a google search for "matplotlib surface". I tested it and it works in 0.90.1.
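[For readers of the archive: the recipe linked above relies on matplotlib's experimental 3-D axes; with a present-day matplotlib the same idea uses the mplot3d toolkit. A minimal sketch, with an arbitrary gaussian bump as sample data:]

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend, no display needed
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- registers the '3d' projection

x = np.linspace(-2, 2, 40)
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))      # sample data: a gaussian bump

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(X, Y, Z)
fig.savefig("surface3d.png")
```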
--jh-- From tgrav at mac.com Tue Oct 2 07:42:38 2007 From: tgrav at mac.com (Tommy Grav) Date: Tue, 2 Oct 2007 07:42:38 -0400 Subject: [SciPy-user] Scipy Superpack for OS X Message-ID: I downloaded the Scipy Superpack for OS X at trichech.us, and noticed that there are inconsistencies in the different webpages:
scipy.org claims that the binary installation gives you version 0.6.0
trichech.us says that the meta-package contains recent SVN builds of version 0.7
The downloaded scipy superpack for PPC Macintosh however lists the version as 0.5.3 (which is what is installed).
Some reconciliation of the three webpages pointing to the same resource might be in order. Cheers Tommy From david at ar.media.kyoto-u.ac.jp Tue Oct 2 07:52:38 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 02 Oct 2007 20:52:38 +0900 Subject: [SciPy-user] Scipy Superpack for OS X In-Reply-To: References: Message-ID: <47023106.1010901@ar.media.kyoto-u.ac.jp> Tommy Grav wrote: > I downloaded the Scipy Superpack for OS X at trichech.us, and noticed > that there > are in-consistencies in the different webpages: > > scipy.org claims that the binary installation version give you > version 0.6.0 > trichech.us says that the meta-package contains recent SVN builds of > version 0.7 0.6.0 is released, but 0.7.0 is not yet (SVN build of 0.7 means that it is between 0.6 and 0.7, if this is what bothers you here) > The downloaded scipy superpack for PPC Macintosh however lists the > version as 0.5.3 > (which is what is installed). This looks more like a problem at trichech.us, no ? cheers, David From tgrav at mac.com Tue Oct 2 08:21:08 2007 From: tgrav at mac.com (Tommy Grav) Date: Tue, 2 Oct 2007 08:21:08 -0400 Subject: [SciPy-user] Scipy Superpack for OS X In-Reply-To: <47023106.1010901@ar.media.kyoto-u.ac.jp> References: <47023106.1010901@ar.media.kyoto-u.ac.jp> Message-ID: <566BB51A-B43D-4E76-ACA9-FD87DFD03F3B@mac.com> > This looks more like a problem at trichech.us, no ?
I guess the correct person to correct all this would be trichech.us, which is the maintainer of this great resource. I just wanted to point out the inconsistencies in the hope that someone would be able to reconcile it :) Cheers Tommy From aisaac at american.edu Tue Oct 2 09:37:41 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 2 Oct 2007 09:37:41 -0400 Subject: [SciPy-user] Hodrick-Prescott filter In-Reply-To: References: Message-ID: On Mon, 1 Oct 2007, Alan G Isaac apparently wrote: > I'm looking for a Hodrick-Prescott filter. > http://en.wikipedia.org/wiki/Hodrick-Prescott_filter#External_links > Is there one in SciPy, or can someone post one? Well, nobody replied to this yet, so here is a really basic implementation: ``hpfilter`` in http://econpy.googlecode.com/svn/trunk/pytrix/tseries.py Suggestions welcome. Cheers, Alan Isaac From cburns at berkeley.edu Tue Oct 2 11:40:45 2007 From: cburns at berkeley.edu (Christopher Burns) Date: Tue, 2 Oct 2007 08:40:45 -0700 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: <4701F7DA.50302@cens.ioc.ee> References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> <46FC6AF9.3030008@ar.media.kyoto-u.ac.jp> <4701F7DA.50302@cens.ioc.ee> Message-ID: <764e38540710020840j64f0ed1fs5caea77ea0f8a282@mail.gmail.com> All tests pass on my OSX machine. Thanks Pearu. On 10/2/07, Pearu Peterson wrote: > > > I have ported the veclib support from scipy.linalg to > scipy.lib.blas. I cannot test it myself. So, anyone > on OSX who had problems with scipy.lib, could you > execute the following commands: > > cd svn/scipy/scipy/lib/blas > python setup.py build > python tests/test_fblas.py > > to see if tests pass or if there are any compiler errors? > > Pearu > -- Christopher Burns, Software Engineer Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dineshbvadhia at hotmail.com Tue Oct 2 12:18:42 2007 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Tue, 2 Oct 2007 09:18:42 -0700 Subject: [SciPy-user] NumPy matrix-vector calculation Message-ID: Hello! I am an experienced software engineer but relatively new to Python, NumPy and SciPy. I'll say up front that I have scanned all the pertinent documentation including the paid-for NumPy book but not found an answer to our problem(s): We have an MxN (M not equal to N) integer matrix A. The data for A is read-in from a file on persistent storage and the data is immutable (ie. does not change and cannot be changed). The vector x is a vector of size Nx1. The data elements of x are calculated during the program execution. We then perform a matrix-vector calculation ie. y = Ax, where the resulting y is a Mx1 vector. Both x and y are then discarded and a new x and y are calculated and this continues until program execution stops but at all times the matrix A remains the same until ... Under certain circumstances, we may have to increase the size of M to M+R leaving N alone ie. append R rows to the end of matrix A. We would want to do this while the program is executing. Here are the questions:
- What NumPy array structure do we use for A - an array or matrix (and why)?
- If A is a matrix data structure then do the x and y vectors have to be matrix structures too or can you mix the data structures?
- If A is a matrix or an array structure, can we append rows during program execution and if so, how do we do this?
Any help towards these questions will be gratefully appreciated. Thank-you Dinesh -------------- next part -------------- An HTML attachment was scrubbed...
URL: From peridot.faceted at gmail.com Tue Oct 2 12:39:46 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 2 Oct 2007 12:39:46 -0400 Subject: [SciPy-user] NumPy matrix-vector calculation In-Reply-To: References: Message-ID: On 02/10/2007, Dinesh B Vadhia wrote: > We have an MxN (M not equal to N) integer matrix A. The data for A is > read-in from a file on persistant storage and the data is immutable (ie. > does not change and cannot be changed). > > The vector x is a vector of size Nx1. The data elements of x are calculated > during the program execution. > > We then perform a matrix-vector calculation ie. y = Ax, where the resulting > y is a Mx1 vector. > > Both x and y are then discarded and a new x and y are calculated and this > continues until program execution stops but at all times the matrix A > remains the same until ... > > Under certain circumstances, we may have to increase the size of M to M+R > leaving N alone ie. append R rows to the end of matrix A. We would want to > do this while the program is executing. > > Here are the questions: > > - What NumPy array structure do we use for A - an array or matrix (and why)? > - If A is a matrix data structure then do the x and y vectors have to be > matrix structures too or can you mix the data structures? > - If A is a matrix or an array structure, can we append rows during program > execution and if so, how do we do this? The only difference between numpy arrays and matrices is the way functions act on them - in particular, the * operator behaves differently (for arrays it operates elementwise and for matrices it applies the matrix product, specifically the function dot()). As data structures they are identical. When you talk about increasing M, that presumably means enlarging A. Is your idea that A has changed on disk? (You did say it was immutable.) The short answer is that you basically can't enlarge an array in place, as it is (under the hood) a single large block of memory. 
Copying is not all that expensive, for a once-in-a-while operation, so you can just use hstack() or vstack() or concatenate() to enlarge A, allocating a new array in the process. If A is on disk, and you want to reflect changes on disk, you can try using numpy's memory mapped arrays: these take advantage of your operating system's ability to make a disk file look like a piece of memory. Each matrix-vector product does require traversing all of A, though, so the matrix will need to be loaded into memory regardless. (Incidentally, if you combine several vectors into a matrix and multiply them by A all at once it will probably be faster, since numpy/scipy uses optimized matrix-multiplication routines that are reasonably smart about cache.) Anne From matthieu.brucher at gmail.com Tue Oct 2 12:40:13 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 2 Oct 2007 18:40:13 +0200 Subject: [SciPy-user] NumPy matrix-vector calculation In-Reply-To: References: Message-ID: Hi, Matrix and array are basically the same, you can do what you want with both, the only difference you will notice is that the multiplication is the matrix multiplication in one case and the elementwise multiplication in the other case. Just try to be consistent and use only matrices or only arrays. Then, if you have to add rows or columns, see hstack() and vstack() Matthieu 2007/10/2, Dinesh B Vadhia : > > Hello! I am an experienced software engineer but relatively new to > Python, NumPy and SciPy. I'll say up front that I've have scanned all the > pertinent documentation including the paid-for NumPy book but not found an > answer to our problem(s): > > We have an MxN (M not equal to N) integer matrix A. The data for A is > read-in from a file on persistant storage and the data is immutable (ie. > does not change and cannot be changed). > > The vector x is a vector of size Nx1. The data elements of x are > calculated during the program execution.
> > We then perform a matrix-vector calculation ie. y = Ax, where the > resulting y is a Mx1 vector. > > Both x and y are then discarded and a new x and y are calculated and this > continues until program execution stops but at all times the matrix A > remains the same until ... > > Under certain circumstances, we may have to increase the size of M to M+R > leaving N alone ie. append R rows to the end of matrix A. We would want to > do this while the program is executing. > > Here are the questions: > > - What NumPy array structure do we use for A - an array or matrix (and > why)? > - If A is a matrix data structure then do the x and y vectors have to be > matrix structures too or can you mix the data structures? > - If A is a matrix or an array structure, can we append rows during > program execution and if so, how do we do this? > > Any help towards these questions will be gratefully appreciated. > Thank-you > > Dinesh > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lev at columbia.edu Tue Oct 2 13:04:14 2007 From: lev at columbia.edu (Lev Givon) Date: Tue, 2 Oct 2007 13:04:14 -0400 Subject: [SciPy-user] precision question re float96 and float128 Message-ID: <20071002170414.GC26428@avicenna.cc.columbia.edu> I have numpy 1.0.3.1 installed on a Pentium 4 system running Linux 2.6 and an Intel Core 2 Duo system running MacOSX 10.4.10. On the former, the float96 datatype is defined; on the latter, float128. While examining the machine limits with the finfo function on the aforementioned hosts, I noticed that the limits for float96 and float128 were identical. For example, finfo(float96).precision and finfo(float128).precision both returned 18. Is this expected? Shouldn't the precision of the latter be greater? L.G. 
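[For readers of the archive: a likely explanation, stated here as an assumption about the two machines involved: on x86 hardware, numpy's float96 and float128 typically both wrap the CPU's 80-bit extended-precision type, padded to 12 or 16 bytes for alignment, so the significand, and hence finfo's limits, are the same. This can be checked directly:]

```python
import numpy as np

# np.longdouble is float96 on 32-bit x86 Linux and float128 on x86-64;
# only the padding differs, not the significand.
print(np.finfo(np.float64).precision)      # 15 decimal digits
print(np.finfo(np.longdouble).precision)   # 18 on x86 (80-bit extended)
print(np.dtype(np.longdouble).itemsize)    # 12 or 16 bytes, padding included
```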
From mailing.lists at trichech.us Tue Oct 2 13:02:37 2007 From: mailing.lists at trichech.us (Chris) Date: Tue, 2 Oct 2007 17:02:37 +0000 (UTC) Subject: [SciPy-user] Scipy Superpack for OS X References: <47023106.1010901@ar.media.kyoto-u.ac.jp> <566BB51A-B43D-4E76-ACA9-FD87DFD03F3B@mac.com> Message-ID: Tommy Grav mac.com> writes: > > > This looks more like a problem at trichech.us, no ? > > I guess the correct person to correct all this would be > trichech.us, which is the maintainer of this great resource. > I just wanted to point out the inconsistencies in the hope > that someone would be able to reconcile it :) Hi Tommy, I had neglected to update the text in the installer -- it should be 0.7 from SVN. Could you point me to where on the Scipy site it says 0.6? I cannot find it myself. Thanks, Chris (trichech.us) From aisaac at american.edu Tue Oct 2 13:51:39 2007 From: aisaac at american.edu (Alan Isaac) Date: Tue, 2 Oct 2007 13:51:39 -0400 Subject: [SciPy-user] NumPy matrix-vector calculation In-Reply-To: References: Message-ID: On Tue, 2 Oct 2007, Dinesh B Vadhia wrote: > Under certain circumstances, we may have to increase the > size of M to M+R leaving N alone ie. append R rows to the > end of matrix A. We would want to do this while the > program is executing. You may not need to do this. Appending B to A and then post-multiplying by y gives you Ay stacked on top of By. Cheers, Alan Isaac From tgrav at mac.com Tue Oct 2 14:13:05 2007 From: tgrav at mac.com (Tommy Grav) Date: Tue, 2 Oct 2007 14:13:05 -0400 Subject: [SciPy-user] Scipy Superpack for OS X In-Reply-To: References: <47023106.1010901@ar.media.kyoto-u.ac.jp> <566BB51A-B43D-4E76-ACA9-FD87DFD03F3B@mac.com> Message-ID: On Oct 2, 2007, at 1:02 PM, Chris wrote: > Tommy Grav mac.com> writes: > >> >>> This looks more like a problem at trichech.us, no ? >> >> I guess the correct person to correct all this would be >> trichech.us, which is the maintainer of this great resource. 
>> I just wanted to point out the inconsistencies in the hope >> that someone would be able to reconcile it :) > > Hi Tommy, > > I had neglected to update the text in the installer -- it should be > 0.7 from SVN. Could you point me to where on the Scipy site it > says 0.6? I cannot find it myself. The scipy.org downloads page (http://www.scipy.org/Download) reads:

    SciPy
    Latest version is 0.6.0 (compatible with NumPy 1.0.3.1 and later), released 2007-09-20
    [Linux and Windows distros cut]
    Binary installation for OS X (Tiger)
    * Make sure you have ActivePython 2.5 or MacPython 2.5. Note: The Superpack's version detection may fail with other Python distributions (e.g., fink, Darwin Ports), and it will refuse to install.
    * Download the SciPy Superpack for Python 2.5
    * NumPy is included in the Superpack. For best compatibility, make sure you use the version in the Superpack.

This (though indirectly) implies that all the listed packages contain the latest 0.6.0 version. I just downloaded your newest upload and I can confirm that it now says it installs Scipy 0.7 and does so (0.7.0.dev3368). Thanks again for a great resource for those of us who are somewhat challenged in doing our own builds :) Cheers Tommy From dmitrey.kroshko at scipy.org Tue Oct 2 15:14:45 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 02 Oct 2007 22:14:45 +0300 Subject: [SciPy-user] does anyone use(d) pysimplex? Message-ID: <470298A5.2050907@scipy.org> Hi all, I got to know about pysimplex from a web search. I would like to connect it to scikits.openopt; however, both links that Google yields: http://www.python.net/crew/aaron_watters/pysimplex/ http://www.pythonpros.com/arw/pysimplex are dead, and Google turns up no other source for a pysimplex download. Does anyone have any comments or ideas? Best regards, D. From cookedm at physics.mcmaster.ca Tue Oct 2 17:33:34 2007 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 02 Oct 2007 17:33:34 -0400 Subject: [SciPy-user] precision question re float96 and float128 In-Reply-To: <20071002170414.GC26428@avicenna.cc.columbia.edu> (Lev Givon's message of "Tue\, 2 Oct 2007 13\:04\:14 -0400") References: <20071002170414.GC26428@avicenna.cc.columbia.edu> Message-ID: Lev Givon writes: > I have numpy 1.0.3.1 installed on a Pentium 4 system running Linux 2.6 > and an Intel Core 2 Duo system running MacOSX 10.4.10. On the former, > the float96 datatype is defined; on the latter, float128. While > examining the machine limits with the finfo function on the > aforementioned hosts, I noticed that the limits for float96 and > float128 were identical. For example, finfo(float96).precision > and finfo(float128).precision both returned 18. Is this expected? > Shouldn't the precision of the latter be greater? The numbers in float96 and float128 refer to the number of bits of memory the numbers take up. On the Intel processors, you're not actually getting 96 or 128 bits of precision -- they're actually a padded version of the 80-bit internal representation used in the floating-point units (64 bits of which are the mantissa, which gives you the 18 decimal digits for the precision). Instead of using 10 bytes of memory, 12 or 16 bytes are used so that the type is aligned on a word or doubleword boundary. For portability purposes, you're better off using longdouble. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From hasslerjc at comcast.net Tue Oct 2 17:35:37 2007 From: hasslerjc at comcast.net (John Hassler) Date: Tue, 02 Oct 2007 17:35:37 -0400 Subject: [SciPy-user] ROLS Message-ID: <4702B9A9.3080306@comcast.net> The question on recursive ordinary least squares (ROLS) estimation caught my interest, because I used to teach a course in adaptive control. At the time (mid '90s), we were using Matlab (since I'd never heard of Python). 
I found some of the old programs I'd used in class and translated them to Python. Here is one, if anyone is interested. This ROLS program is very simple, even in Fortran. It uses the Matrix Inversion Lemma (MIL) to calculate the update to the covariance matrix. This is known to be somewhat unstable, numerically, but works 'well enough' for many problems. The Bierman UD or Givens square root methods are more stable, but somewhat more complicated to program. For class examples, I usually used the simplest method. From ROLS, it's easy to move on to Approximate Maximum Likelihood (AML), and not too difficult to get to the Kalman filter. Here is a simple ROLS with forgetting factor, and a driver program to play with. john

# +++++++++++++++++++++++++++++++++++++++++++
# rolsftest.py
# j. c. hassler
# 12-feb-95
# Originally written in Matlab.  Translated to Python 2-oct-07.
# This is a simple driver program to test recursive OLS with a
# forgetting factor, rolsf.  The process is an ARMA model driven by
# a random input.  The parameters are estimated by ROLSF.
#
# ------------------------------------------
import numpy as nu
import random
from rolsf import *
from pylab import *

tht = [-.50, .25, 1., .25]   # true parameters
p = 1000.*nu.eye(4)          # initialize covariance matrix
# large values because we have very low confidence in the initial guess.
th = [0., 0., 0., 0.]        # initial guesses at parameters
lam = 0.95                   # forgetting factor
xt = [0., 0., 0., 0.]        # other initial values
x = [0., 0., 0., 0.]
a = nu.zeros(200,float)
b = nu.zeros(200,float)
indx = nu.zeros(200,int)
for i in range(200):
    if i>100:                # change one of the parameters
        tht[0] = -.40        # .. on the fly.
    # the point of the forgetting factor is to make the estimator responsive
    # to such changes.
    e = 0.02*(.5 - random.random())   # random 'noise'
    u = 1.*(.5 - random.random())     # random forcing function
    yt = nu.inner(xt,tht)    # truth model
    xt[1] = xt[0]            # stuff in the new truth-value-y ...
    xt[0] = yt               # ... and new input
    xt[3] = xt[2]
    xt[2] = u
    y = yt + e               # add 'measurement noise' to the true value
    [th,p] = rolsf(x,y,p,th,lam)      # call recursive OLS with FF
    x[1] = x[0]              # stuff in the new y ...
    x[0] = y                 # ... and new input to the design model
    x[3] = x[2]
    x[2] = u
    a[i] = th[0]             # save for later plotting
    b[i] = th[1]
    indx[i] = i
plot(indx,a,indx,b)
grid()
show()

# +++++++++++++++++++++++++++++++++
# function [th,p] = rolsf(x,y,p,th,lam)
# j. c. hassler
# 12-feb-95
# Originally written in Matlab.  Translated to Python 2-oct-07.
# Recursive ordinary least squares for single output case, including a
# forgetting factor, using the Matrix Inversion Lemma method.
# Enter with x(N,1) = input, y = output, p(N,N) = covariance,
# th(N,1) = estimate, lam = forgetting factor
# -------------------------
import numpy as nu

def rolsf(x,y,p,th,lam):
    a = nu.inner(p,x)
    g = 1./(nu.inner(x,a)+lam)
    k = g*a
    e = y - nu.inner(x,th)
    th = th + k*e
    p = (p - g*nu.outer(a,a))/lam
    return th, p

From aisaac at american.edu Wed Oct 3 00:04:59 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 3 Oct 2007 00:04:59 -0400 Subject: [SciPy-user] ROLS In-Reply-To: <4702B9A9.3080306@comcast.net> References: <4702B9A9.3080306@comcast.net> Message-ID: On Tue, 02 Oct 2007, John Hassler apparently wrote: > Here is a simple ROLS with forgetting factor, and a driver > program to play with. [ ] SciPy license [ ] MIT license [ ] other: _________________________ Cheers, Alan Isaac From massimo.sandal at unibo.it Wed Oct 3 05:52:03 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 03 Oct 2007 11:52:03 +0200 Subject: [SciPy-user] savitsky-golay smoothing filter? Message-ID: <47036643.9080604@unibo.it> Hi, Is there an implementation of the Savitsky-Golay smoothing filter in SciPy? I googled but found nothing... m. 
-- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From fredmfp at gmail.com Wed Oct 3 06:02:57 2007 From: fredmfp at gmail.com (fred) Date: Wed, 03 Oct 2007 12:02:57 +0200 Subject: [SciPy-user] savitsky-golay smoothing filter? In-Reply-To: <47036643.9080604@unibo.it> References: <47036643.9080604@unibo.it> Message-ID: <470368D1.40304@gmail.com> massimo sandal a ?crit : > Hi, > > Is there an implementation of the Savitsky-Golay smoothing filter in > SciPy? > > I googled but found nothing... > You googled bad ;-) http://new.scipy.org/Cookbook/SavitzkyGolay -- http://scipy.org/FredericPetit From massimo.sandal at unibo.it Wed Oct 3 06:23:15 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 03 Oct 2007 12:23:15 +0200 Subject: [SciPy-user] savitsky-golay smoothing filter? In-Reply-To: <470368D1.40304@gmail.com> References: <47036643.9080604@unibo.it> <470368D1.40304@gmail.com> Message-ID: <47036D93.2030305@unibo.it> fred ha scritto: > massimo sandal a ?crit : >> Hi, >> >> Is there an implementation of the Savitsky-Golay smoothing filter in >> SciPy? >> >> I googled but found nothing... >> > You googled bad ;-) > > http://new.scipy.org/Cookbook/SavitzkyGolay Ah, SavitZky, not Savitsky! My fault. Thanks a lot. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From massimo.sandal at unibo.it Wed Oct 3 06:32:39 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 03 Oct 2007 12:32:39 +0200 Subject: [SciPy-user] savitsky-golay smoothing filter? In-Reply-To: <470368D1.40304@gmail.com> References: <47036643.9080604@unibo.it> <470368D1.40304@gmail.com> Message-ID: <47036FC7.30800@unibo.it> fred ha scritto: > massimo sandal a ?crit : >> Hi, >> >> Is there an implementation of the Savitsky-Golay smoothing filter in >> SciPy? >> >> I googled but found nothing... >> > You googled bad ;-) > > http://new.scipy.org/Cookbook/SavitzkyGolay By the way: under which license is the code in the Cookbook usable? I'm writing a GPLv2 program... m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From hasslerjc at comcast.net Wed Oct 3 07:06:46 2007 From: hasslerjc at comcast.net (John Hassler) Date: Wed, 03 Oct 2007 07:06:46 -0400 Subject: [SciPy-user] ROLS In-Reply-To: References: <4702B9A9.3080306@comcast.net> Message-ID: <470377C6.2060500@comcast.net> An HTML attachment was scrubbed... URL: From jbednar at inf.ed.ac.uk Wed Oct 3 10:13:52 2007 From: jbednar at inf.ed.ac.uk (James Bednar) Date: Wed, 3 Oct 2007 15:13:52 +0100 Subject: [SciPy-user] (no subject) Message-ID: <18179.41888.169530.482488@lodestar.inf.ed.ac.uk> Subject: Re: [SciPy-user] plotting a surface In-Reply-To: References: X-Mailer: VM 7.18 under Emacs 21.4.1 Reply-To: jbednar at inf.ed.ac.uk Bcc: jbednar at inf.ed.ac.uk From: "James A. 
Bednar" | Date: Tue, 02 Oct 2007 04:56:56 -0400 | From: Joe Harrington | Subject: Re: [SciPy-user] plotting a surface | Cc: | Message-ID: <1191315416.6370.780.camel at glup.physics.ucf.edu> | Content-Type: text/plain | | Or, you could just do it with matplotlib... | | http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/ | | This was the first hit on a google search for "matplotlib surface". I | tested it and it works in 0.90.1. | | --jh-- | | | | | ------------------------------ | | Message: 2 | Date: Tue, 2 Oct 2007 07:42:38 -0400 | From: Tommy Grav | Subject: [SciPy-user] Scipy Superpack for OS X | To: scipy-user at scipy.org | Message-ID: | Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed | | I downloaded the Scipy Superpack for OS X at trichech.us, and noticed | that there | are in-consistencies in the different webpages: | | scipy.org claims that the binary installation version give you | version 0.6.0 | trichech.us says that the meta-package contains recent SVN builds of | version 0.7 | The downloaded scipy superpack for PPC Macintosh however lists the | version as 0.5.3 | (which is what is installed). | | Some reconciliation of the three webpages pointing to the same | resource might be | in order. 
| | Cheers | Tommy | | | | | ------------------------------ | | Message: 3 | Date: Tue, 02 Oct 2007 20:52:38 +0900 | From: David Cournapeau | Subject: Re: [SciPy-user] Scipy Superpack for OS X | To: SciPy Users List | Message-ID: <47023106.1010901 at ar.media.kyoto-u.ac.jp> | Content-Type: text/plain; charset=ISO-8859-1; format=flowed | | Tommy Grav wrote: | > I downloaded the Scipy Superpack for OS X at trichech.us, and noticed | > that there | > are in-consistencies in the different webpages: | > | > scipy.org claims that the binary installation version give you | > version 0.6.0 | > trichech.us says that the meta-package contains recent SVN builds of | > version 0.7 | 0.6.0 is released, but 0.7.0 is not yet (SVN build of 0.7 means that it | is between 0.6 and 0.7, if this is what bothers you here) | > The downloaded scipy superpack for PPC Macintosh however lists the | > version as 0.5.3 | > (which is what is installed). | This looks more like a problem at trichech.us, no ? | | cheers, | | David | | | ------------------------------ | | Message: 4 | Date: Tue, 2 Oct 2007 08:21:08 -0400 | From: Tommy Grav | Subject: Re: [SciPy-user] Scipy Superpack for OS X | To: SciPy Users List | Message-ID: <566BB51A-B43D-4E76-ACA9-FD87DFD03F3B at mac.com> | Content-Type: text/plain; charset=US-ASCII; format=flowed | | > This looks more like a problem at trichech.us, no ? | | I guess the correct person to correct all this would be | trichech.us, which is the maintainer of this great resource. | I just wanted to point out the inconsistencies in the hope | that someone would be able to reconcile it :) | | Cheers | Tommy | | | ------------------------------ | | Message: 5 | Date: Tue, 2 Oct 2007 09:37:41 -0400 | From: Alan G Isaac | Subject: Re: [SciPy-user] Hodrick-Prescott filter | To: SciPy Users List | Message-ID: | Content-Type: TEXT/PLAIN; CHARSET=UTF-8 | | On Mon, 1 Oct 2007, Alan G Isaac apparently wrote: | > I'm looking for a Hodrick-Prescott filter. 
| > http://en.wikipedia.org/wiki/Hodrick-Prescott_filter#External_links | > Is there one in SciPy, or can someone post one? | | | Well, nobody replied to this yet, so here is a really basic | implementation: ``hpfilter`` in | http://econpy.googlecode.com/svn/trunk/pytrix/tseries.py | Suggestions welcome. | | Cheers, | Alan Isaac | | | | | | ------------------------------ | | Message: 6 | Date: Tue, 2 Oct 2007 08:40:45 -0700 | From: "Christopher Burns" | Subject: Re: [SciPy-user] OS X users - Please try multiple | scipy.test() runs | To: "SciPy Users List" | Message-ID: | <764e38540710020840j64f0ed1fs5caea77ea0f8a282 at mail.gmail.com> | Content-Type: text/plain; charset="iso-8859-1" | | All tests pass on my OSX machine. Thanks Pearu. | | On 10/2/07, Pearu Peterson wrote: | > | > | > I have ported the veclib support from scipy.linalg to | > scipy.lib.blas. I cannot test it myself. So, anyone | > on OSX who had problems with scipy.lib, could you | > execute the following commands: | > | > cd svn/scipy/scipy/lib/blas | > python setup.py build | > python tests/test_fblas.py | > | > to see if tests pass or if there are any compiler errors? | > | > Pearu | > | | -- | Christopher Burns, Software Engineer | Computational Infrastructure for Research Labs | 10 Giannini Hall, UC Berkeley | phone: 510.643.4014 | http://cirl.berkeley.edu/ | -------------- next part -------------- | An HTML attachment was scrubbed... | URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20071002/cab22a10/attachment-0001.html | | ------------------------------ | | Message: 7 | Date: Tue, 2 Oct 2007 09:18:42 -0700 | From: "Dinesh B Vadhia" | Subject: [SciPy-user] NumPy matrix-vector calculation | To: | Message-ID: | Content-Type: text/plain; charset="iso-8859-1" | | Hello! I am an experienced software engineer but relatively new to Python, NumPy and SciPy. 
I'll say up front that I've have scanned all the pertinent documentation including the paid-for NumPy book but not found an answer to our problem(s): | | We have an MxN (M not equal to N) integer matrix A. The data for A is read-in from a file on persistant storage and the data is immutable (ie. does not change and cannot be changed). | | The vector x is a vector of size Nx1. The data elements of x are calculated during the program execution. | | We then perform a matrix-vector calculation ie. y = Ax, where the resulting y is a Mx1 vector. | | Both x and y are then discarded and a new x and y are calculated and this continues until program execution stops but at all times the matrix A remains the same until ... | | Under certain circumstances, we may have to increase the size of M to M+R leaving N alone ie. append R rows to the end of matrix A. We would want to do this while the program is executing. | | Here are the questions: | | - What NumPy array structure do we use for A - an array or matrix (and why)? | - If A is a matrix data structure then do the x and y vectors have to be matrix structures too or can you mix the data structures? | - If A is a matrix or an array structure, can we append rows during program execution and if so, how do we do this? | | Any help towards these questions will be gratefully appreciated. Thank-you | | Dinesh | -------------- next part -------------- | An HTML attachment was scrubbed... | URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20071002/6f1cbb61/attachment-0001.html | | ------------------------------ | | Message: 8 | Date: Tue, 2 Oct 2007 12:39:46 -0400 | From: "Anne Archibald" | Subject: Re: [SciPy-user] NumPy matrix-vector calculation | To: "SciPy Users List" | Message-ID: | | Content-Type: text/plain; charset=UTF-8 | | On 02/10/2007, Dinesh B Vadhia wrote: | | > We have an MxN (M not equal to N) integer matrix A. The data for A is | > read-in from a file on persistant storage and the data is immutable (ie. 
| > does not change and cannot be changed). | > | > The vector x is a vector of size Nx1. The data elements of x are calculated | > during the program execution. | > | > We then perform a matrix-vector calculation ie. y = Ax, where the resulting | > y is a Mx1 vector. | > | > Both x and y are then discarded and a new x and y are calculated and this | > continues until program execution stops but at all times the matrix A | > remains the same until ... | > | > Under certain circumstances, we may have to increase the size of M to M+R | > leaving N alone ie. append R rows to the end of matrix A. We would want to | > do this while the program is executing. | > | > Here are the questions: | > | > - What NumPy array structure do we use for A - an array or matrix (and why)? | > - If A is a matrix data structure then do the x and y vectors have to be | > matrix structures too or can you mix the data structures? | > - If A is a matrix or an array structure, can we append rows during program | > execution and if so, how do we do this? | | The only difference between numpy arrays and matrices is the way | functions act on them - in particular, the * operator behaves | differently (for arrays it operates elementwise and for matrices it | applies the matrix product, specifically the function dot()). As data | structures they are identical. | | When you talk about increasing M, that presumably means enlarging A. | Is your idea that A has changed on disk? (You did say it was | immutable.) The short answer is that you basically can't enlarge an | array in place, as it is (under the hood) a single large block of | memory. Copying is not all that expensive, for a once-in-a-while | operation, so you can just use hstack() or vstack() or concatenate() | to enlarge A, allocating a new array in the process. 
If A is on disk, | and you want to reflect changes on disk, you can try using numpy's | memory mapped arrays: these take advantage of your operating system's | ability to make a disk file look like a piece of memory. Each | matrix-vector product does require traversing all of A, though, so the | matrix will need to be loaded into memory regardless. (Incidentally, | if you combine several vectors into a matrix and multiply them by A | all at once it will probably be faster, since numpy/scipy uses | optimized matrix-multiplication routines that are reasonably smart | about cache.) | | Anne | | | ------------------------------ | | Message: 9 | Date: Tue, 2 Oct 2007 18:40:13 +0200 | From: "Matthieu Brucher" | Subject: Re: [SciPy-user] NumPy matrix-vector calculation | To: "SciPy Users List" | Message-ID: | | Content-Type: text/plain; charset="iso-8859-1" | | Hi, | | Matrix and array are basically the same, you can do what you want with both, | the only difference you will notice is that the multplication is the matrix | multiplication in one case and the elementwise multiplication in the other | case. Just try to be consistent and use only matrix or only arrays. | Then, if you have to add rows or columns, see hstack() and vstack() | | Matthieu | | 2007/10/2, Dinesh B Vadhia : | > | > Hello! I am an experienced software engineer but relatively new to | > Python, NumPy and SciPy. I'll say up front that I've have scanned all the | > pertinent documentation including the paid-for NumPy book but not found an | > answer to our problem(s): | > | > We have an MxN (M not equal to N) integer matrix A. The data for A is | > read-in from a file on persistant storage and the data is immutable (ie. | > does not change and cannot be changed). | > | > The vector x is a vector of size Nx1. The data elements of x are | > calculated during the program execution. | > | > We then perform a matrix-vector calculation ie. y = Ax, where the | > resulting y is a Mx1 vector. 
| > | > Both x and y are then discarded and a new x and y are calculated and this | > continues until program execution stops but at all times the matrix A | > remains the same until ... | > | > Under certain circumstances, we may have to increase the size of M to M+R | > leaving N alone ie. append R rows to the end of matrix A. We would want to | > do this while the program is executing. | > | > Here are the questions: | > | > - What NumPy array structure do we use for A - an array or matrix (and | > why)? | > - If A is a matrix data structure then do the x and y vectors have to be | > matrix structures too or can you mix the data structures? | > - If A is a matrix or an array structure, can we append rows during | > program execution and if so, how do we do this? | > | > Any help towards these questions will be gratefully appreciated. | > Thank-you | > | > Dinesh | > | > | > _______________________________________________ | > SciPy-user mailing list | > SciPy-user at scipy.org | > http://projects.scipy.org/mailman/listinfo/scipy-user | > | > | -------------- next part -------------- | An HTML attachment was scrubbed... | URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20071002/d1fcd0e8/attachment.html | | ------------------------------ | | _______________________________________________ | SciPy-user mailing list | SciPy-user at scipy.org | http://projects.scipy.org/mailman/listinfo/scipy-user | | | End of SciPy-user Digest, Vol 50, Issue 3 | ***************************************** From jbednar at inf.ed.ac.uk Wed Oct 3 10:36:59 2007 From: jbednar at inf.ed.ac.uk (James A. 
Bednar) Date: Wed, 3 Oct 2007 15:36:59 +0100 Subject: [SciPy-user] SciPy-user Digest, Vol 50, Issue 3 In-Reply-To: References: Message-ID: <18179.43275.456419.928858@lodestar.inf.ed.ac.uk> | Date: Tue, 02 Oct 2007 04:56:56 -0400 | From: Joe Harrington | Subject: Re: [SciPy-user] plotting a surface | To: scipy-user at scipy.org | Cc: jh at physics.ucf.edu | Message-ID: <1191315416.6370.780.camel at glup.physics.ucf.edu> | Content-Type: text/plain | | Or, you could just do it with matplotlib... | | http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/ | | This was the first hit on a google search for "matplotlib surface". I | tested it and it works in 0.90.1. Interesting! I couldn't find any documentation at all, but after some hacking on that script I was able to make matplotlib 0.90.1 plot a wireframe surface for a 2D numpy array, so I thought it could be useful to include the code (below). Note that the original example uses plot_surface instead of plot_wireframe, but I've found plot_surface to be quite buggy, with plots disappearing entirely much of the time, while plot_wireframe has been reliable so far. There is also contour3D, though that doesn't look very useful yet. Hopefully these 3D plots will all be polished up a bit and made public in a new matplotlib release soon! 
Jim
_______________________________________________________________________________

import pylab
from numpy import outer,arange,cos,sin,ones,zeros,array
from matplotlib import axes3d

def matrixplot3d(mat,title=None):
    fig = pylab.figure()
    ax = axes3d.Axes3D(fig)
    # Construct matrices for r and c values
    rn,cn = mat.shape
    c = outer(ones(rn),arange(cn*1.0))
    r = outer(arange(rn*1.0),ones(cn))
    ax.plot_wireframe(r,c,mat)
    ax.set_xlabel('R')
    ax.set_ylabel('C')
    ax.set_zlabel('Value')
    if title: windowtitle(title)
    pylab.show()

matrixplot3d(array([[0.1,0.5,0.9],[0.2,0.1,0.0]]))

From dmitrey.kroshko at scipy.org Wed Oct 3 14:45:29 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 03 Oct 2007 21:45:29 +0300 Subject: [SciPy-user] howto call lapack funcs from scipy? Message-ID: <4703E349.4020908@scipy.org> Hi all, does anyone know a URL with a clear explanation of how to call lapack funcs from scipy? For example, I tried to use the cgelss routine but couldn't figure out how to use it - neither from the scipy website (using search) nor from the code:

-------------------------------------
import scipy
from scipy.linalg.flapack import cgelss
print help(cgelss)
print dir(cgelss)
----------------------------------
>>>
Help on fortran object:

class fortran(object)
 |  Methods defined here:
 |
 |  __call__(...)
 |      x.__call__(...) <==> x(...)

None
[]
-------------------

Thank you in advance, Dmitrey From robert.kern at gmail.com Wed Oct 3 20:46:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 03 Oct 2007 19:46:42 -0500 Subject: [SciPy-user] howto call lapack funcs from scipy? In-Reply-To: <4703E349.4020908@scipy.org> References: <4703E349.4020908@scipy.org> Message-ID: <470437F2.3090906@gmail.com> dmitrey wrote: > Hi all, > does anyone know an URL to an obvious explanation of howto call lapack > funcs from scipy? 
> For example, I tried to use cgelss routine but I didn't get to know how
> to use the one - neither from scipy website (using search) nor from code:
> -------------------------------------
> import scipy
> from scipy.linalg.flapack import cgelss
> print help(cgelss)
> print dir(cgelss)

The help() function doesn't know anything about f2py wrapper functions like cgelss, so it gives up. However, you can print cgelss.__doc__:

In [6]: from scipy.linalg.flapack import cgelss

In [7]: print cgelss.__doc__
cgelss - Function signature:
  v,x,s,rank,info = cgelss(a,b,[cond,lwork,overwrite_a,overwrite_b])
Required arguments:
  a : input rank-2 array('F') with bounds (m,n)
  b : input rank-2 array('F') with bounds (maxmn,nrhs)
Optional arguments:
  overwrite_a := 0 input int
  overwrite_b := 0 input int
  cond := -1.0 input float
  lwork := 2*minmn+MAX(maxmn,nrhs) input int
Return objects:
  v : rank-2 array('F') with bounds (m,n) and a storage
  x : rank-2 array('F') with bounds (maxmn,nrhs) and b storage
  s : rank-1 array('f') with bounds (minmn)
  rank : int
  info : int

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From novak at ucolick.org Wed Oct 3 22:01:16 2007 From: novak at ucolick.org (Greg Novak) Date: Wed, 3 Oct 2007 19:01:16 -0700 Subject: [SciPy-user] What happened to python-idl? In-Reply-To: References: Message-ID: If you're wedded to the idea of calling IDL code from Python, see: http://www.its.caltech.edu/~mmckerns/software.html I use it because I have colleagues who write in IDL, and I don't have control over their code. This works for me. Greg On 9/27/07, Tim wrote: > Hello, > I stumbled upon the announcement of Python-IDL: > http://mail.python.org/pipermail/python-announce-list/2003-January/001987.html > That all linked to > http://www.astro.uio.no/~mcmurry/python-idl/ > which is a dead link. 
> > Does anyone know what has happened to this package? > > What would the members of the list do if you have some good and useful code in > IDL but rather would like to use it or similar routines in Python? > > Thanks in advance, > Tim > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From steve at shrogers.com Wed Oct 3 23:10:28 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Wed, 03 Oct 2007 21:10:28 -0600 Subject: [SciPy-user] APL2007 Update Message-ID: <470459A4.90507@shrogers.com> APL 2007 in Montreal (only 2 1/2 weeks away, Oct 20-22). Summary program information is now available on the APL2007 home page http://www.sigapl.org/apl2007.html with a link to the comprehensive program description at http://www.sigapl.org/apl2007-program.html#a2 Registration for APL2007 is at http://www.regmaster.com/conf/oopsla2007.html From aisaac at american.edu Thu Oct 4 01:16:55 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 4 Oct 2007 01:16:55 -0400 Subject: [SciPy-user] Granger causality Message-ID: I'm looking for a nicely coded NumPy-based Granger causality test. http://en.wikipedia.org/wiki/Granger_causality Thank you, Alan Isaac From dmitrey.kroshko at scipy.org Thu Oct 4 02:06:30 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 04 Oct 2007 09:06:30 +0300 Subject: [SciPy-user] does numpy/scipy has matlab colmmd (Sparse column minimum degree permutation) equivalent? Message-ID: <470482E6.9030800@scipy.org> hi all, does numpy/scipy have a matlab colmmd equivalent? 
http://www.caspur.it/risorse/softappl/doc/matlab_help/techdoc/ref/colmmd.html regards, D From textdirected at gmail.com Thu Oct 4 03:13:52 2007 From: textdirected at gmail.com (HEMMI, Shigeru) Date: Thu, 4 Oct 2007 16:13:52 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c Message-ID: Hello all, On my old MAC OS X 10.3.9 (not Tiger), I encountered a compilation error in fblaswrap_veclib_c.c. Can anybody help me? $ pwd /Users/zgzg/scipy-0.6.0/scipy/linalg/src $ gcc --version gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1671) Copyright (C) 2002 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. $ gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -DNO_ATLAS_INFO=3 -Ibuild/src.macosx-10.3-ppc-2.5 -I/Users/zgzg/lib/python2.5/site-packages/numpy/core/include -I/Users/zgzg/include/python2.5 -c fblaswrap_veclib_c.c -o fblaswrap_veclib_c.o -faltivec fblaswrap_veclib_c.c:5: error: parse error before '*' token fblaswrap_veclib_c.c:6: warning: function declaration isn't a prototype fblaswrap_veclib_c.c: In function `wcdotc_': fblaswrap_veclib_c.c:7: error: `N' undeclared (first use in this function) fblaswrap_veclib_c.c:7: error: (Each undeclared identifier is reported only once fblaswrap_veclib_c.c:7: error: for each function it appears in.)
fblaswrap_veclib_c.c:7: error: `X' undeclared (first use in this function) fblaswrap_veclib_c.c:7: error: `incX' undeclared (first use in this function) fblaswrap_veclib_c.c:7: error: `Y' undeclared (first use in this function) fblaswrap_veclib_c.c:7: error: `incY' undeclared (first use in this function) fblaswrap_veclib_c.c:7: error: `dotc' undeclared (first use in this function) fblaswrap_veclib_c.c: At top level: fblaswrap_veclib_c.c:10: error: parse error before '*' token fblaswrap_veclib_c.c:11: warning: function declaration isn't a prototype fblaswrap_veclib_c.c: In function `wcdotu_': fblaswrap_veclib_c.c:12: error: `N' undeclared (first use in this function) fblaswrap_veclib_c.c:12: error: `X' undeclared (first use in this function) fblaswrap_veclib_c.c:12: error: `incX' undeclared (first use in this function) fblaswrap_veclib_c.c:12: error: `Y' undeclared (first use in this function) fblaswrap_veclib_c.c:12: error: `incY' undeclared (first use in this function) fblaswrap_veclib_c.c:12: error: `dotu' undeclared (first use in this function) fblaswrap_veclib_c.c: At top level: fblaswrap_veclib_c.c:15: error: parse error before '*' token fblaswrap_veclib_c.c:16: warning: function declaration isn't a prototype fblaswrap_veclib_c.c: In function `wzdotc_': fblaswrap_veclib_c.c:17: error: `N' undeclared (first use in this function) fblaswrap_veclib_c.c:17: error: `X' undeclared (first use in this function) fblaswrap_veclib_c.c:17: error: `incX' undeclared (first use in this function) fblaswrap_veclib_c.c:17: error: `Y' undeclared (first use in this function) fblaswrap_veclib_c.c:17: error: `incY' undeclared (first use in this function) fblaswrap_veclib_c.c:17: error: `dotu' undeclared (first use in this function) fblaswrap_veclib_c.c: At top level: fblaswrap_veclib_c.c:19: error: parse error before '*' token fblaswrap_veclib_c.c:20: warning: function declaration isn't a prototype fblaswrap_veclib_c.c: In function `wzdotu_': fblaswrap_veclib_c.c:21: error: `N' 
undeclared (first use in this function) fblaswrap_veclib_c.c:21: error: `X' undeclared (first use in this function) fblaswrap_veclib_c.c:21: error: `incX' undeclared (first use in this function) fblaswrap_veclib_c.c:21: error: `Y' undeclared (first use in this function) fblaswrap_veclib_c.c:21: error: `incY' undeclared (first use in this function) fblaswrap_veclib_c.c:21: error: `dotu' undeclared (first use in this function) From david at ar.media.kyoto-u.ac.jp Thu Oct 4 03:18:16 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 04 Oct 2007 16:18:16 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: References: Message-ID: <470493B8.6050805@ar.media.kyoto-u.ac.jp> HEMMI, Shigeru wrote: > Hello all, > > On my old MAC OS X 10.3.9 (not Tiger), I encountered a compilation error > in fblaswrap_veclib_c.c. Can anybody help me? > > $ pwd > /Users/zgzg/scipy-0.6.0/scipy/linalg/src > > $ gcc --version > gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1671) > Copyright (C) 2002 Free Software Foundation, Inc. > This is free software; see the source for copying conditions. There is NO > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. > > $ gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp > -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes > -DNO_ATLAS_INFO=3 -Ibuild/src.macosx-10.3-ppc-2.5 > -I/Users/zgzg/lib/python2.5/site-packages/numpy/core/include > -I/Users/zgzg/include/python2.5 -c fblaswrap_veclib_c.c -o > fblaswrap_veclib_c.o -faltivec > fblaswrap_veclib_c.c:5: error: parse error before '*' token > It looks like the complex type is not recognized for some reason, but this is strange, since gcc 3.3 knows about it. Can you try to compile the following snippet: #include <complex.h> int main() { complex a = 1.0 * I; return 0; } If this works, replace #include <complex.h> by #include <vecLib/vecLib.h>, and try compiling it too. If both work, then I don't know why it is not working.
cheers, David From textdirected at gmail.com Thu Oct 4 04:05:51 2007 From: textdirected at gmail.com (HEMMI, Shigeru) Date: Thu, 4 Oct 2007 17:05:51 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: <470493B8.6050805@ar.media.kyoto-u.ac.jp> References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> Message-ID: Dear David Cournapeau, Thanks for the reply. For the code you suggest, #include <complex.h> int main() { complex a = 1.0 * I; return 0; } it worked, but after replacing #include <complex.h> by #include <vecLib/vecLib.h>, I encountered an error. $ gcc tmptesting.c tmptesting.c: In function `main': tmptesting.c:5: error: `complex' undeclared (first use in this function) tmptesting.c:5: error: (Each undeclared identifier is reported only once tmptesting.c:5: error: for each function it appears in.) tmptesting.c:5: error: parse error before "a" Are there any suggestions? (I believe that the gcc I am using is just the gcc that Apple supplied. However, I am not sure that is still correct. I have been trying installing/uninstalling some other gccs many times and now I believe I am using the Apple supplied gcc.) Regards, 2007/10/4, David Cournapeau : > HEMMI, Shigeru wrote: > > Hello all, > > > > On my old MAC OS X 10.3.9 (not Tiger), I encountered a compilation error > > in fblaswrap_veclib_c.c. Can anybody help me? > > > > $ pwd > > /Users/zgzg/scipy-0.6.0/scipy/linalg/src > > > > $ gcc --version > > gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1671) > > Copyright (C) 2002 Free Software Foundation, Inc. > > This is free software; see the source for copying conditions. There is NO > > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> > > > $ gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp > > -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes > > -DNO_ATLAS_INFO=3 -Ibuild/src.macosx-10.3-ppc-2.5 > > -I/Users/zgzg/lib/python2.5/site-packages/numpy/core/include > > -I/Users/zgzg/include/python2.5 -c fblaswrap_veclib_c.c -o > > fblaswrap_veclib_c.o -faltivec > > fblaswrap_veclib_c.c:5: error: parse error before '*' token > > > It looks like the complex type is not recognized for some reason, but > this is strange, since gcc 3.3 knows about it. Can you try to compile > the following snippet: > > #include <complex.h> > int main() > { > complex a = 1.0 * I; > return 0; > } > > If this works, replace #include <complex.h> by #include > <vecLib/vecLib.h>, and try compiling it too. If both work, then I don't > know why it is not working. > > cheers, > > David > From david at ar.media.kyoto-u.ac.jp Thu Oct 4 04:03:59 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 04 Oct 2007 17:03:59 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> Message-ID: <47049E6F.1060007@ar.media.kyoto-u.ac.jp> HEMMI, Shigeru wrote: > Dear David Cournapeau, > > Thanks for the reply. > > For the code you suggest, > > #include <complex.h> > int main() > { > complex a = 1.0 * I; > return 0; > } > > it worked, but after replacing #include <complex.h> by #include > <vecLib/vecLib.h>, I encountered an error. > > $ gcc tmptesting.c > tmptesting.c: In function `main': > tmptesting.c:5: error: `complex' undeclared (first use in this function) > tmptesting.c:5: error: (Each undeclared identifier is reported only once > tmptesting.c:5: error: for each function it appears in.) > tmptesting.c:5: error: parse error before "a" If you just try: #include <vecLib/vecLib.h> int main() { return 0; } What does it do? (My explanation would be either the vecLib.h is not found, or on panther, it does not include complex.h).
cheers, David From textdirected at gmail.com Thu Oct 4 04:24:57 2007 From: textdirected at gmail.com (HEMMI, Shigeru) Date: Thu, 4 Oct 2007 17:24:57 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: <47049E6F.1060007@ar.media.kyoto-u.ac.jp> References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> <47049E6F.1060007@ar.media.kyoto-u.ac.jp> Message-ID: Dear David, > If you just try: > > #include <vecLib/vecLib.h> > > int main() > { > return 0; > } > > What does it do ? It worked; see below. $ cat tmptesting2.c #include <vecLib/vecLib.h> int main() { return 0; } $ gcc tmptesting2.c $ ls a.out* tmptesting.c tmptesting2.c > (My explanation would be either the vecLib.h is not > found, or on panther, it does not include complex.h). vecLib.h seems to exist, but on Panther it is not the correct one, as you wrote. From david at ar.media.kyoto-u.ac.jp Thu Oct 4 04:28:37 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 04 Oct 2007 17:28:37 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> <47049E6F.1060007@ar.media.kyoto-u.ac.jp> Message-ID: <4704A435.1080309@ar.media.kyoto-u.ac.jp> HEMMI, Shigeru wrote: > Dear David, > >> If you just try: >> >> #include <vecLib/vecLib.h> >> >> int main() >> { >> return 0; >> } >> >> What does it do ? > It worked; see below. > > $ cat tmptesting2.c > #include <vecLib/vecLib.h> > int main() > { > return 0; > } > $ gcc tmptesting2.c > $ ls > a.out* tmptesting.c tmptesting2.c > > >> (My explanation would be either the vecLib.h is not >> found, or on panther, it does not include complex.h). > > vecLib.h seems to exist, but on Panther it is not the correct one, > as you wrote. It may be the correct one, just that on panther, you may need to include complex separately.
You could include <complex.h> just before <vecLib/vecLib.h>, but I am not sure it is the right thing to do (maybe on panther, vecLib has its own complex type, or something like that: on the gcc 3.3 page, it is said that complex support in the C99 manner is broken, which is why I am a bit reluctant to just add <complex.h>: http://gcc.gnu.org/gcc-3.3/c99status.html ). cheers, David From textdirected at gmail.com Thu Oct 4 05:06:15 2007 From: textdirected at gmail.com (HEMMI, Shigeru) Date: Thu, 4 Oct 2007 18:06:15 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: <4704A435.1080309@ar.media.kyoto-u.ac.jp> References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> <47049E6F.1060007@ar.media.kyoto-u.ac.jp> <4704A435.1080309@ar.media.kyoto-u.ac.jp> Message-ID: Dear David, Thanks for your kind support. I have added complex.h separately, and I was able to build scipy 0.6.0 on my MAC OS X Panther. $ python Python 2.5.1 (r251:54863, May 19 2007, 11:43:31) [GCC 3.3 20030304 (Apple Computer, Inc. build 1671)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test() Found 9 tests for scipy.cluster.vq Found 18 tests for scipy.fftpack.basic (snip) Best regards, 2007/10/4, David Cournapeau : > HEMMI, Shigeru wrote: > > Dear David, > > > >> If you just try: > >> > >> #include <vecLib/vecLib.h> > >> > >> int main() > >> { > >> return 0; > >> } > >> > >> What does it do ? > > It worked; see below. > > > > $ cat tmptesting2.c > > #include <vecLib/vecLib.h> > > int main() > > { > > return 0; > > } > > $ gcc tmptesting2.c > > $ ls > > a.out* tmptesting.c tmptesting2.c > > > > > >> (My explanation would be either the vecLib.h is not > >> found, or on panther, it does not include complex.h). > > > > vecLib.h seems to exist, but on Panther it is not the correct one > > as you wrote. It may be the correct one, just that on panther, you may need to include complex separately.
You could include <complex.h> just before > <vecLib/vecLib.h>, but I am not sure it is the right thing to do (maybe on > panther, vecLib has its own complex type, or something like that: on the gcc > 3.3 page, it is said that complex support in the C99 manner is broken, which > is why I am a bit reluctant to just add > <complex.h>: http://gcc.gnu.org/gcc-3.3/c99status.html ). > > cheers, > > David > From david at ar.media.kyoto-u.ac.jp Thu Oct 4 05:04:49 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 04 Oct 2007 18:04:49 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> <47049E6F.1060007@ar.media.kyoto-u.ac.jp> <4704A435.1080309@ar.media.kyoto-u.ac.jp> Message-ID: <4704ACB1.7020302@ar.media.kyoto-u.ac.jp> HEMMI, Shigeru wrote: > Dear David, > > Thanks for your kind support. > I have added complex.h separately, and > I was able to build scipy 0.6.0 on my MAC OS X Panther. > To be more precise, it is likely that it is the right solution, but I would prefer first testing for the case and including it conditionally. Could you open a ticket in scipy trac, with a short description, so that we can follow the problem, cheers, David From robince at gmail.com Thu Oct 4 07:56:27 2007 From: robince at gmail.com (Robin) Date: Thu, 4 Oct 2007 12:56:27 +0100 Subject: [SciPy-user] Invert a sparse matrix Message-ID: Hello, Is there any function to invert a sparse matrix, giving a sparse result? I couldn't find much information on this. Also, how does one do sparse matrix-vector multiplication? The resulting vector doesn't need to be sparse, but it's important that the multiplication is done without converting the array/matrix to full form (not enough ram). I will have a very large matrix, that must remain sparse to fit in memory, and I need to use it for some basic operations (matrix-vector multiplication by itself and its inverse).
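An aside on the operations in the question above: both can be done while the matrix stays sparse, and without ever forming the inverse explicitly. A minimal sketch using the modern scipy.sparse API (scipy availability is assumed; in the 2007 tree the solver lived under scipy.linsolve instead):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# A stays in a sparse format throughout; A^{-1} is never formed.
A = sparse.csr_matrix(np.array([[2.0, 0.0, 0.0],
                                [1.0, 3.0, 0.0],
                                [0.0, 0.0, 4.0]]))
v = np.array([1.0, 2.0, 3.0])

y = A @ v          # sparse matrix-vector product; the result comes back dense

# vec1 = A^{-T} ln(vec2), computed by solving A^T vec1 = ln(vec2)
vec2 = np.array([np.e, np.e ** 2, np.e ** 3])
vec1 = spsolve(A.T.tocsc(), np.log(vec2))
```

The matrix-vector product never densifies A, and the solve replaces multiplication by the (possibly dense) inverse.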
Thanks for your help, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From textdirected at gmail.com Thu Oct 4 08:41:53 2007 From: textdirected at gmail.com (HEMMI, Shigeru) Date: Thu, 4 Oct 2007 21:41:53 +0900 Subject: [SciPy-user] Compilation error of fblaswrap_veclib_c.c In-Reply-To: <4704ACB1.7020302@ar.media.kyoto-u.ac.jp> References: <470493B8.6050805@ar.media.kyoto-u.ac.jp> <47049E6F.1060007@ar.media.kyoto-u.ac.jp> <4704A435.1080309@ar.media.kyoto-u.ac.jp> <4704ACB1.7020302@ar.media.kyoto-u.ac.jp> Message-ID: > Could you open a ticket in scipy trac, with a short description, so that > we can follow the problem, Yes, I opened a ticket: http://scipy.org/scipy/scipy/ticket/510 Thanks, regards, From aisaac at american.edu Thu Oct 4 09:34:55 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 4 Oct 2007 09:34:55 -0400 Subject: [SciPy-user] Invert a sparse matrix In-Reply-To: References: Message-ID: On Thu, 4 Oct 2007, Robin apparently wrote: > I will have a very large matrix, that must remain sparse > to fit in memory, and I need to use it for some basic > operations (matrix-vector multiplication by itself and > its inverse). Are you sure you need an inverse? If you are just solving equation systems, you usually do not. Cheers, Alan Isaac From lbolla at gmail.com Thu Oct 4 09:46:42 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 4 Oct 2007 15:46:42 +0200 Subject: [SciPy-user] Invert a sparse matrix In-Reply-To: References: Message-ID: <80c99e790710040646w626d32eew72834b3bc91f065d@mail.gmail.com> another hint: to build a sparse matrix use scipy.sparse.lil_matrix (if possible) and to do multiplications first convert it to csr (or csc) format. L.
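The hint above in code form — a small sketch (assumes scipy is installed; lil_matrix is cheap to fill in incrementally, csr is the format with fast matrix-vector products):

```python
import numpy as np
from scipy import sparse

n = 1000
A = sparse.lil_matrix((n, n))   # lil: efficient incremental construction
A.setdiag(2.0)                  # main diagonal
A.setdiag(-1.0, k=1)            # superdiagonal: a simple sparse pattern
A = A.tocsr()                   # csr: fast matrix-vector products

v = np.ones(n)
y = A @ v                       # rows 0..n-2 give 2 - 1 = 1; the last row gives 2
```

Assigning entries one by one into a csr matrix directly is slow (and scipy warns about it), which is why the build-then-convert pattern is recommended.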
On 10/4/07, Alan G Isaac wrote: > > On Thu, 4 Oct 2007, Robin apparently wrote: > > I will have a very large matrix, that must remain sparse > > to fit in memory, and I need to use it for some basic > > operations (matrix-vector multiplication by itself and > > it's inverse). > > Are you sure you need an inverse? > If you are just solving equation systems, > you usually do not. > > Cheers, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Thu Oct 4 10:22:51 2007 From: robince at gmail.com (Robin) Date: Thu, 4 Oct 2007 15:22:51 +0100 Subject: [SciPy-user] Invert a sparse matrix In-Reply-To: References: Message-ID: On 10/4/07, Alan G Isaac wrote: > > > Are you sure you need an inverse? > If you are just solving equation systems, > you usually do not. I'm pretty sure I need it... I am not solving equation systems. I am doing coordinate transformations in a large dimensional space (hence the large sparse matrix)... Some of the transformations required the inverse and the transpose of the core matrix. I'm actually trying to translate my code from MATLAB, where I used the sparse inverse, but I suppose if I want to do something like vec1 = A^(-T) ln (vec2) then I can use spsolve on A^T.vec1 = ln(vec2). The only thing is this is going to be inside an optimisation loop so I really need to get it as efficient as possible - I thought sparse matrix multiplication by the inverse is going to be a lot faster than solving each time... Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dominique.orban at gmail.com Thu Oct 4 11:32:03 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Thu, 4 Oct 2007 11:32:03 -0400 Subject: [SciPy-user] Invert a sparse matrix In-Reply-To: References: Message-ID: <8793ae6e0710040832m51d468aeg324eff8437bf0a35@mail.gmail.com> On 10/4/07, Robin wrote: > On 10/4/07, Alan G Isaac wrote: > > > > Are you sure you need an inverse? > > If you are just solving equation systems, > > you usually do not. > > I'm pretty sure I need it... I am not solving equation systems. I am doing > coordinate transformations in a large dimensional space (hence the large > sparse matrix)... Some of the transformations required the inverse and the > transpose of the core matrix. > > I'm actually trying to translate my code from MATLAB, where I used the > sparse inverse, but I suppose if I want to do something like > vec1 = A^(-T) ln (vec2) > then I can use spsolve on A^T.vec1 = ln(vec2). > > The only thing is this is going to be inside an optimisation loop so I > really need to get it as efficient as possible - I thought sparse matrix > multiplication by the inverse is going to be a lot faster than solving each > time... I presume then that A does not change from one loop pass to the next. It might be more efficient to factorize A once outside the loop and keep its factors around (L and U if A is not symmetric, or just L if it is symmetric and positive definite, etc). E.g., you can use UMFPACK and call umfpack.solve, asking for a solve with A^T. Cases where you *really* need the inverse are rare. Moreover, the inverse of a sparse matrix is not necessarily sparse. Typically, inverting is more expensive than an LU factorization and prone to rounding errors. "Almost anything you can do with A^{-1} can be done without it" (George E. Forsythe and Cleve B. Moler, Computer Solution of Linear Algebraic Systems, 1967). 
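The factor-once pattern described above, sketched with scipy.sparse.linalg.splu from modern scipy (the thread itself used the UMFPACK wrappers under scipy.linsolve); the trans='T' argument plays the role of asking UMFPACK for a solve with A^T:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

A = sparse.csc_matrix(np.array([[4.0, 1.0, 0.0],
                                [0.0, 3.0, 2.0],
                                [1.0, 0.0, 5.0]]))

lu = splu(A)        # factor once, outside the loop; keeps L and U around

# Inside the loop, reuse the factors instead of multiplying by A^{-1}:
b = np.array([1.0, 2.0, 3.0])
x = lu.solve(b)                 # solves A x = b
xt = lu.solve(b, trans='T')     # solves A^T x = b
```

Each triangular solve against the stored factors costs roughly as much as a sparse matrix-vector product, so the loop body stays cheap without an explicit inverse.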
See: from scipy import linsolve help(linsolve.umfpack) Dominique From robince at gmail.com Thu Oct 4 12:28:22 2007 From: robince at gmail.com (Robin) Date: Thu, 4 Oct 2007 17:28:22 +0100 Subject: [SciPy-user] Invert a sparse matrix In-Reply-To: <8793ae6e0710040832m51d468aeg324eff8437bf0a35@mail.gmail.com> References: <8793ae6e0710040832m51d468aeg324eff8437bf0a35@mail.gmail.com> Message-ID: On 10/4/07, Dominique Orban wrote: > > I presume then that A does not change from one loop pass to the next. > It might be more efficient to factorize A once outside the loop and > keep its factors around (L and U if A is not symmetric, or just L if > it is symmetric and positive definite, etc). E.g., you can use UMFPACK > and call umfpack.solve, asking for a solve with A^T. > > Cases where you *really* need the inverse are rare. Moreover, the > inverse of a sparse matrix is not necessarily sparse. Typically, > inverting is more expensive than an LU factorization and prone to > rounding errors. Hi, Thanks for your help. Yes, A does not change. If I factorise into LU outside of the loop and then solve inside the loop, could it really be as quick as multiplying by the inverse? Unfortunately the other points don't apply - in this case the nonzeros of the matrix are all ones - the inverse contains just ones and -1's, and is sparse, and (at least in MATLAB) was calculated easily without any round-off error. To me it doesn't matter that inverting is more expensive than LU because it only happens once; it is the inside of the loop that I care about. In the meantime I will try rebuilding scipy with UMFPACK to see how it goes. Thanks Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From meesters at uni-mainz.de Thu Oct 4 13:51:38 2007 From: meesters at uni-mainz.de (Christian Meesters) Date: Thu, 4 Oct 2007 19:51:38 +0200 Subject: [SciPy-user] savitsky-golay smoothing filter?
In-Reply-To: <47036FC7.30800@unibo.it> References: <47036643.9080604@unibo.it> <470368D1.40304@gmail.com> <47036FC7.30800@unibo.it> Message-ID: <1191520298.27294.61.camel@cmeesters> Hoi Massimo, > By the way: under which license is the code in the Cookbook usable? I'm > writing a GPLv2 program... > I was the one posting the first snippet there (btw. thanks for pointing out the limitation that one can only reasonably apply it if the vector starts and ends at zero). Before posting I asked Andrew Dalke, whose web page is linked on top of the snippet, and he said (don't have his mail anymore, but that's the digest): 'go ahead and post your adaptation'. This is also my attitude: It's a cookbook and just by posting there IMHO I lost all 'rights' on that snippet. I wouldn't post a snippet if I didn't think it was nice for others to use it. The algorithm is published anyway. So, go ahead and I hope it's useful for your application! (Though I don't take any responsibility for the correctness of that snippet, of course. Should have corrected it myself ...) Are there intrinsic license issues for scipy's cookbook snippets, anyway? Cheers Christian From robince at gmail.com Thu Oct 4 14:09:22 2007 From: robince at gmail.com (Robin) Date: Thu, 4 Oct 2007 19:09:22 +0100 Subject: [SciPy-user] Problem building scipy with UMFPACK Message-ID: Hi, I have successfully built scipy with ATLAS 3.7.37 and fftw. However, when I try to include UMFPACK I start to get problems. I am using cygwin with gcc 3.4.4 on Windows XP. I managed to get everything working so far by adding in the flags -mno-cygwin and -fno-second-underscore everywhere I could. My UFConfig.mk includes the following: BLAS = -L/cygdrive/c/scipy/libs -llapack -lf77blas -lcblas -latlas -lg2c LAPACK = -L/cygdrive/c/scipy/libs -llapack -lf77blas -lcblas -latlas -lg2c and UMFPACK builds without error.
However, when I try to build scipy I get the following: building 'scipy.linsolve.umfpack.__umfpack' extension compiling C sources C compiler: gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H -DATLAS_INFO="\"?.?.?\"" -Ic:\ scipy\libs\include\umfpack -Ic:\scipy\libs\include -Ic:\Python25\lib\site-packag es\numpy\core\include -Ic:\Python25\include -Ic:\Python25\PC -c' g++ -mno-cygwin -shared build\temp.win32-2.5\Release\build\src.win32-2.5\scipy\l insolve\umfpack\_umfpack_wrap.o -Lc:\scipy\libs -Lc:\Python25\libs -Lc:\Python25 \PCBuild -Lbuild\temp.win32-2.5 -lumfpack -lamd -lamd -lg2c -lgcc -lumfpack -lg2 c -lgcc -llapack -lf77blas -lcblas -latlas -lpython25 -lmsvcr71 -o build\lib.win 32-2.5\scipy\linsolve\umfpack\__umfpack.pyd Found executable C:\cygwin\bin\g++.exe c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0xe): undefined reference t o `_s_wsfe' c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x29): undefined reference to `_do_fio' c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x44): undefined reference to `_do_fio' c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x49): undefined reference to `_e_wsfe' c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x5d): undefined reference to `_s_stop' collect2: ld returned 1 exit status error: Command "g++ -mno-cygwin -shared build\temp.win32-2.5\Release\build\src.w in32-2.5\scipy\linsolve\umfpack\_umfpack_wrap.o -Lc:\scipy\libs -Lc:\Python25\li bs -Lc:\Python25\PCBuild -Lbuild\temp.win32-2.5 -lumfpack -lamd -lamd -lg2c -lgc c -lumfpack -lg2c -lgcc -llapack -lf77blas -lcblas -latlas -lpython25 -lmsvcr71 -o build\lib.win32-2.5\scipy\linsolve\umfpack\__umfpack.pyd" failed with exit st atus 1 From searching the internet it seems the undefined references are part of the Fortran I/O library, so I added in g2c and gcc as required libraries and copied them to my scipy.libs directory. However, as you can see, the error still occurs...
I'm a bit stuck as to what to try next. Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Oct 4 14:28:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 04 Oct 2007 13:28:18 -0500 Subject: [SciPy-user] Problem building scipy with UMFPACK In-Reply-To: References: Message-ID: <470530C2.4050902@gmail.com> Robin wrote: > Hi, > > I have succesfully built scipy with ATLAS 3.7.37 and fftw. However when > I try to include UMFPACK I start to get problems. > > I am using cygwin with gcc 3.4.4 on Windows XP. I managed to get > everything working so far by adding in flags -mno-cygwin and > -fno-second-underscore everywhere I could. > > My UFConfig.mk includes the following > BLAS = -L/cygdrive/c/scipy/libs -llapack -lf77blas -lcblas -latlas -lg2c > LAPACK = -L/cygdrive/c/scipy/libs -llapack -lf77blas -lcblas -latlas -lg2c > and UMFPACK builds without error. > > However when I try to build scipy I get the following: > > building 'scipy.linsolve.umfpack.__umfpack' extension > compiling C sources > C compiler: gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes > > compile options: '-DSCIPY_UMFPACK_H -DSCIPY_AMD_H > -DATLAS_INFO="\"?.?.?\"" -Ic:\ > scipy\libs\include\umfpack -Ic:\scipy\libs\include > -Ic:\Python25\lib\site-packag > es\numpy\core\include -Ic:\Python25\include -Ic:\Python25\PC -c' > g++ -mno-cygwin -shared > build\temp.win32-2.5\Release\build\src.win32-2.5\scipy\l > insolve\umfpack\_umfpack_wrap.o -Lc:\scipy\libs -Lc:\Python25\libs > -Lc:\Python25 > \PCBuild -Lbuild\temp.win32-2.5 -lumfpack -lamd -lamd -lg2c -lgcc > -lumfpack -lg2 > c -lgcc -llapack -lf77blas -lcblas -latlas -lpython25 -lmsvcr71 -o > build\lib.win > 32-2.5\scipy\linsolve\umfpack\__umfpack.pyd > Found executable C:\cygwin\bin\g++.exe > c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f: (.text+0xe): undefined > reference t > o `_s_wsfe' > c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x29): undefined > reference > 
to `_do_fio' > c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x44): undefined > reference > to `_do_fio' > c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x49): undefined > reference > to `_e_wsfe' > c:\scipy\libs/libf77blas.a(xerbla.o):xerbla.f:(.text+0x5d): undefined > reference > to `_s_stop' > collect2: ld returned 1 exit status > error: Command "g++ -mno-cygwin -shared > build\temp.win32-2.5\Release\build\src.w > in32-2.5\scipy\linsolve\umfpack\_umfpack_wrap.o -Lc:\scipy\libs > -Lc:\Python25\li > bs -Lc:\Python25\PCBuild -Lbuild\temp.win32- 2.5 -lumfpack -lamd -lamd > -lg2c -lgc > c -lumfpack -lg2c -lgcc -llapack -lf77blas -lcblas -latlas -lpython25 > -lmsvcr71 > -o build\lib.win32-2.5\scipy\linsolve\umfpack\__umfpack.pyd" failed with > exit st > atus 1 > > From searching the internet it seems the undefined references are part > of the fortran IO library, so I added in g2c and gcc as required > libraries and copied them to my scipy.libs directory. However as you can > see the error still occurs... I'm a bit stuck as to what to try next. The problem is that the -lg2c flag is coming before the -lf77blas flag. The order of these is important if the libraries depend on each other, like here. How did you configure the library flags? Did you use a site.cfg file? If so, please post it, here. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robince at gmail.com Thu Oct 4 14:46:15 2007 From: robince at gmail.com (Robin) Date: Thu, 4 Oct 2007 19:46:15 +0100 Subject: [SciPy-user] Problem building scipy with UMFPACK In-Reply-To: <470530C2.4050902@gmail.com> References: <470530C2.4050902@gmail.com> Message-ID: > The problem is that the -lg2c flag is coming before the -lf77blas flag. > The > order of these is important if the libraries depend on each other, like > here. 
> How did you configure the library flags? Did you use a site.cfg file? If > so, > please post it, here. I had put the g2c lib on all the entries in site.cfg... taking it off AMD and UMFPACK and just putting it at the end of the ATLAS section fixed the problem... I didn't realise the order of -l options made a difference. Thanks very much, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sandal at unibo.it Thu Oct 4 16:31:45 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 04 Oct 2007 22:31:45 +0200 Subject: [SciPy-user] savitsky-golay smoothing filter? In-Reply-To: <1191520298.27294.61.camel@cmeesters> References: <47036643.9080604@unibo.it> <470368D1.40304@gmail.com> <47036FC7.30800@unibo.it> <1191520298.27294.61.camel@cmeesters> Message-ID: <47054DB1.80406@unibo.it> Christian Meesters wrote: > I was the one posting the first snippet there (btw. thanks for pointing > out the limitation that one can only reasonably apply it if the vector > starts and ends at zero). Before posting I asked Andrew Dalke, whose web > page is linked on top of the snippet and he said (don't have his mail > anymore, but that's the digest): 'go ahead and post your adaptation'. > This is also my attitude: It's a cookbook and just by posting there IMHO > I lost all 'rights' on that snippet. I wouldn't post a snippet if I > didn't think it was nice for others to use it. Yes, that's what I would assume is logical. But today copyrights, licences etc. are always traps, even in the free software world. > The algorithm is > published anyway. But the code is a different thing :) > So, go ahead and I hope it's useful for your application! (Though I > don't take any responsibility for the correctness of that snippet, of > course. Should have corrected it myself ...) Thanks! m.
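For readers of the Savitzky-Golay thread above: a minimal, independent sketch of the algorithm (this is not the cookbook snippet under discussion). The idea is to fit a low-order polynomial in each centered window by least squares and keep its value at the window midpoint; the fit reduces to a fixed convolution kernel:

```python
import numpy as np

def savitzky_golay(y, window, order):
    """Smooth 1-D data y with an odd `window` length and polynomial `order`.

    Illustrative implementation: the first row of the pseudo-inverse of the
    window's Vandermonde matrix evaluates the least-squares polynomial fit
    at the center sample, so smoothing is a single correlation.
    """
    half = window // 2
    offsets = np.arange(-half, half + 1)
    # Design matrix: columns are offsets**0, offsets**1, ..., offsets**order
    A = np.vander(offsets, order + 1, increasing=True)
    coeffs = np.linalg.pinv(A)[0]          # kernel evaluating the fit at 0
    # Mirror the edges so the output has the same length as the input
    ypad = np.concatenate([y[half:0:-1], y, y[-2:-half - 2:-1]])
    return np.correlate(ypad, coeffs, mode="valid")
```

Away from the mirrored edges, the filter reproduces any polynomial of degree up to `order` exactly, which is the property that makes it attractive for smoothing peaked data.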
From listservs at mac.com Thu Oct 4 14:14:41 2007 From: listservs at mac.com (Chris) Date: Thu, 4 Oct 2007 18:14:41 +0000 (UTC) Subject: [SciPy-user] SciPy svn build chokes on swig Message-ID: Building SciPy from SVN on OS X 10.4, which has worked up until today, chokes when it tries to look for swig: swig: scipy/io/nifti/nifticlib.i swig -python -Iscipy/io/nifti -Iscipy/io/nifti/nifticlib/fsliolib -Iscipy/io/nifti/nifticlib/niftilib -Iscipy/io/nifti/nifticlib/znzlib -o build/src.macosx-10.3-fat-2.5/scipy/io/nifti/nifticlib_wrap.c -outdir build/src.macosx-10.3-fat-2.5/scipy/io/nifti scipy/io/nifti/nifticlib.i unable to execute swig: No such file or directory error: command 'swig' failed with exit status 1 I was building just after a successful build and install of numpy from SVN minutes prior. From lbolla at gmail.com Thu Oct 4 18:22:59 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 5 Oct 2007 00:22:59 +0200 Subject: [SciPy-user] does numpy/scipy have a matlab colmmd (Sparse column minimum degree permutation) equivalent? In-Reply-To: <470482E6.9030800@scipy.org> References: <470482E6.9030800@scipy.org> Message-ID: <80c99e790710041522tec94e0elf0b467cd6130bf77@mail.gmail.com> there is one in SuperLU and, hence, it should be in scipy, which has wrappers to SuperLU. L. On 10/4/07, dmitrey wrote: > > hi all, > does numpy/scipy have a matlab colmmd equivalent? > > http://www.caspur.it/risorse/softappl/doc/matlab_help/techdoc/ref/colmmd.html > regards, D > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cburns at berkeley.edu Thu Oct 4 18:47:15 2007 From: cburns at berkeley.edu (Christopher Burns) Date: Thu, 4 Oct 2007 15:47:15 -0700 Subject: [SciPy-user] SciPy svn build chokes on swig In-Reply-To: References: Message-ID: <764e38540710041547t65a2ab4h1755d0d321833f36@mail.gmail.com> Chris, Update scipy and give it another try. We've temporarily removed inclusion of the nifti module until we resolve the swig dependencies. Sorry about that. Christopher On 10/4/07, Chris wrote: > > Building SciPy from SVN on OS X 10.4, which has worked up until today, > chokes when it tries to look for swig: > > swig: scipy/io/nifti/nifticlib.i > swig -python -Iscipy/io/nifti -Iscipy/io/nifti/nifticlib/fsliolib > -Iscipy/io/nifti/nifticlib/niftilib -Iscipy/io/nifti/nifticlib/znzlib -o > build/src.macosx-10.3-fat-2.5/scipy/io/nifti/nifticlib_wrap.c -outdir > build/src.macosx-10.3-fat-2.5/scipy/io/nifti scipy/io/nifti/nifticlib.i > unable to execute swig: No such file or directory > error: command 'swig' failed with exit status 1 > > I was building just after a successful build and install of numpy > from SVN minutes prior. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Christopher Burns, Software Engineer Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From seb at rudel.de Thu Oct 4 18:45:26 2007 From: seb at rudel.de (Sebastian) Date: Thu, 4 Oct 2007 22:45:26 +0000 (UTC) Subject: [SciPy-user] fromimage returns instance instead of numpy array Message-ID: Hello, after I updated numpy and scipy to the newest version a script which uses the scipy.misc.pilutil package dosent't work any more. The function "fromimage" returns an image as image and dosen't convert it to a numpy array. 
The code looks like this:

from scipy.misc.pilutil import *
import Image

dummy_data = Image.open(self.name)
dummy_data = dummy_data.convert("I")
dummy_data = dummy_data.convert("F")
self.image = fromimage(dummy_data)

Is there a bug, or has the usage of "fromimage" changed? Greetings Sebastian From robert.kern at gmail.com Thu Oct 4 19:16:06 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 04 Oct 2007 18:16:06 -0500 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: References: Message-ID: <47057436.3060600@gmail.com> Sebastian wrote: > Hello, > after I updated numpy and scipy to the newest version, a script which uses the > scipy.misc.pilutil package doesn't work any more. The function "fromimage" > returns an image as an image and doesn't convert it to a numpy array. Exactly what version of scipy are you talking about? The one I have from a recent SVN checkout works fine. It was modified to its current form on 2007-08-28. It might also be that this code relies on behavior of recent releases of PIL. I am using PIL 1.1.6b2. What are you using? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu Oct 4 20:29:49 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 4 Oct 2007 20:29:49 -0400 Subject: [SciPy-user] does anyone use(d) pysimplex? In-Reply-To: <470298A5.2050907@scipy.org> References: <470298A5.2050907@scipy.org> Message-ID: Did you try contacting Aaron Watters? Maybe aaron AT funcity.njit.edu or more likely aaron AT reportlab.com will work. Cheers, Alan Isaac From pete.forman at westerngeco.com Thu Oct 4 11:56:43 2007 From: pete.forman at westerngeco.com (Pete Forman) Date: Thu, 04 Oct 2007 16:56:43 +0100 Subject: [SciPy-user] howto call lapack funcs from scipy?
References: <4703E349.4020908@scipy.org> <470437F2.3090906@gmail.com> Message-ID: <3awqq3es.fsf@wgmail2.gatwick.eur.slb.com> Robert Kern writes: > The help() function doesn't know anything about f2py wrapper > functions like cgelss, so it gives up. However, you can print > cgelss.__doc__: IPython's help figures this out as well. In [24]: import scipy.linalg.flapack In [25]: scipy.linalg.flapack.cgelss? Type: fortran String Form: Namespace: Interactive Docstring: cgelss - Function signature: v,x,s,rank,info = cgelss(a,b,[cond,lwork,overwrite_a,overwrite_b]) Required arguments: a : input rank-2 array('F') with bounds (m,n) b : input rank-2 array('F') with bounds (maxmn,nrhs) Optional arguments: overwrite_a := 0 input int overwrite_b := 0 input int cond := -1.0 input float lwork := 2*minmn+MAX(maxmn,nrhs) input int Return objects: v : rank-2 array('F') with bounds (m,n) and a storage x : rank-2 array('F') with bounds (maxmn,nrhs) and b storage s : rank-1 array('f') with bounds (minmn) rank : int info : int -- Pete Forman -./\.- Disclaimer: This post is originated WesternGeco -./\.- by myself and does not represent pete.forman at westerngeco.com -./\.- the opinion of Schlumberger or http://petef.port5.com -./\.- WesternGeco. From marco.turchi at gmail.com Fri Oct 5 07:43:19 2007 From: marco.turchi at gmail.com (marco turchi) Date: Fri, 5 Oct 2007 12:43:19 +0100 Subject: [SciPy-user] problem installing sciPy Message-ID: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> dear experts, I'm trying to install sciPy on a linux machine with RedHat 4. This machine has an old Python, so I have installed a new version 2.5.1 in /usr/local. I'm not administrator of this machine, so I did not do "make install", but just "make". Then I have installed numPy using the new version of Python, and everything works properly. 
Now I try to install sciPy by running: /usr/local/Python-2.5.1/python setup.py install but I got several errors; it seems that it is not able to find some libraries: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE etc etc sorry, but I'm new to this kind of stuff; please can you help me install it? thanks a lot Marco -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Oct 5 07:42:23 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 05 Oct 2007 20:42:23 +0900 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> Message-ID: <4706231F.1010001@ar.media.kyoto-u.ac.jp> marco turchi wrote: > dear experts, > I'm trying to install sciPy on a linux machine with RedHat 4. > This machine has an old Python, so I have installed a new version > 2.5.1 in /usr/local. I'm not administrator of this machine, so I did > not do "make install", but just "make". > Then I have installed numPy using the new version of Python, and > everything works properly. > Now I try to install sciPy writing: > /usr/local/Python-2.5.1/python setup.py install Hi Marco, First, RHEL 4 uses python 2.3, which is supported by scipy, so there is no need to install python2.5 just for scipy. If you are new to python installation, I advise you against compiling your own python.
> > but I got several errors, it seems that it is not able to find some > libraries: > mkl_info: > libraries mkl,vml,guide not found in /usr/local/lib > libraries mkl,vml,guide not found in /usr/lib > NOT AVAILABLE > > fftw3_info: > libraries fftw3 not found in /usr/local/lib > libraries fftw3 not found in /usr/lib > fftw3 not found > NOT AVAILABLE > > fftw2_info: > libraries rfftw,fftw not found in /usr/local/lib > libraries rfftw,fftw not found in /usr/lib > fftw2 not found > NOT AVAILABLE > > etc etc > > sorry, but I'm new in this kind of staff, please can u help to install > it??? This is not a big problem, because those are optional. Did the build failed ? cheers, David From marco.turchi at gmail.com Fri Oct 5 08:03:04 2007 From: marco.turchi at gmail.com (marco turchi) Date: Fri, 5 Oct 2007 13:03:04 +0100 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <4706231F.1010001@ar.media.kyoto-u.ac.jp> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> Message-ID: <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> Dear Davis, I need to install the new version of python, because the old one has not the python-dev and the administrator of the machine told me that he could not install it... do not ask me the reason :-( anyway the python installation works, i can use the new one... the scipy installation failed, I got: /usr/local/lib/python2.5/site-packages/numpy/distutils/system_info.py:1326: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
warnings.warn(BlasSrcNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 53, in setup_package() File "setup.py", line 45, in setup_package configuration=configuration ) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 142, in setup config = configuration() File "setup.py", line 19, in configuration config.add_subpackage('scipy') File "/usr/local/lib/python2.5/site-packages/numpy/distutils/misc_util.py", line 798, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/misc_util.py", line 781, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/misc_util.py", line 728, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 7, in configuration config.add_subpackage('integrate') File "/usr/local/lib/python2.5/site-packages/numpy/distutils/misc_util.py", line 798, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/misc_util.py", line 781, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/misc_util.py", line 728, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/integrate/setup.py", line 11, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/system_info.py", line 405, in get_info raise self.notfounderror,self.notfounderror.__doc__ numpy.distutils.system_info.BlasNotFoundError: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. 
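For reference, the site.cfg sections that this error message refers to look roughly like this (the paths below are hypothetical examples for a home-directory install, and the exact key names vary between numpy versions; check the site.cfg.example shipped with numpy):

```ini
; numpy/distutils/site.cfg -- hypothetical example
[blas]
library_dirs = /home/user/local/lib
blas_libs = blas

[lapack]
library_dirs = /home/user/local/lib
lapack_libs = lapack
```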
Do i need to install BLAS??? Where can i find it? I'm able to run numpy, call the help and so on, but I try to run the numpy test, but I got this ./../Python-2.5.1/python -c 'import numpy; numpy.test()' Running from numpy source directory. Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'test' Do you think that the problem is here in numpy?? thanks a lot Marco On 10/5/07, David Cournapeau wrote: > > marco turchi wrote: > > dear experts, > > I'm trying to install sciPy on a linux machine with RedHat 4. > > This machine has an old Python, so I have installed a new version > > 2.5.1 in /usr/local. I'm not administrator of this machine, so I did > > not do "make install", but just "make". > > Then I have installed numPy using the new version of Python, and > > everything works properly. > > Now I try to install sciPy writing: > > /usr/local/Python-2.5.1/python setup.py install > Hi Marko, > > First, RHEL 4 uses python 2.3, which is supported by scipy, so there > is no need to install python2.5 just for scipy. If you are new to python > installation, I advise you against compiling your own python. > > > > but I got several errors, it seems that it is not able to find some > > libraries: > > mkl_info: > > libraries mkl,vml,guide not found in /usr/local/lib > > libraries mkl,vml,guide not found in /usr/lib > > NOT AVAILABLE > > > > fftw3_info: > > libraries fftw3 not found in /usr/local/lib > > libraries fftw3 not found in /usr/lib > > fftw3 not found > > NOT AVAILABLE > > > > fftw2_info: > > libraries rfftw,fftw not found in /usr/local/lib > > libraries rfftw,fftw not found in /usr/lib > > fftw2 not found > > NOT AVAILABLE > > > > etc etc > > > > sorry, but I'm new in this kind of staff, please can u help to install > > it??? > This is not a big problem, because those are optional. Did the build > failed ? 
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Oct 5 08:03:08 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 05 Oct 2007 21:03:08 +0900 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> Message-ID: <470627FC.50200@ar.media.kyoto-u.ac.jp> marco turchi wrote: > Dear Davis, > I need to install the new version of python, because the old one has > not the python-dev and the administrator of the machine told me that > he could not install it... do not ask me the reason :-( Ah, ok. Then you do not seem to have any alternative, indeed. > Do i need to install BLAS??? Where can i find it? Yes, you need to install BLAS (and LAPACK as well). Either you can use RHEL packages for blas/lapack, or you need to compile them by yourself too. For that, you need a fortran compiler as well (if you have to install BLAS/LAPACK by yourself, you may be interested in my gar scripts: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.3.tbz2, which makes the installation a bit easier: it installs everything from blas/lapack to scipy in a self contained directory, for cases exactly like yours where people do not have root rights). 
cheers, David From marco.turchi at gmail.com Fri Oct 5 09:02:06 2007 From: marco.turchi at gmail.com (marco turchi) Date: Fri, 5 Oct 2007 14:02:06 +0100 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <470627FC.50200@ar.media.kyoto-u.ac.jp> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> Message-ID: <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> dear Davis, thanks a lot, I'm going to try... i do not know if I have the fortran compiler, i think so because the administrator has installed gcc4.1 few months ago and i guess he installed all the packages thanks On 10/5/07, David Cournapeau wrote: > > marco turchi wrote: > > Dear Davis, > > I need to install the new version of python, because the old one has > > not the python-dev and the administrator of the machine told me that > > he could not install it... do not ask me the reason :-( > Ah, ok. Then you do not seem to have any alternative, indeed. > > Do i need to install BLAS??? Where can i find it? > Yes, you need to install BLAS (and LAPACK as well). Either you can use > RHEL packages for blas/lapack, or you need to compile them by yourself > too. For that, you need a fortran compiler as well (if you have to > install BLAS/LAPACK by yourself, you may be interested in my gar > scripts: > > http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.3.tbz2 > , > which makes the installation a bit easier: it installs everything from > blas/lapack to scipy in a self contained directory, for cases exactly > like yours where people do not have root rights). > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dmitrey.kroshko at scipy.org Fri Oct 5 08:53:38 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 05 Oct 2007 15:53:38 +0300 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite Message-ID: <470633D2.7060701@scipy.org> hi all, how to construct a matrix A of shape (m,n) such that dot(A.T, A) is positive-definite? I am trying to translate some matlab code (box-bounded linear least squares) to Python, and I constantly get "LinAlgError: Matrix is not positive definite - Cholesky decomposition cannot be computed" Maybe it's because I commented out the lines that use the colmmd func (Sparse column minimum degree permutation) - can that be the reason? Thank you in advance for your suggestions, D. From marco.turchi at gmail.com Fri Oct 5 09:43:24 2007 From: marco.turchi at gmail.com (marco turchi) Date: Fri, 5 Oct 2007 14:43:24 +0100 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> Message-ID: <79a042480710050643v67894f6bsaffe1af9fe826b76@mail.gmail.com> Dear David, is there a place where I can see what has been installed correctly? I have seen some errors during the installation with your file, but it scrolls by too fast for me to read them. Shall I remove the numpy that I installed before? Shall I install scipy after your software, or does it take care of everything? When I install numpy and scipy I need to force the installer to use the new version of Python rather than the default one; how can I set that? thanks Marco On 10/5/07, marco turchi wrote: > > dear David, > thanks a lot, I'm going to try...
> i do not know if I have the fortran compiler, i think so because the > administrator has installed gcc4.1 few months ago and i guess he installed > all the packages > > thanks > > > On 10/5/07, David Cournapeau wrote: > > > > marco turchi wrote: > > > Dear Davis, > > > I need to install the new version of python, because the old one has > > > not the python-dev and the administrator of the machine told me that > > > he could not install it... do not ask me the reason :-( > > Ah, ok. Then you do not seem to have any alternative, indeed. > > > Do i need to install BLAS??? Where can i find it? > > Yes, you need to install BLAS (and LAPACK as well). Either you can use > > RHEL packages for blas/lapack, or you need to compile them by yourself > > too. For that, you need a fortran compiler as well (if you have to > > install BLAS/LAPACK by yourself, you may be interested in my gar > > scripts: > > > > http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.3.tbz2 > > , > > which makes the installation a bit easier: it installs everything from > > blas/lapack to scipy in a self contained directory, for cases exactly > > like yours where people do not have root rights). > > > > cheers, > > > > David > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Fri Oct 5 10:07:31 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 5 Oct 2007 16:07:31 +0200 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <470633D2.7060701@scipy.org> References: <470633D2.7060701@scipy.org> Message-ID: <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> for every A, dot(A.T, A) is always semi positive definite. it is strictly positive definite if it hasn't a zero eigenvalue. 
make sure your A doesn't have a zero eigenvalue. (if your A is sparse, as it seems, it could have at least one row or one column filled with zeros, hence a zero eigenvalue). hth, L. On 10/5/07, dmitrey wrote: > > hi all, > > howto construct matrix A of shape (m,n): dot(A.T, A) is positive-definite? > > I try to translate some matlab code (box-bounded linear least squares) > to Python, and I constantly get > "LinAlgError: Matrix is not positive definite - Cholesky > decomposition cannot be computed" > > Maybe it's because I comment those lines of using colmmd func (Sparse > column minimum degree permutation)- > can it be the reason? > > Thank you in advance for your suggestions, D. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Fri Oct 5 10:45:15 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 5 Oct 2007 10:45:15 -0400 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> References: <470633D2.7060701@scipy.org> <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> Message-ID: On 05/10/2007, lorenzo bolla wrote: > for every A, dot(A.T, A) is always semi positive definite. > it is strictly positive definite if it hasn't a zero eigenvalue. > make sure your A doesn't have a zero eigenvalue. > (if your A is sparse, as it seems, it could have at least one row or one > column filled with zeros, hence a zero eigenvalue). Also, if A is n by m, then dot(A.T, A) will be m by m, but its rank will be at most min(m,n). 
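The eigenvalue point above is easy to check numerically; a small sketch with made-up array sizes (not code from the thread):

```python
import numpy as np

rng = np.random.RandomState(0)

# Full column rank (more rows than columns, generic entries):
# dot(A.T, A) is symmetric positive definite, so Cholesky succeeds.
A = rng.rand(6, 4)
G = np.dot(A.T, A)
L = np.linalg.cholesky(G)  # would raise LinAlgError if G were not SPD
assert np.allclose(np.dot(L, L.T), G)

# Zero out one column: the Gram matrix picks up a zero eigenvalue
# and Cholesky fails with exactly the error quoted in this thread.
B = A.copy()
B[:, 2] = 0.0
GB = np.dot(B.T, B)
assert np.linalg.eigvalsh(GB).min() < 1e-12
try:
    np.linalg.cholesky(GB)
except np.linalg.LinAlgError:
    print("Cholesky failed, as expected for a singular Gram matrix")
```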
So in particular if n < m, dot(A.T, A) can never be positive definite. dot(A, A.T) may be. From marco.turchi at gmail.com (marco turchi) Subject: Re: [SciPy-user] problem installing sciPy References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> <79a042480710050643v67894f6bsaffe1af9fe826b76@mail.gmail.com> Message-ID: <79a042480710050748h1ead5eabk5329c1cba4b27f71@mail.gmail.com> Dear David, I have sent the output of the installation to a file. I found the following errors: 1) ./xconfig -d s /usr/local/scipy/garnumpy-0.3/bootstrap/ATLAS/work/main.d/atlas-3.7.33/GarObj/../ -d b /usr/local/scipy/garnumpy-0.3/bootstrap/ATLAS/work/main.d/atlas-3.7.33/GarObj -Si archdef 1 -Ss flapack /home/ematxmt/garnumpyinstall/lib/liblapack.a -b 32 -C if g77 -Fa alg ERROR around arg 20 (out of arguments). 2) building extension "numpy.random.mtrand" sources creating build/src.linux-i686-2.5/numpy/random C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/local/Python-2.5.1/Include -I/usr/local/Python-2.5.1 -c' gcc: _configtest.c _configtest.c:7:2: #error No _WIN32 _configtest.c:7:2: #error No _WIN32 failure. removing: _configtest.c _configtest.o 3) C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/local/Python-2.5.1/Include -I/usr/local/Python-2.5.1 -c' gcc: _configtest.c _configtest.c:7:2: #error No _WIN32 _configtest.c:7:2: #error No _WIN32 failure. removing: _configtest.c _configtest.o 4) gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ematxmt/garnumpyinstall/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include -I/usr/include -I. -Isrc -Iswig -Iagg23/include -I. -I/usr/local/include -I/usr/include -I.
-I/home/ematxmt/garnumpyinstall/lib/python2.5/site-packages/numpy/core/include/freetype2 -I/usr/local/include/freetype2 -I/usr/include/freetype2 -I./freetype2 -Isrc/freetype2 -Iswig/freetype2 -Iagg23/include/freetype2 -I./freetype2 -I/usr/local/include/freetype2 -I/usr/include/freetype2 -I./freetype2 -I/usr/local/Python-2.5.1/Include -I/usr/local/Python-2.5.1 -c src/_image.cpp -o build/temp.linux-i686-2.5/src/_image.o -DSCIPY=1 cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++ src/_image.cpp:5:17: png.h: No such file or directory In file included from /usr/local/Python-2.5.1/Include/Python.h:8, from src/_image.cpp:7: /usr/local/Python-2.5.1/pyconfig.h:932:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/i386-redhat-linux/bits/os_defines.h:39, from /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/i386-redhat-linux/bits/c++config.h:35, from /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/iostream:44, from src/_image.cpp:1: /usr/include/features.h:150:1: warning: this is the location of the previous definition src/_image.cpp: In member function `Py::Object Image::write_png(const Py::Tuple&)': src/_image.cpp:654: error: `png_structp' was not declared in this scope src/_image.cpp:654: error: expected `;' before "png_ptr" src/_image.cpp:655: error: `png_infop' was not declared in this scope src/_image.cpp:655: error: expected `;' before "info_ptr" etc etc is it possible that for error 4) it depends on the old version of gcc? sorry about all these email... Thanks Marco -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dominique.orban at gmail.com Fri Oct 5 11:31:43 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 5 Oct 2007 11:31:43 -0400 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: References: <470633D2.7060701@scipy.org> <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> Message-ID: <8793ae6e0710050831n172d41f3u164f2b86b0bad059@mail.gmail.com> On 10/5/07, Anne Archibald wrote: > On 05/10/2007, lorenzo bolla wrote: > > for every A, dot(A.T, A) is always semi positive definite. > > it is strictly positive definite if it hasn't a zero eigenvalue. > > make sure your A doesn't have a zero eigenvalue. > > (if your A is sparse, as it seems, it could have at least one row or one > > column filled with zeros, hence a zero eigenvalue). > > Also, if A is n by m, then dot(A.T, A) will be m by m, but its rank > will be at most min(m,n). So in particular if n < m, dot(A.T, A) can never be positive definite; dot(A, A.T) may be. I believe dot(X,Y) means X^T Y, so that dot(A.T, A) is in fact AA=A^2. It is dot(A,A) that is always positive semi-definite (and positive definite iff A has full rank). Also, matrices appearing in least-squares algorithms are usually symmetric, and most people define "positive definiteness" for symmetric matrices only. Do you also want AA to be symmetric? Dominique From dominique.orban at gmail.com Fri Oct 5 11:33:40 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 5 Oct 2007 11:33:40 -0400 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <470633D2.7060701@scipy.org> References: <470633D2.7060701@scipy.org> Message-ID: <8793ae6e0710050833x235e438er9537102d59865790@mail.gmail.com> On 10/5/07, dmitrey wrote: > Maybe it's because I comment those lines of using colmmd func (Sparse > column minimum degree permutation)- > can it be the reason? No. This only affects sparsity of the Cholesky factors.
Dominique From robert.kern at gmail.com Fri Oct 5 12:38:28 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 05 Oct 2007 11:38:28 -0500 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <8793ae6e0710050831n172d41f3u164f2b86b0bad059@mail.gmail.com> References: <470633D2.7060701@scipy.org> <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> <8793ae6e0710050831n172d41f3u164f2b86b0bad059@mail.gmail.com> Message-ID: <47066884.5060900@gmail.com> Dominique Orban wrote: > I believe dot(X,Y) means X^T Y, so that dot(A.T, A) is in fact AA=A^2. No, this is not correct. dot() is simply matrix multiplication. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cburns at berkeley.edu Fri Oct 5 12:51:43 2007 From: cburns at berkeley.edu (Christopher Burns) Date: Fri, 5 Oct 2007 09:51:43 -0700 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> Message-ID: <764e38540710050951n271dc2d0g2034623ae9d7e91a@mail.gmail.com> > I'm able to run numpy, call the help and so on, but I try to run the numpy > test, but I got this > ./../Python-2.5.1/python -c 'import numpy; numpy.test()' > Running from numpy source directory. > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'module' object has no attribute 'test' > > Do you think that the problem is here in numpy?? > > Numpy is probably working fine. You need to change out of the numpy source directory. Try your home directory or go up one level and run the test again. 
-- Christopher Burns, Software Engineer Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominique.orban at gmail.com Fri Oct 5 12:58:54 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 5 Oct 2007 12:58:54 -0400 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <47066884.5060900@gmail.com> References: <470633D2.7060701@scipy.org> <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> <8793ae6e0710050831n172d41f3u164f2b86b0bad059@mail.gmail.com> <47066884.5060900@gmail.com> Message-ID: <8793ae6e0710050958r12a9b0f8l605e10cbb9c2cf8@mail.gmail.com> On 10/5/07, Robert Kern wrote: > Dominique Orban wrote: > > > I believe dot(X,Y) means X^T Y, so that dot(A.T, A) is in fact AA=A^2. > > No, this is not correct. dot() is simply matrix multiplication. Isn't this counter-intuitive, given that for vectors, dot(u,v) is u^T v ? Dominique From gael.varoquaux at normalesup.org Fri Oct 5 13:14:52 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 5 Oct 2007 19:14:52 +0200 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <8793ae6e0710050958r12a9b0f8l605e10cbb9c2cf8@mail.gmail.com> References: <470633D2.7060701@scipy.org> <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> <8793ae6e0710050831n172d41f3u164f2b86b0bad059@mail.gmail.com> <47066884.5060900@gmail.com> <8793ae6e0710050958r12a9b0f8l605e10cbb9c2cf8@mail.gmail.com> Message-ID: <20071005171452.GC30895@clipper.ens.fr> On Fri, Oct 05, 2007 at 12:58:54PM -0400, Dominique Orban wrote: > > No, this is not correct. dot() is simply matrix multiplication. > Isn't this counter-intuitive, given that for vectors, dot(u,v) is u^T v ? That's why, when you do linear algebra, you write u^t v for the scalar product of two vectors. 
In a curved space of metric G (G is a square matrix whose dimension is the dimension of the space), you write the scalar product u.v = u^t G v, ... And so on, and so forth. You cannot interchange scalar product and matrix multiplication. Gaël From robert.kern at gmail.com Fri Oct 5 13:29:34 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 05 Oct 2007 12:29:34 -0500 Subject: [SciPy-user] howto construct matrix A: dot(A.T, A) is positive-definite In-Reply-To: <8793ae6e0710050958r12a9b0f8l605e10cbb9c2cf8@mail.gmail.com> References: <470633D2.7060701@scipy.org> <80c99e790710050707n79bb0dd6ma21c3c04e2ea3f99@mail.gmail.com> <8793ae6e0710050831n172d41f3u164f2b86b0bad059@mail.gmail.com> <47066884.5060900@gmail.com> <8793ae6e0710050958r12a9b0f8l605e10cbb9c2cf8@mail.gmail.com> Message-ID: <4706747E.7090104@gmail.com> Dominique Orban wrote: > On 10/5/07, Robert Kern wrote: >> Dominique Orban wrote: >> >>> I believe dot(X,Y) means X^T Y, so that dot(A.T, A) is in fact AA=A^2. >> No, this is not correct. dot() is simply matrix multiplication. > > Isn't this counter-intuitive, given that for vectors, dot(u,v) is u^T v ? Well, I lied about dot() being matrix multiplication. It is if the two arguments are matrices. Really it is an operation on N-D arrays: a product-sum over the last dimension of the first argument and the second-to-last dimension (or the closest in case of len(v.shape) < 2) of the second argument. There are two hypothetical paths you can take to explain the "meaning" of that very mechanistic operation: * You could say that we treat u and v both like an (n,1) column vector and dot(u, v) is equivalent to (u^T v). * You could say that we treat the first shape-(n,) array like a (1,n) row vector and the second shape-(n,) array like an (n,1) column vector and do matrix multiplication on them directly. The latter is consistent with the other behavior of dot() while the former is not.
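[Editorial note: the "row times column" reading of dot() for 1-D arrays can be checked directly, and Gael's metric scalar product u^t G v is just two matrix multiplications. A quick sketch; the metric G below is a made-up example:]

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# 1-D arrays: dot() gives the scalar product u^T v.
s = np.dot(u, v)
print(s)  # 32.0

# Second reading: a (1,n) row times an (n,1) column, i.e. plain
# matrix multiplication -- the same number, wrapped in a (1,1) array.
row = u.reshape(1, -1)
col = v.reshape(-1, 1)
print(np.dot(row, col))  # [[32.]]

# Curved-space scalar product u.v = u^T G v with a (hypothetical)
# diagonal metric: 1*1*4 + 2*2*5 + 3*3*6 = 78.
G = np.diag([1.0, 2.0, 3.0])
print(np.dot(u, np.dot(G, v)))  # 78.0
```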
Since they're both hypothetical semantics, we are free to choose the one that makes sense. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From drfredkin at ucsd.edu Fri Oct 5 14:46:21 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Fri, 5 Oct 2007 18:46:21 +0000 (UTC) Subject: [SciPy-user] first order pde References: <43075.146.230.224.85.1190984554.squirrel@webmail.aims.ac.za> Message-ID: Issa Karambal wrote: > Hi all, > > I am having problems solving numerically a hyperbolic equation like > u_x=-pi*u_t, u(x,0)=exp(sin(x)), u(0,t)=u(2pi,t)=1. I used the method > of lines; so I discretized the spatial domain using the centered > finite difference method and thereafter I used the implicit Euler > method, but I am still having problems after long time integration. > Does anyone have an idea how I can solve my problem numerically? > > THANKS, > > issa You can use the method of characteristics. The example you give is easily solved analytically (and there is trouble with the boundary condition at x = 2pi, as you might expect). If you must solve the equation numerically, I'd suggest an upwind differencing method. Don -- From mhaligowski at googlemail.com Fri Oct 5 16:46:02 2007 From: mhaligowski at googlemail.com (halish) Date: Fri, 5 Oct 2007 21:46:02 +0100 Subject: [SciPy-user] Multivariate regression? Message-ID: Hello, in the first place, I'd like to say 'Hi' to all the subscribers, as this is my first mail. I study econometrics and statistics, and I wanted to try out my favourite programming language as a scientific tool. Unfortunately, I could not find a function for ordinary least squares estimation. The stats.linregress() is insufficient for my needs, and stats.glm() seems to be for something else (not sure really). Would you guys suggest something? Or should I just write it on my own?
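[Editorial note: an ordinary least squares fit like the one asked about here can also be written directly on top of numpy's lstsq. A minimal sketch with made-up data; all variable names are for illustration only:]

```python
import numpy as np

rng = np.random.RandomState(0)

# Fake data generated from y = 2*x1 - 1*x2 + 0.5 plus small noise.
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 + 0.01 * rng.normal(size=100)

# Append a column of ones for the intercept, then solve the
# least-squares problem min ||A beta - y||^2.
A = np.column_stack([X, np.ones(len(X))])
beta, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)

print(np.round(beta, 2))  # approximately [ 2.  -1.   0.5]
```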
Regards, halish From ryanlists at gmail.com Fri Oct 5 17:10:02 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 5 Oct 2007 16:10:02 -0500 Subject: [SciPy-user] Multivariate regression? In-Reply-To: References: Message-ID: This may be unnecessarily complicated, but I almost always use fmin where I have defined some squared error that I want to minimize. This essentially lets you do an arbitrarily complicated expression. To use it to do a linear regression, you would use a cost function like:

def myfunc(c):
    model = c[0]*x+c[1]
    error_squared = (model - y_exp)**2
    return error_squared.sum()

where x and y_exp would have to be defined in the script before the function is called.

c_final = optimize.fmin(myfunc, [m0, b0])

would then fit the data using m0 and b0 as initial guesses. fmin essentially varies the parameters in the vector c until a minimum of the returned value is found. You could use any model you wanted. HTH, Ryan On 10/5/07, halish wrote: > Hello, > > in the first place, I'd like to say 'Hi' to all the subscribers, as > this is my first mail. > > I study econometrics and statistics, and i wanted to try out my > favourite programming as a scientific tool. Unfortunately, I could not > find a function for ordinary least squares estimation. The > stats.linregress() is insufficient for my needs, and stats.glm() seems > to be for something else (not sure really). > > Would you guys suggest something? Or should I just write it on my own? > > Regards, > halish > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aisaac at american.edu Fri Oct 5 17:18:25 2007 From: aisaac at american.edu (Alan Isaac) Date: Fri, 5 Oct 2007 17:18:25 -0400 Subject: [SciPy-user] Multivariate regression?
In-Reply-To: References: Message-ID: On Fri, 5 Oct 2007, halish wrote: > I study econometrics and statistics, and i wanted to try > out my favourite programming as a scientific tool. > Unfortunately, I could not find a function for ordinary > least squares estimation. The stats.linregress() is > insufficient for my needs http://econpy.googlecode.com/svn/trunk/pytrix/ls.py hth, Alan Isaac From cburns at berkeley.edu Fri Oct 5 17:46:28 2007 From: cburns at berkeley.edu (Christopher Burns) Date: Fri, 5 Oct 2007 14:46:28 -0700 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <79a042480710050748h1ead5eabk5329c1cba4b27f71@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> <79a042480710050643v67894f6bsaffe1af9fe826b76@mail.gmail.com> <79a042480710050748h1ead5eabk5329c1cba4b27f71@mail.gmail.com> Message-ID: <764e38540710051446n3966db1avc5e7abec58fca590@mail.gmail.com> Marco, I have not looked through your error messages closely, but perhaps it's worth taking a step back and trying an easier install. As David mentioned, SciPy supports Python 2.3, which should be installed on your system. I'd recommend just going with that. A basic install:

0) Read scipy-0.6.0/INSTALL.txt. (if you haven't done so already) It includes the basic install instructions and various install options.
1) Install lapack through an rpm. You can find them here: http://www.netlib.org/lapack/rpms/
2) Install numpy 1.0.3.1 from the tarball. (It appears you already did this successfully once.)
3) Install scipy 0.6.0 from the tarball.

That should be sufficient to get you up and running. As you mentioned, there are a lot of messages flying past during the build, some of which appear to be errors. If the build succeeds you can probably ignore those messages.
They're usually along the lines of: couldn't find ideal fortran compiler but found this one instead... Chris On 10/5/07, marco turchi wrote: > > Dear Davis, > I have send to a file the output of the installation. > I have found the following error: > 1) > ./xconfig -d s /usr/local/scipy/garnumpy-0.3 > /bootstrap/ATLAS/work/main.d/atlas-3.7.33/GarObj/../ -d b > /usr/local/scipy/garnumpy- 0.3/bootstrap/ATLAS/work/main.d/atlas-3.7.33/GarObj > -Si archdef 1 -Ss flapack /home/ematxmt/garnumpyinstall/lib/liblapack.a -b > 32 -C if g77 -Fa alg > > ERROR around arg 20 (out of arguments). > > 2) > building extension " numpy.random.mtrand" sources > creating build/src.linux-i686-2.5/numpy/random > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC > compile options: '-Inumpy/core/src -Inumpy/core/include > -I/usr/local/Python- 2.5.1/Include -I/usr/local/Python-2.5.1 -c' > gcc: _configtest.c > _configtest.c:7:2: #error No _WIN32 > _configtest.c:7:2: #error No _WIN32 > failure. > removing: _configtest.c _configtest.o > > 3) > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC > > compile options: '-Inumpy/core/src -Inumpy/core/include > -I/usr/local/Python-2.5.1/Include -I/usr/local/Python-2.5.1 -c' > gcc: _configtest.c > _configtest.c:7:2: #error No _WIN32 > _configtest.c:7:2: #error No _WIN32 > failure. > removing: _configtest.c _configtest.o > > 4) > gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC > -I/home/ematxmt/garnumpyinstall/lib/python2.5/site-packages/numpy/core/include > -I/usr/local/include -I/usr/include -I. -Isrc -Iswig -Iagg23/include -I. > -I/usr/local/include -I/usr/include -I. 
> -I/home/ematxmt/garnumpyinstall/lib/python2.5/site-packages/numpy/core/include/freetype2 > -I/usr/local/include/freetype2 -I/usr/include/freetype2 -I./freetype2 > -Isrc/freetype2 -Iswig/freetype2 -Iagg23/include/freetype2 -I./freetype2 > -I/usr/local/include/freetype2 -I/usr/include/freetype2 -I./freetype2 > -I/usr/local/Python- 2.5.1/Include -I/usr/local/Python-2.5.1 -c > src/_image.cpp -o build/temp.linux-i686-2.5/src/_image.o -DSCIPY=1 > cc1plus: warning: command line option "-Wstrict-prototypes" is valid for > Ada/C/ObjC but not for C++ > src/_image.cpp:5:17: png.h: No such file or directory > In file included from /usr/local/Python-2.5.1/Include/Python.h:8, > from src/_image.cpp:7: > /usr/local/Python-2.5.1/pyconfig.h:932:1: warning: "_POSIX_C_SOURCE" > redefined > In file included from > /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/i386-redhat-linux/bits/os_defines.h:39, > from > /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/i386-redhat-linux/bits/c++config.h:35, > > from > /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/iostream:44, > from src/_image.cpp:1: > /usr/include/features.h:150:1: warning: this is the location of the > previous definition > src/_image.cpp: In member function `Py::Object Image::write_png(const > Py::Tuple&)': > src/_image.cpp:654: error: `png_structp' was not declared in this scope > src/_image.cpp:654: error: expected `;' before "png_ptr" > src/_image.cpp:655: error: `png_infop' was not declared in this scope > src/_image.cpp:655: error: expected `;' before "info_ptr" > etc etc > > is it possible that for error 4) it depends on the old version of gcc? > > sorry about all these email... 
> Thanks > Marco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Christopher Burns, Software Engineer Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.turchi at gmail.com Fri Oct 5 18:07:53 2007 From: marco.turchi at gmail.com (marco turchi) Date: Fri, 5 Oct 2007 23:07:53 +0100 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <764e38540710051446n3966db1avc5e7abec58fca590@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> <79a042480710050643v67894f6bsaffe1af9fe826b76@mail.gmail.com> <79a042480710050748h1ead5eabk5329c1cba4b27f71@mail.gmail.com> <764e38540710051446n3966db1avc5e7abec58fca590@mail.gmail.com> Message-ID: <79a042480710051507j40a7968dr4f6a8ec5351cab23@mail.gmail.com> Dear Chris, I have just said to David that I have tried to install numpy and scipy using python 2.3, but python-dev is not on that machine, and the administrator cannot install it; he has some problems with licences. I guess that I cannot use rpm, because I'm not an administrator. That's why I have installed another version of Python in /usr/local and then followed the normal instructions for numpy and scipy. The installation runs to the end, but when I open Python (new or old version) I can import numpy but I cannot import scipy, because the module is not present. I can try again from the beginning, but I do not know how useful that would be. Sorry for all these troubles.
thanks a lot Marco On 10/5/07, Christopher Burns wrote: > > Marco, > > I have not looked through your error messages closely, but perhaps it's > worth taking a step back and trying an easier install. > > As David mentioned, SciPy supports Python 2.3, which should be installed > on your system. I'd recommend just going with that. A basic install: > 0) Read scipy-0.6.0/INSTALL.txt. (if you haven't done so already) > It includes the basic install instructions and various install > options. > 1) Install lapack through an rpm. You can find them here: > http://www.netlib.org/lapack/rpms/ > 2) Install numpy 1.0.3.1 from the tarball. (I appears you already did > this successfully once.) > 3) Install scipy 0.6.0 from the tarball. > > That should be sufficient to get you up and running. > > As you mentioned, there are a lot of messages flying past during the > build, some of which appear to be errors. If the build succeeds you can > probably ignore those messages. They're usually along the lines of: > couldn't find ideal fortran compiler but found this one instead... > > Chris > > On 10/5/07, marco turchi wrote: > > > > Dear Davis, > > I have send to a file the output of the installation. > > I have found the following error: > > 1) > > ./xconfig -d s /usr/local/scipy/garnumpy-0.3 > > /bootstrap/ATLAS/work/main.d/atlas-3.7.33/GarObj/../ -d b > > /usr/local/scipy/garnumpy- 0.3/bootstrap/ATLAS/work/main.d/atlas-3.7.33/GarObj > > -Si archdef 1 -Ss flapack /home/ematxmt/garnumpyinstall/lib/liblapack.a -b > > 32 -C if g77 -Fa alg > > > > ERROR around arg 20 (out of arguments). 
> > > > 2) > > building extension " numpy.random.mtrand" sources > > creating build/src.linux-i686-2.5/numpy/random > > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > > -Wstrict-prototypes -fPIC > > compile options: '-Inumpy/core/src -Inumpy/core/include > > -I/usr/local/Python- 2.5.1/Include -I/usr/local/Python-2.5.1 -c' > > gcc: _configtest.c > > _configtest.c:7:2: #error No _WIN32 > > _configtest.c:7:2: #error No _WIN32 > > failure. > > removing: _configtest.c _configtest.o > > > > 3) > > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > > -Wstrict-prototypes -fPIC > > > > compile options: '-Inumpy/core/src -Inumpy/core/include > > -I/usr/local/Python-2.5.1/Include -I/usr/local/Python-2.5.1 -c' > > gcc: _configtest.c > > _configtest.c:7:2: #error No _WIN32 > > _configtest.c:7:2: #error No _WIN32 > > failure. > > removing: _configtest.c _configtest.o > > > > 4) > > gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > > -Wstrict-prototypes -fPIC > > -I/home/ematxmt/garnumpyinstall/lib/python2.5/site-packages/numpy/core/include > > -I/usr/local/include -I/usr/include -I. -Isrc -Iswig -Iagg23/include -I. > > -I/usr/local/include -I/usr/include -I. 
> > -I/home/ematxmt/garnumpyinstall/lib/python2.5/site-packages/numpy/core/include/freetype2 > > -I/usr/local/include/freetype2 -I/usr/include/freetype2 -I./freetype2 > > -Isrc/freetype2 -Iswig/freetype2 -Iagg23/include/freetype2 -I./freetype2 > > -I/usr/local/include/freetype2 -I/usr/include/freetype2 -I./freetype2 > > -I/usr/local/Python- 2.5.1/Include -I/usr/local/Python-2.5.1 -c > > src/_image.cpp -o build/temp.linux-i686-2.5/src/_image.o -DSCIPY=1 > > cc1plus: warning: command line option "-Wstrict-prototypes" is valid for > > Ada/C/ObjC but not for C++ > > src/_image.cpp:5:17: png.h: No such file or directory > > In file included from /usr/local/Python-2.5.1/Include/Python.h:8, > > from src/_image.cpp:7: > > /usr/local/Python-2.5.1/pyconfig.h:932:1: warning: "_POSIX_C_SOURCE" > > redefined > > In file included from > > /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/i386-redhat-linux/bits/os_defines.h:39, > > from > > /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/i386-redhat-linux/bits/c++config.h:35, > > > > from > > /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/iostream:44, > > from src/_image.cpp:1: > > /usr/include/features.h:150:1: warning: this is the location of the > > previous definition > > src/_image.cpp: In member function `Py::Object Image::write_png(const > > Py::Tuple&)': > > src/_image.cpp:654: error: `png_structp' was not declared in this scope > > src/_image.cpp:654: error: expected `;' before "png_ptr" > > src/_image.cpp:655: error: `png_infop' was not declared in this scope > > src/_image.cpp:655: error: expected `;' before "info_ptr" > > etc etc > > > > is it possible that for error 4) it depends on the old version of gcc? > > > > sorry about all these email... 
> > Thanks > > Marco > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > Christopher Burns, Software Engineer > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Oct 5 23:04:48 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 06 Oct 2007 12:04:48 +0900 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <79a042480710051507j40a7968dr4f6a8ec5351cab23@mail.gmail.com> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> <79a042480710050643v67894f6bsaffe1af9fe826b76@mail.gmail.com> <79a042480710050748h1ead5eabk5329c1cba4b27f71@mail.gmail.com> <764e38540710051446n3966db1avc5e7abec58fca590@mail.gmail.com> <79a042480710051507j40a7968dr4f6a8ec5351cab23@mail.gmail.com> Message-ID: <4706FB50.8090803@ar.media.kyoto-u.ac.jp> marco turchi wrote: > Dear Chris, > I have just said to David that i have tried to install numpy and > scipy using python 2.3, but on that machine there is not python-dev, > and the administrator cannot install it, he has some problems with > licences. I really cannot see any license related problems with python-dev which would not be present in python. But I digress. > I guess that I cannot use rpm, because I'm not administrator. 
That's > the reason because I have installed in /usr/local another version of > Python and then I have followed the normal instruction for numpy and > scipy. > > The installation arrives to the end, but when I open Python, new or > old version, I can import Numpy, but I cannot import scipy, because > the module is not present. > > I can try another time to do everything from the beginning, but i do > not know how useful it can be. As Chris said: if you can get your administrator to install python-dev, blas and lapack, this is the easiest. If you can't, then garnumpy can help you. The principle of garnumpy is : - to fetch the sources of all necessary softwares/libraries from internet - build them with consistent compiler options - install it in one location (for example, $HOME/garnumpyinstall, the default). I think this should be easier than compiling everything by yourself, specially if you are not familiar with building complicated packages. I've put a new version online, because I realized that I did not update the scripts for the new numpy/scipy versions. So remove the one you've download, and use this URL instead: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.4.tbz2 Most of the options should be changed in gar.conf.mk. For example, changing python -> change the line PYTHON to the full path of your python (e.g. /usr/local/bin/python in your case). In your case, this is the only required change. To build and install numpy/scipy with it, you do: - make clean -> do this if you change anything in gar.conf.mk. Do it once you changed the python location. - make garchive -> this will download everything. - cd platform/scipy && make install -> this will build scipy and all the necessary softwares. If this fails, please paste all the output, because otherwise, it is difficult to see the exact problem. 
But if you have a C and fortran compiler, it should not fail, cheers, David From thamelry at binf.ku.dk Sat Oct 6 03:53:42 2007 From: thamelry at binf.ku.dk (Thomas Hamelryck) Date: Sat, 6 Oct 2007 09:53:42 +0200 Subject: [SciPy-user] Multivariate regression? In-Reply-To: References: Message-ID: <2d7c25310710060053q5e53ad5rae1249abada4ffe6@mail.gmail.com> Another solution would be to use R via its python bindings. See http://rpy.sourceforge.net/ Cheers, -Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnata at obs.univ-lyon1.fr Sat Oct 6 08:38:08 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Sat, 06 Oct 2007 14:38:08 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FE8724.80500@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> Message-ID: <470781B0.7060608@obs.univ-lyon1.fr> Xavier Gnata wrote: > Xavier Gnata wrote: > >> David Cournapeau wrote: >> >> >>> Xavier Gnata wrote: >>> >>> >>> >>>> yes sure! It was only a test to see if the bug is stil there or not. >>>> The result is clear : It is still here. >>>> Could someting else help you to fix that? >>>> Can anyone reproduce that? >>>> >>>> >>>> >>> Well, the problem in #404 looks like the worse ones: the ones which >>> depend on compiler/interpreter versions. The fact that the problem does >>> not appear under valgrind is quite intriguing. >>> >>> cheers, >>> >>> David >>> >>> >>> >>> >> It *does* on my box. 
>> >> ./valgrind_py.sh >> /usr/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py >> >> >> ==8678== LEAK SUMMARY: >> ==8678== definitely lost: 192 bytes in 2 blocks. >> ==8678== possibly lost: 33,027 bytes in 45 blocks. >> ==8678== still reachable: 14,427,776 bytes in 3,975 blocks. >> ==8678== suppressed: 0 bytes in 0 blocks. >> ==8678== Reachable blocks (those to which a pointer was found) are not >> shown. >> ==8678== To see them, rerun with: --leak-check=full --show-reachable=yes >> --8678-- memcheck: sanity checks: 2666 cheap, 107 expensive >> --8678-- memcheck: auxmaps: 0 auxmap entries (0k, 0M) in use >> --8678-- memcheck: auxmaps: 0 searches, 0 comparisons >> --8678-- memcheck: SMs: n_issued = 454 (7264k, 7M) >> --8678-- memcheck: SMs: n_deissued = 14 (224k, 0M) >> --8678-- memcheck: SMs: max_noaccess = 65535 (1048560k, 1023M) >> --8678-- memcheck: SMs: max_undefined = 5 (80k, 0M) >> --8678-- memcheck: SMs: max_defined = 548 (8768k, 8M) >> --8678-- memcheck: SMs: max_non_DSM = 442 (7072k, 6M) >> --8678-- memcheck: max sec V bit nodes: 0 (0k, 0M) >> --8678-- memcheck: set_sec_vbits8 calls: 0 (new: 0, updates: 0) >> --8678-- memcheck: max shadow mem size: 7376k, 7M >> --8678-- translate: fast SP updates identified: 29,307 ( 85.4%) >> --8678-- translate: generic_known SP updates identified: 3,821 ( 11.1%) >> --8678-- translate: generic_unknown SP updates identified: 1,168 ( 3.4%) >> --8678-- tt/tc: 2,527,821 tt lookups requiring 3,090,497 probes >> --8678-- tt/tc: 2,527,821 fast-cache updates, 2 flushes >> --8678-- transtab: new 28,855 (653,474 -> 10,476,946; ratio >> 160:10) [0 scs] >> --8678-- transtab: dumped 0 (0 -> ??) >> --8678-- transtab: discarded 0 (0 -> ??) >> --8678-- scheduler: 266,643,629 jumps (bb entries). >> --8678-- scheduler: 2,666/3,334,044 major/minor sched events. >> --8678-- sanity: 2667 cheap, 107 expensive checks. 
>> --8678-- exectx: 30,011 lists, 7,141 contexts (avg 0 per list) >> --8678-- exectx: 1,014,663 searches, 1,036,327 full compares (1,021 >> per 1000) >> --8678-- exectx: 1,115,754 cmp2, 6,184 cmp4, 0 cmpAll >> Illegal instruction >> >> and then valgrind crashes. >> >> >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge >> mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss tm pbe nx est tm2 >> >> g77 -v >> Reading specs from /usr/lib/gcc/i486-linux-gnu/3.4.6/specs >> Configured with: ../src/configure -v --enable-languages=c,c++,f77,pascal >> --prefix=/usr --libexecdir=/usr/lib >> --with-gxx-include-dir=/usr/include/c++/3.4 --enable-shared >> --with-system-zlib --enable-nls --without-included-gettext >> --program-suffix=-3.4 --enable-__cxa_atexit --enable-clocale=gnu >> --enable-libstdcxx-debug --with-tune=i686 i486-linux-gnu >> Thread model: posix >> >> Looks like compilation flags mismatch. >> >> Xavier >> >> >> >> > Using gdb : > > Program received signal SIGILL, Illegal instruction. > [Switching to Thread 0xb7d908c0 (LWP 9176)] > 0xb4ffad43 in Py_FilterFunc (buffer=0x833a088, filter_size=2, > output=0xbf8ee518, > data=0xbf8ee58c) at scipy/ndimage/src/nd_image.c:346 > > backtrace : > > (gdb) backtrace > #0 0xb4ffad43 in Py_FilterFunc (buffer=0x833a088, filter_size=2, > output=0xbf8ee518, data=0xbf8ee58c) at scipy/ndimage/src/nd_image.c:346 > #1 0xb4ffe211 in NI_GenericFilter (input=0x833a258, > function=0xb4ffad40 , data=0xbf8ee58c, > footprint=0x833a2b8, > output=0x8339fe8, mode=NI_EXTEND_REFLECT, cvalue=0, origins=0x81d9548) > at scipy/ndimage/src/ni_filters.c:858 > #2 0xb4ffc5ed in Py_GenericFilter (obj=0x0, args=0xb7a21dac) > at scipy/ndimage/src/nd_image.c:411 > #3 0x080b9f67 in PyEval_EvalFrame () > #4 0x080bb125 in PyEval_EvalCodeEx () > #5 0x080b9492 in PyEval_EvalFrame () > #6 0x080bb125 in PyEval_EvalCodeEx () > #7 0x080b9492 in PyEval_EvalFrame () > #8 0x080bb125 in PyEval_EvalCodeEx () > #9 0x08101ae6 in ?? () > #10 0xb70922a0 in ?? 
() > #11 0xb7030dfc in ?? () > #12 0x00000000 in ?? () > > So it is not the backtrace of #404...but it is close and most likely > related. > > Xavier > > The bug is still there using the last svn version. Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From lxander.m at gmail.com Sat Oct 6 10:13:53 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Sat, 6 Oct 2007 07:13:53 -0700 Subject: [SciPy-user] Multivariate regression? In-Reply-To: References: Message-ID: <525f23e80710060713t669aa894gdd3983fdace702a1@mail.gmail.com> On 10/5/07, halish wrote: > Hello, > > in the first place, I'd like to say 'Hi' to all the subscribers, as > this is my first mail. > > I study econometrics and statistics, and i wanted to try out my > favourite programming as a scientific tool. Unfortunately, I could not > find a function for ordinary least squares estimation. The > stats.linregress() is insufficient for my needs, and stats.glm() seems > to be for something else (not sure really). > > Would you guys suggest something? Or should I just write it on my own? You may find cookbook example OLS interesting: From matthew.brett at gmail.com Sat Oct 6 13:34:23 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 6 Oct 2007 13:34:23 -0400 Subject: [SciPy-user] Multivariate regression? In-Reply-To: <525f23e80710060713t669aa894gdd3983fdace702a1@mail.gmail.com> References: <525f23e80710060713t669aa894gdd3983fdace702a1@mail.gmail.com> Message-ID: <1e2af89e0710061034i60d4afd8ucd95d11cc88375af@mail.gmail.com> Hi, > > I study econometrics and statistics, and i wanted to try out my > > favourite programming as a scientific tool. Unfortunately, I could not > > find a function for ordinary least squares estimation. 
The > > stats.linregress() is insufficient for my needs, and stats.glm() seems > > to be for something else (not sure really). I think you may want the OLS model in scipy.stats.models: import numpy as N import scipy.stats.models as SSM covariates = N.random.normal(size=(100,3)) intercept = N.ones((100,1)) design = N.c_[covariates, intercept] data = N.random.normal(size=(100,1)) model = SSM.regression.ols_model(design) results = model.fit(data) results.beta The models package is still being developed, but it already does the basic stuff, Best, Matthew From robince at gmail.com Sat Oct 6 14:20:02 2007 From: robince at gmail.com (Robin) Date: Sat, 6 Oct 2007 19:20:02 +0100 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <470781B0.7060608@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> Message-ID: I get the a very similar problem on Windows with Cygwin gcc 3.4.4, ATLAS 3.7.37, lapack 3.1.1... In my case the error looks more like that described here: http://projects.scipy.org/pipermail/scipy-dev/2007-April/006846.html > generation of a binary structure 3 ... ok > generation of a binary structure 4 ... ok > generic filter 1 ... ERROR > generic 1d filter 1 but there didn't seem to be any follow up to that problem. If there's any more information I could provide please let me know (although I won't have access to the machine again until Monday). Cheers Robin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Sat Oct 6 14:34:28 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 6 Oct 2007 14:34:28 -0400 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> Message-ID: <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> Hi, > > generation of a binary structure 3 ... ok > > generation of a binary structure 4 ... ok > > generic filter 1 ... ERROR > > generic 1d filter 1 > > but there didn't seem to be any follow up to that problem. One cause of that error was fixed a long time ago. Are you using a recent version of SciPy? Matthew From robince at gmail.com Sat Oct 6 14:42:08 2007 From: robince at gmail.com (Robin) Date: Sat, 6 Oct 2007 19:42:08 +0100 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> Message-ID: Yes, latest SVN (as of Friday I think) On 10/6/07, Matthew Brett wrote: > > Hi, > > > > generation of a binary structure 3 ... ok > > > generation of a binary structure 4 ... ok > > > generic filter 1 ... ERROR > > > generic 1d filter 1 > > > > but there didn't seem to be any follow up to that problem. > > One cause of that error was fixed a long time ago. 
Are you using a > recent version of SciPy? > > Matthew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sat Oct 6 21:10:15 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 7 Oct 2007 03:10:15 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: References: <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> Message-ID: <20071007011015.GC9063@mentat.za.net> One common denominator here seems to be gcc 3.4. Have you tried compiling scipy with a more recent version? I'd appreciate it if you could. Please add your results to http://projects.scipy.org/scipy/scipy/ticket/404 I ran the latest ndimage through valgrind, and no memory errors are reported. Cheers Stéfan On Sat, Oct 06, 2007 at 07:42:08PM +0100, Robin wrote: > Yes, latest SVN (as of Friday I think) > > On 10/6/07, Matthew Brett wrote: > > Hi, > > > > generation of a binary structure 3 ... ok > > > generation of a binary structure 4 ... ok > > > generic filter 1 ... ERROR > > > generic 1d filter 1 From dmitrey.kroshko at scipy.org Sun Oct 7 09:20:50 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 07 Oct 2007 16:20:50 +0300 Subject: [SciPy-user] scipy.optimize.anneal doesn't work properly?
Message-ID: <4708DD32.1090804@scipy.org> Hi all, I'm trying to connect scipy.optimize anneal to openopt, and here are suspicious examples I constantly get that seem to show anneal() doesn't work properly: >>> from numpy import asfarray >>> x_opt = anneal(lambda x: (x**2).sum(), asfarray([1,-1])) Warning: Cooled to 31.093245 at [-5.56463126 0.35794414] but this is not the smallest point found. >>> x_opt (array([ 1.88050049, -1.49370399]), 5) (this is typical output, because anneal uses a random number generator) So, as you see, f(x_opt) is even greater than f(x0), because 1.88050049**2 +1.49370399**2 > 1**2+(-1)**2 We can observe the same problem even with nVars = 1: >>> x_opt = anneal(lambda x: (x**2).sum(), asfarray([0.1])) Warning: Cooled to 0.234018 at 0.483754056848 but this is not the smallest point found. >>> x_opt (array([ -8.99753917e-09]), 1) (typical output is x=+/-0.01...+/-0.8) Of course, maybe the scipy.optimize.anneal default lb-ub bounds, which are set to +/-100, affect the solution obtained (as for me, I dislike these defaults), but anyway - isn't it a bug, especially in the 1st example, where f(x_opt) is greater than f(x0)? For nVars = 3 the situation (as I expected) is even worse: >>> x_opt = anneal(lambda x: (x**2).sum(), asfarray([1,2,3])) Warning: Cooled to 1205.692393 at [ 9.59094938 31.96016313 -9.60489749] but this is not the smallest point found.
>>> x_opt (array([-28.66744536, 3.62546302, 15.94018009]), 5) Best regards, Dmitrey From lev at columbia.edu Sun Oct 7 10:11:24 2007 From: lev at columbia.edu (Lev Givon) Date: Sun, 7 Oct 2007 10:11:24 -0400 Subject: [SciPy-user] making an existing scipy installation use atlas libraries Message-ID: <20071007101124.0c738200@columbia.edu> The scipy installation instructions for Linux relating to atlas (http://tinyurl.com/2kuwcy) appear to imply that one can force an existing scipy installation that was built against netlib blas and lapack to take advantage of subsequently installed atlas libraries by directing the library loader to point to the atlas-provided libblas.so and liblapack.so libraries rather than their netlib equivalents. Does this approach provide inferior performance compared to building numpy/scipy directly against atlas (i.e., such that the atlas objects are built into the cblas/clapack modules)? L.G. From robince at gmail.com Sun Oct 7 11:43:54 2007 From: robince at gmail.com (Robin) Date: Sun, 7 Oct 2007 16:43:54 +0100 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <20071007011015.GC9063@mentat.za.net> References: <46FD729D.3060005@obs.univ-lyon1.fr> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> <20071007011015.GC9063@mentat.za.net> Message-ID: On 10/7/07, Stefan van der Walt wrote: > > One common denominator here seems to be gcc 3.4. Have you tried > compiling scipy with a more recent version? I'd appreciate it if you > could. I don't think that'll be so straightforward for me. Cygwin comes with 3.4.4 and I can't find any binaries for later versions so it would mean building it from source. 
Also I understand the reason MinGW and Cygwin haven't released any later versions yet is because of lots of problems on Win32... I'm reluctant to spend a lot more time on this (took me several tries and a couple of days to actually get atlas/numpy/scipy to build on Windows in the first place) with no guarantee it'll work. If anyone knows a later version of gcc that is known to build on Cygwin and has been used to successfully compile atlas/lapack/numpy/scipy then I'm happy to give it a try, but I'm reluctant to start investigating on my own, at the risk of spending a lot of time on a gcc version that just won't work. Given the difficulty of obtaining later versions of gcc on windows I think that for Cygwin/Mingw to be a supported platform, scipy should really build with the released versions and not require users to compile their own gcc in order to get it to work... Cheers Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sun Oct 7 11:50:46 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 7 Oct 2007 17:50:46 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: References: <46FD729D.3060005@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> <20071007011015.GC9063@mentat.za.net> Message-ID: > > If anyone knows a later version of gcc that is known to build on Cygwin > and has been used to successfully compile atlas/lapack/numpy/scipy then I'm > happy to give it a try, but I'm reluctant to start investigating on my own, > at the risk of spending a lot of time on a gcc version that just won't work. > > There is a French tutorial that can help here : http://philippe-dunski.developpez.com/compiler_gcc/ Didn't try it, but it could help for this matter. 
Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnata at obs.univ-lyon1.fr Sun Oct 7 17:18:51 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Sun, 07 Oct 2007 23:18:51 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: References: <46FD729D.3060005@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> <20071007011015.GC9063@mentat.za.net> Message-ID: <47094D3B.3080509@obs.univ-lyon1.fr> Matthieu Brucher wrote: > > If anyone knows a later version of gcc that is known to build on > Cygwin and has been used to successfully compile > atlas/lapack/numpy/scipy then I'm happy to give it a try, but I'm > reluctant to start investigating on my own, at the risk of > spending a lot of time on a gcc version that just won't work. > > > There is a French tutorial that can help here : > http://philippe-dunski.developpez.com/compiler_gcc/ > Didn't try it, but it could help for this matter. > > Matthieu I cannot compile it with gcc 4.2 because I'm using debian packages of blas/lapack/atlas. These packages are compiled with g77 (gcc 3.4). I have no clue why. Of course, I could remove these packages and compile blas/lapack/atlas myself (it is easy, I know the exact way to do that) but I simply have no time to do that for at least one or two weeks :( Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles André
69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From robert.kern at gmail.com Sun Oct 7 15:29:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 07 Oct 2007 14:29:42 -0500 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <20071007101124.0c738200@columbia.edu> References: <20071007101124.0c738200@columbia.edu> Message-ID: <470933A6.3030005@gmail.com> Lev Givon wrote: > The scipy installation instructions for Linux relating to atlas > (http://tinyurl.com/2kuwcy) appear to imply that one can force an existing > scipy installation that was built against netlib blas and lapack to take > advantage of subsequently installed atlas libraries by directing the > library loader to point to the atlas-provided libblas.so and > liblapack.so libraries rather than their netlib equivalents. Does this > approach provide inferior performance compared to building numpy/scipy > directly against atlas (i.e., such that the atlas objects are built > into the cblas/clapack modules)? ATLAS has a few other libraries that are needed besides its libblas.so and liblapack.so. You would essentially have to relink the extension modules, not just change LD_LIBRARY_PATH. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robince at gmail.com Mon Oct 8 03:54:52 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 08:54:52 +0100 Subject: [SciPy-user] Best way to save and load sparse matrix Message-ID: Hello, What is the best way to save and load a large sparse matrix to and from a file? It will eventually be as large as memory in the computer permits so I would prefer a binary format. 
As a converting MATLAB user my preference was to try the matlab v4 file but it seems they don't support sparse matrices. I also haven't had much luck with any other method - the binary method in the cookbook for example says you have to know the size and datatype of the array you're loading, but doesn't indicate how this would work for sparse matrices. What would be the easiest option? Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Mon Oct 8 04:19:51 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 09:19:51 +0100 Subject: [SciPy-user] more help with sparse matrix: creating Message-ID: Hi, I have to generate a large sparse matrix. I can't generate it all at once, but I do know the number of nnz it will contain. In MATLAB I first allocate the sparse matrix structure A = sparse([],[],1,dim,dim,Annz); And then fill it at my leisure with normal slicing. How would I achieve a similar thing in numpy (to first allocate with fixed nnz, then fill in sections), preferably with slicing/fancy indexing? (I was reading about lil_matrix etc. but it wasn't clear to me how to do the above) Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Oct 8 05:24:25 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Oct 2007 10:24:25 +0100 Subject: [SciPy-user] Best way to save and load sparse matrix In-Reply-To: References: Message-ID: <1e2af89e0710080224y71b3df69g9e767fd8d03cbb2a@mail.gmail.com> Hi, > It will eventually be as large as memory in the computer permits so I would > prefer a binary format. As a converting MATLAB user my preference was to try > the matlab v4 file but it seems they don't support sparse matrices. Really? Do you have an old version of scipy? If not, what error do you get? I put in sparse support for matlab v4 files some time ago, but it has not been fiercely tested... 
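A quick way to exercise that sparse savemat support is a save/load round trip; the sketch below uses today's scipy.io API (which writes the v5 MAT format by default, so treat the v4 details above as historical):

```python
import os
import tempfile

import numpy as np
from scipy import io, sparse

# Build a small sparse matrix in CSC form (the layout MATLAB itself uses).
A = sparse.csc_matrix(np.array([[0.0, 1.0], [2.0, 0.0]]))

# Round-trip through a .mat file.
path = os.path.join(tempfile.mkdtemp(), "demo.mat")
io.savemat(path, {"A": A})
B = io.loadmat(path)["A"]  # comes back as a sparse matrix

assert sparse.issparse(B)
assert (A != B).nnz == 0   # identical contents
```

The same file can then be opened from MATLAB/Octave with `load('demo.mat')`.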
Matthew From stephenemslie at gmail.com Mon Oct 8 05:46:49 2007 From: stephenemslie at gmail.com (stephen emslie) Date: Mon, 8 Oct 2007 10:46:49 +0100 Subject: [SciPy-user] timeseries svn url wrong on wiki? Message-ID: <51f97e530710080246n2161baffhc0643d0d9220c4ce@mail.gmail.com> I noticed that the timeseries wiki here: http://www.scipy.org/SciPyPackages/TimeSeries says that timeseries can be checked out here: http://svn.scipy.org/svn/scipy/trunk/Lib/sandbox/timeseries timeseries That doesn't seem to exist, but this does: http://svn.scipy.org/svn/scipy/trunk/scipy/sandbox/timeseries Should that be corrected? Stephen Emslie From robince at gmail.com Mon Oct 8 06:59:01 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 11:59:01 +0100 Subject: [SciPy-user] Best way to save and load sparse matrix In-Reply-To: <1e2af89e0710080224y71b3df69g9e767fd8d03cbb2a@mail.gmail.com> References: <1e2af89e0710080224y71b3df69g9e767fd8d03cbb2a@mail.gmail.com> Message-ID: On 10/8/07, Matthew Brett wrote: > > Really? Do you have an old version of scipy? If not, what error do > you get? I put in sparse support for matlab v4 files some time ago, > but it has not been fiercely tested... Apologies, it does seem to be working. I had got an error about an 'nnz' attribute not being found when trying to save a matrix in lil format, and I found some old mailing list posts I think saying sparse matrices weren't supported. Obviously that's changed since then... It does work with CSC and CSR matrices, both seem to load and save correctly in Python, although only CSC loads correctly in MATLAB (CSR has incorrect data). Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robince at gmail.com Mon Oct 8 07:10:08 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 12:10:08 +0100 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: Message-ID: On 10/8/07, Robin wrote: > How would I achieve a similar thing in numpy (to first allocate with fixed > nnz, then fill in sections), preferably with slicing/fancy indexing? > (I was reading about lil_matrix etc. but it wasn't clear to me how to do > the above) So, as I understand it, to allocate the sparse array with a specified nnz I have to use sparse.csr_matrix, since the lil_matrix constructor doesn't seem to support this. Then I need to convert my allocated csr matrix in place to lil format so that I can do fancy indexing to fill it as required - how can I do this in place without making another copy of the data? Once filled I need to convert back to csr again to do matrix-vector multiplication and save to a matlab file. Does this seem correct? Unfortunately I have a problem with the first step, trying to allocate an empty csr matrix: From the doc string: - csr_matrix((M, N), [nzmax, dtype]) to construct a container, where (M, N) are dimensions and nzmax, dtype are optional, defaulting to nzmax=sparse.NZMAX and dtype='d'. However when I try to include the nzmax argument I get the following error: In [102]: B = sparse.csr_matrix((10,10),[10,'b']) --------------------------------------------------------------------------- Traceback (most recent call last) /home/robince/ in () /usr/lib/python2.5/site-packages/scipy/sparse/sparse.py in __init__(self, arg1, dims, nzmax, dtype, copy, check) 1307 N = max(oldN, N) 1308 -> 1309 self.shape = (M, N) 1310 1311 self._check(check) /usr/lib/python2.5/site-packages/scipy/sparse/sparse.py in set_shape(self, shape) 111 except NotImplementedError: 112 raise NotImplementedError("Reshaping not implemented for %s." % --> 113 self.__class__.__name__) 114 self._shape = shape 115 NotImplementedError: Reshaping not implemented for csr_matrix.
Am I doing something wrong? Is there a better way to achieve what I'm trying to do? Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Mon Oct 8 08:06:34 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 13:06:34 +0100 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: Message-ID: On further reading I realized I was providing the arguments incorrectly - specifying them by name worked. Now I am just stuck on the part where I convert the allocated csr array to lil format, to fill it and then convert it back to csr. I really need this to be in place, since there won't be enough memory to hold two copies of the array. Is this possible? Thanks, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Mon Oct 8 08:46:07 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 8 Oct 2007 08:46:07 -0400 Subject: [SciPy-user] timeseries svn url wrong on wiki? In-Reply-To: <51f97e530710080246n2161baffhc0643d0d9220c4ce@mail.gmail.com> References: <51f97e530710080246n2161baffhc0643d0d9220c4ce@mail.gmail.com> Message-ID: <200710080846.08197.pgmdevlist@gmail.com> On Monday 08 October 2007 05:46:49 stephen emslie wrote: > I noticed that the timeseries wiki here: > http://www.scipy.org/SciPyPackages/TimeSeries > says that timeseries can be checked out here: ... > Should that be corrected? Stephen, Good call, I just modified the wiki page in consequence. Thanks again for your interest in timeseries and maskedarray. 
Pierre From seb at rudel.de Mon Oct 8 10:21:39 2007 From: seb at rudel.de (Sebastian) Date: Mon, 8 Oct 2007 14:21:39 +0000 (UTC) Subject: [SciPy-user] fromimage returns instance instead of numpy array References: <47057436.3060600@gmail.com> Message-ID: Robert Kern <robert.kern at gmail.com> writes: > > Sebastian wrote: > > Hello, > > after I updated numpy and scipy to the newest version a script which uses the > > scipy.misc.pilutil package doesn't work any more. The function "fromimage" > > returns an image as an image and doesn't convert it to a numpy array. > > Exactly what version of scipy are you talking about? The one I have from a > recent SVN checkout works fine. It was modified to its current form on > 2007-08-28. It might also be that this code relies on behavior of recent > releases of PIL. I am using PIL 1.1.6b2. What are you using? > Hello, I was using PIL 1.1.5. After installing PIL 1.1.6 the problem was solved. Thanks for your kind help. Greetings, Sebastian From robince at gmail.com Mon Oct 8 10:29:43 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 15:29:43 +0100 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: Message-ID: On 10/8/07, Robin wrote: > > On further reading I realized I was providing the arguments incorrectly - > specifying them by name worked. > > Now I am just stuck on the part where I convert the allocated csr array to > lil format, to fill it and then convert it back to csr. I really need this > to be in place, since there won't be enough memory to hold two copies of the > array. Is this possible? > So it seems this isn't possible to do with scipy.sparse unless I'm missing something. I see from this post that Ed Schofield made a separate branch to address this and other issues with sparse http://projects.scipy.org/pipermail/scipy-user/2006-February/007192.html but it doesn't look like that has seen any activity for more than a year. Were those changes ever merged? Is there any chance they will be?
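For anyone reading along, the fill-then-convert workflow under discussion maps onto scipy.sparse roughly as follows (a sketch with made-up sizes; note that tocsr() builds a second copy rather than converting in place, which is exactly the memory concern raised above):

```python
import numpy as np
from scipy import sparse

dim = 1000
A = sparse.lil_matrix((dim, dim))  # lil_matrix is cheap to fill row by row

# Fill a few rows, MATLAB-style, with slice assignment.
for row in range(0, dim, 100):
    A[row, 1:4] = 1.0

# Conversion for fast arithmetic -- this allocates a new matrix,
# so A and A_csr briefly coexist in memory.
A_csr = A.tocsr()
y = A_csr @ np.ones(dim)  # fast matrix-vector product

assert y[0] == 3.0        # row 0 holds three ones
assert A_csr.nnz == 30    # 10 filled rows x 3 entries each
```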
Alternatively I think it might be possible with PySparse so I am continuing to look at that.... Any advice or comments from more experienced folk on whether this is possible or whether I'm better off just sticking with MATLAB for this project would be appreciated. Cheers Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lev at columbia.edu Mon Oct 8 11:00:36 2007 From: lev at columbia.edu (Lev Givon) Date: Mon, 8 Oct 2007 11:00:36 -0400 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <470933A6.3030005@gmail.com> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> Message-ID: <20071008150033.GB15688@avicenna.cc.columbia.edu> Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: > Lev Givon wrote: > > The scipy installation instructions for Linux relating to atlas > > (http://tinyurl.com/2kuwcy) appear to imply that one can force an existing > > scipy installation that was built against netlib blas and lapack to take > > advantage of subsequently installed atlas libraries by directing the > > library loader to point to the atlas-provided libblas.so and > > liblapack.so libraries rather than their netlib equivalents. Does this > > approach provide inferior performance compared to building numpy/scipy > > directly against atlas (i.e., such that the atlas objects are built > > into the cblas/clapack modules)? > > ATLAS has a few other libraries that are needed besides its > libblas.so and liblapack.so. You would essentially have to relink > the extension modules, not just change LD_LIBRARY_PATH. True, but isn't it sufficient that those libraries (e.g., libatlas.so) are already linked to by the atlas-provided libblas.so and liblapack.so libraries? After all, aren't the BLAS/LAPACK functions the only public interfaces to the atlas stuff that are accessed by programs/libraries that use them?
Or do some of the extension modules you mentioned actually access other atlas-specific routines when built against atlas? L.G. From matthew.brett at gmail.com Mon Oct 8 11:25:07 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Oct 2007 16:25:07 +0100 Subject: [SciPy-user] Best way to save and load sparse matrix In-Reply-To: References: <1e2af89e0710080224y71b3df69g9e767fd8d03cbb2a@mail.gmail.com> Message-ID: <1e2af89e0710080825h21ccd404k2f8bfce780809a72@mail.gmail.com> > It does work with csc and csr matrices, both seem to load and save correctly > in python, although only CSC loads correctly in MATLAB (Csr has incorrect > data). Ah, thank you, I will look into that... Matthew From robert.kern at gmail.com Mon Oct 8 12:36:52 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 08 Oct 2007 11:36:52 -0500 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <20071008150033.GB15688@avicenna.cc.columbia.edu> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> <20071008150033.GB15688@avicenna.cc.columbia.edu> Message-ID: <470A5CA4.5020001@gmail.com> Lev Givon wrote: > Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: >> Lev Givon wrote: >>> The scipy installation instructions for Linux relating to atlas >>> (http://tinyurl.com/2kuwcy) appear to imply that one can force an existing >>> scipy installation that was built against netlib blas and lapack to take >>> advantage of subsequently installed atlas libraries by directing the >>> library loader to point to the atlas-provided libblas.so and >>> liblapack.so libraries rather than their netlib equivalents. Does this >>> approach provide inferior performance compared to building numpy/scipy >>> directly against atlas (i.e., such that the atlas objects are built >>> into the cblas/clapack modules)? >> ATLAS has a few other libraries that are needed besides its >> libblas.so and liblapack.so. 
You would essentially have to relink >> the extension modules, not just change LD_LIBRARY_PATH. > > True, but isn't it sufficient that those libraries (e.g., libatlas.so) > are already linked to by the atlas-provided libblas.so and > liblapack.so libraries? Is it? Try it, and tell us. One problem you might run into is that ATLAS's BLAS is usually in a file called libf77blas.so, not libblas.so, but maybe yours is different. Dynamic linking is sufficiently finicky and platform-dependent that I don't try to play games with it. It might work, but I doubt anyone has tried. > After all, aren't the BLAS/LAPACK functions > the only public interfaces to the atlas stuff that are accessed by > programs/libraries that use them? Or do some of the extension modules > you mentioned actually access other atlas-specific routines when built > against atlas? A scipy built against ATLAS will wrap the row-major versions of the functions that it provides in addition to the FORTRAN column-major versions. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dominique.orban at gmail.com Mon Oct 8 13:02:04 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Mon, 8 Oct 2007 13:02:04 -0400 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: Message-ID: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> On 10/8/07, Robin wrote: > On 10/8/07, Robin wrote: > > On further reading I realized I was providing the arguments incorrectly - > specifying them by name worked. > > > > Now I am just stuck on the part where I convert the allocated csr array to > lil format, to fill it and then convert it back to csr. I really need this > to be in place, since there won't be enough memory to hold two copies of the > array. Is this possible? 
> > > So it seems this isn't possible to do with scipy.sparse unless I'm missing > something. > > I see from this post that Ed Schofield made a separate branch to address > this and other issues with sparse > http://projects.scipy.org/pipermail/scipy-user/2006-February/007192.html > but it doesn't look like that has seen any activity for more than a year. > Were those changes ever merged? > Is there any chance they will be? > > Alternatively I think it might be possible with PySparse so I am continuing > to look at that.... > > Any advice or comments from more experienced folk on whether this is > possible or whether I'm better off just sticking with MATLAB for this > project would be appreciated. Robin, I believe what you want to do is quite easy with PySparse: from pysparse import spmatrix M = spmatrix.ll_mat(nrow, ncol, nnz) will allocate room for an nrow-by-ncol sparse matrix with nnz nonzeros in linked-list format. Alternatively, if you know that your matrix is symmetric (and hence, square), you only keep one triangle in memory and allocate it by saying: M = spmatrix.ll_mat_sym(n, nnz) Then you can assign values to slices of M using M[:k,:p] = ..., etc. If your guess for nnz was an under-estimate, M expands dynamically (nnz is "just a hint"). The spmatrix manual, which you can find in the source distribution of PySparse, is a great handbook. The default format in PySparse is ll_mat. If you need to, you can convert your matrix to csr format with M.to_csr(). You can save your matrix to file in MatrixMarket format using M.export_mtx() or read in a matrix using ll_mat_from_mtx(). I hope this helps.
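The MatrixMarket round trip that export_mtx()/ll_mat_from_mtx() provide in PySparse has a close counterpart in scipy.io's mmwrite/mmread, for those staying within scipy (a sketch):

```python
import os
import tempfile

from scipy import io, sparse

# Build a small matrix in linked-list format and set a few entries.
A = sparse.lil_matrix((4, 4))
A[0, 1] = 2.5
A[3, 0] = -1.0

# Write and read back in MatrixMarket coordinate format.
path = os.path.join(tempfile.mkdtemp(), "demo.mtx")
io.mmwrite(path, A)
B = io.mmread(path)  # mmread returns a COO matrix

assert (A.tocsr() != B.tocsr()).nnz == 0  # identical contents
```

The .mtx file is plain text, so it is also a handy interchange format between PySparse, scipy, and MATLAB.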
Dominique From eike.welk at gmx.net Mon Oct 8 13:07:25 2007 From: eike.welk at gmx.net (Eike Welk) Date: Mon, 08 Oct 2007 19:07:25 +0200 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <20071008150033.GB15688@avicenna.cc.columbia.edu> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> <20071008150033.GB15688@avicenna.cc.columbia.edu> Message-ID: <200710081907.25968.eike.welk@gmx.net> On Monday 08 October 2007 17:00, Lev Givon wrote: > Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: > > Lev Givon wrote: > > > The scipy installation instructions for Linux relating to atlas > > > (http://tinyurl.com/2kuwcy) appear to imply that one can force > > > an existing scipy installation that was built against netlib > > > blas and lapack to take advantage of subsequently installed > > > atlas libraries by directing the library loader to point to the > > > atlas-provided libblas.so and liblapack.so libraries rather > > > than their netlib equivalents. Does this approach provide > > > inferior performance compared to building numpy/scipy directly > > > against atlas (i.e., such that the atlas objects are built into > > > the cblas/clapack modules)? > > > > ATLAS has a few other libraries that are needed besides its > > libblas.so and liblapack.so. You would essentially have to > > relink the extension modules, not just change LD_LIBRARY_PATH. > > True, but isn't it sufficient that those libraries (e.g., > libatlas.so) are already linked to by the atlas-provided libblas.so > and liblapack.so libraries? After all, aren't the BLAS/LAPACK > functions the only public interfaces to the atlas stuff that are > accessed by programs/libraries that use them? Or do some of the > extension modules you mentioned actually access other > atlas-specific routines when built against atlas? You could ask David Cournapeau.
He has created an Atlas package for Suse Linux, that works the way you propose: Select Atlas by putting the directory, where it is stored, into LD_LIBRARY_PATH. See 'ashigabou repository' at: http://www.scipy.org/Installing_SciPy/Linux Regards, Eike. From lev at columbia.edu Mon Oct 8 13:09:34 2007 From: lev at columbia.edu (Lev Givon) Date: Mon, 8 Oct 2007 13:09:34 -0400 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <470A5CA4.5020001@gmail.com> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> <20071008150033.GB15688@avicenna.cc.columbia.edu> <470A5CA4.5020001@gmail.com> Message-ID: <20071008170933.GC15688@avicenna.cc.columbia.edu> Received from Robert Kern on Mon, Oct 08, 2007 at 12:36:52PM EDT: > Lev Givon wrote: > > Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: > >> Lev Givon wrote: > >>> The scipy installation instructions for Linux relating to atlas > >>> (http://tinyurl.com/2kuwcy) appear to imply that one can force > >>> an existing scipy installation that was built against netlib > >>> blas and lapack to take advantage of subsequently installed > >>> atlas libraries by directing the library loader to point to the > >>> atlas-provided libblas.so and liblapack.so libraries rather than > >>> their netlib equivalents. Does this approach provide inferior > >>> performance compared to building numpy/scipy directly against > >>> atlas (i.e., such that the atlas objects are built into the > >>> cblas/clapack modules)? > >> > >> ATLAS has a few other libraries that are needed besides its > >> libblas.so and liblapack.so. You would essentially have to relink > >> the extension modules, not just change LD_LIBRARY_PATH. > > > > True, but isn't it sufficient that those libraries (e.g., libatlas.so) > > are already linked to by the atlas-provided libblas.so and > > liblapack.so libraries? > > Is it? Try it, and tell us. 
One problem you might run into is that > ATLAS's BLAS is usually in a file called libf77blas.so, not > libblas.so, but maybe yours is different. Dynamic linking is > sufficiently finicky and platform-dependent that I don't try to play > games with it. It might work, but I doubt anyone has tried. > Well, with a numpy 1.0.3.1 / scipy 0.6.0 installation built against the netlib blas/lapack, I didn't observe any failures when running scipy.linalg.test() or scipy.lib.test() after redirecting the library loader to use subsequently installed atlas libblas.so and liblapack.so files (I verified that the various scipy shared libraries dynamically linked against those files were indeed referencing the atlas ones rather than the netlib ones with ldd). L.G. From lev at columbia.edu Mon Oct 8 13:20:15 2007 From: lev at columbia.edu (Lev Givon) Date: Mon, 8 Oct 2007 13:20:15 -0400 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <200710081907.25968.eike.welk@gmx.net> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> <20071008150033.GB15688@avicenna.cc.columbia.edu> <200710081907.25968.eike.welk@gmx.net> Message-ID: <20071008172015.GD15688@avicenna.cc.columbia.edu> Received from Eike Welk on Mon, Oct 08, 2007 at 01:07:25PM EDT: > On Monday 08 October 2007 17:00, Lev Givon wrote: > > Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: > > > Lev Givon wrote: > > > > The scipy installation instructions for Linux relating to atlas > > > > (http://tinyurl.com/2kuwcy) appear to imply that one can force > > > > an existing scipy installation that was built against netlib > > > > blas and lapack to take advantage of subsequently installed > > > > atlas libraries by directing the library loader to point to the > > > > atlas-provided libblas.so and liblapack.so libraries rather > > > > than their netlib equivalents.
Does this approach provide > > > > inferior performance compared to building numpy/scipy directly > > > > against atlas (i.e., such that the atlas objects are built into > > > > the cblas/clapack modules)? > > > > > > ATLAS has a few other libraries that are needed besides its > > > libblas.so and liblapack.so. You would essentially have to > > > relink the extension modules, not just change LD_LIBRARY_PATH. > > > > True, but isn't it sufficient that those libraries (e.g., > > libatlas.so) are already linked to by the atlas-provided libblas.so > > and liblapack.so libraries? After all, aren't the BLAS/LAPACK > > functions the only public interfaces to the atlas stuff that are > > accessed by programs/libraries that use them? Or do some of the > > extension modules you mentioned actually access other > > atlas-specific routines when built against atlas? > > You could ask David Cournapeau. He has created an Atlas package for > Suse Linux, that works the way you propose: Select Atlas by putting > the directory, where it is stored, into LD_LIBRARY_PATH. > > See 'ashigabou repository' at: > http://www.scipy.org/Installing_SciPy/Linux That snippet on the above page is actually what prompted my curiosity regarding this whole topic :-) The Fedora folks, incidentally, have also concocted an rpm package that facilitates building and installing atlas in a manner germane to the above approach to making an existing dynamically linked blas/lapack-dependent program/library use atlas. L.G. From robince at gmail.com Mon Oct 8 13:33:57 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 18:33:57 +0100 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> References: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> Message-ID: On 10/8/07, Dominique Orban wrote: > > I hope this helps. > Dominique It certainly does help - thanks a lot for your time... 
In this case I know the nnz exactly in advance, which is why I was keen to preallocate. Unfortunately it seems I still can't do the MATLAB style indexing I wanted, something like M[2, [1,2,3]] = 1 but since I will be updating row by row anyway I can get around this by creating a temporary row vector to insert with slicing. Also, if I understand correctly the conversion to csr necessary for matrix-vector multiplication cannot be done in place - this is a real shame as it doubles the memory requirements for this over the MATLAB version (I really want to make this array as big as possible) Still I'll carry on for now - may have to talk my supervisor into getting me some more ram! Thanks again, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominique.orban at gmail.com Mon Oct 8 13:47:36 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Mon, 8 Oct 2007 13:47:36 -0400 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> Message-ID: <8793ae6e0710081047ye21e5e2vacf27dc05076e744@mail.gmail.com> On 10/8/07, Robin wrote: > Unfortunately it seems I still can't do the MATLAB style indexing I wanted, > something like > M[2, [1,2,3]] = 1 > but since I will be updating row by row anyway I can get around this by > creating a temporary row vector to insert with slicing. I am not sure what the above operation does in Matlab. > Also, if I understand correctly the conversion to csr necessary for > matrix-vector multiplication cannot be done in place - this is a real shame That's unfortunately right. It would be a nice addition to PySparse. However in PySparse, once converted, you cannot modify a csr matrix anymore. > as it doubles the memory requirements for this over the MATLAB version > (I really want to make this array as big as possible) You don't HAVE to convert it to do matrix-vector product. 
Objects of type ll_mat also have a matvec() method. However, it is true that the csr format is more compact and speeds up products. Good luck, Dominique From robince at gmail.com Mon Oct 8 14:26:03 2007 From: robince at gmail.com (Robin) Date: Mon, 8 Oct 2007 19:26:03 +0100 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: <8793ae6e0710081047ye21e5e2vacf27dc05076e744@mail.gmail.com> References: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> <8793ae6e0710081047ye21e5e2vacf27dc05076e744@mail.gmail.com> Message-ID: On 10/8/07, Dominique Orban wrote: > > On 10/8/07, Robin wrote: > > > Unfortunately it seems I still can't do the MATLAB style indexing I > wanted, > > something like > > M[2, [1,2,3]] = 1 > > but since I will be updating row by row anyway I can get around this by > > creating a temporary row vector to insert with slicing. > > I am not sure what the above operation does in Matlab. I guess in this example it's the same as M[2,1:4]=1, so updates (2,1) (2,2) and (2,3) positions, but the advantage of this way is that they don't have to be contiguous. I.e., in my case I calculated the positions I want to set to 1 for each row, then want to do M[2, positions2update] = 1 Anyway it should be easy for me to work round this for my case. > Also, if I understand correctly the conversion to csr necessary for > > matrix-vector multiplication cannot be done in place - this is a real > shame > > That's unfortunately right. It would be a nice addition to PySparse. > However in PySparse, once converted, you cannot modify a csr matrix > anymore. > > > as it doubles the memory requirements for this over the MATLAB version > > (I really want to make this array as big as possible) > > You don't HAVE to convert it to do matrix-vector product. Objects of > type ll_mat also have a matvec() method. However, it is true that the > csr format is more compact and speeds up products.
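[For readers following this thread with scipy.sparse rather than PySparse, the build-then-convert trade-off quoted above can be sketched as follows. This is a minimal illustration, not the PySparse API: scipy's lil_matrix and tocsr() stand in for ll_mat and its CSR conversion, and the conversion here likewise allocates a second copy.]

```python
import numpy as np
from scipy import sparse

# Fill incrementally in LIL format (cheap per-element assignment),
# then convert to CSR for fast matrix-vector products.  Note that
# tocsr() builds a new matrix, so both copies coexist briefly.
A = sparse.lil_matrix((4, 4))
for i in range(4):
    A[i, i] = 2.0
A_csr = A.tocsr()
y = A_csr @ np.ones(4)   # CSR mat-vec; LIL also supports @, just slower
```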
While generating the matrix takes a long time it only has to be done once - the products are going to be performed inside an optimisation loop so have to be as fast as possible... Also I guess the pysparse objects can't be saved with io.savemat. I thought it would be better to use a binary format, but I will try matrixmarket. Actually the other issue is that you can't specify dtype with pysparse as far as I can tell. My matrix contains only integer values (0 or 1) so I was hoping to use dtype=byte. I think it might use less ram to do the scipy.sparse way of allocating csr with small dtype, converting to lil to fill, then converting back. Is it correct that if I create an empty csr from scipy.sparse with a specified nnz and convert it to lil that lil will have memory reserved for appropriate number of entries? (or does lil not have a notion of nnz allocation and just expands dynamically as needed?) Thanks Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Mon Oct 8 16:17:36 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 8 Oct 2007 22:17:36 +0200 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: Message-ID: <20071008201736.GT9063@mentat.za.net> Hi Robin On Mon, Oct 08, 2007 at 01:06:34PM +0100, Robin wrote: > On further reading I realized I was providing the arguments incorrectly - > specifying them by name worked. > > Now I am just stuck on the part where I convert the allocated csr array to lil > format, to fill it and then convert it back to csr. I really need this to be in > place, since there won't be enough memory to hold two copies of the array. Is > this possible? It isn't necessary to first create a csr matrix. Start by creating the lil_matrix, fill it, and then convert to csr. As you noticed, if you don't have enough memory to fit all the data into memory twice, this won't work.
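[A minimal sketch of that lil-then-convert route, in current scipy.sparse spelling; as cautioned above, the final tocsr() call briefly holds both copies in memory:]

```python
from scipy import sparse

# Build in LIL format, where element assignment is cheap...
z = sparse.lil_matrix((3, 3))
z[0, 2] = 5.0
z[1, 1] = 7.0
# ...then convert once, which allocates a second, CSR-format matrix.
zc = z.tocsr()
```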
Another route is to allocate the csr matrix directly: z = sp.csr_matrix((3,3),nzmax=5) z[0,0] = 3 z[1,1] = 4 etc. You can investigate the underlying data: In [57]: z.data Out[57]: array([ 3., 4., 0., 0., 0.]) In [58]: z.indices Out[58]: array([0, 1, 0, 0, 0]) Cheers Stéfan From stefan at sun.ac.za Mon Oct 8 16:19:59 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 8 Oct 2007 22:19:59 +0200 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> Message-ID: <20071008201959.GU9063@mentat.za.net> On Mon, Oct 08, 2007 at 06:33:57PM +0100, Robin wrote: > In this case I know the nnz exactly in advance, which is why I was keen to > preallocate. > > Unfortunately it seems I still can't do the MATLAB style indexing I wanted, > something like > M[2, [1,2,3]] = 1 > but since I will be updating row by row anyway I can get around this by > creating a temporary row vector to insert with slicing. In scipy's lil_matrix, the structure is grown dynamically. You can do: In [61]: z = sp.lil_matrix((4,4)) In [62]: z[2,[1,2,3]] = 1 In [63]: z.todense() Out[63]: matrix([[ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 1., 1., 1.], [ 0., 0., 0., 0.]]) Cheers Stéfan From stefan at sun.ac.za Mon Oct 8 16:27:10 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 8 Oct 2007 22:27:10 +0200 Subject: [SciPy-user] more help with sparse matrix: creating In-Reply-To: References: <8793ae6e0710081002k27f39ed4r82507314e92402eb@mail.gmail.com> <8793ae6e0710081047ye21e5e2vacf27dc05076e744@mail.gmail.com> Message-ID: <20071008202710.GV9063@mentat.za.net> On Mon, Oct 08, 2007 at 07:26:03PM +0100, Robin wrote: > Actually the other issue is that you can't specify dtype with pysparse as far > as I can tell. My matrix contains only integer values (0 or 1) so I was hoping > to use dtype=byte.
I think it might use less ram to do the scipy.sparse way of > allocating csr with small dtype, converting to lil to fill, then converting > back. This is still on the TODO list: http://projects.scipy.org/scipy/scipy/ticket/225 > Is it correct that if I create an empty csr from scipy.sparse with a specified > nnz and convert it lil that lil will have memory reserved for appropriate > number of entries? (or does lil not have a notion of nnz allocation and just > expands dynamically as needed?) Lil stands for "list of lists". The internal structures are not allocated beforehand, but are grown on demand, as needed. Take a look: In [69]: z = sp.lil_matrix((4,4)) In [70]: z.data Out[70]: array([[], [], [], []], dtype=object) In [71]: z.rows Out[71]: array([[], [], [], []], dtype=object) In [72]: z[:2,:2] = 4 In [73]: z.data Out[73]: array([[array(4.0), array(4.0)], [array(4.0), array(4.0)], [], []], dtype=object) In [74]: z.rows Out[74]: array([[0, 1], [0, 1], [], []], dtype=object) Hrm. I think the output of line 73 may be a bug. That should be array([[4.0,4.0],[4.0,4.0],[],[]]) Cheers Stéfan From stefan at sun.ac.za Mon Oct 8 18:46:33 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 9 Oct 2007 00:46:33 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: References: <47057436.3060600@gmail.com> Message-ID: <20071008224633.GB21653@mentat.za.net> On Mon, Oct 08, 2007 at 02:21:39PM +0000, Sebastian wrote: > Robert Kern gmail.com> writes: > > Exactly what version of scipy are you talking about? The one I have from a > > recent SVN checkout works fine. It was modified to its current form on > > 2007-08-28. It might also be that this code relies on behavior of recent > > releases of PIL. I am using PIL 1.1.6b2. What are you using? > > > > Hello, > I used the PIL 1.1.5. After installing the PIL 1.1.6 the problem was solved. > Thanks for your kind help. PIL 1.1.6 is still broken to some extent.
I sent a patch to the Image SIG: http://mail.python.org/pipermail/image-sig/2007-August/004570.html Unfortunately, there was no reaction. Cheers Stéfan From david at ar.media.kyoto-u.ac.jp Mon Oct 8 23:39:53 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 09 Oct 2007 12:39:53 +0900 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <200710081907.25968.eike.welk@gmx.net> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> <20071008150033.GB15688@avicenna.cc.columbia.edu> <200710081907.25968.eike.welk@gmx.net> Message-ID: <470AF809.5050905@ar.media.kyoto-u.ac.jp> Eike Welk wrote: > On Monday 08 October 2007 17:00, Lev Givon wrote: >> Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: >>> Lev Givon wrote: >>>> The scipy installation instructions for Linux relating to atlas >>>> (http://tinyurl.com/2kuwcy) appear to imply that one can force >>>> an existing scipy installation that was built against netlib >>>> blas and lapack to take advantage of subsequently installed >>>> atlas libraries by directing the library loader to point to the >>>> atlas-provided libblas.so and liblapack.so libraries rather >>>> than their netlib equivalents. Does this approach provide >>>> inferior performance compared to building numpy/scipy directly >>>> against atlas (i.e., such that the atlas objects are built into >>>> the cblas/clapack modules)? >>> ATLAS has a few other libraries that are needed besides its >>> libblas.so and liblapack.so. You would essentially have to >>> relink the extension modules, not just change LD_LIBRARY_PATH. Robert, you are right that atlas does need other libraries by default, but this does not prevent us from doing what Lev Givon wants to do.
You just have to relink the atlas libraries (actually, the debian packages do that to build a complete libblas.so and liblapack.so, which contain all necessary symbols from libatlas.so), which is easier than relinking numpy/scipy extensions (you only have to do it once, for example). Incidentally, that's what I am doing in the ashigabou repository as well (in the atlas package). I would also like to know if this makes any difference (using atlas as a pure drop-in of blas/lapack vs explicitly compiling numpy/scipy with atlas) performance-wise (I would think not, because once distutils got atlas, it just set the linking options to use it as blas/lapack, but I may be wrong). cheers, David From david at ar.media.kyoto-u.ac.jp Mon Oct 8 23:52:35 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 09 Oct 2007 12:52:35 +0900 Subject: [SciPy-user] making an existing scipy installation use atlas libraries In-Reply-To: <20071008172015.GD15688@avicenna.cc.columbia.edu> References: <20071007101124.0c738200@columbia.edu> <470933A6.3030005@gmail.com> <20071008150033.GB15688@avicenna.cc.columbia.edu> <200710081907.25968.eike.welk@gmx.net> <20071008172015.GD15688@avicenna.cc.columbia.edu> Message-ID: <470AFB03.9020103@ar.media.kyoto-u.ac.jp> Lev Givon wrote: > Received from Eike Welk on Mon, Oct 08, 2007 at 01:07:25PM EDT: > >> On Monday 08 October 2007 17:00, Lev Givon wrote: >> >>> Received from Robert Kern on Sun, Oct 07, 2007 at 03:29:42PM EDT: >>> >>>> Lev Givon wrote: >>>> >>>>> The scipy installation instructions for Linux relating to atlas >>>>> (http://tinyurl.com/2kuwcy) appear to imply that one can force >>>>> an existing scipy installation that was built against netlib >>>>> blas and lapack to take advantage of subsequently installed >>>>> atlas libraries by directing the library loader to point to the >>>>> atlas-provided libblas.so and liblapack.so libraries rather >>>>> than their netlib equivalents.
Does this approach provide >>>>> inferior performance compared to building numpy/scipy directly >>>>> against atlas (i.e., such that the atlas objects are built into >>>>> the cblas/clapack modules)? >>>>> >>>> ATLAS has a few other libraries that are needed besides its >>>> libblas.so and liblapack.so. You would essentially have to >>>> relink the extension modules, not just change LD_LIBRARY_PATH. >>>> >>> True, but isn't it sufficient that those libraries (e.g., >>> libatlas.so) are already linked to by the atlas-provided libblas.so >>> and liblapack.so libraries? After all, aren't the BLAS/LAPACK >>> functions the only public interfaces to the atlas stuff that are >>> accessed by programs/libraries that use them? Or do some of the >>> extension modules you mentioned actually access other >>> atlas-specific routines when built against atlas? >>> >> >> You could ask David Cournapeau. He has created an Atlas package for >> Suse Linux, that works the way you propose: Select Atlas by putting >> the directory, where it is stored, into LD_LIBRARY_PATH. >> >> See 'ashigabou repository' at: >> http://www.scipy.org/Installing_SciPy/Linux >> > > That snippet on the above page is actually what prompted my curiosity > regarding this whole topic :-) > > The Fedora folks, incidentally, have also concocted an rpm package > that facilitates building and installing atlas in a manner germane to > the above approach to making an existing dynamically linked > blas/lapack-dependent program/library use atlas. 
> If you want to do it by yourself when compiling atlas, the relevant lines in the makefile are: (for lapack, in ATLAS/makes/Make.lib)

liblapack.so.$(LAPACK_MAJOR) : liblapack.a libcblas.a
	ld $(LDFLAGS) -shared -soname $@ -o $@ --whole-archive \
	liblapack.a libcblas.a --no-whole-archive $(F77SYSLIB)

(for blas, same makefile)

libblas.so.$(BLAS_MAJOR) : libf77blas.a libcblas.a
	ld $(LDFLAGS) -shared -soname $@ -o $@ --whole-archive libcblas.a libf77blas.a libatlas.a \
	--no-whole-archive $(F77SYSLIB)

As you can see, this is rather trivial: just give the --whole-archive argument before the archives you want to include (this requires of course that all object code in the archives is built with -fPIC). cheers, David From lfriedri at imtek.de Tue Oct 9 08:04:29 2007 From: lfriedri at imtek.de (Lars Friedrich) Date: Tue, 09 Oct 2007 14:04:29 +0200 Subject: [SciPy-user] Associated legendre functions Message-ID: <470B6E4D.5080802@imtek.de> Hello, I would like to compute the values of associated legendre functions. I found scipy.special.lpmn which can be used for this purpose. However, it is not possible to pass numpy-arrays for the function argument z. What is the recommended way to deal with this? I could do a for loop, but I guess this would be slow. Are there other implementations available? Thanks, Lars -- Dipl.-Ing. Lars Friedrich Photonic Measurement Technology Department of Microsystems Engineering -- IMTEK University of Freiburg Georges-Köhler-Allee 102 D-79110 Freiburg Germany phone: +49-761-203-7531 fax: +49-761-203-7537 room: 01 088 email: lfriedri at imtek.de From fredmfp at gmail.com Tue Oct 9 08:54:52 2007 From: fredmfp at gmail.com (fred) Date: Tue, 09 Oct 2007 14:54:52 +0200 Subject: [SciPy-user] Associated legendre functions In-Reply-To: <470B6E4D.5080802@imtek.de> References: <470B6E4D.5080802@imtek.de> Message-ID: <470B7A1C.8020206@gmail.com> Lars Friedrich a écrit : > Hello, > > I would like to compute the values of associated legendre functions.
I > found scipy.special.lpmn which can be used for this purpose. However, it > is not possible to pass numpy-arrays for the function argument z. What > is the recommended way to deal with this? I could do a for loop, but I > guess this would be slow. Are there other implementations available? > I got the same problem a few months ago. I had to hardcode it in specfun.f. If somebody else has another solution, I'm interested. Cheers, -- http://scipy.org/FredericPetit From pearu at cens.ioc.ee Tue Oct 9 08:59:53 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 09 Oct 2007 14:59:53 +0200 Subject: [SciPy-user] polyhedron (cddlib wrapper) package update Message-ID: <470B7B49.5000308@cens.ioc.ee> Hi, I just updated the polyhedron package that is a wrapper of the cddlib library to use NumPy. More info in: http://cens.ioc.ee/projects/polyhedron/ Best regards, Pearu From gruben at bigpond.net.au Tue Oct 9 09:13:41 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue, 09 Oct 2007 23:13:41 +1000 Subject: [SciPy-user] Associated legendre functions In-Reply-To: <470B6E4D.5080802@imtek.de> References: <470B6E4D.5080802@imtek.de> Message-ID: <470B7E85.5010709@bigpond.net.au> There's probably a better way than this, but you can use vectorize by creating an object array thus: -- In [25]: import scipy.special as sp In [26]: v=vectorize(sp.lpmn,otypes='O') In [27]: a=v(0,1,[0,1,2]) In [28]: a Out[28]: (array([[[ 1. 0.]], [[ 1. 1.]], [[ 1. 2.]]], dtype=object),) In [29]: a[0] Out[29]: array([[[ 1. 0.]], [[ 1. 1.]], [[ 1. 2.]]], dtype=object) In [30]: a[0][0] Out[30]: array([[ 1., 0.]]) -- Gary R. Lars Friedrich wrote: > Hello, > > I would like to compute the values of associated legendre functions. I > found scipy.special.lpmn which can be used for this purpose. However, it > is not possible to pass numpy-arrays for the function argument z. What > is the recommended way to deal with this? I could do a for loop, but I > guess this would be slow.
Are there other implementations available? > > Thanks, > > Lars From fredmfp at gmail.com Tue Oct 9 09:31:58 2007 From: fredmfp at gmail.com (fred) Date: Tue, 09 Oct 2007 15:31:58 +0200 Subject: [SciPy-user] Associated legendre functions In-Reply-To: <470B7E85.5010709@bigpond.net.au> References: <470B6E4D.5080802@imtek.de> <470B7E85.5010709@bigpond.net.au> Message-ID: <470B82CE.1030706@gmail.com> Gary Ruben a écrit : > There's probably a better way than this, but you can use vectorize by > Will it be as efficient as (my) hardcoded version? -- http://scipy.org/FredericPetit From robince at gmail.com Tue Oct 9 09:36:32 2007 From: robince at gmail.com (Robin) Date: Tue, 9 Oct 2007 14:36:32 +0100 Subject: [SciPy-user] sparse matrix dtype Message-ID: Hi, I am trying to make a large sparse matrix - the values are all integers (in fact all non-zeros will be 1) so it would save me a lot of memory if I could use dtype=byte. However: In [23]: test=sparse.lil_matrix((10,10),dtype=byte) In [24]: test.dtype Out[24]: dtype('float64') Is this correct or is there something up here? Thanks Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Tue Oct 9 10:03:20 2007 From: robince at gmail.com (Robin) Date: Tue, 9 Oct 2007 15:03:20 +0100 Subject: [SciPy-user] sparse matrix dtype In-Reply-To: References: Message-ID: On 10/9/07, Robin wrote: > I am trying to make a large sparse matrix - the values are all integers > (in fact all non-zeros will be 1) so it would save me a lot of memory if I > could use dtype=byte. I added 'b' to the string of allowed dtypes in getdtype() on line 2791 of sparse.py. It now seems to behave as I would expect (hope), but it can't be that simple can it? Is it likely that doing this will break something else? Why are the dtypes restricted in the first place? Thanks Robin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gruben at bigpond.net.au Tue Oct 9 10:15:40 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Wed, 10 Oct 2007 00:15:40 +1000 Subject: [SciPy-user] Associated legendre functions In-Reply-To: <470B82CE.1030706@gmail.com> References: <470B6E4D.5080802@imtek.de> <470B7E85.5010709@bigpond.net.au> <470B82CE.1030706@gmail.com> Message-ID: <470B8D0C.8020609@bigpond.net.au> It's almost certain that your version is more efficient. Vectorize is very slow in my limited experience, but very useful. Gary R. fred wrote: > Gary Ruben a écrit : >> There's probably a better way than this, but you can use vectorize by >> > Will it be as efficient as (my) hardcoded version ? From fredmfp at gmail.com Tue Oct 9 10:34:34 2007 From: fredmfp at gmail.com (fred) Date: Tue, 09 Oct 2007 16:34:34 +0200 Subject: [SciPy-user] Associated legendre functions In-Reply-To: <470B8D0C.8020609@bigpond.net.au> References: <470B6E4D.5080802@imtek.de> <470B7E85.5010709@bigpond.net.au> <470B82CE.1030706@gmail.com> <470B8D0C.8020609@bigpond.net.au> Message-ID: <470B917A.3000807@gmail.com> Gary Ruben a écrit : > It's almost certain that your version is more efficient. Vectorize is > very slow in my limited experience, but very useful. > Was my thought too :-) Cheers, -- http://scipy.org/FredericPetit From Andy.cheesman at bristol.ac.uk Tue Oct 9 11:47:58 2007 From: Andy.cheesman at bristol.ac.uk (Andy Cheesman) Date: Tue, 09 Oct 2007 16:47:58 +0100 Subject: [SciPy-user] Scientific Python publications Message-ID: <470BA2AE.8070207@bristol.ac.uk> Dear people, Slightly off-topic, I was wondering if anyone could suggest a peer-reviewed journal article which features a scientific use of Python. The reason is that I've got to give a brief ~12 mins internal talk to the molecular science group at the University of Bristol on a recent scientific paper and thought that I could use the talk as an excuse to advertise the wonders of python. It's the opportunity for some blatant self promotion.
Cheers Andy From lbolla at gmail.com Tue Oct 9 11:55:00 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 9 Oct 2007 17:55:00 +0200 Subject: [SciPy-user] Scientific Python publications In-Reply-To: <470BA2AE.8070207@bristol.ac.uk> References: <470BA2AE.8070207@bristol.ac.uk> Message-ID: <80c99e790710090855o4e0d5c31rec25b276a8548be0@mail.gmail.com> here is a small list: http://lbolla.wordpress.com/2007/04/25/papers-about-python-and-scientific-computing/ L. On 10/9/07, Andy Cheesman wrote: > > Dear people, > > Slightly off-topic, I was wondering if anyone could suggest a > peer-reviewed journal article which features a scientific use of > The reason is that I've got to give a brief ~12 mins internal talk to > the molecular science group at the university of Bristol on a recent > scientific paper and thought that I could use the talk as an excuse to > advertise the wonders of python. > Its the opportunity for some blatant self promotion. > Cheers > > Andy > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thamelry at binf.ku.dk Tue Oct 9 11:57:28 2007 From: thamelry at binf.ku.dk (Thomas Hamelryck) Date: Tue, 9 Oct 2007 17:57:28 +0200 Subject: [SciPy-user] Scientific Python publications In-Reply-To: <470BA2AE.8070207@bristol.ac.uk> References: <470BA2AE.8070207@bristol.ac.uk> Message-ID: <2d7c25310710090857o7f387129jae9b25cc5d2de43a@mail.gmail.com> Hi Andy, Biopython's structural biology toolkit is described in: Hamelryck, T., Manderick, B. (2003) PDB parser and structure class implemented in Python. *Bioinformatics*, 19, 2308-2310. http://bioinformatics.oupjournals.org/cgi/content/abstract/19/17/2308?ijkey=61pti591becLU&keytype=ref It's used for data mining protein structures. Cheers, -Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From timmichelsen at gmx-topmail.de Tue Oct 9 12:15:22 2007 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Tue, 9 Oct 2007 16:15:22 +0000 (UTC) Subject: [SciPy-user] What happened to python-idl? References: <1190990509.6370.437.camel@glup.physics.ucf.edu> Message-ID: > http://software.pseudogreen.org/i2py/ > > This package appears to produce numarray code, but I imagine that making > it produce numpy code would be straightforward. I tried this one. The display (-d option) did work well. But the actual code conversion was not that satisfying. Kind regards, Timmie From lou_boog2000 at yahoo.com Tue Oct 9 12:57:47 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 9 Oct 2007 09:57:47 -0700 (PDT) Subject: [SciPy-user] Scientific Python publications In-Reply-To: <470BA2AE.8070207@bristol.ac.uk> Message-ID: <652419.41705.qm@web34410.mail.mud.yahoo.com> Check the May 2007 issue of Computing in Science & Engineering (an IEEE publication). The whole issue is dedicated to Python. Very nice articles on what people are doing with Python in science and engineering. -- Lou Pecora --- Andy Cheesman wrote: > Dear people, > > Slightly off-topic, I was wondering if anyone could > suggest a > peer-reviewed journal article which features a > scientific use of > The reason is that I've got to give a brief ~12 mins > internal talk to > the molecular science group at the university of > Bristol on a recent > scientific paper and thought that I could use the > talk as an excuse to > advertise the wonders of python. > Its the opportunity for some blatant self promotion. > Cheers -- Lou Pecora, my views are my own.
From dmitrey.kroshko at scipy.org Tue Oct 9 13:40:20 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 09 Oct 2007 20:40:20 +0300 Subject: [SciPy-user] could a U.S. citizen help? (connecting PyTrilinos to OpenOpt) Message-ID: <470BBD04.6070005@scipy.org> Hi all, (excuse my English) is anyone here interested in connecting NOX (more powerful scipy.optimize.fsolve equivalent with ability of parallel calculations), MOOCHO (constrained NLP solver, same) (and maybe other solvers in future) to OpenOpt? I sent a letter to PyTrilinos developers and here are 2 letters received from them (see below, as well as mine). Unfortunately, I have no possibility of visiting for the Trilinos Users Group meeting - even not enough documents of Ukraine citizen to apply for U.S. visa. What would you recommend me to answer? All I need from PyTrilinos developers are just several working examples of NOX and MOOCHO usage, with all default settings, like mine from here http://scipy.org/scipy/scikits/wiki/OpenOptExamples Those examples from the PyTrilinos documentation are very complicated. Thank you in advance, Dmitrey. Bill Spotz wrote: > Dmitrey, > > I have heard that we do have enough time for the paperwork if you are > interested in visiting for the Trilinos Users Group meeting. November > 6 and 7 are aimed at users and November 8 is aimed at developers. It > would be a good chance for you to get up to speed on Trilinos > capabilities, and an opportunity for us to speak in person. Bill Spotz wrote: > Dmitrey, > > I would be interested in linking PyTrilinos to OpenOpt. Let me know > what you think will be involved. > > Do you have a collaborator who is a U.S. citizen who could attend the > Trilinos Users Group meeting November 6-7 in Albuquerque? We can > accommodate foreign nationals, but we do not have enough time for the > paperwork to go through. > > I have CC'd the Trilinos, NOX and MOOCHO lead developers.
> > On Oct 8, 2007, at 11:35 AM, dmitrey wrote: > >> Hallo! >> >> Excuse my English. >> >> I'm a developer of a new SciKit - an optimization framework from >> scipy developers team and optimization department of CI NAS Ukraine >> http://scipy.org/scipy/scikits/wiki/OpenOpt >> >> Are you interested in connecting some Pytrilinos solvers to OpenOpt? >> I would do it by myself but I feel lack of documentation, as well as >> other scipy users, see for example >> http://permalink.gmane.org/gmane.comp.python.scientific.user/13432 >> >> If you are, I guess first of all users would be interested in >> connecting MOOCHO and NOX. >> Here are some reasons that I hope would convince you to provide >> enough information for Pytrilinos-OpenOpt bridge: >> http://scipy.org/scipy/scikits/wiki/whereProfitsForOpenOptConnectedSolverOwners >> >> (I need example of Pytrilinos usage of NOX and/or MOOCHO) >> If you are interested in, I will explain more detailed what info is >> needed. >> >> Regards, Dmitrey > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > From skraelings001 at gmail.com Tue Oct 9 13:58:13 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Tue, 09 Oct 2007 12:58:13 -0500 Subject: [SciPy-user] Scientific Python publications In-Reply-To: <470BA2AE.8070207@bristol.ac.uk> References: <470BA2AE.8070207@bristol.ac.uk> Message-ID: <470BC135.1030501@gmail.com> Andy Cheesman escribió: > Dear people, > > Slightly off-topic, I was wondering if anyone could suggest a > peer-reviewed journal article which features a scientific use of > The reason is that I've got to give a brief ~12 mins internal talk to > the molecular science group at the university of Bristol on a recent > scientific paper and thought that I could use the talk as an excuse to > advertise the wonders of python. > Its the opportunity for some blatant self promotion.
> Cheers > > Andy > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > I found the "Computing in Science & Engineering Vol. 9, No. 2, May/June 2007" in http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_1of4.pdf http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_2of4.pdf http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_3of4.pdf http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_4of4.pdf but it seems to be down, i can email them to you if you want. Cheers, Reynaldo From marcos.capistran at gmail.com Tue Oct 9 14:25:03 2007 From: marcos.capistran at gmail.com (Marcos Capistran) Date: Tue, 9 Oct 2007 13:25:03 -0500 Subject: [SciPy-user] Scientific Python publications In-Reply-To: <470BC135.1030501@gmail.com> References: <470BA2AE.8070207@bristol.ac.uk> <470BC135.1030501@gmail.com> Message-ID: Reynaldo, could you please email me this pdfs. Thank you very much! On 10/9/07, Reynaldo Baquerizo wrote: > Andy Cheesman escribi?: > > Dear people, > > > > Slightly off-topic, I was wondering if anyone could suggest a > > peer-reviewed journal article which features a scientific use of > > The reason is that I've got to give a brief ~12 mins internal talk to > > the molecular science group at the university of Bristol on a recent > > scientific paper and thought that I could use the talk as an excuse to > > advertise the wonders of python. > > Its the opportunity for some blatant self promotion. > > Cheers > > > > Andy > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > I found the "Computing in Science & Engineering Vol. 9, No. 
2, May/June > 2007" in > > http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_1of4.pdf > http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_2of4.pdf > http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_3of4.pdf > http://naniloa.cse.ucdavis.edu/~cmg/Group/readings/pythonissue_4of4.pdf > > but it seems to be down, i can email them to you if you want. > > Cheers, > Reynaldo > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Marcos Aurelio Capistr?n Ocampo CIMAT A. P. 402 Jalisco S/N, Valenciana Guanajuato, GTO 36240 Tel: (473) 73 2 71 55 Ext. 49640 From stefan at sun.ac.za Tue Oct 9 15:41:18 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 9 Oct 2007 21:41:18 +0200 Subject: [SciPy-user] sparse matrix dtype In-Reply-To: References: Message-ID: <20071009194118.GE11322@mentat.za.net> On Tue, Oct 09, 2007 at 03:03:20PM +0100, Robin wrote: > > On 10/9/07, Robin wrote: > > I am trying to make a large sparse matrix - the values are all integers (in > fact all non-zeros will be 1) so it would save me a lot of memory if I > could use dtype=byte. > > > I added 'b' to the string of allowed dtypes in getdtype() on line 2791 of > sparse.py. > > It now seems to behave as I would expect (hope), but it can't be that simple > can it? > > Is it likely that doing this will break something else? Why are the dtypes > restricted in the first place. Like I mentioned before, this is on the TODO list: http://projects.scipy.org/scipy/scipy/ticket/225 For most situations, changing the line you did should work. I haven't looked into what effect it will have on the routines implemented in C++. 
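For later readers: the dtype restriction Robin patched around was eventually lifted (the TODO in ticket 225), and current SciPy accepts small integer dtypes directly. A quick sketch, assuming a recent SciPy:

```python
import numpy as np
from scipy import sparse

# A 0/1 incidence-style matrix stored with a one-byte dtype to save memory.
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 2, 0, 1])
data = np.ones(4, dtype=np.int8)
m = sparse.csr_matrix((data, (rows, cols)), shape=(3, 3))
print(m.dtype, m.nnz)
```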
Stéfan

From tjhnson at gmail.com Tue Oct 9 16:39:17 2007
From: tjhnson at gmail.com (Tom Johnson)
Date: Tue, 9 Oct 2007 13:39:17 -0700
Subject: [SciPy-user] hash function on arrays
Message-ID: 

Hi,

I need to do comparisons on a very large number of arrays. I was thinking
that it might speed things up if I first compared on a hash value... and
then compared those arrays with the same hash value (if needed).

I am wondering:

1) about other methods for doing this...
2) how many collisions I can expect... (that is, will this technique be
effective?)
3) what the hash function is (specific to scipy arrays, or just Python's
builtin?)

Thanks.

From dineshbvadhia at hotmail.com Tue Oct 9 16:48:07 2007
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Tue, 9 Oct 2007 13:48:07 -0700
Subject: [SciPy-user] Solution y <- Ax, where A is sparse
Message-ID: 

Hello! Coincidentally, I too am trying (or rather struggling) to write
Python code to solve the problem:

    y <- Ax

where A is an m (column) by n (row) sparse matrix (with m >> n) whose
entries are either 0's or 1's (with no discernible pattern, i.e. assume
random entries), x is a floating-point vector and so is the result vector
y. I was looking at the SciPy sparse matrix functions (see
http://www.rexx.com/~dkuhlman/scipy_course_01.html#sparse) and will look
at PySparse. I was hoping to find in the Python world something as simple
to use as the SparseLib++ library for C++ (see
http://math.nist.gov/sparselib++/), which we have used successfully for
many years. In the meantime, any help and pointers would be truly
appreciated.

Dinesh

-------------- next part --------------
An HTML attachment was scrubbed...
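For the y <- Ax problem described above, scipy.sparse handles this directly; a sketch with a random 0/1 pattern (the sizes and density are illustrative):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
m, n = 1000, 50                      # m >> n, as in the question
A = sparse.random(m, n, density=0.05, format='csr', random_state=0)
A.data[:] = 1.0                      # make all stored entries equal to 1
x = rng.standard_normal(n)           # floating-point input vector
y = A @ x                            # sparse matrix-vector product
print(y.shape)
```

The product returns a dense ndarray y; only the nonzero pattern of A is stored.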
URL: From robert.kern at gmail.com Tue Oct 9 17:16:08 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Oct 2007 16:16:08 -0500 Subject: [SciPy-user] hash function on arrays In-Reply-To: References: Message-ID: <470BEF98.6080400@gmail.com> Tom Johnson wrote: > Hi, > > I need to do comparisons on a very large number of arrays. I was > thinking that it might speed things up if I first compared on a hash > value...and then compared on those arrays with the same hash value (if > needed) > > I am wondering: > > 1) about other methods for doing this... > 2) how many collisions I can expect...(that is, will this technique be > effective) > 3) what is the hash function (specific to scipy arrays or just python's builtin) numpy arrays do not have a hash function defined because they are mutable. You can probably construct one suitable to your purpose if you are careful. If you are looking for exact matches, then you could hash a tuple like so: ('numpy.ndarray', a.shape, a.dtype, a.strides, str(a.flags), buffer(a)) Note that this means that a0=arange(10) won't match a1=arange(10).astype(float) although (a0 == a1).all() is True. If you do want to accept a0 and a1 as equal, then you have a harder problem. You will want to look at Objects/stringobject.c:string_hash() and Objects/tupleobject.c:tuplehash() for the hash function implementations for strings and tuples in order to determine how good this will be. Most likely, you'll just have to try it and see. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Tue Oct 9 17:57:20 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 9 Oct 2007 14:57:20 -0700 (PDT) Subject: [SciPy-user] Problem: Quadrature - causes printing in function call. 
In-Reply-To: 
Message-ID: <173450.85774.qm@web34411.mail.mud.yahoo.com>

I am using the scipy.integrate function 'quadrature' (Python 2.4, scipy
version '0.5.2.1'). The function seems to work so far, but it causes a
printout (to stdout) of the number of points the quadrature calculation
took, e.g.:

    Took 35 points.

This seems to be happening internal to the quadrature function and I
don't see how to turn it off. Does anyone know how to turn it off, or can
anyone 'correct' the problem -- maybe it's a relic of a debugging session
when it was written? Thanks for any help.

-- Lou Pecora, my views are my own.

From tjhnson at gmail.com Tue Oct 9 18:07:29 2007
From: tjhnson at gmail.com (Tom Johnson)
Date: Tue, 9 Oct 2007 15:07:29 -0700
Subject: [SciPy-user] hash function on arrays
In-Reply-To: <470BEF98.6080400@gmail.com>
References: <470BEF98.6080400@gmail.com>
Message-ID: 

On 10/9/07, Robert Kern wrote:
>
> ('numpy.ndarray', a.shape, a.dtype, a.strides, str(a.flags), buffer(a))
>

Will this work for arrays defined in different python processes?

I will be storing these hash values (along with the matrices) in a
database and doing comparisons at some later time, in some other
python process.

From spmcinerney at hotmail.com Tue Oct 9 18:13:12 2007
From: spmcinerney at hotmail.com (Stephen McInerney)
Date: Tue, 9 Oct 2007 15:13:12 -0700
Subject: [SciPy-user] Scientific Python publications
In-Reply-To: 
References: 
Message-ID: 

Would it be useful to collect citations on a publications page on
scipy.org? (hyperlinked if possible)

Regards,
Stephen

-------------- next part --------------
An HTML attachment was scrubbed...
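On the quadrature question: the "Took N points." message came from a bare print statement inside scipy.integrate.quadrature and was removed in later releases. Until then, a general workaround is to redirect stdout around the call; sketched here with a hypothetical noisy routine standing in for quadrature:

```python
import io
from contextlib import redirect_stdout

def noisy_quadrature(f, a, b, n=35):
    """Hypothetical stand-in for a routine that prints a diagnostic."""
    print(f"Took {n} points.")
    h = (b - a) / n
    # composite midpoint rule, just so the function returns something real
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

buf = io.StringIO()
with redirect_stdout(buf):                 # swallow the unwanted print
    val = noisy_quadrature(lambda x: x**2, 0.0, 1.0)
print(buf.getvalue().strip())              # the captured message
```

The computed value is unaffected; only the diagnostic text is captured in buf.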
URL: From robert.kern at gmail.com Tue Oct 9 18:33:39 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Oct 2007 17:33:39 -0500 Subject: [SciPy-user] hash function on arrays In-Reply-To: References: <470BEF98.6080400@gmail.com> Message-ID: <470C01C3.1070703@gmail.com> Tom Johnson wrote: > On 10/9/07, Robert Kern wrote: >> ('numpy.ndarray', a.shape, a.dtype, a.strides, str(a.flags), buffer(a)) > > Will this work for arrays defined in different python processes? > > I will be storing these hash values (along with the matrices) in a > database and doing comparisons at some later time, in some other > python process. It depends on how you serialize the arrays and whether or not you care about things like contiguity. Some ways of serializing might store contiguous versions of discontiguous inputs. You may want to consider making sure that the arrays are contiguous and of a fixed byteorder before taking the hash or storing the array. But nothing should be relying on the memory address of the data or any Python object. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Oct 9 18:49:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Oct 2007 17:49:15 -0500 Subject: [SciPy-user] hash function on arrays In-Reply-To: References: <470BEF98.6080400@gmail.com> Message-ID: <470C056B.1020009@gmail.com> Tom Johnson wrote: > On 10/9/07, Robert Kern wrote: >> ('numpy.ndarray', a.shape, a.dtype, a.strides, str(a.flags), buffer(a)) > > Will this work for arrays defined in different python processes? > > I will be storing these hash values (along with the matrices) in a > database and doing comparisons at some later time, in some other > python process. Sorry, the hash of the dtype does depend on the pointer. 
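Pulling the thread's advice together — normalize contiguity and byte order, and hash a.dtype.str rather than the dtype object — an exact-match hash might look like the sketch below. array_hash is a hypothetical helper, not a NumPy API, and buffer() from the original recipe is spelled tobytes() in Python 3:

```python
import numpy as np

def array_hash(a):
    """Hash an ndarray for exact-match lookups (hypothetical helper)."""
    a = np.ascontiguousarray(a)                  # normalize strides/contiguity
    if a.dtype.byteorder not in ('=', '|'):      # normalize to native byte order
        a = a.astype(a.dtype.newbyteorder('='))
    # dtype.str avoids the pointer-dependent hash of the dtype object itself
    return hash(('numpy.ndarray', a.shape, a.dtype.str, a.tobytes()))

a0 = np.arange(10)
a1 = np.repeat(np.arange(10), 2)[::2]            # same values, strided view
print(array_hash(a0) == array_hash(a1))
```

Arrays with equal contents but different dtypes (e.g. int vs. float) still hash differently, matching the "exact match" semantics discussed above.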
For the natively-supported types on your machine (dtype(float), dtype('=i4'), etc.), this shouldn't matter since they appear to be shortcutted such that you get the same object out always. However, non-native types like byteswapped versions and custom dtypes give different hashes every time. If you want to support the non-native types, then you have to expand the dtype somewhat. If you only need to support byteswapped versions of the usual datatypes, you can probably just use a.dtype.str. If you need to support simples record arrays, tupe(a.dtype.descr) will probably work. If you need to support nested record arrays ... you have some more work ahead of you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From vincent.nijs at gmail.com Tue Oct 9 22:30:01 2007 From: vincent.nijs at gmail.com (Vincent) Date: Wed, 10 Oct 2007 02:30:01 -0000 Subject: [SciPy-user] Multivariate regression? In-Reply-To: <1e2af89e0710061034i60d4afd8ucd95d11cc88375af@mail.gmail.com> References: <525f23e80710060713t669aa894gdd3983fdace702a1@mail.gmail.com> <1e2af89e0710061034i60d4afd8ucd95d11cc88375af@mail.gmail.com> Message-ID: <1191983401.219641.148820@50g2000hsm.googlegroups.com> Interesting. When did this get added to scipy? It is not in version 0.5.3.dev3227. In [14]: import scipy.stats.models ImportError: No module named models Is there any documentation for the module? Thanks, Vincent On Oct 6, 12:34 pm, "Matthew Brett" wrote: > Hi, > > > > I study econometrics and statistics, and i wanted to try out my > > > favourite programming as a scientific tool. Unfortunately, I could not > > > find a function for ordinary least squares estimation. The > > > stats.linregress() is insufficient for my needs, and stats.glm() seems > > > to be for something else (not sure really). 
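For later readers: the OLS question above can also be answered with plain NumPy. A minimal ordinary-least-squares sketch via numpy.linalg.lstsq (the design matrix and coefficients are illustrative; scipy.stats.models itself later left SciPy and became the separate statsmodels package):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
covariates = rng.normal(size=(n, 3))
design = np.c_[covariates, np.ones(n)]        # regressors plus an intercept
beta_true = np.array([1.5, -2.0, 0.5, 3.0])
data = design @ beta_true + 0.01 * rng.normal(size=n)

# Ordinary least squares: argmin_beta ||design @ beta - data||^2
beta, residuals, rank, sv = np.linalg.lstsq(design, data, rcond=None)
print(np.round(beta, 2))                      # close to beta_true
```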
> > I think you may want the OLS model in scipy.stats.models: > > import numpy as N > import scipy.stats.models as SSM > > covariates = N.random.normal(size=(100,3)) > intercept = N.ones((100,1)) > design = N.c_[covariates, intercept] > data = N.random.normal(size=(100,1)) > > model = SSM.regression.ols_model(design) > results = model.fit(data) > > results.beta > > The models package is still being developed, but it already does the > basic stuff, > > Best, > > Matthew > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.orghttp://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Tue Oct 9 22:50:32 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Oct 2007 21:50:32 -0500 Subject: [SciPy-user] Multivariate regression? In-Reply-To: <1191983401.219641.148820@50g2000hsm.googlegroups.com> References: <525f23e80710060713t669aa894gdd3983fdace702a1@mail.gmail.com> <1e2af89e0710061034i60d4afd8ucd95d11cc88375af@mail.gmail.com> <1191983401.219641.148820@50g2000hsm.googlegroups.com> Message-ID: <470C3DF8.9060106@gmail.com> Vincent wrote: > Interesting. When did this get added to scipy? It is not in version > 0.5.3.dev3227. After 0.6, I believe. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zpincus at stanford.edu Tue Oct 9 23:19:13 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 9 Oct 2007 23:19:13 -0400 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: <20071008224633.GB21653@mentat.za.net> References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> Message-ID: <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> > PIL 1.1.6 is still broken to some extent. 
I sent a patch to the Image > SIG: > > http://mail.python.org/pipermail/image-sig/2007-August/004570.html > > Unfortunately, there was no reaction. I sent a very similar patch a while ago, and got similar results (i.e. no response). Moreover, the PIL is extremely inconsistent about its handling of 16- bit integer images (important for many scientific uses), and has various byte-swapping bugs for these on little-endian architectures. There was also little interest in my patches about those issues... I have for a while been toying with taking the image format parsers (all in pure python) from the PIL and hooking that up to numpy for actually unpacking the bits from the file. (Right now, the PIL uses some custom and occasionally questionable C code to do this.) I've made no progress, but I think it would be possible to do so with not too much fuss. Would there be any interest in something like this? My current solution is a fork of PIL that I made, which rips out basically everything except image IO, and for which I fixed the 16- bit image problems and drastically beefed up the numpy compatibility. I'd be happy to send this to anyone who desires, or if there was a specific clamor, make it a scikit or something. Is there any interest regarding that either? Zach Pincus On Oct 8, 2007, at 6:46 PM, Stefan van der Walt wrote: > On Mon, Oct 08, 2007 at 02:21:39PM +0000, Sebastian wrote: >> Robert Kern gmail.com> writes: >>> Exactly what version of scipy are you talking about? The one I >>> have from a >>> recent SVN checkout works fine. It was modified to its current >>> form on >>> 2007-08-28. It might also be that this code relies on behavior of >>> recent >>> releases of PIL. I am using PIL 1.1.6b2. What are you using? >>> >> >> Hello, >> I used the PIL 1.1.5. After installing the PIL 1.1.6 the problem >> was solved. >> Thanks for your kind help. > > PIL 1.1.6 is still broken to some extent. 
I sent a patch to the Image > SIG: > > http://mail.python.org/pipermail/image-sig/2007-August/004570.html > > Unfortunately, there was no reaction. > > Cheers > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From aisaac at american.edu Wed Oct 10 02:29:17 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 10 Oct 2007 02:29:17 -0400 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> References: <47057436.3060600@gmail.com><20071008224633.GB21653@mentat.za.net><0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> Message-ID: On Tue, 9 Oct 2007, Zachary Pincus apparently wrote: > My current solution is a fork of PIL that I made, which > rips out basically everything except image IO, and for > which I fixed the 16- bit image problems and drastically > beefed up the numpy compatibility. I'd be happy to send > this to anyone who desires, or if there was a specific > clamor, make it a scikit or something. Is there any > interest regarding that either? Is Pythonware not interested in the fixes? IMO, making this available as a SciKit is desirable only if the better solution, a fixed PIL, is not feasible. Cheers, Alan Isaac From haase at msg.ucsf.edu Wed Oct 10 03:18:14 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 10 Oct 2007 09:18:14 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> Message-ID: Hi, Somewhat off-topic: Does anyone here maybe know, how one could add write support for mulit-page TIFF files ? PIL can already read them. I use them for 3D images. But I can't write them. I tried reading the TIFF specs, but am getting lost every time ... 
;-) Furthermore, more numpy related: Is it conceivable to open a (multi-page) TIFF in a memmapped-way ? For a different, easier, file-format, I am using this and it greatly increases the file-open speed. Thanks, Sebastian Haase On 10/10/07, Alan G Isaac wrote: > On Tue, 9 Oct 2007, Zachary Pincus apparently wrote: > > My current solution is a fork of PIL that I made, which > > rips out basically everything except image IO, and for > > which I fixed the 16- bit image problems and drastically > > beefed up the numpy compatibility. I'd be happy to send > > this to anyone who desires, or if there was a specific > > clamor, make it a scikit or something. Is there any > > interest regarding that either? > > Is Pythonware not interested in the fixes? > IMO, making this available as a SciKit is desirable > only if the better solution, a fixed PIL, is not > feasible. > > Cheers, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From sinclaird at ukzn.ac.za Wed Oct 10 03:40:20 2007 From: sinclaird at ukzn.ac.za (Scott Sinclair) Date: Wed, 10 Oct 2007 09:40:20 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> Message-ID: <470C9D89.F934.009F.0@ukzn.ac.za> If you don't get your head around libtiff then the Python bindings to ImageMagick might be more friendly. http://www.imagemagick.org/script/index.php http://www.imagemagick.org/download/python/ I've never used them, but they look promising. Cheers, Scott >>> "Sebastian Haase" 10/10/2007 09:18 >>> Hi, Somewhat off-topic: Does anyone here maybe know, how one could add write support for mulit-page TIFF files ? PIL can already read them. I use them for 3D images. But I can't write them. 
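On the multi-page TIFF question: PIL's maintained fork, Pillow, later gained write support through save_all and append_images. A sketch, assuming a recent Pillow is installed:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Treat a small 3-D uint8 stack as a sequence of 2-D frames.
stack = (np.random.default_rng(0).random((5, 64, 64)) * 255).astype(np.uint8)
frames = [Image.fromarray(plane) for plane in stack]

# save_all=True writes the first frame plus everything in append_images.
path = os.path.join(tempfile.mkdtemp(), "stack.tif")
frames[0].save(path, save_all=True, append_images=frames[1:])

with Image.open(path) as im:
    n = im.n_frames
print(n)
```

Reading the pages back is the inverse: Image.open followed by seek(i) per frame.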
I tried reading the TIFF specs, but am getting lost every time ... ;-) Furthermore, more numpy related: Is it conceivable to open a (multi-page) TIFF in a memmapped-way ? For a different, easier, file-format, I am using this and it greatly increases the file-open speed. Thanks, Sebastian Haase Please find our Email Disclaimer here: http://www.ukzn.ac.za/disclaimer/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From et.gaudrain at free.fr Wed Oct 10 04:36:21 2007 From: et.gaudrain at free.fr (Etienne Gaudrain) Date: Wed, 10 Oct 2007 10:36:21 +0200 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux -- my mistake ! Message-ID: <470C8F05.30706@free.fr> Hi ! Sorry to re-activate this post if it is really a false issue... but I do have the same error. I restarted all consoles, I even restarted Windows, but I still have the error when I do from scipy import * : Traceback (most recent call last): File "toto.py", line 1, in from scipy import * File "C:\Python25\Lib\site-packages\scipy\linsolve\__init__.py", line 5, in import umfpack File "C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\__init__.py", line 3, in from umfpack import * File "C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\umfpack.py", line 11, in import scipy.sparse as sp File "C:\Python25\Lib\site-packages\scipy\sparse\__init__.py", line 5, in from sparse import * File "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", line 21, in from scipy.sparse.sparsetools import cscmux, csrmux, \ ImportError: cannot import name cscmux Can you please give me some details of what you exaclty did... even if it seems simple... I use Python 2.5.1 on Windows XP 32bits, Scipy version is 0.6.0 and Numpy is 1.0.3.1. Thanks ! -Etienne -----Original Message----- From: Jim Vickroy noaa.gov> Subject: Re: scipy 0.6.0 ImportError: cannot import name cscmux -- my mistake ! 
Newsgroups: gmane.comp.python.scientific.user Date: 2007-09-21 18:52:16 GMT (2 weeks, 4 days, 13 hours and 38 minutes ago) The import error did not occur when I restarted my IDE -- which is where I was working when it occurred. I apologize for raising this false issue. -- jv -----Original Message----- From: scipy-user-bounces scipy.org [mailto:scipy-user-bounces scipy.org] On Behalf Of Jim Vickroy Sent: Friday, September 21, 2007 12:01 PM To: 'SciPy Users List' Subject: Re: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux -----Original Message----- From: scipy-user-bounces scipy.org [mailto:scipy-user-bounces scipy.org] On Behalf Of Jarrod Millman Sent: Friday, September 21, 2007 9:10 AM To: SciPy Users List Subject: Re: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux On 9/21/07, Jim Vickroy noaa.gov> wrote: > I've just installed scipy 0.6.0 via scipy-0.6.0.win32-py2.5.msi. The > installation appeared to proceed normally, but the scipy import had the > following result: Hey Jim, Could you try installing the exe to see if it is a problem with just the msi? OK, I just uninstalled the "msi" version of scipy 0.6.0 and reinstalled using scipy-0.6.0.win32-py2.5.exe. The exact same import exception is raised with the "exe" version. Also, what version of NumPy are you running? Here are particulars about my system: >>> import numpy >>> numpy.__version__ '1.0.3.1' >>> import sys >>> sys.version '2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)]' >>> import scipy >>> scipy.__version__ '0.6.0' >>> Don't know if this is important but notice that "import scipy" does not generate an exception while "from scipy import *" does. Thanks for your interest. 
-- jv Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ _______________________________________________ SciPy-user mailing list SciPy-user scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Etienne Gaudrain Universite Claude Bernard LYON 1 CNRS - UMR5020, Neurosciences Sensorielles, Comportement, Cognition 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 85 Fax : 04 37 28 76 01 Page web equipe : http://olfac.univ-lyon1.fr/unite/equipe-02/equipe-2-f.html ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From calhoun at amath.washington.edu Wed Oct 10 06:46:19 2007 From: calhoun at amath.washington.edu (Donna Calhoun) Date: Wed, 10 Oct 2007 10:46:19 +0000 (UTC) Subject: [SciPy-user] BLAS and srotgm References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> Message-ID: > > I removed the library flags to libc, and libutil and the build picked up the > > correct blas library.> > I'm curious as to why you had -L/usr/lib in there. Did it come from Python's > build or do you have an LDFLAGS environment variable sitting around that's > interfering? > (sorry for the long delay in a reply) Yes, I had added /usr/lib to my LDFLAGS environment variable. Before I figured out that I needed the "-shared" flag, I was getting lots of undefined references to things from posix libraries, etc. So I added references to c libs. But then I came across a post that hinted at the "-shared" flag, and that solved all the undefined references problems. If in fact, the "-shared" flag is required, is there a mention of this somewhere in the install instructions? It took me two days to figure out what the problem was. I was installing version 0.5.2.1. 
I have since gone on to install 0.6.0 on another machine, and it seems as though this version didn't need the flag (although I don't remmber exactly whether I may have in fact had it set). Did anything change from the last version to this one? For some reason, I didn't need this flag with numpy. Thank you again, Donna From haase at msg.ucsf.edu Wed Oct 10 07:21:51 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 10 Oct 2007 13:21:51 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: <470C9D89.F934.009F.0@ukzn.ac.za> References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> <470C9D89.F934.009F.0@ukzn.ac.za> Message-ID: On 10/10/07, Scott Sinclair wrote: > > If you don't get your head around libtiff then the Python bindings to > ImageMagick might be more friendly. > > http://www.imagemagick.org/script/index.php > > http://www.imagemagick.org/download/python/ > > I've never used them, but they look promising. Thanks for the hint. When I looked into the python binding of imagemagick it looked very scary !! Looked like a very old - if not dead - project. Somewhere I read the library itself, was (potentially?) going to change - i.e. was not stable yet. Is imagemagick python binding really a good idea ? Thanks, Sebastian > >>> "Sebastian Haase" 10/10/2007 09:18 >>> > Hi, > Somewhat off-topic: > Does anyone here maybe know, how one could add > write support for mulit-page TIFF files ? > PIL can already read them. I use them for 3D images. But I can't write > them. > I tried reading the TIFF specs, but am getting lost every time ... ;-) > > Furthermore, more numpy related: Is it conceivable to open a (multi-page) > TIFF > in a memmapped-way ? For a different, easier, file-format, I am using > this and it greatly increases the file-open speed. 
> > Thanks, > Sebastian Haase > > > > > Please find our Email Disclaimer here-->: > http://www.ukzn.ac.za/disclaimer > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From aisaac at american.edu Wed Oct 10 10:36:39 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 10 Oct 2007 10:36:39 -0400 Subject: [SciPy-user] =?utf-8?q?could_a_U=2ES=2E_citizen_help=3F_=28connec?= =?utf-8?q?ting_PyTrilinos_to=09OpenOpt=29?= In-Reply-To: <470BBD04.6070005@scipy.org> References: <470BBD04.6070005@scipy.org> Message-ID: On Tue, 09 Oct 2007, dmitrey apparently wrote: > does anyone here interested in connecting NOX (more > powerfull scipy.optimize.fsolve equivalent with ability of > parallel calculations), MOOCHO (constrained NLP solver, > same) (and maybe other solvers in future) to OpenOpt? > I send a letter to PyTrilinos developers and here are > 2 letters received from them (see below, as well as mine). > Unfortunately, I have no possibility in visiting for the > Trilinos Users Group meeting - even not enough documents > of Ukraine citizen to apply for U.S. visa. > What would you recommend me to answer? Since you cannot attend, and nobody on this list has stepped forward, I recommend drafting to the developer a note that explains that - regrettably, attending is not possible for you. Also ask if there is a participant with whom you might have further discussions - that you are not getting enough guidance from the existing documentation - that if there is a documentation sprint in Albuqurque you would find it very helpful to be in contact with the participants of that sprint. Note that you might be able to point out shortcomings of the docs from the point of view of a new user. - that what would be really helpful right now is several working examples of NOX and MOOCHO usage, with *all* default settings. 
You can point to your nice examples at as an example of what you need. Finally, see if you can get James interested in helping you push this forward. Cheers, Alan Isaac From robert.kern at gmail.com Wed Oct 10 11:10:53 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Oct 2007 10:10:53 -0500 Subject: [SciPy-user] BLAS and srotgm In-Reply-To: References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> Message-ID: <470CEB7D.1080402@gmail.com> Donna Calhoun wrote: >>> I removed the library flags to libc, and libutil and the build picked up the >>> correct blas library.> >> I'm curious as to why you had -L/usr/lib in there. Did it come from Python's >> build or do you have an LDFLAGS environment variable sitting around that's >> interfering? > > (sorry for the long delay in a reply) > > Yes, I had added /usr/lib to my LDFLAGS environment variable. Before I > figured out that I needed the "-shared" flag, I was getting lots of undefined > references to things from posix libraries, etc. So I added references to c > libs. But then I came across a post that hinted at the "-shared" flag, and that > solved all the undefined references problems. > > If in fact, the "-shared" flag is required, is there a mention of this > somewhere in the install instructions? It took me two days to figure > out what the problem was. > > I was installing version 0.5.2.1. I have since gone on to install 0.6.0 on > another machine, and it seems as though this version didn't need the flag > (although I don't remmber exactly whether I may have in fact had it set). Did > anything change from the last version to this one? For some reason, I didn't > need this flag with numpy. The problem is that you set LDFLAGS, which *overrides* the linking arguments for FORTRAN extension modules, even the flags that are added by Python itself like -shared. If you leave it unset, things should be fine. Using site.cfg is the appropriate way to add the usual -L and -l flags. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jdh2358 at gmail.com Wed Oct 10 13:12:17 2007 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 10 Oct 2007 12:12:17 -0500 Subject: [SciPy-user] skew and kurtosis return different types Message-ID: <88e473830710101012m66ebdff7u57e5a3ec0ec19548@mail.gmail.com> Is it right and good that the skew and kurtosis functions in scipy.stats return different types? It seems that both should return float In [68]: import numpy; print numpy.__version__ 1.0.4.dev4154 In [69]: import scipy; print scipy.__version__ 0.7.0.dev3412 In [74]: import scipy.stats In [75]: a = numpy.arange(100) In [76]: print "skew return type=%s" % type(scipy.stats.skew(a)) skew return type= In [77]: print "kurtosis return type=%s" % type(scipy.stats.kurtosis(a)) kurtosis return type= From calhoun at amath.washington.edu Wed Oct 10 13:00:35 2007 From: calhoun at amath.washington.edu (Donna Calhoun) Date: Wed, 10 Oct 2007 17:00:35 +0000 (UTC) Subject: [SciPy-user] BLAS and srotgm References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> <470CEB7D.1080402@gmail.com> Message-ID: > Using site.cfg is the appropriate way to add the usual -L and -l flags. > Thank you for your reply. How do I set these flags in a site.cfg file? Donna From millman at berkeley.edu Wed Oct 10 14:20:33 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 10 Oct 2007 11:20:33 -0700 Subject: [SciPy-user] skew and kurtosis return different types In-Reply-To: <88e473830710101012m66ebdff7u57e5a3ec0ec19548@mail.gmail.com> References: <88e473830710101012m66ebdff7u57e5a3ec0ec19548@mail.gmail.com> Message-ID: Hey John, I agree moments of a distribution should be of type float. 
Currently, it is for everything except skew and Pearson's kurtosis: >>> scipy.stats.moment(a) 0.0 >>> scipy.stats.variation(a) 0.58315293025701254 >>> scipy.stats.skew(a) array(0.0) >>> scipy.stats.kurtosis(a) -1.2002400240024003 >>> scipy.stats.kurtosis(a,fisher=False) array(1.7997599759975997) We could fix this by either having the functions subtract zero or make a method call: >>> scipy.stats.skew(a) array(0.0) >>> scipy.stats.skew(a) - 0 0.0 >>> scipy.stats.skew(a).tolist() 0.0 >>> scipy.stats.skew(a).item() 0.0 Does it make any difference if we do it one way or another? Are there any situations where returning an array with 1 element is importantly different than returning a float? If we want all the moments to return values of the same type, my preference would be to use the .item method. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Wed Oct 10 14:30:38 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Oct 2007 13:30:38 -0500 Subject: [SciPy-user] BLAS and srotgm In-Reply-To: References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> <470CEB7D.1080402@gmail.com> Message-ID: <470D1A4E.5010104@gmail.com> Donna Calhoun wrote: > >> Using site.cfg is the appropriate way to add the usual -L and -l flags. >> > > Thank you for your reply. > > How do I set these flags in a site.cfg file? numpy and scipy come with a fairly complete example in site.cfg.example. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stefan at sun.ac.za Wed Oct 10 14:33:00 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 10 Oct 2007 20:33:00 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> Message-ID: <20071010183300.GB4063@mentat.za.net> Hi Zachary On Tue, Oct 09, 2007 at 11:19:13PM -0400, Zachary Pincus wrote: > > PIL 1.1.6 is still broken to some extent. I sent a patch to the Image > > SIG: > > > > http://mail.python.org/pipermail/image-sig/2007-August/004570.html > > > > Unfortunately, there was no reaction. > > I sent a very similar patch a while ago, and got similar results > (i.e. no response). > > Moreover, the PIL is extremely inconsistent about its handling of 16- > bit integer images (important for many scientific uses), and has > various byte-swapping bugs for these on little-endian architectures. > There was also little interest in my patches about those issues... > > I have for a while been toying with taking the image format parsers > (all in pure python) from the PIL and hooking that up to numpy for > actually unpacking the bits from the file. (Right now, the PIL uses > some custom and occasionally questionable C code to do this.) I've > made no progress, but I think it would be possible to do so with not > too much fuss. Would there be any interest in something like this? > > My current solution is a fork of PIL that I made, which rips out > basically everything except image IO, and for which I fixed the 16- > bit image problems and drastically beefed up the numpy compatibility. > I'd be happy to send this to anyone who desires, or if there was a > specific clamor, make it a scikit or something. Is there any interest > regarding that either? 
Currently, we have 3 systems involved here:
- scipy.misc.pilutil
- scipy.ndimage
- PIL
Jarrod Millman mentioned that they will soon start a rewrite/major refactoring of ndimage. I think we should merge all the image processing and I/O abilities currently available in scipy into the new package. Ndimage was written before numpy reached 1.0, and duplicates a lot of functionality that can now be written in Python. Converting the code from C to Python will certainly lessen the burden on developers. I hope that your code finds its way into the new ndimage (it may help if you make the patches available now -- or if you write a wiki page about the deficiencies you noticed). I'm keeping my eyes open, and hope to see some more messages on the topic from the nipy guys. Cheers Stéfan From haase at msg.ucsf.edu Wed Oct 10 15:03:35 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 10 Oct 2007 21:03:35 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: <20071010183300.GB4063@mentat.za.net> References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> <20071010183300.GB4063@mentat.za.net> Message-ID: On 10/10/07, Stefan van der Walt wrote: > Hi Zachary > > On Tue, Oct 09, 2007 at 11:19:13PM -0400, Zachary Pincus wrote: > > > PIL 1.1.6 is still broken to some extent. I sent a patch to the Image > > > SIG: > > > > > > http://mail.python.org/pipermail/image-sig/2007-August/004570.html > > > > > > Unfortunately, there was no reaction. > > > > I sent a very similar patch a while ago, and got similar results > > (i.e. no response). > > > > Moreover, the PIL is extremely inconsistent about its handling of 16-bit > > integer images (important for many scientific uses), and has > > various byte-swapping bugs for these on little-endian architectures. > > There was also little interest in my patches about those issues...
> > > > I have for a while been toying with taking the image format parsers > > (all in pure python) from the PIL and hooking that up to numpy for > > actually unpacking the bits from the file. (Right now, the PIL uses > > some custom and occasionally questionable C code to do this.) I've > > made no progress, but I think it would be possible to do so with not > > too much fuss. Would there be any interest in something like this? > > > > My current solution is a fork of PIL that I made, which rips out > > basically everything except image IO, and for which I fixed the 16-bit > > image problems and drastically beefed up the numpy compatibility. > > I'd be happy to send this to anyone who desires, or if there was a > > specific clamor, make it a scikit or something. Is there any interest > > regarding that either? > > Currently, we have 3 systems involved here: > > - scipy.misc.pilutil > - scipy.ndimage > - PIL > > Jarrod Millman mentioned that they will soon start a rewrite/major > refactoring of ndimage. I think we should merge all the image > processing and I/O abilities currently available in scipy into the new > package. Ndimage was written before numpy reached 1.0, and duplicates > a lot of functionality that can now be written in Python. Converting > the code from C to Python will certainly lessen the burden on > developers. > > I hope that your code finds its way into the new ndimage (it may help > if you make the patches available now -- or if you write a > wiki page about the deficiencies you noticed). I'm keeping my eyes > open, and hope to see some more messages on the topic from the nipy > guys. > Please keep in mind that "image" in ndimage does not refer to the kind of "image" as in "image file format" -- those are two entirely different things. This was discussed here on this list before, some time ago.
Cheers, Sebastian Haase From zpincus at stanford.edu Wed Oct 10 15:19:48 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 10 Oct 2007 15:19:48 -0400 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: References: <47057436.3060600@gmail.com><20071008224633.GB21653@mentat.za.net><0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> Message-ID: > On Tue, 9 Oct 2007, Zachary Pincus apparently wrote: >> My current solution is a fork of PIL that I made, which >> rips out basically everything except image IO, and for >> which I fixed the 16- bit image problems and drastically >> beefed up the numpy compatibility. I'd be happy to send >> this to anyone who desires, or if there was a specific >> clamor, make it a scikit or something. Is there any >> interest regarding that either? > > Is Pythonware not interested in the fixes? As far as I can tell there was no interest in any of the patches that I put forward (either for simple or larger issues). Stefan seems to have had a similar experience. > IMO, making this available as a SciKit is desirable > only if the better solution, a fixed PIL, is not > feasible. I agree entirely. The PIL folks said that for PIL 2.0 (is this even real, or is it mythical, I do not know) they were re-working the memory model, which should ameliorate some of the 16-bit image problems, and will probably comply better with the new buffer interface. But I have no idea when that might actually be happening, and it sounds like the developers aren't interested in addressing the issues in the mean time. 
Zach From stefan at sun.ac.za Wed Oct 10 16:20:32 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 10 Oct 2007 22:20:32 +0200 Subject: [SciPy-user] fromimage returns instance instead of numpy array In-Reply-To: References: <47057436.3060600@gmail.com> <20071008224633.GB21653@mentat.za.net> <0085CA32-F096-4DF4-A78E-0C0ECC813B8E@stanford.edu> <20071010183300.GB4063@mentat.za.net> Message-ID: <20071010202032.GD4063@mentat.za.net> Hi Sebastian On Wed, Oct 10, 2007 at 09:03:35PM +0200, Sebastian Haase wrote: > Please keep in mind that "image" in ndimage does not refer to the kind > of "image" as in "image file format" -- those are two entirely > different things. > This was discussed here on this list before, some time ago. I must have missed that discussion, would you mind clarifying? Thanks Stéfan From stefan at sun.ac.za Wed Oct 10 20:29:40 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 11 Oct 2007 02:29:40 +0200 Subject: [SciPy-user] skew and kurtosis return different types In-Reply-To: References: <88e473830710101012m66ebdff7u57e5a3ec0ec19548@mail.gmail.com> Message-ID: <20071011002940.GA20225@mentat.za.net> On Wed, Oct 10, 2007 at 11:20:33AM -0700, Jarrod Millman wrote: > We could fix this by either having the functions subtract zero or make > a method call: > >>> scipy.stats.skew(a) > array(0.0) > >>> scipy.stats.skew(a) - 0 > 0.0 > >>> scipy.stats.skew(a).tolist() > 0.0 > >>> scipy.stats.skew(a).item() > 0.0 > > Does it make any difference if we do it one way or another? Are there > any situations where returning an array with 1 element is importantly > different than returning a float? If we want all the moments to > return values of the same type, my preference would be to use the > .item method. The .item method won't work in cases where more than one value is returned, e.g. scipy.stats.skew(numpy.random.random((3,3))) .tolist() would work, but then we don't return numpy arrays any more.
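The 0-d-array subtleties under discussion are easy to demonstrate with numpy alone; a short sketch (behaviour shown against a recent numpy, so the printed type names may differ slightly from the 2007-era interpreter output quoted above):

```python
import numpy as np

x = np.array(0.0)        # a 0-d array, like the value skew() returns
print(type(x))           # an ndarray, even though it holds a single number
print(type(x.item()))    # .item() extracts a plain Python float
print(type(x - 0))       # subtracting zero leaves a numpy scalar, not an ndarray

y = np.random.random((3, 3))
s = y.tolist()           # .tolist() works for any shape...
print(type(s))           # ...but the result is a list, not an ndarray

try:
    y.item()             # .item() only works on single-element arrays
except ValueError:
    print("item() raises ValueError on multi-element arrays")
```

This is the trade-off in a nutshell: .item() gives a true scalar but fails on multi-element results, while .tolist() always succeeds but leaves numpy entirely.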
Another alternative is to be explicit and write a helper function:

def unpack_if_scalar(x):
    if x.size == 1:
        return x.item()
    else:
        return x

This also accounts for the case where a 1-element array is returned. Kurtosis exhibits the same problem, btw: scipy.stats.kurtosis(N.array([1,2,3]),fisher=False) Regards Stéfan From calhoun at amath.washington.edu Thu Oct 11 03:06:28 2007 From: calhoun at amath.washington.edu (Donna Calhoun) Date: Thu, 11 Oct 2007 07:06:28 +0000 (UTC) Subject: [SciPy-user] BLAS and srotgm References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> <470CEB7D.1080402@gmail.com> <470D1A4E.5010104@gmail.com> Message-ID: Robert Kern gmail.com> writes: > > Donna Calhoun wrote: > > > >> Using site.cfg is the appropriate way to add the usual -L and -l flags. > >> > > > > Thank you for your reply. > > > > How do I set these flags in a site.cfg file? > > numpy and scipy come with a fairly complete example in site.cfg.example. > I built Scipy from source - scipy-0.5.2.1.tar.gz. The distribution didn't come with a site.cfg.example file (or any mention of site.cfg in any of the .txt files - e.g. INSTALL.txt). Also, the general instructions under the Wiki don't make any mention of this file, and in fact, only under "INTEL and MKL" is there a brief mention of how one can compile using MKL by adding an mkl header to the site.cfg file. Am I missing something? Thanks, Donna From robert.kern at gmail.com Thu Oct 11 03:15:38 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Oct 2007 02:15:38 -0500 Subject: [SciPy-user] BLAS and srotgm In-Reply-To: References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> <470CEB7D.1080402@gmail.com> <470D1A4E.5010104@gmail.com> Message-ID: <470DCD9A.1030400@gmail.com> Donna Calhoun wrote: > Robert Kern gmail.com> writes: > >> Donna Calhoun wrote: >>>> Using site.cfg is the appropriate way to add the usual -L and -l flags. >>>> >>> Thank you for your reply.
>>> >>> How do I set these flags in a site.cfg file? >> numpy and scipy come with a fairly complete example in site.cfg.example. >> > > I built Scipy from source - scipy-0.5.2.1.tar.gz. The distribtion didn't come > with a site.cfg.example file (or any mention of site.cfg in any of the .txt > files - e.g. INSTALL.txt). Sorry, it was added a bit after that. http://projects.scipy.org/scipy/numpy/browser/trunk/site.cfg.example -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Thu Oct 11 03:07:30 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 11 Oct 2007 16:07:30 +0900 Subject: [SciPy-user] BLAS and srotgm In-Reply-To: References: <46E6D84A.7020800@gmail.com> <46EAB01B.1000904@gmail.com> <470CEB7D.1080402@gmail.com> <470D1A4E.5010104@gmail.com> Message-ID: <470DCBB2.6090701@ar.media.kyoto-u.ac.jp> Donna Calhoun wrote: > R > > I built Scipy from source - scipy-0.5.2.1.tar.gz. The distribtion didn't come > with a site.cfg.example file (or any mention of site.cfg in any of the .txt > files - e.g. INSTALL.txt). Also, the general instructions under the Wiki don't > make any mention of this file, and in fact, only under "INTEL and MKL" is there > a brief mention of how one can compile using MKL by adding an mkl header to the > site.cfg file. > > Am I missing something? > Yes, it is in numpy sources: http://projects.scipy.org/scipy/numpy/browser/trunk/site.cfg.example David From jaonary at gmail.com Thu Oct 11 03:57:45 2007 From: jaonary at gmail.com (Jaonary Rabarisoa) Date: Thu, 11 Oct 2007 09:57:45 +0200 Subject: [SciPy-user] swig and numpy.i Message-ID: Hi all, I'm trying to write a c++ code that I need to call in python and is able to use numpy and scipy. 
I've already read the documentation about the numpy.i interface file and succeeded in using it correctly when I want to pass an array as an argument of my function. In other words, I've tried something like this: in the file "sum.h"

double sum(double* v, int n); // a function that computes the sum of the elements of v

in the swig interface file "sum.i"

%module mysum
%{
#define SWIG_FILE_WITH_INIT
#include "sum.h"
%}
%include "numpy.i"
%init %{
import_array();
%}
%apply (double* IN_ARRAY1, int DIM1) {(double* v, int n)}
%include "sum.h"

This thing works correctly and I'm happy. But now I'd like to do something a little more complicated. I'd like to use a function that takes not only an array but also another argument, such as a scalar. For example, I need to wrap the following C++ code:

void multiply(double* v, int n, double a, double* res); // this code will multiply all elements of v by a and return a new vector that contains the result, res[i] = v[i]*a.

And here comes the problem, I can't figure out how to use the numpy.i interface file to do this. Any tips and help will be appreciated. Best regards, Jaonary -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Thu Oct 11 11:05:25 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 11 Oct 2007 10:05:25 -0500 Subject: [SciPy-user] changing the default data type of zeros and ones Message-ID: I have successfully coerced my mechatronics students into using Scipy/Numpy this semester for signal processing and lti type stuff. I think it is going fairly well. One common tripping point has been using the zeros or ones functions to create vectors that will later be loaded with data. For mechanical engineering students coming from a Matlab background, the idea that everything isn't a floating point number seems to be a bit strange. I understand the computer science arguments for zeros and ones returning integers by default, but is this something we are willing to consider changing?
Does the empty function meet this need? What does it mean to say that the values are uninitialized? Could they really be anything? Could this cause problems if a value wasn't loaded into each slot in the array? When I run empty on my computer, I get a fair number of things that are really, really small: 4.24399162e-312, 3.06953241e-267, 3.06982818e-267, 3.06956199e-267, -9.94445468e-041, 2.25029465e-269, -1.31049432e-039, 4.94065646e-324, Any thoughts? Thanks, Ryan From lou_boog2000 at yahoo.com Thu Oct 11 11:23:39 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 11 Oct 2007 08:23:39 -0700 (PDT) Subject: [SciPy-user] changing the default data type of zeros and ones In-Reply-To: Message-ID: <82285.18758.qm@web34411.mail.mud.yahoo.com> I have numpy version 1.0.3.1. zeros, ones, and empty *all* return float arrays by default. --- Ryan Krauss wrote: > I have successfully coerced my mechatronics students > into using > Scipy/Numpy this semester for signal processing and > lti type stuff. I > think it is going fairly well. One common tripping > point has been > using the zeros or ones functions to create vectors > that will later be > loaded with data. For mechanical engineering > students coming from a > Matlab background, the idea that everything isn't a > floating point > number seems to be a bit strange. I understand the > computer science > arguments for zeros and ones returning integers by > default, but is > this something we are willing to consider changing? > > Does the empty function meet this need? What does > it mean to say that > the values are uninitialized? Could they really be > anything? Could > this cause problems if a value wasn't loaded into > each slot in the > array? > > When I run empty on my computer, I get a fair number > of things that > are really, really small: > 4.24399162e-312, 3.06953241e-267, > 3.06982818e-267, > 3.06956199e-267, -9.94445468e-041, > 2.25029465e-269, > -1.31049432e-039, 4.94065646e-324, > > Any thoughts? 
> > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lou Pecora, my views are my own. ____________________________________________________________________________________ Pinpoint customers who are looking for what you sell. http://searchmarketing.yahoo.com/ From aisaac at american.edu Thu Oct 11 11:33:28 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 11 Oct 2007 11:33:28 -0400 Subject: [SciPy-user] changing the default data type of zeros and ones In-Reply-To: References: Message-ID: On Thu, 11 Oct 2007, Ryan Krauss apparently wrote: > I understand the computer science arguments for zeros and > ones returning integers by default, but is this something > we are willing to consider changing? It was changed some time ago. Cheers, Alan Isaac >>> import numpy >>> numpy.__version__ '1.0.3.1' >>> numpy.zeros(10) array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) >>> numpy.zeros(10).dtype dtype('float64') >>> From aisaac at american.edu Thu Oct 11 11:34:58 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 11 Oct 2007 11:34:58 -0400 Subject: [SciPy-user] changing the default data type of zeros and ones In-Reply-To: References: Message-ID: On Thu, 11 Oct 2007, Ryan Krauss apparently wrote: > Does the empty function meet this need? What does it mean > to say that the values are uninitialized? The contents are whatever was already in the allocated memory. > Could they really be anything? Could this cause problems > if a value wasn't loaded into each slot in the array? Yes. Cheers, Alan Isaac From ryanlists at gmail.com Thu Oct 11 11:34:54 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 11 Oct 2007 10:34:54 -0500 Subject: [SciPy-user] changing the default data type of zeros and ones In-Reply-To: References: Message-ID: Sorry, this is a pylab/numpy incompatibility problem. 
In order to not spend a lot of time talking about namespaces and such, I teach them to put

from scipy import *
from pylab import *

at the beginning of every script and from there it will kind of feel like Matlab. It is pylab.zeros and pylab.ones that are the problem. How do I best explain this to my students? Ryan On 10/11/07, Alan G Isaac wrote: > On Thu, 11 Oct 2007, Ryan Krauss apparently wrote: > > I understand the computer science arguments for zeros and > > ones returning integers by default, but is this something > > we are willing to consider changing? > > It was changed some time ago. > > Cheers, > Alan Isaac > > >>> import numpy > >>> numpy.__version__ > '1.0.3.1' > >>> numpy.zeros(10) > array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > >>> numpy.zeros(10).dtype > dtype('float64') > >>> > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Thu Oct 11 11:37:40 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 11 Oct 2007 17:37:40 +0200 Subject: [SciPy-user] changing the default data type of zeros and ones In-Reply-To: References: Message-ID: <20071011153740.GC19504@clipper.ens.fr> On Thu, Oct 11, 2007 at 10:34:54AM -0500, Ryan Krauss wrote: > Sorry, this is a pylab/numpy incompatibility problem. In order to not > spend a lot of time talking about namespaces and such, I teach them to > put > from scipy import * > from pylab import * > at the beginning of every script and from there it will kind of feel > like Matlab. It is pylab.zeros and pylab.ones that are the problem. > How do I best explain this to my students? Reverse the order of the scipy and pylab import. With all due respect to the MPL guys who have done a fantastic job of making things simple, I'd much rather have scipy functions overriding pylab's than the opposite.
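The mechanism behind this advice is ordinary Python name shadowing: with star imports, whichever module is imported last rebinds any shared names. A self-contained sketch, with plain dicts standing in for the two packages so nothing here depends on pylab or scipy being installed:

```python
# Two stand-in export tables playing the role of pylab and scipy;
# the string values are hypothetical labels, not real objects.
pylab_exports = {"zeros": "pylab version"}
scipy_exports = {"zeros": "scipy version"}

script_namespace = {}
# Ryan's ordering: `from scipy import *` then `from pylab import *`
script_namespace.update(scipy_exports)
script_namespace.update(pylab_exports)
print(script_namespace["zeros"])   # pylab version -- the last import wins

# Reversed ordering, as suggested: pylab first, scipy second
script_namespace.clear()
script_namespace.update(pylab_exports)
script_namespace.update(scipy_exports)
print(script_namespace["zeros"])   # scipy version
```

With `from pylab import *` placed first and `from scipy import *` second, the students end up with scipy's names for anything the two packages both export.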
Gaël From ryanlists at gmail.com Thu Oct 11 11:39:32 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 11 Oct 2007 10:39:32 -0500 Subject: [SciPy-user] changing the default data type of zeros and ones In-Reply-To: <20071011153740.GC19504@clipper.ens.fr> References: <20071011153740.GC19504@clipper.ens.fr> Message-ID: I guess it can be that simple. I will start doing that. Thanks, Ryan On 10/11/07, Gael Varoquaux wrote: > On Thu, Oct 11, 2007 at 10:34:54AM -0500, Ryan Krauss wrote: > > Sorry, this is a pylab/numpy incompatibility problem. In order to not > > spend a lot of time talking about namespaces and such, I teach them to > > put > > > from scipy import * > > from pylab import * > > > at the beginning of every script and from there it will kind of feel > > like Matlab. It is pylab.zeros and pylab.ones that are the problem. > > > How do I best explain this to my students? > > Reverse the order of the scipy and pylab import. With all due respect to > the MPL guys who have done a fantastic job of making things simple, I'd > much rather have scipy functions overriding pylab's than the opposite.
> > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From marco.turchi at gmail.com Thu Oct 11 12:17:31 2007 From: marco.turchi at gmail.com (marco turchi) Date: Thu, 11 Oct 2007 17:17:31 +0100 Subject: [SciPy-user] problem installing sciPy In-Reply-To: <4706FB50.8090803@ar.media.kyoto-u.ac.jp> References: <79a042480710050443h2848c8b4q9dc7a899f5104a10@mail.gmail.com> <4706231F.1010001@ar.media.kyoto-u.ac.jp> <79a042480710050503q63e6aa6ax7dcd21884dab4e28@mail.gmail.com> <470627FC.50200@ar.media.kyoto-u.ac.jp> <79a042480710050602m5d2c0fc8gf06dc622a8aac442@mail.gmail.com> <79a042480710050643v67894f6bsaffe1af9fe826b76@mail.gmail.com> <79a042480710050748h1ead5eabk5329c1cba4b27f71@mail.gmail.com> <764e38540710051446n3966db1avc5e7abec58fca590@mail.gmail.com> <79a042480710051507j40a7968dr4f6a8ec5351cab23@mail.gmail.com> <4706FB50.8090803@ar.media.kyoto-u.ac.jp> Message-ID: <79a042480710110917r10daf427m8eb1badd2887f7ba@mail.gmail.com> Dear experts, I have sent two files attached to a previous email which contain the errors that I continue to get from the SciPy installation, but I guess they have been blocked by the moderator. Is there another way to send you the error logs? The two files are quite long so I cannot paste the content inside an email. Thanks Marco On 10/6/07, David Cournapeau wrote: > marco turchi wrote: > > Dear Chris, > > I have just said to David that I have tried to install numpy and > > scipy using python 2.3, but on that machine there is not python-dev, > > and the administrator cannot install it, he has some problems with > > licences. > I really cannot see any license related problems with python-dev which > would not be present in python. But I digress. > > I guess that I cannot use rpm, because I'm not an administrator.
That's > > the reason because I have installed in /usr/local another version of > > Python and then I have followed the normal instruction for numpy and > > scipy. > > > > The installation arrives to the end, but when I open Python, new or > > old version, I can import Numpy, but I cannot import scipy, because > > the module is not present. > > > > I can try another time to do everything from the beginning, but i do > > not know how useful it can be. > As Chris said: if you can get your administrator to install python-dev, > blas and lapack, this is the easiest. If you can't, then garnumpy can > help you. The principle of garnumpy is : > - to fetch the sources of all necessary softwares/libraries from > internet > - build them with consistent compiler options > - install it in one location (for example, $HOME/garnumpyinstall, > the default). > > I think this should be easier than compiling everything by yourself, > specially if you are not familiar with building complicated packages. > I've put a new version online, because I realized that I did not update > the scripts for the new numpy/scipy versions. So remove the one you've > download, and use this URL instead: > > > http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.4.tbz2 > > Most of the options should be changed in gar.conf.mk. For example, > changing python -> change the line PYTHON to the full path of your > python (e.g. /usr/local/bin/python in your case). In your case, this is > the only required change. > > To build and install numpy/scipy with it, you do: > - make clean -> do this if you change anything in gar.conf.mk. Do it > once you changed the python location. > - make garchive -> this will download everything. > - cd platform/scipy && make install -> this will build scipy and all > the necessary softwares. > > If this fails, please paste all the output, because otherwise, it is > difficult to see the exact problem. 
But if you have a C and fortran > compiler, it should not fail, > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzo.isella at gmail.com Thu Oct 11 17:38:24 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 11 Oct 2007 23:38:24 +0200 Subject: [SciPy-user] Generating and Sampling Random Numbers Message-ID: <470E97D0.9090500@gmail.com> Dear All, I will probably soon have to deal with the following problem. Think about a 2D box (a simple rectangle) containing a particle (actually many particles) undergoing Brownian motion (or some stochastic process in general) in the box. Simulating its dynamics requires an integration scheme which in turn relies on sampling a random normally-distributed variable at every time step. Here is my problem: in general I should not call e.g. stats.norm.rvs at every time step, since it amounts to continually re-initializing the random number generator, and any generator is guaranteed to have good "randomness" only within a sequence. Instead frequent initializations, i.e. many independent callings rather than generating a single long sequence, may introduce some correlations in the sampled numbers. However, it is rather cumbersome to do something like: my_random=stats.norm.rvs(size=100000, loc=0.) and then read a number from the my_random array at every time step while also keeping track of the position of the last read number on the array. Is there any better way of doing this? 
Many thanks Lorenzo From aisaac at american.edu Thu Oct 11 17:57:11 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 11 Oct 2007 17:57:11 -0400 Subject: [SciPy-user] Generating and Sampling Random Numbers In-Reply-To: <470E97D0.9090500@gmail.com> References: <470E97D0.9090500@gmail.com> Message-ID: On Thu, 11 Oct 2007, Lorenzo Isella apparently wrote: > Here is my problem: in general I should not call e.g. > stats.norm.rvs at every time step, since it amounts to > continually re-initializing the random number generator, > and any generator is guaranteed to have good "randomness" > only within a sequence. Instead frequent initializations, > i.e. many independent callings rather than generating > a single long sequence, may introduce some correlations in > the sampled numbers. However, it is rather cumbersome to > do something like: > my_random=stats.norm.rvs(size=100000, loc=0.) > and then read a number from the my_random array at every time step while > also keeping track of the position of the last read number > on the array. Is there any better way of doing this? I am not sure I grok your concern, but perhaps what you want is to look at help(numpy.random.seed) You can make your own RNG instance and seed it. If you did do it the other way you suggested (but there is no need as far as I can tell), just use a 2d shape and iterate over it. (Assuming you are not losing particles.) Cheers, Alan Isaac From peridot.faceted at gmail.com Thu Oct 11 18:58:11 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 11 Oct 2007 18:58:11 -0400 Subject: [SciPy-user] Generating and Sampling Random Numbers In-Reply-To: <470E97D0.9090500@gmail.com> References: <470E97D0.9090500@gmail.com> Message-ID: On 11/10/2007, Lorenzo Isella wrote: > Here is my problem: in general I should not call e.g. 
stats.norm.rvs at > every time step, since it amounts to continually re-initializing the > random number generator, and any generator is guaranteed to have good > "randomness" only within a sequence. Instead frequent initializations, > i.e. many independent callings rather than generating a single long > sequence, may introduce some correlations in the sampled numbers. I think you're confused here. Unless I'm grievously mistaken, scipy's random number generator is initialized once the first time it is used, then all subsequent numbers are drawn from it. The normal use of scipy's random number generators is to call them and ask for however many random numbers you need; when you need more, call them again and more numbers will be generated from the sequence. The array-generation routines are a simple convenience. Anne From haase at msg.ucsf.edu Fri Oct 12 03:23:52 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 12 Oct 2007 09:23:52 +0200 Subject: [SciPy-user] Generating and Sampling Random Numbers In-Reply-To: References: <470E97D0.9090500@gmail.com> Message-ID: Another comment: Performance-wise it will be much faster to generate a larger array of rands in one go, rather than calling the python function at every single time step. Furthermore you might also want to perform as many time-steps as possible in a single python call -- look at the cumsum function. Compare the times - I guess I'm talking at least about a factor of 10 in speed. Cheers, Sebastian Haase On 10/12/07, Anne Archibald wrote: > On 11/10/2007, Lorenzo Isella wrote: > > > Here is my problem: in general I should not call e.g. stats.norm.rvs at > > every time step, since it amounts to continually re-initializing the > > random number generator, and any generator is guaranteed to have good > > "randomness" only within a sequence. Instead frequent initializations, > > i.e. many independent callings rather than generating a single long > > sequence, may introduce some correlations in the sampled numbers.
> > I think you're confused here. Unless I'm grievously mistaken, scipy's > random number generator is initialized once the first time it is used, > then all subsequent numbers are drawn from it. The normal use of > scipy's random number generators is to call them and ask for however > many random numbers you need; when you need more, call them again and > more numbers will be generated from the sequence. The array-generation > routines are a simple convenience. > > Anne From jaropis at gazeta.pl Tue Oct 9 16:24:36 2007 From: jaropis at gazeta.pl (jaropis) Date: Tue, 09 Oct 2007 22:24:36 +0200 Subject: [SciPy-user] Scientific Python publications References: <470BA2AE.8070207@bristol.ac.uk> Message-ID: > the molecular science group at the university of Bristol on a recent > scientific paper and thought that I could use the talk as an excuse to > advertise the wonders of python. I use Python a lot - for example this paper: http://www.iop.org/EJ/abstract/0967-3334/28/3/005 has an animation done in SciPy+pylab, and all the calculations are in Python. I give Matlab code for the same calculations in an appendix, but they were originally done in SciPy and then translated to Matlab. Jarek P From jaonary at gmail.com Fri Oct 12 10:20:59 2007 From: jaonary at gmail.com (Jaonary Rabarisoa) Date: Fri, 12 Oct 2007 16:20:59 +0200 Subject: [SciPy-user] running scipy code simultaneously on several machines Message-ID: Hi all, I need to run one python function several times, and it is very time consuming. Suppose, to keep things simple, that this function takes only one argument and returns one value, so its prototype is as follows: def my_func(A) : .... return res I need to call this function for different values of A.
A naive approach to do this is the following: for A in my_array_of_A : res = my_func(A) all_res.append(res) My problem is that one call of my_func takes several hours. Then, I wonder if it's possible to distribute this "for" loop between several machines (or processors) in order to speed up the process. I've heard something about the cow module in scipy and the pympi package, but I just do not know how to tackle this problem correctly with one of these modules. So, if one of you could give some hints on how to do this? Best regards, Jaonary -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincefn at users.sourceforge.net Fri Oct 12 10:21:24 2007 From: vincefn at users.sourceforge.net (Favre-Nicolin Vincent) Date: Fri, 12 Oct 2007 16:21:24 +0200 Subject: [SciPy-user] scipy, numpy in Scientific Linux ? Message-ID: <200710121621.24309.vincefn@users.sourceforge.net> Hi, Some of our servers are now installed with Scientific Linux (version 5). I'd like to use scipy/numpy (and matplotlib) on this distribution. I have found some references on the SL website (https://www.scientificlinux.org/) about numpy packages, but only for older versions (SL 4.4 and 4.5), and apparently only with numpy (not scipy). The SL site really lacks an engine to list all packages. Is there anybody using Scientific Linux 5 who would know what package & repository to install for scipy/numpy/matplotlib ? I can compile them of course, but it would seem reasonable that such packages already exist for this scientific distribution ? Vincent From schut at sarvision.nl Fri Oct 12 10:43:58 2007 From: schut at sarvision.nl (Vincent Schut) Date: Fri, 12 Oct 2007 16:43:58 +0200 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: Message-ID: Jaonary Rabarisoa wrote: > Hi all, > > I need to perform several times one python function that is very time > consuming.
Suppose > to be simple that this function takes only one argument and return one > value, so its prototype > is as follow : > > def my_func(A) : > .... > return res > > I need too call this function for different values of A. A naive > approach to do this is the following > > for A in my_array_of_A : > res = my_func(A) > all_res.append(res) > > My problem is that one call of my_func takes several hours. Then, I > wonder if it's possible to distribute > this "for" loop between several machines (or processors) in order to > speed up the process. > > I've heard something about the cow module in scipy and pympi package > but I just do not know how > to tackle this probelm correctly with one of these modules. So, if one > of you could give some hints in how to do this ? As an alternative, you could check out parallelpython: www.parallelpython.org. Works like a charm here, both for smp processing on one machine, and for cluster processing. Cheers, Vincent. > > Best regards, > > Jaonary > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From haase at msg.ucsf.edu Fri Oct 12 10:53:31 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 12 Oct 2007 16:53:31 +0200 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: Message-ID: > As an alternative, you could check out parallelpython: > www.parallelpython.org. Works like a charm here, both for smp processing > on one machine, and for cluster processing. > > Cheers, > Vincent. Hi, Can some SciPy insider compare the above-mentioned parallelpython.org approach with the venerable "cow" module that lived in SciPy-2 (or 1) and is now lost somewhere in the sandbox ... ?
Thanks, Sebastian Haase From millman at berkeley.edu Fri Oct 12 11:33:16 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 12 Oct 2007 08:33:16 -0700 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: Message-ID: You may want to look into IPython1: http://ipython.scipy.org/moin/Parallel_Computing -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From ellisonbg.net at gmail.com Fri Oct 12 11:42:30 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 12 Oct 2007 09:42:30 -0600 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: Message-ID: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> Mentioned below by Jarrod, IPython1 is probably the best solution for this. Here is the simplest parallel implementation in IPython1: In [1]: import ipython1.kernel.api as kernel In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) In [3]: rc.getIDs() Out[3]: [0, 1, 2, 3] In [4]: def my_func(A): return 'result' ...: In [5]: rc.mapAll(my_func, range(16)) Out[5]: ['result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result', 'result'] This partitions the input array (range(16)) amongst 4 processors, calls my_func on each element and then gathers the result back. This is the simplest approach, but IPython1 supports many other styles and approaches, including a dynamically load balanced task farming system. I don't know if you need it, but IPython1 also has full integration with mpi. Please let us know if you have questions. Cheers, Brian On 10/12/07, Jaonary Rabarisoa wrote: > Hi all, > > I need to perform several times one python function that is very time > consuming. 
Suppose > to be simple that this function takes only one argument and return one > value, so its prototype > is as follow : > > def my_func(A) : > .... > return res > > I need too call this function for different values of A. A naive approach to > do this is the following > > for A in my_array_of_A : > res = my_func(A) > all_res.append(res) > > My problem is that one call of my_func takes several hours. Then, I wonder > if it's possible to distribute > this "for" loop between several machines (or processors) in order to speed > up the process. > > I've heard something about the cow module in scipy and pympi package but I > just do not know how > to tackle this probelm correctly with one of these modules. So, if one of > you could give some hints in how to do this ? > > Best regards, > > Jaonary > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Fri Oct 12 13:14:04 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Oct 2007 12:14:04 -0500 Subject: [SciPy-user] Generating and Sampling Random Numbers In-Reply-To: References: <470E97D0.9090500@gmail.com> Message-ID: <470FAB5C.4010503@gmail.com> Sebastian Haase wrote: > Another comment: > Performance wise it will be much faster to generate a larger array of > rands in one go, rather than calling the python function at every > single time step. > Furthermore you might also want to perform as many time-steps in a > single python-call -- look at the cumsum function. > > Compare the times - I guess I'm talking at least about a factor 10 in speed. Only a factor of 2. It's probably going to be swamped by anything else one does in the loop, like testing for intersection with the boundaries of the box. I think the convenience and clarity of calling the function is probably going to outweigh the performance advantage of building an array. 
In [14]: from numpy import * In [15]: %timeit for i in xrange(10000): x=random.normal() 100 loops, best of 3: 4.6 ms per loop # The next two need to be added together since %timeit doesn't allow ; and for # loops together. In [16]: %timeit x=random.normal(size=10000) 1000 loops, best of 3: 1.24 ms per loop In [17]: %timeit for y in x: pass 1000 loops, best of 3: 1.64 ms per loop # This way of iterating is even worse. Unfortunately, it's probably the one that # would have to be used. In [28]: %timeit for i in xrange(10000): x[i] 100 loops, best of 3: 2.39 ms per loop -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Fri Oct 12 15:36:50 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 12 Oct 2007 15:36:50 -0400 Subject: [SciPy-user] scipy.misc.limits deprecated In-Reply-To: References: Message-ID: On Sun, 23 Sep 2007, Jarrod Millman apparently wrote: > As of r3362, scipy.misc.limits is officially deprecated: > http://projects.scipy.org/scipy/scipy/changeset/3362 > If you need to work with the machine limits, please use numpy.finfo instead: > http://scipy.org/scipy/numpy/browser/trunk/numpy/lib/getlimits.py > For example, > from numpy import finfo, single > single_epsilon = finfo(single).eps Query: is this an example of a module where we can expect the new naming conventions to be implemented? Cheers, Alan Isaac From jaonary at gmail.com Sat Oct 13 02:47:03 2007 From: jaonary at gmail.com (Jaonary Rabarisoa) Date: Sat, 13 Oct 2007 08:47:03 +0200 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> Message-ID: Thank you guys for these hints. I think I'll try IPython1 first. 
I'll let you know soon about my experiment on this. Cheers, Jaonary On 10/12/07, Brian Granger wrote: > > Mentioned below by Jarrod, IPython1 is probably the best solution for > this. Here is the simplest parallel implementation in IPython1: > > In [1]: import ipython1.kernel.api as kernel > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > In [3]: rc.getIDs() > Out[3]: [0, 1, 2, 3] > > In [4]: def my_func(A): return 'result' > ...: > > In [5]: rc.mapAll(my_func, range(16)) > Out[5]: > ['result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result', > 'result'] > > This partitions the input array (range(16)) amongst 4 processors, > calls my_func on each element and then gathers the result back. > > This is the simplest approach, but IPython1 supports many other styles > and approaches, including a dynamically load balanced task farming > system. I don't know if you need it, but IPython1 also has full > integration with mpi. > > Please let us know if you have questions. > > Cheers, > > Brian > > On 10/12/07, Jaonary Rabarisoa wrote: > > Hi all, > > > > I need to perform several times one python function that is very time > > consuming. Suppose > > to be simple that this function takes only one argument and return one > > value, so its prototype > > is as follow : > > > > def my_func(A) : > > .... > > return res > > > > I need too call this function for different values of A. A naive > approach to > > do this is the following > > > > for A in my_array_of_A : > > res = my_func(A) > > all_res.append(res) > > > > My problem is that one call of my_func takes several hours. Then, I > wonder > > if it's possible to distribute > > this "for" loop between several machines (or processors) in order to > speed > > up the process. 
> > > > I've heard something about the cow module in scipy and pympi package > but I > > just do not know how > > to tackle this probelm correctly with one of these modules. So, if one > of > > you could give some hints in how to do this ? > > > > Best regards, > > > > Jaonary -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sat Oct 13 07:55:19 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 13 Oct 2007 13:55:19 +0200 Subject: [SciPy-user] swig and numpy.i In-Reply-To: References: Message-ID: Hi, In fact, you would want res to be an array with the same dimensions as v ? In that case, you could define a new function with an additional size parameter, check that the two sizes are equal and then call your function. Matthieu 2007/10/11, Jaonary Rabarisoa : > > Hi all, > > I'm trying to write C++ code that I need to call from python and that is able > to use numpy and scipy. > I've already read the documentation about the numpy.i interface file and > succeeded in using it correctly when I want to pass an array as an argument of my > function. In other words, I've tried something like this : > > in the file "sum.h" > double sum(double* v,int n); // a function that computes the sum of the > elements of v > > in the swig interface file "sum.i" > > %module mysum > %{ > #define SWIG_FILE_WITH_INIT > #include "sum.h" > %} > > %include "numpy.i" > > %init %{ > import_array(); > %} > > %apply(double* IN_ARRAY1,int DIM1){(double* v,int n)} > %include "sum.h" > > This thing works correctly and I'm happy. But now I'd like to do > something a little bit more complicated.
I'd like to use a function that takes > not only an array but another argument such as a scalar. For example, I need > to wrap the following C++ code : > > void multiply(double* v,int n,double a,double* res); // this code > will multiply all elements of v by a and return a new vector that contains > the result, res[i] = v[i]*a. And here comes the problem: I can't figure out > how to use the numpy.i interface file to do this. > > Any tips and help will be appreciated. > > Best regards, > > Jaonary -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Sat Oct 13 11:28:57 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 13 Oct 2007 10:28:57 -0500 Subject: [SciPy-user] trying to update after recent windows update - exe's won't run Message-ID: I was trying to update to 0.6 after a recent Windows update, but the exe's I downloaded won't run. A DOS window pops up and then immediately goes away with no info, and nothing happens. If I download the same exe's from my Linux box and copy them to my flash drive, they run fine (after I remove an extra .bin that got appended to the filename). What is this Windows craziness? Is there a way around it? Thanks, Ryan From arserlom at gmail.com Sat Oct 13 15:53:31 2007 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Sat, 13 Oct 2007 21:53:31 +0200 Subject: [SciPy-user] interp1d out of bounds behaviour Message-ID: Hello, I'm using interpolate.interp1d. The bounds_error flag gives the option to either raise an error or use a fill value when an x is out of bounds. Instead of this I'd like to get y(max(x)) if x_new > max(x) and y(min(x)) if x_new < min(x). Should I do the clipping myself in my program or is there an option that I'm missing? Armando.
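The clipping Armando describes (clamp x_new into [min(x), max(x)] before evaluating, so out-of-bounds points take the nearest endpoint's y value) can be sketched with plain numpy; the sample data below is made up for illustration, and numpy.interp, which already holds endpoint values outside the data range, is used as the evaluator here:

```python
import numpy as np

# Made-up sample data for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 10.0, 20.0, 30.0])

# Query points; the first and last are out of bounds.
x_new = np.array([-1.0, 0.5, 3.5])

# Clamp the query points into [x.min(), x.max()], so anything out of
# bounds evaluates to y at the nearest endpoint. The clipped points can
# then be fed to any interpolator (e.g. interp1d) without tripping
# bounds_error.
x_clipped = np.clip(x_new, x.min(), x.max())

result = np.interp(x_clipped, x, y)
print(result)  # endpoint value, interpolated value, endpoint value
```

The same np.clip call works unchanged on a scalar x_new as well.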
-------------- next part -------------- An HTML attachment was scrubbed... URL: From gnata at obs.univ-lyon1.fr Sat Oct 13 17:10:42 2007 From: gnata at obs.univ-lyon1.fr (Gnata Xavier) Date: Sat, 13 Oct 2007 23:10:42 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <47094D3B.3080509@obs.univ-lyon1.fr> References: <46FD729D.3060005@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> <46FE8724.80500@obs.univ-lyon1.fr> <470781B0.7060608@obs.univ-lyon1.fr> <1e2af89e0710061134jeaa70dfg8a5fd0e32364463e@mail.gmail.com> <20071007011015.GC9063@mentat.za.net> <47094D3B.3080509@obs.univ-lyon1.fr> Message-ID: <47113452.2020005@obs.univ-lyon1.fr> Xavier Gnata wrote: > Matthieu Brucher wrote: > >> If anyone knows a later version of gcc that is known to build on >> Cygwin and has been used to successfully compile >> atlas/lapack/numpy/scipy then I'm happy to give it a try, but I'm >> reluctant to start investigating on my own, at the risk of >> spending a lot of time on a gcc version that just won't work. >> >> >> There is a French tutorial that can help here : >> http://philippe-dunski.developpez.com/compiler_gcc/ >> Didn't try it, but it could help for this matter. >> >> Matthieu >> > I cannot compile it with gcc 4.2 because I'm using the Debian packages of > blas/lapack/atlas. These packages are compiled with g77 (gcc 3.4). I > have no clue why. Of course, I can remove these packages and compile > blas/lapack/atlas myself (it is easy, I know the exact way to do that) but I > simply have no time to do that for at least one or two weeks :( > > Xavier > > > I had a quick look at the svn history but I have no time to bisect this week. I would first like to know whether other users of SciPy, i386 and g77 are able to reproduce this bug.
Xavier From kwmsmith at gmail.com Sun Oct 14 00:45:51 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Sat, 13 Oct 2007 23:45:51 -0500 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> Message-ID: On 10/12/07, Brian Granger wrote: > Mentioned below by Jarrod, IPython1 is probably the best solution for > this. Here is the simplest parallel implementation in IPython1: > > In [1]: import ipython1.kernel.api as kernel > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > In [3]: rc.getIDs() > Out[3]: [0, 1, 2, 3] Hi all: I installed everything to ipython1's liking, and here is what I get running the above code: ksmith at laptop:~/Devel/python/ipython1 [183]$ ipython Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) Type "copyright", "credits" or "license" for more information. IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: import ipython1.kernel.api as kernel In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) In [3]: rc.getIDs() [snip traceback] ConnectionError: Error connecting to the server, please recreate the client. The original internal error was: error(61, 'Connection refused') I'm sure it's something really simple, but I don't know what. Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from svn. Thanks for any help you can give -- googling turned up nothing. 
Kurt From robert.kern at gmail.com Sun Oct 14 00:54:06 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 13 Oct 2007 23:54:06 -0500 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> Message-ID: <4711A0EE.7020901@gmail.com> Kurt Smith wrote: > On 10/12/07, Brian Granger wrote: >> Mentioned below by Jarrod, IPython1 is probably the best solution for >> this. Here is the simplest parallel implementation in IPython1: >> >> In [1]: import ipython1.kernel.api as kernel >> >> In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) >> >> In [3]: rc.getIDs() >> Out[3]: [0, 1, 2, 3] > > Hi all: > > I installed everything to ipython1's liking, and here is what I get > running the above code: > > ksmith at laptop:~/Devel/python/ipython1 > [183]$ ipython > Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) > Type "copyright", "credits" or "license" for more information. > > IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. > ? -> Introduction and overview of IPython's features. > %quickref -> Quick reference. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > In [1]: import ipython1.kernel.api as kernel > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > In [3]: rc.getIDs() > [snip traceback] > ConnectionError: Error connecting to the server, please recreate the client. > The original internal error was: > error(61, 'Connection refused') > > I'm sure it's something really simple, but I don't know what. > Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from svn. > > Thanks for any help you can give -- googling turned up nothing. You have to start up the remote interpreters first. Use the ipcluster script that comes with ipython1. 
Here is the tutorial documentation: http://ipython.scipy.org/moin/Parallel_Computing_With_IPython1 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sun Oct 14 04:35:09 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 14 Oct 2007 17:35:09 +0900 Subject: [SciPy-user] trying to update after recent windows update - exe's won't run In-Reply-To: References: Message-ID: <4711D4BD.8070100@ar.media.kyoto-u.ac.jp> Ryan Krauss wrote: > I was trying to update to 0.6 after a recent Windows update, but the > exe's I downloaded won't run. A dos window pops up and then > immediately goes away with no info and nothing happens. If I download > the same exe's from my Linux box and copy them to my flash drive, they > run fine (after I remove an extra .bin that got appended to the > filename). What is this windows craziness? Is there a way around it? > Did you download the file with IE on windows ? It could just be that the file is corrupted (you could also try to launch the exe from the shell, so that you can see the message in the dos box). cheers, David From ellisonbg.net at gmail.com Sun Oct 14 12:33:04 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Sun, 14 Oct 2007 10:33:04 -0600 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> Message-ID: <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> Robert gives the link to more details about how to start everything. But if you are just running on a single machine, the easiest way is to just do: $ ipcluster -n 4 From the shell after installing ipython1. Let us know if you have further problems.
Brian On 10/13/07, Kurt Smith wrote: > On 10/12/07, Brian Granger wrote: > > Mentioned below by Jarrod, IPython1 is probably the best solution for > > this. Here is the simplest parallel implementation in IPython1: > > > > In [1]: import ipython1.kernel.api as kernel > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > In [3]: rc.getIDs() > > Out[3]: [0, 1, 2, 3] > > Hi all: > > I installed everything to ipython1's liking, and here is what I get > running the above code: > > ksmith at laptop:~/Devel/python/ipython1 > [183]$ ipython > Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) > Type "copyright", "credits" or "license" for more information. > > IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. > ? -> Introduction and overview of IPython's features. > %quickref -> Quick reference. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > In [1]: import ipython1.kernel.api as kernel > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > In [3]: rc.getIDs() > [snip traceback] > ConnectionError: Error connecting to the server, please recreate the client. > The original internal error was: > error(61, 'Connection refused') > > I'm sure it's something really simple, but I don't know what. > Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from svn. > > Thanks for any help you can give -- googling turned up nothing. 
> > Kurt > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From kwmsmith at gmail.com Sun Oct 14 16:39:15 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Sun, 14 Oct 2007 15:39:15 -0500 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> Message-ID: On 10/14/07, Brian Granger wrote: > Robert gives the link to more details about how to start everything. > But if you are just running on a single machine, the easiest way is to > just do: > > $ ipcluster -n 4 > > >From the shell after installing ipython1. Let us know if you have > further problems. > > Brian Thanks for the pointers! Very impressive piece of work -- it will help me very much with my batch jobs and I'll be watching it as it matures. Impressive scatter/gather interface, too -- very clean. There was a bit of a hangup when running ipcluster -- on my MacBook, if you put it in the background right away, the process is stopped immediately before the script has a chance to set up the controller and spawn the engines: $ ipcluster -n 4 & [1] 450 $ Starting controller: Controller PID: 452 [1]+ Stopped ipcluster -n 4 $ Since it is stopped and the engines aren't created, trying to use it fails. I have to run 'ipcluster -n ' in the foreground, let it set up the engines, suspend it (CTRL-Z) and put it in the background for things to work nicely. Is there any way around this? Not a big deal, just wondering if I'm doing anything wrong. Thanks again! Kurt > > On 10/13/07, Kurt Smith wrote: > > On 10/12/07, Brian Granger wrote: > > > Mentioned below by Jarrod, IPython1 is probably the best solution for > > > this. 
Here is the simplest parallel implementation in IPython1: > > > > > > In [1]: import ipython1.kernel.api as kernel > > > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > > > In [3]: rc.getIDs() > > > Out[3]: [0, 1, 2, 3] > > > > Hi all: > > > > I installed everything to ipython1's liking, and here is what I get > > running the above code: > > > > ksmith at laptop:~/Devel/python/ipython1 > > [183]$ ipython > > Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) > > Type "copyright", "credits" or "license" for more information. > > > > IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. > > ? -> Introduction and overview of IPython's features. > > %quickref -> Quick reference. > > help -> Python's own help system. > > object? -> Details about 'object'. ?object also works, ?? prints more. > > > > In [1]: import ipython1.kernel.api as kernel > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > In [3]: rc.getIDs() > > [snip traceback] > > ConnectionError: Error connecting to the server, please recreate the client. > > The original internal error was: > > error(61, 'Connection refused') > > > > I'm sure it's something really simple, but I don't know what. > > Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from svn. > > > > Thanks for any help you can give -- googling turned up nothing. 
> > > > Kurt > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fperez.net at gmail.com Sun Oct 14 16:54:38 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 14 Oct 2007 14:54:38 -0600 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> Message-ID: On 10/14/07, Kurt Smith wrote: > On 10/14/07, Brian Granger wrote: > > Robert gives the link to more details about how to start everything. > > But if you are just running on a single machine, the easiest way is to > > just do: > > > > $ ipcluster -n 4 > > > > >From the shell after installing ipython1. Let us know if you have > > further problems. > > > > Brian > > Thanks for the pointers! Very impressive piece of work -- it will > help me very much with my batch jobs and I'll be watching it as it > matures. Impressive scatter/gather interface, too -- very clean. > > There was a bit of a hangup when running ipcluster -- on my MacBook, > if you put it in the background right away, the process is stopped > immediately before the script has a chance to set up the controller > and spawn the engines: > > $ ipcluster -n 4 & > [1] 450 > $ Starting controller: Controller PID: 452 > [1]+ Stopped ipcluster -n 4 > $ > > Since it is stopped and the engines aren't created, trying to use it fails. > > I have to run 'ipcluster -n ' in the foreground, let it set up > the engines, suspend it (CTRL-Z) and put it in the background for > things to work nicely. Is there any way around this? Not a big deal, > just wondering if I'm doing anything wrong. 
I suspect the problem is that it's trying to print out to stdout, though on linux it can be backgrounded immediately (but I've similarly seen other things that complain because they can't print to the TTY; I don't know what determines whether the shell stops the process or not when it tries to print from the background). Since the ipcluster script is a convenient one to later stop via Ctrl-C for cleanup, and waiting to see that the engines are all up and running is also a good way to know that things are ready to start working, I typically just keep an open terminal for that one guy and leave it open without backgrounding it. But the whole process startup system is being reworked into something more modular, so we should probably add a 'quiet' mode for it that can be immediately backgrounded without any more fuss, since it's also a valid use case. Cheers, f ps - for further questions, I'd encourage you to ask us on the ipython list, so we don't clutter the scipy one with matters specific to ipython that others here may not necessarily be interested in. From zunzun at zunzun.com Sun Oct 14 18:01:56 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 14 Oct 2007 18:01:56 -0400 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications Message-ID: <20071014220156.GA26338@zunzun.com> The agg and tkagg backends use Tk, and integrating the resulting matplotlib threading model into a multi-threaded web application had me scratching my head for a while. This code does work: http://wiki.pylonshq.com/display/pylonscommunity/Adding+graphical+output If the more experienced - or the merely curious - would briefly review my newbie code, I would be most grateful.
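For reference, here is a minimal Tk-free sketch of the same render-to-PNG idea, using the GUI-less Agg backend that the replies in this thread converge on (the function name and data are made up for illustration; this is not the code from the wiki page above):

```python
import io

import matplotlib
matplotlib.use('Agg')  # pick the GUI-less backend BEFORE any pyplot/pylab import
import matplotlib.pyplot as plt


def render_png(xs, ys):
    """Render a simple line plot to PNG bytes without starting any GUI thread."""
    fig = plt.figure()          # a fresh Figure per request
    fig.gca().plot(xs, ys)
    buf = io.BytesIO()
    fig.savefig(buf, format='png')
    plt.close(fig)              # free the figure so repeated calls don't leak
    return buf.getvalue()
```

Because no Tk interpreter is ever created, a function like this can be called from the worker threads of a web application.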
James From robert.kern at gmail.com Sun Oct 14 18:09:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 14 Oct 2007 17:09:36 -0500 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications In-Reply-To: <20071014220156.GA26338@zunzun.com> References: <20071014220156.GA26338@zunzun.com> Message-ID: <471293A0.5050405@gmail.com> zunzun at zunzun.com wrote: > The agg and tkagg backend uses tk, and the integrating the resulting > matplotlib threading model into a multi-threaded webb application > had me scratching my head for a while. Umm, only the TkAgg backend uses Tk. Why not just use a backend that doesn't have a GUI? Specify backend : Agg in your .matplotlibrc or call matplotlib.use('Agg') before importing pylab. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zunzun at zunzun.com Sun Oct 14 18:14:17 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 14 Oct 2007 18:14:17 -0400 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications In-Reply-To: <471293A0.5050405@gmail.com> References: <20071014220156.GA26338@zunzun.com> <471293A0.5050405@gmail.com> Message-ID: <20071014221417.GA26532@zunzun.com> On Sun, Oct 14, 2007 at 05:09:36PM -0500, Robert Kern wrote: > > Umm, only the TkAgg backend uses Tk. Why not just use a backend that doesn't > have a GUI? Specify > > backend : Agg > > in your .matplotlibrc or call matplotlib.use('Agg') before importing pylab. Tried it, didn't work. Tk is still started and gave a threading error. It might not be used, but it does start in my tests. 
James From robert.kern at gmail.com Sun Oct 14 20:23:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 14 Oct 2007 19:23:19 -0500 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications In-Reply-To: <20071014221417.GA26532@zunzun.com> References: <20071014220156.GA26338@zunzun.com> <471293A0.5050405@gmail.com> <20071014221417.GA26532@zunzun.com> Message-ID: <4712B2F7.20907@gmail.com> zunzun at zunzun.com wrote: > On Sun, Oct 14, 2007 at 05:09:36PM -0500, Robert Kern wrote: >> Umm, only the TkAgg backend uses Tk. Why not just use a backend that doesn't >> have a GUI? Specify >> >> backend : Agg >> >> in your .matplotlibrc or call matplotlib.use('Agg') before importing pylab. > > Tried it, didn't work. Tk is still started and gave a threading > error. It might not be used, but it does start in my tests. Please show us the code that you used on the matplotlib list. This would be a bug. For example, I have 'WXAgg' set as the backend in my matplotlibrc, but if I call matplotlib.use('Agg') before loading pylab, wx never gets loaded: [~]$ ipython Activating auto-logging. Current session state plus future input saved. Filename : /Users/rkern/.ipython/ipython.log Mode : backup Output logging : False Raw input log : False Timestamping : False State : active In [1]: import sys In [2]: 'wx' in sys.modules Out[2]: False In [3]: import matplotlib In [4]: 'wx' in sys.modules Out[4]: False In [5]: import pylab In [6]: 'wx' in sys.modules Out[6]: True In [7]: Do you really want to exit ([y]/n)? ^[[A% [~]$ ipython Activating auto-logging. Current session state plus future input saved. 
Filename : /Users/rkern/.ipython/ipython.log Mode : backup Output logging : False Raw input log : False Timestamping : False State : active In [1]: import sys In [2]: 'wx' in sys.modules Out[2]: False In [3]: import matplotlib In [4]: 'wx' in sys.modules Out[4]: False In [5]: matplotlib.use('Agg') In [6]: 'wx' in sys.modules Out[6]: False In [7]: import pylab In [8]: 'wx' in sys.modules Out[8]: False -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zunzun at zunzun.com Sun Oct 14 20:27:32 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 14 Oct 2007 20:27:32 -0400 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications In-Reply-To: <4712B2F7.20907@gmail.com> References: <20071014220156.GA26338@zunzun.com> <471293A0.5050405@gmail.com> <20071014221417.GA26532@zunzun.com> <4712B2F7.20907@gmail.com> Message-ID: <20071015002732.GA28782@zunzun.com> On Sun, Oct 14, 2007 at 07:23:19PM -0500, Robert Kern wrote: > > Please show us the code that you used on the matplotlib list. timesRequested = 0 def getBufferForPNG(self): global timesRequested # use the global variable defined at the top of the module # make a buffer to hold our data buffer = StringIO.StringIO() # pylab uses Tkinter, so this must be the top-level thread calling Tk. 
# calling pylab.close() here forces pylab to create a new top-level thread pylab.close() # tell matplotlib to use the 'Anti-Grain Geometry' toolkit matplotlib.use('Agg') canvas = pylab.get_current_fig_manager().canvas # quick simple plot pylab.plot([1+timesRequested, 2+timesRequested, 3+timesRequested],[40+timesRequested, 50+timesRequested, 60+timesRequested]) timesRequested += 1 # this is only to give a different graph with each refresh canvas.draw() imageSize = canvas.get_width_height() imageRgb = canvas.tostring_rgb() pilImage = PIL.Image.fromstring("RGB", imageSize, imageRgb) pilImage.save(buffer, "PNG") # <-- we will be sending the browser a "PNG file" pylab.close() # close again to be sure of Tk threading return buffer From darren.dale at cornell.edu Sun Oct 14 20:41:38 2007 From: darren.dale at cornell.edu (Darren Dale) Date: Sun, 14 Oct 2007 20:41:38 -0400 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications In-Reply-To: <20071015002732.GA28782@zunzun.com> References: <20071014220156.GA26338@zunzun.com> <4712B2F7.20907@gmail.com> <20071015002732.GA28782@zunzun.com> Message-ID: <200710142041.38593.darren.dale@cornell.edu> On Sunday 14 October 2007 08:27:32 pm zunzun at zunzun.com wrote: > On Sun, Oct 14, 2007 at 07:23:19PM -0500, Robert Kern wrote: > > Please show us the code that you used on the matplotlib list. [...] > # pylab uses Tkinter, so this must be the top-level thread calling > Tk. # calling pylab.close() here forces pylab to create a new top-level > thread pylab.close() > > # tell matplotlib to use the 'Anti-Grain Geometry' toolkit > matplotlib.use('Agg') Like Robert said, you have to call matplotlib.use *before* importing pylab. 
import matplotlib matplotlib.use('Agg') import pylab From ellisonbg.net at gmail.com Sun Oct 14 22:20:25 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Sun, 14 Oct 2007 20:20:25 -0600 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> Message-ID: <6ce0ac130710141920o5b5ed148x1470fd4552225cb4@mail.gmail.com> > Thanks for the pointers! Very impressive piece of work -- it will > help me very much with my batch jobs and I'll be watching it as it > matures. Impressive scatter/gather interface, too -- very clean. Thanks, we would love any further feedback you have. > There was a bit of a hangup when running ipcluster -- on my MacBook, > if you put it in the background right away, the process is stopped > immediately before the script has a chance to set up the controller > and spawn the engines: > > $ ipcluster -n 4 & > [1] 450 > $ Starting controller: Controller PID: 452 > [1]+ Stopped ipcluster -n 4 > $ > > Since it is stopped and the engines aren't created, trying to use it fails. On OS X, the following works (I have no idea why?!?!) for backgrounding: $ (ipcluster -n 4 &) The parentheses around everything do the trick. > I have to run 'ipcluster -n ' in the foreground, let it set up > the engines, suspend it (CTRL-Z) and put it in the background for > things to work nicely. Is there any way around this? Not a big deal, > just wondering if I'm doing anything wrong. Nope, just some weirdness with OS X. > Thanks again! > > Kurt > > > > > > On 10/13/07, Kurt Smith wrote: > > > On 10/12/07, Brian Granger wrote: > > > > Mentioned below by Jarrod, IPython1 is probably the best solution for > > > > this.
Here is the simplest parallel implementation in IPython1: > > > > > > > > In [1]: import ipython1.kernel.api as kernel > > > > > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > > > > > In [3]: rc.getIDs() > > > > Out[3]: [0, 1, 2, 3] > > > > > > Hi all: > > > > > > I installed everything to ipython1's liking, and here is what I get > > > running the above code: > > > > > > ksmith at laptop:~/Devel/python/ipython1 > > > [183]$ ipython > > > Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) > > > Type "copyright", "credits" or "license" for more information. > > > > > > IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. > > > ? -> Introduction and overview of IPython's features. > > > %quickref -> Quick reference. > > > help -> Python's own help system. > > > object? -> Details about 'object'. ?object also works, ?? prints more. > > > > > > In [1]: import ipython1.kernel.api as kernel > > > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > > > In [3]: rc.getIDs() > > > [snip traceback] > > > ConnectionError: Error connecting to the server, please recreate the client. > > > The original internal error was: > > > error(61, 'Connection refused') > > > > > > I'm sure it's something really simple, but I don't know what. > > > Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from svn. > > > > > > Thanks for any help you can give -- googling turned up nothing. 
> > > > > > Kurt > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From zunzun at zunzun.com Mon Oct 15 06:31:23 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Mon, 15 Oct 2007 06:31:23 -0400 Subject: [SciPy-user] Solved-ish: matplotlib, Tkinter and multi-threaded web applications In-Reply-To: <200710142041.38593.darren.dale@cornell.edu> References: <20071014220156.GA26338@zunzun.com> <4712B2F7.20907@gmail.com> <20071015002732.GA28782@zunzun.com> <200710142041.38593.darren.dale@cornell.edu> Message-ID: <20071015103123.GA7921@zunzun.com> On Sun, Oct 14, 2007 at 08:41:38PM -0400, Darren Dale wrote: > > Like Robert said, you have to call matplotlib.use *before* importing pylab. > > import matplotlib > matplotlib.use('Agg') > import pylab That worked fine. My thanks to you and Robert for helping me. James From mhearne at usgs.gov Mon Oct 15 15:26:21 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Mon, 15 Oct 2007 13:26:21 -0600 Subject: [SciPy-user] egg for numpy/scipy on linux? Message-ID: A couple of numpy/scipy questions: I see that there are numpy/scipy eggs in CheeseShop, which are versions 1.0.3 and 0.52 (a little old, I think). However, there is no mention of these eggs in the download/installation instructions for scipy/numpy. Why is this? Also, I thought eggs were supposed to be cross-platform, but if that were true, then why is there the SciPy Superpack for OS X? I'm hoping to use easy_install for software I am building, but I'm confused about what its capabilities are...
Thanks, Mike ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Oct 15 16:07:52 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 15 Oct 2007 22:07:52 +0200 Subject: [SciPy-user] egg for numpy/scipy on linux? In-Reply-To: References: Message-ID: Hi, Eggs are not cross-platform if there is compiled code somewhere. With numpy and scipy, there is, so eggs are not cross-platform. Matthieu 2007/10/15, Michael Hearne : > > A couple of numpy/scipy questions: > I see that there are numpy/scipy eggs in CheeseShop, which are versions > 1.0.3 and 0.52 (a little old, I think). However, there is no mention of > these eggs in the download/installation instructions for scipy/numpy. Why > is this? > > Also, I thought eggs were supposed to be cross-platform, but if that were > true, then why is there the SciPy Superpack for OS X? > > I'm hoping to use easy_install for software I am building, but I'm > confused about what it's capabilities are... > > Thanks, > > Mike > > > > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc. > ------------------------------------------------------ > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists.steve at arachnedesign.net Mon Oct 15 16:53:26 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Mon, 15 Oct 2007 16:53:26 -0400 Subject: [SciPy-user] Inconsistencies between numpy & scipy corrcoef Message-ID: <4E9ED9B3-376A-46FF-92A8-74F1134DECD6@arachnedesign.net> Hey all, Just curious if there's any reason why scipy.stats.corrcoef and numpy.corrcoef (and by extension the `cov` functions) act differently, e.g. if you are passing both of them a 2d array, scipy will take the rows to be the observations and the cols are the vars, numpy is the opposite. I feel like they should both function the same on the same inputs, no? Thanks, -steve From paul.lambkin at tyndall.ie Mon Oct 15 16:56:34 2007 From: paul.lambkin at tyndall.ie (Paul Lambkin) Date: Mon, 15 Oct 2007 21:56:34 +0100 Subject: [SciPy-user] (no subject) Message-ID: <5D368F12C1A22C429696CC32AD69E16D036C97@MAIL.tyndall.ie> Could someone provide me with a basic guide to running f2py on windows - suitable for a novice. I've followed the recipe at: http://www.scipy.org/F2PY_Windows but without much success. I am running Windows XP with python 2.5 and numpy 1.0.3.1.win32-py2.5. Gfortran is also installed. C:\Python25\Lib\site-packages\numpy\f2py does not contain f2py.py, rather f2py2e.py. When I executed C:\Python25\Lib\site-packages\numpy\f2py\f2py2e.py from the command line, I expected it to generate the usage documentation, but nothing was output. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From aisaac at american.edu Mon Oct 15 17:26:42 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 15 Oct 2007 17:26:42 -0400 Subject: [SciPy-user] Inconsistencies between numpy & scipy corrcoef In-Reply-To: <4E9ED9B3-376A-46FF-92A8-74F1134DECD6@arachnedesign.net> References: <4E9ED9B3-376A-46FF-92A8-74F1134DECD6@arachnedesign.net> Message-ID: On Mon, 15 Oct 2007, Steve Lianoglou apparently wrote: > Just curious if there's any reason why > scipy.stats.corrcoef and numpy.corrcoef (and by > extension the `cov` functions) act differently, e.g. if you > are passing both of them a 2d array, scipy will take the > rows to be the observations and the cols are the vars, > numpy is the opposite. It is actually worse than this: scipy.corrcoef is just numpy.corrcoef, but scipy.stats.corrcoef is as you say, so SciPy itself contains two conflicting versions. I agree that this should match numpy. The stats version may need to be deprecated or to force an axis argument. Cheers, Alan Isaac From lists.steve at arachnedesign.net Mon Oct 15 17:51:27 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Mon, 15 Oct 2007 17:51:27 -0400 Subject: [SciPy-user] Inconsistencies between numpy & scipy corrcoef In-Reply-To: References: <4E9ED9B3-376A-46FF-92A8-74F1134DECD6@arachnedesign.net> Message-ID: <0DB0D6F6-2114-4E88-9B27-1E05CAD3DF57@arachnedesign.net> >> Just curious if there's any reason why >> scipy.stats.corrcoef and numpy.corrcoef (and by >> extension the `cov` functions) act differently, e.g. if you >> are passing both of them a 2d array, scipy will take the >> rows to be the observations and the cols are the vars, >> numpy is the opposite. > > It is actually worse than this: > scipy.corrcoef is just numpy.corrcoef, > but scipy.stats.corrcoef is as you say, > so SciPy itself contains two conflicting versions. > > I agree that this should match numpy. > The stats version may need to be deprecated > or to force an axis argument.
So why not have the scipy methods just call back down to the numpy methods (or do some import magic in the scipy.stats module to just import the numpy funcs into the namespace)? Since you need numpy before installing scipy, why have duplicate code? -steve From pgmdevlist at gmail.com Mon Oct 15 18:14:24 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 15 Oct 2007 18:14:24 -0400 Subject: [SciPy-user] Inconsistencies between numpy & scipy corrcoef In-Reply-To: <0DB0D6F6-2114-4E88-9B27-1E05CAD3DF57@arachnedesign.net> References: <4E9ED9B3-376A-46FF-92A8-74F1134DECD6@arachnedesign.net> <0DB0D6F6-2114-4E88-9B27-1E05CAD3DF57@arachnedesign.net> Message-ID: <200710151814.25218.pgmdevlist@gmail.com> > So why not have the scipy methods just call back down to the numpy > methods (or do some import magic in the scipy.stats module to just > import the numpy funcs into the namespace)? AAMOF, corrcoef is not the only one that behaves inconsistently from one version to another: scipy.var and numpy.var both use biased estimates of the variance, while scipy.stats.var uses the unbiased estimates... Which one should take precedence ? Would it be worth adding a "biased" keyword to the numpy method (that would default to True, so that we'd keep the current behavior) ? From ryanlists at gmail.com Mon Oct 15 19:42:25 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 15 Oct 2007 18:42:25 -0500 Subject: [SciPy-user] trying to update after recent windows update - exe's won't run In-Reply-To: <4711D4BD.8070100@ar.media.kyoto-u.ac.jp> References: <4711D4BD.8070100@ar.media.kyoto-u.ac.jp> Message-ID: I try never to use IE. But I did just upgrade my internet security suite (my ISP provides it). I will try running from a command prompt. On 10/14/07, David Cournapeau wrote: > Ryan Krauss wrote: > > I was trying to update to 0.6 after a recent Windows update, but the > > exe's I downloaded won't run.
A dos window pops up and then > > immediately goes away with no info and nothing happens. If I download > > the same exe's from my Linux box and copy them to my flash drive, they > > run fine (after I remove an extra .bin that got appended to the > > filename). What is this windows craziness? Is there a way around it? > > > Did you download the file with IE on windows ? It could just be that the > file is corrupted (you could also try to launch the exe from the shell, > so that you can see the message in the dos box). > > cheers, > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robince at gmail.com Mon Oct 15 20:08:09 2007 From: robince at gmail.com (Robin) Date: Tue, 16 Oct 2007 01:08:09 +0100 Subject: [SciPy-user] NotImplementedError in sparse submatrix Message-ID: Hi, I am using sparse matrices from scipy.sparse. I create the matrix as a lil, then convert it to csc (or csr, the problem is the same). The first time I try to access a slice in the form A[:16,:] I get the following error (stack trace below). The funny thing is my code has a facility to save and reload the matrix with savemat/loadmat and if the matrix is loaded from a file instead of generated everything works fine without the error. : Wrong number of arguments for overloaded function 'get_csr_submatrix'. This happens with dev3434 on Windows/Cygwin/gcc 3.4.4 and on dev3437 on Mac OS. Here is the full trace, if any more debug info from my code would be helpful please let me know. /Users/robince/phd/maxent/python/amari.py in solve_single(self, Pr, k) 208 sf = self._solvefunc 209 --> 210 Asmall = self.A[:l,:] 211 Bsmall = Asmall.T 212 eta_sampled = Asmall.matvec(Pr) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py in __getitem__(self, key) 1045 if isinstance(col, slice): 1046 # Returns a new matrix!
-> 1047 return self.get_submatrix( row, col ) 1048 elif isinstance(row, slice): 1049 return self._getslice(row, col) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py in get_submatrix(self, slice0, slice1) 1178 or scalars.""" 1179 aux = _cs_matrix._get_submatrix( self, self.shape[1], self.shape[0], -> 1180 slice1, slice0 ) 1181 nr, nc = aux[3:] 1182 return self.__class__( aux[:3], dims = (nc, nr) ) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py in _get_submatrix(self, shape0, shape1, slice0, slice1) 806 self.indptr, self.indices, 807 self.data, --> 808 i0, i1, j0, j1 ) 809 data, indices, indptr = aux[2], aux[1], aux[0] 810 return data, indices, indptr, i1 - i0, j1 - j0 /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparsetools.py in get_csr_submatrix(*args) 603 std::vector<(npy_cdouble_wrapper)> Bx) 604 """ --> 605 return _sparsetools.get_csr_submatrix(*args) 606 607 : Wrong number of arguments for overloaded function 'get_csr_submatrix'. 
Possible C/C++ prototypes are: get_csr_submatrix<(int,int)>(int const,int const,int const [],int const [],int const [],int const,int const,int const,int const,std::vector *,std::vector *,std::vector *) get_csr_submatrix<(int,long)>(int const,int const,int const [],int const [],long const [],int const,int const,int const,int const,std::vector *,std::vector *,std::vector *) get_csr_submatrix<(int,float)>(int const,int const,int const [],int const [],float const [],int const,int const,int const,int const,std::vector *,std::vector *,std::vector *) get_csr_submatrix<(int,double)>(int const,int const,int const [],int const [],double const [],int const,int const,int const,int const,std::vector *,std::vector *,std::vector *) get_csr_submatrix<(int,npy_cfloat_wrapper)>(int const,int const,int const [],int const [],npy_cfloat_wrapper const [],int const,int const,int const,int const,std::vector *,std::vector *,std::vector *) get_csr_submatrix<(int,npy_cdouble_wrapper)>(int const,int const,int const [],int const [],npy_cdouble_wrapper const [],int const,int const,int const,int const,std::vector *,std::vector *,std::vector *) > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparsetools.py(605)get_csr_submatrix() 604 """ --> 605 return _sparsetools.get_csr_submatrix(*args) 606 ipdb> WARNING: Failure executing file: Thanks Robin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david at ar.media.kyoto-u.ac.jp Tue Oct 16 00:43:08 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 16 Oct 2007 13:43:08 +0900 Subject: [SciPy-user] Inconsistencies between numpy & scipy corrcoef In-Reply-To: <200710151814.25218.pgmdevlist@gmail.com> References: <4E9ED9B3-376A-46FF-92A8-74F1134DECD6@arachnedesign.net> <0DB0D6F6-2114-4E88-9B27-1E05CAD3DF57@arachnedesign.net> <200710151814.25218.pgmdevlist@gmail.com> Message-ID: <4714415C.6020902@ar.media.kyoto-u.ac.jp> Pierre GM wrote: >> So why not have the scipy methods just call back down to the numpy >> methods (or do some import magic in the scipy.stats module to just >> import the numpy funcs into the namespace)? >> > > AAMOF, corrcoef is not the only one that behaves inconsistently from one > version to another: > scipy.var and numpy.var both use biased estimates of the variance, while > scipy.stats.var uses the unbiased estimates... Which one should take > precedence ? Would it be worth adding a "biased" keyword to the numpy method > (that would default to True, so that we'd keep the current behavior) ? > I already complained about this issue, and let it slip. Actually, instead of complaining, I should have submitted a patch since then. http://scipy.org/scipy/scipy/ticket/425 Having inconsistencies for basic things like mean, std is bad. Maybe we should have a list of those inconsistencies, to keep track of them (maybe by creating a numpy/scipy inconsistencies roadmap on scipy or numpy trac ?). For corrcoef, the problem was that the numpy version cannot be changed for backward compatibility: a new, fixed version should be implemented anyway, but should have a different name (at least, that's what I remember from the discussion which happened at that time).
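The row-versus-column convention being discussed can be illustrated with numpy's rowvar argument (a small sketch with made-up data, not code from the thread):

```python
import numpy as np

# Five observations (rows) of two variables (columns), nearly linear.
data = np.array([[1.0, 2.1],
                 [2.0, 3.9],
                 [3.0, 6.2],
                 [4.0, 8.1],
                 [5.0, 9.8]])

# numpy.corrcoef treats each ROW as a variable by default ...
rows_as_vars = np.corrcoef(data)                # 5x5 matrix
# ... while rowvar=False treats each COLUMN as a variable,
# which is the observations-in-rows convention from scipy.stats.
cols_as_vars = np.corrcoef(data, rowvar=False)  # 2x2 matrix
```

The same array therefore yields correlation matrices of entirely different shapes depending on which convention a library assumes.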
cheers, David From mhearne at usgs.gov Tue Oct 16 12:57:30 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 16 Oct 2007 10:57:30 -0600 Subject: [SciPy-user] interpolation question Message-ID: <08533109-F44E-4EA0-BB3E-8D4293D4C7D8@usgs.gov> I have a question regarding a warning I'm getting trying to do a 2D interpolation. If I see the message below, does that mean I should not trust my results? Warning: No more knots can be added because the number of B-spline coefficients already exceeds the number of data points m. Probably causes: either s or m too small. (fp>s) ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Karl.Young at ucsf.edu Tue Oct 16 14:47:55 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 16 Oct 2007 11:47:55 -0700 Subject: [SciPy-user] lapack unresolved symbol question Message-ID: <4715075B.6080703@ucsf.edu> From googling around a bit it looks like I'm having an unresolved symbol problem that was apparently typical at one point but also apparently solved. But I can't quite pinpoint the fix so I figured I'd check re. easy fixes before rebuilding libraries and the like. The issue seems to be that the symbol clapack_sgesv is missing from /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so - apparently confirmed via nm, i.e.
nm /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so | grep clapack_sgesv U clapack_sgesv 00012640 d doc_f2py_rout_clapack_sgesv 000094c0 t f2py_rout_clapack_sgesv In detail: ------------------------------------------------------------------------------------------------------------------------------ In [1]: from scipy.linalg import * exceptions.ImportError Traceback (most recent call last) /home/karl/projects/MCMC/ /usr/lib/python2.4/site-packages/scipy/linalg/__init__.py 6 from linalg_version import linalg_version as __version__ 7 ----> 8 from basic import * 9 from decomp import * 10 from matfuncs import * /usr/lib/python2.4/site-packages/scipy/linalg/basic.py 15 #from blas import get_blas_funcs 16 from flinalg import get_flinalg_funcs ---> 17 from lapack import get_lapack_funcs 18 from numpy import asarray,zeros,sum,newaxis,greater_equal,subtract,arange,\ 19 conjugate,ravel,r_,mgrid,take,ones,dot,transpose,sqrt,add,real /usr/lib/python2.4/site-packages/scipy/linalg/lapack.py 16 17 from scipy.linalg import flapack ---> 18 from scipy.linalg import clapack 19 _use_force_clapack = 1 20 if hasattr(clapack,'empty_module'): ImportError: /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv ------------------------------------------------------------------------------------------------------------------------------ My situation: os version = Fedora Core release 5 (Bordeaux) python version = Python 2.4.3 (#1, Oct 23 2006, 14:19:47) scipy version = '0.5.3.dev2950' Any tips appreciated, thanks, -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From reckoner at gmail.com Tue Oct 16 16:01:19 2007 From: reckoner at gmail.com (Reckoner) Date: Tue, 16 Oct 2007 13:01:19 -0700 Subject: [SciPy-user] Format numbers as with Matlab format command ? 
Message-ID: Matlab has a "format" command which will format the displayed numbers to show a fixed precision or number of digits. Does IPython/scipy have anything similar? For example, in Matlab, >> format compact >> pi ans = 3.1416 >> format long >> pi ans = 3.14159265358979 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Tue Oct 16 16:07:14 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 16 Oct 2007 22:07:14 +0200 Subject: [SciPy-user] Format numbers as with Matlab format command ? In-Reply-To: References: Message-ID: On Tue, 16 Oct 2007 13:01:19 -0700 Reckoner wrote: > Matlab has a "format" command which will format the >displayed numbers to > show a fixed > precision or number of digits. Does IPython/scipy have >anything similar? > >For example, in Matlab, > > >> format compact > >> pi > ans = > 3.1416 > >> format long > >> pi > ans = > 3.14159265358979 set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, suppress=None, nanstr=None, infstr=None) Nils From robert.kern at gmail.com Tue Oct 16 16:29:32 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 16 Oct 2007 15:29:32 -0500 Subject: [SciPy-user] lapack unresolved symbol question In-Reply-To: <4715075B.6080703@ucsf.edu> References: <4715075B.6080703@ucsf.edu> Message-ID: <47151F2C.2010802@gmail.com> Karl Young wrote: > From googling around a bit it looks like I'm having an unresolved > symbol problem that was apparently typical at one point but also > apparently solved. But I can't quite pinpoint the fix so I figured I'd > check re. easy fixes before rebuilding libraries and the like. > > The issue seems to be that the symbol clapack_sgesv is missing from > /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so > - apparently confirmed via nm, i.e.
> > nm /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so | grep > clapack_sgesv > U clapack_sgesv > 00012640 d doc_f2py_rout_clapack_sgesv > 000094c0 t f2py_rout_clapack_sgesv It would be good to know how you built scipy and whether you built against ATLAS or not. Most likely, scipy got built against an ATLAS-accelerated LAPACK library (which has the clapack_* symbols), but the LAPACK shared library that's getting picked up at runtime does not have the accelerated LAPACK functions. Check what shared libraries this extension module was built with using ldd(1): [~]$ ldd $PYLIB/scipy-0.7.0.dev3435-py2.5-linux-x86_64.egg/scipy/linalg/clapack.so liblapack.so.3 => /usr/lib/atlas/liblapack.so.3 (0x00002aad3362f000) libf77blas.so.3 => /usr/lib/libf77blas.so.3 (0x00002aad33f87000) libcblas.so.3 => /usr/lib/libcblas.so.3 (0x00002aad348af000) libatlas.so.3 => /usr/lib/libatlas.so.3 (0x00002aad351d4000) libgfortran.so.1 => /usr/lib/libgfortran.so.1 (0x00002aad35b3b000) libm.so.6 => /lib/libm.so.6 (0x00002aad35dd6000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00002aad36058000) libc.so.6 => /lib/libc.so.6 (0x00002aad36266000) libblas.so.3 => /usr/lib/atlas/libblas.so.3 (0x00002aad365b8000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0x00002aad36f56000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) Check whichever liblapack library you have for the clapack_sgesv symbol. The .so is often stripped of symbols, but if you have a liblapack.a file that goes with it, you can check that, instead: [~]$ nm /usr/lib/atlas/liblapack.so.3 | grep clapack_sgesv nm: /usr/lib/atlas/liblapack.so.3: no symbols [~]$ nm /usr/lib/atlas/liblapack.a | grep clapack_sgesv clapack_sgesv.o: 0000000000000000 T clapack_sgesv -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From Karl.Young at ucsf.edu Tue Oct 16 16:58:19 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 16 Oct 2007 13:58:19 -0700 Subject: [SciPy-user] lapack unresolved symbol question In-Reply-To: <47151F2C.2010802@gmail.com> References: <4715075B.6080703@ucsf.edu> <47151F2C.2010802@gmail.com> Message-ID: <471525EB.3050607@ucsf.edu> Thanks Robert; I'll check the installed libraries. >Karl Young wrote: > > >> From googling around a bit it looks like I'm having an unresolved >>symbol problem that was apparently typical at one point but also >>apparently solved. But I can't quite pinpoint the fix so I figured I'd >>check re. easy fixes before rebuilding libraries and the like. >> >>The issues seems to be that the symbol clapack_sgesv is missing from >>/usr/lib/python2.4/site-packages/scipy/linalg/clapack.so >>- apparently confirmed via nm, i.e. >> >>nm /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so | grep >>clapack_sgesv >> U clapack_sgesv >>00012640 d doc_f2py_rout_clapack_sgesv >>000094c0 t f2py_rout_clapack_sgesv >> >> > >It would be good to know how you built scipy and whether you built against ATLAS >or not. Most likely, scipy got built against an ATLAS-accelerated LAPACK library >(which has the clapack_* symbols), but the LAPACK shared library that's getting >picked up at runtime does not have the accelerated LAPACK functions. 
> >Check what shared libraries this extension module was built with using ldd(1): > >[~]$ ldd $PYLIB/scipy-0.7.0.dev3435-py2.5-linux-x86_64.egg/scipy/linalg/clapack.so > liblapack.so.3 => /usr/lib/atlas/liblapack.so.3 (0x00002aad3362f000) > libf77blas.so.3 => /usr/lib/libf77blas.so.3 (0x00002aad33f87000) > libcblas.so.3 => /usr/lib/libcblas.so.3 (0x00002aad348af000) > libatlas.so.3 => /usr/lib/libatlas.so.3 (0x00002aad351d4000) > libgfortran.so.1 => /usr/lib/libgfortran.so.1 (0x00002aad35b3b000) > libm.so.6 => /lib/libm.so.6 (0x00002aad35dd6000) > libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00002aad36058000) > libc.so.6 => /lib/libc.so.6 (0x00002aad36266000) > libblas.so.3 => /usr/lib/atlas/libblas.so.3 (0x00002aad365b8000) > libg2c.so.0 => /usr/lib/libg2c.so.0 (0x00002aad36f56000) > /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) > >Check whichever liblapack library you have for the clapack_sgesv symbol. The .so >is often stripped of symbols, but if you have a liblapack.a file that goes with >it, you can check that, instead: > >[~]$ nm /usr/lib/atlas/liblapack.so.3 | grep clapack_sgesv >nm: /usr/lib/atlas/liblapack.so.3: no symbols >[~]$ nm /usr/lib/atlas/liblapack.a | grep clapack_sgesv >clapack_sgesv.o: >0000000000000000 T clapack_sgesv > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From Karl.Young at ucsf.edu Tue Oct 16 17:48:10 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 16 Oct 2007 14:48:10 -0700 Subject: [SciPy-user] lapack unresolved symbol question In-Reply-To: <47151F2C.2010802@gmail.com> References: <4715075B.6080703@ucsf.edu> <47151F2C.2010802@gmail.com> Message-ID: <4715319A.0@ucsf.edu> Thanks again Robert; I "solved" the problem inspired by your suggestion but maybe you or someone else has a suggestion for something more sensible or elegant. 
The problem seems to be that there's a liblapack.so.3 in /usr/lib (presumably not built against an ATLAS-accelerated LAPACK) that gets picked up if I have /usr/lib in LD_LIBRARY_PATH. I had forgotten that I'd had this problem before and as a temporary solution had just taken /usr/lib out of LD_LIBRARY_PATH; /usr/lib got put back in as a result of installing something else (hence the recurrence). Obviously removing /usr/lib from LD_LIBRARY_PATH isn't a great "solution", so pardon my ignorance, but is there a python environment variable I can set to get python to look in /usr/lib/atlas before /usr/lib (PYTHONPATH doesn't seem to work for this)? Thanks, >Karl Young wrote: > > >> From googling around a bit it looks like I'm having an unresolved >>symbol problem that was apparently typical at one point but also >>apparently solved. But I can't quite pinpoint the fix so I figured I'd >>check re. easy fixes before rebuilding libraries and the like. >> >>The issues seems to be that the symbol clapack_sgesv is missing from >>/usr/lib/python2.4/site-packages/scipy/linalg/clapack.so >>- apparently confirmed via nm, i.e. >> >>nm /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so | grep >>clapack_sgesv >> U clapack_sgesv >>00012640 d doc_f2py_rout_clapack_sgesv >>000094c0 t f2py_rout_clapack_sgesv >> >> > >It would be good to know how you built scipy and whether you built against ATLAS >or not. Most likely, scipy got built against an ATLAS-accelerated LAPACK library >(which has the clapack_* symbols), but the LAPACK shared library that's getting >picked up at runtime does not have the accelerated LAPACK functions.
> >Check what shared libraries this extension module was built with using ldd(1): > >[~]$ ldd $PYLIB/scipy-0.7.0.dev3435-py2.5-linux-x86_64.egg/scipy/linalg/clapack.so > liblapack.so.3 => /usr/lib/atlas/liblapack.so.3 (0x00002aad3362f000) > libf77blas.so.3 => /usr/lib/libf77blas.so.3 (0x00002aad33f87000) > libcblas.so.3 => /usr/lib/libcblas.so.3 (0x00002aad348af000) > libatlas.so.3 => /usr/lib/libatlas.so.3 (0x00002aad351d4000) > libgfortran.so.1 => /usr/lib/libgfortran.so.1 (0x00002aad35b3b000) > libm.so.6 => /lib/libm.so.6 (0x00002aad35dd6000) > libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00002aad36058000) > libc.so.6 => /lib/libc.so.6 (0x00002aad36266000) > libblas.so.3 => /usr/lib/atlas/libblas.so.3 (0x00002aad365b8000) > libg2c.so.0 => /usr/lib/libg2c.so.0 (0x00002aad36f56000) > /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) > >Check whichever liblapack library you have for the clapack_sgesv symbol. The .so >is often stripped of symbols, but if you have a liblapack.a file that goes with >it, you can check that, instead: > >[~]$ nm /usr/lib/atlas/liblapack.so.3 | grep clapack_sgesv >nm: /usr/lib/atlas/liblapack.so.3: no symbols >[~]$ nm /usr/lib/atlas/liblapack.a | grep clapack_sgesv >clapack_sgesv.o: >0000000000000000 T clapack_sgesv > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From robert.kern at gmail.com Tue Oct 16 18:23:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 16 Oct 2007 17:23:25 -0500 Subject: [SciPy-user] lapack unresolved symbol question In-Reply-To: <4715319A.0@ucsf.edu> References: <4715075B.6080703@ucsf.edu> <47151F2C.2010802@gmail.com> <4715319A.0@ucsf.edu> Message-ID: <471539DD.3000105@gmail.com> Karl Young wrote: > Thanks again Robert; I "solved" the problem inspired by your suggestion > but maybe you or someone else has a suggestion for something 
more > sensible or elegant. The problem seems to be that there's a > liblapack.so.3 in /usr/lib (presumably not built against an > ATLAS-accelerated LAPACK) that gets picked up if I have /usr/lib in > LD_LIBRARY_PATH . I had forgotten that I'd had this problem before and > as a temporary solution had just taken /usr/lib out of LD_LIBRARY_PATH; > /usr/lib got put back in as a result of installing of something else > (hence the recurrence). Obviously removing /usr/lib from LD_LIBRARY_PATH > isn't a great "solution" so pardon my ignorance but is there a python > environment variable can I set to get python to look in /usr/lib/atlas > before /usr/lib (PYTHONPATH doesn't seem to work for this) ? Thanks, Its presence in LD_LIBRARY_PATH is probably unnecessary, but it was probably masking later entries in LD_LIBRARY_PATH or in /etc/ld.so.conf. Ubuntu's ld.so, for example, always checks it, whether or not it is specified in LD_LIBRARY_PATH or /etc/ld.so.conf, but only if the desired library was not found in the LD_LIBRARY_PATH or /etc/ld.so.conf paths. Look in the ld.so(8) man page for more information about how this works on your system. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Tue Oct 16 22:28:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 17 Oct 2007 11:28:14 +0900 Subject: [SciPy-user] lapack unresolved symbol question In-Reply-To: <4715319A.0@ucsf.edu> References: <4715075B.6080703@ucsf.edu> <47151F2C.2010802@gmail.com> <4715319A.0@ucsf.edu> Message-ID: <4715733E.4020406@ar.media.kyoto-u.ac.jp> Karl Young wrote: > Thanks again Robert; I "solved" the problem inspired by your suggestion > but maybe you or someone else has a suggestion for something
The problem seems to be that there's a > liblapack.so.3 in /usr/lib (presumably not built against an > ATLAS-accelerated LAPACK) that gets picked up if I have /usr/lib in > LD_LIBRARY_PATH . I had forgotten that I'd had this problem before and > as a temporary solution had just taken /usr/lib out of LD_LIBRARY_PATH; > /usr/lib got put back in as a result of installing of something else > (hence the recurrence). Obviously removing /usr/lib from LD_LIBRARY_PATH > isn't a great "solution" so pardon my ignorance but is there a python > environment variable can I set to get python to look in /usr/lib/atlas > before /usr/lib (PYTHONPATH doesn't seem to work for this) ? Thanks, PYTHONPATH won't work for this, indeed. This is outside python's hands, so there is no python solution here. I don't know why LD_LIBRARY_PATH contains /usr/lib: this is rather strange, and kind of defeats its purpose; LD_LIBRARY_PATH itself is a hack [1], though, so modifying it is not worse than using it in most cases. In your case, if it gets rewritten, it can be a pain, though. One solution would be to launch a python shell from a shell command which sets LD_LIBRARY_PATH correctly (that's what I would do in your case, but this is just a workaround). Actually, there is something we could do on scipy's side to make all this easier, but it is not trivial to implement.
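For the record, the shell-wrapper idea can also be scripted from Python itself by re-exec'ing the interpreter with a corrected LD_LIBRARY_PATH before scipy is imported. This is only a sketch of that workaround, not anything scipy provides; the /usr/lib/atlas path is the one from this thread, and the helper names here are invented:

```python
import os
import sys

ATLAS_DIR = "/usr/lib/atlas"  # directory holding the ATLAS-accelerated liblapack

def prepend_path(directory, current):
    """Return `current` (an LD_LIBRARY_PATH value) with `directory` first."""
    parts = [p for p in current.split(os.pathsep) if p and p != directory]
    return os.pathsep.join([directory] + parts)

def ensure_atlas_first():
    """Re-exec the interpreter so the dynamic linker sees ATLAS_DIR first.

    ld.so reads LD_LIBRARY_PATH only at process startup, so mutating
    os.environ inside a running interpreter does not change which shared
    libraries it loads; re-exec'ing the process picks the change up.
    """
    current = os.environ.get("LD_LIBRARY_PATH", "")
    wanted = prepend_path(ATLAS_DIR, current)
    if current != wanted:
        os.environ["LD_LIBRARY_PATH"] = wanted
        os.execv(sys.executable, [sys.executable] + sys.argv)
```

Calling ensure_atlas_first() at the very top of a script, before importing scipy, gives the same effect as an outer shell wrapper without having to remember to use one.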
Alternatively, you could just install the rpms, which are available for FC5: http://download.opensuse.org/repositories/home:/ashigabou/Fedora_Extras_5/ This includes numpy 1.0.4, scipy 0.6, blas/lapack, and a source rpm for ATLAS (you can get instructions here: http://www.scipy.org/Installing_SciPy/Linux ) cheers, David [1]: http://xahlee.org/UnixResource_dir/_/ldpath.html From couge.chen at gmail.com Tue Oct 16 23:46:50 2007 From: couge.chen at gmail.com (Couge Chen) Date: Tue, 16 Oct 2007 23:46:50 -0400 Subject: [SciPy-user] About fftpack Message-ID: <2a05ece60710162046i624513a6t392413d6825ea18d@mail.gmail.com> Hi all, Here is a piece of C++ code to do a 2D IFFT from a square complex array in K-space to a real array of the same size in real space.

p_forward = rfftwnd_create_plan(numOfPoints.dim(), numOfPoints, FFTW_REAL_TO_COMPLEX, FFTW_ESTIMATE);
:
:
static void FFT(Array2D< double > &in, Array2D< fftw_complex > &out)
{
    rfftwnd_one_real_to_complex(p_forward, in[0], out[0]);
};
:
:
myFFTclass::IFFT(complex_array, real_array);

For this C++ code, fftw 2.1.5 is used. Now I want to translate it into Python with scipy.fftpack. I used fftpack.ifft2(complex_array), but got a new complex array. Then I compared the real part of this new complex array with the "real_array" calculated by C++. They looked quite different. So I was wondering how to use fftpack to generate the same "real_array". Could anyone help me? Thanks a lot! Couge -------------- next part -------------- An HTML attachment was scrubbed... URL: From karl.young at ucsf.edu Wed Oct 17 02:46:48 2007 From: karl.young at ucsf.edu (Young, Karl) Date: Tue, 16 Oct 2007 23:46:48 -0700 Subject: [SciPy-user] lapack unresolved symbol question References: <4715075B.6080703@ucsf.edu> <47151F2C.2010802@gmail.com> <4715319A.0@ucsf.edu> <4715733E.4020406@ar.media.kyoto-u.ac.jp> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78039F095F@EXVS06.net.ucsf.edu> Thanks again David and Robert re. suggestions.
I did think about a shell script to set LD_LIBRARY_PATH and run python but though functional somehow that seemed like too much intervention to me. You guys have convinced me that having /usr/lib in LD_LIBRARY_PATH is weird anyway so just setting LD_LIBRARY_PATH in .cshrc will probably do for now (and didn't seem to cause any other problems when I did it that way before). -----Original Message----- From: scipy-user-bounces at scipy.org on behalf of David Cournapeau Sent: Tue 10/16/2007 7:28 PM To: SciPy Users List Subject: Re: [SciPy-user] lapack unresolved symbol question Karl Young wrote: > Thanks again Robert; I "solved" the problem inspired by your suggestion > but maybe you or someone else has a suggestion for something more > sensible or elegant. The problem seems to be that there's a > liblapack.so.3 in /usr/lib (presumably not built against an > ATLAS-accelerated LAPACK) that gets picked up if I have /usr/lib in > LD_LIBRARY_PATH . I had forgotten that I'd had this problem before and > as a temporary solution had just taken /usr/lib out of LD_LIBRARY_PATH; > /usr/lib got put back in as a result of installing of something else > (hence the recurrence). Obviously removing /usr/lib from LD_LIBRARY_PATH > isn't a great "solution" so pardon my ignorance but is there a python > environment variable can I set to get python to look in /usr/lib/atlas > before /usr/lib (PYTHONPATH doesn't seem to work for this) ? Thanks, PYTHONPATH won't work for this, indeed. This is outside python's hand, so no python solution here. I don't know why LD_LIBRARY_PATH contains /usr/lib: this is rather strange, and kind of defeat its purpose; LD_LIBRARY_PATH itself is a hack [1], though, so modifying it is not worse than using it in most cases. In your case, if it gets rewritten, it can be a pain, though. One solution would be to launch a python shell from a shell command which set LD_LIBRARY_PATH correctly (that's what I would do in your case, but this just a workaround). 
Actually, there is something we could do on scipy's side to make all this easier, but it is not trivial to implement. Alternatively, you could just install the rpms, which are available for FC5: http://download.opensuse.org/repositories/home:/ashigabou/Fedora_Extras_5/ This includes numpy 1.0.4, scipy 0.6, blas/lapack, and a source rpm for ATLAS (you can get instructions here: http://www.scipy.org/Installing_SciPy/Linux ) cheers, David [1]: http://xahlee.org/UnixResource_dir/_/ldpath.html _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From jaonary at gmail.com Wed Oct 17 05:12:59 2007 From: jaonary at gmail.com (Jaonary Rabarisoa) Date: Wed, 17 Oct 2007 11:12:59 +0200 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: <6ce0ac130710141920o5b5ed148x1470fd4552225cb4@mail.gmail.com> References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> <6ce0ac130710141920o5b5ed148x1470fd4552225cb4@mail.gmail.com> Message-ID: Hi all, Finally, after looking at the documentation of both IPython1 and Parallel Python, I've decided to use Parallel Python because I found it very easy to use and it totally fits my needs. It may be interesting to compare the two packages more deeply. I'll try to do the same thing with IPython and I'll report here what I think about them. Again, thank you for your help. Jaonary On 10/15/07, Brian Granger wrote: > > > Thanks for the pointers! Very impressive piece of work -- it will > > help me very much with my batch jobs and I'll be watching it as it > > matures. Impressive scatter/gather interface, too -- very clean. > > Thanks, we would love any further feedback you have.
> > > There was a bit of a hangup when running ipcluster -- on my MacBook, > > if you put it in the background right away, the process is stopped > > immediately before the script has a chance to set up the controller > > and spawn the engines: > > > > $ ipcluster -n 4 & > > [1] 450 > > $ Starting controller: Controller PID: 452 > > [1]+ Stopped ipcluster -n 4 > > $ > > > > Since it is stopped and the engines aren't created, trying to use it > fails. > > On OS X, the following works (I have no idea why?!?!) for backgrounding: > > $ (ipcluster -n 4 &) > > The parenthesis around everything does the trick. > > > I have to run 'ipcluster -n ' in the foreground, let it set up > > the engines, suspend it (CTRL-Z) and put it in the background for > > things to work nicely. Is there any way around this? Not a big deal, > > just wondering if I'm doing anything wrong. > > Nope, just some wierdness with OS X. > > > Thanks again! > > > > Kurt > > > > > > > > > > On 10/13/07, Kurt Smith wrote: > > > > On 10/12/07, Brian Granger wrote: > > > > > Mentioned below by Jarrod, IPython1 is probably the best solution > for > > > > > this. Here is the simplest parallel implementation in IPython1: > > > > > > > > > > In [1]: import ipython1.kernel.api as kernel > > > > > > > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > > > > > > > In [3]: rc.getIDs() > > > > > Out[3]: [0, 1, 2, 3] > > > > > > > > Hi all: > > > > > > > > I installed everything to ipython1's liking, and here is what I get > > > > running the above code: > > > > > > > > ksmith at laptop:~/Devel/python/ipython1 > > > > [183]$ ipython > > > > Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) > > > > Type "copyright", "credits" or "license" for more information. > > > > > > > > IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. > > > > ? -> Introduction and overview of IPython's features. > > > > %quickref -> Quick reference. > > > > help -> Python's own help system. > > > > object? 
-> Details about 'object'. ?object also works, ?? prints > more. > > > > In [1]: import ipython1.kernel.api as kernel > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > In [3]: rc.getIDs() > > > > [snip traceback] > > > > ConnectionError: Error connecting to the server, please recreate the client. > > > > The original internal error was: > > > > error(61, 'Connection refused') > > > > > > > > I'm sure it's something really simple, but I don't know what. > > > > Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from > svn. > > > > > > > > Thanks for any help you can give -- googling turned up nothing. > > > > > > > > Kurt > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaonary at gmail.com Wed Oct 17 06:19:38 2007 From: jaonary at gmail.com (Jaonary Rabarisoa) Date: Wed, 17 Oct 2007 12:19:38 +0200 Subject: [SciPy-user] Swig and numpy again Message-ID: Hi all, I have another question about using the numpy.i interface file. There are a lot of macros and typemaps defined in this file that we can use to rapidly write C/C++ extension code that uses numpy. But there are some cases that cause problems for me.
Suppose, to keep it simple, that I want to wrap a function that computes an element-wise sum of two matrices. The C/C++ function prototype could be like this:

void prod(double* A, int row_A, int col_A,
          double* B, int row_B, int col_B,
          double* res)
{
    ...........
    res[i] = A[i]+B[i];
}

In Python, I'd like to call this function like below:

A = numpy.ones((10,10))
B = numpy.ones((10,10))
R = prod(A,B)

There is a typemap named ARGOUT_ARRAY in numpy.i, but it is not really suited to my situation. In fact, with a 2D array it needs the size of res to be hard-coded, and with a 1D array I have to pass the size of the result as an argument in my Python code (R = prod(A,B,size of r)). So how can I solve this? Any help will be appreciated. Regards, Jaonary -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandricionut at yahoo.com Wed Oct 17 10:30:51 2007 From: sandricionut at yahoo.com (sandric ionut) Date: Wed, 17 Oct 2007 07:30:51 -0700 (PDT) Subject: [SciPy-user] Moran I autocorrelation Message-ID: <270636.79951.qm@web51303.mail.re2.yahoo.com> Hello: How can I calculate the Moran's I autocorrelation index using scipy? Thank you Ionut __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg.net at gmail.com Wed Oct 17 12:51:23 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 17 Oct 2007 10:51:23 -0600 Subject: [SciPy-user] running scipy code simultaneously on several machines In-Reply-To: References: <6ce0ac130710120842g15ca4910yabd34fb41c36531b@mail.gmail.com> <6ce0ac130710140933w35ff0712te900b5d91d9a9021@mail.gmail.com> <6ce0ac130710141920o5b5ed148x1470fd4552225cb4@mail.gmail.com> Message-ID: <6ce0ac130710170951h1b6c3c8p604794459fe2c836@mail.gmail.com> Jaonary, If Parallel Python fits your needs best you should use it.
But, I am curious about what specific things about Parallel Python do you like better than IPython1? Simplicity? Documentation? Intuitive? Performance? Thanks! Brian On 10/17/07, Jaonary Rabarisoa wrote: > Hi all, > > Finally, after looking at the documentation of both Ipython1 and parallel > python, I've decided to use Parallel Python because I found it very > easy to use and totaly fit my needs. It may be interesting to compare the > two package more deeply. I'll try to do the same thing with I python > and I'll report here what I think about them. > > Again, thank you for your help. > > Jaonary > > > On 10/15/07, Brian Granger < ellisonbg.net at gmail.com> wrote: > > > Thanks for the pointers! Very impressive piece of work -- it will > > > help me very much with my batch jobs and I'll be watching it as it > > > matures. Impressive scatter/gather interface, too -- very clean. > > > > Thanks, we would love any further feedback you have. > > > > > There was a bit of a hangup when running ipcluster -- on my MacBook, > > > if you put it in the background right away, the process is stopped > > > immediately before the script has a chance to set up the controller > > > and spawn the engines: > > > > > > $ ipcluster -n 4 & > > > [1] 450 > > > $ Starting controller: Controller PID: 452 > > > [1]+ Stopped ipcluster -n 4 > > > $ > > > > > > Since it is stopped and the engines aren't created, trying to use it > fails. > > > > On OS X, the following works (I have no idea why?!?!) for backgrounding: > > > > $ (ipcluster -n 4 &) > > > > The parenthesis around everything does the trick. > > > > > I have to run 'ipcluster -n ' in the foreground, let it set up > > > the engines, suspend it (CTRL-Z) and put it in the background for > > > things to work nicely. Is there any way around this? Not a big deal, > > > just wondering if I'm doing anything wrong. > > > > Nope, just some wierdness with OS X. > > > > > Thanks again! 
> > > > > > Kurt > > > > > > > > > > > > > > On 10/13/07, Kurt Smith wrote: > > > > > On 10/12/07, Brian Granger < ellisonbg.net at gmail.com> wrote: > > > > > > Mentioned below by Jarrod, IPython1 is probably the best solution > for > > > > > > this. Here is the simplest parallel implementation in IPython1: > > > > > > > > > > > > In [1]: import ipython1.kernel.api as kernel > > > > > > > > > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1 ',10105)) > > > > > > > > > > > > In [3]: rc.getIDs() > > > > > > Out[3]: [0, 1, 2, 3] > > > > > > > > > > Hi all: > > > > > > > > > > I installed everything to ipython1's liking, and here is what I get > > > > > running the above code: > > > > > > > > > > ksmith at laptop:~/Devel/python/ipython1 > > > > > [183]$ ipython > > > > > Python 2.5.1 (r251:54863, Sep 21 2007, 22:12:00) > > > > > Type "copyright", "credits" or "license" for more information. > > > > > > > > > > IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python. > > > > > ? -> Introduction and overview of IPython's features. > > > > > %quickref -> Quick reference. > > > > > help -> Python's own help system. > > > > > object? -> Details about 'object'. ?object also works, ?? prints > more. > > > > > > > > > > In [1]: import ipython1.kernel.api as kernel > > > > > > > > > > In [2]: rc = kernel.RemoteController(('127.0.0.1',10105)) > > > > > > > > > > In [3]: rc.getIDs() > > > > > [snip traceback] > > > > > ConnectionError: Error connecting to the server, please recreate the > client. > > > > > The original internal error was: > > > > > error(61, 'Connection refused') > > > > > > > > > > I'm sure it's something really simple, but I don't know what. > > > > > Pointers? I'm on a MacBook, 10.4.10, python 2.5.1, ipython1 from > svn. > > > > > > > > > > Thanks for any help you can give -- googling turned up nothing. 
> > > > > > > > > > Kurt > > > > > _______________________________________________ > > > > > SciPy-user mailing list > > > > > SciPy-user at scipy.org > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From lev at columbia.edu Wed Oct 17 13:21:48 2007 From: lev at columbia.edu (Lev Givon) Date: Wed, 17 Oct 2007 13:21:48 -0400 Subject: [SciPy-user] Is anyone using scipy on Xgrid clusters? Message-ID: <20071017132148.6ff99eff@columbia.edu> Out of curiosity, has anyone on the list had occasion to successfully run multiple Python jobs that use scipy (but not MPI) on Apple clusters with Xgrid? Several of my colleagues had recently begun to do so, but encountered some curious issues regarding the parallel execution of jobs (which admittedly might have more to do with Python/Xgrid interactions than with scipy specifically); I would consequently be interested in learning of similar experiences. L.G. 
From mhearne at usgs.gov Wed Oct 17 14:29:33 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 17 Oct 2007 12:29:33 -0600 Subject: [SciPy-user] interpolation question In-Reply-To: <08533109-F44E-4EA0-BB3E-8D4293D4C7D8@usgs.gov> References: <08533109-F44E-4EA0-BB3E-8D4293D4C7D8@usgs.gov> Message-ID: <32BB8C03-3075-4725-8D4B-DDEA85B40075@usgs.gov> Is the 'linear' option in interpolate.interp2d() doing 'bilinear' interpolation, or something else? A couple of months ago I posted a simple interpolation problem I was experiencing, which was solved by installing a newer version of scipy. This time I have a slightly more complicated example. Following are examples of a Matlab script and a Python script that should produce the same result, but do not. I think it may be due to the fact that interp2d() is not doing a bilinear interpolation... I am using scipy version '0.7.0.dev3442' on Mac OS X.

Matlab code:
------------------------------------------------------------
x = [4.0600 4.0700 4.0800 3.9500 4.1100 3.8400 4.1200 3.8600;
     4.0700 4.0800 4.0900 4.1100 4.1600 3.9200 3.9000 3.8700;
     4.0700 4.2900 4.3000 4.3100 3.9400 4.0300 4.1700 4.1500;
     4.0800 4.0300 4.1100 3.8800 4.0100 4.2000 4.0100 3.8400;
     4.0900 3.9600 4.1100 4.1200 4.1800 3.9000 4.2000 4.1700;
     4.1000 4.1100 4.0500 4.1300 4.1500 3.8900 3.9000 4.1800;
     4.1000 4.1100 4.0600 4.1400 4.1500 4.1000 3.9000 4.1900;
     4.1100 4.0600 4.1300 4.1500 4.3700 3.9000 4.1800 4.2000];
xi = [1.77 2.83 3.45 4.66 5.33 6.72 7.11];
yi = xi';
z = interp2(x,xi,yi,'linear')
------------------------------------------------------------

Python code:
------------------------------------------------------------
import numpy
from scipy.interpolate import interpolate
numpy.set_printoptions(precision=4,linewidth=120)
data = numpy.array(([4.0600,4.0700,4.0800,3.9500,4.1100,3.8400,4.1200,3.8600],
                    [4.0700,4.0800,4.0900,4.1100,4.1600,3.9200,3.9000,3.8700],
                    [4.0700,4.2900,4.3000,4.3100,3.9400,4.0300,4.1700,4.1500],
                    [4.0800,4.0300,4.1100,3.8800,4.0100,4.2000,4.0100,3.8400],
                    [4.0900,3.9600,4.1100,4.1200,4.1800,3.9000,4.2000,4.1700],
                    [4.1000,4.1100,4.0500,4.1300,4.1500,3.8900,3.9000,4.1800],
                    [4.1000,4.1100,4.0600,4.1400,4.1500,4.1000,3.9000,4.1900],
                    [4.1100,4.0600,4.1300,4.1500,4.3700,3.9000,4.1800,4.2000]))
#indices are different from Matlab because Matlab indices are 1-based
xi = numpy.array(([0.77,1.83,2.45,3.66,4.33,5.72,6.11]))
yi = xi
xrange = numpy.arange(8)
yrange = numpy.arange(8)
X,Y = numpy.meshgrid(xrange,yrange)
outgrid = interpolate.interp2d(X,Y,data,kind='linear')
z = outgrid(xi,yi)
print z
------------------------------------------------------------

Matlab results look like this:

z =
    4.0754 4.0860 4.0812 4.1229 4.0670 3.9369 3.9415
    4.2119 4.2626 4.2696 4.0789 3.9886 4.0925 4.1217
    4.1503 4.2074 4.1704 4.0208 4.0160 4.1004 4.0884
    4.0074 4.0885 4.0778 4.0937 4.0825 4.0980 4.1269
    4.0288 4.0765 4.1051 4.1542 4.0799 4.0438 4.1090
    4.1077 4.0662 4.0932 4.1456 4.1141 3.9395 3.9316
    4.1037 4.0740 4.1007 4.1629 4.1425 3.9720 3.9594

Python results look like this:

[[ 4.0233 4.0084 3.9881 4.1503 4.2896 3.8178 3.7936]
 [ 4.2695 4.3692 4.469  4.1137 3.7766 4.2214 4.2578]
 [ 4.1597 4.1808 4.0441 3.939  3.9272 3.9874 3.9296]
 [ 3.9316 4.0521 4.1188 4.1464 4.1767 4.3089 4.3022]
 [ 4.0464 4.0654 4.0979 4.1724 4.2222 4.0362 4.0943]
 [ 4.1178 4.0647 4.1023 4.105  4.0723 3.7996 3.8618]
 [ 4.1037 4.0776 4.1058 4.1629 4.1851 3.9047 3.9594]]

Thanks, Mike Hearne

On Oct 16, 2007, at 10:57 AM, Michael Hearne wrote: > I have a question regarding a warning I'm getting trying to do a 2D > interpolation. If I see the message below, does that mean I should > not trust my results? > > Warning: No more knots can be added because the number of B-spline coefficients > already exceeds the number of data points m. Probably causes: either > s or m too small.
(fp>s) > > > > > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc. > ------------------------------------------------------ > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtravs at gmail.com Wed Oct 17 15:17:09 2007 From: jtravs at gmail.com (John Travers) Date: Wed, 17 Oct 2007 20:17:09 +0100 Subject: [SciPy-user] interpolation question In-Reply-To: <32BB8C03-3075-4725-8D4B-DDEA85B40075@usgs.gov> References: <08533109-F44E-4EA0-BB3E-8D4293D4C7D8@usgs.gov> <32BB8C03-3075-4725-8D4B-DDEA85B40075@usgs.gov> Message-ID: <3a1077e70710171217i11b1ee80x6652d7f55dacd8a0@mail.gmail.com> Hi Michael, On 17/10/2007, Michael Hearne wrote: > > Is the 'linear' option in interpolate.interp2d() doing 'bilinear' > interpolation, or something else? Something else. It should be bilinear, but the underlying code is designed for scattered data and therefore has a more complicated algorithm. For regular grids (like yours) you should use RectBivariateSpline: ------------------------------ from scipy.interpolate import RectBivariateSpline # code in between outgrid = RectBivariateSpline(xrange,yrange,data,kx=1,ky=1) ------------------------------ This gives me the same results as your Matlab run. Note that the input x and y are 1d arrays rather than 2d.
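To make that concrete, here is a self-contained sketch of the same approach (the grid values below are an arbitrary stand-in for real data; kx=ky=1 is what makes the spline piecewise-linear, i.e. bilinear):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# A regular grid: two 1-d coordinate axes plus a 2-d array of values.
# The values form a simple linear ramp, purely for illustration.
x = np.arange(8.0)
y = np.arange(8.0)
data = np.arange(64.0).reshape(8, 8)

# kx=ky=1 gives degree-1 splines in each direction, i.e. bilinear interpolation
interp = RectBivariateSpline(x, y, data, kx=1, ky=1)

xi = np.array([0.77, 1.83, 2.45, 3.66, 4.33, 5.72, 6.11])
zi = interp(xi, xi)   # evaluated on the tensor-product grid xi x xi
print(zi.shape)       # (7, 7)
```

Because the ramp data is itself linear in each direction, bilinear interpolation reproduces it exactly between the knots, which makes a handy sanity check against Matlab's interp2(..., 'linear').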
RectBivariateSpline is generally much more robust for regular grids, and should be used unless you really do have scattered data. Note to other developers: RectBivariateSpline should be used in interp2d when possible. If I get time at the end of the month I'll do this. Cheers, John From tjhnson at gmail.com Wed Oct 17 15:31:31 2007 From: tjhnson at gmail.com (Tom Johnson) Date: Wed, 17 Oct 2007 12:31:31 -0700 Subject: [SciPy-user] Sparse Random Variables Message-ID: Hi, I have some stats questions. 1) Please excuse my ignorance here...but how does one use rv_discrete without initializing with the 'values' keyword? For example, if I use values=((1,2),(.3,.7)), then xk and pk will both be defined....and it seems strange that pmf() should even bother using the cdf to compute the probability (and isn't it slower?). 2) Suppose I want to store a log distribution, is this easily achievable? 3) I didn't do extensive tests, but it seemed like _entropy() was usually faster than entropy() even when the distribution had 1e6 possible values. Is there a reason that default calls to entropy use the vectorized function? It seems like most usage cases will be random variables with far fewer than 1e6 values...but perhaps not. 4) Also, for some reason entropy() doesn't always work on the first try... >>> from scipy import * >>> x = 1e3 >>> v = rand(x) >>> v = v/sum(x) >>> a = stats.rv_discrete(name='test', values=(range(x), v)) >>> a.entropy() >>> a.entropy() The first entropy() call raises an error. The second works.
The problem seems to be with: /home/me/lib/python/scipy/stats/distributions.py in entropy(self, *args, **kwds) -> 3794 place(output,cond0,self.vecentropy(*goodargs)) /home/me/lib/python/numpy/lib/function_base.py in __call__(self, *args) 940 941 if self.nout == 1: --> 942 _res = array(self.ufunc(*args),copy=False).astype(self.otypes[0]) 943 else: 944 _res = tuple([array(x,copy=False).astype(c) \ : function not supported for these types, and can't coerce safely to supported types 5) I really need to have random variables where the xk are tuples of the same type (integers xor floats xor strings ...) p( (0,0) ) = .25 p( (0,1) ) = .25 p( (1,0) ) = .25 p( (1,1) ) = .25 but a = stats.rv_discrete(name='test', values=(((0,0),(0,1),(1,0),(1,1)), [.25]*4)) yields /home/me/lib/python/numpy/core/fromnumeric.py in take(a, indices, axis, out, mode) 79 except AttributeError: 80 return _wrapit(a, 'take', indices, axis, out, mode) ---> 81 return take(indices, axis, out, mode) 82 83 : index out of range for array My initial thought would be that the xk could be anything that is hashable. For dictionary-based discrete distributions, I do use tuples...but I would like to start using scipy.stats. Am I fishing for too much, or in the wrong lake? Thanks. From mhearne at usgs.gov Wed Oct 17 16:46:31 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 17 Oct 2007 14:46:31 -0600 Subject: [SciPy-user] interpolation question In-Reply-To: <3a1077e70710171217i11b1ee80x6652d7f55dacd8a0@mail.gmail.com> References: <08533109-F44E-4EA0-BB3E-8D4293D4C7D8@usgs.gov> <32BB8C03-3075-4725-8D4B-DDEA85B40075@usgs.gov> <3a1077e70710171217i11b1ee80x6652d7f55dacd8a0@mail.gmail.com> Message-ID: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> John - Thanks very much, this is great! I do have one more question - when I use the code with a non-square grid, I get an error saying that my x data does not match the x dimension of my z data.
In looking at the following code, it seems like x is being compared with the number of rows of Z data, and y being compared with the number of columns: if not x.size == z.shape[0]: raise TypeError,\ 'x dimension of z must have same number of elements as x' if not y.size == z.shape[1]: raise TypeError,\ 'y dimension of z must have same number of elements as y' Isn't numpy row-major (unlike Matlab, which is column-major)? I.e., the first dimension is that of rows, the second is that of columns? Thanks, Mike Hearne On Oct 17, 2007, at 1:17 PM, John Travers wrote: > from scipy.interpolate import RectBivariateSpline ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg.net at gmail.com Wed Oct 17 17:42:03 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 17 Oct 2007 15:42:03 -0600 Subject: [SciPy-user] Is anyone using scipy on Xgrid clusters? In-Reply-To: <20071017132148.6ff99eff@columbia.edu> References: <20071017132148.6ff99eff@columbia.edu> Message-ID: <6ce0ac130710171442v1ee458c7mecad03fd43def6d7@mail.gmail.com> I have used XGrid with many different python libraries, including scipy. But, from the recent posts about these problems on the XGrid list, it looks like the problems have been solved and were simply related to issues with the python installation. Just an aside, if you are using Python with XGrid, you may want to look at PyXG: pyxg.scipy.org Cheers, Brian Disclaimer: I am the lead dev of pyxg :) On 10/17/07, Lev Givon wrote: > Out of curiosity, has anyone on the list had occasion to successfully run > multiple Python jobs that use scipy (but not MPI) on Apple clusters with > Xgrid?
Several of my colleagues had recently begun to do so, but encountered some > curious issues regarding the parallel execution of jobs (which > admittedly might have more to do with Python/Xgrid interactions than with > scipy specifically); I would consequently be interested in learning of similar experiences. > > L.G. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From zunzun at zunzun.com Thu Oct 18 05:40:08 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 18 Oct 2007 05:40:08 -0400 Subject: [SciPy-user] ANN: numpy/scipy curve and surface fitting web site tutorial Message-ID: <20071018094008.GA2660@zunzun.com> I have completed a new curve and surface fitting web site tutorial, the URL is: http://wiki.pylonshq.com/display/pylonscommunity/Pylons+Curve+and+Surface+Fitting+Tutorial Of course, numpy and scipy are required :) James Phillips From rgold at lsw.uni-heidelberg.de Thu Oct 18 12:35:49 2007 From: rgold at lsw.uni-heidelberg.de (rgold at lsw.uni-heidelberg.de) Date: Thu, 18 Oct 2007 18:35:49 +0200 (CEST) Subject: [SciPy-user] Integrating Array Message-ID: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> Hi everybody, I need to integrate a set of data points stored in a 1-dim array. That means I can only use scipy.integrate.simps and scipy.integrate.trapz, but these methods are not as accurate as the function-based routines like scipy.integrate.quadrature, which efficiently use more samples where the integrand oscillates faster, etc. The problem is that I need a high accuracy because the integrand IS oscillating fast! I already interpolated the data using scipy.interpolate.splrep, but interestingly it turns out that Simpson's rule now produces NaNs and I am left with good old trapz! Is there a way of implementing quadrature for data-like-arrays? Still I could try romberg.
But then I have to interpolate exactly in such a way that the "number of samples = (a positive power of 2) + 1" (equally spaced if I remember correctly). Is this the best way or does anyone have another idea? Thanks in advance! Roman From julien.hillairet at gmail.com Thu Oct 18 12:47:43 2007 From: julien.hillairet at gmail.com (Julien Hillairet) Date: Thu, 18 Oct 2007 18:47:43 +0200 Subject: [SciPy-user] Integrating Array In-Reply-To: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> References: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> Message-ID: <8742df0c0710180947w2d3dc438oed07272eb9fb91a1@mail.gmail.com> > > The problem is that I need a high accuracy because the integrand IS > oscillating fast! [...] > Is this the best way or does anyone have another idea? > Hello, Do you have an analytic expression of your function? If yes, due to the fact that it oscillates fast, maybe an asymptotic expansion of your integral could give you a good approximation? Best regards, JH -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Thu Oct 18 13:15:26 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 18 Oct 2007 13:15:26 -0400 Subject: [SciPy-user] Integrating Array In-Reply-To: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> References: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> Message-ID: On 18/10/2007, rgold at lsw.uni-heidelberg.de wrote: > I need to integrate a set of data points stored in a 1-dim array. That > means I can only use scipy.integrate.simps and scipy.integrate.trapz but > these methods are not as accurate as the routines for function-input like > scipy.integrate.quadrature which efficiently use more samples where the > integrand oscillates faster,etc. > The problem is that I need a high accuracy because the integrand IS > oscillating fast!
> > I already interpolated the data using the scipy.interpolate.splrep but > interestingly it turns out that simpsons rule now produces NANs and I am > left with good old trapz! > > Is there a way of implementing quadrature for data-like-arrays? > Still I could try romberg. But then I have to interpolate exactly in such > a way that the "number of samples = positive power of 2) +1" (equally > spaced if I remember correctly). > > Is this the best way or does anyone have another idea? The reason the clever routines (e.g., scipy.integrate.quad) are able to produce high accuracy is because they are able to sample your function more densely in regions where it is more complicated. Between samples, they generally make some very simple assumption about the function's behaviour - a polynomial of low order, usually. If you have already sampled your function and are not willing to sample it further, there's not much that can be done to improve on simps. If your function were very smooth, you could try very high order polynomials, but this is unlikely to help you much. The real problem is that you only know your function at some points, and interpolation is the best guess you have at what happens between them. The short answer is if your function oscillates between points, you need more points. The best and easiest choice is to simply use quad on your function, and let it decide where to sample. From your question, you presumably have a good reason not to do that. If it's that your evaluation routine is much more efficient when you give it a whole vector to evaluate at once, there are choices: * scipy.integrate.quadrature is based on Gaussian quadrature. The idea here is to evaluate your function at carefully-chosen abscissas, then use carefully-chosen weights to produce a quadrature rule. The abscissas and weights can be chosen so that you get exact integrals for polynomials up to a certain order times a weight function. 
Anyway, it's good, and it evaluates your function at a vector of points at once. scipy.integrate.quadrature evaluates it on denser and denser grids until the result appears to converge. (If you need special calling conventions for your function, you can look into the code of scipy.integrate.quadrature, it's not very complicated.) * If your function can only be evaluated at an evenly-spaced grid, you can use scipy.integrate.simps on finer and finer grids until it appears to have converged. If you have slightly more flexibility, you can subdivide individual grid steps separately (so that regions where the function is more complicated get evaluated more frequently). * On the remote chance that you are getting Fourier coefficients and wanting to integrate the time-domain function, some clever algebra involving the inverse Fourier transform lets you produce integrals directly (using an IFFT if desired). We may be more able to help you if you tell us more about your problem (for example why you're not using quad). Good luck, Anne From zyzhu2000 at gmail.com Thu Oct 18 20:08:54 2007 From: zyzhu2000 at gmail.com (Geoffrey Zhu) Date: Thu, 18 Oct 2007 19:08:54 -0500 Subject: [SciPy-user] Minimize a bounded convex function Message-ID: Hi Everyone, I am trying to minimize a convex function f(x) that is not defined when x<=0 or when x>=1.5. I am trying to use scipy.optimize.golden(), but when the minimal point is very close to zero, golden() will call my target function on the undefined region. Is there any way to ensure that the optimizer will only look for answers within the domain? 
Thanks, Geoffrey From robert.kern at gmail.com Thu Oct 18 20:31:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Oct 2007 19:31:18 -0500 Subject: [SciPy-user] Minimize a bounded convex function In-Reply-To: References: Message-ID: <4717FAD6.6090205@gmail.com> Geoffrey Zhu wrote: > Hi Everyone, > > I am trying to minimize a convex function f(x) that is not defined > when x<=0 or when x>=1.5. I am trying to use scipy.optimize.golden(), > but when the minimal point is very close to zero, golden() will call > my target function on the undefined region. Is there any way to ensure > that the optimizer will only look for answers within the domain? Hmm. When you have an open interval like that, (0, 1.5) rather than [0, 1.5], you will pretty much have to find an epsilon that suits your problem and use the bounds [0+eps, 1.5-eps]. With golden() and brent(), you can use the argument brack=(0+eps, 1.0, 1.5-eps). The 1.0 isn't special, you can pick anything in between since your problem is convex. For fminbound(), you can use x1=0+eps, x2=1.5-eps. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Thu Oct 18 22:04:22 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 18 Oct 2007 22:04:22 -0400 Subject: [SciPy-user] Minimize a bounded convex function In-Reply-To: References: Message-ID: On 18/10/2007, Geoffrey Zhu wrote: > I am trying to minimize a convex function f(x) that is not defined > when x<=0 or when x>=1.5. I am trying to use scipy.optimize.golden(), > but when the minimal point is very close to zero, golden() will call > my target function on the undefined region. Is there any way to ensure > that the optimizer will only look for answers within the domain? There's one open interval that many optimizers can handle, namely the real line.
You can try transforming your x coordinate so that (for example) x = (tanh(w)+1)*0.75 and then minimizing as a function of w. This probably doesn't preserve convexity (though it should not introduce any local minima), but it means that you can use unbounded minimization and avoid the endpoints. Anne From manuhack at gmail.com Fri Oct 19 02:29:31 2007 From: manuhack at gmail.com (Manu Hack) Date: Fri, 19 Oct 2007 02:29:31 -0400 Subject: [SciPy-user] Generate random samples from a non-standard distribution Message-ID: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> Hi, I've been googling around but I'm not sure how to generate random samples given the density function of a random variable. I was looking around something like scipy.stats.rv_continuous but couldn't find anything useful. Also, if one wants to do Markov Chain Monte Carlo, is that possible with scipy, and if not, what would be a workaround (if I still want to do it in Python)?
I heard something like openbugs but seems that debian > doesn't have a package. > > Any input is appreciated. Thanks! > > Manu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Oct 19 03:22:59 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 19 Oct 2007 02:22:59 -0500 Subject: [SciPy-user] Generate random samples from a non-standard distribution In-Reply-To: References: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> Message-ID: <47185B53.8030604@gmail.com> Matthieu Brucher wrote: > Hi, > > Scipy proposes a lot of random distribution in the stats module, just > use them ;) He's asking about the general case, where you have a PDF that isn't one of the standard ones. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Oct 19 03:27:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 19 Oct 2007 02:27:55 -0500 Subject: [SciPy-user] Generate random samples from a non-standard distribution In-Reply-To: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> References: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> Message-ID: <47185C7B.5080305@gmail.com> Manu Hack wrote: > Hi, > > I've been goolging around but not sure how to generate random samples > given the density function of a random variable. I was looking around > something like scipy.stats.rv_continuous but couldn't find anything > useful. No, we don't have any general samplers like that. It is difficult to construct worthwhile general samplers. Most of them involve knowing at least a little bit of information about the distribution. 
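For instance, if all you know is a finite support [a, b] and an upper bound M on the density, plain rejection sampling already works. A rough sketch (the triangular density at the end is purely a stand-in for a non-standard pdf):

```python
import numpy as np

def rejection_sample(pdf, a, b, M, n):
    """Draw n samples from pdf on [a, b], assuming pdf(x) <= M there."""
    out = []
    while len(out) < n:
        x = np.random.uniform(a, b, size=n)
        u = np.random.uniform(0.0, M, size=n)
        out.extend(x[u < pdf(x)])   # keep the points that fall under the curve
    return np.array(out[:n])

np.random.seed(0)
# stand-in density: triangular on [0, 1], p(x) = 2*(1 - x), bounded by M = 2
samples = rejection_sample(lambda x: 2.0 * (1.0 - x), 0.0, 1.0, 2.0, 10000)
print(samples.mean())   # this density's mean is 1/3
```

The catch is efficiency: the acceptance rate is 1/(M*(b-a)), so a sharply peaked density wastes most of the draws, which is exactly why the more sophisticated constructions are worth studying.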
I like this group's approach: http://statmath.wu-wien.ac.at/projects/arvag/index.html If you would like to implement some of their strategies for scipy (from the original literature, not their GPLed code), I'd be more than happy to help you with such an endeavor. > Also, if one wants to do Markov Chain Monte Carlo, is scipy possible > and if not, what should be a workaround (if still want to do in > Python). I heard something like openbugs but seems that debian > doesn't have a package. http://code.google.com/p/pymc/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Fri Oct 19 04:02:41 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 19 Oct 2007 04:02:41 -0400 Subject: [SciPy-user] Generate random samples from a non-standard distribution In-Reply-To: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> References: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> Message-ID: On 19/10/2007, Manu Hack wrote: > I've been goolging around but not sure how to generate random samples > given the density function of a random variable. I was looking around > something like scipy.stats.rv_continuous but couldn't find anything > useful. There's always the everything-looks-like-a-nail approach: you can implement the CDF by numerical integration of the PDF. You can implement the inverse CDF by root-finding on the CDF. You can generate points by generating uniform random variates and running them through the inverse CDF. More reasonably, you can generate a lot of samples and produce a spline that approximates the inverse CDF, which you can then evaluate fairly efficiently. This isn't terribly *good*, mind you, the deviations from your distribution may be rather peculiar, but it's better than nothing. 
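Concretely, a minimal version of that recipe might look like the following (np.interp plays the role of the interpolated inverse CDF, and the truncated Gaussian density is just a stand-in for whatever pdf you actually have):

```python
import numpy as np

def make_sampler(pdf, a, b, n_grid=1000):
    """Build an approximate inverse-CDF sampler for a pdf known pointwise."""
    x = np.linspace(a, b, n_grid)
    p = pdf(x)
    # cumulative trapezoidal integration gives an (unnormalized) CDF on the grid
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(x))))
    cdf /= cdf[-1]   # normalize so the CDF runs from 0 to 1
    # invert by linear interpolation: read x off as a function of the CDF
    return lambda n: np.interp(np.random.uniform(size=n), cdf, x)

np.random.seed(0)
# stand-in density: an unnormalized Gaussian truncated to [-4, 4]
sample = make_sampler(lambda x: np.exp(-0.5 * x ** 2), -4.0, 4.0)
draws = sample(20000)
print(draws.mean(), draws.std())   # should land near 0 and 1 for this pdf
```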
Anne From haase at msg.ucsf.edu Fri Oct 19 04:44:52 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 19 Oct 2007 10:44:52 +0200 Subject: [SciPy-user] Integrating Array In-Reply-To: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> References: <34353.147.142.111.40.1192725349.squirrel@srv0.lsw.uni-heidelberg.de> Message-ID: Hi, I'm definitely no expert - but I seem to remember that once you have a spline interpolation of your original data, a spline-based integral essentially comes with it. Look through the spline-related functions in scipy to see if I'm right... Hope to hear back. Cheers, Sebastian Haase On 10/18/07, rgold at lsw.uni-heidelberg.de wrote: > Hi everybody, > > I need to integrate a set of data points stored in a 1-dim array. That > means I can only use scipy.integrate.simps and scipy.integrate.trapz but > these methods are not as accurate as the routines for function-input like > scipy.integrate.quadrature which efficiently use more samples where the > integrand oscillates faster,etc. > The problem is that I need a high accuracy because the integrand IS > oscillating fast! > > I already interpolated the data using the scipy.interpolate.splrep but > interestingly it turns out that simpsons rule now produces NANs and I am > left with good old trapz! > > Is there a way of implementing quadrature for data-like-arrays? > Still I could try romberg. But then I have to interpolate exactly in such > a way that the "number of samples = positive power of 2) +1" (equally > spaced if I remember correctly). > > Is this the best way or does anyone have another idea? > > Thanks in advance!
> Roman > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From dmitrey.kroshko at scipy.org Fri Oct 19 04:58:23 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 19 Oct 2007 11:58:23 +0300 Subject: [SciPy-user] how to get to know which LAPACK version is used (by scipy)? Message-ID: <471871AF.1010906@scipy.org> hi all, how to get to know which LAPACK version is used (by numpy or scipy)? Regards, Dmitrey From rgold at lsw.uni-heidelberg.de Fri Oct 19 05:37:05 2007 From: rgold at lsw.uni-heidelberg.de (rgold at lsw.uni-heidelberg.de) Date: Fri, 19 Oct 2007 11:37:05 +0200 (CEST) Subject: [SciPy-user] Integrating Array Message-ID: <50105.147.142.111.40.1192786625.squirrel@srv0.lsw.uni-heidelberg.de> First of all, thanks for the extensive and quick answers! Unfortunately, I do not have an analytic expression of my integrand! That's why I can't use quad! More precisely, I perform the integral over a product of two arrays, of which one is the result of a function calculating Legendre Polynomials, but the other array is pure non-analytic data. A few months ago I posted here about my problem when calculating the Legendre Polynomials using scipy.special.legendre(), and thanks to your help and the help of others I have written a function PLs(l,x) which calculates for each given value x between -1 and 1 the value of the Legendre Polynomial l, and now everything is stable AND accurate (except for my integration)! Actually I DO KNOW more about my integrand (but only about the one part with the Legendre Polynomials); that's why I started interpolating (@Anne: You are completely right about the other part (=the data-array)! I just don't have enough points, end of story! I think that the error comes from the Legendre Polys :-). This morning I've seen that the interpolation indeed improves the accuracy, although I am using trapz().
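As a side note on Sebastian's hint: once you have the splrep fit, scipy.interpolate.splint integrates the fitted spline exactly, so no quadrature rule is needed on top of it. A minimal sketch on synthetic oscillating data (sin(5x) standing in for the real integrand):

```python
import numpy as np
from scipy.interpolate import splrep, splint

# synthetic oscillating samples standing in for the measured data
x = np.linspace(0.0, np.pi, 200)
y = np.sin(5.0 * x)

tck = splrep(x, y)                  # interpolating cubic spline (s=0 by default)
integral = splint(0.0, np.pi, tck)  # exact integral of the fitted spline
print(integral)   # the true integral of sin(5x) over [0, pi] is 2/5
```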
Does any of you have an explanation for the failure of the simps routine (it happens only when I interpolate; it works fine for the non-interpolated data :-/)? Today I am trying to interpolate in such a way that I can use romberg! Let's see what will happen... Thanks again! Roman > The reason the clever routines (e.g., scipy.integrate.quad) are able > to produce high accuracy is because they are able to sample your > function more densely in regions where it is more complicated. Between > samples, they generally make some very simple assumption about the > function's behaviour - a polynomial of low order, usually. If you have > already sampled your function and are not willing to sample it > further, there's not much that can be done to improve on simps. If > your function were very smooth, you could try very high order > polynomials, but this is unlikely to help you much. The real problem > is that you only know your function at some points, and interpolation > is the best guess you have at what happens between them. The short > answer is if your function oscillates between points, you need more > points. > > The best and easiest choice is to simply use quad on your function, > and let it decide where to sample. From your question, you presumably > have a good reason not to do that. If it's that your evaluation > routine is much more efficient when you give it a whole vector to > evaluate at once, there are choices: > > * scipy.integrate.quadrature is based on Gaussian quadrature. The idea > here is to evaluate your function at carefully-chosen abscissas, then > use carefully-chosen weights to produce a quadrature rule. The > abscissas and weights can be chosen so that you get exact integrals > for polynomials up to a certain order times a weight function. Anyway, > it's good, and it evaluates your function at a vector of points at > once. scipy.integrate.quadrature evaluates it on denser and denser > grids until the result appears to converge.
(If you need special > calling conventions for your function, you can look into the code of > scipy.integrate.quadrature, it's not very complicated.) > > * If your function can only be evaluated at an evenly-spaced grid, you > can use scipy.integrate.simps on finer and finer grids until it > appears to have converged. If you have slightly more flexibility, you > can subdivide individual grid steps separately (so that regions where > the function is more complicated get evaluated more frequently). > > * On the remote chance that you are getting Fourier coefficients and > wanting to integrate the time-domain function, some clever algebra > involving the inverse Fourier transform lets you produce integrals > directly (using an IFFT if desired). > > We may be more able to help you if you tell us more about your problem > (for example why you're not using quad). > > Good luck, > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From wollez at gmx.net Fri Oct 19 07:07:16 2007 From: wollez at gmx.net (WolfgangZ) Date: Fri, 19 Oct 2007 13:07:16 +0200 Subject: [SciPy-user] unstructured mesh handling with scipy? Message-ID: Hi, I need to handle unstructured grids (2D), but I'm not sure how to do that. The source is a list of coordinates, and I also have the connectivity (which nodes form a triangle). In the next step I need to extract a rectangular part of that mesh; this means that elements that are cut through need to be triangulated. Using such meshes for visualisation would also be nice (read in a list of values and create an image). Has anybody done something similar with python/scipy and could give me some advice? I'm also open to "external" tools to perform that task.
I've already tried to do that with vtk (python bindings), it works already partially (I can display the extracted mesh but somehow the coordinates are transformed to another coordinate system and I have no clue how to get that back to my input coordinate system). Any suggestions are welcome. Regards Wolfgang From lev at columbia.edu Fri Oct 19 08:43:21 2007 From: lev at columbia.edu (Lev Givon) Date: Fri, 19 Oct 2007 08:43:21 -0400 Subject: [SciPy-user] how to get to know which LAPACK version is used (by scipy)? In-Reply-To: <471871AF.1010906@scipy.org> References: <471871AF.1010906@scipy.org> Message-ID: <20071019084321.5895bd2c@columbia.edu> Received from dmitrey on Fri, Oct 19, 2007 at 11:58:23 AM EDT: > hi all, > how to get to know which LAPACK version is used (by numpy or scipy)? > Regads, Dmitrey If you are running scipy on a Linux system, you can determine what LAPACK libraries the scipy shared libraries are linked to as follows. On my machine, for instance, the output below indicates that LAPACK 3.0 is being used: [lev at localhost ~]$ ldd /usr/lib/python2.5/site-packages/scipy/linalg/flapack.so linux-gate.so.1 => (0xffffe000) liblapack.so.3.0 => /usr/lib/liblapack.so.3.0 (0xb784c000) libblas.so.1.1 => /usr/lib/libblas.so.1.1 (0xb77b7000) libpython2.5.so.1.0 => /usr/lib/libpython2.5.so.1.0 (0xb767f000) libgfortran.so.2 => /usr/lib/libgfortran.so.2 (0xb75e1000) libm.so.6 => /lib/i686/libm.so.6 (0xb75bb000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb75af000) libc.so.6 => /lib/i686/libc.so.6 (0xb746f000) libgfortran.so.1 => /usr/lib/libgfortran.so.1 (0xb73f1000) libpthread.so.0 => /lib/i686/libpthread.so.0 (0xb73da000) libdl.so.2 => /lib/libdl.so.2 (0xb73d6000) libutil.so.1 => /lib/libutil.so.1 (0xb73d1000) /lib/ld-linux.so.2 (0x80000000) (The scipy files may be installed in different locations on your system, of course.) L.G. 
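A complementary check from inside the interpreter: numpy and scipy record their build-time configuration, which shows what BLAS/LAPACK they were compiled against (though not which shared library actually gets loaded at run time, so ldd remains the authoritative answer on Linux):

```python
import numpy
import scipy

# print the BLAS/LAPACK information recorded when the packages were built
numpy.show_config()
scipy.show_config()
```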
From prabhu at aero.iitb.ac.in Fri Oct 19 09:28:22 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Fri, 19 Oct 2007 18:58:22 +0530 Subject: [SciPy-user] unstructured mesh handling with scipy? In-Reply-To: References: Message-ID: <18200.45302.479989.919156@prpc.aero.iitb.ac.in> >>>>> "WolfgangZ" == WolfgangZ writes: WolfgangZ> I've already tried to do that with vtk (python WolfgangZ> bindings), it works already partially (I can display WolfgangZ> the extracted mesh but somehow the coordinates are WolfgangZ> transformed to another coordinate system and I have no WolfgangZ> clue how to get that back to my input coordinate WolfgangZ> system). You can use tvtk/mayavi2 for this. Here are examples of unstructured grids that you can take a look at: A polygonal data set with pure TVTK: https://svn.enthought.com/enthought/browser/branches/enthought.tvtk_2.0/examples/tiny_mesh.py A 3D unstructured grid where data is generated with tvtk and viewed with Mayavi2: https://svn.enthought.com/enthought/browser/branches/enthought.mayavi_2.0/examples/unstructured_grid.py The nice thing about both is that they integrate well with numpy arrays and use views of your numpy array data. For more information on the tools see: https://svn.enthought.com/enthought/wiki/MayaVi https://svn.enthought.com/enthought/wiki/TVTK HTH, Prabhu From zyzhu2000 at gmail.com Fri Oct 19 10:11:16 2007 From: zyzhu2000 at gmail.com (Geoffrey Zhu) Date: Fri, 19 Oct 2007 09:11:16 -0500 Subject: [SciPy-user] Minimize a bounded convex function In-Reply-To: <4717FAD6.6090205@gmail.com> References: <4717FAD6.6090205@gmail.com> Message-ID: On 10/18/07, Robert Kern wrote: > Geoffrey Zhu wrote: > > Hi Everyone, > > > > I am trying to minimize a convex function f(x) that is not defined > > when x<=0 or when x>=1.5. I am trying to use scipy.optimize.golden(), > > but when the minimal point is very close to zero, golden() will call > > my target function on the undefined region. 
Is there any way to ensure > > that the optimizer will only look for answers within the domain? > > Hmm. When you have an open interval like that, (0, 1.5) rather than [0, 1.5], > you will pretty much have to find an epsilon that suits your problem and use the > bounds [0+eps, 1.5-eps]. With golden() and brent(), you can use the argument > brack=(0+eps, 1.0, 1.5-eps). The 1.0 isn't special; you can pick anything in > between since your problem is convex. For fminbound(), you can use x1=0+eps, > x2=1.5-eps. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi Robert, This works except for one problem. Convexity does not ensure that f(1.0) is smaller than f(0+eps) and f(1.5-eps). Thanks, Geoffrey From ryanlists at gmail.com Fri Oct 19 10:13:07 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 19 Oct 2007 09:13:07 -0500 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy Message-ID: I am planning to upgrade to Ubuntu 7.10 Gutsy Gibbon in the next week or so. Is anybody running it with SciPy? Any big issues I should know about? Thanks, Ryan From zyzhu2000 at gmail.com Fri Oct 19 12:09:34 2007 From: zyzhu2000 at gmail.com (Geoffrey Zhu) Date: Fri, 19 Oct 2007 11:09:34 -0500 Subject: [SciPy-user] Minimize a bounded convex function In-Reply-To: References: Message-ID: On 10/18/07, Anne Archibald wrote: > On 18/10/2007, Geoffrey Zhu wrote: > > > I am trying to minimize a convex function f(x) that is not defined > > when x<=0 or when x>=1.5. I am trying to use scipy.optimize.golden(), > > but when the minimal point is very close to zero, golden() will call > > my target function on the undefined region.
Is there any way to ensure > > that the optimizer will only look for answers within the domain? > > There's one open interval that many optimizers can handle, namely the > real line. You can try transforming your x coordinate so that (for > example) x = (tanh(w)+1)*0.75 and then minimizing as a function of w. > This probably doesn't preserve convexity (though it should not > introduce any local minima), but it means that you can use unbounded > minimization and avoid the endpoints. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Thanks a lot, Anne. This should work. Fortunately golden() does not require convexity. From sgarcia at olfac.univ-lyon1.fr Fri Oct 19 12:19:44 2007 From: sgarcia at olfac.univ-lyon1.fr (sgarcia) Date: Fri, 19 Oct 2007 18:19:44 +0200 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: References: Message-ID: <4718D920.1040902@olfac.univ-lyon1.fr> Hi, I did it this morning and no problem for the moment scipy 0.5.2 numpy 1.0.3 matplotlib 0.90.1 good luck, servers are very slow Sam Ryan Krauss a ?crit : > I am planning to upgrade to Ubuntu 7.10 Gutsy Gibbon in the next week > or so. Is anybody running it with SciPy? Any big issues I should > know about? > > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. 
CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logistique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE Tél : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From marcos.capistran at gmail.com Fri Oct 19 12:59:59 2007 From: marcos.capistran at gmail.com (Marcos Capistran) Date: Fri, 19 Oct 2007 11:59:59 -0500 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: <4718D920.1040902@olfac.univ-lyon1.fr> References: <4718D920.1040902@olfac.univ-lyon1.fr> Message-ID: I upgraded already. scipy+numpy+ipython+matplotlib+pyx+genetic are working fine so far Greetings On 10/19/07, sgarcia wrote: > Hi, > I did it this morning and no problem for the moment > scipy 0.5.2 > numpy 1.0.3 > matplotlib 0.90.1 > > good luck, servers are very slow > > Sam > > Ryan Krauss a écrit : > > I am planning to upgrade to Ubuntu 7.10 Gutsy Gibbon in the next week > > or so. Is anybody running it with SciPy? Any big issues I should > > know about? > > > > Thanks, > > > > Ryan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > Samuel Garcia > Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. > CNRS - UMR5020 - Universite Claude Bernard LYON 1 > Equipe logistique et technique > 50, avenue Tony Garnier > 69366 LYON Cedex 07 > FRANCE > Tél : 04 37 28 74 64 > Fax : 04 37 28 76 01 > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Marcos Aurelio Capistrán Ocampo CIMAT A. P. 402 Jalisco S/N, Valenciana Guanajuato, GTO 36240 Tel: (473) 73 2 71 55 Ext.
49640 From dlenski at gmail.com Fri Oct 19 15:14:52 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Fri, 19 Oct 2007 19:14:52 +0000 (UTC) Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy References: Message-ID: On Fri, 19 Oct 2007 09:13:07 -0500, Ryan Krauss wrote: > I am planning to upgrade to Ubuntu 7.10 Gutsy Gibbon in the next week or > so. Is anybody running it with SciPy? Any big issues I should know > about? I've been using Scipy under pre-release Gutsy for several months, with absolutely no problems. The only (slightly) related issue I've had is that python-mode is broken in Emacs 22 on one of my computers :( Dan From dlenski at gmail.com Fri Oct 19 19:46:15 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Fri, 19 Oct 2007 23:46:15 +0000 (UTC) Subject: [SciPy-user] pulse width/density modulation with SciPy? Message-ID: Hi, Does anyone know if there's a SciPy package to do pulse-width modulation and pulse-density modulation of digital signals? Right now, I'm using some looping Python code that is slow and not very exact. Thanks! Dan Lenski From ryanlists at gmail.com Sat Oct 20 00:51:38 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 19 Oct 2007 23:51:38 -0500 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: References: Message-ID: That could be fairly serious for me. I can't live without python-mode (well, I could probably live...). Any ideas why this is? Do you have python-mode working on any Gutsy computer? On 10/19/07, Daniel Lenski wrote: > On Fri, 19 Oct 2007 09:13:07 -0500, Ryan Krauss wrote: > > I am planning to upgrade to Ubuntu 7.10 Gutsy Gibbon in the next week or > > so. Is anybody running it with SciPy? Any big issues I should know > > about? > > I've been using Scipy under pre-release Gutsy for several months, with > absolutely no problems. 
> > The only (slightly) related issue I've had is that python-mode is broken > in Emacs 22 on one of my computers :( > > Dan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From akumar at ee.iitm.ac.in Sat Oct 20 01:55:05 2007 From: akumar at ee.iitm.ac.in (Kumar Appaiah) Date: Sat, 20 Oct 2007 11:25:05 +0530 Subject: [SciPy-user] pulse width/density modulation with SciPy? In-Reply-To: References: Message-ID: On 20/10/2007, Daniel Lenski wrote: > Hi, > Does anyone know if there's a SciPy package to do pulse-width modulation > and pulse-density modulation of digital signals? Not any that I am aware of. > Right now, I'm using some looping Python code that is slow and not very > exact. There is no communication module yet for SciPy. I think we can start thinking on the lines of a scipy.comm module in the long run, but right now, you are on your own for communication-related stuff. Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600036 From dmitrey.kroshko at scipy.org Sat Oct 20 08:55:10 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 20 Oct 2007 15:55:10 +0300 Subject: [SciPy-user] lower triangle matrix part: indexes of non-zeros Message-ID: <4719FAAE.3000104@scipy.org> Hi all, I have a square symmetric matrix H (Hessian matrix). I need to obtain 3 vectors ind_0, ind_1, vals: vals[k] = H[ind_0[k], ind_1[k]] All those ind_0, ind_1, vals must be related to the lower triangle part of H only. What's the best way to solve the problem? I'm using H.nonzero(), but it yields the whole of H, while I need only the lower triangle. Regards, D.
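One way to get the three vectors dmitrey describes is to zero out the upper triangle first and then take the nonzeros; a minimal numpy sketch (the helper name is made up for illustration):

```python
import numpy as np

def lower_triangle_nonzeros(H):
    """ind_0, ind_1, vals for the nonzero entries on and below the main
    diagonal of a square matrix H, with vals[k] = H[ind_0[k], ind_1[k]]."""
    L = np.tril(H)              # keep only the lower triangle (diagonal included)
    ind_0, ind_1 = L.nonzero()  # row and column indices of the remaining nonzeros
    return ind_0, ind_1, L[ind_0, ind_1]

H = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 5.0],
              [0.0, 5.0, 3.0]])
ind_0, ind_1, vals = lower_triangle_nonzeros(H)
print(ind_0.tolist(), ind_1.tolist(), vals.tolist())
# -> [0, 1, 2, 2] [0, 0, 1, 2] [1.0, 2.0, 5.0, 3.0]
```

For a symmetric H this loses no information, since every off-diagonal entry above the diagonal has a mirror below it.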
From nwagner at iam.uni-stuttgart.de Sat Oct 20 10:02:13 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 20 Oct 2007 16:02:13 +0200 Subject: [SciPy-user] lower triangle matrix part: indexes of non-zeros In-Reply-To: <4719FAAE.3000104@scipy.org> References: <4719FAAE.3000104@scipy.org> Message-ID: On Sat, 20 Oct 2007 15:55:10 +0300 dmitrey wrote: > Hi all, > I have a square symmetric matrix H (Hesse matrix). > I need to obtain 3 vectors ind_0, ind_1, vals: > vals[k] = H[ind_0[k], ind_1[k]] > All those ind_0, ind_1, vals must be related to lower >triangle part of H > only. > What's the best way to solve the problem? > I'm usind H.nonzero() but it yields whole H, while I >need only lower > triangle. > > Regards, D. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Help on function tril in module numpy.lib.twodim_base: tril(m, k=0) returns the elements on and below the k-th diagonal of m. k=0 is the main diagonal, k > 0 is above and k < 0 is below the main diagonal. Nils From dlenski at gmail.com Sat Oct 20 11:28:52 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Sat, 20 Oct 2007 15:28:52 +0000 (UTC) Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy References: Message-ID: On Fri, 19 Oct 2007 23:51:38 -0500, Ryan Krauss wrote: > That could be fairly serious for me. I can't live without python-mode > (well, I could probably live...). Any ideas why this is? Do you have > python-mode working on any Gutsy computer? Yeah, I'm with you on the python-mode dependency. I haven't figured it out yet... I've just temporarily downgraded to Emacs 21 which is fine. I have two other computers running Gutsy on which Emacs22+python-mode works fine, though. Dan From zunzun at zunzun.com Sat Oct 20 11:43:21 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sat, 20 Oct 2007 11:43:21 -0400 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. 
SciPy emacs In-Reply-To: References: Message-ID: <20071020154321.GA6202@zunzun.com> On Sat, Oct 20, 2007 at 03:28:52PM +0000, Daniel Lenski wrote: > > Yeah, I'm with you on the python-mode dependency. I haven't figured it out > yet... I've just temporarily downgraded to Emacs 21 which is fine. I have > two other computers running Gutsy on which Emacs22+python-mode works fine, > though. Great Googly Moogly to the rescue: googling for python-mode gutsy emacs yields https://bugs.launchpad.net/debian/+source/python-mode/+bug/131355 https://lists.ubuntu.com/archives/gutsy-changes/2007-September/007650.html James P.S. My grandfather would actually exclaim "Great Googly Moogly!" when I was a child.. From cookedm at physics.mcmaster.ca Sat Oct 20 13:35:55 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 20 Oct 2007 13:35:55 -0400 Subject: [SciPy-user] Generate random samples from a non-standard distribution In-Reply-To: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> (Manu Hack's message of "Fri\, 19 Oct 2007 02\:29\:31 -0400") References: <50af02ed0710182329y771ef33fla5966605fd1f154d@mail.gmail.com> Message-ID: "Manu Hack" writes: > Hi, > > I've been goolging around but not sure how to generate random samples > given the density function of a random variable. I was looking around > something like scipy.stats.rv_continuous but couldn't find anything > useful. > > Also, if one wants to do Markov Chain Monte Carlo, is scipy possible > and if not, what should be a workaround (if still want to do in > Python). I heard something like openbugs but seems that debian > doesn't have a package. Look at scipy.sandbox.montecarlo (you'll have to add a line with 'montecarlo' on it to scipy/sandbox/enabled_packages.txt to compile it, and copy some files from numpy/random first). For instance, scipy.sandbox.montecarlo.intsampler takes a list or array of weights, and can sample from those. 
Here's a clip from the docstring: >>> table = [10, 15, 20] #representing this pmf: #x 0 1 2 #p(x) 10/45 15/45 20/45 #The output will be something like: >>> sampler = intsampler(table) >>> sampler.sample(10) array([c, b, b, b, b, b, c, b, b, b], dtype=object) I use it by sampling from my PDF on a grid, and using those points as the PMF. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From dahl.joachim at gmail.com Sat Oct 20 17:58:42 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Sat, 20 Oct 2007 23:58:42 +0200 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: References: Message-ID: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> Emacs22 has python mode integrated, and having python-mode installed as a separate package causes problems. Removing the python-mode package seems to fix it for Emacs22. - joachim On 10/20/07, Daniel Lenski wrote: > > On Fri, 19 Oct 2007 23:51:38 -0500, Ryan Krauss wrote: > > > That could be fairly serious for me. I can't live without python-mode > > (well, I could probably live...). Any ideas why this is? Do you have > > python-mode working on any Gutsy computer? > > Yeah, I'm with you on the python-mode dependency. I haven't figured it > out > yet... I've just temporarily downgraded to Emacs 21 which is fine. I have > two other computers running Gutsy on which Emacs22+python-mode works fine, > though. > > Dan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dlenski at gmail.com Sat Oct 20 18:42:32 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Sat, 20 Oct 2007 22:42:32 +0000 (UTC) Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. 
SciPy References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> Message-ID: On Sat, 20 Oct 2007 23:58:42 +0200, Joachim Dahl wrote: > Emacs22 has python mode integrated, and having python-mode installed as > a separate > package causes problems. Removing the python-mode package seems to fix > it for Emacs22. > > - joachim Thanks, Joachim!!! That did the trick. I had to not only *remove* the python-mode package, but purge its configuration files as well. You can do that from the command line using: dpkg --purge python-mode For some reason unknown to me, the default behavior is to allow removed packages to leave config files in place. Dan From tjhnson at gmail.com Sun Oct 21 00:43:22 2007 From: tjhnson at gmail.com (Tom Johnson) Date: Sat, 20 Oct 2007 21:43:22 -0700 Subject: [SciPy-user] Sparse Random Variables In-Reply-To: References: Message-ID: Hi, I'm still looking for some help here.... On 10/17/07, Tom Johnson wrote: > > 4) Also, for some reason entropy() doesn't always work on the first try... > > >>> from scipy import * > >>> x = 1e3 > >>> v = rand(x) > >>> v = v/sum(x) > >>> a = stats.rv_discrete(name='test', values=(range(x), v)) > >>> a.entropy() > >>> a.entropy() > > The first entropy raises an error. The second works. The problem > seems to be with: > > /home/me/lib/python/scipy/stats/distributions.py in entropy(self, *args, **kwds) > -> 3794 place(output,cond0,self.vecentropy(*goodargs)) > > /home/me/lib/python/numpy/lib/function_base.py in __call__(self, *args) > 940 > 941 if self.nout == 1: > --> 942 _res = > array(self.ufunc(*args),copy=False).astype(self.otypes[0]) > 943 else: > 944 _res = tuple([array(x,copy=False).astype(c) \ > > : function not supported for these types, > and can't coerce safely to supported types > > Should I submit a bug? > > 5) I really need to have random variables where the xk are tuples of > the same type (integers xor floats xor strings ...) 
> p( (0,0) ) = .25 > p( (0,1) ) = .25 > p( (1,0) ) = .25 > p( (1,1) ) = .25 > > but > > a = stats.rv_discrete(name='test', values=(((0,0),(0,1),(1,0),(1,1)), [.25]*4)) > > yields > > /home/me/lib/python/numpy/core/fromnumeric.py in take(a, indices, > axis, out, mode) > 79 except AttributeError: > 80 return _wrapit(a, 'take', indices, axis, out, mode) > ---> 81 return take(indices, axis, out, mode) > 82 > 83 > > : index out of range for array > > My initial thought would be that the xk could be anything that is > hashable. For dictionary-based discrete distributions, I do use > tuples...but I would like to start using scipy.stats. Am I fishing > for too much or in the wrong lake? > Any hints on this? From tjhnson at gmail.com Sun Oct 21 00:49:29 2007 From: tjhnson at gmail.com (Tom Johnson) Date: Sat, 20 Oct 2007 21:49:29 -0700 Subject: [SciPy-user] Fastest dtype for matrix multiplication Message-ID: Hi, I am interested in doing matrix multiplication, but all I really care about is whether the elements are nonzero or not (after all the multiplication is done). Which dtype should I use to accomplish this in the fastest manner? I did some quick tests...and it seemed like dtype=bool was quite slow (contrary to my expectations). Could someone explain this and recommend the best dtype? So far, it seems like float64 is the best.
These are 36x36 matrices: In [100]: T0.dtype Out[100]: dtype('float64') In [101]: timeit dot(T0,T1) 10000 loops, best of 3: 87.4 µs per loop In [102]: T0b.dtype Out[102]: dtype('bool') In [103]: timeit dot(T0b,T1b) 1000 loops, best of 3: 175 µs per loop In [104]: T0i.dtype Out[104]: dtype('int32') In [105]: timeit dot(T0i,T1i) 10000 loops, best of 3: 189 µs per loop From david at ar.media.kyoto-u.ac.jp Sun Oct 21 00:47:31 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 21 Oct 2007 13:47:31 +0900 Subject: [SciPy-user] Fastest dtype for matrix multiplication In-Reply-To: References: Message-ID: <471AD9E3.2000209@ar.media.kyoto-u.ac.jp> Tom Johnson wrote: > Hi, > > I am interested in doing matrix multiplication, but all I really care > about is if the elements are nonzero or not (after all the > multiplication is done). > > Which dtype should I use to accomplish this in the fastest manner? > > I did some quick tests...and it seemed like dtype=bool was quite slow > (contrary to my expectations). Could someone explain this and > recommend the best dtype? So far, it seems like float64 is the best. > > When you are using dot (and I guess most of the matrix multiplication routines), internally, the function can only work on floats: BLAS, for example, only deals with floats. For some routines such as multiplication, it would make sense to use integers, but the implementation would have to be totally different (for example, handling overflow is quite different than with floats). So basically, as far as I am aware, all the linear algebra routines only deal with floating point, and of those types, float32 is generally the fastest. But then, "nonzero" is ill-defined with floating point (what you generally want is really "near 0"), so you will have to take care of that, too.
cheers, David From ryanlists at gmail.com Sun Oct 21 15:31:26 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 21 Oct 2007 14:31:26 -0500 Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) Message-ID: So, I freely admit that this belongs on some other list, but the Emacs list people are often mean to me. Maybe because I think python is better than lisp and I shouldn't have to learn lisp..... But I am having a problem with python-mode after upgrading to Ubuntu 7.10 Gutsy Gibbon (as predicted). I tried purging python-mode as recommended using dpkg --purge python-mode but it doesn't seem to have worked. Here is the result of "locate *pythonmode.el": ryan at am2:~$ locate *python*mode.el /mnt/RYANFAT/downloads/emacs/python-mode-1.0/doctest-mode.el /mnt/RYANFAT/downloads/emacs/python-mode-1.0/python-mode.el /mnt/RYANFAT/downloads/emacs/Emacs/lisp/python-mode.el None of those are on my emacs path. Here is my error message after trying to open a python file from the command line using emacs myfile.py "An error has occurred while loading `/home/ryan/.emacs.elc': File error: Cannot open load file, python-mode To ensure normal operation, you should investigate and remove the cause of the error in your initialization file. Start Emacs with the `--debug-init' option to view a complete error backtrace. File mode specification error: (file-error "Cannot open load file" "python-mode") For information about the GNU Project and its goals, type C-h C-p." I believe I have commented out all python-mode commands in my .emacs file. It is attached. Can anyone help me get python-mode working again? Or do I have to post to the emacs list? Thanks, Ryan On 10/20/07, Daniel Lenski wrote: > On Sat, 20 Oct 2007 23:58:42 +0200, Joachim Dahl wrote: > > > Emacs22 has python mode integrated, and having python-mode installed as > > a separate > > package causes problems. Removing the python-mode package seems to fix > > it for Emacs22. 
> > > > - joachim > > Thanks, Joachim!!! That did the trick. I had to not only *remove* the python-mode package, but > purge its configuration files as well. You can do that from the command line using: > > dpkg --purge python-mode > > For some reason unknown to me, the default behavior is to allow removed packages to leave config > files in place. > > Dan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: .emacs Type: application/octet-stream Size: 14669 bytes Desc: not available URL: From ryanlists at gmail.com Sun Oct 21 16:51:07 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 21 Oct 2007 15:51:07 -0500 Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) In-Reply-To: References: Message-ID: O.K. something in my .emacs file was causing my problem. I renamed it so it wouldn't load and deleted the .emacs.elc file and the file loaded correctly with the new built-in Python mode. But it doesn't seem as powerful. It doesn't scan for functions and classes and it doesn't auto-indent in for loops. I think I have to remove emacs22 and roll back to 21 and reinstall the other python-mode, unless someone has a better solution. On 10/21/07, Ryan Krauss wrote: > So, I freely admit that this belongs on some other list, but the Emacs > list people are often mean to me. Maybe because I think python is > better than lisp and I shouldn't have to learn lisp..... > > But I am having a problem with python-mode after upgrading to Ubuntu > 7.10 Gutsy Gibbon (as predicted). I tried purging python-mode as > recommended using > > dpkg --purge python-mode > > but it doesn't seem to have worked. 
Here is the result of "locate > *pythonmode.el": > > ryan at am2:~$ locate *python*mode.el > /mnt/RYANFAT/downloads/emacs/python-mode-1.0/doctest-mode.el > /mnt/RYANFAT/downloads/emacs/python-mode-1.0/python-mode.el > /mnt/RYANFAT/downloads/emacs/Emacs/lisp/python-mode.el > > None of those are on my emacs path. > > Here is my error message after trying to open a python file from the > command line using > > emacs myfile.py > > "An error has occurred while loading `/home/ryan/.emacs.elc': > > File error: Cannot open load file, python-mode > > To ensure normal operation, you should investigate and remove the > cause of the error in your initialization file. Start Emacs with > the `--debug-init' option to view a complete error backtrace. > > File mode specification error: (file-error "Cannot open load file" > "python-mode") > For information about the GNU Project and its goals, type C-h C-p." > > I believe I have commented out all python-mode commands in my .emacs > file. It is attached. > > Can anyone help me get python-mode working again? Or do I have to > post to the emacs list? > > Thanks, > > Ryan > > On 10/20/07, Daniel Lenski wrote: > > On Sat, 20 Oct 2007 23:58:42 +0200, Joachim Dahl wrote: > > > > > Emacs22 has python mode integrated, and having python-mode installed as > > > a separate > > > package causes problems. Removing the python-mode package seems to fix > > > it for Emacs22. > > > > > > - joachim > > > > Thanks, Joachim!!! That did the trick. I had to not only *remove* the python-mode package, but > > purge its configuration files as well. You can do that from the command line using: > > > > dpkg --purge python-mode > > > > For some reason unknown to me, the default behavior is to allow removed packages to leave config > > files in place. 
> > > > Dan > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From ctrachte at gmail.com Sun Oct 21 17:04:57 2007 From: ctrachte at gmail.com (Carl Trachte) Date: Sun, 21 Oct 2007 14:04:57 -0700 Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) In-Reply-To: References: Message-ID: <426ada670710211404x54fce358k1d69bbb59460c177@mail.gmail.com> I don't have a solution, but I have experienced the same things with emacs22. For indenting I've been using the space key. For commenting out, I've been using rectangle editing (move the section over 3 spaces, then fill with # characters). I really ought to write shortcuts for these actions or switch back to emacs21 like you suggested. On 10/21/07, Ryan Krauss wrote: > > O.K. something in my .emacs file was causing my problem. I renamed it > so it wouldn't load and deleted the .emacs.elc file and the file > loaded correctly with the new built-in Python mode. But it doesn't > seem as powerful. It doesn't scan for functions and classes and it > doesn't auto-indent in for loops. I think I have to remove emacs22 > and roll back to 21 and reinstall the other python-mode, unless > someone has a better solution. > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drfredkin at ucsd.edu Sun Oct 21 20:04:17 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Mon, 22 Oct 2007 00:04:17 +0000 (UTC) Subject: [SciPy-user] scipy.sparse problem Message-ID: I am unable to import the scipy.sparse module. When I try, here is what happens: C:\Documents and Settings\drf\My Documents>python Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> import scipy.sparse Traceback (most recent call last): File "", line 1, in File "C:\Python25\Lib\site-packages\scipy\sparse\__init__.py", line 5, in from sparse import * File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 21, in from scipy.sparse.sparsetools import cscmux, csrmux, \ ImportError: cannot import name cscmux >>> print scipy.__version__ 0.6.0 >>> Can anyone tell me how to fix this? It was OK with an earlier version of scipy. This is, by the way, on Windows XP. Thanks, Don -- Donald R. Fredkin drfredkin at ucsd.edu From ryanlists at gmail.com Mon Oct 22 00:11:11 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 21 Oct 2007 23:11:11 -0500 Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) In-Reply-To: <426ada670710211404x54fce358k1d69bbb59460c177@mail.gmail.com> References: <426ada670710211404x54fce358k1d69bbb59460c177@mail.gmail.com> Message-ID: It seems like emacs 21 was left installed and all that really had to be done was to change the symbolic link in /usr/bin to point to emacs21-x (after reinstalling python-mode). So, the switch back was fairly painless in my experience. On 10/21/07, Carl Trachte wrote: > I don't have a solution, but I have experienced the same things with > emacs22. For indenting I've been using the space key. For commenting out, > I've been using rectangle editing (move the section over 3 spaces, then fill > with # characters). I really ought to write shortcuts for these actions or > switch back to emacs21 like you suggested. > > > On 10/21/07, Ryan Krauss wrote: > > O.K. something in my .emacs file was causing my problem. I renamed it > > so it wouldn't load and deleted the .emacs.elc file and the file > > loaded correctly with the new built-in Python mode. But it doesn't > > seem as powerful. It doesn't scan for functions and classes and it > > doesn't auto-indent in for loops. 
I think I have to remove emacs22 > > and roll back to 21 and reinstall the other python-mode, unless > > someone has a better solution. > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From roger.herikstad at gmail.com Mon Oct 22 04:51:11 2007 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Mon, 22 Oct 2007 16:51:11 +0800 Subject: [SciPy-user] k nearest neighbour Message-ID: Hi all, Does anyone know of a fast python implementation of kNN? ~ Roger From gruben at bigpond.net.au Mon Oct 22 06:04:25 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Mon, 22 Oct 2007 20:04:25 +1000 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: Message-ID: <471C75A9.8090005@bigpond.net.au> I don't know Roger, but take a look at OpenCV (which has some Python support). See also and Also, look at Orange: Another option might be to use R via RPy, HTH, Gary R. Roger Herikstad wrote: > Hi all, > Does anyone know of a fast python implementation of kNN? > > ~ Roger From stefan at sun.ac.za Mon Oct 22 07:55:37 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 22 Oct 2007 13:55:37 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: Message-ID: <20071022115537.GD19079@mentat.za.net> On Mon, Oct 22, 2007 at 04:51:11PM +0800, Roger Herikstad wrote: > Hi all, > Does anyone know of a fast python implementation of kNN? Take a look at vq-quantisation (in scipy) and at David C*'s learning scikit. 
Of further interest is http://www.cs.umd.edu/~mount/ANN/ We also had a long discussion on the mailing list about different ways to do it without C code: http://thread.gmane.org/gmane.comp.python.numeric.general/8459/focus=8459 Cheers St?fan From whenney at gmail.com Mon Oct 22 12:10:16 2007 From: whenney at gmail.com (William Henney) Date: Mon, 22 Oct 2007 16:10:16 +0000 (UTC) Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) References: <426ada670710211404x54fce358k1d69bbb59460c177@mail.gmail.com> Message-ID: Ryan Krauss gmail.com> writes: > > It seems like emacs 21 was left installed and all that really had to > be done was to change the symbolic link in /usr/bin to point to > emacs21-x (after reinstalling python-mode). So, the switch back was > fairly painless in my experience. > > On 10/21/07, Carl Trachte gmail.com> wrote: > > I don't have a solution, but I have experienced the same things with > > emacs22. For indenting I've been using the space key. For commenting out, > > I've been using rectangle editing (move the section over 3 spaces, then fill > > with # characters). I really ought to write shortcuts for these actions or > > switch back to emacs21 like you suggested. > > > > > > On 10/21/07, Ryan Krauss gmail.com> wrote: > > > O.K. something in my .emacs file was causing my problem. I renamed it > > > so it wouldn't load and deleted the .emacs.elc file and the file > > > loaded correctly with the new built-in Python mode. But it doesn't > > > seem as powerful. It doesn't scan for functions and classes and it > > > doesn't auto-indent in for loops. I think I have to remove emacs22 > > > and roll back to 21 and reinstall the other python-mode, unless > > > someone has a better solution. > > > Hi Ryan, Carl, Strange that you have been having problems. I've been using emacs22's python.el for ages with no trouble at all. 
At least, indentation works fine: TAB for auto-indent (repeated TAB to cycle between possibilities), "C-c <" and "C-c >" to manually indent or outdent the region. For commenting out, what is wrong with "C-3 M-;"? (this is emacs-wide, not python-specific). Not sure about scanning for functions/classes though - do you mean something like imenu or ecb? I haven't tried these with python - find-tag works well enough for me. Automatic symbol completion and help strings work as well, although this is not as nice as in ipython. What would be really good is if someone could integrate ipython with emacs.... If you really prefer the old python-mode, why don't you just install that on emacs22? Regressing to emacs21 seems a bit drastic... There is a lot more info about the two modes at http://www.emacswiki.org/cgi-bin/emacs-en/PythonMode Now I look at that, I see that other people have been having similar problems. Seems the solution is to make sure you have transient mark mode turned on and the latest version of python.el Cheers Will From dlenski at gmail.com Mon Oct 22 13:01:23 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Mon, 22 Oct 2007 17:01:23 +0000 (UTC) Subject: [SciPy-user] pulse width/density modulation with SciPy? References: Message-ID: On Sat, 20 Oct 2007 11:25:05 +0530, Kumar Appaiah wrote: > On 20/10/2007, Daniel Lenski wrote: >> Hi, >> Does anyone know if there's a SciPy package to do pulse-width >> modulation and pulse-density modulation of digital signals? > > Not any that I am aware of. > >> Right now, I'm using some looping Python code that is slow and not very >> exact. > > There is no communication module yet for SciPy. I think we can start > thinking on the lines of a scipy.comm module in the long run, but right > now, you are on your own for communication related stuff. > > Kumar Thanks, Kumar! Do you know of anyone who has collected and published any communications-related code with Python?
Not that SciPy makes it too hard to experiment, but I figure if someone else has already figured out the "right" way to do it I'll stick with that :-) From barrywark at gmail.com Mon Oct 22 13:11:25 2007 From: barrywark at gmail.com (Barry Wark) Date: Mon, 22 Oct 2007 10:11:25 -0700 Subject: [SciPy-user] k nearest neighbour In-Reply-To: <20071022115537.GD19079@mentat.za.net> References: <20071022115537.GD19079@mentat.za.net> Message-ID: I've been working on a numpy-compatible SWIG wrapper for the ANN library. I've got most of the library wrapped (though not all of it). I'd be happy to send you a tarball. If there's interest from more than one or two people, I'll motivate to finish it up and document it for public release. Barry On 10/22/07, Stefan van der Walt wrote: > On Mon, Oct 22, 2007 at 04:51:11PM +0800, Roger Herikstad wrote: > > Hi all, > > Does anyone know of a fast python implementation of kNN? > > Take a look at vq-quantisation (in scipy) and at David C*'s learning > scikit. > > Of further interest is > > http://www.cs.umd.edu/~mount/ANN/ > > We also had a long discussion on the mailing list about different ways to > do it without C code: > > http://thread.gmane.org/gmane.comp.python.numeric.general/8459/focus=8459 > > Cheers > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Mon Oct 22 13:13:04 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 22 Oct 2007 19:13:04 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: Message-ID: I tried a pure Python implementation, too slow. Now, I use a classic C++ approach with ctypes. Matthieu 2007/10/22, Roger Herikstad : > > Hi all, > Does anyone know of a fast python implementation of kNN?
> > ~ Roger > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ctrachte at gmail.com Mon Oct 22 13:46:56 2007 From: ctrachte at gmail.com (Carl Trachte) Date: Mon, 22 Oct 2007 10:46:56 -0700 Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) In-Reply-To: References: <426ada670710211404x54fce358k1d69bbb59460c177@mail.gmail.com> Message-ID: <426ada670710221046n55b4f111ma0096804eb2b4545@mail.gmail.com> Will, Thanks. I'm on Windows XP - Tab works for indentation. The commenting out part I can't seem to replicate. This may well be a reflection of my EMACS ignorance. I will consult the link you provided on Python mode. > Hi Ryan, Carl, > > Strange that you have been having problems. I've been using emacs22's > python.el > for ages with no trouble at all. At least, indention works fine: TAB for > auto-indent (repeated TAB to cycle between possibilities), "C-c <" and > "C-c >" > to manually indent or outdent the region. For commenting out, what is > wrong with > "C-3 M-;"? (this is emacs-wide, not python-specific). > > > There is a lot more info about the two modes at > http://www.emacswiki.org/cgi-bin/emacs-en/PythonMode > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berthe.loic at gmail.com Mon Oct 22 14:00:18 2007 From: berthe.loic at gmail.com (LB) Date: Mon, 22 Oct 2007 11:00:18 -0700 Subject: [SciPy-user] How to use argmax ? 
Message-ID: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Hi, Let's say we have an array a : If a is 1D, we have : >>> a = array([1, 2, 5, 6,3, 4]) >>> ind = a.argmax() >>> a[ind] 6 >>> a[ind] == a.max() True but if a is 2D (or more) : a = array([[10,50,30],[60,20,40]]) I can know where the maxima are with the argmax function : ind = a.argmax(axis=0) >>> a array([[10, 50, 30], [60, 20, 40]]) >>> ind array([1, 0, 1]) But what if a is 2D or more : >>> a = array([[10, 20, 14], [60, 5, 7]]) >>> ind = a.argmax(axis=0) >>> a array([[10, 20, 14], [60, 5, 7]]) >>> ind array([1, 0, 0]) >>> a[ind] array([[60, 5, 7], [10, 20, 14], [10, 20, 14]]) >>> a.max(axis=0) array([60, 20, 14]) How can I combine a and ind to access to the max ? -- LB From dlenski at gmail.com Mon Oct 22 14:57:17 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Mon, 22 Oct 2007 18:57:17 +0000 (UTC) Subject: [SciPy-user] How to use argmax ? References: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Message-ID: On Mon, 22 Oct 2007 11:00:18 -0700, LB wrote: > But what if a is 2D or more : >>>> a = array([[10, 20, 14], [60, 5, 7]]) ind = a.argmax(axis=0) >>>> a > array([[10, 20, 14], > [60, 5, 7]]) >>>> ind > array([1, 0, 0]) >>>> a[ind] > array([[60, 5, 7], > [10, 20, 14], > [10, 20, 14]]) >>>> a.max(axis=0) > array([60, 20, 14]) > > How can I combine a and ind to access to the max ? Hi, if I understand correctly, you're trying to get the indices in a of the maxima. Is that right? Try this: >>>> a = array([[10, 20, 14], [60, 5, 7]]) >>>> ind = (indices(a.shape)[0] == a.argmax(axis=0)) >>>> ind array([[False, True, True], [ True, False, False]], dtype=bool) >>>> ind.nonzero() (array([0, 0, 1]), array([1, 2, 0])) This gives the locations of the maxima in a: a[0,1], a[0,2], and a[1,0]. The indices() function basically generates an array A, where: A[axis,x,y,z...] = x if axis==0 y if axis==1 z if axis==2 etc. Hope that helps. 
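As a self-contained check of the mask approach (the int() casts only turn NumPy scalars into plain tuples for printing):

```python
import numpy as np

a = np.array([[10, 20, 14], [60, 5, 7]])
# True exactly where a column-wise maximum sits:
mask = np.indices(a.shape)[0] == a.argmax(axis=0)
rows, cols = mask.nonzero()
coords = [(int(r), int(c)) for r, c in zip(rows, cols)]
print(coords)  # -> [(0, 1), (0, 2), (1, 0)]
```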
Dan From dlenski at gmail.com Mon Oct 22 15:04:54 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Mon, 22 Oct 2007 19:04:54 +0000 (UTC) Subject: [SciPy-user] How to use argmax ? References: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Message-ID: Or maybe this is what you're looking for: >>>> ind = a.argmax(axis=0) >>>> a[ ind, [0,1,2] ] array([60, 20, 14]) ... which basically returns a[ind[0],0], a[ind[1],1], and a[ind[2],2]. Dan From dlenski at gmail.com Mon Oct 22 15:08:16 2007 From: dlenski at gmail.com (Daniel Lenski) Date: Mon, 22 Oct 2007 19:08:16 +0000 (UTC) Subject: [SciPy-user] How to use argmax ? References: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Message-ID: On Mon, 22 Oct 2007 19:04:54 +0000, Daniel Lenski wrote: > Or maybe this is what you're looking for: > >>>>> ind = a.argmax(axis=0) >>>>> a[ ind, [0,1,2] ] > array([60, 20, 14]) > > ... which basically returns a[ind[0],0], a[ind[1],1], and a[ind[2],2]. > > Dan Yeah, there really must be a straightforward, easy way to do this without explicitly listing the indices. Hmmm... From emanuele at relativita.com Mon Oct 22 16:09:22 2007 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 22 Oct 2007 22:09:22 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: <20071022115537.GD19079@mentat.za.net> Message-ID: <471D0372.3020203@relativita.com> I'm interested! Thanks. +1 E, Barry Wark wrote: > I've been working an a numpy-compatible SWIG wrapper for the ANN > library. I've got most of the library wrapped (though not all of it). > I'd be happy to send you a tarball. If there's interest from more than > one or two people, I'll motivate to finish it up and document it for > public release. 
> > Barry > > From stefan at sun.ac.za Mon Oct 22 16:38:20 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 22 Oct 2007 22:38:20 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: Message-ID: <20071022203820.GS19079@mentat.za.net> Hi Matthieu On Mon, Oct 22, 2007 at 07:13:04PM +0200, Matthieu Brucher wrote: > I tried a pure Python implementation, too slow. Which one did you try? Tim Hochberg's suggestion from the URL I posted earlier is fairly quick, given that your dataset is not extraordinarily large. Regards Stefan From matthieu.brucher at gmail.com Mon Oct 22 17:06:45 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 22 Oct 2007 23:06:45 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: <20071022203820.GS19079@mentat.za.net> References: <20071022203820.GS19079@mentat.za.net> Message-ID: > > On Mon, Oct 22, 2007 at 07:13:04PM +0200, Matthieu Brucher wrote: > > I tried a pure Python implementation, too slow. > > Which one did you try? Tim Hochberg's suggestion from the URL I > posted earlier is fairly quick, given that your dataset is not > extraordinarily large. > I tried my own approach, too much influenced by C++. I had some thousand points, and it is far slower than the C++ version. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnandris at btinternet.com Mon Oct 22 17:42:59 2007 From: mnandris at btinternet.com (Michael Nandris) Date: Mon, 22 Oct 2007 22:42:59 +0100 (BST) Subject: [SciPy-user] Generate random samples from a non-standard distribution In-Reply-To: Message-ID: <833874.55091.qm@web86513.mail.ird.yahoo.com> I'm sure there are shorter ways of doing this, like here--> google: ifa.hawaii.edu markov lecture10 for some shorter code on markov models ###################################### from __future__ import division from numpy.random import multinomial from numpy import dot from MA import take # requires python-numeric-ext from scipy.stats import rv_discrete import timeit from numpy import NaN """ src: Mathematical Modeling and Computer Simulation by Maki and Thompson page 179 Consider a Markov Chain with 4 states and transition matrix T. Use simulation to estimate the 5-step transition matrix. There are at least 2 ways of doing this (very slowly, using scipy.rv_discrete, or quite fast using numpy.multinomial [which has a bug that may or may not have been fixed in the official release, plus an unsolved bug: it can't handle zeros, which require substituting with 1e-16; but it does give the right answer]).
""" T = [[ 0.6, 0.3, 0.0, 0.1 ], [ 0.5, 0.0, 0.5, 0.0 ], [ 0.0, 0.2, 0.2, 0.6 ], [ 0.2, 0.0, 0.8, 0.0 ]] def recursor( n ): SPAN = [ T for i in range( n ) ] return reduce( lambda x, y: dot(x,y), SPAN ) def analyticalSolution(): """ Analytical solution agrees with answer on p180 """ print recursor(5) INPUT = [[ 0.6-(1e-16), 0.3, 1e-16, 0.1 ], # input for multinomial is same as T [ 0.5-(1e-16), 1e-16, 0.5-(1e-16), 1e-16 ], [ 1e-16, 0.2, 0.2, 0.6-(1e-16) ], [ 0.2-(1e-16), 1e-16, 0.8-(1e-16), 1e-16 ]] STATES = [0,1,2,3] SIZE = 10000 MAGNITUDE = 1/SIZE def showResults(d): """ Requires 'from __future__ import division' """ L = [] #print d while d: k,v = d.popitem() L.append(str(round(v*MAGNITUDE,5))) # <<<< need to alter rounding value by hand <<<<< print L class ScipyStatsMarkovSim(object): def __init__(self, i): self.i = i self.ans = dict(zip( STATES, (0,0,0,0) )) def markov(self): first = list(take(T,(self.i,))[0]) sample = rv_discrete( name='sample', values=( STATES, first ) ).rvs( size=SIZE ) #[0] for i in sample: nthStep = self.nthStep(i) self.ans[nthStep]+=1 return showResults(self.ans) def nthStep(self, inpt): """ Returns the 5th state calculated in sequence """ second = list(take(T,(inpt,))[0]) two = rv_discrete( name='sample', values=( STATES, second ) ).rvs( size=1 )[0] third = list(take(T,(two,))[0]) three = rv_discrete( name='sample', values=( STATES, third ) ).rvs( size=1 )[0] fourth = list(take(T,(three,))[0]) four = rv_discrete( name='sample', values=( STATES, fourth ) ).rvs( size=1 )[0] fifth = list(take(T,(four,))[0]) five = rv_discrete( name='sample', values=( STATES, fifth ) ).rvs( size=1 )[0] return five class NumpyRandomMarkovSim(object): def markov(self,i): ans = dict(zip( STATES, (0,0,0,0) )) first = INPUT[i] t = multinomial( SIZE, first ) # only of length 4 for state, n in enumerate(t): # n is number of samples 'thrown' for k in xrange(n): nthStep = self.nthStep(state) ans[nthStep]+=1 return showResults(ans) def nthStep(self, inpt): """ Returns 
the 5th state calculated in sequence """ second = INPUT[inpt] two = list(multinomial( 1, second )).index(1) third = INPUT[two] three = list(multinomial( 1, third )).index(1) fourth = INPUT[three] four = list(multinomial( 1, fourth )).index(1) fifth = INPUT[four] return list(multinomial( 1, fifth )).index(1) def testScipy(): """ Takes 32 seconds to execute a quarter of the workload """ ScipyStatsMarkovSim(0).markov() #ScipyStatsMarkovSim(1).markov() #ScipyStatsMarkovSim(2).markov() #ScipyStatsMarkovSim(3).markov() def testNumpy(): """ Takes 3 seconds to execute 4 times the workload """ NumpyRandomMarkovSim().markov(0) NumpyRandomMarkovSim().markov(1) NumpyRandomMarkovSim().markov(2) NumpyRandomMarkovSim().markov(3) if __name__=='__main__': t = timeit.Timer('analyticalSolution()', 'from __main__ import analyticalSolution') PASSES = 10 elapsed = (t.timeit(number=PASSES)) print "analyticalSolution() takes %0.3f micro-seconds per pass" % (elapsed/PASSES) """ t = timeit.Timer('testScipy()', 'from __main__ import testScipy') PASSES = 1 elapsed = (t.timeit(number=PASSES)) print "scipy.markov() takes %0.3f micro-seconds per pass" % (elapsed/PASSES) """ PASSES = 1 s = timeit.Timer('testNumpy()', 'from __main__ import testNumpy') elapsed2 = (s.timeit(number=PASSES)) print "numpy.markov() takes %0.3f micro-seconds per pass" % (elapsed2/PASSES) ######################################## # distgen.py from numpy import random from pylab import show, plot, hist from scipy import polyfit, polyval from scipy.stats.distributions import cauchy def polynomialRegression( x,y ): coeffs = polyfit( x, y, 20 ) return x, polyval( coeffs, x ) opt = [] for i in range( 1000000 ): #opt.append( random.beta(2,2.5) ) #pt.append( random.weibull(2) ) #opt.append( random.standard_cauchy(2) ) #opt.append( random.zipf(2) ) #opt.append( random.vonmises(2,3)) #opt.append( random.wald(2,3)) #opt.append( random.rayleigh()) #opt.append( random.standard_gamma(2)) #opt.append( random.binomial(1000,0.5)) 
opt.append( random.gumbel(1000,0.5)) """ multinomial(...) Multinomial distribution. multinomial(n, pvals, size=None) -> random values pvals is a sequence of probabilities that should sum to 1 (however, the last element is always assumed to account for the remaining probability as long as sum(pvals[:-1]) <= 1). """ data = hist(opt,10000) x = data[1] y = data[0] coeffs = polyfit( x, y, 20 ) besty = polyval( coeffs, x ) plot(x,besty) show() ######################################### #from __future__ import division from Numeric import * from pylab import hist, xlabel, ylabel, show import random def MetHastDirich( param, M, B, start ): """ Uses a Metropolis-Hastings method to generate a chain of values from a Dirichlet distribution. Input(a four element list): 'param' - vector of parameters for the Dirichlet dist., all > 0, 'M' - total number of steps kept in the MH chain after burn-in, 'B' - number of steps used for the burn-in, 'start' - vector of starting values which has length equal to the param vector minus 1 (NOTE: sum(start) must be < = 1). Output(a two element list): 1) the array of generated values 2) the number of accepted candidates out of B+M tries. """ step_cnt = 0 accept_ind = 0 ## To keep track of number of accepted candidates. x = start rv = len(param)-1 ## Last random variable is constrained. 
data_matrix = zeros( (M,rv), typecode = 'f' ) for i in range( B + M ): step_cnt = step_cnt + 1 y = [] cumy = 0 ## Generate a candidate over support space (0,1) under restriction of summation=1 for j in range( rv ): present_y = ( 1 - cumy ) * random.uniform( 0, 1 ) cumy = cumy + present_y y.append( present_y ) alpha_prob=((1-sum( y )) / (1.0 -sum( x )))**( param[rv]-1.0 ) for k in range(rv): tterm_1 = ( y[k] / x[k] )**( param[k]-1 ) if k == 0: tterm_2 = 1 else: tterm_2 = ( 1-sum( y[0:(k-1)] )) / ( 1-sum( x[0:k-1] ) ) alpha_prob = alpha_prob * tterm_1 * tterm_2 alpha_prob_final = min( [alpha_prob,1] ) u = random.uniform( 0, 1 ) if u <= alpha_prob_final: ## Candidate accepted. x=y ## Jump to new position. accept_ind=accept_ind+1 if step_cnt > B: data_matrix[ (step_cnt-B-1),...] = y ## Fill-in a full row else: ## of the matrix. if step_cnt > B: data_matrix[(step_cnt-B-1),...]=x ## Remain at old position. return [data_matrix,accept_ind] ## Use the above function to compute a chain of observations for the given argument values below: parameters=array([9,13,16,12]) # [2.,2.,3.,3.]) M = 100 # 000 B = 10 #000 start = array([0.1,0.6,0.1]) # [.25,.25,.25]) MH_out = MetHastDirich(parameters,M,B,start) chain = MH_out[0] count_successes = MH_out[1] hist( chain, 10000 ) #plot( chain, count_successes ) #title('MetHastDirich( [2.0,2.0,3.0,3.0]), 100000, 10000, [0.99,0.99,0.99]' ) # [0.25,0.25,0.25]') xlabel('Datum') ylabel('Frequency') show() "David M. Cooke" wrote: "Manu Hack" writes: > Hi, > > I've been goolging around but not sure how to generate random samples > given the density function of a random variable. I was looking around > something like scipy.stats.rv_continuous but couldn't find anything > useful. > > Also, if one wants to do Markov Chain Monte Carlo, is scipy possible > and if not, what should be a workaround (if still want to do in > Python). I heard something like openbugs but seems that debian > doesn't have a package. 
Look at scipy.sandbox.montecarlo (you'll have to add a line with 'montecarlo' on it to scipy/sandbox/enabled_packages.txt to compile it, and copy some files from numpy/random first). For instance, scipy.sandbox.montecarlo.intsampler takes a list or array of weights, and can sample from those. Here's a clip from the docstring: >>> table = [10, 15, 20] #representing this pmf: #x 0 1 2 #p(x) 10/45 15/45 20/45 #The output will be something like: >>> sampler = intsampler(table) >>> sampler.sample(10) array([c, b, b, b, b, b, c, b, b, b], dtype=object) I use it by sampling from my PDF on a grid, and using those points as the PMF. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From erendisaldarion at gmail.com Mon Oct 22 19:45:22 2007 From: erendisaldarion at gmail.com (aldarion) Date: Tue, 23 Oct 2007 07:45:22 +0800 Subject: [SciPy-user] k nearest neighbour In-Reply-To: <471D0372.3020203@relativita.com> References: <20071022115537.GD19079@mentat.za.net> <471D0372.3020203@relativita.com> Message-ID: I'm interested! Thanks. +2 On 10/23/07, Emanuele Olivetti wrote: > > I'm interested! Thanks. > +1 > > E, > > Barry Wark wrote: > > I've been working an a numpy-compatible SWIG wrapper for the ANN > > library. I've got most of the library wrapped (though not all of it). > > I'd be happy to send you a tarball. If there's interest from more than > > one or two people, I'll motivate to finish it up and document it for > > public release. 
> > Barry > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger.herikstad at gmail.com Mon Oct 22 23:45:54 2007 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Tue, 23 Oct 2007 11:45:54 +0800 Subject: [SciPy-user] k nearest neighbour In-Reply-To: <471C75A9.8090005@bigpond.net.au> References: <471C75A9.8090005@bigpond.net.au> Message-ID: Thanks. I did find this http://pyml.sourceforge.net/ which I think has all the tools I need. I looked at orange, but it seems there are some troubles getting it to work properly on a mac, so I'll combine the tools from PyML and PyCluster (http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/software.htm) instead. ~ Roger On 10/22/07, Gary Ruben wrote: > I don't know Roger, but take a look at OpenCV (which has some Python > support). See also > > and > > > Also, look at Orange: > > > Another option might be to use R via RPy, > > HTH, > Gary R. > > Roger Herikstad wrote: > > Hi all, > > Does anyone know of a fast python implementation of kNN? > > > > ~ Roger > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From berthe.loic at gmail.com Tue Oct 23 02:27:21 2007 From: berthe.loic at gmail.com (LB) Date: Mon, 22 Oct 2007 23:27:21 -0700 Subject: [SciPy-user] How to use argmax ? In-Reply-To: References: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Message-ID: <1193120841.417681.17460@i13g2000prf.googlegroups.com> >>> a = array([[10, 20, 14], [60, 5, 7]]) >>> ind = a.argmax(axis=0) >>> a[ind, xrange(a.shape[1])] == a.max(axis=0) array([ True, True, True], dtype=bool) That's what I was looking for. Thanks! This was really not obvious to me. If I find some time, I'll add this to the argmax example.
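For the archive, the same recipe as a self-contained snippet (using numpy's arange instead of xrange, so it works for any number of columns):

```python
import numpy as np

a = np.array([[10, 20, 14], [60, 5, 7]])
ind = a.argmax(axis=0)                    # row index of the max in each column
col_max = a[ind, np.arange(a.shape[1])]   # fancy indexing picks one element per column
print(col_max)  # -> [60 20 14]
```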
-- LB From stefan at sun.ac.za Tue Oct 23 02:28:22 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 23 Oct 2007 08:28:22 +0200 Subject: [SciPy-user] How to use argmax ? In-Reply-To: <1193076018.985001.94800@t8g2000prg.googlegroups.com> References: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Message-ID: <20071023062822.GT19079@mentat.za.net> Hi, You can access the maximum directly as follows: x = N.array([[10,50,30],[60,20,40]]) p = x.argmax() x.flat[p] You can also convert 'p' to an index, using N.unravel_index(p,x.shape) Regards Stéfan On Mon, Oct 22, 2007 at 11:00:18AM -0700, LB wrote: > > Hi, > > Let's say we have an array a : > > If a is 1D, we have : > >>> a = array([1, 2, 5, 6,3, 4]) > >>> ind = a.argmax() > >>> a[ind] > 6 > >>> a[ind] == a.max() > True > > but if a is 2D (or more) : > > a = array([[10,50,30],[60,20,40]]) > > I can know where the maxima are with the argmax function : > ind = a.argmax(axis=0) > >>> a > array([[10, 50, 30], > [60, 20, 40]]) > >>> ind > array([1, 0, 1]) > > But what if a is 2D or more : > >>> a = array([[10, 20, 14], [60, 5, 7]]) > >>> ind = a.argmax(axis=0) > >>> a > array([[10, 20, 14], > [60, 5, 7]]) > >>> ind > array([1, 0, 0]) > >>> a[ind] > array([[60, 5, 7], > [10, 20, 14], > [10, 20, 14]]) > >>> a.max(axis=0) > array([60, 20, 14]) > > How can I combine a and ind to access to the max ? > > -- > LB From lbolla at gmail.com Tue Oct 23 03:39:58 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 23 Oct 2007 09:39:58 +0200 Subject: [SciPy-user] How to use argmax ?
In-Reply-To: References: <1193076018.985001.94800@t8g2000prg.googlegroups.com> Message-ID: <80c99e790710230039g55e85fdey31bbadb5c5a5fe4f@mail.gmail.com> not sure if this is very efficient, but it works: In [87]: a = numpy.random.rand(4,3) In [88]: ind = a.argmax(axis=0) In [89]: numpy.diag(a[ind]) Out[89]: array([ 0.62609397, 0.81456652, 0.86274995]) In [90]: a Out[90]: array([[ 0.46902428, 0.19723896, 0.25731427], [ 0.39632899, 0.81456652, 0.5836383 ], [ 0.62609397, 0.6931616 , 0.86274995], [ 0.22968695, 0.22322129, 0.14708956]]) In [91]: a[ind] Out[91]: array([[ 0.62609397, 0.6931616 , 0.86274995], [ 0.39632899, 0.81456652, 0.5836383 ], [ 0.62609397, 0.6931616 , 0.86274995]]) L. On 10/22/07, Daniel Lenski wrote: > > On Mon, 22 Oct 2007 19:04:54 +0000, Daniel Lenski wrote: > > > Or maybe this is what you're looking for: > > > >>>>> ind = a.argmax(axis=0) > >>>>> a[ ind, [0,1,2] ] > > array([60, 20, 14]) > > > > ... which basically returns a[ind[0],0], a[ind[1],1], and a[ind[2],2]. > > > > Dan > > Yeah, there really must be a straightforward, easy way to do this without > explicitly listing the indices. Hmmm... > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.borghgraef.rma at gmail.com Tue Oct 23 06:36:17 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Tue, 23 Oct 2007 12:36:17 +0200 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> Message-ID: <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> > > > I upgraded to Gutsy yesterday, scipy 0.5.2 kept working no problem. 
But then I upgraded to 0.6.0 from the http://debs.astraw.com/ repository, and now when import all of scipy, I get the error: : /usr/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so: undefined symbol: zfftnd_fftw Any idea what's causing this? -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Tue Oct 23 06:30:37 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 23 Oct 2007 19:30:37 +0900 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> Message-ID: <471DCD4D.7030703@ar.media.kyoto-u.ac.jp> Alexander Borghgraef wrote: > > > I upgraded to Gutsy yesterday, scipy 0.5.2 kept working no problem. > But then I upgraded to 0.6.0 from the http://debs.astraw.com/ > repository, and now when import all of scipy, I get the error: > > : > /usr/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so: undefined > symbol: zfftnd_fftw > > Any idea what's causing this? Do you have fftw3 installed on your computer ? David From cimrman3 at ntc.zcu.cz Tue Oct 23 09:54:49 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 23 Oct 2007 15:54:49 +0200 Subject: [SciPy-user] ANN: SFE-00.31.06 release Message-ID: <471DFD29.5040007@ntc.zcu.cz> I am happy to announce the version 00.31.06 of SFE, featuring acoustic band gaps computation, rigid body motion constraints, new solver classes and reorganization, and regular bug fixes and updates, see http://ui505p06-mbs.ntc.zcu.cz/sfe. SFE is a finite element analysis software written almost entirely in Python. This version is released under BSD license. best wishes, r. 
From rouilj-scipy at renesys.com Tue Oct 23 11:53:29 2007 From: rouilj-scipy at renesys.com (John Rouillard) Date: Tue, 23 Oct 2007 15:53:29 +0000 Subject: [SciPy-user] Build problems with scipy-0.6.0 seg fault in tests, rpm build fails In-Reply-To: References: Message-ID: <20071023155329.GQ20376@renesys.com> Hi all: I have built and installed numpy-1.0.3.1 using: python setup.py bdist_rpm and run the test: python -c "import numpy; numpy.test(1,10)" it passed with "Ran 586 tests in 1.074s" and exited OK. When trying the same recipe for scipy, the bdist_rpm failed with: running sdist warning: sdist: manifest template 'MANIFEST.in' does not exist (using default file list) error: scipy/sandbox/multigrid/multigridtools/numpy.i: No such file or directory and sure enough numpy.i was pointing to an invalid location. numpy.i -> ../../../sparse/sparsetools/numpy.i ../../../sparse/sparsetools had: complex_ops.h sparsetools.py sparsetools.h sparsetools_wrap.cxx but no numpy.i there, or in any other place in the source tree. I removed the symbolic link and tried to get further with the rpm build. My next build failure was at: umfpack_info: libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /usr/lib/python2.3/site-packages/numpy/distutils/system_info.py:403: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE non-existing path in 'Lib/maxentropy': 'doc' Traceback (most recent call last): File "setup.py", line 55, in ? 
setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/usr/lib/python2.3/site-packages/numpy/distutils/core.py", line 142, in setup config = configuration() File "setup.py", line 19, in configuration config.add_subpackage('Lib') File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 798, in add_subpackage caller_level = 2) File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 781, in get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 728, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/setup.py", line 23, in configuration config.add_subpackage('stsci') File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 798, in add_subpackage caller_level = 2) File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 781, in get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 728, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/stsci/setup.py", line 5, in configuration config.add_subpackage('convolve') File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 798, in add_subpackage caller_level = 2) File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 774, in get_subpackage caller_level = caller_level+1) File "/usr/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 574, in __init__ raise ValueError("%r is not a directory" % (package_path,)) ValueError: 'Lib/stsci/convolve' is not a directory error: Bad exit status from /home/rouilj/develop/rpm_build/tmp/rpm-tmp.37699 (%build) however the tarball I unpacked does show a directory scipy/stsci/convolve, which I assume is used as the source for the Lib/stsci/convolve directory.
Since I was having no luck with a bdist_rpm, I tried a normal bdist build to see if it would work. "python setup.py bdist" ran without failure and created dist/scipy-0.6.0.linux-i686.tar.gz. I manually installed it and ran the tests using:

python -c 'import numpy; import scipy; scipy.test(1,10)'

generation of a binary structure 4 ... ok
generic filter 1 ... ERROR
generic 1d filter 1 ... ERROR
generic gradient magnitude 1 ... ok
generic laplace filter 1 ... ok
geometric transform 1 ... ok
geometric transform 2 ... ok
geometric transform 3 ... ok
geometric transform 4 ... ok
geometric transform 5 ... ok
geometric transform 6 ... ok
geometric transform 7 ... ok
geometric transform 8 ... ok
geometric transform 10 ... ok
geometric transform 13 ... ok
geometric transform 14 ... ok
geometric transform 15 ... ok
geometric transform 16 ... ok
geometric transform 17 ... ok
geometric transform 18 ... ok
geometric transform 19 ... ok
geometric transform 20 ... ok
geometric transform 21 ... ok
geometric transform 22 ... ok
geometric transform 23 ... ok
geometric transform 24 ... ok
grey closing 1 ... ok
grey closing 2 ... ok
grey dilation 1 ... ok
grey dilation 2 ... ok
grey dilation 3 ... ok
grey erosion 1 ... ok
grey erosion 2 ... ok
grey erosion 3 ... ok
grey opening 1 ... ok
grey opening 2 ... ok
histogram 1*** glibc detected *** free(): invalid next size (fast): 0x098c0468 ***

Using python -c 'import scipy; import numpy; scipy.test' or python -c 'import scipy; scipy.test' produces:

...
gaussian filter 1 ... ok
gaussian filter 2 ... ok
gaussian filter 3 ... ok
gaussian filter 4 ... ok
gaussian filter 5 ... ok
gaussian filter 6 ... ok
gaussian gradient magnitude filter 1 ... ok
gaussian gradient magnitude filter 2 ... ok
gaussian laplace filter 1 ... ok
gaussian laplace filter 2 ... ok
generation of a binary structure 1 ... ok
generation of a binary structure 2 ... ok
generation of a binary structure 3 ... ok
generation of a binary structure 4 ... ok
generic filter 1Segmentation fault

Running: python -c 'import numpy; import scipy; scipy.test(1)' produced:

****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py,
  then scipy uses flapack instead of clapack.
****************************************************************

..E......................................................................................................../usr/lib/python2.3/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead.
  warnings.warn('Mode "reflect" may yield incorrect results on '
.............................................................................................EE.................................*** glibc detected *** free(): invalid next size (fast): 0x09eaeb88 ***
Aborted

System info is:
OS: Fedora Core 3
python -V: Python 2.3.4
uname -a: Linux hostname 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 i686 i386 GNU/Linux

So how do I build a working bdist_rpm using the 0.6.0 release of scipy? Thanks for your help.

--
-- rouilj John Rouillard System Administrator Renesys Corporation 603-643-9300 x 111

From alexander.borghgraef.rma at gmail.com Tue Oct 23 12:08:45 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Tue, 23 Oct 2007 18:08:45 +0200 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: <471DCD4D.7030703@ar.media.kyoto-u.ac.jp> References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> <471DCD4D.7030703@ar.media.kyoto-u.ac.jp> Message-ID: <9e8c52a20710230908g46f816abi84042469b6ad5689@mail.gmail.com> > Do you have fftw3 installed on your computer ? Yes.
Version 3.2.1 -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Oct 23 12:24:28 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 23 Oct 2007 18:24:28 +0200 Subject: [SciPy-user] Build problems with scipy-0.6.0 seg fault in tests, rpm build fails In-Reply-To: <20071023155329.GQ20376@renesys.com> References: <20071023155329.GQ20376@renesys.com> Message-ID: <20071023162428.GA9786@mentat.za.net> Hi John Thanks for the report. This bug has been fixed. Regards Stéfan On Tue, Oct 23, 2007 at 03:53:29PM +0000, John Rouillard wrote: > python -c 'import numpy; import scipy; scipy.test(1,10)' > > histogram 1*** glibc detected *** free(): invalid next size (fast): > 0x098c0468 *** From cookedm at physics.mcmaster.ca Tue Oct 23 12:47:28 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 23 Oct 2007 12:47:28 -0400 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> (Alexander Borghgraef's message of "Tue\, 23 Oct 2007 12\:36\:17 +0200") References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> Message-ID: "Alexander Borghgraef" writes: >> I upgraded to Gutsy yesterday, scipy 0.5.2 kept working no problem. But > then I upgraded to 0.6.0 from the http://debs.astraw.com/ > repository, and now when I import all of scipy, I get the error: > > : > /usr/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so: undefined > symbol: zfftnd_fftw > > Any idea what's causing this? Run 'ldd /usr/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so' to see which libraries it wants to load. That function is the wrapper routine for FFTW2 for the complex FFT. I'm not sure how it wouldn't be compiled in, though. -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From alexander.borghgraef.rma at gmail.com Tue Oct 23 13:17:04 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Tue, 23 Oct 2007 19:17:04 +0200 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> Message-ID: <9e8c52a20710231017y78eb4751mb4952c02a1c668ba@mail.gmail.com> On 10/23/07, David M. Cooke wrote: > > > > Run 'ldd /usr/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so' to > see which libraries it wants to load. That function is the wrapper > routine for FFTW2 for the complex FFT. I'm not sure how it wouldn't be > compiled in, though. Returns: linux-gate.so.1 => (0xffffe000) librfftw.so.2 => /usr/lib/librfftw.so.2 (0xb7f30000) libfftw.so.2 => /usr/lib/libfftw.so.2 (0xb7f05000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb7ede000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7eb8000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7ead000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7d63000) /lib/ld-linux.so.2 (0x80000000) -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Tue Oct 23 14:59:52 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 23 Oct 2007 13:59:52 -0500 Subject: [SciPy-user] Python mode in emacs 22 (Ubuntu Gutsy Gibbon) In-Reply-To: References: <20071022095441.GB12842@mentat.za.net> <20071022130418.GG19079@mentat.za.net> <20071022144007.GL19079@mentat.za.net> <20071022154555.GM19079@mentat.za.net> Message-ID: I have spent some time tinkering with this and think it might benefit emacs + ubuntu users. Stefan suggested I use emacs-snapshot from the peadrop repository. It was already in my package manager without having to add the repository. 
Aside from having beautiful anti-aliased fonts, it plays nicely with the python-mode that is distributed separately (the one that works with emacs 21). The snapshot is emacs 22.1.50.1. Having several emacs versions side-by-side doesn't seem to cause any issues. I just changed the symbolic link /usr/bin/emacs to point to the version I actually want to run (emacs-snapshot-gtk in my case). The emacs-22 that is in the package manager does not work with the separate python-mode package, and the python-mode built into emacs-22 doesn't have as many features. So, I highly recommend emacs-snapshot to solve this problem (emacs-22 and python-mode incompatibility). I didn't actually do this, but Stefan recommended:

sudo update-alternatives --set emacs-snapshot /usr/bin/emacs-snapshot-gtk
sudo aptitude reinstall emacs-snapshot

This may have been less hackish than changing the target of /usr/bin/emacs. Note that adding (setq inhibit-splash-screen t) to your .emacs file removes an annoyance with the splash screen in emacs-snapshot (annoying in my opinion at least). FWIW, Ryan On 10/22/07, Ryan Krauss wrote: > > O.K., I think everything is cool. I seem to have emacs-snapshot working > and it was fairly painless. I was having one small problem (annoyance). > The splash screen stays up until I either click in the window or type > something. When I type "emacs filename.py" at the command prompt, I want > to look at the file. I googled and found that setting this variable gets > rid of the splash screen entirely: > > '(inhibit-splash-screen t) > > It also gets rid of an initial scratch message. > > Is something else causing this behavior? Do you see the same thing? Is > this a good solution (i.e. customizing the inhibit-splash-screen variable) > or is there a better one? > > On 10/22/07, Stefan van der Walt wrote: > > > > Hey Ryan, > > > > No problem.
I am using the standard python-mode from > > pool/main/p/python-mode/python-mode_1.0-3.1ubuntu1_all.deb > > Cheers > Stéfan > > On Mon, Oct 22, 2007 at 10:40:02AM -0500, Ryan Krauss wrote: > > > Sorry to treat you like my personal emacs mailing list, but are you > > using the > > > python-mode in the standard repositories or from somewhere else? My > > > understanding was that this one wasn't compatible with emacs 22. Do > > you have a > > > newer version than the repository? > > > > > > Thanks again, > > > > > > Ryan > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anand.prabhakar.patil at gmail.com Tue Oct 23 15:26:38 2007 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 23 Oct 2007 12:26:38 -0700 Subject: [SciPy-user] How to get parameters out of an rv object Message-ID: <2bc7a5a50710231226k3d7b0cf1ka664c5f88fae6081@mail.gmail.com> Hi all, I'd like to wrap scipy.stats.rv objects in PyMC random variables. To do that the class factory needs to inspect the rv objects and figure out what parameters they're expecting. For instance, it would need to look at stats.norm and figure out that it's expecting loc and scale. I've tried just inspecting the pdf / pmf methods, but it hasn't worked because many of them seem to inherit the generic method taking only x, *args, **kwargs. Thanks, Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Tue Oct 23 15:32:24 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 23 Oct 2007 13:32:24 -0600 Subject: [SciPy-user] Fwd: interpolation question References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> Message-ID: Haven't heard any discussion on this issue since I posted last week - are these in fact bugs or am I not using the module correctly?
I also found that if I modify the code in fitpack2.py (see below) to compare dimension[0] with Y and dimension[1] with X, I still get back interpolated results whose dimensions are opposite of what was specified (rows => columns, columns => rows). If this is a persistent or low-priority issue, does anyone have any suggestions for workaround options for 2d interpolation of regularly spaced data? Thanks, Mike Hearne Begin forwarded message: > From: Michael Hearne > Date: October 17, 2007 2:46:31 PM MDT > To: SciPy Users List > Subject: Re: [SciPy-user] interpolation question > > John - Thanks very much, this is great! > > I do have one more question - when I use the code with a non-square > grid, I get an error saying that my x data does not match the x > dimension of my z data. > > In looking at the following code, it seems like x is being compared > with the number of rows of Z data, and y being compared with the > number of columns: >
> if not x.size == z.shape[0]:
>     raise TypeError,\
>         'x dimension of z must have same number of elements as x'
> if not y.size == z.shape[1]:
>     raise TypeError,\
>         'y dimension of z must have same number of elements as y'
>
> Isn't numpy row-major like Matlab? I.e., the first dimension is > that of rows, the second is that of columns? > > Thanks, > > Mike Hearne > > On Oct 17, 2007, at 1:17 PM, John Travers wrote: > >> from scipy.interpolate import RectBivariateSpline > > > > > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc.
------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zyzhu2000 at gmail.com Tue Oct 23 16:20:04 2007 From: zyzhu2000 at gmail.com (Geoffrey Zhu) Date: Tue, 23 Oct 2007 15:20:04 -0500 Subject: [SciPy-user] cond in lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) Message-ID: Good afternoon, everyone. I was trying to use lstsq() in scipy.linalg. The prototype looks like the following: def lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) cond is said to be used to determine effective rank of a, but it does not have any further explanation. Can anyone help me understand what exactly does cond do or where can I find some more information on this? Thanks, Geoffrey From dmitrey.kroshko at scipy.org Tue Oct 23 16:32:08 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 23 Oct 2007 23:32:08 +0300 Subject: [SciPy-user] cond in lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) In-Reply-To: References: Message-ID: <471E5A48.5070502@scipy.org> It uses LAPACK xGELSS routine (I guess prefix is 'D'), so you'd better contact LAPACK developers. BTW you could call the xGELSS routines from OpenOpt framework (see LLSP class), but w/o "cond" argument. Regards, D. Geoffrey Zhu wrote: > Good afternoon, everyone. > > I was trying to use lstsq() in scipy.linalg. The prototype looks like > the following: > > > def lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) > > cond is said to be used to determine effective rank of a, but it does > not have any further explanation. > > Can anyone help me understand what exactly does cond do or where can I > find some more information on this? 
> > Thanks, > Geoffrey > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From strawman at astraw.com Wed Oct 24 07:05:57 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 24 Oct 2007 04:05:57 -0700 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs. SciPy In-Reply-To: <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> Message-ID: <471F2715.3020103@astraw.com> Hi Alex, I believe I have fixed the problem. Please try upgrading to my latest package and let me know if you have any more issues. Remark #1 to scipy devs in general: a simple unittest should be added to do "import scipy.fftpack" to make sure these silly problems don't slip through -- this issue wasn't detected by scipy.test(10,10). I'd try to create a patch right now, but "svn up" resulted in "Connection reset by peer". I guess the whole scipy website is more-or-less down right now. Remark #2 to scipy devs in general: I think building of fftpack with fftw2 may be broken in 0.6.0. I don't have time to dig further right now. Alexander Borghgraef wrote: > > > I upgraded to Gutsy yesterday, scipy 0.5.2 kept working no problem. > But then I upgraded to 0.6.0 from the http://debs.astraw.com/ > repository, and now when import all of scipy, I get the error: > > : > /usr/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so: undefined > symbol: zfftnd_fftw > > Any idea what's causing this? 
> > -- > Alex Borghgraef > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From massimo.sandal at unibo.it Wed Oct 24 07:12:42 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 24 Oct 2007 13:12:42 +0200 Subject: [SciPy-user] from 1-d array to wave file? Message-ID: <471F28AA.5090809@unibo.it> Hi, Is there an easy way to transform a vector of data into a wave file? Let's say, if I have something like:

import numpy as np
a = np.arange(0, 100, 0.1)
b = [np.sin(item) for item in a]

how can I convert b into a wave file (with given sampling rate, etc.)? Is there an easy recipe? Thanks, m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From emanuele at relativita.com Wed Oct 24 07:27:12 2007 From: emanuele at relativita.com (Emanuele Olivetti) Date: Wed, 24 Oct 2007 13:27:12 +0200 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F28AA.5090809@unibo.it> References: <471F28AA.5090809@unibo.it> Message-ID: <471F2C10.1080000@relativita.com> massimo sandal wrote: > Hi, > > Is there an easy way to transform a vector of data into a wave file? > Let's say, if I have something like: > > import numpy as np > a = np.arange(0, 100, 0.1) > b = [np.sin(item) for item in a] > > how can I convert b into a wave file (with given sampling rate, etc.)? > Is there an easy recipe? > Audiolab ? http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/audiolab/index.html E.
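A minimal recipe along these lines needs nothing beyond Python's standard-library wave module plus numpy; the 44100 Hz sample rate and 16-bit scaling below are illustrative choices, not anything specified in the thread:

```python
import wave

import numpy as np

def write_wav(path, samples, rate=44100):
    """Write a float array with values in [-1, 1] as a mono 16-bit WAV file."""
    # Scale to the signed 16-bit range and convert to little-endian int16.
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype("<i2")
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 2 bytes = 16 bit
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

# The sine vector from the question above:
a = np.arange(0, 100, 0.1)
b = np.sin(a)
write_wav("sine.wav", b)
```

Note that at 44100 Hz these 1000 samples make a very short (and very low-pitched) file; scaling the argument of np.sin controls the audible frequency.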
From elcorto at gmx.net Wed Oct 24 07:42:27 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 24 Oct 2007 13:42:27 +0200 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F28AA.5090809@unibo.it> References: <471F28AA.5090809@unibo.it> Message-ID: <471F2FA3.7010201@gmx.net> massimo sandal wrote: > Hi, > > Is there an easy way to transform a vector of data into a wave file? > Let's say, if I have something like: > > import numpy as np > a = np.arange(0, 100, 0.1) > b = [np.sin(item) for item in a] > > how can I convert b into a wave file (with given sampling rate, etc.)? > Is there an easy recipe? > Google turns up http://docs.python.org/lib/module-wave.html. BTW, why don't you do b = np.sin(a) ? -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From grh at mur.at Wed Oct 24 08:27:08 2007 From: grh at mur.at (Georg Holzmann) Date: Wed, 24 Oct 2007 14:27:08 +0200 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F2C10.1080000@relativita.com> References: <471F28AA.5090809@unibo.it> <471F2C10.1080000@relativita.com> Message-ID: <471F3A1C.7090809@mur.at> Hallo! >> how can I convert b into a wave file (with given sampling rate, etc.)? >> Is there an easy recipe? There is also a module for reading/writing wave files in scipy: scipy.io.wavfile But I guess you need a recent scipy version ... LG Georg From jtravs at gmail.com Wed Oct 24 09:02:49 2007 From: jtravs at gmail.com (John Travers) Date: Wed, 24 Oct 2007 14:02:49 +0100 Subject: [SciPy-user] Fwd: interpolation question In-Reply-To: References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> Message-ID: <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> Hi Michael, On 23/10/2007, Michael Hearne wrote: > Haven't heard any discussion on this issue since I posted last week - are > these in fact bugs or am I not using the module correctly?
I do want to look at and solve these issues, but I'm extremely busy at the moment finishing off my thesis, so haven't got too much time to spend on it! > I do have one more question - when I use the code with a non-square grid, I > get an error saying that my x data does not match the x dimension of my z > data. > > In looking at the following code, it seems like x is being compared with the > number of rows of Z data, and y being compared with the number of columns: [snip] > Isn't numpy row-major like Matlab? I.e, The first dimension is that of > rows, the second is that of columns? numpy is generally row-major I think. However a lot of scipy is written in fortran and so the implementation may leak through. I think this is the case with these functions. However, I'm not quite sure I understand your problem. If you have a 2d array of data with m rows and n columns (i.e. m in first index, n in second) then I would expect to call a procedure like RectBivariateSpline with 1d arrays of length m and n, in that order. Does this not make sense? What is referred to as x and y is arbitrary right? In this case x just refers to the first dimension rather than fastest changing dimension. Anyway, the underlying code is in fortran and from a quick look at the source of fitpack2 I don't see any special treatment of x,y axes in any of the bivariate spline procedures so I suspect that the fortran convention is reflected directly in (this part) of scipy. Have you checked the other 2d spline functions like SmoothBivariateSpline etc? > I also found that if I modify the code in fitpack2.py (see below) to compare > dimension[0] with Y and dimension[1] with X, I still get back interpolated > results whose dimensions are opposite of what was specified (rows => > columns, columns=> rows). This is because the actual interpolation is performed in the base class of all of the bivariate spline procedures, see the __call__ method of BivariateSpline for this. 
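A small sketch of this shape convention (hypothetical data; it assumes the scipy.interpolate API, in which z must have shape (x.size, y.size), exactly as the check quoted above enforces):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical regularly spaced, non-square grid: 6 x-samples, 5 y-samples.
x = np.linspace(0.0, 1.0, 6)
y = np.linspace(0.0, 1.0, 5)
# z[i, j] corresponds to the point (x[i], y[j]): the first index follows x.
z = np.cos(x)[:, None] * np.sin(y)[None, :]

spl = RectBivariateSpline(x, y, z)   # works because z.shape == (x.size, y.size)
out = spl(np.linspace(0, 1, 11), np.linspace(0, 1, 7))
print(out.shape)                     # (11, 7): rows follow x, columns follow y
```

Swapping the roles of x and y (passing arrays of length 5 and 6 against the same z) triggers the dimension-mismatch error discussed in this thread.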
As I noted above this convention will be across all of the bivariate spline methods. Is it not also reflected in interp2d? Maybe I will understand your problem better if you give me a short code example which shows the issue. I'll then discuss it with other scipy developers and see if a consensus can be agreed upon about whether this is a bug that should be fixed or simply a convention to be observed. There is a slight problem in that this code is now 4 years in use (though RectBivariateSpline only about a year) and so changing it now may not be possible due to backwards compatibility. I'll try and consider this in more detail when I get more time! Best regards, John From massimo.sandal at unibo.it Wed Oct 24 09:28:31 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 24 Oct 2007 15:28:31 +0200 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F2FA3.7010201@gmx.net> References: <471F28AA.5090809@unibo.it> <471F2FA3.7010201@gmx.net> Message-ID: <471F487F.4010704@unibo.it> Steve Schmerler ha scritto: > massimo sandal wrote: > Google turns up http://docs.python.org/lib/module-wave.html. I know it, but it doesn't help. Its documentation is, to say the least, terse. > BTW, why don't you do > b = np.sin(a) ? Because I didn't know it was possible. :) m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From massimo.sandal at unibo.it Wed Oct 24 09:30:17 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 24 Oct 2007 15:30:17 +0200 Subject: [SciPy-user] from 1-d array to wave file?
In-Reply-To: <471F2C10.1080000@relativita.com> References: <471F28AA.5090809@unibo.it> <471F2C10.1080000@relativita.com> Message-ID: <471F48E9.6070802@unibo.it> Emanuele Olivetti ha scritto: > Audiolab ? > http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/audiolab/index.html Good suggestion! Thanks, I'll definitely look into it. I hoped there was no need for another dependency for this little thing, however. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From massimo.sandal at unibo.it Wed Oct 24 09:33:44 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 24 Oct 2007 15:33:44 +0200 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F3A1C.7090809@mur.at> References: <471F28AA.5090809@unibo.it> <471F2C10.1080000@relativita.com> <471F3A1C.7090809@mur.at> Message-ID: <471F49B8.5030807@unibo.it> Georg Holzmann ha scritto: > Hallo! > >>> how can I convert b into a wave file (with given sampling rate, etc.)? >>> Is there an easy recipe? > > there is also a module for reading/writing wave files in scipy: > scipy.io.wavfile > > But I guess you need a recent scipy version ... I think so too; mine (0.5.2) seems not recent enough. However I'll look for it when upgrading. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed...
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From grh at mur.at Wed Oct 24 10:05:27 2007 From: grh at mur.at (Georg Holzmann) Date: Wed, 24 Oct 2007 16:05:27 +0200 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F49B8.5030807@unibo.it> References: <471F28AA.5090809@unibo.it> <471F2C10.1080000@relativita.com> <471F3A1C.7090809@mur.at> <471F49B8.5030807@unibo.it> Message-ID: <471F5127.5010002@mur.at> Hallo! > I think too; mine (0.5.2) seems not recent enough. > However I'll look for it when upgrading. It's in 0.6 ! LG Georg From mhearne at usgs.gov Wed Oct 24 10:51:20 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 24 Oct 2007 08:51:20 -0600 Subject: [SciPy-user] Fwd: interpolation question In-Reply-To: <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> Message-ID: <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> John - I guess I have a different bias - in Matlab (and now numpy) I've always thought of rows being in the y direction, and columns as being in the x direction. This explains my confusion with the x and y parameters. I also just figured out what the rest of my problem was - more confusion with x and y when extracting the interpolated results. You may rest easy, or as much as you can while finishing off your thesis! Is it uncommon to think of columns as being in the x direction, and rows in y? --Mike On Oct 24, 2007, at 7:02 AM, John Travers wrote: > Hi Micheal, > > On 23/10/2007, Michael Hearne wrote: >> Haven't heard any discussion on this issue since I posted last >> week - are >> these in fact bugs or am I not using the module correctly? > > I do want to look at and solve these issues, but I'm extremely busy at > the moment finishing off my thesis, so haven't got too much time to > spend on it! 
> >> I do have one more question - when I use the code with a non- >> square grid, I >> get an error saying that my x data does not match the x dimension >> of my z >> data. >> >> In looking at the following code, it seems like x is being >> compared with the >> number of rows of Z data, and y being compared with the number of >> columns: > > [snip] > >> Isn't numpy row-major like Matlab? I.e, The first dimension is >> that of >> rows, the second is that of columns? > > numpy is generally row-major I think. However a lot of scipy is > written in fortran and so the implementation may leak through. I think > this is the case with these functions. > > However, I'm not quite sure I understand your problem. If you have a > 2d array of data with m rows and n columns (i.e. m in first index, n > in second) then I would expect to call a procedure like > RectBivariateSpline with 1d arrays of length m and n, in that order. > Does this not make sense? What is referred to as x and y is arbitrary > right? In this case x just refers to the first dimension rather than > fastest changing dimension. > > Anyway, the underlying code is in fortran and from a quick look at the > source of fitpack2 I don't see any special treatment of x,y axes in > any of the bivariate spline procedures so I suspect that the fortran > convention is reflected directly in (this part) of scipy. Have you > checked the other 2d spline functions like SmoothBivariateSpline etc? > >> I also found that if I modify the code in fitpack2.py (see below) >> to compare >> dimension[0] with Y and dimension[1] with X, I still get back >> interpolated >> results whose dimensions are opposite of what was specified (rows => >> columns, columns=> rows). > > This is because the actual interpolation is performed in the base > class of all of the bivariate spline procedures, see the __call__ > method of BivariateSpline for this. As I noted above this convention > will be across all of the bivariate spline methods. 
Is it not also > reflected in interp2d? > > Maybe I will understand your problem better if you give me a short > code example which shows the issue. I'll then discuss it with other > scipy developers and see if a consensus can be agreed upon about > whether this is a bug that should be fixed or simply a convention to > be observed. There is a slight problem in that this code is now 4 > years in use (though RectBivariateSpline only about a year) and so > changing it now may not be possible due to backards compatibility. > > I'll try and consider this in more detail when I get more time! > > Best regards, > John > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zyzhu2000 at gmail.com Wed Oct 24 11:00:44 2007 From: zyzhu2000 at gmail.com (Geoffrey Zhu) Date: Wed, 24 Oct 2007 10:00:44 -0500 Subject: [SciPy-user] cond in lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) In-Reply-To: <471E5A48.5070502@scipy.org> References: <471E5A48.5070502@scipy.org> Message-ID: Thanks Dmitrey! On 10/23/07, dmitrey wrote: > It uses LAPACK xGELSS routine (I guess prefix is 'D'), so you'd better > contact LAPACK developers. > BTW you could call the xGELSS routines from OpenOpt framework (see LLSP > class), but w/o "cond" argument. > Regards, D. > > > Geoffrey Zhu wrote: > > Good afternoon, everyone. > > > > I was trying to use lstsq() in scipy.linalg. 
The prototype looks like > > the following: > > > > > > def lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) > > > > cond is said to be used to determine effective rank of a, but it does > > not have any further explanation. > > > > Can anyone help me understand what exactly does cond do or where can I > > find some more information on this? > > > > Thanks, > > Geoffrey > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jtravs at gmail.com Wed Oct 24 12:00:08 2007 From: jtravs at gmail.com (John Travers) Date: Wed, 24 Oct 2007 17:00:08 +0100 Subject: [SciPy-user] Fwd: interpolation question In-Reply-To: <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> Message-ID: <3a1077e70710240900j4422b150ua4aca2797a332afd@mail.gmail.com> Hi Mike, On 24/10/2007, Michael Hearne wrote: > John - I guess I have a different bias - in Matlab (and now numpy) I've > always thought of rows being in the y direction, and columns as being in the > x direction. This explains my confusion with the x and y parameters. > Is it uncommon to think of columns as being in the x direction, and rows in > y? > I think we are just confusing each other. If you look at my bad ascii art 2d-array below: -------------------- | 0,0 | 0,1 | 0,2 | -------------------- | 1,0 | 1,1 | 1,2 | -------------------- My understanding is: This is a 2 by 3 array. The first index runs from 0 to 1 inclusive. This indexes the rows of the array. The second index runs from 0 to 2 inclusive and indexes the columns of the array. So 1,0 is the 2nd row, 1st column. 
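The same array can be written out in numpy to check that reading directly (a small illustrative sketch, with made-up values):

```python
import numpy as np

# The 2-by-3 array from the ascii art above: first index = row, second = column.
arr = np.array([[0, 1, 2],
                [10, 11, 12]])

print(arr.shape)   # (2, 3): 2 rows, 3 columns
print(arr[1, 0])   # 10, i.e. the entry in the 2nd row, 1st column
```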
Now I regard x as the first index (because I think of it as the first axis). Therefore x corresponds to the rows. So obviously y is the second index and hence corresponds to the columns. I think maybe our confusion comes from the fact that for a given column (or y value) we index through it by row (or x value) and maybe you think of this as each set of x's lies in a column? The fortran bias comes from the fact that in fortran it is more efficient to change the inner index (row or x) fastest. This agrees with my idea of x corresponding to the first index (rows) as x is often the fastest changing axis (though this is an arbitrary choice). Anyway I think Matlab, numpy and Fortran all work like this in terms of indexing. But numpy (and Matlab?) is written in c and therefore it is usually more efficient to change the second index fastest. I say usually as numpy also supports storing arrays in fortran order, but users don't normally need to worry about this. So I guess another source of confusion is that if users want x to be the most efficiently indexed axis, they should consider it to be represented by the second index (or columns). By the above interpretation I think my implementation of RectBivariateSpline is consistent with the first index changing x. If you think this is a bad choice or inconsistent with the rest of scipy/numpy I'd like to hear. I apologise if all this is too basic for you, I don't mean to insult! Cheers, John From alexander.borghgraef.rma at gmail.com Wed Oct 24 13:07:39 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Wed, 24 Oct 2007 19:07:39 +0200 Subject: [SciPy-user] Ubuntu 7.10 Gutsy Gibbon vs.
SciPy In-Reply-To: <471F2715.3020103@astraw.com> References: <47347f490710201458y14c2cbe9n1280d3f52e664b73@mail.gmail.com> <9e8c52a20710230336g6bacdac4nadfd0a658de4aee6@mail.gmail.com> <471F2715.3020103@astraw.com> Message-ID: <9e8c52a20710241007s77b95366o85396d76bbdc922c@mail.gmail.com> On 10/24/07, Andrew Straw wrote: > > Hi Alex, > > I believe I have fixed the problem. Please try upgrading to my latest > package and let me know if you have any more issues. Ok, upgraded, problem solved. Thanks alot! -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Wed Oct 24 13:51:48 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 24 Oct 2007 11:51:48 -0600 Subject: [SciPy-user] Fwd: interpolation question In-Reply-To: <3a1077e70710240900j4422b150ua4aca2797a332afd@mail.gmail.com> References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> <3a1077e70710240900j4422b150ua4aca2797a332afd@mail.gmail.com> Message-ID: <9C5DF73D-1346-48C5-ACF7-4C3E2FB310AF@usgs.gov> John - I thought about it some more, and I think part of my bias comes from a background of spatial computing, and working with remotely sensed images. Most of the time when I'm working with 2d grid data, that grid represents _geographic_ data, where each grid "cell" represents a spot on the ground. For example, it is common to represent gridded spatial data with the following parameters: Upper left X coordinate (i.e. longitude) of the center of the upper left grid cell Upper left Y coordinate (i.e. 
latitude) of the center of the upper left grid cell X dimension of each grid cell in linear units Y dimension of each grid cell in linear units Number of rows Number of columns Then if you have a 2d array in numpy, getting geographic coordinates from grid coordinates is as simple as: spatial_x = ulx + col*xdim spatial_y = uly - row*ydim For this kind of computing, it is much less confusing if the data at [0,ncols] is at a greater longitude than the data at [0,0], and data at [0,0] is at a greater latitude than the data at [nrows,0]. Thinking this way for so many years has probably caused me to think that it is "natural" for rows to be the Y dimension, and columns to be the X dimension. I get screwed up going between math angles and geographic direction too! Your points are correct and well taken, and looking at the documentation for RectBivariateSpline it _does_ say that x corresponds to the first dimension, and y corresponds to the second. My bad for not RTFM. Thanks, Mike Hearne On Oct 24, 2007, at 10:00 AM, John Travers wrote: > Hi Mike, > > On 24/10/2007, Michael Hearne wrote: >> John - I guess I have a different bias - in Matlab (and now numpy) >> I've >> always thought of rows being in the y direction, and columns as >> being in the >> x direction. This explains my confusion with the x and y parameters. > >> Is it uncommon to think of columns as being in the x direction, >> and rows in >> y? >> > > I think we are just confusing each other. If you look at my bad ascii > art 2d-array below: > > -------------------- > | 0,0 | 0,1 | 0,2 | > -------------------- > | 1,0 | 1,1 | 1,2 | > -------------------- > > My understanding is: > This is a 2 by 3 array. The first index runs from 0 to 1 inclusive. > This indexes the rows of the array. The second index runs from 0 to 2 > inclusive and indexes the columns of the array. So 1,0 is the 2nd row, > 1st column. Now I regard x as the first index (because I think of it > as the first axis). 
Therefore x corresponds to the rows. So obviously > y is the second index and hence corresponds to the columns. I think > maybe out confusion comes from the fact that for a given column (or y > value) we index through it by row (or x value) and maybe you think of > this as each set of x's lies in a column? > > The fortran bias comes from the fact that in fortran it is more > efficient to change the inner index (row or x) fastest. This agrees > with my idea of x corresponding to the first index (rows) as x is > often the fastest changing axis (though this is an arbitrary choice). > > Anyway I think Matlab, numpy and Fortran all work like this in terms > of indexing. > > But numpy (and Matlab?) is written in c and therefore it is usually > more efficient to change the second index fastest. I say usually as > numpy also supports storing arrays in fortran order, but users don't > normally need to worry about this. So I guess another source of > confusion is that if users want x to be the most efficiently indexed > axis, they should consider it to be represented by the second index > (or columns). > > By the above interpretation I think my implementation of > RectBivariateSpline is consistent with the first index changing x. If > you think this is a bad choice or inconsistent with the rest of > scipy/numpy I'd like to hear. > > I apologise if all this is too basic for you, I don't mean to insult! > > Cheers, > John > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Wed Oct 24 15:28:59 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 24 Oct 2007 21:28:59 +0200 Subject: [SciPy-user] Fwd: interpolation question In-Reply-To: <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> Message-ID: <20071024192859.GB8452@mentat.za.net> On Wed, Oct 24, 2007 at 08:51:20AM -0600, Michael Hearne wrote: > Is it uncommon to think of columns as being in the x direction, and > rows in y? In my experience, that is the most common convention. Some books, like Image Processing (Gonzales & Woods) swap the axes, but that is probably done to be consistent with array indexing. Then, there is still the issue of the origin. In the cartesian plane the origin is at the lower left-hand corner of the array, whereas in array indexing it is at the top left-hand corner. Personally, I find it easiest to always use row and column indices, and to refrain from using x or y to describe them. On the other hand, this may not be easy when implementing algorithms from literature, set in the X-Y plane. Regards St?fan From bnuttall at uky.edu Wed Oct 24 17:20:33 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Wed, 24 Oct 2007 17:20:33 -0400 Subject: [SciPy-user] Plotting with PyLab Message-ID: Folks, I'd like to make a plot (graph) where the x and y units have the same scale. That is, one unit in the x direction is the same physical size as 1 unit in the y direction; circles will be circles and squares squares. What I'm doing is I'm given a path through 3d space. At a given distance, MD, along that path I am supplied an azimuth (the departure from vertical) and a bearing. I can translate these data to (x,y,z). What I want to do is make three maps showing the trajectory of the path: XY space, XZ space, and YZ space. 
At this time, a 3D visualization is not needed and, in actuality, only the XY plot is required. If pylab is not right, suggestions on what module I should use is appreciated. Brandon [cid:image001.jpg at 01C81662.2F6C1E40] Powered by CardScan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 17011 bytes Desc: image001.jpg URL: From dominique.orban at gmail.com Wed Oct 24 17:45:47 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Wed, 24 Oct 2007 17:45:47 -0400 Subject: [SciPy-user] Plotting with PyLab In-Reply-To: References: Message-ID: <8793ae6e0710241445l37d338f7k7ddfe5475a3c70bc@mail.gmail.com> On 10/24/07, Nuttall, Brandon C wrote: > I'd like to make a plot (graph) where the x and y units have the same > scale. That is, one unit in the x direction is the same physical size as 1 > unit in the y direction; circles will be circles and squares squares. What > I'm doing is I'm given a path through 3d space. At a given distance, MD, > along that path I am supplied an azimuth (the departure from vertical) and a > bearing. I can translate these data to (x,y,z). > > What I want to do is make three maps showing the trajectory of the path: > XY space, XZ space, and YZ space. At this time, a 3D visualization is not > needed and, in actuality, only the XY plot is required. > > If pylab is not right, suggestions on what module I should use is > appreciated. 
> Hi Brandon, If I understand well, what you're looking for is something like this: import pylab fig = pylab.figure() # Create a figure ax = fig.gca() # Obtain axes in the current figure ax.plot( [0,1], [0,0], 'k-' ) # Plot the [0,1] x [0,1] square ax.plot( [0,0], [0,1], 'k-' ) ax.plot( [0,1], [1,1], 'k-' ) ax.plot( [1,1], [0,1], 'k-' ) ax.set_xlim( [-0.3,1.3] ) # Adjust x and y range ax.set_ylim( [-0.3,1.3] ) ax.set_aspect('equal') # Make aspect ratio equal to 1 pylab.show() and that should produce a square that indeed looks square and not rectangular. Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: From barrywark at gmail.com Wed Oct 24 17:51:03 2007 From: barrywark at gmail.com (Barry Wark) Date: Wed, 24 Oct 2007 14:51:03 -0700 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: <20071022115537.GD19079@mentat.za.net> <471D0372.3020203@relativita.com> Message-ID: For those interested, I've cleaned up (and documented a little) my wrapper for the Approximate Nearest Neighbor library. The source is available as a zip archive at http://rieke-server.physiol.washington.edu/~barry/ann/ANNwrapper.zip (BSD license). It provides a fast, dependency free (except for the Approximate Nearest Neighbor Library and numpy) kd-tree implementation of the k-nearest neighbor search. Not all of the ANN library is wrapped yet (You can see what's left in the kdtree.h.not_ready_for_prime_time file). The setup.py contains instructions for installation and usage. You will need the ANN library (from http://www.cs.umd.edu/~mount/ANN/), setuptools, and SWIG installed. API is documented in kdtree.h. I think this wrapper may be useful as an addition to scipy, but I haven't investigated where it might be most appropriate. I'd be happy to relicense if necessary. Any suggestions or patches welcome. Barry On 10/22/07, aldarion wrote: > I'm interested! Thanks. > +2 > > > > On 10/23/07, Emanuele Olivetti wrote: > > I'm interested! Thanks. 
> > +1 > > > > E, > > > > Barry Wark wrote: > > > I've been working on a numpy-compatible SWIG wrapper for the ANN > > > library. I've got most of the library wrapped (though not all of it). > > > I'd be happy to send you a tarball. If there's interest from more than > > > one or two people, I'll motivate to finish it up and document it for > > > public release. > > > > > > Barry > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From david at ar.media.kyoto-u.ac.jp Wed Oct 24 21:29:42 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 25 Oct 2007 10:29:42 +0900 Subject: [SciPy-user] from 1-d array to wave file? In-Reply-To: <471F48E9.6070802@unibo.it> References: <471F28AA.5090809@unibo.it> <471F2C10.1080000@relativita.com> <471F48E9.6070802@unibo.it> Message-ID: <471FF186.1060604@ar.media.kyoto-u.ac.jp> massimo sandal wrote: > Emanuele Olivetti ha scritto: >> Audiolab ? >> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/audiolab/index.html >> > > Good suggestion! Thanks, I'll definitely look into it. I hoped there > was no need of another dependency for this little thing, however. There is: writing and reading wav files reliably is not so easy, and libsndfile, the library I am using, works really well. If all you need is a quick way to dump data in a wav file, then the python lib is easier, but if you need some kind of performance (i.e. avoiding converting to and from list), or more control, then audiolab is better.
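For readers of this thread: the "quick way to dump data in a wav file" with the standard library that David mentions could be sketched as below. This is only an illustration, not audiolab — the helper name, the 16-bit scaling, and the 440 Hz test tone are all invented for the example.

```python
import wave
import numpy as np

def write_wav(filename, data, rate=44100):
    """Dump a 1-d float array with values in [-1, 1] to a 16-bit mono wav."""
    scaled = (np.clip(data, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(filename, 'wb') as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(2)          # 2 bytes = 16 bits per sample
        w.setframerate(rate)
        w.writeframes(scaled.tobytes())

# one second of a 440 Hz sine tone
t = np.linspace(0.0, 1.0, 44100, endpoint=False)
write_wav('tone.wav', 0.5 * np.sin(2 * np.pi * 440.0 * t))
```

As David notes, this avoids the extra dependency but offers no control over formats beyond plain PCM wav; for anything more, a libsndfile-based library is the better route.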
David From jtravs at gmail.com Thu Oct 25 05:13:37 2007 From: jtravs at gmail.com (John Travers) Date: Thu, 25 Oct 2007 10:13:37 +0100 Subject: [SciPy-user] Fwd: interpolation question In-Reply-To: <20071024192859.GB8452@mentat.za.net> References: <4E9B849F-344B-4A0D-B04F-E0D611DB70C9@usgs.gov> <3a1077e70710240602t79c53d1dgaee99dffd5d4994c@mail.gmail.com> <5DD62803-2F80-4039-8292-0D40778341DD@usgs.gov> <20071024192859.GB8452@mentat.za.net> Message-ID: <3a1077e70710250213m6db6e013i2d0bc2808f6d11c8@mail.gmail.com> On 24/10/2007, Stefan van der Walt wrote: > On Wed, Oct 24, 2007 at 08:51:20AM -0600, Michael Hearne wrote: > > Is it uncommon to think of columns as being in the x direction, and > > rows in y? > > In my experience, that is the most common convention. After surveying my colleagues in the lab I have come to conclude that you are right, and I'm the odd one out here.... > Personally, I find it easiest to always use row and column indices, > and to refrain from using x or y to describe them. On the other hand, > this may not be easy when implementing algorithms from literature, set > in the X-Y plane. I agree with this, however, the original confusion here came from the fact that I called the dummy argument names of RectBivariateSpline x, y, and z. Maybe they should have been named in a more transparent way like row_basis, col_basis, data etc. Cheers, John From lorenzo.isella at gmail.com Thu Oct 25 05:55:50 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 25 Oct 2007 11:55:50 +0200 Subject: [SciPy-user] Caveat About integrate.odeint Message-ID: Dear All, I do not know if what I am going to write is really useful (maybe it is pretty obvious for everybody on this list). I have been using integrate.odeint for quite a while to solve some population equations. 
Then I made a trivial change (nothing leading to different physics or in general such as to justify any substantial difference with the previous results), and I woke up in a nightmare: precision errors, routine crashing etc... I think I know what happened: in my code I was using t [time], T(t) [time-dependent temperature], t_0 (initial time) and T_0 (initial temperature). For Python there is no possibility of confusion, but the underlying Fortran made a mess out of this... Something very trivial, but it took me a day and a half to debug this. Hope it was useful. Cheers Lorenzo From stefan at sun.ac.za Thu Oct 25 02:58:01 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 25 Oct 2007 08:58:01 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: <20071022115537.GD19079@mentat.za.net> <471D0372.3020203@relativita.com> Message-ID: <20071025065801.GG8452@mentat.za.net> Hi Barry On Wed, Oct 24, 2007 at 02:51:03PM -0700, Barry Wark wrote: > For those interested, I've cleaned up (and documented a little) my > wrapper for the Approximate Nearest Neighbor library. The source is > available as a zip archive at > http://rieke-server.physiol.washington.edu/~barry/ann/ANNwrapper.zip > (BSD license). Fantastic, this is a very useful contribution. > The setup.py contains instructions for installation and usage. You > will need the ANN library (from http://www.cs.umd.edu/~mount/ANN/), > setuptools, and SWIG installed. Maybe also add instructions to build in-place, i.e. python setup.py build_ext -i > I think this wrapper may be useful as an addition to scipy, but I > haven't investigated where it might be most appropriate. I'd be happy > to relicense if necessary. The license is LGPL, so a SciKit would be appropriate. > Any suggestions or patches welcome. For statistical unit tests, use numpy.testing.assert_almost_equal; this leaves a bit of leeway for floating point roundoff (some of the tests currently fail [randomly] on my system).
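Stefan's suggestion in concrete form: numpy.testing.assert_almost_equal compares to a number of decimal places (7 by default) rather than exactly, so harmless floating point roundoff does not make a test fail at random. A minimal sketch:

```python
import numpy as np
from numpy.testing import assert_almost_equal

a = 0.1 + 0.2                 # 0.30000000000000004 in IEEE double precision
assert a != 0.3               # an exact comparison "fails" on roundoff...
assert_almost_equal(a, 0.3)   # ...while the approximate one passes

# it also accepts arrays and checks them elementwise
x = np.linspace(0.0, 1.0, 5)
assert_almost_equal(x.sum(), 2.5)
```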
Regards Stéfan From alexander.borghgraef.rma at gmail.com Thu Oct 25 06:20:07 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Thu, 25 Oct 2007 12:20:07 +0200 Subject: [SciPy-user] Weird label behaviour in ndimage In-Reply-To: <9e8c52a20709190205t57e4a5f2xac934f0ad94cb5a8@mail.gmail.com> References: <9e8c52a20709180652g5b9a6835pd48c843533e1af2@mail.gmail.com> <9e8c52a20709180659l460f86devc361a12060cc2d3a@mail.gmail.com> <46EFDEE2.1080706@gmail.com> <9e8c52a20709190205t57e4a5f2xac934f0ad94cb5a8@mail.gmail.com> Message-ID: <9e8c52a20710250320v3a24e1f3m6b28f5361681fd1a@mail.gmail.com> Upgraded: bug's gone. -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From emanuele at relativita.com Thu Oct 25 07:57:37 2007 From: emanuele at relativita.com (Emanuele Olivetti) Date: Thu, 25 Oct 2007 13:57:37 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: <20071022115537.GD19079@mentat.za.net> <471D0372.3020203@relativita.com> Message-ID: <472084B1.5070107@relativita.com> Barry Wark wrote: > For those interested, I've cleaned up (and documented a little) my > wrapper for the Approximate Nearest Neighbor library. The source is > available as a zip archive at > http://rieke-server.physiol.washington.edu/~barry/ann/ANNwrapper.zip > (BSD license). > > Excellent! What about moving to scikits? http://scipy.org/scipy/scikits/wiki E. From bnuttall at uky.edu Thu Oct 25 09:46:11 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Thu, 25 Oct 2007 09:46:11 -0400 Subject: [SciPy-user] Plotting with PyLab In-Reply-To: <8793ae6e0710241445l37d338f7k7ddfe5475a3c70bc@mail.gmail.com> References: <8793ae6e0710241445l37d338f7k7ddfe5475a3c70bc@mail.gmail.com> Message-ID: Thanks Dominique. I was looking at the docstrings of the pylab objects to see if I could figure out how to invert the axes (I did). I found that axis() has an argument that works.
import pylab pylab.figure(1) pylab.axis('equal') ... Brandon [cid:image001.jpg at 01C816EB.E0508C80] Powered by CardScan ________________________________ From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Dominique Orban Sent: Wednesday, October 24, 2007 5:46 PM To: SciPy Users List Subject: Re: [SciPy-user] Plotting with PyLab On 10/24/07, Nuttall, Brandon C > wrote: I'd like to make a plot (graph) where the x and y units have the same scale. That is, one unit in the x direction is the same physical size as 1 unit in the y direction; circles will be circles and squares squares. What I'm doing is I'm given a path through 3d space. At a given distance, MD, along that path I am supplied an azimuth (the departure from vertical) and a bearing. I can translate these data to (x,y,z). What I want to do is make three maps showing the trajectory of the path: XY space, XZ space, and YZ space. At this time, a 3D visualization is not needed and, in actuality, only the XY plot is required. If pylab is not right, suggestions on what module I should use is appreciated. Hi Brandon, If I understand well, what you're looking for is something like this: import pylab fig = pylab.figure() # Create a figure ax = fig.gca() # Obtain axes in the current figure ax.plot( [0,1], [0,0], 'k-' ) # Plot the [0,1] x [0,1] square ax.plot( [0,0], [0,1], 'k-' ) ax.plot( [0,1], [1,1], 'k-' ) ax.plot( [1,1], [0,1], 'k-' ) ax.set_xlim( [-0.3,1.3] ) # Adjust x and y range ax.set_ylim( [-0.3,1.3] ) ax.set_aspect('equal') # Make aspect ratio equal to 1 pylab.show() and that should produce a square that indeed looks square and not rectangular. Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 17011 bytes Desc: image001.jpg URL: From s.mientki at ru.nl Thu Oct 25 11:21:54 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 25 Oct 2007 17:21:54 +0200 Subject: [SciPy-user] static magnetic field ? Message-ID: <4720B492.1010707@ru.nl> hello, does anyone knows about a Python library, for the calculation and visualization of static magnetic fields ? thanks, Stef Mientki From peridot.faceted at gmail.com Thu Oct 25 11:40:02 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 25 Oct 2007 11:40:02 -0400 Subject: [SciPy-user] Caveat About integrate.odeint In-Reply-To: References: Message-ID: On 25/10/2007, Lorenzo Isella wrote: > I do not know if what I am going to write is really useful (maybe it > is pretty obvious for everybody on this list). > I have been using integrate.odeint for quite a while to solve some > population equations. > Then I made a trivial change (nothing leading to different physics or > in general such as to justify any substantial difference with the > previous results), and I woke up in a nightmare: precision errors, > routine crashing etc... > I think I now what happened: in my code I was using t [time], T(t) > [time-dependent temperature], t_0 (initial time) and T_0 (initial > temperature). > For Python there is no possibility of confusion, but the underlying > Fortran made a mess out of this... > Something very trivial, but it took me a day and a half to debug this. > Hope it was useful. Do you have a small piece of demo code? This is very surprising, as FORTRAN should never see the variable names. I can't replicate it in spite of headache-inducing variable names: In [17]: T = lambda t, T: T In [18]: T0 = 1 In [19]: t = [0,1,2] In [20]: scipy.integrate.odeint(T,T0,t) Out[20]: array([[ 1. 
], [ 1.50000001], [ 3.00000001]]) Anne From ellisonbg.net at gmail.com Thu Oct 25 12:32:13 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Thu, 25 Oct 2007 10:32:13 -0600 Subject: [SciPy-user] static magnetic field ? In-Reply-To: <4720B492.1010707@ru.nl> References: <4720B492.1010707@ru.nl> Message-ID: <6ce0ac130710250932v17db06eepc9c29e5ae0e849f2@mail.gmail.com> What exactly to you mean by this? On 10/25/07, Stef Mientki wrote: > hello, > > does anyone knows about a Python library, > for the calculation and visualization of static magnetic fields ? > > thanks, > Stef Mientki > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From zunzun at zunzun.com Thu Oct 25 12:49:35 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 25 Oct 2007 12:49:35 -0400 Subject: [SciPy-user] static magnetic field ? In-Reply-To: <4720B492.1010707@ru.nl> References: <4720B492.1010707@ru.nl> Message-ID: <20071025164934.GA26091@zunzun.com> On Thu, Oct 25, 2007 at 05:21:54PM +0200, Stef Mientki wrote: > > does anyone knows about a Python library, > for the calculation and visualization of static magnetic fields ? If Python bindings are OK, http://www.mare.ee/indrek/ephi/ Ephi is a C++ physics simulation software to simulate static magnetic fields and movement of charged particles in those fields (using the Lorentz force). Coulomb forces are also accounted when simulating particle paths. So Ephi allows you to model and visualize magnetic fields through current elements and also to visualize electron paths within those fields. Magnetic fields are calculated using numeric integration over the Biot-Savart law. looks like exactly what you need, because in the README file: Python interface ~~~~~~~~~~~~~~~~~ Ephi also provides a python interface for programming the simulation scenes in python. See src/*.py for examples. 
James From s.mientki at ru.nl Thu Oct 25 13:27:01 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 25 Oct 2007 19:27:01 +0200 Subject: [SciPy-user] static magnetic field ? In-Reply-To: <20071025164934.GA26091@zunzun.com> References: <4720B492.1010707@ru.nl> <20071025164934.GA26091@zunzun.com> Message-ID: <4720D1E5.9060608@ru.nl> zunzun at zunzun.com wrote: > On Thu, Oct 25, 2007 at 05:21:54PM +0200, Stef Mientki wrote: > >> does anyone knows about a Python library, >> for the calculation and visualization of static magnetic fields ? >> > > > If Python bindings are OK, > > http://www.mare.ee/indrek/ephi/ > > Ephi is a C++ physics simulation software to simulate static > magnetic fields and movement of charged particles in those > fields (using the Lorentz force). Coulomb forces are also > accounted when simulating particle paths. So Ephi allows you > to model and visualize magnetic fields through current > elements and also to visualize electron paths within those > fields. Magnetic fields are calculated using numeric > integration over the Biot-Savart law. > > > looks like exactly what you need, because in the README file: > > Python interface > ~~~~~~~~~~~~~~~~~ > Ephi also provides a python interface for programming > the simulation scenes in python. See src/*.py for examples. > > Thanks James, I found that too, and indeed it seems to fit my needs, (although I want to experiment with permanent magnets, Halbach arrays, I simulate that with coils too), but unfortunately there is very little documentation about the functions and its parameters in Ephi.
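For the simple coil geometries Stef describes, the Biot-Savart integration that ephi's README mentions can be sketched directly in numpy. This is emphatically not ephi's API — the function name, the midpoint discretisation, and every parameter below are invented for the example. The analytic on-axis result B_z = mu0*I*a^2 / (2*(a^2 + z^2)^(3/2)) provides a sanity check.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in T*m/A

def loop_field(point, radius=1.0, current=1.0, n=2000):
    """B field of a circular current loop in the z=0 plane, centred on the
    origin, at `point` (3-vector), by midpoint integration of Biot-Savart."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # positions of the current elements and their tangential directions dl
    pos = radius * np.column_stack([np.cos(phi), np.sin(phi), np.zeros(n)])
    dl = radius * (2.0 * np.pi / n) * np.column_stack(
        [-np.sin(phi), np.cos(phi), np.zeros(n)])
    r = point - pos                      # element -> field point vectors
    r3 = np.sum(r * r, axis=1) ** 1.5    # |r|^3 for each element
    dB = MU0 * current / (4.0 * np.pi) * np.cross(dl, r) / r3[:, None]
    return dB.sum(axis=0)

# compare against the analytic on-axis field at z = 0.5 (a = I = 1)
z = 0.5
Bz = loop_field(np.array([0.0, 0.0, z]))[2]
exact = MU0 / (2.0 * (1.0 + z ** 2) ** 1.5)
```

Summing loop_field over a stack of displaced loops would approximate a solenoid, and superposing loops with different orientations is one crude way to play with Halbach-like arrangements before reaching for a dedicated package.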
cheers, Stef Mientki > James > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From peridot.faceted at gmail.com Thu Oct 25 16:21:02 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 25 Oct 2007 16:21:02 -0400 Subject: [SciPy-user] New example Message-ID: Hi, I just wrote up a calculation using scipy: http://www.scipy.org/Trinity It's a fluid dynamics calculation describing the evolution of a spherical blast wave. There's some theory there, though I skimmed it as much as possible, and a reasonable calculation using scipy; specifically, it uses odeint, simps, and pylab. Is this a useful sort of tutorial/example? Do we want more things like this? Anne From lorenzo.isella at gmail.com Thu Oct 25 16:24:24 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 25 Oct 2007 22:24:24 +0200 Subject: [SciPy-user] Caveat About integrate.odeint In-Reply-To: References: Message-ID: <4720FB78.6070908@gmail.com> Dear Anne, I think you are right, nevertheless I am puzzled at what is going on with integrate.odeint. If the problem persists I will post a code snippet on the list. Cheers Lorenzo Do you have a small piece of demo code? This is very surprising, as FORTRAN should never see the variable names. I can't replicate it in spite of headache-inducing variable names: In [17]: T = lambda t, T: T In [18]: T0 = 1 In [19]: t = [0,1,2] In [20]: scipy.integrate.odeint(T,T0,t) Out[20]: array([[ 1. ], [ 1.50000001], [ 3.00000001]]) Anne > Message: 2 > Date: Thu, 25 Oct 2007 11:40:02 -0400 > From: "Anne Archibald" > Subject: Re: [SciPy-user] Caveat About integrate.odeint > To: "SciPy Users List" > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > On 25/10/2007, Lorenzo Isella wrote: > > >> I do not know if what I am going to write is really useful (maybe it >> is pretty obvious for everybody on this list).
>> I have been using integrate.odeint for quite a while to solve some >> population equations. >> Then I made a trivial change (nothing leading to different physics or >> in general such as to justify any substantial difference with the >> previous results), and I woke up in a nightmare: precision errors, >> routine crashing etc... >> I think I know what happened: in my code I was using t [time], T(t) >> [time-dependent temperature], t_0 (initial time) and T_0 (initial >> temperature). >> For Python there is no possibility of confusion, but the underlying >> Fortran made a mess out of this... >> Something very trivial, but it took me a day and a half to debug this. >> Hope it was useful. >> > > > > > From stefan at sun.ac.za Thu Oct 25 17:15:21 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 25 Oct 2007 23:15:21 +0200 Subject: [SciPy-user] New example In-Reply-To: References: Message-ID: <20071025211521.GA15301@mentat.za.net> Hi Anne, On Thu, Oct 25, 2007 at 04:21:02PM -0400, Anne Archibald wrote: > I just wrote up a calculation using scipy: > http://www.scipy.org/Trinity > It's a fluid dynamics calculation describing the evolution of a > spherical blast wave. There's some theory there, though I skimmed it > as much as possible, and a reasonable calculation using scipy; > specifically, it uses odeint, simps, and pylab. > > Is this a useful sort of tutorial/example? Do we want more things > like this? I would love to have some 2D and 3D examples to show to my Fluid Dynamics colleagues, who aren't yet familiar with Python. I have a finite-difference temperature evolution experiment somewhere that I can contribute. Regards Stéfan From aisaac at american.edu Thu Oct 25 21:44:43 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 25 Oct 2007 21:44:43 -0400 Subject: [SciPy-user] static magnetic field ?
In-Reply-To: <20071025164934.GA26091@zunzun.com> References: <4720B492.1010707@ru.nl><20071025164934.GA26091@zunzun.com> Message-ID: On Thu, 25 Oct 2007, zunzun at zunzun.com apparently wrote: > http://www.mare.ee/indrek/ephi/ OK, I'm sure it's useful and all that, but much more important from where I sit, is that it is so damn cool. ;-) Cheers, Alan Isaac From ryanlists at gmail.com Thu Oct 25 23:50:44 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 25 Oct 2007 22:50:44 -0500 Subject: [SciPy-user] New example In-Reply-To: <20071025211521.GA15301@mentat.za.net> References: <20071025211521.GA15301@mentat.za.net> Message-ID: It looks pretty cool. I think there is value in it. On 10/25/07, Stefan van der Walt wrote: > > Hi Anne, > > On Thu, Oct 25, 2007 at 04:21:02PM -0400, Anne Archibald wrote: > > I just wrote up a calculation using scipy: > > http://www.scipy.org/Trinity > > It's a fluid dynamics calculation describing the evolution of a > > spherical blast wave. There's some theory there, though I skimmed it > > as much as possible, and a reasonable calculation using scipy; > > specifically, it uses odeint, simps, and pylab. > > > > Is this a useful sort of tutorial/example? Do we want more things > > like this? > > I would love to have some 2D and 3D examples to show to my Fluid > Dynamics colleagues, who aren't yet familiar with Python. I have a > finite-different temperature evolution experiment somewhere that I can > contribute. > > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gael.varoquaux at normalesup.org Fri Oct 26 02:40:19 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 26 Oct 2007 08:40:19 +0200 Subject: [SciPy-user] New example In-Reply-To: <20071025211521.GA15301@mentat.za.net> References: <20071025211521.GA15301@mentat.za.net> Message-ID: <20071026064019.GD9025@clipper.ens.fr> Anne and Stefan, I think these examples are very useful, and definitely should be on the scipy.org website. Thanks for your time, Gaël From matthieu.brucher at gmail.com Fri Oct 26 04:35:40 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 26 Oct 2007 10:35:40 +0200 Subject: [SciPy-user] Thin plates for a 3D deformation field Message-ID: Hi, I wondered if someone knew of a package that allows the interpolation of a 3D deformation field with thin plates (or in fact with anything) based on a list of points and their associated deformation field. If only a 2D deformation field is supported by a package, I'll go for it too ;) -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Oct 26 09:44:55 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 26 Oct 2007 15:44:55 +0200 Subject: [SciPy-user] Cookbook/InputOutput Message-ID: Hi all, I tried to use the home-made function (available at http://www.scipy.org/Cookbook/InputOutput) to read an array from the file topo-28.xohis (See attachment for details). If I run the script (xy.py) I get python -i xy.py Traceback (most recent call last): File "xy.py", line 47, in ? data = readArray("topo-28.xohis") File "xy.py", line 35, in readArray items = split(stripped_line) TypeError: split() takes at least 2 arguments (1 given) How can I fix the problem ? Any pointer would be appreciated. 
Thanks in advance Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: xy.py Type: application/octet-stream Size: 1258 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: topo-28.xohis Type: application/octet-stream Size: 1851 bytes Desc: not available URL: From fredmfp at gmail.com Fri Oct 26 10:30:28 2007 From: fredmfp at gmail.com (fred) Date: Fri, 26 Oct 2007 16:30:28 +0200 Subject: [SciPy-user] Cookbook/InputOutput In-Reply-To: References: Message-ID: <4721FA04.3080403@gmail.com> Nils Wagner a écrit : > Hi all, > > I tried to use the home-made function > (available at http://www.scipy.org/Cookbook/InputOutput) > to read an array from the file topo-28.xohis > (See attachment for details). > > If I run the script (xy.py) I get > > python -i xy.py > Traceback (most recent call last): > File "xy.py", line 47, in ? > data = readArray("topo-28.xohis") > File "xy.py", line 35, in readArray > items = split(stripped_line) stripped_line.split() ? -- http://scipy.org/FredericPetit From kdsudac at yahoo.com Fri Oct 26 15:03:34 2007 From: kdsudac at yahoo.com (Keith Suda-Cederquist) Date: Fri, 26 Oct 2007 12:03:34 -0700 (PDT) Subject: [SciPy-user] Begginer's Problem Message-ID: <583369.96505.qm@web54307.mail.re2.yahoo.com> Hi, I started using SciPy recently (converted from Matlab) and am running into a problem. I wrote my first script that uses PIL.Image module and several basic SciPy modules. Within this script I declared functions that used functions in the PIL.Image and scipy modules, for example: #version1.py import scipy as S import PIL.Image as I def imageopen(filename): im1=I.open(filename) ar1=S.ones((20,20)) .... im2=imageopen(file2) This worked out fine. 
Then I decided it would be nice to move all the function definitions into a separate file: #split1.py def imageopen(filename): im1=I.open(filename) ar1=S.ones((20,20)) #split2.py import scipy as S import PIL.Image as I import split1 im2=imageopen(file2) This new structure is giving me a lot of problems. The usual error I get is that global name 'I' or 'S' is not defined. I've searched quite a bit for a solution, but haven't figured it out yet. As far as I can tell the problem is related to the namespace belonging to particular modules. I've been able to work around this by importing the scipy and PIL.Image modules all over the place (i.e. at the beginning of split1.py and split2.py and inside the function). However, this doesn't seem like the best way to handle this. Any advice? Thanks in advance for your help. -Keith __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Oct 26 15:11:51 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 26 Oct 2007 14:11:51 -0500 Subject: [SciPy-user] Begginer's Problem In-Reply-To: <583369.96505.qm@web54307.mail.re2.yahoo.com> References: <583369.96505.qm@web54307.mail.re2.yahoo.com> Message-ID: <47223BF7.9080104@gmail.com> Keith Suda-Cederquist wrote: > Hi, > > I started using SciPy recently (converted from Matlab) and am running > into a problem. > > I wrote my first script that uses PIL.Image module and several basic > SciPy modules. Within this script I declared functions that used > functions in the PIL.Image and scipy modules, for example: > > > #version1.py > import scipy as S > import PIL.Image as I > > def imageopen(filename): > im1=I.open(filename) > ar1=S.ones((20,20)) > .... > > im2=imageopen(file2) > > This worked out fine. 
Then I decided it would be nice to move all the > function definitions into a seperate file: > > #split1.py > def imageopen(filename): > im1=I.open(filename) > ar1=S.ones((20,20)) > > #split2.py > import scipy as S > import PIL.Image as I > import split1 > > im2=imageopen(file2) > > This new structure is giving me a lot of problems. The usual error I > get is that global name 'I' or 'S' is not defined. I've searched quite > a bit for a solution, but haven't figured it out yet. > > As far as I can tell the problem is related to the namespace belonging > to particular modules. Yes. Each module has its own namespace. When functions look up names which are not provided in the argument list (e.g. I and S), they look them up in the module in which they are defined, not the module in which they are called. > I've been able to work around this by importing the scipy and PIL.Image > modules all over the place (i.e. at the begging of split1.py and > split2.py and inside the function). However, this doesn't seem like the > best way to handle this. > > Any advice? Generally speaking, you should import the things that you need at the top of the module where you need them. # correctsplit1.py import scipy as S import PIL.Image as I def imageopen(filename): im1=I.open(filename) ar1=S.ones((20,20)) # correctsplit2.py import correctsplit1 im2 = correctsplit1.imageopen('file2') You may want to take questions like this to the Python tutor-list. That list is more geared towards these kinds of general Python beginner questions. http://mail.python.org/mailman/listinfo/tutor -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From mhearne at usgs.gov Fri Oct 26 15:17:12 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 26 Oct 2007 13:17:12 -0600 Subject: [SciPy-user] Begginer's Problem In-Reply-To: <583369.96505.qm@web54307.mail.re2.yahoo.com> References: <583369.96505.qm@web54307.mail.re2.yahoo.com> Message-ID: <3D6E5917-B855-447C-AF41-5DDC08FE9FF2@usgs.gov> Keith - You're right, it is a namespace problem. One of the big differences between Matlab and NumPy is that in Matlab, every function in your path is automatically in every namespace (the global one or any functions that you may write). In Python, by default the only functions you get are the Python "built-ins" (abs(), apply(), bool(), etc.) To access the classes or functions defined in any other modules, be they part of the standard library or part of a third-party module like NumPy or SciPy, you _have_ to import those modules somehow. That being said, I think there is a way to organize your code that would make sense: #fileopener.py import scipy as S import PIL.Image as I def imageopen(filename): im1 = I.open(filename) ar1 = S.ones((20,20)) return (im1,ar1) #main.py from fileopener import imageopen image,array = imageopen(filename) ... #do whatever it is you want to do with the data --Mike On Oct 26, 2007, at 1:03 PM, Keith Suda-Cederquist wrote: > Hi, > > I started using SciPy recently (converted from Matlab) and am > running into a problem. > > I wrote my first script that uses PIL.Image module and several > basic SciPy modules. Within this script I declared functions that > used functions in the PIL.Image and scipy modules, for example: > > > #version1.py > import scipy as S > import PIL.Image as I > > def imageopen(filename): > im1=I.open(filename) > ar1=S.ones((20,20)) > .... > > im2=imageopen(file2) > > This worked out fine. 
Then I decided it would be nice to move all > the function definitions into a seperate file: > > #split1.py > def imageopen(filename): > im1=I.open(filename) > ar1=S.ones((20,20)) > > #split2.py > import scipy as S > import PIL.Image as I > import split1 > > im2=imageopen(file2) > > This new structure is giving me a lot of problems. The usual error > I get is that global name 'I' or 'S' is not defined. I've searched > quite a bit for a solution, but haven't figured it out yet. > > As far as I can tell the problem is related to the namespace > belonging to particular modules. > > I've been able to work around this by importing the scipy and > PIL.Image modules all over the place (i.e. at the begging of > split1.py and split2.py and inside the function). However, this > doesn't seem like the best way to handle this. > > Any advice? > > Thanks in advance for your help. > > -Keith > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... 
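Both answers above rest on the same rule: a function resolves free names in the globals of the module where it was defined, not the module from which it is called. Here is a minimal sketch of that rule (the module name `helpers` and the constant `FACTOR` are invented for illustration; `types.ModuleType` fakes a second module so the example fits in one file):

```python
# Sketch of Python's name-lookup rule: a function looks up free names in
# the module where it was DEFINED, not the module where it is called.
# 'helpers' and 'FACTOR' are invented names for this illustration.
import types

helpers = types.ModuleType("helpers")
exec("def double(x):\n    return FACTOR * x", helpers.__dict__)

try:
    helpers.double(21)       # FACTOR is looked up in helpers' globals
except NameError as err:
    print(err)               # name 'FACTOR' is not defined

# The fix mirrors the advice above: define (or import) what the
# function needs at the top of the module that defines it.
helpers.FACTOR = 2
print(helpers.double(21))    # 42
```

This is exactly why `imageopen` in split1.py cannot see the `I` and `S` imported in split2.py: the imports must live in split1.py itself.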
URL: From barrywark at gmail.com Fri Oct 26 15:31:58 2007 From: barrywark at gmail.com (Barry Wark) Date: Fri, 26 Oct 2007 12:31:58 -0700 Subject: [SciPy-user] k nearest neighbour In-Reply-To: <20071025065801.GG8452@mentat.za.net> References: <20071022115537.GD19079@mentat.za.net> <471D0372.3020203@relativita.com> <20071025065801.GG8452@mentat.za.net> Message-ID: Stefan, What is the purpose (advantage) of build_ext instead of just `python setup.py build`? I'm still learning the whole distutils/setuptools things. thanks, Barry On 10/24/07, Stefan van der Walt wrote: > Hi Barry > > On Wed, Oct 24, 2007 at 02:51:03PM -0700, Barry Wark wrote: > > For those interested, I've cleaned up (and documented a little) my > > wrapper for the Approximate Nearest Neighbor library. The source is > > available as a zip archive at > > http://rieke-server.physiol.washington.edu/~barry/ann/ANNwrapper.zip > > (BSD license). > > Fantastic, this is a very useful contribution. > > > The setup.py contains instructions for installation and usage. You > > will need the ANN library (from http://www.cs.umd.edu/~mount/ANN/), > > setuptools, and SWIG installed. > > Maybe also add instructions to build in-place, i.e. > > python setup.py build_ext -i > > > I think this wrapper may be useful as an addition to scipy, but I > > haven't investigated where it might be most appropriate. I'd be happy > > to relicense if necessary. > > The license is LGPL, so a SciKit would be appropriate. > > > Any suggestions or patches welcome. > > For statistical unit tests, use numpy.testing.assert_almost_equal; > this leaves a bit of leeway for floating point roundoff (some of the > tests currently fail [randomly] on my system). 
> > Regards > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Fri Oct 26 15:35:25 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 26 Oct 2007 21:35:25 +0200 Subject: [SciPy-user] k nearest neighbour In-Reply-To: References: <20071022115537.GD19079@mentat.za.net> <471D0372.3020203@relativita.com> <20071025065801.GG8452@mentat.za.net> Message-ID: `build` builds everything, including the extensions; `build_ext` builds only the extensions (it is called by `build`), and it can build them in-place, without having to install every file in site-packages. Matthieu 2007/10/26, Barry Wark : > > Stefan, > > What is the purpose (advantage) of build_ext instead of just `python > setup.py build`? I'm still learning the whole distutils/setuptools > things. > > thanks, > Barry -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Sat Oct 27 02:49:26 2007 From: strawman at astraw.com (Andrew Straw) Date: Fri, 26 Oct 2007 23:49:26 -0700 Subject: [SciPy-user] scipy wiki problems Message-ID: <4722DF76.5080102@astraw.com> I'm getting the following error at http://scipy.org/Cookbook (the other pages I've looked at on the website look OK). Hopefully this can be fixed rather easily. OK The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. 
Apache/2.0.54 (Fedora) Server at scipy.org Port 80 From ryanlists at gmail.com Sat Oct 27 12:49:01 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 27 Oct 2007 11:49:01 -0500 Subject: [SciPy-user] Cookbook/InputOutput In-Reply-To: References: Message-ID: help split gives this: split(ary, indices_or_sections, axis=0) Divide an array into a list of sub-arrays. Description: Divide ary into a list of sub-arrays along the specified axis. If indices_or_sections is an integer, ary is divided into that many equally sized arrays. If it is impossible to make an equal split, an error is raised. This is the only way this function differs from the array_split() function. If indices_or_sections is a list of sorted integers, its entries define the indexes where ary is split. On 10/26/07, Nils Wagner wrote: > > Hi all, > > I tried to use the home-made function > (available at http://www.scipy.org/Cookbook/InputOutput) > to read an array from the file topo-28.xohis > (See attachment for details). > > If I run the script (xy.py) I get > > python -i xy.py > Traceback (most recent call last): > File "xy.py", line 47, in ? > data = readArray("topo-28.xohis") > File "xy.py", line 35, in readArray > items = split(stripped_line) > TypeError: split() takes at least 2 arguments (1 given) > > How can I fix the problem ? > > Any pointer would be appreciated. > > Thanks in advance > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... 
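For reference, the clash behind Nils's traceback is between two unrelated `split` functions: the array-splitting one pulled in by a star import (which requires at least two arguments) and the string method the cookbook code intended. A small sketch of the distinction (the sample line is made up; `numpy.split` stands in here for the array `split` that shadowed the string one):

```python
import numpy as np

# The string method: splits on whitespace, no second argument needed.
line = "  1.0  2.0  3.0  "
items = line.strip().split()
print(items)              # ['1.0', '2.0', '3.0']
values = [float(s) for s in items]
print(values)             # [1.0, 2.0, 3.0]

# The array function: needs the array AND the sections/indices --
# which is why calling it with a single string raises a TypeError.
parts = np.split(np.arange(6), 3)
print(parts)              # [array([0, 1]), array([2, 3]), array([4, 5])]
```

Writing `stripped_line.split()` (or importing the string function under an unambiguous name) sidesteps the shadowing entirely.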
URL: From ryanlists at gmail.com Sat Oct 27 12:50:50 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 27 Oct 2007 11:50:50 -0500 Subject: [SciPy-user] Cookbook/InputOutput In-Reply-To: References: Message-ID: Are you trying to do something like this: In [19]: myline = '1.0, 2.0, 3.0, 4.0' In [20]: mylist = myline.split(',') In [21]: mylist Out[21]: ['1.0', ' 2.0', ' 3.0', ' 4.0'] In [22]: mylist = [item.strip() for item in mylist] In [23]: mylist Out[23]: ['1.0', '2.0', '3.0', '4.0'] In [24]: myfloats = [float(item) for item in mylist] In [25]: myfloats Out[25]: [1.0, 2.0, 3.0, 4.0] On 10/26/07, Nils Wagner wrote: > > Hi all, > > I tried to use the home-made function > (available at http://www.scipy.org/Cookbook/InputOutput) > to read an array from the file topo-28.xohis > (See attachment for details). > > If I run the script (xy.py) I get > > python -i xy.py > Traceback (most recent call last): > File "xy.py", line 47, in ? > data = readArray("topo-28.xohis") > File "xy.py", line 35, in readArray > items = split(stripped_line) > TypeError: split() takes at least 2 arguments (1 given) > > How can I fix the problem ? > > Any pointer would be appreciated. > > Thanks in advance > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... 
If indices_or_sections is an >integer, > ary is divided into that many equally sized >arrays. > If it is impossible to make an equal split, an >error is > raised. This is the only way this function >differs from > the array_split() function. If indices_or_sections >is a > list of sorted integers, its entries define the >indexes > where ary is split. > > > On 10/26/07, Nils Wagner >wrote: >> >> Hi all, >> >> I tried to use the home-made function >> (available at http://www.scipy.org/Cookbook/InputOutput) >> to read an array from the file topo-28.xohis >> (See attachment for details). >> >> If I run the script (xy.py) I get >> >> python -i xy.py >> Traceback (most recent call last): >> File "xy.py", line 47, in ? >> data = readArray("topo-28.xohis") >> File "xy.py", line 35, in readArray >> items = split(stripped_line) >> TypeError: split() takes at least 2 arguments (1 given) >> >> How can I fix the problem ? >> >> Any pointer would be appreciated. >> >> Thanks in advance >> >> Nils >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> There are two different split functions. Finally I have used from string import lstrip,split # Now python xy.py works for me :-). Cheers, Nils Help on function split in module string: split(s, sep=None, maxsplit=-1) split(s [,sep [,maxsplit]]) -> list of strings Return a list of the words in the string s, using sep as the delimiter string. If maxsplit is given, splits at no more than maxsplit places (resulting in at most maxsplit+1 words). If sep is not specified or is None, any whitespace string is a separator. 
(split and splitfields are synonymous) From zpincus at stanford.edu Sat Oct 27 19:01:57 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Sat, 27 Oct 2007 19:01:57 -0400 Subject: [SciPy-user] Thin plates for a 3D deformation field In-Reply-To: References: Message-ID: <519D7AAA-4F09-460D-A62D-6921B56C2369@stanford.edu> Hello Matthieu, I don't know of any fast multipole solver or anything fancy like that for dealing with evaluating the splines rapidly, but for simple purposes, the rbf module in scipy.sandbox should do just fine. Set the function to 'thin-plate', and you will get a thin-plate spline interpolator that can interpolate values of arbitrary dimension in spaces of arbitrary dimension. So it should naturally be able to interpolate a 3D deformation field in three dimensions, or any other combination you might want. I haven't used the RBF module extensively (I wrote my own thin-plate spline interpolator before it existed -- it's a very small amount of code), but I think that that module should be just right. One note: this sort of interpolation can be very slow if you're going to be interpolating the deformations over a fine grid -- for, say, using the deformation field for image warping. This is where one might start wanting a fast multipole solver. I find that in most cases, I can get away with interpolating the deformations on a relatively coarse grid, and then using linear interpolation to fill in the gaps. (The resizing methods in ndimage come in useful here -- just scale up the x, y, and z components of the deformation separately.) Zach Pincus On Oct 26, 2007, at 4:35 AM, Matthieu Brucher wrote: > Hi, > > I wondered if someone knew of a package that allows the > interpolation of a 3D deformation field with thin plates (or in > fact with anything) based on a list of points and their associated > deformation field. 
> If only 2D deformation field is supported by a package, I'll go for > it too ;) > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/? > blog=92 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Sat Oct 27 22:39:32 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Oct 2007 21:39:32 -0500 Subject: [SciPy-user] Thin plates for a 3D deformation field In-Reply-To: References: Message-ID: <4723F664.1060007@gmail.com> Matthieu Brucher wrote: > Hi, > > I wondered if someone knew of a package that allows the interpolation of > a 3D deformation field with thin plates (or in fact with anything) based > on a list of points and their associated deformation field. > If only 2D deformation field is supported by a package, I'll go for it > too ;) Another alternative is to use the natural-neighbor interpolation that I coded up. I've moved it to its own scikit for easier installation. http://svn.scipy.org/svn/scikits/trunk/delaunay/ At the moment, I only support doing one dimension at a time, but that can be easily worked around. Let's say you have your deformation field separated into 3 shape-(n,) arrays dx, dy, and dz and you have spatial coordinates in shape-(n,) arrays x and y. from scikits.delaunay import Triangulation t = Triangulation(x, y) dxinterp = t.nn_interpolator(dx) dyinterp = t.nn_interpolator(dy) dzinterp = t.nn_interpolator(dz) # Use nn_extrapolator if you want to find values outside of the convex hull of # the input points. If your interpolating points are on a grid, the easiest way to get the interpolated values is by "fake" slicing along the lines of numpy.mgrid. dx2 = dxinterp[0:1:101j, 0:2:201j] ... 
That interpolates the dx deformation across a grid going from 0 to 2 (inclusive) at a step of 0.01 in the X direction and 0 to 1 (inclusive) at a step of 0.01 in the Y direction (yes, the Y coordinate comes first; I'm not much happy with that either, but it seemed the most consistent with the way indexing "looks"). That should be reasonably fast; I've optimized grid-structured interpolating point sets. Let me know if you try this and anything you think can or should be improved about it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jss at ast.cam.ac.uk Sun Oct 28 10:30:59 2007 From: jss at ast.cam.ac.uk (Jeremy Sanders) Date: Sun, 28 Oct 2007 14:30:59 +0000 Subject: [SciPy-user] ANN: Veusz-1.0 - a scientific plotting package Message-ID: I'm pleased to announce Veusz-1.0. Linux and Windows binaries are available, with source. For further details see below. Jeremy Veusz 1.0 --------- Velvet Ember Under Sky Zenith ----------------------------- http://home.gna.org/veusz/ Veusz is Copyright (C) 2003-2007 Jeremy Sanders Licenced under the GPL (version 2 or greater). Veusz is a scientific plotting package written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets. 
Feature changes from 0.99.0: * Import of Text datasets * Labels can be plotted next to X-Y points * Numbers can be directly plotted by entering into X-Y datasets as X and Y * More line styles * Loaded document and functions are checked for unsafe Python features * Contours can be labelled with numbers * 2D dataset creation to make 2D datasets from x, y, z 1D datasets Bug and minor fixes from 0.99.0: * Zooming into X-Y images works now * Contour plots work on datasets with non equal X and Y sizes * Various fixes for datasets including NaN or Inf * Large changes to data import filter to support loading strings (and dates later) * Reduce number of undo levels for memory/speed * Text renderer rewritten to be more simple * Improved error dialogs * Proper error dialog for invalid loading of documents Features of package: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Fitting functions to data * Stacked plots and arrays of plots * Plot keys * Plot labels * LaTeX-like formatting for text * EPS/PDF/PNG export * Scripting interface * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV and FITS importing Requirements: Python (2.3 or greater required) http://www.python.org/ Qt >= 4.3 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/ numpy >= 1.0 http://numpy.scipy.org/ Microsoft Core Fonts (recommended for nice output) http://corefonts.sourceforge.net/ PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits For documentation on using Veusz, see the "Documents" directory. The manual is in pdf, html and text format (generated from docbook). Issues: * Reqires a rather new version of PyQt, otherwise dialogs don't work. 
* Can be very slow to plot large datasets if antialiasing is enabled. Right click on graph and disable antialias to speed up output. * The embedding interface appears to crash on exiting. If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the SVN repository. Jeremy Sanders From matthieu.brucher at gmail.com Sun Oct 28 11:12:44 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 28 Oct 2007 16:12:44 +0100 Subject: [SciPy-user] Thin plates for a 3D deformation field In-Reply-To: References: Message-ID: Thank you Zachary and Robert. In fact, I've decided to implement my own version (I'll make it available on the net, as it can be a recurring need) based on the Bookstein article on the subject. I could use the rbf module, but I've decided that the dependence on scipy was not necessary (the function is needed at two places, not more, and the real trick was to compute the coefficients for the thin plates and then interpolate the field with them). Matthieu 2007/10/26, Matthieu Brucher : > > Hi, > > I wondered if someone knew of a package that allows the interpolation of a > 3D deformation field with thin plates (or in fact with anything) based on a > list of points and their associated deformation field. > If only 2D deformation field is supported by a package, I'll go for it too > ;) > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zpincus at stanford.edu Sun Oct 28 11:56:49 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Sun, 28 Oct 2007 11:56:49 -0400 Subject: [SciPy-user] Thin plates for a 3D deformation field In-Reply-To: References: Message-ID: <85D74637-A633-468F-8776-201AD44A4291@stanford.edu> Hi Matthieu, See attached my (GPL, but ask if you want in a different license) code for thin-plate image warping in 2D. It's straight out of Bookstein, but has some simple changes to make it tolerably fast in python/numpy. It still depends on ndimage for the image coordinate mapping (so that one can use the fancier interpolators available), but that's easy to remove. The basic code should work fine in 3D; I think only the interpolation of the coarse deformation grid to a finer grid (a hack anyway) is 2D specific. Anyhow, it should give you somewhere to start, and a hint at various caveats (like needing to set the log of very small values to zero to keep the thin plate function from exploding at times). Also, note that if you do choose to use the RBF module from scipy, you can just grab that file and put it into your code (modulo license restrictions) -- it doesn't depend on much else. Similarly, you could just grab Robert's delaunay scikit and incorporate that into your code so as not to have any dependencies. Zach -------------- next part -------------- A non-text attachment was scrubbed... Name: image_warp.py Type: text/x-python-script Size: 4754 bytes Desc: not available URL: -------------- next part -------------- On Oct 28, 2007, at 11:12 AM, Matthieu Brucher wrote: > Thank you Zachary and Robert. > In fact, I've decided to implement my own version (I'll make it > available on the net, as it can be recurrent) based on the > Bookstein article on the subject. 
> I could use the rbf module, but I've decided that the dependence on > scipy was not necessary (the function is needed at two places, not > more, and the real trick was to compute the coefficients for the > thin plates and then interpolate the field with them). > > Matthieu > > 2007/10/26, Matthieu Brucher : Hi, > > I wondered if someone knew of a package that allows the > interpolation of a 3D deformation field with thin plates (or in > fact with anything) based on a list of points and their associated > deformation field. > If only 2D deformation field is supported by a package, I'll go for > it too ;) > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/? > blog=92 > > > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/? > blog=92 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Sun Oct 28 12:43:31 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 28 Oct 2007 17:43:31 +0100 Subject: [SciPy-user] Thin plates for a 3D deformation field In-Reply-To: <85D74637-A633-468F-8776-201AD44A4291@stanford.edu> References: <85D74637-A633-468F-8776-201AD44A4291@stanford.edu> Message-ID: Hi, I see that you compute directly the new image ;) For the tool we are developing in my lab, we prefer to split everything (one function to compute a deformation field and one function for applying the deformation field), as sometimes we only want to work on the field, sometimes on the image, ... 
I didn't use the same approach as you; I tried to maximize the use of Numpy too (so to compute the distances, I used the function Bill Baxter proposed some time ago, and the computation of the sum in the smooth continuous field is done by a matrix multiplication, ...), which means that my approach may use more memory than yours. I have only one thing left to modify: the computation of r² log r² when r is too small; I saw that you took care of it. For those who are interested in the future, there are now two solutions: http://matt.eifelle.com/item/4 Matthieu 2007/10/28, Zachary Pincus : > > Hi Matthieu, > > See attached my (GPL, but ask if you want in a different license) > code for thin-plate image warping in 2D. It's straight out of > Bookstein, but has some simple changes to make it tolerably fast in > python/numpy. > > It still depends on ndimage for the image coordinate mapping (so that > one can use the fancier interpolators available), but that's easy to > remove. The basic code should work fine in 3D; I think only the > interpolation of the coarse deformation grid to a finer grid (a hack > anyway) is 2D specific. > > Anyhow, it should give you somewhere to start, and a hint at various > caveats (like needing to set the log of very small values to zero to > keep the thin plate function from exploding at times). > > Also, note that if you do choose to use the RBF module from scipy, > you can just grab that file and put it into your code (modulo license > restrictions) -- it doesn't depend on much else. Similarly, you could > just grab Robert's delaunay scikit and incorporate that into your > code so as not to have any dependencies. > > Zach -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed...
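[Editor's note: Zach's attachment was scrubbed from the archive, so for later readers here is a minimal sketch of the Bookstein formulation both posts describe — solve one linear system for the kernel weights plus an affine part, then interpolate the field anywhere. The function names are mine, not from either implementation, and the small-radius clamp is the caveat Zach mentions above.]

```python
import numpy as np

def tps_kernel(r2):
    # Bookstein's U(r) = r^2 log r^2, evaluated on *squared* distances;
    # the log of very small values is forced to zero so the kernel does
    # not blow up at r = 0.
    out = np.zeros_like(r2)
    nz = r2 > 1e-12
    out[nz] = r2[nz] * np.log(r2[nz])
    return out

def tps_fit(src, dst):
    # Solve the (n+3)x(n+3) system L [w; a] = [dst; 0] for kernel
    # weights w and affine part a, with L = [[K, P], [P^T, 0]].
    n = len(src)
    K = tps_kernel(((src[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(L, rhs)

def tps_eval(src, coeffs, pts):
    # Interpolate the fitted deformation at arbitrary points.
    n = len(src)
    U = tps_kernel(((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coeffs[:n] + P @ coeffs[n:]
```

A 3D version would only need three-column src/pts, a four-column P, and (commonly) the kernel swapped for the 3D biharmonic one, U(r) = r.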
URL: From Tony.Mannucci at jpl.nasa.gov Sun Oct 28 13:44:24 2007 From: Tony.Mannucci at jpl.nasa.gov (Tony Mannucci) Date: Sun, 28 Oct 2007 10:44:24 -0700 Subject: [SciPy-user] Subject: Re: Scientific Python publications In-Reply-To: References: Message-ID: Thanks for your comments. It would be a good learning tool to see those two codes side-by-side (matlab and SciPy). I cannot access the article. -Tony >Message: 5 >Date: Tue, 09 Oct 2007 22:24:36 +0200 >From: jaropis >Subject: Re: [SciPy-user] Scientific Python publications >To: scipy-user at scipy.org >Message-ID: >Content-Type: text/plain; charset=us-ascii > >> the molecular science group at the university of Bristol on a recent >> scientific paper and thought that I could use the talk as an excuse to >> advertise the wonders of python. > >I use Python a lot - for example this paper: > >http://www.iop.org/EJ/abstract/0967-3334/28/3/005 > >has an animation done in SciPy+pylab, and all the calculations are in >Python. I give Matlab code for the same calculations in an appendix, but >they were originally done in SciPy and then translated to Matlab. > >Jarek P > -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From zunzun at zunzun.com Sun Oct 28 13:58:28 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 28 Oct 2007 13:58:28 -0400 Subject: [SciPy-user] Scientific Python publications In-Reply-To: References: Message-ID: <20071028175828.GA22507@zunzun.com> On Sun, Oct 28, 2007 at 10:44:24AM -0700, Tony Mannucci wrote: > > I cannot access the article. 
> > -Tony > > >Date: Tue, 09 Oct 2007 22:24:36 +0200 > >From: jaropis > > > >I use Python a lot - for example this paper: > > > >http://www.iop.org/EJ/abstract/0967-3334/28/3/005 If someone will pay the US $30.00 (plus tax) viewing price I'll be glad to look at it. James

From nicolas.pettiaux at ael.be Sun Oct 28 14:02:42 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Sun, 28 Oct 2007 19:02:42 +0100 Subject: [SciPy-user] Subject: Re: Scientific Python publications In-Reply-To: References: Message-ID: 2007/10/28, Tony Mannucci : > >From: jaropis > >I use Python a lot - for example this paper: > > > >http://www.iop.org/EJ/abstract/0967-3334/28/3/005 > > > >has an animation done in SciPy+pylab, and all the calculations are in > >Python. I give Matlab code for the same calculations in an appendix, but > >they were originally done in SciPy and then translated to Matlab. Maybe Jarek could send us or post on his website a copy of a previous version of the paper. These are usually not covered by the (abusive) copyright of the journal. Thanks, Nicolas -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be Use open formats and free software - http://www.passeralinux.org. For office documents, the only ISO formats are those of http://fr.openoffice.org

From ramercer at gmail.com Sun Oct 28 14:13:10 2007 From: ramercer at gmail.com (Adam Mercer) Date: Sun, 28 Oct 2007 14:13:10 -0400 Subject: [SciPy-user] Specifying fortran compiler Message-ID: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> Hi I'm trying to build SciPy on Mac OS 10.5 using gfortran from MacPorts gcc-4.2.2, which gets installed as gfortran-mp-4.2.
When I build I get the following error:

Command output:
customize IbmFCompiler
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize G95FCompiler
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize NAGFCompiler
customize NAGFCompiler
using build_clib
building 'dfftpack' library
compiling Fortran sources
Fortran f77 compiler: f95 -fixed -O4 -target=native
Fortran f90 compiler: f95 -O4 -target=native
Fortran fix compiler: f95 -fixed -O4 -target=native
creating build/temp.macosx-10.3-i386-2.5
creating build/temp.macosx-10.3-i386-2.5/scipy
creating build/temp.macosx-10.3-i386-2.5/scipy/fftpack
creating build/temp.macosx-10.3-i386-2.5/scipy/fftpack/dfftpack
compile options: '-c'
f95:f77: scipy/fftpack/dfftpack/dcosqb.f
sh: f95: command not found
sh: f95: command not found
error: Command "f95 -fixed -O4 -target=native -c -c scipy/fftpack/dfftpack/dcosqb.f -o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/dfftpack/dcosqb.o" failed with exit status 127

From this error it looks like SciPy is trying to use f95 as the fortran compiler executable, not gfortran-mp-4.2. Is there a way that I can specify which fortran compiler to use? Cheers Adam

From ramercer at gmail.com Sun Oct 28 22:18:48 2007 From: ramercer at gmail.com (Adam Mercer) Date: Sun, 28 Oct 2007 22:18:48 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> Message-ID: <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> On 28/10/2007, Adam Mercer wrote: > Is there a way that I can specify which fortran compiler to use?
I've got a little further with this, if I create a link to gfortran-mp-4.2 somewhere in my $PATH called gfortran, then scipy finds it and builds without issue. It also seems that I can specify which vendor of fortran compiler to use with $ python setup.py build --fcompiler= but this only seems to work when the fortran compiler uses the standard name, is there a way to specify the path to the fortran compiler? Cheers Adam From robert.kern at gmail.com Sun Oct 28 22:23:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Oct 2007 21:23:37 -0500 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> Message-ID: <47254429.4030508@gmail.com> Adam Mercer wrote: > On 28/10/2007, Adam Mercer wrote: > >> Is there a way that I can specify which fortran compiler to use? > > I've got a little further with this, if I create a link to > gfortran-mp-4.2 somewhere in my $PATH called gfortran, then scipy > finds it and builds without issue. > > It also seems that I can specify which vendor of fortran compiler to use with > > $ python setup.py build --fcompiler= > > but this only seems to work when the fortran compiler uses the > standard name, is there a way to specify the path to the fortran > compiler? [numpy]$ python setup.py config_fc --help Running from numpy source directory. 
Common commands: (see '--help-commands' for more)

  setup.py build      will build the package underneath 'build/'
  setup.py install    will install the package

Global options:
  --verbose (-v)  run verbosely (default)
  --quiet (-q)    run quietly (turns verbosity off)
  --dry-run (-n)  don't actually do anything
  --help (-h)     show detailed help message

Options for 'config_fc' command:
  --fcompiler       specify Fortran compiler type
  --f77exec         specify F77 compiler command
  --f90exec         specify F90 compiler command
  --f77flags        specify F77 compiler flags
  --f90flags        specify F90 compiler flags
  --opt             specify optimization flags
  --arch            specify architecture specific optimization flags
  --debug (-g)      compile with debugging information
  --noopt           compile without optimization
  --noarch          compile without arch-dependent optimization
  --help-fcompiler  list available Fortran compilers

usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ramercer at gmail.com Sun Oct 28 22:26:23 2007 From: ramercer at gmail.com (Adam Mercer) Date: Sun, 28 Oct 2007 22:26:23 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <47254429.4030508@gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> Message-ID: <799406d60710281926l67b7deddg35f9c829d4830165@mail.gmail.com> On 28/10/2007, Robert Kern wrote: > [numpy]$ python setup.py config_fc --help Thanks, that's exactly what I was after!
Cheers Adam

From parvel.gu at gmail.com Sun Oct 28 22:36:38 2007 From: parvel.gu at gmail.com (Parvel Gu) Date: Mon, 29 Oct 2007 10:36:38 +0800 Subject: [SciPy-user] Generating random variables in a joint normal distribution? Message-ID: Hi all, Since I am new to using SciPy for simulation, I am not sure how to get a sequence of pairs of random variables in a joint normal distribution. I have read the info docs in the module scipy.stats. It seems that only the normal distribution rvs for a single variable is provided. Thus one sequence of random variables can be obtained by calling stats.norm.rvs() repeatedly (right?). But if I want pairs of random variables, for example, (p, s), which are expected to follow a joint normal distribution with some given correlation coefficient ro, would there be any routine provided to do this just like stats.norm.rvs? If currently there is no such routine, would there be some workaround to achieve this by combining the existing routines? I am not very good at numerical probabilities... Any suggestions would be much appreciated:) Regards, Parvel

From robert.kern at gmail.com Sun Oct 28 22:42:06 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Oct 2007 21:42:06 -0500 Subject: [SciPy-user] Generating random variables in a joint normal distribution? In-Reply-To: References: Message-ID: <4725487E.40509@gmail.com> Parvel Gu wrote: > Hi all, > > Since I am new to using SciPy for simulation, I am not sure how > to get a sequence of pairs of random variables in a joint normal > distribution. > > I have read the info docs in the module scipy.stats. It seems that > only the normal distribution rvs for a single variable is provided. Thus > one sequence of random variables can be obtained by calling > stats.norm.rvs() repeatedly (right?).
But if I want pairs of random > variables, for example, (p, s), which are expected to follow a joint > normal distribution with some given correlation coefficient ro, would there be any > routine provided to do this just like stats.norm.rvs? > > If currently there is no such routine, would there be some workaround > to achieve this by combining the existing routines? I am not very good > at numerical probabilities...

In [1]: from numpy import random

In [2]: random.multivariate_normal?
Type:        builtin_function_or_method
Base Class:
Namespace:   Interactive
Docstring:
    Return an array containing multivariate normally distributed random
    numbers with specified mean and covariance.

    multivariate_normal(mean, cov) -> random values
    multivariate_normal(mean, cov, [m, n, ...]) -> random values

    mean must be a 1 dimensional array. cov must be a square two dimensional
    array with the same number of rows and columns as mean has elements.

    The first form returns a single 1-D array containing a multivariate
    normal.

    The second form returns an array of shape (m, n, ..., cov.shape[0]).
    In this case, output[i,j,...,:] is a 1-D array containing a multivariate
    normal.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ramercer at gmail.com Sun Oct 28 23:30:06 2007 From: ramercer at gmail.com (Adam Mercer) Date: Sun, 28 Oct 2007 23:30:06 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <47254429.4030508@gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> Message-ID: <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> On 28/10/2007, Robert Kern wrote: > [numpy]$ python setup.py config_fc --help Getting a bit further now, but still running into problems.
Building with

$ python setup.py config_fc --f77exec=gfortran-mp-4.2 --f90exec=gfortran-mp-4.2 build

results in the error

building 'scipy.interpolate._fitpack' extension
warning: build_ext: extension 'scipy.interpolate._fitpack' has Fortran libraries but no Fortran linker found, using default linker
compiling C sources
C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -D__DARWIN_UNIX03

compile options: '-I/opt/local/lib/python2.5/site-packages/numpy/core/include -I/opt/local/include/python2.5 -c'
gcc: scipy/interpolate/_fitpackmodule.c
gcc -L/opt/local/lib -bundle -undefined dynamic_lookup build/temp.macosx-10.3-i386-2.5/scipy/interpolate/_fitpackmodule.o -Lbuild/temp.macosx-10.3-i386-2.5 -lfitpack -o build/lib.macosx-10.3-i386-2.5/scipy/interpolate/_fitpack.so
building 'scipy.interpolate.dfitpack' extension
error: extension 'scipy.interpolate.dfitpack' has Fortran sources but no Fortran compiler found

whereas creating a gfortran symlink and running

$ python setup.py build

results in no error. Is there a way I can build without creating the gfortran symlink, as I want to integrate this into a MacPorts Portfile? Cheers Adam

From robert.kern at gmail.com Sun Oct 28 23:45:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Oct 2007 22:45:19 -0500 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> Message-ID: <4725574F.1070501@gmail.com> Adam Mercer wrote: > On 28/10/2007, Robert Kern wrote: > >> [numpy]$ python setup.py config_fc --help > > Getting a bit further now, but still running into problems.
Building with > > $ python setup.py config_fc --f77exec=gfortran-mp-4.2 > --f90exec=gfortran-mp-4.2 build > > results in the error > > building 'scipy.interpolate._fitpack' extension > warning: build_ext: extension 'scipy.interpolate._fitpack' has Fortran > libraries but no Fortran linker found, using default linker > compiling C sources > C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp > -mno-fused-madd -DNDEBUG -D__DARWIN_UNIX03 > > compile options: > '-I/opt/local/lib/python2.5/site-packages/numpy/core/include > -I/opt/local/include/python2.5 -c' > gcc: scipy/interpolate/_fitpackmodule.c > gcc -L/opt/local/lib -bundle -undefined dynamic_lookup > build/temp.macosx-10.3-i386-2.5/scipy/interpolate/_fitpackmodule.o > -Lbuild/temp.macosx-10.3-i386-2.5 -lfitpack -o > build/lib.macosx-10.3-i386-2.5/scipy/interpolate/_fitpack.so > building 'scipy.interpolate.dfitpack' extension > error: extension 'scipy.interpolate.dfitpack' has Fortran sources but > no Fortran compiler found > > whereas creating a gfortran symlink and running > > $ python setup.py build > > results in no error. > > Is there a way I can build without creating the gfortran symlink, as I > want to integrate this into a MacPorts Portfile? Hmm, file a bug on the numpy Trac and assign it to dmcooke. He was the last to touch this area. It's possible there are still bugs. Attach the full output of the command to the ticket. Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From parvel.gu at gmail.com Mon Oct 29 00:26:04 2007 From: parvel.gu at gmail.com (Parvel Gu) Date: Mon, 29 Oct 2007 12:26:04 +0800 Subject: [SciPy-user] Generating random variables in a joint normal distribution? In-Reply-To: <4725487E.40509@gmail.com> References: <4725487E.40509@gmail.com> Message-ID: Hi, Thanks a lot. 
And I am still puzzled about the input arguments, mean and cov. So take my current problem for example. I am expecting the random variables P and S, which follow a joint normal distribution with (Mu)p=(Mu)s=0.5 (the mean?), and (Sigma)p=(Sigma)s=0.4 (the variance), and a coefficient ro = 0.8. According to the function multivariate_normal(mean, cov), only the matrices of mean and cov are provided as input. Mapping to my problem, the mean could be [0.5, 0.5], and the cov matrix is supposed to be

[ cov(p,p), cov(p,s)
  cov(s,p), cov(s,s) ]

Does this mean that we have to compute each cov(p, s) with some formula like ro = cov(p, s) / (sqrt(Dp) * sqrt(Ds)) = 0.8 and then fill the result into the cov matrix? Regards, Parvel

On 10/29/07, Robert Kern wrote: > In [1]: from numpy import random > > In [2]: random.multivariate_normal? > Type: builtin_function_or_method > Base Class: > Namespace: Interactive > Docstring: > Return an array containing multivariate normally distributed random numbers > with specified mean and covariance. > > multivariate_normal(mean, cov) -> random values > multivariate_normal(mean, cov, [m, n, ...]) -> random values > > mean must be a 1 dimensional array. cov must be a square two dimensional > array with the same number of rows and columns as mean has elements. > > The first form returns a single 1-D array containing a multivariate > normal. > > The second form returns an array of shape (m, n, ..., cov.shape[0]). > In this case, output[i,j,...,:] is a 1-D array containing a multivariate > normal. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth."
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

From robert.kern at gmail.com Mon Oct 29 00:37:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Oct 2007 23:37:16 -0500 Subject: [SciPy-user] Generating random variables in a joint normal distribution? In-Reply-To: References: <4725487E.40509@gmail.com> Message-ID: <4725637C.7030207@gmail.com> Parvel Gu wrote: > Hi, > > Thanks a lot. > And I am still puzzled about the input arguments, mean and cov. > > So take my current problem for example. I am expecting the random > variable P and S, which follow a joint normal distribution with > (Mu)p=(Mu)s=0.5 (the mean?), and (Sigma)p=(Sigma)s=0.4 (the variance), > and a coefficient ro = 0.8. Careful there. The Greek letter sigma is usually reserved for the standard deviation, the square root of variance. > According to the function multivariate_normal(mean, cov), only the > matrixes of mean and cov are provided as input. Mapping to my problem, > the mean could be [0.5, 0.5]. and the cov matrix is supposed to be > [ > cov(p,p), cov(p,s) > cov(s,p), cov(s,s) > ] > > Is it indicated that we have to get each cov(p, s) with some formula like > ro = cov(p, s) / (sqrt(Dp) * sqrt(Ds)) = 0.8 > then fill the result into the cov matrix? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From grh at mur.at Mon Oct 29 05:51:38 2007 From: grh at mur.at (Georg Holzmann) Date: Mon, 29 Oct 2007 10:51:38 +0100 Subject: [SciPy-user] MFCC implementation Message-ID: <4725AD2A.6060803@mur.at> Hello list! I only wanted to ask if there is already an MFCC implementation as used in speech or audio processing?
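[Editor's note: putting the joint-normal exchange above into code — a hedged sketch using the formula Robert confirmed, ro = cov(p,s) / (sigma_p * sigma_s). It assumes Parvel's 0.4 really is the variance; if it is the standard deviation, square it first. Variable names are mine.]

```python
import numpy as np

# Parvel's numbers: mu_p = mu_s = 0.5, variance 0.4 for each variable, ro = 0.8
rho = 0.8
var_p = var_s = 0.4
cov_ps = rho * np.sqrt(var_p) * np.sqrt(var_s)  # ro = cov(p,s) / (sigma_p * sigma_s)

mean = [0.5, 0.5]
cov = [[var_p, cov_ps],
       [cov_ps, var_s]]

# each row of `samples` is one (p, s) pair from the joint normal distribution
samples = np.random.multivariate_normal(mean, cov, 100000)
p, s = samples[:, 0], samples[:, 1]
# the empirical correlation of the pairs should come out near ro = 0.8
```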
Thanks for any hint, best regards, Georg

From ramercer at gmail.com Mon Oct 29 12:14:08 2007 From: ramercer at gmail.com (Adam Mercer) Date: Mon, 29 Oct 2007 12:14:08 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <4725574F.1070501@gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> <4725574F.1070501@gmail.com> Message-ID: <799406d60710290914h6ec0b99er76f61ed31e5f5dab@mail.gmail.com> On 28/10/2007, Robert Kern wrote: > Hmm, file a bug on the numpy Trac and assign it to dmcooke. He was the last to > touch this area. It's possible there are still bugs. Attach the full output of > the command to the ticket. Thanks. Will do, thanks for the help. Cheers Adam

From jaropis at gazeta.pl Mon Oct 29 14:41:02 2007 From: jaropis at gazeta.pl (jaropis) Date: Mon, 29 Oct 2007 19:41:02 +0100 Subject: [SciPy-user] Subject: Re: Scientific Python publications References: Message-ID: > Maybe Jarek could send us or post on his website a copy of a previous > version of the paper. These are usually not covered by the (abusive) > copyright of the journal No problem - the copyright agreement gives me the right to post it on my website, so here it is: http://www.if.uz.zgora.pl/~jaropis/geomasy The animation, all figures and all calculations were done in Python (+Scilab+Pylab), but obviously this is not a programming paper and Python is a research tool here (which I think is the goal of the creators of the wonderful language and libraries). Unfortunately there is only code for Matlab - just a few one-liners. In HRV (heart rate variability) research most people use Matlab and hardly anyone even knows about Python. I am trying to advertise Python, but the paper would not have been accepted with code in Python. Of course the paper clearly says that the calculations were done in Python.
Jarek

From jaropis at gazeta.pl Mon Oct 29 15:12:26 2007 From: jaropis at gazeta.pl (jaropis) Date: Mon, 29 Oct 2007 20:12:26 +0100 Subject: [SciPy-user] Subject: Re: Scientific Python publications References: Message-ID: > (+Scilab+ 'SciPy SciPy SciPy '+997*'SciPy' sorry Jarek

From stefan at sun.ac.za Mon Oct 29 17:07:26 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 29 Oct 2007 23:07:26 +0200 Subject: [SciPy-user] New example In-Reply-To: <20071026064019.GD9025@clipper.ens.fr> References: <20071025211521.GA15301@mentat.za.net> <20071026064019.GD9025@clipper.ens.fr> Message-ID: <20071029210726.GE30790@mentat.za.net> Hi all, Here is the code for doing an elementary finite difference heat-flow simulation. This is really just a repeated convolution, with certain values resetting after each time-step (sources or sinks). The code was written for Numeric, but it seems to run under NumPy without problems. Regards Stéfan On Fri, Oct 26, 2007 at 08:40:19AM +0200, Gael Varoquaux wrote: > Anne and Stefan, I think these examples are very useful, and definitely > should be on the scipy.org website. -------------- next part -------------- A non-text attachment was scrubbed... Name: fidihe.py Type: text/x-python Size: 3034 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fidihe.png Type: image/png Size: 4588 bytes Desc: not available URL:

From ryanlists at gmail.com Mon Oct 29 20:16:04 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 29 Oct 2007 19:16:04 -0500 Subject: [SciPy-user] optimize.fmin_l_bfgs_b and 'ABNORMAL_TERMINATION_IN_LNSRCH', Message-ID: I am using optimize.fmin_l_bfgs_b and getting the following output:

(array([ 8142982.47310469, 0. , 614438.11725001]), 1.58474444864e+12, {'funcalls': 21, 'grad': array([ 0. , 952148.4375, 24414.0625]), 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'warnflag': 2})

What does this mean?
I am trying to do a least squares curve fit between some noisy data and a nonlinear model. I need to constrain one of my fit parameters to be positive. I do not have an fprime function, so the gradient is being determined numerically. Is there a better routine to use? I get a fairly close answer using my own hacked solution of using optimize.fmin where the cost function adds a gigantic penalty to the cost if the middle coefficient is greater than 0: [ 8.16174249e+06 1.93528613e-10 6.14626152e+05] So, I don't think the answer is bad, I just want to know why it terminated abnormally and whether or not I can trust the result. Thanks, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominique.orban at gmail.com Mon Oct 29 22:51:23 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Mon, 29 Oct 2007 22:51:23 -0400 Subject: [SciPy-user] optimize.fmin_l_bfgs_b and 'ABNORMAL_TERMINATION_IN_LNSRCH', In-Reply-To: References: Message-ID: <8793ae6e0710291951u49f85728q55899fe4291c0136@mail.gmail.com> On 10/29/07, Ryan Krauss wrote: > I am using optimize.fmin_l_bfgs_b and getting the following output: > > (array([ 8142982.47310469, 0. , 614438.11725001]), > 1.58474444864e+12, > {'funcalls': 21, > 'grad': array([ 0. , 952148.4375, 24414.0625]), > 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', > 'warnflag': 2}) > > What does this mean? I am trying to do a least squares curve fit between > some noisy data and a nonlinear model. I need to constrain one of my fit > parameters to be positive. I do not have an fprime function, so the > gradient is being determined numerically. > > Is there a better routine to use? 
> > I get a fairly close answer using my own hacked solution of using > optimize.fmin where the cost function adds a gigantic penalty to the cost if > the middle coefficient is greater than 0: > [ 8.16174249e+06 1.93528613e-10 6.14626152e+05] > > So, I don't think the answer is bad, I just want to know why it terminated > abnormally and whether or not I can trust the result. Is it possible that your numerical gradient be (very) inaccurate ? Dominique From ryanlists at gmail.com Mon Oct 29 23:37:55 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 29 Oct 2007 22:37:55 -0500 Subject: [SciPy-user] optimize.fmin_l_bfgs_b and 'ABNORMAL_TERMINATION_IN_LNSRCH', In-Reply-To: <8793ae6e0710291951u49f85728q55899fe4291c0136@mail.gmail.com> References: <8793ae6e0710291951u49f85728q55899fe4291c0136@mail.gmail.com> Message-ID: Yes. On 10/29/07, Dominique Orban wrote: > > On 10/29/07, Ryan Krauss wrote: > > I am using optimize.fmin_l_bfgs_b and getting the following output: > > > > (array([ 8142982.47310469, 0. , 614438.11725001]), > > 1.58474444864e+12, > > {'funcalls': 21, > > 'grad': array([ 0. , 952148.4375, 24414.0625]), > > 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', > > 'warnflag': 2}) > > > > What does this mean? I am trying to do a least squares curve fit > between > > some noisy data and a nonlinear model. I need to constrain one of my > fit > > parameters to be positive. I do not have an fprime function, so the > > gradient is being determined numerically. > > > > Is there a better routine to use? > > > > I get a fairly close answer using my own hacked solution of using > > optimize.fmin where the cost function adds a gigantic penalty to the > cost if > > the middle coefficient is greater than 0: > > [ 8.16174249e+06 1.93528613e-10 6.14626152e+05] > > > > So, I don't think the answer is bad, I just want to know why it > terminated > > abnormally and whether or not I can trust the result. > > Is it possible that your numerical gradient be (very) inaccurate ? 
> > Dominique > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From dmitrey.kroshko at scipy.org Tue Oct 30 04:24:29 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 30 Oct 2007 10:24:29 +0200 Subject: [SciPy-user] optimize.fmin_l_bfgs_b and 'ABNORMAL_TERMINATION_IN_LNSRCH', In-Reply-To: References: <8793ae6e0710291951u49f85728q55899fe4291c0136@mail.gmail.com> Message-ID: <4726EA3D.70200@scipy.org> Your variables differ by several orders of magnitude, so you should either use software with automatic scaling or do it yourself:

scalePhactor = array([1e6, 1e-10, 1e5])
x0 /= scalePhactor
x_opt = a_solver(objfunc, x0,...)
x_opt *= scalePhactor

#########
def objfunc(x):
    x = x.copy() * scalePhactor
    ....
#########

I intend to implement automatic scaling in scikits.openopt but I have no time for now (btw it's present in my MATLAB OpenOpt ver). Also, you could be interested in other OO solvers for your problem - ALGENCAN or lincher; connection to lbfgsb is provided as well. Regards, D. Ryan Krauss wrote: > Yes. > > On 10/29/07, *Dominique Orban* > wrote: > On 10/29/07, Ryan Krauss > wrote: > > I am using optimize.fmin_l_bfgs_b and getting the following output: > > > > (array([ 8142982.47310469, 0. , 614438.11725001]), > > 1.58474444864e+12, > > {'funcalls': 21, > > 'grad': array([ 0. , 952148.4375, 24414.0625]), > > 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', > > 'warnflag': 2}) > > > > What does this mean? I am trying to do a least squares curve > fit between > > some noisy data and a nonlinear model. I need to constrain one > of my fit > > parameters to be positive. I do not have an fprime function, so > the > > gradient is being determined numerically. > > > > Is there a better routine to use?
> > > > I get a fairly close answer using my own hacked solution of using > > optimize.fmin where the cost function adds a gigantic penalty to > the cost if > > the middle coefficient is greater than 0: > > [ 8.16174249e+06 1.93528613e-10 6.14626152e+05] > > > > So, I don't think the answer is bad, I just want to know why it > terminated > > abnormally and whether or not I can trust the result. > > Is it possible that your numerical gradient be (very) inaccurate ? > > Dominique > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

From stefan at sun.ac.za Tue Oct 30 05:12:23 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 30 Oct 2007 11:12:23 +0200 Subject: [SciPy-user] Borda's mouthpiece and signed zero Message-ID: <20071030091223.GR30790@mentat.za.net> Hi all, After browsing through Kahan's slides titled "How Java's Floating-Point Hurts Everyone Everywhere" [1], I plotted Borda's Mouthpiece, as illustrated by David Williams [2]. The code is attached. It would also be interesting to simulate the anomaly shown in [2], but I don't know how. Regards Stéfan [1] http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf [2] http://www-personal.umich.edu/~williams/archive/forth/complex/borda.html -------------- next part -------------- A non-text attachment was scrubbed...
Name: borda.py Type: text/x-python Size: 909 bytes Desc: not available URL: From dominique.orban at gmail.com Tue Oct 30 12:16:27 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Tue, 30 Oct 2007 12:16:27 -0400 Subject: [SciPy-user] optimize.fmin_l_bfgs_b and 'ABNORMAL_TERMINATION_IN_LNSRCH', In-Reply-To: References: <8793ae6e0710291951u49f85728q55899fe4291c0136@mail.gmail.com> Message-ID: <8793ae6e0710300916g39c4c9cj47e1f9f5b405acef@mail.gmail.com> > On 10/29/07, Dominique Orban wrote: > > Is it possible that your numerical gradient be (very) inaccurate ? On 10/29/07, Ryan Krauss wrote: > Yes. That may be it. LBFGS-B computes a search direction d by first solving a linear system of the form Bd = -g where B is some positive definite matrix and g is the gradient (approximated numerically in your case). Next, it performs a linesearch along d. For everything to work well, the angle between d and the exact gradient must be larger than 90 degrees, but must not come too close to 90 degrees. If it does, the linesearch may fail to identify an appropriate steplength. In typical situations, when g is the exact gradient, all is well as long as B is not too ill-conditioned. When g is no longer exact, everything is possible. You may want to look into obtaining a more accurate approximation of your gradient. I hope this helps, Dominique From jdh2358 at gmail.com Tue Oct 30 12:26:19 2007 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 30 Oct 2007 11:26:19 -0500 Subject: [SciPy-user] nullclines for nonlinear ODEs Message-ID: <88e473830710300926l66c350c3k72b7fda9e6fd5c31@mail.gmail.com> In a workshop on scientific computing in python I ran with Andrew Straw last weekend at the Claremont Colleges, I presented a unit on ODEs, numerically integrating the Lotka-Volterra equations, drawing a direction field, plotting a trajectory in the phase plane, and plotting the nullclines.
For the nullclines, I used matplotlib's contouring routine with levels=[0], but I was wondering if there was a better/more direct route. The example script is included below (and attached)

import numpy as n
import pylab as p
import scipy.integrate as integrate

def dr(r, f):
    return alpha*r - beta*r*f

def df(r, f):
    return gamma*r*f - delta*f

def derivs(state, t):
    """
    Map the state variable [rabbits, foxes] to the derivatives
    [deltar, deltaf] at time t
    """
    #print t, state
    r, f = state       # rabbits and foxes
    deltar = dr(r, f)  # change in rabbits
    deltaf = df(r, f)  # change in foxes
    return deltar, deltaf

alpha, delta = 1, .25
beta, gamma = .2, .05

# the initial population of rabbits and foxes
r0 = 20
f0 = 10

t = n.arange(0.0, 100, 0.1)
y0 = [r0, f0]  # the initial [rabbits, foxes] state vector
y = integrate.odeint(derivs, y0, t)
r = y[:,0]  # extract the rabbits vector
f = y[:,1]  # extract the foxes vector

p.figure()
p.plot(t, r, label='rabbits')
p.plot(t, f, label='foxes')
p.xlabel('time (years)')
p.ylabel('population')
p.title('population trajectories')
p.grid()
p.legend()
p.savefig('lotka_volterra.png', dpi=150)
p.savefig('lotka_volterra.eps')

p.figure()
p.plot(r, f)
p.xlabel('rabbits')
p.ylabel('foxes')
p.title('phase plane')

# make a direction field plot with quiver
rmax = 1.1 * r.max()
fmax = 1.1 * f.max()
R, F = n.meshgrid(n.arange(-1, rmax), n.arange(-1, fmax))
dR = dr(R, F)
dF = df(R, F)
p.quiver(R, F, dR, dF)

R, F = n.meshgrid(n.arange(-1, rmax, .1), n.arange(-1, fmax, .1))
dR = dr(R, F)
dF = df(R, F)
p.contour(R, F, dR, levels=[0], linewidths=3, colors='black')
p.contour(R, F, dF, levels=[0], linewidths=3, colors='black')
p.ylabel('foxes')
p.title('trajectory, direction field and null clines')
p.savefig('lotka_volterra_pplane.png', dpi=150)
p.savefig('lotka_volterra_pplane.eps')
p.show()

-------------- next part -------------- A non-text attachment was scrubbed...
Name: lotka_volterra.py Type: text/x-python Size: 1706 bytes Desc: not available URL: From jdh2358 at gmail.com Tue Oct 30 12:28:49 2007 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 30 Oct 2007 11:28:49 -0500 Subject: [SciPy-user] nullclines for nonlinear ODEs In-Reply-To: <88e473830710300926l66c350c3k72b7fda9e6fd5c31@mail.gmail.com> References: <88e473830710300926l66c350c3k72b7fda9e6fd5c31@mail.gmail.com> Message-ID: <88e473830710300928r68f376b8k2a6a5c76b04c11c1@mail.gmail.com> On 10/30/07, John Hunter wrote: > In a workshop on scientific computing in python I ran with Andrew > Straw last weekend at the Claremont Colleges, I presented a unit on > ODEs, numerically integrating the Lotka-Volterra equations, drawing a > direction field, plotting a trajectory in the phase plane, and > plotting the nullclines. For the nullclines, I used matplotlib's > contouring routine with levels=[0], but I was wondering if there was a > better/more direct route. I should add, of course for the Lotka-Volterra equations one can easily do this analytically, but I was trying to illustrate a general approach for equations in which one could not necessarily write analytic expressions for the nullclines. JDH From berthe.loic at gmail.com Tue Oct 30 14:39:54 2007 From: berthe.loic at gmail.com (LB) Date: Tue, 30 Oct 2007 11:39:54 -0700 Subject: [SciPy-user] nullclines for nonlinear ODEs In-Reply-To: <88e473830710300928r68f376b8k2a6a5c76b04c11c1@mail.gmail.com> References: <88e473830710300926l66c350c3k72b7fda9e6fd5c31@mail.gmail.com> <88e473830710300928r68f376b8k2a6a5c76b04c11c1@mail.gmail.com> Message-ID: <1193769594.019129.155320@d55g2000hsg.googlegroups.com> Hi, It's really funny, I was planning to add an entry to the Cookbook about integrating ODEs and I chose the Lotka-Volterra model as a basis too :-) I still haven't found the time to create the page, and I didn't plot any nullcline for the moment.
I have just one remark on your example: if you look closely at your second graph, you can see that the trajectory crosses some arrows of the direction field. I had this problem too, before forcing matplotlib to use equal axes. -- LB From cookedm at physics.mcmaster.ca Tue Oct 30 20:11:43 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 30 Oct 2007 20:11:43 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <799406d60710290914h6ec0b99er76f61ed31e5f5dab@mail.gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> <4725574F.1070501@gmail.com> <799406d60710290914h6ec0b99er76f61ed31e5f5dab@mail.gmail.com> Message-ID: On Oct 29, 2007, at 12:14 , Adam Mercer wrote: > On 28/10/2007, Robert Kern wrote: > >> Hmm, file a bug on the numpy Trac and assign it to dmcooke. He was >> the last to >> touch this area. It's possible there are still bugs. Attach the >> full output of >> the command to the ticket. Thanks. > > Will do, thanks for the help. (typo there, assign to cookedm). I have much the same setup, except I use Python 2.5 from python.org instead of MacPorts'. I have this in ~/.pydistutils.cfg:

[config_fc]
fcompiler=gfortran
f77exec=gfortran-mp-4.2
f90exec=gfortran-mp-4.2

and I have no problems. I'll take a look at it passing the above on the command line like you're doing. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From zelbier at gmail.com Wed Oct 31 07:01:50 2007 From: zelbier at gmail.com (Olivier Verdier) Date: Wed, 31 Oct 2007 12:01:50 +0100 Subject: [SciPy-user] Bug in scipy.sparse Message-ID: The first thing I'm surprised about with scipy.sparse is that it uses the matrix type instead of the array type.
This is very unfortunate, in my opinion. The bug is that a sparse matrix won't work correctly with `dot` and `array`:

spmat = speye(3,3)
vec = array([1,0,0])
dot(spmat, vec)

returns:

array([ (0, 0) 1.0
  (1, 1) 1.0
  (2, 2) 1.0,
  (0, 0) 0.0
  (1, 1) 0.0
  (2, 2) 0.0,
  (0, 0) 0.0
  (1, 1) 0.0
  (2, 2) 0.0], dtype=object)

Ouch!! The right result may however be obtained with spmat * vec. This is a pity because any code that works with arrays or matrices in general and uses `dot` to do the matrix-vector multiplication will be *broken* with sparse matrices. How reliable is scipy.sparse? Is there any plan to make it more compatible with the array type? Behave like the array type? How can I help? == Olivier -------------- next part -------------- An HTML attachment was scrubbed... URL: From cclarke at chrisdev.com Wed Oct 31 09:50:27 2007 From: cclarke at chrisdev.com (Christopher Clarke) Date: Wed, 31 Oct 2007 09:50:27 -0400 Subject: [SciPy-user] casting ndarray object to double Message-ID: Hi What's the most efficient way of casting an object array to a double ndarray, other than numpy.array(obj_arr.tolist(), dtype='d')? The original data is a list of tuples from a SQL query which could potentially contain None. Indeed, a masked array would be more ideal, where None is replaced by the mask. I'm currently using a list comprehension to replace the None in the original data with the mask, but I wish to eliminate this step if possible. Regards Chris From matthieu.brucher at gmail.com Wed Oct 31 09:53:24 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 31 Oct 2007 14:53:24 +0100 Subject: [SciPy-user] casting ndarray object to double In-Reply-To: References: Message-ID: Hi, Did you try numpy.asfarray(obj_arr) ? Matthieu 2007/10/31, Christopher Clarke : > > Hi > What's the most efficient way of casting an object array to a double > ndarray?
> other than > numpy.array(obj_arr.tolist(),dtype='d') > > The original data is a list of tuples form a SQL query which could > potentially contain None > Indeed a masked array would be more ideal where None is replaced by > the mask > I'm currently using list comprehension to replace the None in the > original data with the mask > But i wish to eliminate this step if possible > Regards > Chris > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Wed Oct 31 10:26:46 2007 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 31 Oct 2007 10:26:46 -0400 Subject: [SciPy-user] nullclines for nonlinear ODEs In-Reply-To: <1193769594.019129.155320@d55g2000hsg.googlegroups.com> References: <88e473830710300926l66c350c3k72b7fda9e6fd5c31@mail.gmail.com> <88e473830710300928r68f376b8k2a6a5c76b04c11c1@mail.gmail.com> <1193769594.019129.155320@d55g2000hsg.googlegroups.com> Message-ID: PyDSTool has a basic find_nullclines function which certainly works for the L-V model and some other planar nonlinear systems. It's naively implemented and I put it together quickly to get some nullclines for a particular system, so use it at your own risk. You'll find it in the Toolbox/phaseplane.py module, along with some functions for finding fixed points, saddle manifolds, etc. -Rob On 30/10/2007, LB wrote: > Hi, > > It's really funny, I was planning to add an issue in the Cookbook > about integrating ODE and I chose the Lokta-Volterra model as a basis > too :-) > I still haven't found the time to create the page, and I didn't plot > any nullcline for the moment. 
> > I've just one remark on your example : if you look closely to your > second graph, you can see that the trajectory crosses some arrows of > the direction field. I had this problem too, before forcing matplotlib > to use equal axis. > > > -- > LB > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-651-2246 http://www.mathstat.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/ From cookedm at physics.mcmaster.ca Wed Oct 31 12:12:58 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 31 Oct 2007 12:12:58 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> Message-ID: <54D6FEEC-043D-4B1B-A0E4-B7AB7DC98CA4@physics.mcmaster.ca> On Oct 28, 2007, at 23:30 , Adam Mercer wrote: > On 28/10/2007, Robert Kern wrote: > >> [numpy]$ python setup.py config_fc --help > > Getting a bit further now, but still running into problems. 
Building > with > > $ python setup.py config_fc --f77exec=gfortran-mp-4.2 > --f90exec=gfortran-mp-4.2 build > > results in the error > > building 'scipy.interpolate._fitpack' extension > warning: build_ext: extension 'scipy.interpolate._fitpack' has Fortran > libraries but no Fortran linker found, using default linker > compiling C sources > C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp > -mno-fused-madd -DNDEBUG -D__DARWIN_UNIX03 > > compile options: > '-I/opt/local/lib/python2.5/site-packages/numpy/core/include > -I/opt/local/include/python2.5 -c' > gcc: scipy/interpolate/_fitpackmodule.c > gcc -L/opt/local/lib -bundle -undefined dynamic_lookup > build/temp.macosx-10.3-i386-2.5/scipy/interpolate/_fitpackmodule.o > -Lbuild/temp.macosx-10.3-i386-2.5 -lfitpack -o > build/lib.macosx-10.3-i386-2.5/scipy/interpolate/_fitpack.so > building 'scipy.interpolate.dfitpack' extension > error: extension 'scipy.interpolate.dfitpack' has Fortran sources but > no Fortran compiler found I can't reproduce this error (yes, I commented out the config_fc section in my ~/.pydistutils.cfg ;). Are you using 1.0.3.1? It looks like changes I made in numpy.distutils.fcompiler didn't make it into that release (dangers of committing just before the release, I guess). Although, looking at them, they shouldn't make a difference. Try a current svn version of numpy. > whereas creating a gfortran symlink and running > > $ python setup.py build > > results in no error. > > Is there a way I can build without creating the gfortran symlink, as I > want to integrate this into a MacPorts Portfile? If it's fixed (as I think it is), best to wait for 1.0.4. I suppose you could make a symlink to gfortran-mp-42 in the work/ directory, and add it to the PATH that'd be used in the Portfile. -- |>|\/|< /------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From pgmdevlist at gmail.com Wed Oct 31 13:06:37 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 31 Oct 2007 13:06:37 -0400 Subject: [SciPy-user] casting ndarray object to double In-Reply-To: References: Message-ID: <200710311306.37689.pgmdevlist@gmail.com> On Wednesday 31 October 2007 09:50:27 Christopher Clarke wrote: > Hi > What's the most efficient way of casting an object array to a double > ndarray? > other than > numpy.array(obj_arr.tolist(),dtype='d') Something like that?

>>> import numpy
>>> obj_list = [(1,1),(2,2),(None,3)]
>>> obj_arr = numpy.array(obj_list, dtype=numpy.object_)
>>> obj_arr
array([[1, 1],
       [2, 2],
       [None, 3]], dtype=object)

With numpy.core.ma:

>>> import numpy.core.ma as ma
>>> obj_flt = obj_arr.astype(numpy.float_)
>>> obj_mask = ma.array(obj_flt, mask=numpy.isnan(obj_flt))
>>> obj_mask
array(data =
 [[  1.00000000e+00   1.00000000e+00]
 [  2.00000000e+00   2.00000000e+00]
 [  1.00000000e+20   3.00000000e+00]],
      mask =
 [[False False]
 [False False]
 [ True False]],
      fill_value=1e+20)

With maskedarray (the now infamous alternative way, still in scipy SVN):

>>> import maskedarray
>>> obj_mask = maskedarray.fix_invalid(obj_arr.astype(numpy.float_))
>>> obj_mask
masked_array(data =
 [[1.0 1.0]
 [2.0 2.0]
 [-- 3.0]],
      mask =
 [[False False]
 [False False]
 [ True False]],
      fill_value=1e+20)

maskedarray.fix_invalid will transform the nans and infs in your array into floats and mask the corresponding values. The numpy.core.ma version will keep the nans and infs in the data and will only mask them.

> The original data is a list of tuples from a SQL query which could
> potentially contain None

The trick of going through float w/ astype should work OK if your tuples don't have strings hidden somewhere.
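[Editor's note: Pierre's astype-then-mask trick can be condensed into a small self-contained snippet. This is a sketch against current NumPy, where numpy.ma plays the role of the numpy.core.ma / maskedarray packages discussed above; the sample rows are made up.]

```python
import numpy as np
import numpy.ma as ma

# Made-up rows of the kind a SQL query might return; None marks missing data.
rows = [(1, 1), (2, 2), (None, 3)]

obj_arr = np.array(rows, dtype=object)
flt = obj_arr.astype(float)      # None becomes nan in the object-to-float cast
masked = ma.masked_invalid(flt)  # mask the nan (and inf) entries
print(masked)
```

ma.masked_invalid combines the astype/isnan steps of the numpy.core.ma recipe into one call; no list comprehension over the rows is needed.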
From ramercer at gmail.com Wed Oct 31 16:02:07 2007 From: ramercer at gmail.com (Adam Mercer) Date: Wed, 31 Oct 2007 16:02:07 -0400 Subject: [SciPy-user] Specifying fortran compiler In-Reply-To: <54D6FEEC-043D-4B1B-A0E4-B7AB7DC98CA4@physics.mcmaster.ca> References: <799406d60710281113m7865640ds11d9e66665f09c2b@mail.gmail.com> <799406d60710281918j616481a1te3fdcf88c45481a@mail.gmail.com> <47254429.4030508@gmail.com> <799406d60710282030t77121657s894b6859113700a@mail.gmail.com> <54D6FEEC-043D-4B1B-A0E4-B7AB7DC98CA4@physics.mcmaster.ca> Message-ID: <799406d60710311302k67a61882y52c2502bf52ecaf5@mail.gmail.com> On 31/10/2007, David M. Cooke wrote: > Are you using 1.0.3.1? It looks like changes I made in > numpy.distutils.fcompiler didn't make it into that release (dangers of > committing just before the release, I guess). Although, looking at > them, they shouldn't make a difference. Yep numpy-1.0.3.1 and scipy-0.6.0 > Try a current svn version of numpy. Will do, thanks. Cheers Adam From s.mientki at ru.nl Wed Oct 31 18:20:56 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Wed, 31 Oct 2007 23:20:56 +0100 Subject: [SciPy-user] Scipy + Vision = LabView ? Message-ID: <4728FFC8.2070007@ru.nl> hello, I found this demo (5 min) of Vision today http://www.osc.edu/~unpingco/Tutorial1.html If we combine this with Scipy, we've an almost complete replacement of LabView. Or am I dreaming ? cheers, Stef Mientki From matthieu.brucher at gmail.com Wed Oct 31 18:31:20 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 31 Oct 2007 23:31:20 +0100 Subject: [SciPy-user] Scipy + Vision = LabView ? In-Reply-To: <4728FFC8.2070007@ru.nl> References: <4728FFC8.2070007@ru.nl> Message-ID: This is what David Cournapeau presented once (but with Orange). 
It is one of my final goals as well, but frankly, Vision is not very pretty (there is something called VPython, or something like that, which presents the same kind of interface and is about as pretty as Vision) Matthieu 2007/10/31, Stef Mientki : > > hello, > > I found this demo (5 min) of Vision today > > http://www.osc.edu/~unpingco/Tutorial1.html > > If we combine this with Scipy, > we've an almost complete replacement of LabView. > Or am I dreaming ? > > cheers, > Stef Mientki > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pebarrett at gmail.com Wed Oct 31 18:36:44 2007 From: pebarrett at gmail.com (Paul Barrett) Date: Wed, 31 Oct 2007 18:36:44 -0400 Subject: [SciPy-user] Scipy + Vision = LabView ? In-Reply-To: <4728FFC8.2070007@ru.nl> References: <4728FFC8.2070007@ru.nl> Message-ID: <40e64fa20710311536v68f351bbq4cbc7a836e7620bf@mail.gmail.com> No, you are not dreaming. I also commented on this possibility several years ago to some people in the SciPy community. Unfortunately, I have not had the time or motivation to act on it. It would certainly be nice to have a basic package to show people. You'll need to create a set of APIs so that you can connect to instrumentation. It shouldn't be too hard now that the interfaces have been standardized. -- Paul On Oct 31, 2007 6:20 PM, Stef Mientki wrote: > hello, > > I found this demo (5 min) of Vision today > > http://www.osc.edu/~unpingco/Tutorial1.html > > If we combine this with Scipy, > we've an almost complete replacement of LabView. > Or am I dreaming ?
> > cheers, > Stef Mientki > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Wed Oct 31 18:51:07 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 31 Oct 2007 23:51:07 +0100 Subject: [SciPy-user] Scipy + Vision = LabView ? In-Reply-To: <40e64fa20710311536v68f351bbq4cbc7a836e7620bf@mail.gmail.com> References: <4728FFC8.2070007@ru.nl> <40e64fa20710311536v68f351bbq4cbc7a836e7620bf@mail.gmail.com> Message-ID: 2007/10/31, Paul Barrett : > > No, you are not dreaming. I also commented on this possibility > several years ago to some people in the SciPy community. > Unfortunately, I have not had the time or motivation to act on it. It > would certainly be nice to have a basic package to show people. > You'll need to create a set of APIs so that you can connect to > instrumentation. It shouldn't be too hard now that the interfaces > have been standardized. Sorry, it was in fact Vision and Viper I saw in the past as well. The idea is great but the application, IMHO, is not quite optimal:
- even for the tutorial, the rotate block should have two inputs, the image and the rotation angle (this last factor could be given by the result of another algorithm)
- what about multithreading in this model?
- what about flow control? (which is really a huge issue that is lacking in ITK, for instance)
- is there a way to create a bunch of blocks easily?
All this has consequences for the interface. Besides, it would be great to use Traits as input and output blocks. Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 -------------- next part -------------- An HTML attachment was scrubbed... URL:
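[Editor's note: Matthieu's first point — a rotate block with two inputs, where the angle may itself be the output of another block — can be sketched as a minimal dataflow model in plain Python. Everything here is hypothetical: none of these class or method names come from Vision, Viper, or Traits; it only illustrates the wiring he describes.]

```python
# Minimal, hypothetical dataflow sketch: a block has named inputs that can be
# wired either to plain values or to other blocks, whose outputs are pulled
# on demand when the block is evaluated.
class Block:
    def __init__(self, func, *input_names):
        self.func = func
        self.inputs = {name: None for name in input_names}

    def wire(self, name, source):
        # source is either a constant or another Block
        self.inputs[name] = source

    def evaluate(self):
        # pull inputs, recursively evaluating upstream blocks
        args = [src.evaluate() if isinstance(src, Block) else src
                for src in self.inputs.values()]
        return self.func(*args)

# A "rotate" block with two inputs: the image and the rotation angle.
rotate = Block(lambda image, angle: f"{image} rotated by {angle} deg",
               "image", "angle")

# The angle is produced by another block, as suggested above.
estimate_angle = Block(lambda: 30)

rotate.wire("image", "photo.png")
rotate.wire("angle", estimate_angle)
print(rotate.evaluate())  # → photo.png rotated by 30 deg
```

A real implementation would also need caching, flow control, and threading — exactly the open questions raised in the message above — but the two-input wiring itself fits in a few lines.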