From rsamurti at airtelbroadband.in Wed Aug 1 00:08:46 2007 From: rsamurti at airtelbroadband.in (R S Ananda Murthy) Date: Wed, 01 Aug 2007 09:38:46 +0530 Subject: [SciPy-user] Error in SciPy install on Zenwalk-4.6.1 system. In-Reply-To: <46AFFDC3.8060905@airtelbroadband.in> References: <46AFFDC3.8060905@airtelbroadband.in> Message-ID: <46B0074E.6050801@airtelbroadband.in> R S Ananda Murthy wrote: > Hello, > > I am trying to make package of SciPy on Zenwalk-4.6.1. I have already > installed numpy, lapack, fftw. When I do > > python setup.py install --prefix=/usr --root=$dest > > setup runs for some time, and then I get the following message: > > File "/usr/lib/python2.5/site-packages/numpy/distutils/misc_util.py", > line 687, in _get_configuration_from_setup_py > ('.py', 'U', 1)) > File "Lib/odr/setup.py", line 9, in > from numpy.distutils.misc_util import get_path, Configuration, dot_join > ImportError: cannot import name get_path > > > Why am I getting this ImportError? Because I screwed up just before the numpy 1.0.3 release and removed that deprecated function without realizing that scipy 0.5.2 still used it. > How to correct this? This is fixed in the SVN scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user Thank you so much Robert Kern for your immediate reply. Can you please tell me how to get SVN SciPy or what patches I should apply to 0.5.2 version of SciPy to eliminate this error? Thanks again for your help and time, Anand From david at ar.media.kyoto-u.ac.jp Wed Aug 1 00:04:19 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 01 Aug 2007 13:04:19 +0900 Subject: [SciPy-user] Error in SciPy install on Zenwalk-4.6.1 system. 
In-Reply-To: <46B0074E.6050801@airtelbroadband.in> References: <46AFFDC3.8060905@airtelbroadband.in> <46B0074E.6050801@airtelbroadband.in> Message-ID: <46B00643.4050706@ar.media.kyoto-u.ac.jp> R S Ananda Murthy wrote: > R S Ananda Murthy wrote: > >> Hello, >> >> I am trying to make package of SciPy on Zenwalk-4.6.1. I have already >> installed numpy, lapack, fftw. When I do >> >> python setup.py install --prefix=/usr --root=$dest >> >> setup runs for some time, and then I get the following message: >> >> File "/usr/lib/python2.5/site-packages/numpy/distutils/misc_util.py", >> line 687, in _get_configuration_from_setup_py >> ('.py', 'U', 1)) >> File "Lib/odr/setup.py", line 9, in >> from numpy.distutils.misc_util import get_path, Configuration, dot_join >> ImportError: cannot import name get_path >> >> >> Why am I getting this ImportError? >> > > Because I screwed up just before the numpy 1.0.3 release and removed that > deprecated function without realizing that scipy 0.5.2 still used it. > > >> How to correct this? >> > > This is fixed in the SVN scipy. > > Anand, I answer to you in this mail since it seems your reply is mislocated in your email. To fetch numpy and scipy from sources, you need to install subversion first, and then: svn co http://svn.scipy.org/svn/numpy/trunk numpy.svn svn co http://svn.scipy.org/svn/scipy/trunk scipy.svn And then build numpy and scipy from sources as usual. Note that your problem is not scipy but numpy (numpy is used by scipy to build). David From rhc28 at cornell.edu Wed Aug 1 00:39:05 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 1 Aug 2007 00:39:05 -0400 Subject: [SciPy-user] restricting optimize.leastsq to positive results In-Reply-To: References: Message-ID: Hi Christoph, If you want to keep using the leastsq code then the way to do it is to penalize negative values via your residual. This needs to be done in a way that makes the algorithm think that the optimal solution is only in the positive half-space. 
So when you write your residual function as a python function, you have various options. A really simple one that breaks some of the assumptions of smoothness (i.e. for people who don't care to retain theoretical conditions on the convergence properties of leastsq, esp. for such a simple problem as this...) might look something like this (off the top of my head):

from numpy import any
from numpy.linalg import norm

# a global algorithmic parameter that should be fairly large,
# depending on the scale of your residual function res(p)
neg_penalty = 100

def res(p):
    # this is your original unconstrained residual based on distance
    # of your data points from your fitted line
    return

def residual(p):
    # Non-negativity pseudo-constraint
    # Assume p = array([a, b, A, B]) are floats, chosen by leastsq
    if any(p < 0):
        return neg_penalty * norm(p[p < 0]) * res(p)
    else:
        return res(p)

I haven't tested this exact code, but I use similar constructions all the time. By factoring in the magnitude of negativity in the parameters you give the algorithm some information about a gradient, so that it better understands how to find a better estimate. The assumption here is that you have the generic case with isolated solutions to your problem; otherwise leastsq might track back from a negative to a zero value for one or more parameters. This might not be what you want to happen, but there are fancier ways of keeping it from even getting *near* zero (like adding a very steep exponential-like function that increases as p values get close to zero). Put in print statements to see the norm of the values being returned in each if-statement branch to diagnose whether you're using an appropriate value for neg_penalty. There are undoubtedly more elegant ways to do this in python than using a global like this; for instance, you can pass additional parameters properly through the call to residual (see the doc string for leastsq).
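To see the penalty idea run end to end, here is a self-contained sketch wired into scipy.optimize.leastsq. The basis functions A and B, the synthetic data, the starting point, and the penalty weight are all invented for this example, not taken from the original posting:

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic stand-in for the poster's problem: y = a*A(x) + b*B(x) with
# a, b known to be positive. A, B, and the data are made up for the demo.
x = np.linspace(0.0, 2.0, 50)
A = np.exp(-x)
B = x ** 2
y = 1.5 * A + 0.7 * B

neg_penalty = 100.0  # should be large relative to the scale of the residual

def res(p):
    # plain unconstrained residual vector
    return p[0] * A + p[1] * B - y

def residual(p):
    p = np.asarray(p)
    if np.any(p < 0):
        # scale the residual by how negative the offending parameters are,
        # steering leastsq back toward the positive orthant
        return neg_penalty * np.linalg.norm(p[p < 0]) * res(p)
    return res(p)

p_fit, ier = leastsq(residual, x0=np.array([0.1, 0.1]))
print(p_fit)  # close to the true values [1.5, 0.7]
```

Because the model is linear in the parameters and the start is feasible, leastsq recovers the positive solution here without ever triggering the penalty branch; the penalty only matters when an iterate strays negative.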
BTW your posting was unclear about whether A and B are scalar functions of x or just floats. If they are functions you'll have work just a little harder in your residual function to ensure A(x), B(x) stay positive. Let me know if this works out for you. Rob On 31/07/07, Christoph Rademacher wrote: > Hi all, > > I am using optimize.leastsq to fit my experimental data. Is there a > way to restrict the results to be a positive float? > e.g. my target function is a simple linear combination of three > variables to fit > f(x) : return a*A(x) + b*B(x) > > From my experiment I know that a,b,A,B can only be positive floats. > How do I put this extra information in optimize.leastsq? > > Thanks for your help, > > Christoph From robert.kern at gmail.com Wed Aug 1 00:48:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 31 Jul 2007 23:48:29 -0500 Subject: [SciPy-user] Error in SciPy install on Zenwalk-4.6.1 system. In-Reply-To: <46B00643.4050706@ar.media.kyoto-u.ac.jp> References: <46AFFDC3.8060905@airtelbroadband.in> <46B0074E.6050801@airtelbroadband.in> <46B00643.4050706@ar.media.kyoto-u.ac.jp> Message-ID: <46B0109D.4020808@gmail.com> David Cournapeau wrote: > Anand, I answer to you in this mail since it seems your reply is > mislocated in your email. To fetch numpy and scipy from sources, you > need to install subversion first, and then: > > svn co http://svn.scipy.org/svn/numpy/trunk numpy.svn > svn co http://svn.scipy.org/svn/scipy/trunk scipy.svn > > And then build numpy and scipy from sources as usual. Note that your > problem is not scipy but numpy (numpy is used by scipy to build). Well, sort of. While I shouldn't have removed get_path(), odr's setup.py shouldn't have been using it. Of course, the fault is mine for both! 
Anand, if you just want to patch the scipy 0.5.2 that you have, you can see the relevant changes here: http://projects.scipy.org/scipy/scipy/changeset?new=trunk%2FLib%2Fodr%2Fsetup.py%403006&old=trunk%2FLib%2Fodr%2Fsetup.py%402596 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Aug 1 01:17:33 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 01 Aug 2007 14:17:33 +0900 Subject: [SciPy-user] Error in SciPy install on Zenwalk-4.6.1 system. In-Reply-To: <46B0109D.4020808@gmail.com> References: <46AFFDC3.8060905@airtelbroadband.in> <46B0074E.6050801@airtelbroadband.in> <46B00643.4050706@ar.media.kyoto-u.ac.jp> <46B0109D.4020808@gmail.com> Message-ID: <46B0176D.9090609@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: > >> Anand, I answer to you in this mail since it seems your reply is >> mislocated in your email. To fetch numpy and scipy from sources, you >> need to install subversion first, and then: >> >> svn co http://svn.scipy.org/svn/numpy/trunk numpy.svn >> svn co http://svn.scipy.org/svn/scipy/trunk scipy.svn >> >> And then build numpy and scipy from sources as usual. Note that your >> problem is not scipy but numpy (numpy is used by scipy to build). > > Well, sort of. While I shouldn't have removed get_path(), odr's setup.py > shouldn't have been using it. Of course, the fault is mine for both! > Is this the only problem arising when building current relased scipy + nump ? I don't remember the details, but I had to backport other changes to make rpms build and succeed the test suites (maybe this was only for 64 bits arch, don't remember). 
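If you want the gist of that changeset without following the link, the essential edit is to stop importing the removed get_path name from numpy.distutils.misc_util. The snippet below rehearses that edit on a one-line stand-in string; the real Lib/odr/setup.py has more content, and the actual changeset also reworks how the local path is computed, so treat this as a partial sketch only:

```python
# Stand-in for the offending import line in Lib/odr/setup.py.
demo = "from numpy.distutils.misc_util import get_path, Configuration, dot_join\n"

# Drop get_path, which numpy 1.0.3 no longer provides.
patched = demo.replace("get_path, ", "")
print(patched.strip())
```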
David From matthieu.brucher at gmail.com Wed Aug 1 01:51:22 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 1 Aug 2007 07:51:22 +0200 Subject: [SciPy-user] Finding Neighboors In-Reply-To: References: Message-ID: Hi, I have an implementation, but it depends on my own matrix library... That's a stopper... But it works. I do not know the other templated matrix libraries very well, but I'd say I do not need much to make it work with another library. Matthieu 2007/8/1, Alan G Isaac : > > On Tue, 17 Apr 2007, Matthieu Brucher apparently wrote: > > I wanted to know if there was a module in scipy that is able to find the > > k-neighboors of a point ? > > If so, is there an optimized one - tree-based search - ? > > If not, I'm doing the optimized version. > > Hi Matthieu, > > Where did you go with this? > > Thanks! > Alan > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emanuelez at gmail.com Wed Aug 1 02:43:32 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Wed, 1 Aug 2007 08:43:32 +0200 Subject: [SciPy-user] Finding Neighboors In-Reply-To: References: Message-ID: Hi, you might want to take a look at kd-trees. No implementation in scipy, but it should not be too hard to achieve. As far as i can remember its definition in wikipedia includes some python code. Just my 2 cents :) On 8/1/07, Matthieu Brucher wrote: > Hi, > > I have an implementation, but it depends on my own matrix library... That's > a stopper... But it works. > I do not know the other templated matrix libraries very well, but I'd say I > do not need much to make it work with another library. 
> Matthieu > 2007/8/1, Alan G Isaac : > > On Tue, 17 Apr 2007, Matthieu Brucher apparently wrote: > > > I wanted to know if there was a module in scipy that is able to find the > > > k-neighbors of a point ? > > > If so, is there an optimized one - tree-based search - ? > > > If not, I'm doing the optimized version. > > > > Hi Matthieu, > > > > Where did you go with this? > > > > Thanks! > > Alan > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose - which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From matthieu.brucher at gmail.com Wed Aug 1 03:08:47 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 1 Aug 2007 09:08:47 +0200 Subject: [SciPy-user] Finding Neighboors In-Reply-To: References: Message-ID: 2007/8/1, Emanuele Zattin : > > Hi, you might want to take a look at kd-trees. No implementation in > scipy, but it should not be too hard to achieve. As far as i can > remember its definition in wikipedia includes some python code. Just > my 2 cents :) Thank you for the link ;) As far as I can see, I did not implement a real kd-tree: I always split each dimension into two equal parts. It is true that I can make a better partition without much recoding (instead of splitting at the middle, I can split at the median of the points, which gives a balanced kd-tree if the points are uniformly distributed).
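For readers following along, a median-split kd-tree of the kind described can be sketched in pure Python. This is an illustrative toy (static data, single-nearest-neighbour queries only), not the C++ implementation discussed in this thread:

```python
import numpy as np

def build_kdtree(points, depth=0):
    # Split at the median along a cycling dimension, giving a balanced tree.
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis])]
    mid = len(points) // 2
    return {
        'point': points[mid],
        'axis': axis,
        'left': build_kdtree(points[:mid], depth + 1),
        'right': build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    # Depth-first search with pruning against the splitting hyperplane.
    if node is None:
        return best
    d = np.linalg.norm(node['point'] - query)
    if best is None or d < best[0]:
        best = (d, node['point'])
    diff = query[node['axis']] - node['point'][node['axis']]
    near, far = (node['left'], node['right']) if diff < 0 else (node['right'], node['left'])
    best = nearest(near, query, best)
    if abs(diff) < best[0]:  # the far side could still hold a closer point
        best = nearest(far, query, best)
    return best

# usage on random data
rng = np.random.RandomState(0)
pts = rng.rand(200, 3)
tree = build_kdtree(pts)
q = rng.rand(3)
dist, point = nearest(tree, q)
```

A query descends the near side of each split first and only visits the far side when the splitting plane is closer than the best distance found so far.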
I will not implement a full kd-tree with adding or removing points, I do not have the time :( What is more, I only implemented it for K-neighboors or Parzen Windows, and for this, I don't need to make the tree appear in Python, everything is in C++ (I had an implementation in Python, but it was too slow because of the computation of the nearest zone to analyze, it is far better with a std::map). Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelleferinga at gmail.com Wed Aug 1 03:30:42 2007 From: jelleferinga at gmail.com (jelle) Date: Wed, 1 Aug 2007 07:30:42 +0000 (UTC) Subject: [SciPy-user] Finding Neighboors References: Message-ID: Note that BioPython has a decent KdTree implementation in C. Perhaps that might work for you. -jelle From openopt at ukr.net Wed Aug 1 04:34:48 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 01 Aug 2007 11:34:48 +0300 Subject: [SciPy-user] restricting optimize.leastsq to positive results In-Reply-To: References: Message-ID: <46B045A8.9020409@ukr.net> Christoph Rademacher wrote: > Hi all, > > I am using optimize.leastsq to fit my experimental data. Is there a > way to restrict the results to be a positive float? > e.g. my target function is a simple linear combination of three > variables to fit > f(x) : return a*A(x) + b*B(x) > > From my experiment I know that a,b,A,B can only be positive floats. > What are those three variables mentioned? I can't understood are A(x), B(x) some funcs from x or something else? if you mean you have constraints a>0, b>0, A(x)>0, B(x)>0 so you have optimization problem with non-linear constraints. You should either use penalties, as it was mentioned in other letter, or directly provide these constraints to optimization solver, like optimize.cobyla, or cvxopt (GPL), or openopt (BSD) constrained NLP solver lincher (this one is still very primitive for now and requires cvxopt installed because of qp solver). 
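As a hedged sketch of the direct-constraint route with a scipy solver, here is the toy fit done with scipy.optimize.fmin_cobyla, where each constraint function must come out >= 0 at the solution. The data and basis vectors are invented for the example:

```python
import numpy as np
from scipy.optimize import fmin_cobyla

# Toy data generated from y = a*A(x) + b*B(x) with known positive a, b.
x = np.linspace(0.0, 1.0, 40)
A, B = np.sqrt(x + 1.0), x          # illustrative basis vectors
y = 2.0 * A + 0.5 * B

def sumsq(p):
    # objective: sum of squared residuals of the linear model
    return np.sum((p[0] * A + p[1] * B - y) ** 2)

# COBYLA treats each constraint function as "must be >= 0".
constraints = [lambda p: p[0], lambda p: p[1]]

p_fit = fmin_cobyla(sumsq, [1.0, 1.0], cons=constraints, rhoend=1e-8)
```

Unlike the penalty trick, the solver itself keeps the iterates (approximately) feasible, at the cost of a derivative-free method that can be slower than leastsq on smooth problems.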
If you will decide to use penalties, openopt ralg solver would be a good choice (no extern dependences), it is capable of handling very large penalties; also, you can use gradient/subgradient info and not only least squares but least abs values as well (ralg is capable of handling non-smooth problems). So way 1 is something like sum_j(a*A(xj) + b*B(xj) -Cj)^2-> min subjected to a>=0 A(x)>=0 b>=0 B(x)>=0 way 2 is sum_j(a*A(xj) + b*B(xj) -Cj)^2 + N1*a + N2*b+ N3*A(x)+N4*B(x) -> min Also, you can replace some or all of a, b, A(x), B(x) to either abs(var) or var^2 for example sum_j(a1^2 * A(xj) + b1^2 * B(xj) -Cj)^2-> min subj.to A(x)>=0 B(x)>=0 and then after solving a = a1^2 b = b1^2 if you will use ralg, I guess abs would yield more good result than ^2 another one approach would be a1 = exp(a), this func is also always positive, but it will not work well if your solution is close to zero. > How do I put this extra information in optimize.leastsq? > It's impossible, optimize.leastsq is for unconstrained problems only > Thanks for your help, > > Christoph > HTH, D. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From stefan at sun.ac.za Wed Aug 1 05:12:56 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 1 Aug 2007 11:12:56 +0200 Subject: [SciPy-user] Question about scipy.test() In-Reply-To: <827183970707311547w3a57c809i81a2fac4768cea92@mail.gmail.com> References: <827183970707311425x15810362jacd3907a3a85bca4@mail.gmail.com> <20070731221535.GY7447@mentat.za.net> <827183970707311547w3a57c809i81a2fac4768cea92@mail.gmail.com> Message-ID: <20070801091255.GZ7447@mentat.za.net> On Tue, Jul 31, 2007 at 06:47:18PM -0400, william ratcliff wrote: > I am using the latest version of the scipy source from SVN. I am using the > mingw from the enthought sumo distribution of python (2.4.3), just copied over > to the python25 directory. 
I'm not sure of the version-- > > It is version 4.0.3 of gcc according to the folder in > C:\Python25\MingW\bin\lib\gcc-lib\i686-pc-mingw32 > > The only addition I made to the library was a recent download of g95. Has this > bug come up before? Yes, see http://projects.scipy.org/scipy/scipy/ticket/404 I'd be glad if you could help narrow down the problem using valgrind, as indicated in the ticket above. Thanks! St?fan From Alexander.Dietz at astro.cf.ac.uk Wed Aug 1 06:05:02 2007 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Wed, 1 Aug 2007 11:05:02 +0100 Subject: [SciPy-user] General question on scipy In-Reply-To: <20070731222301.GA23052@zunzun.com> References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com> <4684EEC3.4040000@ar.media.kyoto-u.ac.jp> <9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com> <9cf809a00706290524p5fbc2e40m754ef6dc2c1994cf@mail.gmail.com> <9cf809a00706290548u42ce009w32e6b279085553c2@mail.gmail.com> <9cf809a00707311335w726da9ebh836e7a0f3905a67d@mail.gmail.com> <20070731222301.GA23052@zunzun.com> Message-ID: <9cf809a00708010305k3c99ae88oa4d95ae5d87a9494@mail.gmail.com> Hi, On 7/31/07, zunzun at zunzun.com wrote: > > On Tue, Jul 31, 2007 at 09:35:11PM +0100, Alexander Dietz wrote: > > I got an error this time: > > > > error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 > > -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer > > -malign-double -c -c Lib/fftpack/dfftpack/zfftb1.f -o build/temp.linux- > > i686-2.4/Lib/fftpack/dfftpack/zfftb1.o" failed with exit status 1 > > What happens if you type '/usr/bin/g77' on a command line? I get the following output: f77: no input files (so this seems to work...) There were other suggestions on how to install scipy and numpy through rpm's, but when I tried it it screwed up all my installtion (e.g. some pieces of code did not work anymore). 
I removered everything, re-installed numpy 1.0.1 and matplotlib 0.90.1 successfully, and now trying to install scipy 0.5.2, but it fails with the above error. Below I attach for completeness the whole screen-dump when trying to install scipy. Thanks Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: install.log Type: text/x-log Size: 13529 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Wed Aug 1 06:02:07 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 01 Aug 2007 19:02:07 +0900 Subject: [SciPy-user] General question on scipy In-Reply-To: <9cf809a00708010305k3c99ae88oa4d95ae5d87a9494@mail.gmail.com> References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com> <4684EEC3.4040000@ar.media.kyoto-u.ac.jp> <9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com> <9cf809a00706290524p5fbc2e40m754ef6dc2c1994cf@mail.gmail.com> <9cf809a00706290548u42ce009w32e6b279085553c2@mail.gmail.com> <9cf809a00707311335w726da9ebh836e7a0f3905a67d@mail.gmail.com> <20070731222301.GA23052@zunzun.com> <9cf809a00708010305k3c99ae88oa4d95ae5d87a9494@mail.gmail.com> Message-ID: <46B05A1F.3060302@ar.media.kyoto-u.ac.jp> Alexander Dietz wrote: > Hi, > > On 7/31/07, *zunzun at zunzun.com * > > wrote: > > On Tue, Jul 31, 2007 at 09:35:11PM +0100, Alexander Dietz wrote: > > I got an error this time: > > > > error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore > -fPIC -O2 > > -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer > > -malign-double -c -c Lib/fftpack/dfftpack/zfftb1.f -o > build/temp.linux- > > i686-2.4/Lib/fftpack/dfftpack/zfftb1.o" failed with exit status 1 > > What happens if you type '/usr/bin/g77' on a command line? > > > > I get the following output: > > f77: no input files > > (so this seems to work...) 
> > There were other suggestions on how to install scipy and numpy through > rpm's, but when I tried it it screwed up all my installtion ( e.g. > some pieces of code did not work anymore). What was screwed up when you tried to install rpms ? What did not work anymore ? David From openopt at ukr.net Wed Aug 1 07:51:14 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 01 Aug 2007 14:51:14 +0300 Subject: [SciPy-user] removing "apply" from minpack.py Message-ID: <46B073B2.6020103@ukr.net> hi all, I removed "apply" from minpack.py according to Nils Wagner ticket. Previously "apply" was removed from optimize.py. scipy.test(1) doesn't show any errors, but I noticed there are no any tests for (for example) leastsq (I checked just 2 my tests related to leastsq). Maybe, some other funcs from minpack.py need apropriate tests. It would be fine if anyone related to leastsq or other funcs from minpack.py will check right now, before new scipy release, is there anything that works incorrectly. Regards, D. From bnuttall at uky.edu Wed Aug 1 08:46:08 2007 From: bnuttall at uky.edu (Brandon C. Nuttall) Date: Wed, 01 Aug 2007 08:46:08 -0400 Subject: [SciPy-user] Ant Colony Optimization Message-ID: <1185972368.efdfa04bnuttall@uky.edu> Folks, Is there an implementation of the Ant Colony Optimization routines available in Python? See http://www.aco-metaheuristic.org/ Thanks. Brandon Nuttall Brandon C. Nuttall bnuttall at uky.edu www.uky.edu/kgs 859-257-5500 ext 174 From bryanv at enthought.com Wed Aug 1 10:04:45 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Wed, 01 Aug 2007 09:04:45 -0500 Subject: [SciPy-user] import problem on OSX Message-ID: <46B092FD.6070505@enthought.com> We have some demos that do "from scipy import *" in order to let users enter arbitrary expressions to be evaluated on the fly. This fails on OSX. I just wanted to make sure it's nothing I have done before filing a ticket. 
It cannot import densetocsr: In [1]: import scipy In [2]: scipy.__version__ Out[2]: '0.5.3.dev3015' In [3]: form scipy import * ------------------------------------------------------------ File "", line 1 form scipy import * ^ : invalid syntax In [4]: from scipy import * --------------------------------------------------------------------------- Traceback (most recent call last) /Users/bryan/workspace/enthought_branches/enthought.chaco2_2.0/examples/ in () /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/__init__.py in () 3 from info import __doc__ 4 ----> 5 import umfpack 6 __doc__ = '\n\n'.join( (__doc__, umfpack.__doc__) ) 7 del umfpack /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/__init__.py in () 2 ----> 3 from umfpack import * 4 5 __all__ = filter(lambda s:not s.startswith('_'),dir()) 6 from numpy.testing import NumpyTest /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/umfpack.py in () 9 #from base import Struct, pause 10 import numpy as nm ---> 11 import scipy.sparse as sp 12 import re, imp 13 try: # Silence import error. 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/__init__.py in () 3 from info import __doc__ 4 ----> 5 from sparse import * 6 7 __all__ = filter(lambda s:not s.startswith('_'),dir()) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py in () 13 arange, shape, intc 14 import numpy ---> 15 from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \ 16 cscplcsc, cscelmulcsc, cscmux, csrmux, csrmucsr, csrtocoo, cootocsc, \ 17 cootocsr, cscmucsc, csctocoo, csctocsr, csrplcsr, csrelmulcsr : cannot import name densetocsr In [5]: From lou_boog2000 at yahoo.com Wed Aug 1 10:45:24 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 1 Aug 2007 07:45:24 -0700 (PDT) Subject: [SciPy-user] Finding Neighboors In-Reply-To: Message-ID: <904612.95953.qm@web34415.mail.mud.yahoo.com> If you are using a data set that is static, i.e. you are not adding or deleting data points, then you should check out nearest neighbor searches using slices along various directions. It's *much* easier to code than k-d trees and is about as efficient. The reference is, Nene and Nayer, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 19, NO. 9, SEPTEMBER 1997, pg. 989 If your data is changing a lot as you are doing searches, then as I understand it, k-d trees are better to use since you can delete and insert nodes easily. --- Emanuele Zattin wrote: > Hi, you might want to take a look at kd-trees. No > implementation in > scipy, but it should not be too hard to achieve. As > far as i can > remember its definition in wikipedia includes some > python code. Just > my 2 cents :) -- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds. -Albert Einstein ____________________________________________________________________________________ Park yourself in front of a world of choices in alternative vehicles. 
Visit the Yahoo! Auto Green Center. http://autos.yahoo.com/green_center/ From matthieu.brucher at gmail.com Wed Aug 1 10:50:19 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 1 Aug 2007 16:50:19 +0200 Subject: [SciPy-user] Finding Neighboors In-Reply-To: <904612.95953.qm@web34415.mail.mud.yahoo.com> References: <904612.95953.qm@web34415.mail.mud.yahoo.com> Message-ID: 2007/8/1, Lou Pecora : > > If you are using a data set that is static, i.e. you > are not adding or deleting data points, then you > should check out nearest neighbor searches using > slices along various directions. It's *much* easier > to code than k-d trees and is about as efficient. The > reference is, > > Nene and Nayer, IEEE TRANSACTIONS ON PATTERN ANALYSIS > AND MACHINE INTELLIGENCE, VOL. 19, NO. 9, SEPTEMBER > 1997, pg. 989 > > If your data is changing a lot as you are doing > searches, then as I understand it, k-d trees are > better to use since you can delete and insert nodes > easily. Isn't that just what kd-trees do ? If you take several slices around a point, you must know which points belong to each slice, thus making a tree anyway, no ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From lou_boog2000 at yahoo.com Wed Aug 1 11:08:59 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 1 Aug 2007 08:08:59 -0700 (PDT) Subject: [SciPy-user] Finding Neighboors Message-ID: <363482.82321.qm@web34402.mail.mud.yahoo.com> No, these are not the same. For multiple variable (multidimensional) data points the slice search uses presorting on each dimension's data points (hence the need for static data) to take slices around the "center Point" along each dimension thereby narrowing down the neighbors rather rapidly. There are no tree structures in slice searching. Setting it up is very simple compared to k-d trees. However, the presorting makes it inefficient for non-static data. 
Please see the original article for more information and comparisons to k-d tree searching including timing comparisions. ------------------------------------------- Matthieu wrote: 2007/8/1, Lou Pecora : If you are using a data set that is static, i.e. you are not adding or deleting data points, then you should check out nearest neighbor searches using slices along various directions. It's *much* easier to code than k-d trees and is about as efficient. The reference is, Nene and Nayer, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 19, NO. 9, SEPTEMBER 1997, pg. 989 If your data is changing a lot as you are doing searches, then as I understand it, k-d trees are better to use since you can delete and insert nodes easily. Isn't that just what kd-trees do ? If you take several slices around a point, you must know which points belong to each slice, thus making a tree anyway, no ? Matthieu -- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds. -Albert Einstein ____________________________________________________________________________________ Be a better Globetrotter. Get better travel answers from someone who knows. Yahoo! Answers - Check it out. http://answers.yahoo.com/dir/?link=list&sid=396545469 From stefan at sun.ac.za Wed Aug 1 11:24:46 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 1 Aug 2007 17:24:46 +0200 Subject: [SciPy-user] import problem on OSX In-Reply-To: <46B092FD.6070505@enthought.com> References: <46B092FD.6070505@enthought.com> Message-ID: <20070801152446.GA7447@mentat.za.net> Hi Bryan On Wed, Aug 01, 2007 at 09:04:45AM -0500, Bryan Van de Ven wrote: > We have some demos that do "from scipy import *" in order to let users > enter arbitrary expressions to be evaluated on the fly. This fails on > OSX. I just wanted to make sure it's nothing I have done before filing a > ticket. 
It cannot import densetocsr: > > In [1]: import scipy > > In [2]: scipy.__version__ > Out[2]: '0.5.3.dev3015' We are now at r3217. Would you please update to the latest SVN version and try again? Cheers St?fan From bryanv at enthought.com Wed Aug 1 11:43:30 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Wed, 01 Aug 2007 10:43:30 -0500 Subject: [SciPy-user] import problem on OSX In-Reply-To: <20070801152446.GA7447@mentat.za.net> References: <46B092FD.6070505@enthought.com> <20070801152446.GA7447@mentat.za.net> Message-ID: <46B0AA22.7080707@enthought.com> Different import, same module: In [1]: import scipy In [2]: scipy.__version__ Out[2]: '0.5.3.dev3217' In [3]: from scipy import * --------------------------------------------------------------------------- Traceback (most recent call last) /usr/local/scipy/ in () /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/__init__.py in () 3 from info import __doc__ 4 ----> 5 import umfpack 6 __doc__ = '\n\n'.join( (__doc__, umfpack.__doc__) ) 7 del umfpack /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/__init__.py in () 2 ----> 3 from umfpack import * 4 5 __all__ = filter(lambda s:not s.startswith('_'),dir()) 6 from numpy.testing import NumpyTest /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/umfpack.py in () 9 #from base import Struct, pause 10 import numpy as nm ---> 11 import scipy.sparse as sp 12 import re, imp 13 try: # Silence import error. 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/__init__.py in () 3 from info import __doc__ 4 ----> 5 from sparse import * 6 7 __all__ = filter(lambda s:not s.startswith('_'),dir()) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py in () 19 arange, shape, intc 20 import numpy ---> 21 from scipy.sparse.sparsetools import cscmux, csrmux, \ 22 cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \ 23 densetocsr, csrtodense, \ : cannot import name cscmux Stefan van der Walt wrote: > Hi Bryan > > On Wed, Aug 01, 2007 at 09:04:45AM -0500, Bryan Van de Ven wrote: >> We have some demos that do "from scipy import *" in order to let users >> enter arbitrary expressions to be evaluated on the fly. This fails on >> OSX. I just wanted to make sure it's nothing I have done before filing a >> ticket. It cannot import densetocsr: >> >> In [1]: import scipy >> >> In [2]: scipy.__version__ >> Out[2]: '0.5.3.dev3015' > > We are now at r3217. Would you please update to the latest SVN > version and try again? 
> > Cheers > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Wed Aug 1 11:50:03 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 01 Aug 2007 17:50:03 +0200 Subject: [SciPy-user] import problem on OSX In-Reply-To: <46B0AA22.7080707@enthought.com> References: <46B092FD.6070505@enthought.com> <20070801152446.GA7447@mentat.za.net> <46B0AA22.7080707@enthought.com> Message-ID: <46B0ABAB.5090602@iam.uni-stuttgart.de> Bryan Van de Ven wrote: > Different import, same module: > > In [1]: import scipy > > In [2]: scipy.__version__ > Out[2]: '0.5.3.dev3217' > > In [3]: from scipy import * > --------------------------------------------------------------------------- > Traceback (most recent call last) > > /usr/local/scipy/ in () > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/__init__.py > in () > 3 from info import __doc__ > 4 > ----> 5 import umfpack > 6 __doc__ = '\n\n'.join( (__doc__, umfpack.__doc__) ) > 7 del umfpack > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/__init__.py > in () > 2 > ----> 3 from umfpack import * > 4 > 5 __all__ = filter(lambda s:not s.startswith('_'),dir()) > 6 from numpy.testing import NumpyTest > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/umfpack.py > in () 9 #from base import Struct, pause > 10 import numpy as nm > ---> 11 import scipy.sparse as sp > 12 import re, imp > 13 try: # Silence import error.
> > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/__init__.py > in () > 3 from info import __doc__ > 4 > ----> 5 from sparse import * > 6 > 7 __all__ = filter(lambda s:not s.startswith('_'),dir()) > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py > in () > 19 arange, shape, intc > 20 import numpy > ---> 21 from scipy.sparse.sparsetools import cscmux, csrmux, \ > 22 cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \ > 23 densetocsr, csrtodense, \ > > : cannot import name cscmux > > > Stefan van der Walt wrote: > >> Hi Bryan >> >> On Wed, Aug 01, 2007 at 09:04:45AM -0500, Bryan Van de Ven wrote: >> >>> We have some demos that do "from scipy import *" in order to let users >>> enter arbitrary expressions to be evaluated on the fly. This fails on >>> OSX. I just wanted to make sure it's nothing I have done before filing a >>> ticket. It cannot import densetocsr: >>> >>> In [1]: import scipy >>> >>> In [2]: scipy.__version__ >>> Out[2]: '0.5.3.dev3015' >>> >> We are now at r3217. Would you please update to the latest SVN >> version and try again? >> >> Cheers >> Stéfan >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Python 2.5 (r25:51908, May 25 2007, 16:11:33) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__version__ '0.5.3.dev3217' >>> from scipy import * >>> I cannot reproduce your problem. It might be a good idea to remove scipy from site-packages and reinstall it.
Nils From bryanv at enthought.com Wed Aug 1 12:31:32 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Wed, 01 Aug 2007 11:31:32 -0500 Subject: [SciPy-user] import problem on OSX In-Reply-To: <46B0ABAB.5090602@iam.uni-stuttgart.de> References: <46B092FD.6070505@enthought.com> <20070801152446.GA7447@mentat.za.net> <46B0AA22.7080707@enthought.com> <46B0ABAB.5090602@iam.uni-stuttgart.de> Message-ID: <46B0B564.5090507@enthought.com> That fixed it, sorry for the noise. Bryan > I cannot reproduce your problem. > It might be a good idea to remove scipy from site-packages and reinstall it. > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fperez.net at gmail.com Wed Aug 1 14:13:38 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 1 Aug 2007 12:13:38 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? Message-ID: Hi all, I wonder if people would be interested in a little session about testing practices at scipy this year. I know Titus Brown will be there for the tutorials, and he's one of the gurus in this topic; I don't know if he'll be around for the rest of the week. Testing is something that drives me nuts, because I have yet to find a workflow that I'm really happy with, so that the friction between interactively experimenting with a problem/development and turning that work into permanent, solid tests for the life of a code, is minimized. I would thus like to hear from others: experiences, tricks, tools, workflow approaches, etc... Any takers? If not a BOF, an informal testing lunch/dinner would also be OK with me, even on Tuesday or Wednesday...
Cheers, f From scyang at nist.gov Wed Aug 1 14:49:58 2007 From: scyang at nist.gov (Stephen Yang) Date: Wed, 01 Aug 2007 14:49:58 -0400 Subject: [SciPy-user] help with list comprehensions In-Reply-To: References: Message-ID: <46B0D5D6.70009@nist.gov> Hello everyone, I have a question about list comprehensions. I would like to append data to an existing list using a list comprehension. Unfortunately, what I have tried does not seem to work: >>> y = [5, 1, 3, 5] >>> x = ['a', 'b'] >>> new = [x.append(data) for data in y] >>> new [None, None, None, None] Can anyone help? Thanks very much in advance. Stephen From rmay at ou.edu Wed Aug 1 14:50:57 2007 From: rmay at ou.edu (Ryan May) Date: Wed, 01 Aug 2007 13:50:57 -0500 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: Message-ID: <46B0D611.3010500@ou.edu> Fernando Perez wrote: > Hi all, > > I wonder if people would be interested in a little session about > testing practices at scipy this year. I know Titus Brown will be > there for the tutorials, and he's one of the gurus in this topic, I > don't know if he'll be around for the rest of the week. > > Testing is something that drives me nuts, because I have yet to find a > workflow that I'm really happy with, so that the friction between > interactively experimenting with a problem/development and turning > that work into permanent, solid tests for the life of a code, is > minimized. I would thus like to hear from others: experiences, > tricks, tools, workflow approaches, etc... > > Any takers? If not a BOF, an informal testing lunch/dinner would also > be OK with me, even on Tuesday or Wednesday... Any of the above sounds great to me. I know personally I need to write more tests, but I often have no idea even where to start. 
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From matthieu.brucher at gmail.com Wed Aug 1 14:53:21 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 1 Aug 2007 20:53:21 +0200 Subject: [SciPy-user] help with list comprehensions In-Reply-To: <46B0D5D6.70009@nist.gov> References: <46B0D5D6.70009@nist.gov> Message-ID: hi, a list comprehension creates a new list from the return value of its expression. Here, x.append(data) returns None, so you have a list of None. Simpler: >>> x + y Matthieu 2007/8/1, Stephen Yang : > > Hello everyone, > > I have a question about list comprehensions. I would like to append data > to an existing list using a list comprehension. Unfortunately, what I > have tried does not seem to work: > > >>> y = [5, 1, 3, 5] > >>> x = ['a', 'b'] > >>> new = [x.append(data) for data in y] > >>> new > [None, None, None, None] > > Can anyone help? Thanks very much in advance. > > Stephen > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drnlmuller+scipy at gmail.com Wed Aug 1 14:58:05 2007 From: drnlmuller+scipy at gmail.com (Neil Muller) Date: Wed, 1 Aug 2007 20:58:05 +0200 Subject: [SciPy-user] help with list comprehensions In-Reply-To: <46B0D5D6.70009@nist.gov> References: <46B0D5D6.70009@nist.gov> Message-ID: On 8/1/07, Stephen Yang wrote: > Hello everyone, > > I have a question about list comprehensions. I would like to append data > to an existing list using a list comprehension. Unfortunately, what I > have tried does not seem to work: > > >>> y = [5, 1, 3, 5] > >>> x = ['a', 'b'] > >>> new = [x.append(data) for data in y] > >>> new > [None, None, None, None] Since x.append(data) returns None.
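The behaviour can be checked directly with the lists from the question (a quick sketch in modern Python 3; `extend` and `+` are the idiomatic alternatives):

```python
# list.append mutates the list in place and returns None, which is why
# the comprehension collects a list of None values.
y = [5, 1, 3, 5]
x = ['a', 'b']

new = [x.append(data) for data in y]  # each call returns None
print(new)  # [None, None, None, None]
print(x)    # ['a', 'b', 5, 1, 3, 5] -- x was mutated as a side effect

# The idiomatic ways to concatenate:
x2 = ['a', 'b']
x2.extend(y)          # extends x2 in place
x3 = ['a', 'b'] + y   # builds a new list
print(x2 == x3)       # True
```

Note that `x.extend(y)` also returns None, so it should be called as a statement rather than used inside a comprehension.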
And x at this point will be ['a', 'b', 5, 1, 3, 5] I'm not sure what you're trying to achieve here, and why x.extend(y) or new=x+y isn't sufficient, so perhaps if you explained your problem in more depth? -- Neil Muller drnlmuller at gmail.com I've got a gmail account. Why haven't I become cool? From rmay at ou.edu Wed Aug 1 14:58:42 2007 From: rmay at ou.edu (Ryan May) Date: Wed, 01 Aug 2007 13:58:42 -0500 Subject: [SciPy-user] help with list comprehensions In-Reply-To: <46B0D5D6.70009@nist.gov> References: <46B0D5D6.70009@nist.gov> Message-ID: <46B0D7E2.2010608@ou.edu> Stephen Yang wrote: > Hello everyone, > > I have a question about list comprehensions. I would like to append data > to an existing list using a list comprehension. Unfortunately, what I > have tried does not seem to work: > > >>> y = [5, 1, 3, 5] > >>> x = ['a', 'b'] > >>> new = [x.append(data) for data in y] > >>> new > [None, None, None, None] > I'm not sure if this is just due to your example, but what you really want to do is: >>> x.extend(y) Which will get you the original list x with all of the entries of y appended. FYI, if you look at x after running your original code, it's correct. The problem is that x.append() modifies x in place, and therefore doesn't return a value, so None gets put into the list "new" for each item in y. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From robert.kern at gmail.com Wed Aug 1 15:01:35 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 01 Aug 2007 14:01:35 -0500 Subject: [SciPy-user] help with list comprehensions In-Reply-To: <46B0D5D6.70009@nist.gov> References: <46B0D5D6.70009@nist.gov> Message-ID: <46B0D88F.5000307@gmail.com> Stephen Yang wrote: > Hello everyone, > > I have a question about list comprehensions. I would like to append data > to an existing list using a list comprehension. 
Unfortunately, what I > have tried does not seem to work: > > >>> y = [5, 1, 3, 5] > >>> x = ['a', 'b'] > >>> new = [x.append(data) for data in y] > >>> new > [None, None, None, None] > > Can anyone help? Thanks very much in advance. For the example you give, you shouldn't use a list comprehension at all: x.extend(y) If you do need a list comprehension for something more complicated (say, because you were calling a function on each element): x.extend([f(data) for data in y]) The reason that your code didn't do what you thought it did stems from two causes: 1) list.append() returns None. 2) You were looking at `new` which was simply the result of the list comprehension and not `x`, the list you were trying to extend. If you had looked at `x`, you would have seen that the data were correctly appended. However, you shouldn't do it that way anyways. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Karl.Young at ucsf.edu Wed Aug 1 13:46:40 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 01 Aug 2007 10:46:40 -0700 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: Message-ID: <46B0C700.6010000@ucsf.edu> Sounds great to me; I need to learn a lot more about developing good testing practices (given that it doesn't conflict with the planned 3D visualization BOF on Thursday evening as I've already agreed to be counted for that). >Hi all, > >I wonder if people would be interested in a little session about >testing practices at scipy this year. I know Titus Brown will be >there for the tutorials, and he's one of the gurus in this topic, I >don't know if he'll be around for the rest of the week. 
> >Testing is something that drives me nuts, because I have yet to find a >workflow that I'm really happy with, so that the friction between >interactively experimenting with a problem/development and turning >that work into permanent, solid tests for the life of a code, is >minimized. I would thus like to hear from others: experiences, >tricks, tools, workflow approaches, etc... > >Any takers? If not a BOF, an informal testing lunch/dinner would also >be OK with me, even on Tuesday or Wednesday... > > >Cheers, > >f >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From ryanlists at gmail.com Wed Aug 1 15:37:12 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 1 Aug 2007 14:37:12 -0500 Subject: [SciPy-user] General Python Question: nested function calls Message-ID: I often write code something like this: def TruncatePickleFile(pathin, trigname='a', trigtype='sw', savepath=None, threshlevel=None, duration=0.05, backup=0.01): """Truncate the pickleddatafile whose path is pathin.
If trigtype=='lg', use a DropDownLightGateTrigger, else use a standard Trigger.""" untrunc = pickleddatafiles.PickledDataFile(pathin) mytrunc = pickleddatafiles.TruncateDataObj(untrunc, threshlevel=threshlevel, backup=backup, duration=duration) mytrunc.chname = trigname if trigtype == 'lg': mytrunc.SetupTrigger(DataProcMixins.DropDownLightGateTrigger) else: mytrunc.SetupTrigger() mytrunc.SetupTruncChannel() mytrunc.Truncate() pathout = mytrunc.Pickle(savepath) return pathout def TruncatePickleFiles(listin, trigname='a', trigtype='sw', threshlevel=None, duration=0.05, backup=0.01): listout = [] for item in listin: print item curpath = TruncatePickleFile(item, trigname=trigname, trigtype=trigtype, threshlevel=threshlevel, duration=duration, backup=backup) listout.append(curpath) return listout where TruncatePickleFiles is sort of just a vectorization of TruncatePickleFile, but with some of the keyword args set. My problem is not that this might not be the fastest way to execute the code, but that I get tired of doing this kind of stuff: curpath = TruncatePickleFile(item, trigname=trigname, trigtype=trigtype, threshlevel=threshlevel, duration=duration, backup=backup) but I also don't want to just do def TruncatePickleFiles(listin, **kwargs): because I like to see the defaults and know what keyword arguments are legal. Does anyone else have this problem or have an elegant solution to it? The problem comes up for me also when I want a derived class to call a parent class's method in some partially overwritten method of the derived class. Ideally, I think I would like to pass **kwargs to the nested function, but without using **kwargs in the definition of the top function, if that makes any sense. 
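One common workaround for the keyword-forwarding problem described above is to fold both functions into one that accepts either a single path or a list of paths, so the default values are declared only once. A hypothetical sketch (the scalar branch below is a stand-in for the real truncation logic, not the actual TruncatePickleFile body):

```python
def truncate_pickle_file(pathin, trigname='a', trigtype='sw',
                         threshlevel=None, duration=0.05, backup=0.01):
    """Accept either a single path or a list/tuple of paths.

    The defaults appear in exactly one signature; the list case simply
    recurses into the scalar case for each item.
    """
    if isinstance(pathin, (list, tuple)):
        # Vectorized case: forward the keywords once, here.
        return [truncate_pickle_file(p, trigname=trigname, trigtype=trigtype,
                                     threshlevel=threshlevel,
                                     duration=duration, backup=backup)
                for p in pathin]
    # Scalar case: placeholder for the real per-file work.
    return (pathin, trigname, trigtype)

print(truncate_pickle_file('a.pkl'))
print(truncate_pickle_file(['a.pkl', 'b.pkl'], trigname='b'))
```

The keyword list still has to be written out once inside the list branch, but only in a single place, and callers see the full signature with its defaults.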
Thanks for any suggestions, Ryan From robert.kern at gmail.com Wed Aug 1 15:44:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 01 Aug 2007 14:44:46 -0500 Subject: [SciPy-user] General Python Question: nested function calls In-Reply-To: References: Message-ID: <46B0E2AE.2080905@gmail.com> Ryan Krauss wrote: > I often write code something like this: > > def TruncatePickleFile(pathin, trigname='a', trigtype='sw', > savepath=None, threshlevel=None, duration=0.05, backup=0.01): > """Trunate the pickleddatafile whose path is pathin. If > trigtype=='lg', use a DropDownLightGateTrigger, else use a > standard Trigger.""" > untrunc = pickleddatafiles.PickledDataFile(pathin) > mytrunc = pickleddatafiles.TruncateDataObj(untrunc, > threshlevel=threshlevel, backup=backup, duration=duration) > mytrunc.chname = trigname > if trigtype == 'lg': > mytrunc.SetupTrigger(DataProcMixins.DropDownLightGateTrigger) > else: > mytrunc.SetupTrigger() > mytrunc.SetupTruncChannel() > mytrunc.Truncate() > pathout = mytrunc.Pickle(savepath) > return pathout > > def TruncatePickleFiles(listin, trigname='a', trigtype='sw', > threshlevel=None, duration=0.05, backup=0.01): > listout = [] > for item in listin: > print item > curpath = TruncatePickleFile(item, trigname=trigname, > trigtype=trigtype, threshlevel=threshlevel, duration=duration, > backup=backup) > listout.append(curpath) > return listout > > where TruncatePickleFiles is sort of just a vectorization of > TruncatePickleFile, but with some of the keyword args set. My problem > is not that this might not be the fastest way to execute the code, but > that I get tired of doing this kind of stuff: > > curpath = TruncatePickleFile(item, trigname=trigname, > trigtype=trigtype, threshlevel=threshlevel, duration=duration, > backup=backup) > > but I also don't want to just do > > def TruncatePickleFiles(listin, **kwargs): > > because I like to see the defaults and know what keyword arguments are legal. 
> > Does anyone else have this problem or have an elegant solution to it? > The problem comes up for me also when I want a derived class to call a > parent class's method in some partially overwritten method of the > derived class. > > Ideally, I think I would like to pass **kwargs to the nested function, > but without using **kwargs in the definition of the top function, if > that makes any sense. There isn't really a straightforward solution. However, for something like this, you may want to consider just implementing one version of the function which can take either a list of filenames or a single filename. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Wed Aug 1 15:58:34 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 1 Aug 2007 15:58:34 -0400 Subject: [SciPy-user] Ant Colony Optimization In-Reply-To: <1185972368.efdfa04bnuttall@uky.edu> References: <1185972368.efdfa04bnuttall@uky.edu> Message-ID: On Wed, 01 Aug 2007, "Brandon C. Nuttall" apparently wrote: > Is there an implementation of the Ant Colony Optimization > routines available in Python? See > http://www.aco-metaheuristic.org/ You could wrap the C code at: http://www.aco-metaheuristic.org/aco-code/public-software.html fwiw, Alan Isaac From ryanlists at gmail.com Wed Aug 1 15:56:59 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 1 Aug 2007 14:56:59 -0500 Subject: [SciPy-user] General Python Question: nested function calls In-Reply-To: <46B0E2AE.2080905@gmail.com> References: <46B0E2AE.2080905@gmail.com> Message-ID: That would have been a better solution in this case - I have done that before and don't know why I didn't here. Any thoughts on the same situation with calling a parent class's method. I just wrote code that did this: def __init__(self, pathin=None, dialect=spreadsheet.tabdelim): .... 
spreadsheet.SpreadSheet.__init__(self, pathin=pathin, skiprows=0, collabels=collabels, colmap=colmap, datafunc=float, picklekeys=['t','lg','a','v0']) Basically, I do a few other things and then call the parent's __init__ somewhere in the middle. On 8/1/07, Robert Kern wrote: > Ryan Krauss wrote: > > I often write code something like this: > > > > def TruncatePickleFile(pathin, trigname='a', trigtype='sw', > > savepath=None, threshlevel=None, duration=0.05, backup=0.01): > > """Trunate the pickleddatafile whose path is pathin. If > > trigtype=='lg', use a DropDownLightGateTrigger, else use a > > standard Trigger.""" > > untrunc = pickleddatafiles.PickledDataFile(pathin) > > mytrunc = pickleddatafiles.TruncateDataObj(untrunc, > > threshlevel=threshlevel, backup=backup, duration=duration) > > mytrunc.chname = trigname > > if trigtype == 'lg': > > mytrunc.SetupTrigger(DataProcMixins.DropDownLightGateTrigger) > > else: > > mytrunc.SetupTrigger() > > mytrunc.SetupTruncChannel() > > mytrunc.Truncate() > > pathout = mytrunc.Pickle(savepath) > > return pathout > > > > def TruncatePickleFiles(listin, trigname='a', trigtype='sw', > > threshlevel=None, duration=0.05, backup=0.01): > > listout = [] > > for item in listin: > > print item > > curpath = TruncatePickleFile(item, trigname=trigname, > > trigtype=trigtype, threshlevel=threshlevel, duration=duration, > > backup=backup) > > listout.append(curpath) > > return listout > > > > where TruncatePickleFiles is sort of just a vectorization of > > TruncatePickleFile, but with some of the keyword args set. 
My problem > > is not that this might not be the fastest way to execute the code, but > > that I get tired of doing this kind of stuff: > > > > curpath = TruncatePickleFile(item, trigname=trigname, > > trigtype=trigtype, threshlevel=threshlevel, duration=duration, > > backup=backup) > > > > but I also don't want to just do > > > > def TruncatePickleFiles(listin, **kwargs): > > > > because I like to see the defaults and know what keyword arguments are legal. > > > > Does anyone else have this problem or have an elegant solution to it? > > The problem comes up for me also when I want a derived class to call a > > parent class's method in some partially overwritten method of the > > derived class. > > > > Ideally, I think I would like to pass **kwargs to the nested function, > > but without using **kwargs in the definition of the top function, if > > that makes any sense. > > There isn't really a straightforward solution. > > However, for something like this, you may want to consider just implementing one > version of the function which can take either a list of filenames or a single > filename. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Wed Aug 1 16:22:32 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 1 Aug 2007 22:22:32 +0200 Subject: [SciPy-user] General Python Question: nested function calls In-Reply-To: References: <46B0E2AE.2080905@gmail.com> Message-ID: <20070801202232.GB29@clipper.ens.fr> On Wed, Aug 01, 2007 at 02:56:59PM -0500, Ryan Krauss wrote: > That would have been a better solution in this case - I have done that > before and don't know why I didn't here. 
> Any thoughts on the same situation with calling a parent class's > method. I just wrote code that did this: > def __init__(self, pathin=None, dialect=spreadsheet.tabdelim): > .... > spreadsheet.SpreadSheet.__init__(self, pathin=pathin, > skiprows=0, collabels=collabels, colmap=colmap, datafunc=float, > picklekeys=['t','lg','a','v0']) > Basically, I do a few other things and then call the parent's __init__ > somewhere in the middle. I had a similar problem recently and resolved it using complex machinery (my requirements were a bit more complex than just this). I used traits, but you could do this without traits (though it would be harder). The thread concerning this problem and its solution can be found at https://mail.enthought.com/pipermail/enthought-dev/2007-August/007820.html This is heavy machinery, but there are some ideas to steal. Gaël From barrywark at gmail.com Wed Aug 1 16:31:58 2007 From: barrywark at gmail.com (Barry Wark) Date: Wed, 1 Aug 2007 13:31:58 -0700 Subject: [SciPy-user] Finding Neighboors In-Reply-To: References: Message-ID: Hi all, Catching this conversation late. As jelle points out, BioPython has a k-d tree implementation, but it hasn't been converted to numpy yet (I think it still uses Numeric). I've written a SWIG (with numpy type maps) wrapper around the Approximate Nearest Neighbor library (http://www.cs.umd.edu/~mount/ANN/), which includes both exact and approximate k-d tree searches. Email me off list if you would like a copy of the swig definition files. If there's interest, I will send it on to the ANN writers for inclusion in the library itself. Barry On 7/31/07, Emanuele Zattin wrote: > Hi, you might want to take a look at kd-trees. No implementation in > scipy, but it should not be too hard to achieve. As far as i can > remember its definition in wikipedia includes some python code.
Just > my 2 cents :) > > On 8/1/07, Matthieu Brucher wrote: > > Hi, > > > > I have an implementation, but it depends on my own matrix library... That's > > a stopper... But it works. > > I do not know the other templated matrix libraries very well, but I'd say I > > do not need much to make it work with another library. > > > > Matthieu > > > > 2007/8/1, Alan G Isaac : > > > On Tue, 17 Apr 2007, Matthieu Brucher apparently wrote: > > > > I wanted to know if there was a module in scipy that is able to find the > > > > k-neighboors of a point ? > > > > If so, is there an optimized one - tree-based search - ? > > > > If not, I'm doing the optimized version. > > > > > > Hi Matthieu, > > > > > > Where did you go with this? > > > > > > Thanks! > > > Alan > > > > > > > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > Emanuele Zattin > --------------------------------------------------- > -I don't have to know an answer. I don't feel frightened by not > knowing things; by being lost in a mysterious universe without any > purpose -- which is the way it really is, as far as I can tell, > possibly. It doesn't frighten me.- Richard Feynman > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jelleferinga at gmail.com Wed Aug 1 17:06:22 2007 From: jelleferinga at gmail.com (jelle) Date: Wed, 1 Aug 2007 21:06:22 +0000 (UTC) Subject: [SciPy-user] Finding Neighboors References: Message-ID: Terrific! ANN seems like a very useful module Barry! Thanks for pointing it out!
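For readers who want to experiment before installing BioPython or the ANN wrapper, the k-d tree idea discussed earlier in the thread can be sketched in pure Python (a minimal exact nearest-neighbour query; real implementations add rebalancing, k-nearest queries, and approximate search):

```python
import math

def dist(a, b):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree, splitting on alternating axes at the median."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {'point': points[mid],
            'axis': axis,
            'left': build_kdtree(points[:mid], depth + 1),
            'right': build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    """Exact nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    point, axis = node['point'], node['axis']
    if best is None or dist(target, point) < dist(target, best):
        best = point
    diff = target[axis] - point[axis]
    near, far = ((node['left'], node['right']) if diff < 0
                 else (node['right'], node['left']))
    best = nearest(near, target, best)
    # Only descend the far branch if the splitting plane is closer
    # than the best distance found so far.
    if abs(diff) < dist(target, best):
        best = nearest(far, target, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
print(nearest(tree, (9, 2)))  # (8, 1)
```

The pruning step is what gives k-d trees their average-case speedup over brute force: whole subtrees are skipped whenever the splitting plane lies farther away than the current best candidate.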
Cheers, -jelle From fperez.net at gmail.com Wed Aug 1 17:14:25 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 1 Aug 2007 15:14:25 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <46B0C700.6010000@ucsf.edu> References: <46B0C700.6010000@ucsf.edu> Message-ID: On 8/1/07, Karl Young wrote: > > Sounds great to me; I need to learn a lot more about developing good > testing practices (given that it doesn't conflict with the planned 3D > visualization BOF on Thursday evening as I've already agreed to be > counted for that). No worries, I also said I'd go to the Thursday viz one as well. I wonder if we could plan on 2 bofs a night. Titus is already signed up as moderator for one on Thursday: http://scipy.org/SciPy2007/BoFs and I'd like to have him in for the testing one as well. If we could plan something like

        bof1      bof2
7-8     vis3d     bio
8-9     testing   astronomy

then we could have Titus available for both and those of us who said yes to the vis3d one could also go to testing. Else we put the other bofs on Wednesday night, at the risk of getting less people in. Friday is probably out, since a lot of people leave then... Opinions? f From mhearne at usgs.gov Wed Aug 1 17:15:22 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 1 Aug 2007 15:15:22 -0600 Subject: [SciPy-user] 2d interpolation Message-ID: <6F6365AF-B63F-4D6D-99BA-39D47179DA31@usgs.gov> All: I'm trying to use interp2d to replicate behavior in Matlab.
The Matlab script: x = reshape(1:16,4,4)'; xi = 1:0.5:4; yi = [1:0.5:4]'; z = interp2(x,xi,yi,'linear') which results in the matrix:

z =
     1.0000    1.5000    2.0000    2.5000    3.0000    3.5000    4.0000
     3.0000    3.5000    4.0000    4.5000    5.0000    5.5000    6.0000
     5.0000    5.5000    6.0000    6.5000    7.0000    7.5000    8.0000
     7.0000    7.5000    8.0000    8.5000    9.0000    9.5000   10.0000
     9.0000    9.5000   10.0000   10.5000   11.0000   11.5000   12.0000
    11.0000   11.5000   12.0000   12.5000   13.0000   13.5000   14.0000
    13.0000   13.5000   14.0000   14.5000   15.0000   15.5000   16.0000

I had thought the following Python/numpy script would be equivalent, but it is not: from scipy.interpolate import interpolate from numpy.random import randn from numpy import * data = arange(16) data = data+1 data = data.reshape(4,4) xrange = arange(4) yrange = arange(4) X,Y = meshgrid(xrange,yrange) outgrid = interpolate.interp2d(X,Y,data,kind='linear') xi = array([0,0.5,1,1.5,2,2.5,3]) yi = xi z = outgrid(xi,yi) This results in the matrix:

[[  1.           1.10731213   2.           2.89268787   3.           3.25045605   4.        ]
 [  3.           2.57118448   4.           5.42881552   5.           4.90975947   6.        ]
 [  5.           4.03505682   6.           7.96494318   7.           6.56906289   8.        ]
 [  7.           5.49892917   8.          10.50107083   9.           8.22836631  10.        ]
 [  9.           6.96280152  10.          13.03719848  11.           9.88766973  12.        ]
 [ 11.           8.42667386  12.          15.57332614  13.          11.54697315  14.        ]
 [ 13.           9.89054621  14.          18.10945379  15.          13.20627657  16.        ]]

(Incidentally, is there a way to pretty-print arrays in numpy? The above is kind of ugly and hard to read) Is this some kind of spline interpolation that I don't understand? Thanks, Mike Hearne ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Karl.Young at ucsf.edu Wed Aug 1 17:33:23 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 01 Aug 2007 14:33:23 -0700 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> Message-ID: <46B0FC23.40007@ucsf.edu> One vote here (though I'll have to send my superposition to bio/astro and Friday is indeed out for me). >On 8/1/07, Karl Young wrote: > > >>Sounds great to me; I need to learn a lot more about developing good >>testing practices (given that it doesn't conflict with the planned 3D >>visualization BOF on Thursday evening as I've already agreed to be >>counted for that). >> >> > >No worries, I also said I'd go to the Thursday viz one as well. > >I wonder if we could plan on 2 bofs a night. Titus is already signed >up as moderator for one on Thursday: > >http://scipy.org/SciPy2007/BoFs > >and I'd like to have him in for the testing one as well. If we could >plan something like > > bof1 bof2 > >7-8 vis3d bio >8-9 testing astronomy > >then we could have Titus available for both and those of us who said >yes to the vis3d one could also go to testing. > >Else we put the other bofs on Wednesday night, at the risk of getting >less people in. Friday is probably out, since a lot of people leave >then... > >Opinions? 
> >f >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From alex.liberzon at gmail.com Wed Aug 1 17:43:00 2007 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Thu, 2 Aug 2007 00:43:00 +0300 Subject: [SciPy-user] 2d interpolation (Michael Hearne) Message-ID: <775f17a80708011443m36af7fbue0c9a752a7899533@mail.gmail.com> It doesn't fit because xi and yi are 1:.5:4 in Matlab but 0:.5:3 in Python I tested it with outgrid(arange(1,4.1,.5),arange(1,4.1,.5)) and the result is the same as in Matlab. Best Alex On 8/2/07, scipy-user-request at scipy.org wrote: > Send SciPy-user mailing list submissions to > scipy-user at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://projects.scipy.org/mailman/listinfo/scipy-user > or, via email, send a message with subject or body 'help' to > scipy-user-request at scipy.org > > You can reach the person managing the list at > scipy-user-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-user digest..." > > > Today's Topics: > > 1. Re: Ant Colony Optimization (Alan G Isaac) > 2. Re: General Python Question: nested function calls (Ryan Krauss) > 3. Re: General Python Question: nested function calls > (Gael Varoquaux) > 4. Re: Finding Neighboors (Barry Wark) > 5. Re: Finding Neighboors (jelle) > 6. Re: Testing BOF at scipy'07? (Fernando Perez) > 7. 
2d interpolation (Michael Hearne) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 1 Aug 2007 15:58:34 -0400 > From: Alan G Isaac > Subject: Re: [SciPy-user] Ant Colony Optimization > To: scipy-user at scipy.org > Message-ID: > Content-Type: TEXT/PLAIN; CHARSET=UTF-8 > > On Wed, 01 Aug 2007, "Brandon C. Nuttall" apparently wrote: > > Is there an implementation of the Ant Colony Optimization > > routines available in Python? See > > http://www.aco-metaheuristic.org/ > > You could wrap the C code at: > http://www.aco-metaheuristic.org/aco-code/public-software.html > > fwiw, > Alan Isaac > > > > > ------------------------------ > > Message: 2 > Date: Wed, 1 Aug 2007 14:56:59 -0500 > From: "Ryan Krauss" > Subject: Re: [SciPy-user] General Python Question: nested function > calls > To: "SciPy Users List" > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > That would have been a better solution in this case - I have done that > before and don't know why I didn't here. > > Any thoughts on the same situation with calling a parent class's > method. I just wrote code that did this: > def __init__(self, pathin=None, dialect=spreadsheet.tabdelim): > .... > spreadsheet.SpreadSheet.__init__(self, pathin=pathin, > skiprows=0, collabels=collabels, colmap=colmap, datafunc=float, > picklekeys=['t','lg','a','v0']) > > Basically, I do a few other things and then call the parent's __init__ > somewhere in the middle. > > On 8/1/07, Robert Kern wrote: > > Ryan Krauss wrote: > > > I often write code something like this: > > > > > > def TruncatePickleFile(pathin, trigname='a', trigtype='sw', > > > savepath=None, threshlevel=None, duration=0.05, backup=0.01): > > > """Trunate the pickleddatafile whose path is pathin. 
If > > > trigtype=='lg', use a DropDownLightGateTrigger, else use a > > > standard Trigger.""" > > > untrunc = pickleddatafiles.PickledDataFile(pathin) > > > mytrunc = pickleddatafiles.TruncateDataObj(untrunc, > > > threshlevel=threshlevel, backup=backup, duration=duration) > > > mytrunc.chname = trigname > > > if trigtype == 'lg': > > > mytrunc.SetupTrigger(DataProcMixins.DropDownLightGateTrigger) > > > else: > > > mytrunc.SetupTrigger() > > > mytrunc.SetupTruncChannel() > > > mytrunc.Truncate() > > > pathout = mytrunc.Pickle(savepath) > > > return pathout > > > > > > def TruncatePickleFiles(listin, trigname='a', trigtype='sw', > > > threshlevel=None, duration=0.05, backup=0.01): > > > listout = [] > > > for item in listin: > > > print item > > > curpath = TruncatePickleFile(item, trigname=trigname, > > > trigtype=trigtype, threshlevel=threshlevel, duration=duration, > > > backup=backup) > > > listout.append(curpath) > > > return listout > > > > > > where TruncatePickleFiles is sort of just a vectorization of > > > TruncatePickleFile, but with some of the keyword args set. My problem > > > is not that this might not be the fastest way to execute the code, but > > > that I get tired of doing this kind of stuff: > > > > > > curpath = TruncatePickleFile(item, trigname=trigname, > > > trigtype=trigtype, threshlevel=threshlevel, duration=duration, > > > backup=backup) > > > > > > but I also don't want to just do > > > > > > def TruncatePickleFiles(listin, **kwargs): > > > > > > because I like to see the defaults and know what keyword arguments are legal. > > > > > > Does anyone else have this problem or have an elegant solution to it? > > > The problem comes up for me also when I want a derived class to call a > > > parent class's method in some partially overwritten method of the > > > derived class. 
> > > > > > Ideally, I think I would like to pass **kwargs to the nested function, > > > but without using **kwargs in the definition of the top function, if > > > that makes any sense. > > > > There isn't really a straightforward solution. > > > > However, for something like this, you may want to consider just implementing one > > version of the function which can take either a list of filenames or a single > > filename. > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an enigma, a harmless enigma > > that is made terrible by our own mad attempt to interpret it as though it had > > an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > ------------------------------ > > Message: 3 > Date: Wed, 1 Aug 2007 22:22:32 +0200 > From: Gael Varoquaux > Subject: Re: [SciPy-user] General Python Question: nested function > calls > To: SciPy Users List > Message-ID: <20070801202232.GB29 at clipper.ens.fr> > Content-Type: text/plain; charset=iso-8859-1 > > On Wed, Aug 01, 2007 at 02:56:59PM -0500, Ryan Krauss wrote: > > That would have been a better solution in this case - I have done that > > before and don't know why I didn't here. > > > Any thoughts on the same situation with calling a parent class's > > method. I just wrote code that did this: > > def __init__(self, pathin=None, dialect=spreadsheet.tabdelim): > > .... > > spreadsheet.SpreadSheet.__init__(self, pathin=pathin, > > skiprows=0, collabels=collabels, colmap=colmap, datafunc=float, > > picklekeys=['t','lg','a','v0']) > > > Basically, I do a few other things and then call the parent's __init__ > > somewhere in the middle. > > I had a similar problem recently and resolved it using a complex > machinery (my requirements where a bit more complex than only this). 
I > used traits, but you could do this without traits (though it would be > harder). > > The thread concerning this problem and its solution can be found at > https://mail.enthought.com/pipermail/enthought-dev/2007-August/007820.html > > This is heavy machinery, but there are some ideas to steal. > > Ga?l > > > ------------------------------ > > Message: 4 > Date: Wed, 1 Aug 2007 13:31:58 -0700 > From: "Barry Wark" > Subject: Re: [SciPy-user] Finding Neighboors > To: "SciPy Users List" > Message-ID: > > Content-Type: text/plain; charset=WINDOWS-1252 > > Hi all, > > Catching this conversation late. As jelle points out, BioPython has a > k-d tree implemetation, but it hasn't been converted to numpy yet (I > think it still uses Numeric). I've written a SWIG (with numpy type > maps) wrapper around the Approximate Nearest Neighbor library > (http://www.cs.umd.edu/~mount/ANN/), which includes both exact and > approximate k-d tree searches. Email me off list if you would like a > copy of the swig definition files. If there's interest, I will send it > on to the ANN writers for inclusion in the library itself. > > Barry > > On 7/31/07, Emanuele Zattin wrote: > > Hi, you might want to take a look at kd-trees. No implementation in > > scipy, but it should not be too hard to achieve. As far as i can > > remember its definition in wikipedia includes some python code. Just > > my 2 cents :) > > > > On 8/1/07, Matthieu Brucher wrote: > > > Hi, > > > > > > I have an implementation, but it depends on my own matrix library... That's > > > a stopper... But it works. > > > I do not know the other templated matrix libraries very well, but I'd say I > > > do not need much to make it work with another library. > > > > > > Matthieu > > > > > > 2007/8/1, Alan G Isaac : > > > > On Tue, 17 Apr 2007, Matthieu Brucher apparently wrote: > > > > > I wanted to know if there was a module in scipy that is able to find the > > > > > k-neighboors of a point ? 
> > > > > If so, is there an optimized one - tree-based search - ? > > > > > If not, I'm doing the optimized version. > > > > > > > > Hi Matthieu, > > > > > > > > Where did you go with this? > > > > > > > > Thanks! > > > > Alan > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > -- > > Emanuele Zattin > > --------------------------------------------------- > > -I don't have to know an answer. I don't feel frightened by not > > knowing things; by being lost in a mysterious universe without any > > purpose ? which is the way it really is, as far as I can tell, > > possibly. It doesn't frighten me.- Richard Feynman > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > ------------------------------ > > Message: 5 > Date: Wed, 1 Aug 2007 21:06:22 +0000 (UTC) > From: jelle > Subject: Re: [SciPy-user] Finding Neighboors > To: scipy-user at scipy.org > Message-ID: > Content-Type: text/plain; charset=us-ascii > > Terrific! > ANN seems like a very useful module Barry! > Thanks for pointing out! > > Cheers, > > -jelle > > > > ------------------------------ > > Message: 6 > Date: Wed, 1 Aug 2007 15:14:25 -0600 > From: "Fernando Perez" > Subject: Re: [SciPy-user] Testing BOF at scipy'07? 
> To: "SciPy Users List" > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > On 8/1/07, Karl Young wrote: > > > > Sounds great to me; I need to learn a lot more about developing good > > testing practices (given that it doesn't conflict with the planned 3D > > visualization BOF on Thursday evening as I've already agreed to be > > counted for that). > > No worries, I also said I'd go to the Thursday viz one as well. > > I wonder if we could plan on 2 bofs a night. Titus is already signed > up as moderator for one on Thursday: > > http://scipy.org/SciPy2007/BoFs > > and I'd like to have him in for the testing one as well. If we could > plan something like > > bof1 bof2 > > 7-8 vis3d bio > 8-9 testing astronomy > > then we could have Titus available for both and those of us who said > yes to the vis3d one could also go to testing. > > Else we put the other bofs on Wednesday night, at the risk of getting > less people in. Friday is probably out, since a lot of people leave > then... > > Opinions? > > f > > > ------------------------------ > > Message: 7 > Date: Wed, 1 Aug 2007 15:15:22 -0600 > From: Michael Hearne > Subject: [SciPy-user] 2d interpolation > To: scipy-user at scipy.org > Message-ID: <6F6365AF-B63F-4D6D-99BA-39D47179DA31 at usgs.gov> > Content-Type: text/plain; charset="us-ascii" > > All: I'm trying to use interp2d to replicate behavior in Matlab. 
> > The Matlab script: > x = reshape(1:16,4,4)'; > xi = 1:0.5:4; > yi = [1:0.5:4]'; > z = interp2(x,xi,yi,'linear') > > which results in the matrix: > z = > > 1.0000 1.5000 2.0000 2.5000 3.0000 3.5000 4.0000 > 3.0000 3.5000 4.0000 4.5000 5.0000 5.5000 6.0000 > 5.0000 5.5000 6.0000 6.5000 7.0000 7.5000 8.0000 > 7.0000 7.5000 8.0000 8.5000 9.0000 9.5000 10.0000 > 9.0000 9.5000 10.0000 10.5000 11.0000 11.5000 12.0000 > 11.0000 11.5000 12.0000 12.5000 13.0000 13.5000 14.0000 > 13.0000 13.5000 14.0000 14.5000 15.0000 15.5000 16.0000 > > I had thought the following Python/numpy script would be equivalent, > but it is not: > from scipy.interpolate import interpolate > from numpy.random import randn > from numpy import * > > data = arange(16) > data = data+1 > data = data.reshape(4,4) > xrange = arange(4) > yrange = arange(4) > X,Y = meshgrid(xrange,yrange) > > outgrid = interpolate.interp2d(X,Y,data,kind='linear') > xi = array([0,0.5,1,1.5,2,2.5,3]) > yi = xi > > z = outgrid(xi,yi) > > This results in the matrix: > [[ 1. 1.10731213 2. 2.89268787 3. > 3.25045605 > 4. ] > [ 3. 2.57118448 4. 5.42881552 5. > 4.90975947 > 6. ] > [ 5. 4.03505682 6. 7.96494318 7. > 6.56906289 > 8. ] > [ 7. 5.49892917 8. 10.50107083 9. > 8.22836631 > 10. ] > [ 9. 6.96280152 10. 13.03719848 11. > 9.88766973 > 12. ] > [ 11. 8.42667386 12. 15.57332614 13. > 11.54697315 > 14. ] > [ 13. 9.89054621 14. 18.10945379 15. > 13.20627657 > 16. ]] > > (Incidentally, is there a way to pretty-print arrays in numpy? The > above is kind of ugly and hard to read) > > Is this some kind of spline interpolation that I don't understand? > > Thanks, > > Mike Hearne > > > > > > > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc. 
> ------------------------------------------------------ > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20070801/5ae71590/attachment.html > > ------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-user Digest, Vol 48, Issue 6 > ***************************************** > From mhearne at usgs.gov Wed Aug 1 17:54:42 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 1 Aug 2007 15:54:42 -0600 Subject: [SciPy-user] 2d interpolation (Michael Hearne) In-Reply-To: <775f17a80708011443m36af7fbue0c9a752a7899533@mail.gmail.com> References: <775f17a80708011443m36af7fbue0c9a752a7899533@mail.gmail.com> Message-ID: <4F4EF2E2-5204-4129-B394-A4C09A7F8A2F@usgs.gov> Ummm, how? If I try "outgrid(arange(1,4.1,.5),arange(1,4.1,.5))", I get: array([[ 6. , 7.96494318, 7. , 6.56906289, 8. , 8. , 8. ], [ 8. , 10.50107083, 9. , 8.22836631, 10. , 10. , 10. ], [ 10. , 13.03719848, 11. , 9.88766973, 12. , 12. , 12. ], [ 12. , 15.57332614, 13. , 11.54697315, 14. , 14. , 14. ], [ 14. , 18.10945379, 15. , 13.20627657, 16. , 16. , 16. ], [ 14. , 18.10945379, 15. , 13.20627657, 16. , 16. , 16. ], [ 14. , 18.10945379, 15. , 13.20627657, 16. , 16. , 16. ]]) This doesn't look at all like my Matlab result! xi and yi are supposed to be arrays of indices, and since Matlab starts indexing at 1, and Python/numpy start indexing at 0, I assumed that xi and yi in numpy should be one less than the corresponding vectors in Matlab. Still confused, Mike Hearne On Aug 1, 2007, at 3:43 PM, Alex Liberzon wrote: > It doesn't fit because xi and yi are 1:.5:4 in Matlab but 0:.5:3 in > Python > I tested it with > outgrid(arange(1,4.1,.5),arange(1,4.1,.5)) and the result is the same > as in Matlab. 
> 
> Best
> Alex
> 
> On 8/2/07, scipy-user-request at scipy.org wrote:
>> [...]
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

------------------------------------------------------
Michael Hearne
mhearne at usgs.gov
(303) 273-8620
USGS National Earthquake Information Center
1711 Illinois St. Golden CO 80401
Senior Software Engineer
Synergetics, Inc.
------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From elcorto at gmx.net Wed Aug 1 19:21:24 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Thu, 02 Aug 2007 01:21:24 +0200
Subject: [SciPy-user] Ant Colony Optimization
In-Reply-To: <1185972368.efdfa04bnuttall@uky.edu>
References: <1185972368.efdfa04bnuttall@uky.edu>
Message-ID: <46B11574.5010207@gmx.net>

Brandon C. Nuttall wrote:
> Folks,
>
> Is there an implementation of the Ant Colony Optimization routines available in Python? See http://www.aco-metaheuristic.org/
>

AFAIK not some "quasi-standard" library (like LAPACK). Metaheuristics (be it
ACO, GA, PSO, whatever) are usually (a) relatively straightforward to code,
(b) very problem-specific. The real challenge is more the algorithm tuning
(population size etc.) and the selection of proper operators. Sometimes, a
combination of (tons of) literature hints and clever test runs solves this,
but there are also books on the topic (see e.g.
http://www.amazon.com/Parameter-Evolutionary-Algorithms-Computational-Intelligence/dp/3540694315).

--
cheers,
steve

I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams

From william.ratcliff at gmail.com Wed Aug 1 22:19:02 2007
From: william.ratcliff at gmail.com (william ratcliff)
Date: Wed, 1 Aug 2007 22:19:02 -0400
Subject: [SciPy-user] Question about scipy.test()
In-Reply-To: <20070801091255.GZ7447@mentat.za.net>
References: <827183970707311425x15810362jacd3907a3a85bca4@mail.gmail.com>
	<20070731221535.GY7447@mentat.za.net>
	<827183970707311547w3a57c809i81a2fac4768cea92@mail.gmail.com>
	<20070801091255.GZ7447@mentat.za.net>
Message-ID: <827183970708011919g5df3fc3dic7a61384890bcac4@mail.gmail.com>

I'm on a windows box, so I don't have Valgrind--I downloaded Rational
Purify from IBM, but have never used it--any suggestions?

Thanks,
William

On 8/1/07, Stefan van der Walt wrote:
>
> On Tue, Jul 31, 2007 at 06:47:18PM -0400, william ratcliff wrote:
> > I am using the latest version of the scipy source from SVN. I am using the
> > mingw from the enthought sumo distribution of python (2.4.3), just copied over
> > to the python25 directory. I'm not sure of the version--
> >
> > It is version 4.0.3 of gcc according to the folder in
> > C:\Python25\MingW\bin\lib\gcc-lib\i686-pc-mingw32
> >
> > The only addition I made to the library was a recent download of g95. Has this
> > bug come up before?
>
> Yes, see
>
> http://projects.scipy.org/scipy/scipy/ticket/404
>
> I'd be glad if you could help narrow down the problem using valgrind,
> as indicated in the ticket above.
>
> Thanks!
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
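[Editorial aside: Steve's point above — that metaheuristics like ACO are relatively straightforward to code, and that the real work is tuning — can be illustrated with a toy sketch. Everything here (the 4-city distance matrix, the parameter values alpha/beta/rho, the function names) is invented for illustration; this is plain-Python pseudocode of the general ant-system idea, not any published ACO library.]

```python
import random

# Hypothetical symmetric distances between 4 toy cities.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)

def tour_length(tour):
    # Length of the closed tour (returns to the start city).
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def ant_colony(n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    tau = [[1.0] * N for _ in range(N)]      # pheromone on each edge
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(N)]
            while len(tour) < N:
                i = tour[-1]
                cand = [j for j in range(N) if j not in tour]
                # Probability ~ pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / DIST[i][j]) ** beta
                     for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            tours.append(tour)
            L = tour_length(tour)
            if L < best_len:
                best, best_len = tour, L
        # Evaporate, then deposit pheromone proportional to tour quality.
        for i in range(N):
            for j in range(N):
                tau[i][j] *= (1.0 - rho)
        for t in tours:
            L = tour_length(t)
            for i in range(N):
                a, b = t[i], t[(i + 1) % N]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len

tour, length = ant_colony()
print(tour, length)  # optimal closed-tour length for this matrix is 18
```

As Steve says, the algorithm itself is short; choosing n_ants, alpha, beta, and rho for a real problem is where the effort goes.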
URL: From nwagner at iam.uni-stuttgart.de Thu Aug 2 02:54:09 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 02 Aug 2007 08:54:09 +0200 Subject: [SciPy-user] 2d interpolation In-Reply-To: <6F6365AF-B63F-4D6D-99BA-39D47179DA31@usgs.gov> References: <6F6365AF-B63F-4D6D-99BA-39D47179DA31@usgs.gov> Message-ID: <46B17F91.7000007@iam.uni-stuttgart.de> Michael Hearne wrote: > All: I'm trying to use interp2d to replicate behavior in Matlab. > > The Matlab script: > x = reshape(1:16,4,4)'; > xi = 1:0.5:4; > yi = [1:0.5:4]'; > z = interp2(x,xi,yi,'linear') > > which results in the matrix: > z = > > 1.0000 1.5000 2.0000 2.5000 3.0000 3.5000 4.0000 > 3.0000 3.5000 4.0000 4.5000 5.0000 5.5000 6.0000 > 5.0000 5.5000 6.0000 6.5000 7.0000 7.5000 8.0000 > 7.0000 7.5000 8.0000 8.5000 9.0000 9.5000 10.0000 > 9.0000 9.5000 10.0000 10.5000 11.0000 11.5000 12.0000 > 11.0000 11.5000 12.0000 12.5000 13.0000 13.5000 14.0000 > 13.0000 13.5000 14.0000 14.5000 15.0000 15.5000 16.0000 > > I had thought the following Python/numpy script would be equivalent, > but it is not: > from scipy.interpolate import interpolate > from numpy.random import randn > from numpy import * > > data = arange(16) > data = data+1 > data = data.reshape(4,4) > xrange = arange(4) > yrange = arange(4) > X,Y = meshgrid(xrange,yrange) > > outgrid = interpolate.interp2d(X,Y,data,kind='linear') > xi = array([0,0.5,1,1.5,2,2.5,3]) > yi = xi > > z = outgrid(xi,yi) > > This results in the matrix: > [[ 1. 1.10731213 2. 2.89268787 3. > 3.25045605 > 4. ] > [ 3. 2.57118448 4. 5.42881552 5. > 4.90975947 > 6. ] > [ 5. 4.03505682 6. 7.96494318 7. > 6.56906289 > 8. ] > [ 7. 5.49892917 8. 10.50107083 9. > 8.22836631 > 10. ] > [ 9. 6.96280152 10. 13.03719848 11. > 9.88766973 > 12. ] > [ 11. 8.42667386 12. 15.57332614 13. > 11.54697315 > 14. ] > [ 13. 9.89054621 14. 18.10945379 15. > 13.20627657 > 16. ]] > > (Incidentally, is there a way to pretty-print arrays in numpy? 
The > above is kind of ugly and hard to read) > You may use set_printoptions Help on function set_printoptions in module numpy.core.arrayprint: set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, suppress=None, nanstr=None, infstr=None) Set options associated with printing. :Parameters: precision : int Number of digits of precision for floating point output (default 8). threshold : int Total number of array elements which trigger summarization rather than full repr (default 1000). edgeitems : int Number of array items in summary at beginning and end of each dimension (default 3). linewidth : int The number of characters per line for the purpose of inserting line breaks (default 75). suppress : bool Whether or not suppress printing of small floating point values using scientific notation (default False). nanstr : string String representation of floating point not-a-number (default nan). infstr : string String representation of floating point infinity (default inf). e.g. set_printoptions(precision=4,linewidth=120) From prabhu at aero.iitb.ac.in Thu Aug 2 04:17:14 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 2 Aug 2007 13:47:14 +0530 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> Message-ID: <18097.37642.313889.398075@gargle.gargle.HOWL> >>>>> "Fernando" == Fernando Perez writes: Fernando> I wonder if we could plan on 2 bofs a night. Titus is Fernando> already signed up as moderator for one on Thursday: Fernando> http://scipy.org/SciPy2007/BoFs Fernando> and I'd like to have him in for the testing one as well. Fernando> If we could plan something like bof1 bof2 7-8 vis3d bio 8-9 testing astronomy This schedule sounds good to me. Will we be able to make it at 7:00pm to the bof? 
cheers, prabhu From cimrman3 at ntc.zcu.cz Thu Aug 2 04:41:30 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 02 Aug 2007 10:41:30 +0200 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> Message-ID: <46B198BA.3000106@ntc.zcu.cz> Fernando Perez wrote: > I wonder if we could plan on 2 bofs a night. Titus is already signed > up as moderator for one on Thursday: > > http://scipy.org/SciPy2007/BoFs > > and I'd like to have him in for the testing one as well. If we could > plan something like > > bof1 bof2 > > 7-8 vis3d bio > 8-9 testing astronomy I wonder if some material from those bofs will be on-line as I belong to the unfortunate crowd unable to visit the conference. Especially vid3d, bio and testing are of major interest for me. thanks, r. From emanuelez at gmail.com Thu Aug 2 05:05:02 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 2 Aug 2007 11:05:02 +0200 Subject: [SciPy-user] getting rid of for loops... Message-ID: Hello! say A is an array and N is an integer. Is there any smart (read fast!) way to obtain an array C that contains each row of A repeated N times? What i'm trying to do is to optimize something that looks like this: close_enough = [] tollerance = array([10, 15, 5, 8, 6]) for row_i in data1: for row_j in data2: if less(absolute(row_i - row_j), tollerance).all: close_enough.append([row_i, row_j]) i was thinking that using matrices like the ones described above i might be able to get rid of the for loops and (maybe) speed up the code (now it takes something like one minute). -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. 
It doesn't frighten me.- Richard Feynman From david.huard at gmail.com Thu Aug 2 09:24:08 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 2 Aug 2007 09:24:08 -0400 Subject: [SciPy-user] histogram equalization using scipy.interpolate.splrep In-Reply-To: References: Message-ID: <91cf711d0708020624o2911a9cdlb7a6164b1674666@mail.gmail.com> Hi Sebastian, I know next to nothing about image transforms but it appears to me you could solve your problem by the following. a: image array fa = a.flatten() sa = sort(fa) c = a1.cumsum()/sa[-1] # This is the cumulative distribution function [0,1] c = c*(max-min)+min # Scale the CDF to whatever scale you're using (eg. [0,256]) # Apply the scaling to the original image y = interp(fa, aa, c). # Reshape to the original form. new_a = y.reshape(a.shape) Or am I out of it ? David 2007/7/31, Sebastian Haase : > > Hi, > I'm trying to implement an efficiant numpy / scipy based > implementation of an image analysis procedure called "histogram > equalization". > http://en.wikipedia.org/wiki/Histogram_equalization > > (I don't want to use the Python Imaging Library !) > > For that I calculate the intesity histogram of my 2D or 3D images > (Let's call this 1D array `h`). Then I calculate `hh= > h.astype(N.float).cumsum()` > > Now I need to create a new 2D / 3D image replacing each pixel value > with the corresponding "look-up value" in `hh`. > The means essentially doing something like > > `a2 = hh[a]` -- if `a` was the original image. > Of course this syntax does not work, because > a) the array index lookup probably is not possible using 2D/3D indices > like this - right !? > and > b) my histogram is just sampling / approximating all the actually > occuring values in `a`. In other words: the histogram is based on > creating bins, and nearby pixel values in `a` are then counted into > same bin. Consequently there is no simple "array index lookup" to > get the needed value out of the `h` array. Instead one needs to > interpolate. 
> > This is were I thought the scipy.interpolate package might come in handy > ;-) > In fact, if `a` was a 1D "image" I think this would work: > >>> rep = scipy.interpolate.splrep(x,y) # x,y being the > horizontal,count axis in my histogram > >>> aa = scipy.interpolate.splev(a,rep) > > But for a.ndim being 2 I get: > Traceback (most recent call last): > File "", line 1, in ? > File "C:\PrWinN\scipy\interpolate\fitpack.py", line 443, in splev > y,ier=_fitpack._spl_(x,der,t,c,k) > ValueError: object too deep for desired array > > Using N.vectorize(lambda x: scipy.interpolate.splev(x,rep)) > works but is slow. > > I there a fast vectorized version of some interpolation already in > scipy.interpolate ? > Am I missing something ? > > Thanks, > Sebastian Haase > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Thu Aug 2 09:56:54 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 2 Aug 2007 15:56:54 +0200 Subject: [SciPy-user] histogram equalization using scipy.interpolate.splrep In-Reply-To: <91cf711d0708020624o2911a9cdlb7a6164b1674666@mail.gmail.com> References: <91cf711d0708020624o2911a9cdlb7a6164b1674666@mail.gmail.com> Message-ID: On 8/2/07, David Huard wrote: > Hi Sebastian, > > I know next to nothing about image transforms but it appears to me you could > solve your problem by the following. > > a: image array > > fa = a.flatten() > > sa = sort(fa) > c = a1.cumsum()/sa[-1] # This is the cumulative distribution function [0,1] > c = c*(max-min)+min # Scale the CDF to whatever scale you're using (eg. > [0,256]) > > # Apply the scaling to the original image > y = interp(fa, aa, c). > > # Reshape to the original form. > new_a = y.reshape (a.shape) Hi David, thanks for the reply. It certainly seems simple enough this way. a1 = sa I assume. 
Is interp part of scipy.interpolate ? I don't know it. What is aa ? The only concern I have is that this might eat lots of memory and time for large images. Sometimes I have 3D data with multiple 100MB in size. For example, when the image's dtype is float or float32, I was hoping that a "coarser" histogram would suffice. Maybe, since you know "interp", you might know of another function, that can lookup values from a function defined by a (sparse) sequence of value pairs. How about looking up multiple values at one (I think this is called vectorizing !?) ? Thanks again, Sebastian > > 2007/7/31, Sebastian Haase : > > > > Hi, > > I'm trying to implement an efficiant numpy / scipy based > > implementation of an image analysis procedure called "histogram > > equalization". > > http://en.wikipedia.org/wiki/Histogram_equalization > > > > (I don't want to use the Python Imaging Library !) > > > > For that I calculate the intesity histogram of my 2D or 3D images > > (Let's call this 1D array `h`). Then I calculate `hh= > > h.astype(N.float).cumsum()` > > > > Now I need to create a new 2D / 3D image replacing each pixel value > > with the corresponding "look-up value" in `hh`. > > The means essentially doing something like > > > > `a2 = hh[a]` -- if `a` was the original image. > > Of course this syntax does not work, because > > a) the array index lookup probably is not possible using 2D/3D indices > > like this - right !? > > and > > b) my histogram is just sampling / approximating all the actually > > occuring values in `a`. In other words: the histogram is based on > > creating bins, and nearby pixel values in `a` are then counted into > > same bin. Consequently there is no simple "array index lookup" to > > get the needed value out of the `h` array. Instead one needs to > > interpolate. 
> > > > This is were I thought the scipy.interpolate package might come in handy > ;-) > > In fact, if `a` was a 1D "image" I think this would work: > > >>> rep = scipy.interpolate.splrep(x,y) # x,y being the > > horizontal,count axis in my histogram > > >>> aa = scipy.interpolate.splev(a,rep) > > > > But for a.ndim being 2 I get: > > Traceback (most recent call last): > > File "", line 1, in ? > > File "C:\PrWinN\scipy\interpolate\fitpack.py", line > 443, in splev > > y,ier=_fitpack._spl_(x,der,t,c,k) > > ValueError: object too deep for desired array > > > > Using N.vectorize(lambda x: scipy.interpolate.splev(x,rep)) > > works but is slow. > > > > I there a fast vectorized version of some interpolation already in > > scipy.interpolate ? > > Am I missing something ? > > > > Thanks, > > Sebastian Haase > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Thu Aug 2 10:25:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 02 Aug 2007 09:25:16 -0500 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <46B198BA.3000106@ntc.zcu.cz> References: <46B0C700.6010000@ucsf.edu> <46B198BA.3000106@ntc.zcu.cz> Message-ID: <46B1E94C.7090103@gmail.com> Robert Cimrman wrote: > Fernando Perez wrote: >> I wonder if we could plan on 2 bofs a night. Titus is already signed >> up as moderator for one on Thursday: >> >> http://scipy.org/SciPy2007/BoFs >> >> and I'd like to have him in for the testing one as well. If we could >> plan something like >> >> bof1 bof2 >> >> 7-8 vis3d bio >> 8-9 testing astronomy > > I wonder if some material from those bofs will be on-line as I belong to > the unfortunate crowd unable to visit the conference. 
Especially vid3d, > bio and testing are of major interest for me. BOFs tend not to have many artifacts like slide sets. They're mostly just discussions. However, we can suggest that there be one person at each who will take notes for posting. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david.huard at gmail.com Thu Aug 2 11:06:27 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 2 Aug 2007 11:06:27 -0400 Subject: [SciPy-user] histogram equalization using scipy.interpolate.splrep In-Reply-To: References: <91cf711d0708020624o2911a9cdlb7a6164b1674666@mail.gmail.com> Message-ID: <91cf711d0708020806i6e3b782dpfb1b6ca753b72c12@mail.gmail.com> Sebastian, 2007/8/2, Sebastian Haase : > > > a1 = sa I assume. Yes. Missed that one. What is aa ?\ Sorry, that's what happen with crappy notation. Here is a clearer version. flattened_a = a.flatten() sorted_a = numpy.sort(flattened_a) cdf = sorted_a.cumsum()/sorted_a[-1] # This is the cumulative distribution function [0,1] cdf = cdf*(max-min)+min # Scale the CDF to whatever scale you're using (eg. [0,256]) # Apply the scaling to the original image y = numpy.interp(flattened_a, sorted_a, cdf). # Reshape to the original form. new_a = y.reshape (a.shape) The only concern I have is that this might eat lots of memory and time > for large images. Sometimes I have 3D data with multiple 100MB in > size. Do you want the final image to have the same resolution ? If you want to optimize the equalization process, you could do y = numpy.interp(flattened_a, bins, cdf). hist, bins = numpy.histogram(flattened_a, nbins, normed=True) cdf = hist.cumsum() There were posts a while back with C implementation of histogram that is quite fast even for large matrices. For example, when the image's dtype is float or float32, I was hoping > that a "coarser" histogram would suffice. 
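To put the pieces together, here is a self-contained version of the recipe (a sketch only: the function name and the rank-based empirical CDF are my choices, not something already in numpy/scipy -- equalization is usually defined through the rank-based CDF rather than a cumulative sum of the pixel values):

```python
import numpy as np

def equalize(a, out_min=0.0, out_max=255.0):
    """Histogram-equalize an array of any shape via numpy.interp.

    Sketch only: uses the rank-based empirical CDF
    (position in the sorted data / number of pixels).
    """
    flat = a.ravel().astype(float)
    sorted_vals = np.sort(flat)
    # Empirical CDF in (0, 1], then scaled to the output range.
    cdf = np.arange(1, flat.size + 1, dtype=float) / flat.size
    cdf = cdf * (out_max - out_min) + out_min
    # Vectorized lookup-with-interpolation for every pixel at once --
    # the fast replacement for the splev/vectorize attempt.
    return np.interp(flat, sorted_vals, cdf).reshape(a.shape)

img = np.array([[1.0, 3.0], [2.0, 4.0]])
eq = equalize(img, 0.0, 1.0)   # pixel ranks: 0.25, 0.75, 0.5, 1.0
```

For data with many repeated values, building the lookup table from histogram bin edges instead of the full sorted array keeps it small and avoids duplicate abscissae in `interp`.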
> Maybe, since you know "interp", you might know of another function, > that can lookup values from a function defined by a (sparse) sequence > of value pairs. > How about looking up multiple values at one (I think this is called > vectorizing !?) ? Not sure what you mean, here. My 2c: get it to work then worry about optimization... Cheers, David Thanks again, > Sebastian > > > > > > > > > > 2007/7/31, Sebastian Haase : > > > > > > Hi, > > > I'm trying to implement an efficiant numpy / scipy based > > > implementation of an image analysis procedure called "histogram > > > equalization". > > > http://en.wikipedia.org/wiki/Histogram_equalization > > > > > > (I don't want to use the Python Imaging Library !) > > > > > > For that I calculate the intesity histogram of my 2D or 3D images > > > (Let's call this 1D array `h`). Then I calculate `hh= > > > h.astype(N.float).cumsum()` > > > > > > Now I need to create a new 2D / 3D image replacing each pixel value > > > with the corresponding "look-up value" in `hh`. > > > The means essentially doing something like > > > > > > `a2 = hh[a]` -- if `a` was the original image. > > > Of course this syntax does not work, because > > > a) the array index lookup probably is not possible using 2D/3D indices > > > like this - right !? > > > and > > > b) my histogram is just sampling / approximating all the actually > > > occuring values in `a`. In other words: the histogram is based on > > > creating bins, and nearby pixel values in `a` are then counted into > > > same bin. Consequently there is no simple "array index lookup" to > > > get the needed value out of the `h` array. Instead one needs to > > > interpolate. 
> > > > > > This is were I thought the scipy.interpolate package might come in > handy > > ;-) > > > In fact, if `a` was a 1D "image" I think this would work: > > > >>> rep = scipy.interpolate.splrep(x,y) # x,y being the > > > horizontal,count axis in my histogram > > > >>> aa = scipy.interpolate.splev(a,rep) > > > > > > But for a.ndim being 2 I get: > > > Traceback (most recent call last): > > > File "", line 1, in ? > > > File "C:\PrWinN\scipy\interpolate\fitpack.py", line > > 443, in splev > > > y,ier=_fitpack._spl_(x,der,t,c,k) > > > ValueError: object too deep for desired array > > > > > > Using N.vectorize(lambda x: scipy.interpolate.splev(x,rep)) > > > works but is slow. > > > > > > I there a fast vectorized version of some interpolation already in > > > scipy.interpolate ? > > > Am I missing something ? > > > > > > Thanks, > > > Sebastian Haase > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Thu Aug 2 11:07:28 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 2 Aug 2007 11:07:28 -0400 Subject: [SciPy-user] histogram equalization using scipy.interpolate.splrep In-Reply-To: <91cf711d0708020806i6e3b782dpfb1b6ca753b72c12@mail.gmail.com> References: <91cf711d0708020624o2911a9cdlb7a6164b1674666@mail.gmail.com> <91cf711d0708020806i6e3b782dpfb1b6ca753b72c12@mail.gmail.com> Message-ID: <91cf711d0708020807x543771f4y744e96ff09555c27@mail.gmail.com> oups... 
it should read y = numpy.interp(flattened_a, bins, cdf). where hist, bins = numpy.histogram(flattened_a, nbins, normed=True) cdf = hist.cumsum() 2007/8/2, David Huard : > > Sebastian, > > 2007/8/2, Sebastian Haase : > > > > > > a1 = sa I assume. > > Yes. Missed that one. > > What is aa ?\ > > > > Sorry, that's what happen with crappy notation. Here is a clearer version. > > > flattened_a = a.flatten() > > sorted_a = numpy.sort(flattened_a) > cdf = sorted_a.cumsum()/sorted_a[-1] # This is the cumulative > distribution function [0,1] > cdf = cdf*(max-min)+min # Scale the CDF to whatever scale you're using > (eg. [0,256]) > > # Apply the scaling to the original image > y = numpy.interp(flattened_a, sorted_a, cdf). > > # Reshape to the original form. > new_a = y.reshape (a.shape) > > > The only concern I have is that this might eat lots of memory and time > > for large images. Sometimes I have 3D data with multiple 100MB in > > size. > > > > Do you want the final image to have the same resolution ? If you want to > optimize the equalization process, > you could do > y = numpy.interp(flattened_a, bins, cdf). > hist, bins = numpy.histogram(flattened_a, nbins, normed=True) > cdf = hist.cumsum() > > There were posts a while back with C implementation of histogram that is > quite fast even for large matrices. > > For example, when the image's dtype is float or float32, I was hoping > > that a "coarser" histogram would suffice. > > Maybe, since you know "interp", you might know of another function, > > that can lookup values from a function defined by a (sparse) sequence > > of value pairs. > > How about looking up multiple values at one (I think this is called > > vectorizing !?) ? > > > Not sure what you mean, here. My 2c: get it to work then worry about > optimization... 
> > Cheers, > > David > > Thanks again, > > Sebastian > > > > > > > > > > > > > > > > > > 2007/7/31, Sebastian Haase : > > > > > > > > Hi, > > > > I'm trying to implement an efficiant numpy / scipy based > > > > implementation of an image analysis procedure called "histogram > > > > equalization". > > > > http://en.wikipedia.org/wiki/Histogram_equalization > > > > > > > > (I don't want to use the Python Imaging Library !) > > > > > > > > For that I calculate the intesity histogram of my 2D or 3D images > > > > (Let's call this 1D array `h`). Then I calculate `hh= > > > > h.astype(N.float).cumsum()` > > > > > > > > Now I need to create a new 2D / 3D image replacing each pixel value > > > > with the corresponding "look-up value" in `hh`. > > > > The means essentially doing something like > > > > > > > > `a2 = hh[a]` -- if `a` was the original image. > > > > Of course this syntax does not work, because > > > > a) the array index lookup probably is not possible using 2D/3D > > indices > > > > like this - right !? > > > > and > > > > b) my histogram is just sampling / approximating all the actually > > > > occuring values in `a`. In other words: the histogram is based on > > > > creating bins, and nearby pixel values in `a` are then counted into > > > > same bin. Consequently there is no simple "array index lookup" to > > > > get the needed value out of the `h` array. Instead one needs to > > > > interpolate. > > > > > > > > This is were I thought the scipy.interpolate package might come in > > handy > > > ;-) > > > > In fact, if `a` was a 1D "image" I think this would work: > > > > >>> rep = scipy.interpolate.splrep(x,y) # x,y being the > > > > horizontal,count axis in my histogram > > > > >>> aa = scipy.interpolate.splev(a,rep) > > > > > > > > But for a.ndim being 2 I get: > > > > Traceback (most recent call last): > > > > File "", line 1, in ? 
> > > > File "C:\PrWinN\scipy\interpolate\fitpack.py", line > > > 443, in splev > > > > y,ier=_fitpack._spl_(x,der,t,c,k) > > > > ValueError: object too deep for desired array > > > > > > > > Using N.vectorize(lambda x: scipy.interpolate.splev(x,rep)) > > > > works but is slow. > > > > > > > > I there a fast vectorized version of some interpolation already in > > > > scipy.interpolate ? > > > > Am I missing something ? > > > > > > > > Thanks, > > > > Sebastian Haase > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Aug 2 13:48:38 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 2 Aug 2007 11:48:38 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <46B1E94C.7090103@gmail.com> References: <46B0C700.6010000@ucsf.edu> <46B198BA.3000106@ntc.zcu.cz> <46B1E94C.7090103@gmail.com> Message-ID: On 8/2/07, Robert Kern wrote: > Robert Cimrman wrote: > > Fernando Perez wrote: > >> I wonder if we could plan on 2 bofs a night. Titus is already signed > >> up as moderator for one on Thursday: > >> > >> http://scipy.org/SciPy2007/BoFs > >> > >> and I'd like to have him in for the testing one as well. 
If we could > >> plan something like > >> > >> bof1 bof2 > >> > >> 7-8 vis3d bio > >> 8-9 testing astronomy > > > > I wonder if some material from those bofs will be on-line as I belong to > > the unfortunate crowd unable to visit the conference. Especially vid3d, > > bio and testing are of major interest for me. > > BOFs tend not to have many artifacts like slide sets. They're mostly just > discussions. However, we can suggest that there be one person at each who will > take notes for posting. I volunteer to take notes in the vis3d one, where I won't be moderating. I will also bring a little music player that has microphone recording. We'll see how good of a job it does though, since instead of a single speaker there will be people all over the room talking. If it's listenable, we'll post the files later on the wiki. I can only be in one place at a time, so if someone else can do the same thing for the other two BOFs, great. Cheers, f From fperez.net at gmail.com Thu Aug 2 13:51:04 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 2 Aug 2007 11:51:04 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <18097.37642.313889.398075@gargle.gargle.HOWL> References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> Message-ID: On 8/2/07, Prabhu Ramachandran wrote: > >>>>> "Fernando" == Fernando Perez writes: > > Fernando> I wonder if we could plan on 2 bofs a night. Titus is > Fernando> already signed up as moderator for one on Thursday: > > Fernando> http://scipy.org/SciPy2007/BoFs > > Fernando> and I'd like to have him in for the testing one as well. > Fernando> If we could plan something like > > bof1 bof2 > > 7-8 vis3d bio > 8-9 testing astronomy > > This schedule sounds good to me. Will we be able to make it at 7:00pm > to the bof? Honestly I'm not sure, it's a bit tight. Do we put instead two for Thursday and leave the other two for Wednesday night, at the risk of losing some people? 
I also worry that just one hour per bof is really not enough, since they tend to generate a lot of animated discussion. For both reasons I'm leaning towards bofs on Wednesday, at the risk of losing a few people. Thoughts? Cheers, f From gael.varoquaux at normalesup.org Thu Aug 2 13:53:36 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 2 Aug 2007 19:53:36 +0200 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> Message-ID: <20070802175336.GH16632@clipper.ens.fr> On Thu, Aug 02, 2007 at 11:51:04AM -0600, Fernando Perez wrote: > Honestly I'm not sure, it's a bit tight. Do we put instead two for > Thursday and leave the other two for Wednesday night, at the risk of > losing some people? I also worry that just one hour per bof is really > not enough, since they tend to generate a lot of animated discussion. +1, but I'll be there on Wednesday, so I am biased. Ga?l From rmay at ou.edu Thu Aug 2 14:01:49 2007 From: rmay at ou.edu (Ryan May) Date: Thu, 02 Aug 2007 13:01:49 -0500 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <20070802175336.GH16632@clipper.ens.fr> References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> <20070802175336.GH16632@clipper.ens.fr> Message-ID: <46B21C0D.3070403@ou.edu> Gael Varoquaux wrote: > On Thu, Aug 02, 2007 at 11:51:04AM -0600, Fernando Perez wrote: >> Honestly I'm not sure, it's a bit tight. Do we put instead two for >> Thursday and leave the other two for Wednesday night, at the risk of >> losing some people? I also worry that just one hour per bof is really >> not enough, since they tend to generate a lot of animated discussion. > > +1, but I'll be there on Wednesday, so I am biased. > > Ga?l Same here. 
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From dwf at cs.toronto.edu Thu Aug 2 17:11:35 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 2 Aug 2007 17:11:35 -0400 Subject: [SciPy-user] getting rid of for loops... In-Reply-To: References: Message-ID: <26A144B3-C4CC-4A42-AABF-FAC13D072C4A@cs.toronto.edu> On 2-Aug-07, at 5:05 AM, Emanuele Zattin wrote: > Hello! > > say A is an array and N is an integer. Is there any smart (read fast!) > way to obtain an array C that contains each row of A repeated N times? > > What i'm trying to do is to optimize something that looks like this: > > close_enough = [] > tollerance = array([10, 15, 5, 8, 6]) > for row_i in data1: > for row_j in data2: > if less(absolute(row_i - row_j), tollerance).all: > close_enough.append([row_i, row_j]) > > i was thinking that using matrices like the ones described above i > might be able to get rid of the for loops and (maybe) speed up the > code (now it takes something like one minute). If you want to do what I think you want to do, you should be able to use indexing tricks like the following: In [7]: m = array([[5,4,3,2,1]]) In [8]: m Out[8]: array([[5, 4, 3, 2, 1]]) In [9]: m[[0,0,0,0,0],:] Out[9]: array([[5, 4, 3, 2, 1], [5, 4, 3, 2, 1], [5, 4, 3, 2, 1], [5, 4, 3, 2, 1], [5, 4, 3, 2, 1]]) Basically use an array of indices along one dimension that just repeats the same index over and over again. If NumPy's implementation is as clever as I think, then it shouldn't even require a copy. David From berthe.loic at gmail.com Thu Aug 2 17:32:18 2007 From: berthe.loic at gmail.com (LB) Date: Thu, 02 Aug 2007 21:32:18 -0000 Subject: [SciPy-user] getting rid of for loops... 
In-Reply-To: References: Message-ID: <1186090338.560240.222500@m37g2000prh.googlegroups.com> Suppose you want to compare two matrices A and B : >>> A = random.randint(-10, 10, size=(4, 5)) >>> B = random.randint(-10, 10, size=(6, 5)) >>> tollerance = array([10, 15, 5, 8, 6]) >>> A array([[-7, -2, -5, -1, -2], [-9, -9, 9, -9, 1], [ 9, -9, 3, -6, -6], [-8, -5, -8, 7, 2]]) >>> B array([[ 8, 2, 2, 4, 3], [ 3, -3, -7, 5, 3], [-4, 9, 6, -8, 3], [ 3, -6, 5, -7, -3], [ 3, -5, -3, 1, 5], [ 4, -5, -5, 0, -3]]) Use broadcasting to replace for loops : >>> data1= A[newaxis, :, :] >>> data1.shape (1, 4, 5) >>> data2 = B[:, newaxis, :] >>> data2.shape (6, 1, 5) >>> close_enough = less(abs( data1-data2), tollerance).all(axis=2) >>> close_enough array([[False, False, False, False], [False, False, False, False], [False, False, False, False], [False, False, True, False], [False, False, False, False], [False, False, False, False]], dtype=bool) So we have a true value on the row 3 and on the column 2 : >>> less(abs(B[3]-A[2]), tollerance) array([True, True, True, True, True], dtype=bool) -- LB From niels.ellegaard at gmail.com Fri Aug 3 04:28:07 2007 From: niels.ellegaard at gmail.com (Niels L. Ellegaard) Date: Fri, 03 Aug 2007 10:28:07 +0200 Subject: [SciPy-user] getting rid of for loops... References: <1186090338.560240.222500@m37g2000prh.googlegroups.com> Message-ID: <87wswdrpig.fsf@gmail.com> LB writes: > Suppose you want to compare two matrices A and B : >>>> A = random.randint(-10, 10, size=(4, 5)) >>>> B = random.randint(-10, 10, size=(6, 5)) >>>> tolerance = array([10, 15, 5, 8, 6]) >>>> data1= A[newaxis, :, :] >>>> data2 = B[:, newaxis, :] >>>> close_enough = less(abs( data1-data2), tolerance).all(axis=2) Wow, that was a nice trick and a useful example. Would it make sense to add it as example 6 to the broadcasting wiki page? 
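For reference, the trick packages up nicely into a function (a sketch with invented names; note also that the original double loop's `less(...).all` is missing its call parentheses -- a bound method object is always true -- so that loop would keep every pair):

```python
import numpy as np

def close_pairs(data1, data2, tol):
    """Index pairs (i, j) with |data1[i] - data2[j]| < tol in every
    column -- the broadcasting version of the double for loop."""
    # (1, n1, k) - (n2, 1, k) -> a temporary of shape (n2, n1, k),
    # so memory grows as n1*n2*k; chunk data2 if that gets too big.
    diff = np.abs(data1[np.newaxis, :, :] - data2[:, np.newaxis, :])
    mask = (diff < tol).all(axis=2)      # note: .all() is *called* here
    j, i = np.nonzero(mask)              # rows of mask index data2
    return [(int(a), int(b)) for a, b in zip(i, j)]

tol = np.array([10, 15, 5, 8, 6])
A = np.zeros((4, 5))
B = np.ones((2, 5)) * 100
B[0] = 1                                 # only row 0 of B is close to A
print(close_pairs(A, B, tol))            # -> [(0, 0), (1, 0), (2, 0), (3, 0)]
```

And for the literal opening question -- each row of A repeated N times -- `np.repeat(A, N, axis=0)` does it directly (it makes a copy, as does the fancy-indexing variant).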
http://www.scipy.org/EricsBroadcastingDoc Niels From emanuelez at gmail.com Fri Aug 3 04:43:03 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Fri, 3 Aug 2007 10:43:03 +0200 Subject: [SciPy-user] getting rid of for loops... In-Reply-To: <87wswdrpig.fsf@gmail.com> References: <1186090338.560240.222500@m37g2000prh.googlegroups.com> <87wswdrpig.fsf@gmail.com> Message-ID: Very nice indeed. My approach was different (big 2D arrays), but this one looks way more elegant and probably even faster (the performance difference from the double for loop and my solution was already impressive) On 8/3/07, Niels L. Ellegaard wrote: > LB writes: > > > Suppose you want to compare two matrices A and B : > >>>> A = random.randint(-10, 10, size=(4, 5)) > >>>> B = random.randint(-10, 10, size=(6, 5)) > >>>> tolerance = array([10, 15, 5, 8, 6]) > >>>> data1= A[newaxis, :, :] > >>>> data2 = B[:, newaxis, :] > >>>> close_enough = less(abs( data1-data2), tolerance).all(axis=2) > > Wow, that was a nice trick and a useful example. Would it make > sense to add it as example 6 to the broadcasting wiki page? > > http://www.scipy.org/EricsBroadcastingDoc > > Niels > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From cimrman3 at ntc.zcu.cz Fri Aug 3 05:14:58 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 03 Aug 2007 11:14:58 +0200 Subject: [SciPy-user] Testing BOF at scipy'07?
In-Reply-To: References: <46B0C700.6010000@ucsf.edu> <46B198BA.3000106@ntc.zcu.cz> <46B1E94C.7090103@gmail.com> Message-ID: <46B2F212.5000400@ntc.zcu.cz> Fernando Perez wrote: > On 8/2/07, Robert Kern wrote: >> Robert Cimrman wrote: >>> Fernando Perez wrote: >>>> I wonder if we could plan on 2 bofs a night. Titus is already signed >>>> up as moderator for one on Thursday: >>>> >>>> http://scipy.org/SciPy2007/BoFs >>>> >>>> and I'd like to have him in for the testing one as well. If we could >>>> plan something like >>>> >>>> bof1 bof2 >>>> >>>> 7-8 vis3d bio >>>> 8-9 testing astronomy >>> I wonder if some material from those bofs will be on-line, as I belong to >>> the unfortunate crowd unable to attend the conference. Especially vis3d, >>> bio and testing are of major interest for me. >> BOFs tend not to have many artifacts like slide sets. They're mostly just >> discussions. However, we can suggest that there be one person at each who will >> take notes for posting. > > I volunteer to take notes in the vis3d one, where I won't be moderating. > > I will also bring a little music player that has microphone recording. > We'll see how good of a job it does though, since instead of a single > speaker there will be people all over the room talking. If it's > listenable, we'll post the files later on the wiki. > > I can only be in one place at a time, so if someone else can do the > same thing for the other two BOFs, great. > > Cheers, > > f That would be great, thank you! r. From ryanlists at gmail.com Fri Aug 3 10:55:17 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Aug 2007 09:55:17 -0500 Subject: [SciPy-user] Python evangelism question: procedural programming Message-ID: I have a colleague in our electrical engineering department who today asked me "What's good about Python?". I responded with "What's bad about Python?"
I am used to responding to engineering colleagues in terms of Python vs Matlab, but this particular person uses Mathematica almost exclusively. So, he wanted to know if Python could be used like Mathematica in terms of writing rule- and procedure-based programs. I don't know how difficult this would be. I don't do this kind of work myself; I think of Mathematica primarily as a symbolic algebra language, which Python could not easily replace. Does anyone have experience coming from Mathematica to Python? Should I try to convince this person to pursue Python? Thanks, Ryan From aisaac at american.edu Fri Aug 3 11:29:39 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 3 Aug 2007 11:29:39 -0400 Subject: [SciPy-user] Python evangelism question: procedural programming In-Reply-To: References: Message-ID: I tried Mathematica a while back and found it clumsy for anything that did not require symbolics. This example gives the right flavor: http://www.larssono.com/musings/matmatpy/index.html Your friend can have both worlds and choose as needed: http://library.wolfram.com/infocenter/MathSource/585/ http://library.wolfram.com/infocenter/MathSource/6622/ Cheers, Alan Isaac From gael.varoquaux at normalesup.org Fri Aug 3 11:34:41 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 3 Aug 2007 17:34:41 +0200 Subject: [SciPy-user] Python evangelism question: procedural programming Message-ID: <20070803153441.GD1680@clipper.ens.fr> On Fri, Aug 03, 2007 at 09:55:17AM -0500, Ryan Krauss wrote: > So, he wanted to know if Python could be used like Mathematica in > terms of writing rule- and procedure-based programs. I don't know how > difficult this would be. I think I understand what he means. Python can almost be used like this, but the UI is missing. IPython is currently working on a "notebook" UI that would address this problem.
I have developed my own workflow using a home-made script ("pyreport", http://gael-varoquaux.info/computers/pyreport/ ) to approach this. Ultimately I want to merge this with IPython's effort in order to get nice notebook functionality with html and pdf output, but currently I don't have time for this. I am attaching a file that shows how I use Python with pyreport (Python file attached, pdf file on http://gael-varoquaux.info/FORT_MOT.pdf ). The file also shows the limits of the approach, and for larger projects I use more modular programming. This example is a bit frightening because it has really grown longer than a Mathematica script and is a bit too complex to live in such monolithic code, but it is the only thing I have around. There are also some very simple examples on pyreport's page. So I think the answer is that it is possible with Python, but the tools are not very developed and progress is slow. My 2 cents, Gaël -------------- next part -------------- ##!##################################################################### ##! Description of the FORT ##!##################################################################### # Global imports. from __future__ import division # So that 2/3 = 0.6666 and not 0 ! from pylab import * from scipy import * main = ( __name__ == '__main__' ) #! Constants #!######################################################################### # ---------------------------------------------------------------------- # FORT constants: P_FORT = 50 w_0 = 100e-6 lambda_L = 1565e-9 print("FORT power: " + str(P_FORT) + " W, \t waist: " + str(w_0) + " m \t wavelength: " + str(lambda_L) + " m") # Position of the center relative to the center of the MOT.
r_FORT = array([0.0, 0.0, 0.0]) # Transformation to go from a gaussian beam propagating along z to the trap rot_FORT = array([ [0, 0, 1, ], [ 0, 1, 0], [ 1, 0, 0, ], ]) print("center of the FORT relativ to the MOT : %.4e %.4e %.4e (x,y,z, m)" %( r_FORT[0], r_FORT[1], r_FORT[2])) # ---------------------------------------------------------------------- # Rubidium 87 D2 transition: Gamma_Rb = 2*pi*5.98e6 # Linewidth lambda_Rb = 780e-9 # Wavelength # Rubidium excited state transitions: # 5P(3/2) -> 4D(5/2) is the most important transiton of the excited # level around 1560nm # For the wavelength, see NIST database, for the Einstein A-Coefficient, # see Clark PRA 69 022509 lambda_Rb_e = 1528.948e-9 # Wavelength # 4*alpha * (2*pi)**3 * c_0/(3*e**2 * lamda_Rb_e**3) * (e*a_0)**2 * # (10.634) ** 2 * 1/(2*3/2+1) Gamma_Rb_e = 16e6 # Linewidth # ---------------------------------------------------------------------- # Tables of rubidium transition for the MOT ground state: # Wavelength lambda_g = array([ 794.979, 780.241, 421.671, 420.298, 359.259, 358.807, 335.177, 334.966 ]) * 1e-9 # Einstein A coefficients (ie Gamma) A_g = array([ 36.1, 38.1, 1.50, 1.77, 0.289, 0.396, 0.0891, 0.137 ]) * 1e6 # And for the excited state: # Wavelength lambda_e = array([ 1366.875, 741.021, 616.133, 1529.261, 1529.366, 776.157, 775.978, 630.0966, 630.0067 ]) * 1e-9 # Einstein A coefficients (ie Gamma) A_e = array([ 9.5618, 3.0339, 1.4555, 1.774, 10.675, 0.67097, 3.93703, 0.63045, 3.71235 ]) * 1e6 # ---------------------------------------------------------------------- # MOT constants: delta_0 = -15e6 # Detuning P_mot = 35e-3 # Power in a beam d_mot = 25e-3 # Diameter of a beam print("MOT power: %.3e W" % P_mot + "\t beam diameter: %.3e m" % d_mot + "\n detuning %.3e Hz" % delta_0 ) I_sat_Rb = 16 # Rubidium saturation intensity mu_Rb = 0.71e10 # Rubidium magnetic moment (T*s) # FIXME: There should be a 2*pi here, but it doesn't give the right # results, moving along for now. 
#mu_Rb = 0.71e10/(2*pi) # Rubidium magnetic moment (T*s) Bx = 15e-2 # magnetic field gradient (T/m and not G/cm !) m_Rb = 1.45e-25 # Rubidium 87 mass # ---------------------------------------------------------------------- # Second MOT constants: delta_1 = -12e6 # Detuning to the bottom of the FORT P_mot_2 = 15e-3 # Power in a beam #P_mot_2 = 0 # Power in a beam print("Second MOT frequency: \t power: %.3e W" % P_mot_2 + "\n\t\t detuning to the bottom of the trap :%.3e Hz" % delta_1 ) # ---------------------------------------------------------------------- # Fondamental constants: c_0 = 3e8 # Speed of light (m/s) h_bar = 1.054e-34 # h_bar ! k_J = 1.38e-23 # Blotzmann constant, in J/K # ---------------------------------------------------------------------- # mesh grid of the region of space we are interested in : def spaced_grid(xrange, yrange, nx=50, ny=50): dx, dy = xrange/float(nx), yrange/float(ny) return mgrid[-xrange:(xrange+dx):dx, -yrange:(yrange+dy):dy] def make_grids(xrange, yrange, vrange): global extentxy, Xgrid, Ygrid extentxy = (-xrange, xrange, -yrange, yrange) Xgrid, Ygrid = spaced_grid(xrange, yrange) global extentyv, Ygrid2, Vgrid, Ygrid2sparse, Vgridsparse yrange2 = 0.5*yrange extentyv = (-yrange2, yrange2, -vrange, vrange) Ygrid2, Vgrid = spaced_grid(yrange2, vrange) Ygrid2sparse, Vgridsparse = spaced_grid(yrange2, vrange, nx=8, ny=8) xrange, yrange, vrange = 0.04, 0.002, 4 [dx, dy, dv] = [xrange/50.0, yrange/50.0, vrange/50.0] make_grids(xrange, yrange, vrange) # The origin : O=array([[0,0,0],]) # ---------------------------------------------------------------------- # Useful subroutines: def pathcolor(x,y,t,colormap=cm.copper,linewidth=1, linestyle='solid'): """ Plots a path indexed by a parameter t, using a colormap to represent the parameter """ # A plot with line color changing from matplotlib.collections import LineCollection points=zip(x,y) segments = zip(points[:-1], points[1:]) t = t + t.min() t = t/t.max() colors=cm.copper(t) ax=gca() LC 
= LineCollection(segments, colors = colors) LC.set_linewidth(linewidth) LC.set_linestyle(linestyle) ax.add_collection(LC) axis() def cmap_map(function,cmap): """ Applies function (which should operate on vectors of shape 3: [r, g, b]), on the color vectors of colormap cmap. This routine will break any discontinuous points in a colormap. Beware, function should map the [0, 1] segment to itself, or you are in for surprises. See also cmap_map. """ cdict = cmap._segmentdata step_dict = {} # Firt get the list of points where the segments start or end for key in ('red','green','blue'): step_dict[key] = map(lambda x: x[0], cdict[key]) step_list = reduce(lambda x, y: x+y, step_dict.values()) step_list = array(list(set(step_list))) # Then compute the LUT, and apply the function to the LUT reduced_cmap = lambda step : array(cmap(step)[0:3]) old_LUT = array(map( reduced_cmap, step_list)) new_LUT = array(map( function, old_LUT)) # Now try to make a minimal segment definition of the new LUT cdict = {} for i,key in enumerate(('red','green','blue')): this_cdict = {} for j,step in enumerate(step_list): if step in step_dict[key]: this_cdict[step] = new_LUT[j,i] elif new_LUT[j,i]!=old_LUT[j,i]: this_cdict[step] = new_LUT[j,i] colorvector= map(lambda x: x + (x[1], ), this_cdict.items()) colorvector.sort() cdict[key] = colorvector return matplotlib.colors.LinearSegmentedColormap('colormap',cdict,1024) def cmap_xmap(function,cmap): """ Applies function, on the indices of colormap cmap. Beware, function should map the [0, 1] segment to itself, or you are in for surprises. See also cmap_xmap. """ cdict = cmap._segmentdata function_to_map = lambda x : (function(x[0]), x[1], x[2]) for key in ('red','green','blue'): cdict[key] = map(function_to_map, cdict[key]) cdict[key].sort() assert (cdict[key][0]<0 or cdict[key][-1]>1), "Resulting indices extend out of the [0, 1] segment." 
return matplotlib.colors.LinearSegmentedColormap('colormap',cdict,1024) light_jet = cmap_map(lambda x: x*0.7+0.3, cm.jet) def trajplot(y,time, delta, extent = ( -xrange, xrange, -yrange, yrange, -yrange, yrange)): xmin = extent[0] xmax = extent[1] dx = (xmax-xmin)/100.0 ymin = extent[2] ymax = extent[3] dy = (ymax - ymin)/100.0 zmin = extent[4] zmax = extent[5] dz = (zmax - zmin)/100.0 # YZ plane subplot(2,3,1) [Ygrid, Zgrid] = mgrid[ ymin:ymax:dy, zmin:zmax:dz ] positions=column_stack((zeros(Zgrid.size),Ygrid.ravel(),Zgrid.ravel())) deltaMap=delta(positions) deltaMat=reshape(deltaMap,(Ygrid.shape)) imshow(rot90(deltaMat),origin='lower',extent=(xmin, xmax, ymin, ymax),aspect="auto", cmap=cm.bone) cset = contour(rot90(deltaMat), array([0,Gamma_Rb/2,-Gamma_Rb/2]), cmap=cm.bone, origin='lower', linewidths=1, extent=extentxy, aspect="auto") pathcolor(y[:,2],y[:,1],time) yticks(yticks()[0][0:-1:2]) # Keep one tick out of two xticks(xticks()[0][0:-1:2]) xticks(xticks()[0], len(xticks()[1])*['',]) # Hide the labels ax = gca() text(0.05,0.95,'Y',verticalalignment='top',transform = ax.transAxes) text(0.95,0.05,'Z',horizontalalignment='right',transform = ax.transAxes) title(' Projected trajectories', horizontalalignment = 'left') # XY plane [Xgrid, Ygrid] = mgrid[ xmin:xmax:dx, ymin:ymax:dy ] positions=column_stack((Xgrid.ravel(),Ygrid.ravel(),zeros(Ygrid.size))) subplot(2,3,2) deltaMap=delta(positions) deltaMat=reshape(deltaMap,(Xgrid.shape)) imshow(rot90(deltaMat),origin='lower',extent=(xmin, xmax, ymin, ymax),aspect="auto", cmap=cm.bone) cset = contour(rot90(deltaMat), array([0,Gamma_Rb/2,-Gamma_Rb/2]), cmap=cm.bone, origin='lower', linewidths=1, extent=extentxy, aspect="auto") pathcolor(y[:,0],y[:,1],time) yticks(yticks()[0][0:-1:2]) # Keep one tick out of two xticks(xticks()[0][0:-1:2]) yticks(yticks()[0], len(yticks()[1])*['',]) ax = gca() text(0.05, 0.95, 'Y', verticalalignment='top', transform=ax.transAxes) text(0.95, 0.05, 'X', horizontalalignment='right', 
transform=ax.transAxes) # XZ plane [Xgrid, Zgrid] = mgrid[ xmin:xmax:dx, zmin:zmax:dz ] positions = column_stack((Xgrid.ravel(),zeros(Zgrid.size),Zgrid.ravel())) subplot(2, 3, 4) deltaMap = delta(positions) deltaMat = reshape(deltaMap,(Xgrid.shape)) imshow(deltaMat, origin='lower', extent=(xmin, xmax, zmin, zmax), aspect="auto", cmap=cm.bone) cset = contour(deltaMat, array([0, Gamma_Rb/2, -Gamma_Rb/2]), cmap=cm.bone, origin='lower', linewidths=1, extent=(xmin, xmax, zmin, zmax), aspect="auto") pathcolor(y[:,0], y[:,2], time) yticks(yticks()[0][0:-1:2]) # Keep one tick out of two xticks(xticks()[0][0:-1:2]) ax = gca() text(0.05,0.95,'X',verticalalignment='top',transform = ax.transAxes) text(0.95,0.05,'Z',horizontalalignment='right',transform = ax.transAxes) # X timeserie subplot(3,3,3) title('Time series') plot(time,y[:,0]) ax = gca() text(0.05,0.95,'X',verticalalignment='top',transform = ax.transAxes) text(0.95,0.05,'t',horizontalalignment='right',transform = ax.transAxes) xticks(xticks()[0], len(xticks()[1])*['',]) # Y timeserie subplot(3,3,6) plot(time,y[:,1]) ax = gca() text(0.05,0.95,'Y',verticalalignment='top',transform = ax.transAxes) text(0.95,0.05,'t',horizontalalignment='right',transform = ax.transAxes) xticks(xticks()[0], len(xticks()[1])*['',]) # Z timeserie subplot(3,3,9) ax = gca() plot(time,y[:,2]) text(0.05,0.95,'Z',verticalalignment='top',transform = ax.transAxes) text(0.95,0.05,'t',horizontalalignment='right',transform = ax.transAxes) #! #! FORT #!############################################################################ #! #! 
Intensity distribution #!---------------------------------------------------------------------------- Z_R = pi*w_0**2/lambda_L print "Raleigh Length: " + str(Z_R) + " m" w = lambda z : w_0 *sqrt( 1 + (z/Z_R) **2 ) Gaussian_beam = lambda r : P_FORT/(pi*w(r[:,2])**2)*exp(-(r[:,0]**2+r[:,1]**2)/(2*w(r[:,2])**2)) NablaGaussian = lambda r : transpose(Gaussian_beam(r) * array([ r[:,0]/w(r[:,2])**2, r[:,1]/w(r[:,2])**2, r[:,2]/(r[:,2]**2+Z_R**2)*(-2 + (r[:,0]**2 + r[:,1]**2)/w(r[:,2])**2) ])) Intensity = lambda r : Gaussian_beam(dot((r-r_FORT),transpose(rot_FORT)) ) NablaIntensity = lambda r : dot(NablaGaussian(dot((r-r_FORT),transpose(rot_FORT)) ),transpose(rot_FORT)) positions=column_stack((Xgrid.ravel(),Ygrid.ravel(),zeros(Xgrid.size))) IntensityMap=Intensity(positions) IntensityMat=reshape(IntensityMap,(Xgrid.shape)) rcParams.update({'figure.figsize': [10.5,5], 'text.usetex': True}) if main and True: figure(1) clf() imshow(rot90(-IntensityMat),origin='lower',extent=extentxy,aspect="auto") hot() plot(Xgrid[:,1],w(Xgrid[:,1]),'k--') plot(Xgrid[:,1],-w(Xgrid[:,1]),'k--') title('FORT beam intensity') xlabel("x{\small [m]}") ylabel("y{\small [m]}") show() figure(2) clf() print("Intensity at center of trap: %.3e W/m^2" % Intensity(O) ) #! #! 
Depth of the potential #!---------------------------------------------------------------------------- # The pulsations: omega = lambda l: 2*pi*c_0/l omega_L = omega(lambda_L) omega_Rb = omega(lambda_Rb) omega_Rb_e = omega(lambda_Rb_e) omega_g = omega(lambda_g) omega_e = omega(lambda_e) # ---------------------------------------------------------------------- # The potential for Rb: """ Dipolar potential depth as a function of laser wavelengths, einstein coefficients, and transition wavelengths """ u_dip = lambda l, A, l_0: sum( -3*pi* c_0**2/(2*omega(l_0)**3) * ( A/(omega(l_0)-omega(l)) + A/(omega(l_0)+omega(l)) )) / (2*pi*h_bar) gamma_sc = lambda l, A, l_0: sum( 3*pi* c_0**2/(2*h_bar*omega(l_0)**3) * (l_0/l)**3 * ( A/(omega(l_0)-omega(l)) + A/(omega(l_0)+omega(l)) )**2 ) u_g = sum(-3*pi* c_0**2/(2*omega_g**3) * ( A_g/(omega_g-omega_L) + A_g/(omega_g+omega_L) )) U_g = lambda r : u_g * Intensity( r) print("Ground state: Trap depth: %0.3e K" % (U_g(O)/k_J) + "\t Detuning: %.3e Hz" % (U_g(O)/(2*pi*h_bar)) ) u_e = sum(-3*pi* c_0**2/(2*omega_e**3) * ( A_e/(omega_e-omega_L) + A_e/(omega_e+omega_L) )) U_e = lambda r : u_e * Intensity( r) print("Excited state: Trap depth: %.3e K" % (U_e(O)/k_J) + "\t Detuning: %.3e Hz" % (U_e(O)/(2*pi*h_bar)) ) # Now transform delta_1 in a detuning relative to the max detuning: delta_1=delta_1+(U_e(O)-U_g(O))/(2*pi*h_bar) print("Detuning of the 2nd laser to the resonance: %.3e MHz" % (delta_1/1.0e6)) #! #! MOT #!############################################################################ # Wave-vectors of the different lasers: kx = array( [ 1, 0, 0 ]) ky = array( [ 0, 1, 0 ]) kz = array( [ 0, 0, 1 ]) Kx = -kx Ky = -ky Kz = -kz lasers = ( kx, Kx, ky, Ky, kz, Kz ) #! #! 
Detuning #!---------------------------------------------------------------------------- # Hack delta_0_outside = delta_0 def delta(r,v,k,delta_0=None): """ Detuning """ if not delta_0: delta_0 = delta_0_outside return(2*pi*(delta_0 + mu_Rb*Bx*dot(r, transpose(k)) + (U_g(r) - U_e(r))/(2*pi*h_bar) + dot(v, transpose(k))/lambda_Rb)/(2*Gamma_Rb)) if main and True: detuningMap=-(delta(positions,zeros(positions.shape),kx) + delta(positions,zeros(positions.shape),Kx))/2 detuningMat=reshape(detuningMap,(Xgrid.shape)) imshow(rot90(-2*detuningMat), origin='lower', extent=extentxy, aspect="auto", cmap=cmap_xmap(lambda x: 1-x, cm.bone)) co=colorbar() co.set_label('Detuning, in units of $\Gamma_{Rb}$') cset = contour(rot90(detuningMat), array([0,Gamma_Rb/2,-Gamma_Rb/2]), cmap=cm.bone, origin='lower', linewidths=1, extent=extentxy, aspect="auto") #clabel(cset, inline=1, fmt='%1.1f', fontsize=10, colors='black') title(r'Detuning, in units of $\Gamma_{Rb}$') xlabel("x{\small [m]}") ylabel("y{\small [m]}") show() #! 
Forces on the atoms #!---------------------------------------------------------------------------- # saturation parameter : s_0 = P_mot / (pi* (d_mot/2)**2 ) * 1 / I_sat_Rb print("Saturation parameter at resonnance: %g " % s_0) s_1 = P_mot_2 / (pi* (d_mot/2)**2 ) * 1 / I_sat_Rb print("Saturation parameter at resonnance for second frequency: %g " % s_1) # ---------------------------------------------------------------------- # capture velocity : #v_capt = ( -2*pi*delta_0 + mu_Rb*Bx*d_mot) * lambda_Rb/(2*pi) v_capt = mu_Rb*Bx*5e-3 * lambda_Rb/(2*pi) print "Capture velocity: %.3e m/s" % v_capt v_capt = h_bar*Gamma_Rb*pi / (lambda_Rb**2*m_Rb*mu_Rb*Bx)* s_0/(1+s_0) print "Capture velocity of the MOT: %.3e m/s" % v_capt # ---------------------------------------------------------------------- # excited level population : def Pop_e(r, v, k): global delta_1 return 0.5*( s_0/(1 + s_0 + delta(r, v, k)**2 ) + s_1/(1 + s_1 + delta(r, v, k, delta_0=delta_1)**2 ) ) # ---------------------------------------------------------------------- # radiation pressure force F_mot = lambda r, v: - h_bar * Gamma_Rb * 2*pi/lambda_Rb *( sum( dstack([ dot(reshape(Pop_e(r, v, k),(-1,1)),reshape(k,(1,-1))) for k in lasers]) , axis=-1 )) # ---------------------------------------------------------------------- # dipolar force of the FORT F_fort = lambda r, v: ( reshape((u_g + (u_e-u_g) * sum(dstack([ Pop_e(r, v, k) for k in lasers ]), axis=-1 )), (-1,1))*NablaIntensity(r) ) # ---------------------------------------------------------------------- # dipolar force of the MOT beams F_dipMOT = lambda r, v: ( (u_e-u_g)*1/pi*NablaIntensity(r) * reshape( sum(dstack([ 1/(1+s_0+delta(r,v,k)**2)*(1-1/(1+delta(r,v,k)**2)) + 1/(1+s_1+delta(r,v,k,delta_0=delta_1)**2)*(1-1/(1+delta(r,v,k,delta_0=delta_1)**2)) for k in lasers]), axis=-1), (-1,1)) ) # FIXME: I think I found a mistake !! # !!!!!!!!!!!!!!!!! Why isn't delta_O in the prefix of this equation !!!!!!!! 
#This need to be checked with my notes F_dipMOT = lambda r, v: ( (u_e-u_g)*1/pi*NablaIntensity(r) * reshape( sum(dstack([ 1/(1+s_0+delta(r,v,k)**2)*(1-1/(1+delta(r,v,k)**2)) + 1/(1+s_1+delta(r,v,k,delta_0=delta_1)**2)*(1-1/(1+delta(r,v,k,delta_0=delta_1)**2)) for k in lasers]), axis=-1), (-1,1)) ) # ---------------------------------------------------------------------- # total force F_tot = lambda r, v: F_mot(r,v) + F_dipMOT(r,v) + F_fort(r,v) # Initial conditions to study MOT capture y0_capture = map(lambda z : array([0., -yrange, 0., 0., z, 0.]), arange(2,48,5)) # ---------------------------------------------------------------------- def plot_phase_space(traj=False, y0list=[ array([0., 0.0005, 0., 0., 0.2, 0.]), array([0., 0.0001, 0., 0., 0. , 0.]), array([0., 0.0007, 0., 0., 2. , 0.]), ], plot_traj=False): # ---------------------------------------------------------------------- # color plot of the intensity of the force positions=column_stack((zeros(Ygrid2.size), Ygrid2.ravel(), zeros(Ygrid2.size))) velocities=column_stack((zeros(Vgrid.size), Vgrid.ravel(), zeros(Vgrid.size))) RVforceMap = F_tot(positions, velocities) / m_Rb YVforceMat = reshape(RVforceMap[:,1],(Ygrid2.shape)) figure(3) clf() imshow(rot90(YVforceMat), cmap=light_jet, extent=extentyv, aspect="auto") null_format = FuncFormatter(lambda x,y: '') co = colorbar() co.set_label('Force intensity, {\small [arb. 
units]}.') # ---------------------------------------------------------------------- # plot of the resonance line for the 4 lasers detuningMap = delta(positions, velocities, ky) detuningMat = reshape(detuningMap,(Ygrid2.shape)) cset = contour(transpose(detuningMat), array([0]), colors=((1,1,1),), origin='lower', linewidths=1, extent=extentyv, aspect="auto") detuningMap = delta(positions, velocities, Ky) detuningMat = reshape(detuningMap,(Ygrid2.shape)) cset = contour(transpose(detuningMat), array([0]), colors=((1,1,1),), origin='lower', linewidths=1, extent=extentyv, aspect="auto") detuningMap = delta(positions, velocities, ky, delta_0=delta_1) detuningMat = reshape(detuningMap,(Ygrid2.shape)) cset = contour(transpose(detuningMat), array([0]), colors=((1,1,1),), origin='lower', linewidths=1, extent=extentyv, aspect="auto") detuningMap = delta(positions, velocities, Ky, delta_0=delta_1) detuningMat = reshape(detuningMap,(Ygrid2.shape)) cset = contour(transpose(detuningMat), array([0]), colors=((1,1,1),), origin='lower', linewidths=1, extent=extentyv, aspect="auto") # ---------------------------------------------------------------------- # vector plot of the dynamical flow positionssparse=column_stack((zeros(Ygrid2sparse.size), Ygrid2sparse.ravel(), zeros(Ygrid2sparse.size))) velocitiessparse=column_stack((zeros(Vgridsparse.size), Vgridsparse.ravel(), zeros(Vgridsparse.size))) RVforceMapsparse = F_tot(positionssparse, velocitiessparse)/m_Rb YVforceMatsparse = reshape(RVforceMapsparse[:,1], (Ygrid2sparse.shape)) forceScale=YVforceMatsparse.max() # Autoscaling fails, here. 
I have to do it myself quiver2(Ygrid2sparse,Vgridsparse+0.001, (Vgridsparse+0.001)/(2*vrange), YVforceMatsparse/(2*forceScale),width=0.0015, color=(0.3,0.3,0.3)) title('Amplitude of the MOT force in the y:v plane') xlabel('y{\small [m]}') ylabel('v{\small [m s$^{-1}$]}') # ---------------------------------------------------------------------- # integration of trajectories if traj: def deriv(y,t): """ Computes the derivative of y=hstack((r,v)) at time t """ return hstack(( y[3:], F_tot(reshape(y[:3],(1,-1)), reshape(y[3:],(1,-1)))[0,:]/m_Rb )) # Integration parameters start=0 end=0.009 numsteps=4000 time=linspace(start,end,numsteps) from scipy import integrate for y0 in y0list: y=integrate.odeint(deriv,y0.copy(),time) if y[:, 1].max() > yrange: # The trajectory leaves MOT region: pathcolor(y[:,1], y[:,4], time, linewidth=1, linestyle='dotted') else: pathcolor(y[:,1], y[:,4], time, linewidth=2) show() y0 = array([ 0.002, 0.0008, 0.004, 0., 7., -3.]) time=linspace(start,2*end,numsteps) if plot_traj: figure(4) y=integrate.odeint(deriv,y0,time) delta_mean = lambda x : -(delta(x,zeros_like(x),kx) + delta(x,zeros_like(x),Kx))/2 trajplot(y,time,delta_mean) show() else: show() # ---------------------------------------------------------------------- # photon scattering rate (# photons/sec): ph_rate = lambda r, v: Gamma_Rb * reshape(sum(dstack( [ Pop_e(r, v, k) for k in lasers ] ), axis=-1), (-1, 1)) #! 
Heating, cooling and temperature #!---------------------------------------------------------------------------- # ---------------------------------------------------------------------- # damping coefficient in the y direction: damping_coef_y = lambda r, v: -h_bar*Gamma_Rb*2*pi/(m_Rb*lambda_Rb)*( s_0*delta(r, v, ky)/(1 + s_0 + delta(r, v, ky)**2 )* pi/(lambda_Rb*Gamma_Rb) + s_1*delta(r, v, ky, delta_0=delta_1)/(1 + s_0 + delta(r, v, ky, delta_0=delta_1)**2 ) * pi/(lambda_Rb*Gamma_Rb) + s_0*delta(r, v, Ky)/(1 + s_0 + delta(r, v, Ky)**2 )* pi/(lambda_Rb*Gamma_Rb) + s_1*delta(r, v, Ky, delta_0=delta_1)/(1 + s_0 + delta(r, v, Ky, delta_0=delta_1)**2 ) * pi/(lambda_Rb*Gamma_Rb) ) # ---------------------------------------------------------------------- # inverse equilibrium (?) temperature (compute inverse to avoid overflows): inv_temperature = lambda r, v: ( k_J*2/(m_Rb*(h_bar*2*pi/(lambda_Rb*m_Rb))**2 ) * damping_coef_y(r, v)/ph_rate(r, v) ) if main and True: yrange2 = yrange # ---------------------------------------------------------------------- # color plot of the scattering rate, damping coefficient, and co. [Ygrid3,Vgrid3] = mgrid[-yrange2:(yrange2+dy/8.):dy/8., -vrange:(vrange+dv):dv] positions=column_stack((zeros(Ygrid3.size), Ygrid3.ravel(), zeros(Ygrid3.size))) velocities=column_stack((zeros(Vgrid3.size), Vgrid3.ravel(), zeros(Vgrid3.size))) figure(5) clf() subplot(3,1,1) ph_scat_map = ph_rate(positions, velocities) ph_scat_mat = reshape(ph_scat_map, (Ygrid3.shape)) imshow(rot90(ph_scat_map), cmap=cm.hot, extent=extentyv, aspect="auto") co=colorbar(fraction=0.05, aspect=7.) 
#co.set_label('Photon scattering rate {\small [ph s$^{-1}$]}') title('Photon scattering rate {\small [ph s$^{-1}$]}') #xlabel('y{\small [m]}') xticks(xticks()[0], len(xticks()[1])*['',]) # Hide the labels ylabel('v{\small [m s$^{-1}$]}') subplot(3,1,2) damp_coef_map = damping_coef_y(positions, velocities) damp_coef_mat = reshape(damp_coef_map, (Ygrid3.shape)) imshow(rot90(damp_coef_mat), cmap=cm.jet_r, extent=extentyv, aspect="auto") null_format = FuncFormatter(lambda x,y: '') co=colorbar(fraction=0.05, aspect=7.) #co.set_label('Damping coefficient, {\small [arb. units]}') cset = contour(rot90(damp_coef_mat), array([0]), colors=('black', ) , origin='lower', linewidths=1, extent=extentyv, aspect="auto") title('Damping coefficient') # Store the ticks for later use standard_xticks = xticks() standard_yticks = yticks() xticks(xticks()[0], len(xticks()[0])*['', ]) xlabel(r'\strut \hskip 9cm y{\small [m]}') ylabel('v{\small [m s$^{-1}$]}') # ---------------------------------------------------------------------- # color plot of the temperature resulting of balance between heating and # damping. # Calling the inv_temperature function does not work, I am not to sure why, # but it raises a MemoryError, so we do this by hand. inv_temperature_mat = ( k_J*2/(m_Rb*(h_bar*2*pi/(lambda_Rb*m_Rb))**2 ) * damp_coef_mat / ph_scat_mat ) temperature_mat = log(abs(1/inv_temperature_mat))/log(10) if any(inv_temperature_mat<0): # Define masked arrays for positive and negative temperatures. 
temperature_pos = ma.array(rot90(temperature_mat), mask=rot90(inv_temperature_mat>0)) temperature_neg = -ma.array(rot90(temperature_mat), mask=rot90(inv_temperature_mat<0)) last_ax=subplot(3,1,3) pcolor(temperature_neg[:-1,:], shading='flat', cmap=cm.Blues_r) # Store the image, to be able to give it a colorbar, later on pos_im = gci() pcolor(temperature_pos[:-1,:], shading='flat', cmap=cm.hot_r) neg_im = gci() xticks(xticks()[0][::2], map(str, standard_xticks[0])) yticks(yticks()[0], map(str, standard_yticks[0])) #xlabel('y{\small [m]}') ylabel('v{\small [m s$^{-1}$]}') title('Temperature limit') ### Double colorbar ### # A phony colorbar just to shape these axis like to others: co = colorbar(fraction=0.05, aspect=7.) gcf().axes.pop() axes(last_ax) # Create a phony colorbar just for the sake of retrieving the axis info: co_neg = colorbar(orientation='horizontal', pad= 0.2) co_x, co_y, co_w, co_h = co_neg.ax.get_position() gcf().axes.pop() co_neg_axes = axes( [co_x, co_y, co_w/2., min(co_h, 0.04) ] ) neg_format = FuncFormatter(lambda x, y: '%.1f' % ((-10**(-x))*1000)) co_neg = colorbar(pos_im, co_neg_axes, orientation='horizontal', format=neg_format) co_neg.set_label('Temperature {\small [mK]}') co_pos_axes = axes( [co_x + co_w/2., co_y, co_w/2., min(co_h, 0.04) ] ) pos_format = FuncFormatter(lambda x, y: '%.2f' % ((10**x)*1000)) co_pos = colorbar(neg_im, co_pos_axes, orientation='horizontal', format=pos_format) co_pos.set_label('Imaginary temperature {\small [mK]}') else: last_ax = subplot(3,1,3) neg_im = imshow(rot90(temperature_mat), extent=extentyv, aspect="auto", origin="lower", cmap=cm.Blues_r) # A phony colorbar just to shape these axis like to others: co = colorbar(fraction=0.05, aspect=7.) 
gcf().axes.pop() axes(last_ax) neg_format = FuncFormatter(lambda x, y: '%d' % ((10**(-x))*1000)) co_neg = colorbar(neg_im, orientation='horizontal', pad= 0.2, format=neg_format) co_neg.set_label('Temperature {\small [mK]}') show() if main and True: plot_phase_space() From skraelings001 at gmail.com Fri Aug 3 11:37:33 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Fri, 03 Aug 2007 10:37:33 -0500 Subject: [SciPy-user] Python evangelism question: procedural programming In-Reply-To: References: Message-ID: <46B34BBD.2040801@gmail.com> Ryan Krauss escribi?: > I have a colleague in our electrical engineering department who today > asked me "What's good about Python?". I responded with "What's bad > about Python?" > > I am used to responding to engineering colleagues in terms of Python > vs Matlab, but this particular person uses Mathematica almost > exclusively. So, he wanted to know if Python could be used like > Mathematica in terms of writing rule and procedure based programs. I > don't know how difficult this would be. > > I don't do this kind of work and think of Mathematica primarily as a > symbolic algebra language, so that Python could not easily replace it. > > Does anyone have experience of coming from Mathematica to Python and > whether or not I should try and convince this person to pursue Python? > it might be of interest http://www.sagemath.org/ Cheers, From fperez.net at gmail.com Fri Aug 3 12:40:57 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 3 Aug 2007 10:40:57 -0600 Subject: [SciPy-user] Python evangelism question: procedural programming In-Reply-To: <20070803153441.GD1680@clipper.ens.fr> References: <20070803153441.GD1680@clipper.ens.fr> Message-ID: On 8/3/07, Gael Varoquaux wrote: > On Fri, Aug 03, 2007 at 09:55:17AM -0500, Ryan Krauss wrote: > > So, he wanted to know if Python could be used like Mathematica in > > terms of writing rule and procedure based programs. I don't know how > > difficult this would be. 
>
> I think I understand what he means. Python can almost be used like this,
> but there is the UI missing. Ipython is currently working on a "notebook"
> UI that would address this problem. I have developed my own workflow
> using a home-made script ("pyreport"
> http://gael-varoquaux.info/computers/pyreport/ ) to approach this
> workflow. Ultimately I want to merge this with ipython's effort in order
> to get nice notebook functionality with html and pdf output, but
> currently I don't have time for this.

You better make time soon. Min has already written a plaintext dump format for the notebook, you'll be getting an email about that in a minute. We need you :)

But back to the OP, I think the issue Ryan's colleague has isn't addressed by a notebook interface, nor by SAGE (as great as SAGE is).

Mathematica's programming model/language can be very tricky to wrap your head around, but it allows you to do *phenomenal* things in a very concise way that would be extremely clunky in Python or any other language I can think of.

Mathematica is very lisp-ish in its model, and its syntax for building complex programs can be quirky, and its encapsulation model is rather poor. But for rule-based programming it's hard to beat: it exposes every object it has in a completely uniform way so that you can do abstract manipulations on them, and it has very rich transformation facilities. Doing things like "take an arbitrarily nested object, traverse it and replace every instance of '(x-y)^4' by a polynomial over z^2" are one-liners in Mathematica.

Honestly I'd say that if Ryan's colleague has a lot of code like that, Python is NOT the answer for him. Rather, he should learn to use Python because it *complements* Mathematica very well. Python is good, easy to use and convenient to work with precisely at many of the things that Mathematica is clunky for.
They obviously have a lot of overlap, and I personally use Python where they overlap simply because I'm more proficient in Python these days. I'm sure he could go for Mathematica in that region for the same reasons.

But there is definitely a domain where Mathematica is simply unbeatable, and that goes beyond the obvious triad of (notebook, symbolics, easy-to-control pervasive arbitrary precision).

These days my working toolbox is more or less just Python+Mathematica, for these very reasons (and Python obviously includes C/C++/Fortran as needed for low-level/speed work).

HTH.

Cheers,

f

From ryanlists at gmail.com  Fri Aug 3 14:57:04 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Fri, 3 Aug 2007 13:57:04 -0500
Subject: [SciPy-user] Python evangelism question: procedural programming
In-Reply-To: 
References: <20070803153441.GD1680@clipper.ens.fr>
Message-ID: 

Thanks for all the replies. I appreciate your thoughts.

Now I have a dumb question: what is the easiest way to get him a link to these responses, since he obviously doesn't subscribe to this list? Does it take a few days to get into Gmane? Can he see the whole thread there easily? Is there a better way?

On 8/3/07, Fernando Perez wrote:
> On 8/3/07, Gael Varoquaux wrote:
> > On Fri, Aug 03, 2007 at 09:55:17AM -0500, Ryan Krauss wrote:
> > > So, he wanted to know if Python could be used like Mathematica in
> > > terms of writing rule and procedure based programs. I don't know how
> > > difficult this would be.
> >
> > I think I understand what he means. Python can almost be used like this,
> > but there is the UI missing. Ipython is currently working on a "notebook"
> > UI that would address this problem. I have developped my own workflow
> > using a home made script ("pyreport"
> > http://gael-varoquaux.info/computers/pyreport/ ) to approach this
> > workflow.
Ultimately I want to merge this with ipython's effort in order > > to get nice notebook functionnality with html and pdf output, but > > xurrently I don't have time for this. > > You better make time soon. Min has already written a plaintext dump > format for the notebook, you'll be getting an email about that in a > minute. We need you :) > > But back to the OP, I think the issue Ryan's colleague has isn't > addressed by a notebook interface, nor by SAGE (as great as SAGE is). > > Mathematica's programming model/language can be very tricky to wrap > your head around, but it allows you to do *phenomenal* things in very > concise way, that would be extremely clunky in Python or any other > language I can think of. > > Mathematica is very lisp-ish in its model, and its syntax for building > complex programs can be quirky, and its encapsulation model is rather > poor. But for rule-based programming it's hard to beat, it exposes > every object it has in a completely uniform way so that you can do > abstract manipulations on them, and it has very rich transformation > facilities. Doing things like "take an arbitrarily nested object, > traverse it and replace every instance of '(x-y)^4' by a polynomial > over z^2" are one-liners in Mathematica. > > Honestly I'd say that if Ryan's colleague has a lot of code like that, > Python is NOT the answer for him. Rather, he should learn to use > Python because it *complements* Mathematica very well. Python is > good, easy to use and convenient to work with precisely at many of the > things that Mathematica is clunky for. They obviously have a lot of > overlap, and I personally use Python where they overlap simply because > I'm more proficient in Python these days. I'm sure he could go for > Mathematica in that region for the same reasons. > > But there is definitely a domain where Mathematica is simply > unbeatable, and that goes beyond the obvious triad of (notebook, > symbolics, easy-to-control pervasive arbitrary precision). 
>
> These days my working toolbox is more or less just Python+Mathematica,
> for these very reasons (and Python obviously includes C/C++/Fortran as
> needed for low-level/speed work).
>
> HTH.
>
> Cheers,
>
> f
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From skraelings001 at gmail.com  Fri Aug 3 15:04:39 2007
From: skraelings001 at gmail.com (Reynaldo Baquerizo)
Date: Fri, 03 Aug 2007 14:04:39 -0500
Subject: [SciPy-user] Python evangelism question: procedural programming
In-Reply-To: 
References: <20070803153441.GD1680@clipper.ens.fr>
Message-ID: <46B37C47.8090907@gmail.com>

Ryan Krauss escribió:
> Thanks for all the replies. I appreciate your thoughts.
>
> No I have a dumb question: what is the easiest way to get him a link
> to these responses, since he obviously doesn't subscribe to this list.
> Does it take a few days to get into Gmane? Can he see the whole
> thread there easily? Is there a better way?
>

Here they are:
http://groups.google.com/group/scipy-user/browse_thread/thread/bed22a18231c7709/69da385e1951c1fc#69da385e1951c1fc

From ryanlists at gmail.com  Fri Aug 3 15:05:06 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Fri, 3 Aug 2007 14:05:06 -0500
Subject: [SciPy-user] Python evangelism question: procedural programming
In-Reply-To: 
References: <20070803153441.GD1680@clipper.ens.fr>
Message-ID: 

I think I found my answer:

http://comments.gmane.org/gmane.comp.python.scientific.user/12812

Thanks again.

On 8/3/07, Ryan Krauss wrote:
> Thanks for all the replies. I appreciate your thoughts.
>
> No I have a dumb question: what is the easiest way to get him a link
> to these responses, since he obviously doesn't subscribe to this list.
> Does it take a few days to get into Gmane? Can he see the whole
> thread there easily? Is there a better way?
> > On 8/3/07, Fernando Perez wrote: > > On 8/3/07, Gael Varoquaux wrote: > > > On Fri, Aug 03, 2007 at 09:55:17AM -0500, Ryan Krauss wrote: > > > > So, he wanted to know if Python could be used like Mathematica in > > > > terms of writing rule and procedure based programs. I don't know how > > > > difficult this would be. > > > > > > I think I understand what he means. Python can almost be used like this, > > > but there is the UI missing. Ipython is currently working on a "notebook" > > > UI that would address this problem. I have developped my own workflow > > > using a home made script ("pyreport" > > > http://gael-varoquaux.info/computers/pyreport/ ) to approach this > > > workflow. Ultimately I want to merge this with ipython's effort in order > > > to get nice notebook functionnality with html and pdf output, but > > > xurrently I don't have time for this. > > > > You better make time soon. Min has already written a plaintext dump > > format for the notebook, you'll be getting an email about that in a > > minute. We need you :) > > > > But back to the OP, I think the issue Ryan's colleague has isn't > > addressed by a notebook interface, nor by SAGE (as great as SAGE is). > > > > Mathematica's programming model/language can be very tricky to wrap > > your head around, but it allows you to do *phenomenal* things in very > > concise way, that would be extremely clunky in Python or any other > > language I can think of. > > > > Mathematica is very lisp-ish in its model, and its syntax for building > > complex programs can be quirky, and its encapsulation model is rather > > poor. But for rule-based programming it's hard to beat, it exposes > > every object it has in a completely uniform way so that you can do > > abstract manipulations on them, and it has very rich transformation > > facilities. 
Doing things like "take an arbitrarily nested object, > > traverse it and replace every instance of '(x-y)^4' by a polynomial > > over z^2" are one-liners in Mathematica. > > > > Honestly I'd say that if Ryan's colleague has a lot of code like that, > > Python is NOT the answer for him. Rather, he should learn to use > > Python because it *complements* Mathematica very well. Python is > > good, easy to use and convenient to work with precisely at many of the > > things that Mathematica is clunky for. They obviously have a lot of > > overlap, and I personally use Python where they overlap simply because > > I'm more proficient in Python these days. I'm sure he could go for > > Mathematica in that region for the same reasons. > > > > But there is definitely a domain where Mathematica is simply > > unbeatable, and that goes beyond the obvious triad of (notebook, > > symbolics, easy-to-control pervasive arbitrary precision). > > > > These days my working toolbox is more or less just Python+Mathematica, > > for these very reasons (and Python obviously includes C/C++/Fortran as > > needed for low-level/speed work). > > > > HTH. > > > > Cheers, > > > > f > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From ipan at freeshell.org Sat Aug 4 00:05:39 2007 From: ipan at freeshell.org (Ivan Pan) Date: Fri, 3 Aug 2007 23:05:39 -0500 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> Message-ID: On Aug 2, 2007, at 12:51 PM, Fernando Perez wrote: > Honestly I'm not sure, it's a bit tight. Do we put instead two for > Thursday and leave the other two for Wednesday night, at the risk of > losing some people? I also worry that just one hour per bof is really > not enough, since they tend to generate a lot of animated discussion. 
>
> For both reasons I'm leaning towards bofs on Wednesday, at the risk of
> losing a few people.

another vote for Wednesday night BOF.

ip

From fredmfp at gmail.com  Sat Aug 4 06:13:16 2007
From: fredmfp at gmail.com (fred)
Date: Sat, 04 Aug 2007 12:13:16 +0200
Subject: [SciPy-user] choose at random all elements in a array...
Message-ID: <46B4513C.5080108@gmail.com>

Hi,

Say I have an array with 250 000 values.

I want to choose at random _all_ the elements of this array,
_once and only once_.

How can I do this ?

TIA.

Cheers,

--
http://scipy.org/FredericPetit

From david at ar.media.kyoto-u.ac.jp  Sat Aug 4 06:05:17 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 04 Aug 2007 19:05:17 +0900
Subject: [SciPy-user] choose at random all elements in a array...
In-Reply-To: <46B4513C.5080108@gmail.com>
References: <46B4513C.5080108@gmail.com>
Message-ID: <46B44F5D.5010702@ar.media.kyoto-u.ac.jp>

fred wrote:
> Hi,
>
> Say I have an array with 250 000 values.
>
> I want to choose at random _all_ the elements of this array,
> _once and only once_.
>
> How can I do this ?
>
> TIA.
>
> Cheers,
>
You can simply use a random permutation:

import numpy
a = numpy.array([1, 2, 3, 4, 5, 6])
numpy.random.permutation(a)

David

From emanuelez at gmail.com  Sat Aug 4 06:20:53 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Sat, 4 Aug 2007 12:20:53 +0200
Subject: [SciPy-user] choose at random all elements in a array...
In-Reply-To: <46B4513C.5080108@gmail.com>
References: <46B4513C.5080108@gmail.com>
Message-ID: 

mmm... what about something like:

random.shuffle(A.ravel())

On 8/4/07, fred wrote:
> Hi,
>
> Say I have an array with 250 000 values.
>
> I want to choose at random _all_ the elements of this array,
> _once and only once_.
>
> How can I do this ?
>
> TIA.
>
> Cheers,
>
> --
> http://scipy.org/FredericPetit
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Emanuele Zattin
---------------------------------------------------
-I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose - which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman

From fredmfp at gmail.com  Sat Aug 4 08:02:13 2007
From: fredmfp at gmail.com (fred)
Date: Sat, 04 Aug 2007 14:02:13 +0200
Subject: [SciPy-user] choose at random all elements in a array...
In-Reply-To: 
References: <46B4513C.5080108@gmail.com>
Message-ID: <46B46AC5.7070503@gmail.com>

Emanuele Zattin a écrit :
> mmm... what about something like:
>
> random.shuffle(A.ravel())
>
Whaou, I never thought it could be so easy ;-))

Thanks a lot to both of you.

Cheers,

--
http://scipy.org/FredericPetit

From fredmfp at gmail.com  Sat Aug 4 08:13:22 2007
From: fredmfp at gmail.com (fred)
Date: Sat, 04 Aug 2007 14:13:22 +0200
Subject: [SciPy-user] choose at random all elements in a array...
In-Reply-To: 
References: <46B4513C.5080108@gmail.com>
Message-ID: <46B46D62.9030203@gmail.com>

Emanuele Zattin a écrit :
> mmm... what about something like:
>
> random.shuffle(A.ravel())
>
And now, I want 200 000 random values between 0 & 249999, once and only once ?

TIA.

--
http://scipy.org/FredericPetit

From fredmfp at gmail.com  Sat Aug 4 08:14:54 2007
From: fredmfp at gmail.com (fred)
Date: Sat, 04 Aug 2007 14:14:54 +0200
Subject: [SciPy-user] choose at random all elements in a array...
In-Reply-To: <46B46D62.9030203@gmail.com>
References: <46B4513C.5080108@gmail.com> <46B46D62.9030203@gmail.com>
Message-ID: <46B46DBE.6020106@gmail.com>

fred a écrit :
> And now, I want 200 000 random values
> between 0 & 249999, once and only once ?
Sorry, answer is obvious.
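That obvious answer can be sketched with the permutation trick suggested earlier in the thread (the 250 000/200 000 sizes are the ones from the question):

```python
import numpy

# Permute all candidate values 0..249999, then keep the first 200 000:
# a permutation contains each value exactly once, so no repeats are possible.
idx = numpy.random.permutation(250000)[:200000]
```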
Cheers,

--
http://scipy.org/FredericPetit

From emanuelez at gmail.com  Mon Aug 6 06:04:22 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Mon, 6 Aug 2007 12:04:22 +0200
Subject: [SciPy-user] more for loops removal
Message-ID: 

this time it is something like this:

paths = []
for i in range(d12.shape[1]):
    for j in range(d23.shape[1]):
        for k in range(d34.shape[1]):
            if d12[0,i] == d23[1,j] and d23[0,j] == d34[1,k]:
                paths.append([d12[1,i], d12[0,i], d23[0,j], d34[0,k]])

I will explain... d12, d23, d34 contain indices that say that a path from index1 in 1 to index2 in 2 is plausible. What I want to do now is to find which are the complete paths from 1 to 4. Is it possible to once again use broadcasting to speed things up?

--
Emanuele Zattin
---------------------------------------------------
-I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose - which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman

From matthieu.brucher at gmail.com  Mon Aug 6 16:02:05 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 6 Aug 2007 22:02:05 +0200
Subject: [SciPy-user] signal.bspline()
Message-ID: 

Hi,

I dug into some functions' documentation, but I can't understand what the function bspline does. Does someone have a clue ?

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za  Mon Aug 6 17:33:03 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Mon, 6 Aug 2007 23:33:03 +0200
Subject: [SciPy-user] signal.bspline()
In-Reply-To: 
References: 
Message-ID: <20070806213303.GU13429@mentat.za.net>

On Mon, Aug 06, 2007 at 10:02:05PM +0200, Matthieu Brucher wrote:
> I digged into some functions documentation, but I can't understand what the
> function bspline does.
> Someone has a clue ?

Looks like it generates points on B-spline basis functions.
For example, try:

import pylab as P
import numpy as N
import scipy as S
import scipy.signal

points = N.linspace(-1,1,101)
for k in range(3):
    P.plot(points, S.signal.bspline(points, k))
    P.title('Order %s' % k)
P.show()

Also see http://mathworld.wolfram.com/B-Spline.html and
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/spline/B-spline/bspline-basis.html

Regards
Stéfan

From lfriedri at imtek.de  Tue Aug 7 05:10:45 2007
From: lfriedri at imtek.de (Lars Friedrich)
Date: Tue, 07 Aug 2007 11:10:45 +0200
Subject: [SciPy-user] fftw3 wrappers
Message-ID: <46B83715.60606@imtek.de>

Hello,

in the numpy mailing list, I was told that

> - if you care about speed (that is, faster than numpy), then use
> scipy.fftpack with fftw3: there are wrappers in scipy for it.

I am using version '0.5.3.dev3173' of scipy. Where do I find the fftw3-wrappers? When I just use scipy.fftpack.fft2, I do not see any speedup compared to numpy.fft.fft2.

I googled a bit and found this page:

http://pylab.sourceforge.net/

that has code for fftw2.1.3, but the page says that it is old.

Do I need to compile scipy with some fftw3-switch turned on?

Thanks

Lars

--
Dipl.-Ing. Lars Friedrich
Photonic Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax:   +49-761-203-7537
room:  01 088
email: lfriedri at imtek.de

From david at ar.media.kyoto-u.ac.jp  Tue Aug 7 05:32:27 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 07 Aug 2007 18:32:27 +0900
Subject: [SciPy-user] fftw3 wrappers
In-Reply-To: <46B83715.60606@imtek.de>
References: <46B83715.60606@imtek.de>
Message-ID: <46B83C2B.8000503@ar.media.kyoto-u.ac.jp>

Lars Friedrich wrote:
> Hello,
>
> in the numpy mailing list, I was told that
>
>> - if you care about speed (that is, faster than numpy), then use
>> scipy.fftpack with fftw3: there are wrappers in scipy for it.
>
> I am using version '0.5.3.dev3173' of scipy.
> Where do I find the
> fftw3-wrappers? When I just use scipy.fftpack.fft2, I do not see any
> speedup compared to numpy.fft.fft2.
Mmh, sorry, I missed that you were interested in multi-dimensional fft, this may explain the result. For 1d, the fftw3 wrappers are almost always faster than numpy, but not for multi-dimensional fft.
> I googled a bit and found this page:
>
> http://pylab.sourceforge.net/
>
> that has code for fftw2.1.3, but the page says that it is old.
There are wrappers for both fftw2 and 3 in scipy now.
>
> Do I need to compile scipy with some fftw3-switch turned on?
What does scipy.show_config() tell you?

The problem is that, at least in my experience, scipy's fftw3 wrappers may or may not be faster than numpy for multi-dimensional fft, depending on your architecture (eg Pentium 4 vs Pentium m vs Core Duo). This will change in the future, because the problem is on scipy's side, not on fftw3 (fftw by itself is certainly faster than what you get using numpy).

If I were you, this is what I would do:
- first, check whether you are really using fftw3 (using show_config)
- if you are, then maybe you can try with fftw2 instead of fftw3: install fftw2 on your computer, and then rebuild scipy with FFTW3=None, eg: FFTW3=None python setup.py build (you can check whether fftw2 is picked instead of fftw3 by first running FFTW3 python setup.py config).
- if this is ok for you, you may try the Intel MKL instead. I have never used it, but heard it is pretty efficient.

fftw2 has a pretty good chance of working better than fftw3 now, because it is more efficiently used by scipy. Hopefully, this will change soon (there is a bit of work needed to improve the wrappers efficiency-wise: I am working on it).

cheers,

David

From ryanlists at gmail.com  Tue Aug 7 09:32:41 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 7 Aug 2007 08:32:41 -0500
Subject: [SciPy-user] New scipy release before 8/20?
Message-ID: 

I would like to encourage my students to use Python in my class this Fall. The first day of class is 8/20. I have had mediocre luck building my own windows installers and I would prefer that the start up warnings about scipy.test now being called numpy.test not be displayed.

I know that I am basically a consumer here asking for other people's time, but what are the odds of a new scipy release before 8/20? Or at least a windows installer that fixes the scipy.test warning messages? I assume that the svn version of scipy has these messages fixed. I could build my own windows installer from svn, but there are always test failures when I do it. I don't know why :)

Thanks,

Ryan

From c.j.lee at tnw.utwente.nl  Tue Aug 7 11:05:47 2007
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Tue, 07 Aug 2007 17:05:47 +0200
Subject: [SciPy-user] New scipy release before 8/20?
In-Reply-To: 
References: 
Message-ID: <46B88A4B.2010505@tnw.utwente.nl>

An alternative is to edit the appropriate __init__.py file; it is two small changes. In C:\python25\Lib\site-packages\scipy\misc, open the __init__.py file and near the bottom change

from numpy.testing import ScipyTest
test = ScipyTest().test

to

from numpy.testing import NumpyTest
test = NumpyTest().test

then copy that file across all the machines in the lab and provide it for students to copy onto their own machines.

Cheers
Chris

Ryan Krauss wrote:
> I would like to encourage my students to use Python in my class this
> Fall. The first day of class is 8/20. I have had mediocre luck
> building my own windows installers and I would prefer that the start
> up warnings about scipy.test now being called numpy.test not be
> displayed.
>
> I know that I am basically a consumer here asking for other people's
> time, but what are the odds of a new scipy release before 8/20? Or at
> least a windows installer that fixes the scipy.test warning messages?
> I assume that the svn version of scipy has these messages fixed.
> I could build my own windows installer from svn, but there are always
> test failures when I do it. I don't know why :)
>
> Thanks,
>
> Ryan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
**********************************************
* Chris Lee                                  *
* Laser physics and nonlinear optics group   *
* MESA+ Institute                            *
* University of Twente                       *
* Phone: ++31 (0)53 489 3968                 *
* fax: ++31 (0) 53 489 1102                  *
**********************************************

From ryanlists at gmail.com  Tue Aug 7 11:44:14 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 7 Aug 2007 10:44:14 -0500
Subject: [SciPy-user] New scipy release before 8/20?
In-Reply-To: <46B88A4B.2010505@tnw.utwente.nl>
References: <46B88A4B.2010505@tnw.utwente.nl>
Message-ID: 

My students are not very computer savvy and they perceive installation to be a major hurdle to Scipy usage. So, asking them to find and replace a file may be enough to drive them to Matlab.

So, I have no problem with that fix, I just need to get it packaged in the installer before I give students CD's.

Ryan

On 8/7/07, Chris Lee wrote:
> An alternative is to edit the appropriate __init__.py file, it is two
> small changes
>
> so in C:\python25\Lib\site-packages\scipy\misc open then __init__.py
> file and near the bottom change
> from numpy.testing import ScipyTest
> test = ScipyTest().test
>
> to
>
> from numpy.testing import NumpyTest
> test = NumpyTest().test
>
> then copy that file across all the machines in the lab and provide it
> for students to copy onto their own machines.
>
> Cheers
> Chris
>
> Ryan Krauss wrote:
> > I would like to encourage my students to use Python in my class this
> > Fall. The first day of class is 8/20. I have had mediocre luck
> > building my own windows installers and I would prefer that the start
> > up warnings about scipy.test now being called numpy.test not be
> > displayed.
> > I know that I am basically a consumer here asking for other people's
> > time, but what are the odds of a new scipy release before 8/20? Or at
> > least a windows installer that fixes the scipy.test warning messages?
> > I assume that the svn version of scipy has these messages fixed. I
> > could build my own windows installer from svn, but there are always
> > test failures when I do it. I don't know why :)
> >
> > Thanks,
> >
> > Ryan
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
>
> --
> **********************************************
> * Chris Lee                                  *
> * Laser physics and nonlinear optics group   *
> * MESA+ Institute                            *
> * University of Twente                       *
> * Phone: ++31 (0)53 489 3968                 *
> * fax: ++31 (0) 53 489 1102                  *
> **********************************************
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From fdu.xiaojf at gmail.com  Tue Aug 7 11:48:16 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Tue, 07 Aug 2007 23:48:16 +0800
Subject: [SciPy-user] New scipy release before 8/20?
In-Reply-To: 
References: <46B88A4B.2010505@tnw.utwente.nl>
Message-ID: <46B89440.2010804@gmail.com>

Ryan Krauss wrote:
> My students are not very computer savvy and they perceive installation
> to be a major hurdle to Scipy usage. So, asking them to find a
> replace a file may be enough to drive them to Matlab.
>
> So, I have no problem with that fix, I just need to get it packaged in
> the installer before I give students CD's.
>
> Ryan
>
Maybe you can just write a script in python to fix the problem for every student after they have installed python and scipy.
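Such a script could look like this. This is a sketch only: the `patch_init` helper is my own illustration, and the blanket ScipyTest-to-NumpyTest substitution is an assumption based on Chris Lee's two-line fix above.

```python
# Sketch of a fix-up script for students, following Chris Lee's suggestion
# above: rewrite scipy/misc/__init__.py so that it uses NumpyTest instead
# of the removed ScipyTest.  The helper name and the string substitution
# are illustrative assumptions, not an official scipy tool.
import os


def patch_init(path):
    """Replace ScipyTest with NumpyTest in the given __init__.py (idempotent)."""
    f = open(path)
    src = f.read()
    f.close()
    patched = src.replace('ScipyTest', 'NumpyTest')
    if patched != src:
        f = open(path, 'w')
        f.write(patched)
        f.close()
        return True   # file was modified
    return False      # nothing to change


# In a real run you would point it at the installed copy, e.g.:
#   import scipy.misc
#   patch_init(os.path.join(os.path.dirname(scipy.misc.__file__), '__init__.py'))
```

Running it a second time is harmless, since an already-patched file contains no ScipyTest references and is left untouched.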
From fperez.net at gmail.com Tue Aug 7 12:48:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 7 Aug 2007 10:48:34 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> Message-ID: On 8/3/07, Ivan Pan wrote: > > On Aug 2, 2007, at 12:51 PM, Fernando Perez wrote: > > > Honestly I'm not sure, it's a bit tight. Do we put instead two for > > Thursday and leave the other two for Wednesday night, at the risk of > > losing some people? I also worry that just one hour per bof is really > > not enough, since they tend to generate a lot of animated discussion. > > > > For both reasons I'm leaning towards bofs on Wednesday, at the risk of > > losing a few people. > > another vote for Wednesday night BOF. OK, I've updated the page here: http://new.scipy.org/SciPy2007/BoFs with the testing one for Wednesday. Cheers, f From fperez.net at gmail.com Tue Aug 7 12:51:09 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 7 Aug 2007 10:51:09 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> Message-ID: Hi Titus, I'm not sure if you're on the scipy mailing list, so I'm writing directly to you. I've penciled in a testing BOF session for Wednesday night after the tutorials, and was wondering if you'd be able to participate. We all know that you are very involved with testing in python, and many of us are eager for a good discussion on the topic, where we think you'd have much to contribute. The end of the thread is pasted below for reference, the rest of the conversation (just scheduling details, really) took place on the scipy user list in case you are curious. This message is still cc-d to the list. 
Cheers,

f

On 8/7/07, Fernando Perez wrote:
> On 8/3/07, Ivan Pan wrote:
> >
> > On Aug 2, 2007, at 12:51 PM, Fernando Perez wrote:
> >
> > > Honestly I'm not sure, it's a bit tight. Do we put instead two for
> > > Thursday and leave the other two for Wednesday night, at the risk of
> > > losing some people? I also worry that just one hour per bof is really
> > > not enough, since they tend to generate a lot of animated discussion.
> > >
> > > For both reasons I'm leaning towards bofs on Wednesday, at the risk of
> > > losing a few people.
> >
> > another vote for Wednesday night BOF.
>
> OK, I've updated the page here:
>
> http://new.scipy.org/SciPy2007/BoFs
>
> with the testing one for Wednesday.

From matthieu.brucher at gmail.com  Tue Aug 7 15:06:28 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 7 Aug 2007 21:06:28 +0200
Subject: [SciPy-user] Error in signal.firdesign ?
Message-ID: 

Hi,

I'm using the latest scipy release, and I encountered this error :

>>> signal.firwin(50, 1/4.)
Traceback (most recent call last):
  File "", line 1, in
    signal.firwin(50, 1/4.)
  File "C:\Python25\lib\site-packages\scipy\signal\filter_design.py", line 1542, in firwin
    return h / sum(h,axis=0)
TypeError: sum() takes no keyword arguments

Will it be corrected in a future release ?

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Tue Aug 7 15:22:25 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 07 Aug 2007 14:22:25 -0500
Subject: [SciPy-user] Error in signal.firdesign ?
In-Reply-To: 
References: 
Message-ID: <46B8C671.5080107@gmail.com>

Matthieu Brucher wrote:
> Hi,
>
> I'm using the last scipy release, and I encountered this error :
>>>> signal.firwin(50, 1/4.)
>
> Traceback (most recent call last):
>   File "", line 1, in
>     signal.firwin(50, 1/4.)
> File "C:\Python25\lib\site-packages\scipy\signal\filter_design.py", > line 1542, in firwin > return h / sum(h,axis=0) > TypeError: sum() takes no keyword arguments > > Will it be corrected in a future realse ? Already has been in r2542. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthieu.brucher at gmail.com Tue Aug 7 15:31:15 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 7 Aug 2007 21:31:15 +0200 Subject: [SciPy-user] Error in signal.firdesign ? In-Reply-To: <46B8C671.5080107@gmail.com> References: <46B8C671.5080107@gmail.com> Message-ID: 2007/8/7, Robert Kern : > > Matthieu Brucher wrote: > > Hi, > > > > I'm using the last scipy release, and I encountered this error : > >>>> signal.firwin(50, 1/4.) > > > > Traceback (most recent call last): > > File "", line 1, in > > signal.firwin(50, 1/4.) > > File "C:\Python25\lib\site-packages\scipy\signal\filter_design.py", > > line 1542, in firwin > > return h / sum(h,axis=0) > > TypeError: sum() takes no keyword arguments > > > > Will it be corrected in a future realse ? > > Already has been in r2542. Sorry :( Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Aug 7 15:36:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 07 Aug 2007 14:36:19 -0500 Subject: [SciPy-user] Error in signal.firdesign ? In-Reply-To: References: <46B8C671.5080107@gmail.com> Message-ID: <46B8C9B3.6010502@gmail.com> Matthieu Brucher wrote: > > 2007/8/7, Robert Kern >: > > Matthieu Brucher wrote: > > Hi, > > > > I'm using the last scipy release, and I encountered this error : > >>>> signal.firwin(50, 1/4.) > > > > Traceback (most recent call last): > > File "", line 1, in > > signal.firwin(50, 1/4.) 
> > File "C:\Python25\lib\site-packages\scipy\signal\filter_design.py", > > line 1542, in firwin > > return h / sum(h,axis=0) > > TypeError: sum() takes no keyword arguments > > > > Will it be corrected in a future release? > > Already has been in r2542. > > Sorry :( No worries. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthieu.brucher at gmail.com Tue Aug 7 16:41:01 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 7 Aug 2007 22:41:01 +0200 Subject: [SciPy-user] Error in signal.firdesign ? In-Reply-To: <46B8C9B3.6010502@gmail.com> References: <46B8C671.5080107@gmail.com> <46B8C9B3.6010502@gmail.com> Message-ID: > > No worries. > Is there a problem as well with signal.order_filter? It seems to search for sigtools._orderfilterND, which does not seem to exist. Matthieu From stefan at sun.ac.za Tue Aug 7 19:08:01 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 8 Aug 2007 01:08:01 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: References: Message-ID: <20070807230801.GF13502@mentat.za.net> On Tue, Aug 07, 2007 at 08:32:41AM -0500, Ryan Krauss wrote: > I would like to encourage my students to use Python in my class this > Fall. The first day of class is 8/20. I have had mediocre luck > building my own windows installers and I would prefer that the start > up warnings about scipy.test now being called numpy.test not be > displayed. The friendly guys at Enthought go to the trouble already, so why not let your students use http://code.enthought.com/enstaller Enthought currently allows you to mirror their eggs, even, so you can download them on behalf of your students and create a local repository.
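[Editorial aside on the `firwin` traceback quoted above: the `TypeError` arises because the `sum` in scope is Python's builtin, which takes no `axis` keyword, rather than `numpy.sum`, which does. The r2542 fix amounts to normalizing the taps with a sum that needs no such keyword. A minimal stdlib-only sketch of the failure mode and the workaround; the tap values are made up for illustration:

```python
# Hypothetical filter taps, standing in for firwin's window-weighted sinc.
h = [0.1, 0.4, 0.4, 0.1]

# What the scipy 0.5.2 code path effectively did: call the *builtin* sum
# with a numpy-style keyword argument. The builtin rejects it.
try:
    sum(h, axis=0)  # builtin sum has no 'axis' keyword -> TypeError
    failed = False
except TypeError:
    failed = True

# Workaround: normalize with a sum that needs no 'axis' argument at all.
# (numpy.sum(h, axis=0) would also work, since numpy's sum accepts axis.)
normalized = [tap / sum(h) for tap in h]
```

After normalization the taps sum to 1.0, which is the DC-gain property the original `return h / sum(h, axis=0)` line was trying to enforce.]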
See, for example, the instruction at the bottom of http://dip.sun.ac.za/courses/ComputerVision/ Regards Stéfan From stefan at sun.ac.za Tue Aug 7 19:53:36 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 8 Aug 2007 01:53:36 +0200 Subject: [SciPy-user] Error in signal.firdesign ? In-Reply-To: References: <46B8C671.5080107@gmail.com> <46B8C9B3.6010502@gmail.com> Message-ID: <20070807235335.GG13502@mentat.za.net> On Tue, Aug 07, 2007 at 10:41:01PM +0200, Matthieu Brucher wrote: > No worries. > > > Is there a problem as well with signal.order_filter? It seems to search for > sigtools._orderfilterND, which does not seem to exist. Fixed in r3222. Cheers Stéfan From matthieu.brucher at gmail.com Wed Aug 8 01:58:52 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 8 Aug 2007 07:58:52 +0200 Subject: [SciPy-user] Error in signal.firdesign ? In-Reply-To: <20070807235335.GG13502@mentat.za.net> References: <46B8C671.5080107@gmail.com> <46B8C9B3.6010502@gmail.com> <20070807235335.GG13502@mentat.za.net> Message-ID: 2007/8/8, Stefan van der Walt : > > On Tue, Aug 07, 2007 at 10:41:01PM +0200, Matthieu Brucher wrote: > > No worries. > > > > > > Is there a problem as well with signal.order_filter? It seems to search > for > > sigtools._orderfilterND, which does not seem to exist. > > Fixed in r3222. > > Cheers > Stéfan Thank you for this! Matthieu From matthieu.brucher at gmail.com Wed Aug 8 02:01:28 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 8 Aug 2007 08:01:28 +0200 Subject: [SciPy-user] New scipy release before 8/20?
In-Reply-To: <20070807230801.GF13502@mentat.za.net> References: <20070807230801.GF13502@mentat.za.net> Message-ID: > > The friendly guys at Enthought go to the trouble already, so why not > let your students use > > http://code.enthought.com/enstaller > > Enthough currently allows you to mirror their eggs, even, so you can > download them on behalf of your students and create a local > repository. > > See, for example, the instruction at the bottom of > > http://dip.sun.ac.za/courses/ComputerVision/ It seems that these eggs are too old for him and I cannot blame him for this. I think there are some bugs in weave that must be fixed before a release, but I really don't know anything about it to help solve them :( Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Aug 8 02:30:09 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 08 Aug 2007 15:30:09 +0900 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: References: <20070807230801.GF13502@mentat.za.net> Message-ID: <46B962F1.9070905@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > > The friendly guys at Enthought go to the trouble already, so why not > let your students use > > http://code.enthought.com/enstaller > > Enthough currently allows you to mirror their eggs, even, so you can > download them on behalf of your students and create a local > repository. > > See, for example, the instruction at the bottom of > > http://dip.sun.ac.za/courses/ComputerVision/ > > > It seems that these eggs are too old for him and I cannot blame him > for this. > I think there are some bugs in weave that must be fixed before a > release, but I really don't know anything about it to help solve them :( > IMHO, the most problematic thing is numpy. I think that a numpy release should be done as soon as possible, since the current one does not work with scipy. 
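[The numpy/scipy version coupling David describes, where scipy 0.5.2 fails against numpy 1.0.3, is the kind of thing a build script can check up front. A hypothetical sketch; the helper names and the version values used below are invented for illustration, and in practice one would compare `numpy.__version__` against whatever minimum the scipy release actually requires:

```python
def version_tuple(v):
    """Turn a dotted version string such as '1.0.3' or '1.0.4.dev3954'
    into a comparable tuple of leading integers, ignoring dev suffixes."""
    parts = []
    for chunk in v.split("."):
        digits = ""
        for ch in chunk:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break  # stop at the first non-numeric chunk, e.g. 'dev3954'
        parts.append(int(digits))
    return tuple(parts)


def numpy_is_new_enough(installed, required):
    """True if the installed dotted version meets the required minimum."""
    return version_tuple(installed) >= version_tuple(required)
```

Tuple comparison gives the usual lexicographic version ordering, so `'1.0.4.dev3954'` correctly ranks above `'1.0.3'`.]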
For example, if I take a look at the 1.0.4 milestone, I see no big bugs except a few 64-bit issues which should not be too difficult to handle. Now that my midterm PhD evaluation is done, I can spend a bit more time fixing those bugs :) I think I also found a way to compile an ATLAS without SSE on my machine, too, which means that releasing binaries for numpy should be doable soon. For scipy, there are not many big bugs anymore, I think; most serious problems (unbuildable on Mac OS X, memory corruption, bugs in basic functionality) have been handled. *I* think that the current trunk can be tagged as 0.5.3 as soon as numpy is released, basically, but maybe some other developers are working on other things which I didn't see in the svn logs. cheers, David From lfriedri at imtek.de Wed Aug 8 02:41:31 2007 From: lfriedri at imtek.de (Lars Friedrich) Date: Wed, 08 Aug 2007 08:41:31 +0200 Subject: [SciPy-user] fftw3 wrappers Message-ID: <46B9659B.4030404@imtek.de> Hi David, thank you for your answer. David Cournapeau wrote: > Mmh, sorry, I missed that you were interested in multi-dimensional fft, > this may explain the result. For 1d, fftw3 wrappers are almost always > faster than numpy, but not for multi-dimensional Maybe I just did not tell you before... I thought this would not make any difference. > What does scipy.show_config() tell you ? ... fftw2_info: NOT AVAILABLE fftw3_info: NOT AVAILABLE ... > in my experience, scipy's fftw3 wrappers may or may not be faster than > numpy for multi-dimensional fft, depending on your architecture (eg > Pentium 4 vs Pentium M vs Core Duo). This will change in the future, > because the problem is on scipy's side, not on fftw3 (fftw by itself is > certainly faster than what you get using numpy).
> > If I were you, this is what I would do: > - first, check whether you are really using fftw3 (using show_config) > - if you are, then maybe you can try with fftw2 instead of fftw3: > install fftw2 on your computer, and then rebuild scipy with FFTW3=None, > eg: FFTW3=None python setup.py build (you can check whether fftw2 is > picked instead of fftw3 by running first FFTW3 python setup.py config). > - if this is ok for you, you may try the Intel MKL instead. I have > never used it, but heard it is pretty efficient. > > fftw2 has a pretty good chance of working better than fftw3 now because > it is more efficiently used by scipy. Hopefully, this will change soon > (there is a bit some work needed to improve the wrappers efficiency > wise: I am working on it). I tend to wait for full fftw3-support... maybe I can help you? I started with checking out http://svn.scipy.org/svn/scipy/trunk, is this the right step? Lars From david at ar.media.kyoto-u.ac.jp Wed Aug 8 02:42:35 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 08 Aug 2007 15:42:35 +0900 Subject: [SciPy-user] fftw3 wrappers In-Reply-To: <46B9659B.4030404@imtek.de> References: <46B9659B.4030404@imtek.de> Message-ID: <46B965DB.8090508@ar.media.kyoto-u.ac.jp> Lars Friedrich wrote: > Hi David, > > thank you for your answer. > > David Cournapeau wrote: >> Mmh, sorry, I missed that you were interested in multi dimensional fft, >> this may explain the result. For 1d, fftw3 wrappers are almost always >> faster than numpy, but not for multi dimensional > > Maybe I just did not tell before... I thought this would not make any > difference. 
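[For readers following the 1-D versus multi-dimensional distinction in this thread: the transforms themselves are available from numpy regardless of which backend scipy was built against, so the behaviour is easy to poke at directly. A small roundtrip sketch; the array sizes are arbitrary:

```python
import numpy as np

rng = np.random.RandomState(42)  # legacy seeding API, widely available
x = rng.rand(64)        # 1-D signal
img = rng.rand(16, 16)  # 2-D data, the multi-dimensional case under discussion

# 1-D: forward then inverse transform should recover the input.
x_back = np.fft.ifft(np.fft.fft(x)).real

# Multi-dimensional: fftn/ifftn operate over all axes at once.
img_back = np.fft.ifftn(np.fft.fftn(img)).real

ok_1d = np.allclose(x, x_back)
ok_nd = np.allclose(img, img_back)
```

Which backend (fftw2, fftw3, MKL, or the bundled fftpack) actually executes a scipy transform is what `scipy.show_config()` reports, as discussed above; the numerical result should agree across backends to within floating-point tolerance.]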
It shouldn't, but the current implementation is suboptimal for multi dimensional fft (please note that the original implementation of fft in scipy makes it possible to use not less than 5 different fft backend, which is already quite an achievement; I don't want to sound like I am bashing anyone work, just that you can expect better performances in a near future). > >> What does scipy.show_config() tells you ? > > ... > fftw2_info: > NOT AVAILABLE > > fftw3_info: > NOT AVAILABLE Well, that means that fftw3 was not found, or not used when it was packaged :) Which platform are you on ? Did you install fftw3 (as well as development package on linux) ? From nwagner at iam.uni-stuttgart.de Wed Aug 8 02:53:24 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Aug 2007 08:53:24 +0200 Subject: [SciPy-user] fftw3 wrappers In-Reply-To: <46B965DB.8090508@ar.media.kyoto-u.ac.jp> References: <46B9659B.4030404@imtek.de> <46B965DB.8090508@ar.media.kyoto-u.ac.jp> Message-ID: <46B96864.40302@iam.uni-stuttgart.de> David Cournapeau wrote: > Lars Friedrich wrote: > >> Hi David, >> >> thank you for your answer. >> >> David Cournapeau wrote: >> >>> Mmh, sorry, I missed that you were interested in multi dimensional fft, >>> this may explain the result. For 1d, fftw3 wrappers are almost always >>> faster than numpy, but not for multi dimensional >>> >> Maybe I just did not tell before... I thought this would not make any >> difference. >> > It shouldn't, but the current implementation is suboptimal for multi > dimensional fft (please note that the original implementation of fft in > scipy makes it possible to use not less than 5 different fft backend, > which is already quite an achievement; I don't want to sound like I am > bashing anyone work, just that you can expect better performances in a > near future). > >>> What does scipy.show_config() tells you ? >>> >> ... 
>> fftw2_info: >> NOT AVAILABLE >> >> fftw3_info: >> NOT AVAILABLE >> > Well, that means that fftw3 was not found, or not used when it was > packaged :) > > Which platform are you on ? Did you install fftw3 (as well as > development package on linux) ? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I have used site.cfg on openSuSE10.2 with [fftw3] library_dirs = /usr/lib64 include_dirs = /usr/include fftw_libs = fftw3 Nils rpm -qi fftw3 Name : fftw3 Relocations: (not relocatable) Version : 3.1.2 Vendor: SUSE LINUX Products GmbH, Nuernberg, Germany Release : 19 Build Date: Sat 25 Nov 2006 01:34:01 PM CET Install Date: Mon 16 Apr 2007 05:32:26 PM CEST Build Host: adams.suse.de Group : Productivity/Scientific/Math Source RPM: fftw3-3.1.2-19.src.rpm Size : 3268111 License: GNU General Public License (GPL) Signature : DSA/SHA1, Sat 25 Nov 2006 01:39:32 PM CET, Key ID a84edae89c800aca Packager : http://bugs.opensuse.org URL : http://www.fftw.org Summary : Discrete Fourier Transform (DFT) C Subroutine Library Description : FFTW is a C subroutine library for computing the Discrete Fourier Transform (DFT) in one or more dimensions, of both real and complex data, and of arbitrary input size. Authors: -------- Matteo Frigo Stevenj G. Johnson Distribution: openSUSE 10.2 (X86-64) From stefan at sun.ac.za Wed Aug 8 05:09:01 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 8 Aug 2007 11:09:01 +0200 Subject: [SciPy-user] New scipy release before 8/20? 
In-Reply-To: References: <20070807230801.GF13502@mentat.za.net> Message-ID: <20070808090901.GN30988@mentat.za.net> On Wed, Aug 08, 2007 at 08:01:28AM +0200, Matthieu Brucher wrote: > The friendly guys at Enthought go to the trouble already, so why not > let your students use > > http://code.enthought.com/enstaller > > Enthought currently allows you to mirror their eggs, even, so you can > download them on behalf of your students and create a local > repository. > > See, for example, the instruction at the bottom of > > http://dip.sun.ac.za/courses/ComputerVision/ > > > It seems that these eggs are too old for him and I cannot blame him > for this. What do you mean? http://code.enthought.com/enstaller/eggs/windows/xp/unstable/numpy-1.0.4.dev3954-py2.5-win32.egg http://code.enthought.com/enstaller/eggs/windows/xp/unstable/scipy-0.5.3.dev3224-py2.4-win32.egg Stéfan From matthieu.brucher at gmail.com Wed Aug 8 05:18:29 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 8 Aug 2007 11:18:29 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <20070808090901.GN30988@mentat.za.net> References: <20070807230801.GF13502@mentat.za.net> <20070808090901.GN30988@mentat.za.net> Message-ID: > What do you mean? > > > http://code.enthought.com/enstaller/eggs/windows/xp/unstable/numpy-1.0.4.dev3954-py2.5-win32.egg > > > http://code.enthought.com/enstaller/eggs/windows/xp/unstable/scipy-0.5.3.dev3224-py2.4-win32.egg > OK, I'll go hiding in some hole somewhere. Matthieu From massimo.sandal at unibo.it Wed Aug 8 06:59:56 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 08 Aug 2007 12:59:56 +0200 Subject: [SciPy-user] New scipy release before 8/20?
In-Reply-To: References: <46B88A4B.2010505@tnw.utwente.nl> Message-ID: <46B9A22C.7080308@unibo.it> Ryan Krauss wrote: > My students are not very computer savvy and they perceive installation > to be a major hurdle to Scipy usage. So, asking them to find and > replace a file may be enough to drive them to Matlab. Sorry if I'm being somewhere between off-topic and rude, but if your students are so lazy that replacing a file means using Matlab, then your students do not deserve to be... students. You're going to teach programming to those guys, so, if they are not already, they *will have to* be computer-savvy. Explain to them why you think SciPy is better than NumPy (free, open source, fast updates, a complete programming language behind it, multiplatform etc. etc.) and let them trust you firmly. If you are half-unconvinced about using SciPy yourself and you want to hide warnings, you're even less likely to get your students to use it. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 From massimo.sandal at unibo.it Wed Aug 8 10:50:51 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 08 Aug 2007 16:50:51 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> Message-ID: <46B9D84B.5040004@unibo.it> Ryan Krauss wrote: > I do think your response was rude and I don't think it fosters the > kind of community we have on the SciPy list. Sorry, really. I didn't mean it that way (though I see how it could have been read as such). > I am quite convinced of the value of SciPy. I did my entire Ph.D.
> thesis work using it and have published at least one paper so far on > the power and beauty of Python applied to my area of research > (feedback control systems). Perfect. So why so many doubts? Why cheat about the failed tests? Go ahead. You are the teacher. They will (have to) follow you. > I am not teaching programming, I am teaching mechanical engineering > and specifically mechatronics. For reasons I don't fully understand, > our students don't think they need to learn much about computers and > would prefer to do everything in Excel. That's exactly the problem I was pointing to. I am a graduate in Biotechnology, currently finishing my Ph.D. When I was studying, there was exactly the same approach: "oh well, you don't need to learn that much, just throw in some matlab and excel, there's this and that, click and go". I learned the hard way that this is BAD, BAD, BAD for *every* meaningful scientific course of study. Luckily I had a good bioinformatics teacher who exposed me to serious programming (and Python, by the way), and out of my curiosity I began to explore what was inside. But the situation, at least here, is bleak. I see even Physics Ph.D.s trying to do complex data analysis using (not even VB-scripted!) Excel. I see people who try to draw graphs with Powerpoint (no kidding). In my field (AFM protein force spectroscopy), serious data analysis applications almost do not exist, so everyone is forced to reinvent the wheel. Usually using - guess what? - kludgy matlab (or, even worse, IGOR) scripts. Heck, I just read a *paper* about a data analysis algorithm implemented partially in Excel. How this can be published is beyond my understanding. I am trying to put up a modular and clean data analysis app myself to release soon (as a previous thread by me explained). Tough work, and I'm not a good programmer, but someone in my little field has to do it, and in any case it cannot worsen the situation. My colleagues, at least, are very happy.
But if every one of us had had a good informatics background, things would be much better. Much, much better. In 2007, I would expect decent computer literacy to be obligatory for every scientific course. All steps of our work, from experiments to paper writing, are so dependent on the computer environment that it is of paramount importance. That's why I urge you not to repeat the mistakes of my teachers. Do not give your students the illusion that good = click-and-go. They won't be good scientists that way. Give your students the truth, and that is: better tools can actually be harder - but the reward is worth the hassle. My bioinformatics teacher told us "Ok, there's this thing called Python. You can download it and the libraries we need here, here, and here. Do it yourself". After a couple of introductory lessons, we were told "Python docs are there. Look there for further help." Painful at first for many, but a learning experience by itself. More so if these people are *engineers*. Do 21st-century engineers expect, in their work life, to click a big button labelled "ENGINEER THIS! LOL!", or do they expect, as engineers once did, to actually think and do some homework? :) > So, I am trying to remove as > many barriers as possible to getting them to use Python and SciPy. An > installation process made up of many steps and that involves replacing > a file manually doesn't inspire the confidence of windows users who > are used to things working "out-of-the-box". Your aim is noble and I understand that, but I think it's not by cheating (hiding tests etc.) that you will win them over. I think it's by frankly presenting what the tools are, why you have chosen them, and why they are superior. And by teaching them that sometimes not everything worthwhile is "out-of-the-box". This holds for computers, science and life. These are my 0.02 euros. Sorry again if I seemed rude, but I'm frankly sad about the students' situation.
You are doing a great job and I'm extremely happy you do it. Sorry again for any misunderstanding. Yours, Massimo -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 From skraelings001 at gmail.com Wed Aug 8 11:13:30 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Wed, 08 Aug 2007 10:13:30 -0500 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <46B9A22C.7080308@unibo.it> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> Message-ID: <46B9DD9A.3010109@gmail.com> massimo sandal wrote: > Ryan Krauss wrote: >> My students are not very computer savvy and they perceive installation >> to be a major hurdle to Scipy usage. So, asking them to find a >> replace a file may be enough to drive them to Matlab. > > Sorry if I'm being somehow between offtopic and rude, but if your > students are so lazy that replacing a file means using Matlab, > therefore your students do not deserve to be... students. > > You're going to teach programming to those guys, so, if they still are > not, they *will have to* be computer savy. Explain them why do you > think SciPy is better than NumPy (free, open source, fast updates, a > complete programming language behind, multiplatform etc.etc.) and let > them trust you firmly. I guess you meant here: why do you think Scipy is better than Matlab. Just that. Cheers Reynaldo From stefan at sun.ac.za Wed Aug 8 11:39:39 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 8 Aug 2007 17:39:39 +0200 Subject: [SciPy-user] New scipy release before 8/20?
In-Reply-To: <46B9D84B.5040004@unibo.it> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> Message-ID: <20070808153939.GB29100@mentat.za.net> On Wed, Aug 08, 2007 at 04:50:51PM +0200, massimo sandal wrote: > Ryan Krauss ha scritto: > >So, I am trying to remove as > >many barriers as possible to getting them to use Python and SciPy. An > >installation process made up of many steps and that involves replacing > >a file manually doesn't inspire the confidence of windows users who > >are used to things working "out-of-the-box". > > Your aim is noble and I understand that, but I think it's not by > cheating (hiding tests etc.) that you will win them. I think it's by A user unfamiliar with the scipy platform certainly doesn't need irrelevant (and therefore confusing) warnings popping up. I agree with Ryan -- it is best to lower the barrier of entry as far as possible. Since scipy and numpy are so closely coupled, and given that the current releases do not function together, we should make it a priority to release a new version of scipy as soon as possible. One of the problems is that those who are capable of building packages simply use the SVN version, so there isn't a lot of motivation to go to all that trouble. What would have been really cool is if we had an automated packaging system, so that releasing was as simple as tagging and pushing a button. St?fan From coughlan at ski.org Wed Aug 8 11:45:05 2007 From: coughlan at ski.org (James Coughlan) Date: Wed, 08 Aug 2007 08:45:05 -0700 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <46B9D84B.5040004@unibo.it> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> Message-ID: <46B9E501.6030504@ski.org> Massimo, I respect what you're saying: engineers and scientists can't expect software handed to them on a silver platter. 
On the other hand, the reality is that the more barriers there are to installing and using a piece of software, the fewer people will try it out and adopt it. (This is one reason why web applications are so popular: no installation required.) I know many capable scientists who have adopted Matlab partly because it was so easy for them to get started with it. The beauty of the Scipy framework only becomes apparent once you've had the chance to play with it! Unfortunately, for every computer-savvy engineer/scientist who can navigate the Scipy installation process, there are probably ten others who cannot (or will not) take the time to figure it out. Critical mass is very important, and the Scipy community can't afford to turn off prospective users who can't get up and running quickly. Best, James massimo sandal wrote: > [earlier message quoted in full; snipped] From fperez.net at gmail.com Wed Aug 8 12:57:39 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 8 Aug 2007 10:57:39 -0600 Subject: [SciPy-user] Testing BOF at scipy'07?
In-Reply-To: <20070807191046.GC8313@caltech.edu> References: <46B0C700.6010000@ucsf.edu> <18097.37642.313889.398075@gargle.gargle.HOWL> <20070807191046.GC8313@caltech.edu> Message-ID: Hi Titus, On 8/7/07, Titus Brown wrote: > -> I'm not sure if you're on the scipy mailing list, so I'm writing > -> directly to you. I've penciled in a testing BOF session for Wednesday > -> night after the tutorials, and was wondering if you'd be able to > -> participate. We all know that you are very involved with testing in > -> python, and many of us are eager for a good discussion on the topic, > -> where we think you'd have much to contribute. > -> > -> The end of the thread is pasted below for reference, the rest of the > -> conversation (just scheduling details, really) took place on the scipy > -> user list in case you are curious. This message is still cc-d to the > -> list. > > I am indeed not on the list and it seems a bit slow to subscribe at the > moment, so perhaps you could forward my reply on to the list... CC-d here. > I'd love to participate! Great! > Two things: > > - first, there's no reason to limit the BoF to an hour, since we (me > and other Python people at Caltech) can reserve rooms for free for as > long as is needed. > > - second, we were already thinking about doing a SoCal Python Interest > Group meeting on Wednesday and just inviting everybody. I can't > imagine that Grig and I would have any problems with focusing it on > testing, effectively turning it into a testing BoF. Do you want to > combine, or perhaps stage things separately? > > One advantage to combining is that we usually get pizza for our SoCal > PIGgies meeting. It's usually $10/person (max), and the pizza is > pretty good quality (not Dominos!) so it might be a way to provide > dinner for attendees. > > What do you think? I'm happy with the idea of combining, unless anyone else objects. 
As long as they don't mind that we'd probably like to focus on issues that are more immediately relevant to scientific users (so for example numerical correctness testing may be a bit higher on the list than say GUI testing for deployment to customers). The only other thing is that it would be good to still hold the BOF at Caltech itself, since the conference people will gravitate around that. Is that OK with Grig? I'll be arriving Monday night and will be there for the tutorials (mine is the first on Tuesday), so if you prefer we can sort out last-minute details there. But feel free to make any arrangements you deem necessary. Cheers, f From william.ratcliff at gmail.com Wed Aug 8 14:30:05 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Wed, 8 Aug 2007 14:30:05 -0400 Subject: [SciPy-user] Question about scipy.test() In-Reply-To: <827183970708011919g5df3fc3dic7a61384890bcac4@mail.gmail.com> References: <827183970707311425x15810362jacd3907a3a85bca4@mail.gmail.com> <20070731221535.GY7447@mentat.za.net> <827183970707311547w3a57c809i81a2fac4768cea92@mail.gmail.com> <20070801091255.GZ7447@mentat.za.net> <827183970708011919g5df3fc3dic7a61384890bcac4@mail.gmail.com> Message-ID: <827183970708081130w15f07d93x903610fcb1ae7d2c@mail.gmail.com> Has anyone used Rational Purify with python in windows? Are there any documents on how to use it with python to check scipy.test()? Thanks, William On 8/1/07, william ratcliff wrote: > > I'm on a windows box, so I don't have Valgrind--I downloaded Rational > Purify from IBM, but have never used it--any suggestions? > > Thanks, > William > > On 8/1/07, Stefan van der Walt wrote: > > > > On Tue, Jul 31, 2007 at 06:47:18PM -0400, william ratcliff wrote: > > > I am using the latest version of the scipy source from SVN. I am > > using the > > > mingw from the enthought sumo distribution of python (2.4.3), just > > copied over > > > to the python25 directory.
I'm not sure of the version-- > > > > > > It is version 4.0.3 of gcc according to the folder in > > > C:\Python25\MingW\bin\lib\gcc-lib\i686-pc-mingw32 > > > > > > The only addition I made to the library was a recent download of > > g95. Has this > > > bug come up before? > > > > Yes, see > > > > http://projects.scipy.org/scipy/scipy/ticket/404 > > > > I'd be glad if you could help narrow down the problem using valgrind, > > as indicated in the ticket above. > > > > Thanks! > > Stéfan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Aug 8 14:42:22 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 8 Aug 2007 12:42:22 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <20070808182508.GB27388@caltech.edu> References: <18097.37642.313889.398075@gargle.gargle.HOWL> <20070807191046.GC8313@caltech.edu> <20070808182508.GB27388@caltech.edu> Message-ID: On 8/8/07, Titus Brown wrote: > On Wed, Aug 08, 2007 at 10:57:39AM -0600, Fernando Perez wrote: > -> I'm happy with the idea of combining, unless anyone else objects. As > -> long as they don't mind that we'd probably like to focus on issues > -> that are more immediately relevant to scientific users (so for example > -> numerical correctness testing may be a bit higher on the list than say > -> GUI testing for deployment to customers). > > That should be fine. Why don't we schedule a single presentation at 7pm > (we already have one lined up for our PIGgies meeting, I think) during > which we can eat etc., and then start the BoF stuff at 8pm? As long as the presentation is testing-related, that's fine. I'd really like to keep the focus on just one topic, testing in this case. Is that OK with your plans?
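[Archive editor's note: as a concrete illustration of the "numerical correctness testing" emphasis in the exchange quoted above, floating-point results generally want tolerance-based assertions rather than exact equality. The test class below is made up for this note and uses only the standard library, not any tool discussed in the thread:]

```python
import unittest

# Made-up example of a numerical-correctness test: floating-point
# accumulation makes exact == comparisons fragile, so numerical tests
# assert agreement within a tolerance instead.
class TestFloatAccumulation(unittest.TestCase):
    def test_sum_of_tenths(self):
        total = sum([0.1] * 10)
        # Ten 0.1s do not sum to exactly 1.0 in binary floating point...
        self.assertNotEqual(total, 1.0)
        # ...but they agree with 1.0 to well within 12 decimal places.
        self.assertAlmostEqual(total, 1.0, places=12)

if __name__ == '__main__':
    unittest.main(exit=False, verbosity=0)
```

The same idea scales up to comparing whole arrays against reference results with an explicit tolerance.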
I updated the page with a little section about this session, feel free to fill in the pizza/PIGgies/logistical details as needed: http://scipy.org/SciPy2007/BoFs Cheers, f From fperez.net at gmail.com Wed Aug 8 19:44:07 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 8 Aug 2007 17:44:07 -0600 Subject: [SciPy-user] For the testing BOF... Message-ID: Hi all, this is something I just put out to the ipython-dev list. After I sent out the proposal for the testing BOF, I decided to actually write something on testing so I could dive into a big refactoring I have here at work with better tools. This is the result, and I'll probably use it as a starting point for some of the discussion next week. It's pure alpha code and written with a very specific purpose: - make writing tests work with minimal friction in *my* workflow, while using only unittest and doctest for actual test construction. So it is NOT a nose/py.test/xxx competitor, nor is it intended to ever be one. It's just a bit of support code to make integrating tests while developing as painless as possible. If anyone has interest and has a look between now and next week, great. We can discuss it further there. Cheers, f ---------- Forwarded message ---------- From: Fernando Perez Date: Aug 8, 2007 5:39 PM Subject: ANN: SnakeOil - Python testing that doesn't squeak :) To: IPython-dev List Hi all, For a long time I've been really fed up with feeling that the workflow for testing in Python just isn't very fluid. Generating doctests requires typing code in text mode, rerunning and modifying them is annoying, having a standalone test script become a unit test isn't convenient, and the worst of all, writing parametrized tests in unittest is a huge PITA. IPython SVN now has a doctest profile to help a little (and I'll add a magic to switch at runtime in and out of it, hopefully later tonight). But that's just a fix for one small annoyance among many.
I know there are testing frameworks out there that address some of these issues (nose, py.test), but I wanted something small(ish), that could be used without installing anything (easy to copy inside the test area of any project and carry it from there, or install it in a private area after a rename, etc), and that would only depend on the stdlib (+ ipython, of course, since I use that everywhere). So the IPython team is happy to announce SnakeOil: svn co http://ipython.scipy.org/svn/ipython/ipython/branches/saw/sandbox/snakeoil/snakeoil It's raw alpha, svn-only, lightly tested (about 2k LOC). But it lets me do *easily* all the things I needed to feel that adding tests wasn't getting in *my* way. If it doesn't work for you, no worries, you can get your money back. It's also fairly hackish in places (surprise, coming from me). Unittest doesn't lend itself to much of anything, really, so I had to beat it into behaving in a few places. Still, I wanted to stick to valid unittest classes so that using SnakeOil would not make your tests citizens of yet-another-framework. The doc/ directory has a reST document with some more details as well as a fully worked set of examples. The highlights: - Easy creation of parametric tests (unittests that take arguments). That includes tests that share state (yes, I know about shared state in tests. I'm an adult, I know what I'm doing and I need that. Now unittest, get out of my way and let me actually work). - Immediate use of any standalone testing script as a unit test, without having to subclass anything. - Easy mechanisms for creating valid doctest (.txt) files from true Python sources, so that one can edit real Python code in an editor and convert that code to a set of doctests with minimal effort. This will be integrated in ipython1 later, but for now it sits in the sandbox. We'll announce when we move it out. 
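[Archive editor's note: the "parametric tests" highlight above is the spot where plain unittest resists hardest. As a rough illustration of what hand-rolling them over the stdlib looks like, here is a made-up helper for this note; it is NOT SnakeOil's actual API:]

```python
import unittest

# Hand-rolled parametric tests over plain unittest (hypothetical helper,
# not SnakeOil's API): generate one test method per parameter case, so
# each failure is reported individually rather than the whole loop
# stopping at the first bad case.
def parametrize(cls, name, func, cases):
    """Attach one test method to cls per (argument, expected) pair."""
    for i, (arg, expected) in enumerate(cases):
        def test(self, arg=arg, expected=expected):
            self.assertEqual(func(arg), expected)
        setattr(cls, '%s_%d' % (name, i), test)

class TestSquare(unittest.TestCase):
    pass

parametrize(TestSquare, 'test_square', lambda x: x * x,
            [(2, 4), (3, 9), (-1, 1)])

if __name__ == '__main__':
    unittest.main(exit=False, verbosity=0)
```

Frameworks like nose and py.test build this in via test generators; the helper only shows the boilerplate plain unittest demands.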
We first need to check that everything works with Twisted trial correctly (I think it does, but I haven't checked enough yet). Comments/feedback welcome. Cheers, f From david at ar.media.kyoto-u.ac.jp Wed Aug 8 23:15:36 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 09 Aug 2007 12:15:36 +0900 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <20070808153939.GB29100@mentat.za.net> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> Message-ID: <46BA86D8.90706@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > > > What would have been really cool is if we had an automated packaging > system, so that releasing was as simple as tagging and pushing a > button. Well, that's exactly why I started a scipy+numpy project on the openSUSE build system: http://software.opensuse.org/download/home:/ashigabou/. When the tarballs for the new releases of numpy and scipy are available, it will take me two lines change to build and make available rpms for various distributions. cheers, David From rsamurti at airtelbroadband.in Wed Aug 8 23:57:23 2007 From: rsamurti at airtelbroadband.in (R S Ananda Murthy) Date: Thu, 09 Aug 2007 09:27:23 +0530 Subject: [SciPy-user] NumPy-1.0.1 and SciPy-0.5.2 packages for Zenwalk-4.6.1 available. Message-ID: <46BA90A3.6020808@airtelbroadband.in> Hello, I have prepared easy-to-install packages of NumPy-1.0.1 and SciPy-0.5.2. I did not use NumPy-1.0.3 since it did not work with SciPy-0.5.2. These packages are available here: http://users.zenwalk.org/user-accounts/rsamurti/numpy/ http://users.zenwalk.org/user-accounts/rsamurti/scipy/ Interested users can try these packages and give their feedback. This is very important to improve the packages. 
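[Archive editor's note: the NumPy-1.0.3 incompatibility mentioned above is the get_path helper that numpy.distutils.misc_util dropped while scipy 0.5.2 still imported it (the ImportError at the start of this thread). A user-side shim could paper over it until fixed releases land. The sketch below assumes the historical signature get_path(mod_name, parent_path=None); it is an illustration, not the actual patch that was applied:]

```python
import os
import sys

# Hypothetical stand-in for the get_path helper removed from
# numpy.distutils.misc_util; signature and behaviour are assumed from
# how scipy 0.5.2's setup scripts used it, not copied from numpy.
def get_path(mod_name, parent_path=None):
    """Return the directory holding the named module's source.

    If parent_path is given and is a prefix of that directory, the
    result is returned relative to it ('.' for the directory itself).
    """
    mod = sys.modules.get(mod_name)
    if mod is not None and getattr(mod, '__file__', None):
        d = os.path.dirname(os.path.abspath(mod.__file__))
    else:
        # setup.py scripts normally run from their own source tree,
        # so fall back to the current working directory.
        d = os.path.abspath(os.getcwd())
    if parent_path is not None:
        parent = os.path.abspath(parent_path)
        if d == parent:
            d = '.'
        elif d.startswith(parent + os.sep):
            d = d[len(parent) + 1:]
    return d
```

A shim like this would only mask the import; upgrading to the fixed SVN scipy remains the real answer.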
Thanks for your help, Anand From rsamurti at airtelbroadband.in Thu Aug 9 00:00:48 2007 From: rsamurti at airtelbroadband.in (R S Ananda Murthy) Date: Thu, 09 Aug 2007 09:30:48 +0530 Subject: [SciPy-user] How to make NumPy-1.0.3 work with SciPy-0.5.2 ? Message-ID: <46BA9170.7030603@airtelbroadband.in> Hello, Can somebody tell me what patches are required to be done to make NumPy-1.0.3 work with SciPy-0.5.2? I am trying to build these packages on Zenwalk-4.6.1. I am using lapack-3.1.1 which has all the BLAS libraries. Thanks for your help, Anand From david at ar.media.kyoto-u.ac.jp Thu Aug 9 00:59:13 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 09 Aug 2007 13:59:13 +0900 Subject: [SciPy-user] How to make NumPy-1.0.3 work with SciPy-0.5.2 ? In-Reply-To: <46BA9170.7030603@airtelbroadband.in> References: <46BA9170.7030603@airtelbroadband.in> Message-ID: <46BA9F21.1020508@ar.media.kyoto-u.ac.jp> R S Ananda Murthy wrote: > Hello, > > Can somebody tell me what patches are required to be done to make > NumPy-1.0.3 work with SciPy-0.5.2? I am trying to build these packages > on Zenwalk-4.6.1. I am using lapack-3.1.1 which has all the BLAS libraries. > You could take a look at the source rpm here: http://download.opensuse.org/repositories/home:/ashigabou/Fedora_7/src/ I backported some distutils change from numpy, and fixed a trivial error in scipy, so that they both work together. Each patch being against pristine source, it should be easy to apply to Zenwalk, whatever package system it is using (I am not familiar with this distribution). cheers, David From lfriedri at imtek.de Thu Aug 9 03:03:30 2007 From: lfriedri at imtek.de (Lars Friedrich) Date: Thu, 09 Aug 2007 09:03:30 +0200 Subject: [SciPy-user] fftw3 wrappers Message-ID: <46BABC42.4050106@imtek.de> Hello, to be able to do compilations for myself, I looked at http://www.scipy.org/Installing_SciPy/Windows is this the right place?
I am on a Windows machine, using the standard Python 2.5 compilation and lots of packages (eggs?) from enthought. Is it possible to compile just the fft-part of scipy with MinGW and use the result together with the rest of my installation? I could not answer this question reading the page given above... I downloaded the dll-package of fftw3. I suppose that I just place the .dll somewhere and tell scipy to use it at compilation time...? Lars From c.j.lee at tnw.utwente.nl Thu Aug 9 04:11:45 2007 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 09 Aug 2007 10:11:45 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <46B9E501.6030504@ski.org> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <46B9E501.6030504@ski.org> Message-ID: <46BACC41.5020309@tnw.utwente.nl> Actually if you are teaching something other than a computer course you want the software to work as perfectly possible. There is nothing worse than distracting the students from the content of the course with irrelevant error messages, crashes, and other unpleasantness. This doesn't mean that engineers shouldn't be presented with the realities of software but not during a course on mechanics. James Coughlan wrote: > Massimo, > > I respect what you're saying: engineers and scientists can't expect > software handed to them on a silver platter. On the other hand, the > reality is that the more barriers there are to installing and using a > piece of software, the fewer people will try it out and adopt it. (This > is one reason why web applications are so popular: no installation > required.) > > I know many capable scientists who have adopted Matlab partly because it > was so easy for them to get started with it. The beauty of the Scipy > framework only becomes apparent once you've had the chance to play with > it! 
Unfortunately, for every computer-savvy engineer/scientist who can > navigate the Scipy installation process, there are probably ten others > who cannot (or will not) take the time to figure it out. > > Critical mass is very important, and the Scipy community can't afford to > turn off prospective users who can't get up and running quickly. > > Best, > > James > > > > massimo sandal wrote: > >> Ryan Krauss ha scritto: >> >> >>> I do think your response was rude and I don't think it fosters the >>> kind of community we have on the SciPy list. >>> >> Sorry,really. I didn't want to (albeit I understood it could have been >> understood as that). >> >> >>> I am quite convinced of the value of SciPy. I did my entire Ph.D. >>> thesis work using it and have published at least one paper so far on >>> the power and beauty of Python applied to my area of research >>> (feedback control systems). >>> >> Perfect. So why so much doubts? Why cheating about the failed tests? >> Go ahead. You are the teacher. They will (have to) follow you. >> >> >>> I am not teaching programming, I am teaching mechanical engineering >>> and specifically mechatronics. For reasons I don't fully understand, >>> our students don't think they need to learn much about computers and >>> would prefer to do everything in Excel. >>> >> That's exactly the problem I was pointing to. >> I am a graduate in Biotechnology, currently finishing my Ph.D. When I >> was studying, there was exactly the same approach "oh well, you don't >> need to learn that much, just throw in some matlab and excel, there's >> this and that, click and go". >> >> I learned on my own skin that this is BAD, BAD, BAD for *every* >> meaningful scientific course of study. Luckly I had a good >> bioinformatics teacher that exposed me to serious programming (and >> Python, by the way), and out of my curiosity I began to explore what >> was in. But the situation, at least here, is desolating. 
I see even >> Physics Ph.Ds trying to do complex data analysis using (not even >> VB-scripted!) Excel. I see people that try to draw graphs with >> Powerpoint (no kidding). In my field (AFM protein force spectroscopy), >> serious data analysis applications almost do not exist, so everyone is >> forced to reinvent the wheel. Usually using -guess what?- kludgy >> matlab (or,even worse, IGOR) scripts. Heck, I just read a *paper* >> about a data analysis algorithm implemented partially in Excel. How >> can this be published, is beyond my understanding. >> >> I am trying to put up a modular and clean data analysis app myself to >> release soon (as a previous thread by me explained). Tough work, and >> I'm not a good programmer, but someone in my little field has to do it >> and however it cannot worsen the situation. My collegues, at least, >> are very happy. But if everyone of us had had a good informatics >> background, things would be much better. Much,much better. >> >> In 2007, I would expect a decent informatic literacy to be obligatory >> for every scientific course. All steps of our work, from experimental >> work to paper writing, are so dependent from the computer environment >> that it is of paramount importance. That's why I urge you not to >> repeat the wrongdoings of my teachers. Do not give your students the >> illusion that good = click-and-go. They won't be good scientists that >> way. Give your students the truth, and that is better tools can be >> actually harder -but the reward is worth the hassle. >> >> My bioinformatics teacher told us "Ok, there's this thing called >> Python. You can download it and the libraries we need here, here, and >> here. Do it yourself". After a couple of introductory lessons, we were >> told "Python docs are there. Look there for further help." Painful at >> first for many, but a learning experience by itself. >> >> More so if these people are *engineers*. 
Do XXI-century engineers >> expect, in their work life, to click a big button with written >> "ENGINEER THIS! LOL!" or do they expect, like it was once, to actually >> think and do some homework? :) >> >> >>> So, I am trying to remove as >>> many barriers as possible to getting them to use Python and SciPy. An >>> installation process made up of many steps and that involves replacing >>> a file manually doesn't inspire the confidence of windows users who >>> are used to things working "out-of-the-box". >>> >> Your aim is noble and I understand that, but I think it's not by >> cheating (hiding tests etc.) that you will win them. I think it's by >> frankly presenting what the tools are, why have you chosen them, why >> they are superior. And by teaching them that sometimes not everything >> worthwile is "out-of-the-box". This holds for computers, science and >> life. >> >> These are my 0.02 euros. Sorry again if I looked rude, but I'm frankly >> sad at the students' situation. You are doing a great job and I'm >> extremly happy you do it. Sorry again for any misunderstanding. >> >> Yours, >> Massimo >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From stefan at sun.ac.za Thu Aug 9 05:15:15 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 9 Aug 2007 11:15:15 +0200 Subject: [SciPy-user] New scipy release before 8/20? 
In-Reply-To: <46BA86D8.90706@ar.media.kyoto-u.ac.jp> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> Message-ID: <20070809091514.GH9452@mentat.za.net> On Thu, Aug 09, 2007 at 12:15:36PM +0900, David Cournapeau wrote: > > What would have been really cool is if we had an automated packaging > > system, so that releasing was as simple as tagging and pushing a > > button. > Well, that's exactly why I started a scipy+numpy project on the openSUSE > build system: http://software.opensuse.org/download/home:/ashigabou/. > When the tarballs for the new releases of numpy and scipy are available, > it will take me two lines change to build and make available rpms for > various distributions. That's absolutely brilliant, David. Andrew Straw has done the same (using Launchpad) for Ubuntu. Do you know of a way to do this for Windows? That platform seems to pose the biggest problem. Regards Stéfan From lbolla at gmail.com Thu Aug 9 05:38:30 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 9 Aug 2007 11:38:30 +0200 Subject: [SciPy-user] scipy installation problem with cygwin Message-ID: <80c99e790708090238t34c62ed8nb6e32dab3f0a3ddc@mail.gmail.com> Dear all, I'm having troubles installing scipy with cygwin on a WindowsXP Pro box. I had to setup all the "environment" before installing scipy. Here is what I've done. 1. download and install a BLAS implementation (I chose the GotoBLAS) 2. download and install the complete LAPACK from netlib 3. download and install ATLAS 3.6.0, compiled with threads enabled (the computer has 2 cpus) 4. merge the complete-slow netlib LAPACK with the incomplete-fast-threaded ATLAS LAPACK into a single .a lib file 5. compile other packages (like UMFPACK and FFTW) with the BLAS/LAPACK obtained 6. download and install the latest Python 2.5.1 -- with success 6.
download and install the latest svn numpy 1.0.4.dev3947 -- with success the problems arise compiling scipy: 7. download and install the latest svn scipy 0.5.3. compilation gives a link error when trying to create flapack.dll. I checked if the symbols that the linker can not find are present in my liblapack.a and they are all there... my questions are: - why is scipy trying to build flapack, if a complete LAPACK is already present in the system? - is this numpy version compatible with this scipy version? (I read about some incompatibility of the numpy 1.0.3 with scipy 0.5.2, if I remember correctly) - can anyone give me some hints? :-) I've attached the output.txt file with the error messages from: python setup.py install thank you! lorenzo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib'] language = c include_dirs = ['/usr/include'] customize GnuFCompiler Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); 
return 0; } C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes compile options: '-c' gcc: _configtest.c gcc _configtest.o -L/usr/lib -lptf77blas -lptcblas -latlas -o _configtest.exe ATLAS version 3.6.0 built by bollalo001 on Tue Aug 7 14:17:04 WEDT 2007: UNAME : CYGWIN_NT-5.1 WK020-159IT 1.5.22(0.156/4/2) 2006-11-13 17:01 i686 Cygwin INSTFLG : MMDEF : /cygdrive/c/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm ARCHDEF : /cygdrive/c/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 1048576 F77 : /usr/bin/g77.exe, version GNU Fortran (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) F77FLAGS : -fomit-frame-pointer -O CC : /usr/bin/gcc.exe, version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc.exe, version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) MCCFLAGS : -fomit-frame-pointer -O success! removing: _configtest.c _configtest.o _configtest.exe FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] ATLAS version 3.6.0 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib'] language = f77 include_dirs = ['/usr/include'] customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ 
void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes compile options: '-c' gcc: _configtest.c gcc _configtest.o -L/usr/lib -llapack -lptf77blas -lptcblas -latlas -o _configtest.exe ATLAS version 3.6.0 built by bollalo001 on Tue Aug 7 14:17:04 WEDT 2007: UNAME : CYGWIN_NT-5.1 WK020-159IT 1.5.22(0.156/4/2) 2006-11-13 17:01 i686 Cygwin INSTFLG : MMDEF : /cygdrive/c/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm ARCHDEF : /cygdrive/c/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 1048576 F77 : /usr/bin/g77.exe, version GNU Fortran (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) F77FLAGS : -fomit-frame-pointer -O CC : /usr/bin/gcc.exe, version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc.exe, version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) MCCFLAGS : -fomit-frame-pointer -O success! 
removing: _configtest.c _configtest.o _configtest.exe FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] ATLAS version 3.6.0 ATLAS version 3.6.0 non-existing path in 'Lib/linsolve': 'tests' umfpack_info: libraries umfpack not found in /usr/local/lib amd_info: libraries amd not found in /usr/local/lib FOUND: libraries = ['amd'] library_dirs = ['/usr/lib'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/usr/lib'] swig_opts = ['-I/usr/include'] define_macros = [('SCIPY_UMFPACK_H', None)] include_dirs = ['/usr/include'] non-existing path in 'Lib/maxentropy': 'doc' running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "arpack" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. 
building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. adding 'build/src.cygwin-1.5.22-i686-2.5/Lib/interpolate/dfitpack-f2pywrappers.f' to sources. building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. adding 'build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'Lib/lib/blas/cblas.pyf.src' to sources. f2py options: ['skip:', ':'] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'Lib/lib/lapack/clapack.pyf.src' to sources. f2py options: ['skip:', ':'] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. 
building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.cygwin-1.5.22-i686-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. adding 'build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.cygwin-1.5.22-i686-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.cygwin-1.5.22-i686-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. adding 'build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.cygwin-1.5.22-i686-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources. adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs. 
building extension "scipy.linalg.atlas_version" sources
building extension "scipy.linalg._iterative" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.linsolve._zsuperlu" sources
building extension "scipy.linsolve._dsuperlu" sources
building extension "scipy.linsolve._csuperlu" sources
building extension "scipy.linsolve._ssuperlu" sources
building extension "scipy.linsolve.umfpack.__umfpack" sources
adding 'Lib/linsolve/umfpack/umfpack.i' to sources.
building extension "scipy.odr.__odrpack" sources
building extension "scipy.optimize._minpack" sources
building extension "scipy.optimize._zeros" sources
building extension "scipy.optimize._lbfgsb" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.optimize.moduleTNC" sources
building extension "scipy.optimize._cobyla" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.optimize.minpack2" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.sandbox.arpack._arpack" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
adding 'build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/sandbox/arpack/_arpack-f2pywrappers.f' to sources.
building extension "scipy.signal.sigtools" sources
building extension "scipy.signal.spline" sources
building extension "scipy.sparse._sparsetools" sources
building extension "scipy.special._cephes" sources
building extension "scipy.special.specfun" sources
f2py options: ['--no-wrap-functions']
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.stats.statlib" sources
f2py options: ['--no-wrap-functions']
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.stats.futil" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
building extension "scipy.stats.mvn" sources
f2py options: []
adding 'build/src.cygwin-1.5.22-i686-2.5/fortranobject.c' to sources.
adding 'build/src.cygwin-1.5.22-i686-2.5' to include_dirs.
adding 'build/src.cygwin-1.5.22-i686-2.5/Lib/stats/mvn-f2pywrappers.f' to sources.
building extension "scipy.ndimage._nd_image" sources
building extension "scipy.stsci.convolve._correlate" sources
building extension "scipy.stsci.convolve._lineshape" sources
building extension "scipy.stsci.image._combine" sources
building data_files sources
running build_py
copying Lib/__svn_version__.py -> build/lib.cygwin-1.5.22-i686-2.5/scipy
copying build/src.cygwin-1.5.22-i686-2.5/scipy/__config__.py -> build/lib.cygwin-1.5.22-i686-2.5/scipy
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using build_clib
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
library 'mach' defined more than once, overwriting build_info {'sources': ['Lib/integrate/mach/d1mach.f', 'Lib/integrate/mach/i1mach.f', 'Lib/integrate/mach/r1mach.f', 'Lib/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}... with {'sources': ['Lib/special/mach/d1mach.f', 'Lib/special/mach/i1mach.f', 'Lib/special/mach/r1mach.f', 'Lib/special/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/special/setup.pyc', 1)}, 'source_languages': ['f77']}...
resetting extension 'scipy.integrate._odepack' language from 'c' to 'f77'.
resetting extension 'scipy.integrate.vode' language from 'c' to 'f77'.
resetting extension 'scipy.lib.blas.fblas' language from 'c' to 'f77'.
extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
resetting extension 'scipy.odr.__odrpack' language from 'c' to 'f77'.
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using build_ext
building 'scipy.lib.lapack.flapack' extension
compiling C sources
C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes
compile options: '-DATLAS_INFO="\"3.6.0\"" -I/usr/include -Ibuild/src.cygwin-1.5.22-i686-2.5 -I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include/python2.5 -c'
/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/fortranobject.o -L/usr/lib -L/usr/lib/gcc/i686-pc-cygwin/3.4.4 -L/usr/local/lib/python2.5/config -Lbuild/temp.cygwin-1.5.22-i686-2.5 -llapack -lptf77blas -lptcblas -latlas -lpython2.5 -lg2c -o build/lib.cygwin-1.5.22-i686-2.5/scipy/lib/lapack/flapack.dll
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `create_cb_arglist':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:273: undefined reference to `_sgbsv_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:226: undefined reference to `_dgbsv_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:266: undefined reference to `_cgbsv_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:279: undefined reference to `_zgbsv_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `cb_sselect_in_gees__user__routines':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:522: undefined reference to `_sgelss_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:493: undefined reference to `_dgelss_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `cb_dselect_in_gees__user__routines':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:585: undefined reference to `_cgelss_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:614: undefined reference to `_zgelss_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:606: undefined reference to `_ssyev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:602: undefined reference to `_dsyev_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `cb_cselect_in_gees__user__routines':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:685: undefined reference to `_cheev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:713: undefined reference to `_zheev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:728: undefined reference to `_ssyevd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:704: undefined reference to `_dsyevd_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `cb_zselect_in_gees__user__routines':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:752: undefined reference to `_cheevd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:810: undefined reference to `_zheevd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:823: undefined reference to `_ssyevr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:804: undefined reference to `_dsyevr_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_sgesv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:867: undefined reference to `_cheevr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:896: undefined reference to `_zheevr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:927: undefined reference to `_sgees_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:938: undefined reference to `_dgees_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_dgesv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1009: undefined reference to `_cgees_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1041: undefined reference to `_zgees_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1052: undefined reference to `_sgeev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1092: undefined reference to `_dgeev_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_cgesv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1147: undefined reference to `_cgeev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1196: undefined reference to `_zgeev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1208: undefined reference to `_sgesdd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1186: undefined reference to `_dgesdd_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_zgesv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1301: undefined reference to `_cgesdd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1343: undefined reference to `_zgesdd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1356: undefined reference to `_ssygv_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1358: undefined reference to `_dsygv_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_sgbsv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1451: undefined reference to `_chegv_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1485: undefined reference to `_zhegv_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1509: undefined reference to `_ssygvd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1524: undefined reference to `_dsygvd_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_dgbsv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1629: undefined reference to `_chegvd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1636: undefined reference to `_zhegvd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1676: undefined reference to `_sggev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1647: undefined reference to `_dggev_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1696: undefined reference to `_cggev_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_cgbsv':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:1758: undefined reference to `_zggev_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_sgelss':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2741: undefined reference to `_sgeqrf_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2703: undefined reference to `_dgeqrf_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2732: undefined reference to `_cgeqrf_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_dgelss':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2848: undefined reference to `_zgeqrf_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2886: undefined reference to `_sorgqr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2928: undefined reference to `_dorgqr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2925: undefined reference to `_cungqr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2874: undefined reference to `_zungqr_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:2949: undefined reference to `_sgehrd_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_cgelss':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3024: undefined reference to `_dgehrd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3074: undefined reference to `_cgehrd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3108: undefined reference to `_zgehrd_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:203: undefined reference to `_sgebal_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3131: undefined reference to `_dgebal_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3165: undefined reference to `_cgebal_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3122: undefined reference to `_zgebal_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_zgelss':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3350: undefined reference to `_slaswp_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3415: undefined reference to `_dlaswp_'
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3336: undefined reference to `_claswp_'
build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o: In function `f2py_rout_flapack_ssyev':
/cygdrive/c/Documents and Settings/bollalo001/My Documents/archive/download/scipy/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.c:3438: undefined reference to `_zlaswp_'
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/Lib/lib/lapack/flapackmodule.o build/temp.cygwin-1.5.22-i686-2.5/build/src.cygwin-1.5.22-i686-2.5/fortranobject.o -L/usr/lib -L/usr/lib/gcc/i686-pc-cygwin/3.4.4 -L/usr/local/lib/python2.5/config -Lbuild/temp.cygwin-1.5.22-i686-2.5 -llapack -lptf77blas -lptcblas -latlas -lpython2.5 -lg2c -o build/lib.cygwin-1.5.22-i686-2.5/scipy/lib/lapack/flapack.dll" failed with exit status 1

From lorenzo.isella at gmail.com Thu Aug 9 06:11:55 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Thu, 9 Aug 2007 12:11:55 +0200
Subject: [SciPy-user] New scipy release before 8/20?
(Stefan van der Walt) Message-ID: Dear All, Sorry if this turns out to be a trivial question: I am running Debian testing on my box and I would like to avoid building SciPy from source using cvs. If SciPy is built under Ubuntu Launchpad, does that mean that the same package is more or less simultaneously available in Debian testing? Cheers Lorenzo On 09/08/07, scipy-user-request at scipy.org wrote: > Message: 1 > Date: Thu, 9 Aug 2007 11:15:15 +0200 > From: Stefan van der Walt > Subject: Re: [SciPy-user] New scipy release before 8/20? > To: scipy-user at scipy.org > Message-ID: <20070809091514.GH9452 at mentat.za.net> > Content-Type: text/plain; charset=iso-8859-1 > > On Thu, Aug 09, 2007 at 12:15:36PM +0900, David Cournapeau wrote: > > > What would have been really cool is if we had an automated packaging > > > system, so that releasing was as simple as tagging and pushing a > > > button. > > Well, that's exactly why I started a scipy+numpy project on the openSUSE > > build system: http://software.opensuse.org/download/home:/ashigabou/. > > When the tarballs for the new releases of numpy and scipy are available, > > it will take me two lines change to build and make available rpms for > > various distributions. > > That's absolutely brilliant, David. Andrew Straw has done the same > (using Launchpad) for Ubuntu. Do you know of a way to do this for > Windows? That platform seems to pose the biggest problem. > > Regards > St?fan > > From emanuelez at gmail.com Thu Aug 9 08:35:15 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 9 Aug 2007 14:35:15 +0200 Subject: [SciPy-user] memory usage Message-ID: Hello, i'm finally done with my implementation of a comet detector so right now i need to tweak some parameters. It makes quite a heavy use of images and big arrays so it's quite hungry of memory, so hungry that right now it will fill up the 768 megabytes of memory of my machine in no time. 
i would like to investigate the reasons and find a solution so i think i need to know more about python's way to manage memory. for example... do i need to delete arrays i don't use anymore? if so... how? do i need to close images after i've read data from them? any hint for debugging and profiling is welcome! -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From steve at shrogers.com Thu Aug 9 08:34:47 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 09 Aug 2007 06:34:47 -0600 Subject: [SciPy-user] APL2007 Update Message-ID: <46BB09E7.7070208@shrogers.com> Attached is an updated announcement for APL2007: Arrays and Objects. 21-23 October 2007 Montreal, Canada APL = Array Programming Languages -------------- next part -------------- A non-text attachment was scrubbed... Name: APL2007Ann-2-1.pdf Type: application/pdf Size: 18084 bytes Desc: not available URL: From rsamurti at airtelbroadband.in Thu Aug 9 08:41:18 2007 From: rsamurti at airtelbroadband.in (R S Ananda Murthy) Date: Thu, 09 Aug 2007 18:11:18 +0530 Subject: [SciPy-user] How to make NumPy-1.0.3 work with SciPy-0.5.2 ? In-Reply-To: <46BA9F21.1020508@ar.media.kyoto-u.ac.jp> References: <46BA9170.7030603@airtelbroadband.in> <46BA9F21.1020508@ar.media.kyoto-u.ac.jp> Message-ID: <46BB0B6E.9040000@airtelbroadband.in> Zenwalk-4.6.1 is a derivative of Slackware. This has the following: gcc-4.1.2, gcc-gfortran-4.1.2, lapack-3.1.1 which also contains BLAS, suitesparse-3.0.0. I have also created a link /usr/include/umfpack to /usr/include/suitesparse so that SciPy and NumPy can locate UMFPACK header files. I applied all the patches you have applied in your package. But SciPy build stopped in the middle. 
Does the version of LAPACK and SuiteSparse matter? Anand David Cournapeau wrote: > R S Ananda Murthy wrote: > >> Hello, >> >> Can somebody tell me what patches are required to be done to make >> NumPy-1.0.3 work with SciPy-0.5.2? I am trying to build these packages >> on Zenwalk-4.6.1. I am using lapack-3.1.1 which has all the BLAS libraries. >> >> > You could take a look at the source rpm here: > http://download.opensuse.org/repositories/home:/ashigabou/Fedora_7/src/ > > I backported some distutils changes from numpy, and fixed a trivial error > in scipy, so that they both work together. Each patch being against > pristine source, it should be easy to apply to Zenwalk, whatever > package system it is using (I am not familiar with this distribution). > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From matthieu.brucher at gmail.com Thu Aug 9 08:48:22 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 9 Aug 2007 14:48:22 +0200 Subject: [SciPy-user] memory usage In-Reply-To: References: Message-ID: > > for example... do i need to delete arrays i don't use anymore? if so... > how? As long as a variable references the array, it is not deleted. If you want to delete a variable, just type del variable. If your code is correctly organized, you shouldn't have many of these to write, as the variable is deleted after it goes out of function/method scope. do i need to close images after i've read data from them? It can be worthwhile, as for any variable, in fact. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From emanuelez at gmail.com Thu Aug 9 08:58:30 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 9 Aug 2007 14:58:30 +0200 Subject: [SciPy-user] memory usage In-Reply-To: References: Message-ID: hmmm... then it's pretty weird... 
my code uses no global variables and it's divided into several functions. still investigating... :) On 8/9/07, Matthieu Brucher wrote: > > > for example... do i need to delete arrays i don't use anymore? if so... > how? > > > > As long as a variable references the array, it is not deleted. If you want > to delete a variable, just type del variable. > If your code is correctly organized, you shouldn't have many of these to write > as the variable is deleted after it goes out of function/method scope > > > > do i need to close images after i've read data from them? > > It can be worthwhile, as for any variable, in fact. > > Matthieu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose -- which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From will.woods at ynic.york.ac.uk Thu Aug 9 08:59:39 2007 From: will.woods at ynic.york.ac.uk (Will Woods) Date: Thu, 09 Aug 2007 13:59:39 +0100 Subject: [SciPy-user] butter function different from matlab? Message-ID: <46BB0FBB.3000008@ynic.york.ac.uk> I'm rather new to the scipy.signal module, so I apologise if I have missed something obvious, but I seem to be getting strange results from the 'butter' function. 
In matlab, a 9th order highpass filter at 300 Hz on a signal sampled at 1kHz would be: >> [B,A] = butter(9,300/500,'high') B = 0.0011 -0.0096 0.0384 -0.0895 0.1342 -0.1342 0.0895 -0.0384 0.0096 -0.0011 A = 1.0000 1.7916 2.5319 2.1182 1.3708 0.6090 0.1993 0.0431 0.0058 0.0004 If I do the same with scipy.signal.butter, I get: In [47]: (b,a)=butter(9,300/500,btype='high') In [48]: (b,a) Out[48]: (array([ 1., -9., 36., -84., 126., -126., 84., -36., 9., -1.]), array([ 1., -9., 36., -84., 126., -126., 84., -36., 9., -1.])) It would seem that b is roughly 1000 * B, and that a is the same as b, while in matlab B and A are very different. Can anyone explain why this is? The docs for the matlab butter function appear to say the same thing as the scipy version. Thanks Will From lbolla at gmail.com Thu Aug 9 09:04:17 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 9 Aug 2007 15:04:17 +0200 Subject: [SciPy-user] memory usage In-Reply-To: References: Message-ID: <80c99e790708090604l29d47d5dgcbd47320eef58887@mail.gmail.com> first of all: congratulations! :-) I've experienced some memory leaks using ipython: if you are using it, try using the bare python interpreter... L. On 8/9/07, Emanuele Zattin wrote: > > Hello, > > i'm finally done with my implementation of a comet detector so right > now i need to tweak some parameters. > It makes quite a heavy use of images and big arrays so it's quite > hungry of memory, so hungry that right now it will fill up the 768 > megabytes of memory of my machine in no time. > i would like to investigate the reasons and find a solution so i think > i need to know more about python's way to manage memory. > for example... do i need to delete arrays i don't use anymore? if so... > how? > > do i need to close images after i've read data from them? > > any hint for debugging and profiling is welcome! > > -- > Emanuele Zattin > --------------------------------------------------- > -I don't have to know an answer. 
I don't feel frightened by not > knowing things; by being lost in a mysterious universe without any > purpose ? which is the way it really is, as far as I can tell, > possibly. It doesn't frighten me.- Richard Feynman > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Aug 9 09:06:06 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 09 Aug 2007 15:06:06 +0200 Subject: [SciPy-user] butter function different from matlab? In-Reply-To: <46BB0FBB.3000008@ynic.york.ac.uk> References: <46BB0FBB.3000008@ynic.york.ac.uk> Message-ID: <46BB113E.6060408@iam.uni-stuttgart.de> Will Woods wrote: > I'm rather new to the scipy.signal module, so I appologise if I have > missed something obvious, but I seem to be getting strange results from > the 'butter' function. > > In matlab, a 9th order higpass filter at 300 Hz on a signal sampled at > 1kHz would be: > > >> [B,A] = butter(9,300/500,'high') > > B = > > 0.0011 -0.0096 0.0384 -0.0895 0.1342 -0.1342 0.0895 > -0.0384 0.0096 -0.0011 > > > A = > > 1.0000 1.7916 2.5319 2.1182 1.3708 0.6090 0.1993 > 0.0431 0.0058 0.0004 > > > > > If I do the same with scipy.signal.butter, I get: > > In [47]: (b,a)=butter(9,300/500,btype='high') > > In [48]: (b,a) > Out[48]: > (array([ 1., -9., 36., -84., 126., -126., 84., -36., 9., > -1.]), > array([ 1., -9., 36., -84., 126., -126., 84., -36., 9., > -1.])) > > > It would seem that b is roughly 1000 * B, and that a is the same as b, > while in matlab B and A are very different. > > Can anyone explain why this is? The docs for matlab butter function > appear to say the same thing as the scipy version. 
> > Thanks > > Will > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Please note that 300/500 is zero. You may use >>> (b,a)=signal.butter(9,300./500,btype='high') >>> b array([ 0.00106539, -0.00958855, 0.0383542 , -0.08949314, 0.13423971, -0.13423971, 0.08949314, -0.0383542 , 0.00958855, -0.00106539]) >>> a array([ 1.00000000e+00, 1.79158135e+00, 2.53189988e+00, 2.11822942e+00, 1.37075629e+00, 6.09038913e-01, 1.99331557e-01, 4.31047310e-02, 5.80426165e-03, 3.55580604e-04]) >>> 300/500 0 >>> 300./500 0.59999999999999998 >>> Nils From emanuelez at gmail.com Thu Aug 9 09:16:40 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 9 Aug 2007 15:16:40 +0200 Subject: [SciPy-user] memory usage In-Reply-To: <80c99e790708090604l29d47d5dgcbd47320eef58887@mail.gmail.com> References: <80c99e790708090604l29d47d5dgcbd47320eef58887@mail.gmail.com> Message-ID: yeah i noticed that as well. i think i'm on the right way now... i think this is all due to the heavy use of broadcasting i did in order to optimize performance... i might as well try to go with some inline C code with nested for loops. On 8/9/07, lorenzo bolla wrote: > first of all: congratulations! :-) > I've experienced some memory leaks using ipython: if you are using it, try > using the bare python interpreter... > L. > > > On 8/9/07, Emanuele Zattin wrote: > > > > Hello, > > > > i'm finally done with my implementation of a comet detector so right > > now i need to tweak some parameters. > > It makes quite a heavy use of images and big arrays so it's quite > > hungry of memory, so hungry that right now it will fill up the 768 > > megabytes of memory of my machine in no time. > > i would like to investigate the reasons and find a solution so i think > > i need to know more about python's way to manage memory. > > for example... do i need to delete arrays i don't use anymore? if so... > how? 
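The reference-semantics advice from earlier in this thread (use del, and rely on names going out of scope) can be sketched in a few lines of plain Python; the list here is just a stand-in for a large image array, and the names are illustrative:

```python
big = list(range(1000000))   # stand-in for a large image array
alias = big                  # a second name bound to the SAME object

del big                      # unbinds one name; the data is NOT freed,
                             # because `alias` still references it
assert alias[-1] == 999999

del alias                    # last reference gone: CPython frees the
                             # memory right away (reference counting)
```

Inside a function, local names are unbound automatically when the function returns, which is why well-organized code rarely needs an explicit del.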
> > do i need to close images after i've read data from them? > > > > any hint for debugging and profiling is welcome! > > > > -- > > Emanuele Zattin > > --------------------------------------------------- > > -I don't have to know an answer. I don't feel frightened by not > > knowing things; by being lost in a mysterious universe without any > > purpose ? which is the way it really is, as far as I can tell, > > possibly. It doesn't frighten me.- Richard Feynman > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From will.woods at ynic.york.ac.uk Thu Aug 9 09:25:01 2007 From: will.woods at ynic.york.ac.uk (Will Woods) Date: Thu, 09 Aug 2007 14:25:01 +0100 Subject: [SciPy-user] butter function different from matlab? In-Reply-To: <46BB113E.6060408@iam.uni-stuttgart.de> References: <46BB0FBB.3000008@ynic.york.ac.uk> <46BB113E.6060408@iam.uni-stuttgart.de> Message-ID: <46BB15AD.2070503@ynic.york.ac.uk> That explains a lot - thanks! Nils Wagner wrote: > Will Woods wrote: >> I'm rather new to the scipy.signal module, so I appologise if I have >> missed something obvious, but I seem to be getting strange results from >> the 'butter' function. 
>> >> In matlab, a 9th order higpass filter at 300 Hz on a signal sampled at >> 1kHz would be: >> >> >> [B,A] = butter(9,300/500,'high') >> >> B = >> >> 0.0011 -0.0096 0.0384 -0.0895 0.1342 -0.1342 0.0895 >> -0.0384 0.0096 -0.0011 >> >> >> A = >> >> 1.0000 1.7916 2.5319 2.1182 1.3708 0.6090 0.1993 >> 0.0431 0.0058 0.0004 >> >> >> >> >> If I do the same with scipy.signal.butter, I get: >> >> In [47]: (b,a)=butter(9,300/500,btype='high') >> >> In [48]: (b,a) >> Out[48]: >> (array([ 1., -9., 36., -84., 126., -126., 84., -36., 9., >> -1.]), >> array([ 1., -9., 36., -84., 126., -126., 84., -36., 9., >> -1.])) >> >> >> It would seem that b is roughly 1000 * B, and that a is the same as b, >> while in matlab B and A are very different. >> >> Can anyone explain why this is? The docs for matlab butter function >> appear to say the same thing as the scipy version. >> >> Thanks >> >> Will >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > Please note that 300/500 is zero. 
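The pitfall Nils points to here is Python 2's integer division; it can be checked in isolation with plain arithmetic, no scipy needed:

```python
# In Python 2 (the interpreter used in this thread), `/` on two ints
# floor-divides, so butter(9, 300/500, ...) received a cutoff of 0, not 0.6.
assert 300 // 500 == 0       # `//` floor-divides in both Python 2 and 3
assert 300.0 / 500 == 0.6    # forcing a float reproduces the Matlab call
```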
You may use > >>>> (b,a)=signal.butter(9,300./500,btype='high') >>>> b > array([ 0.00106539, -0.00958855, 0.0383542 , -0.08949314, 0.13423971, > -0.13423971, 0.08949314, -0.0383542 , 0.00958855, -0.00106539]) >>>> a > array([ 1.00000000e+00, 1.79158135e+00, 2.53189988e+00, > 2.11822942e+00, 1.37075629e+00, 6.09038913e-01, > 1.99331557e-01, 4.31047310e-02, 5.80426165e-03, > 3.55580604e-04]) > >>>> 300/500 > 0 >>>> 300./500 > 0.59999999999999998 >>>> > > Nils > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Will Woods York Neuroimaging Centre The Biocentre York Science Park Innovation Way Heslington York YO10 5DG http://www.ynic.york.ac.uk From kasper.souren at gmail.com Thu Aug 9 09:27:33 2007 From: kasper.souren at gmail.com (Kasper Souren) Date: Thu, 9 Aug 2007 15:27:33 +0200 Subject: [SciPy-user] IRC channel, #scipy Message-ID: <9e9181e10708090627oacc3dc8la23766d92bcc4f13@mail.gmail.com> I was looking for a way to solve a problem related to the installation of SciPy and I was surprised to see there was no one hanging out at #scipy on freenode. Right now there are two people. Feel free to /join #scipy and maybe we can make it into something more permanent? Kasper -- The content of this e-mail is in the public domain, see http://creativecommons.org/licenses/publicdomain/ From ahala2000 at yahoo.com Thu Aug 9 10:02:10 2007 From: ahala2000 at yahoo.com (elton wang) Date: Thu, 9 Aug 2007 07:02:10 -0700 (PDT) Subject: [SciPy-user] empirical distribution Message-ID: <476686.80712.qm@web31405.mail.mud.yahoo.com> Hi All, Are there functions for empirical distribution (pdf/cdf/icdf) with gaussian kernel or other kernels? Thanks ____________________________________________________________________________________ Luggage? GPS? Comic books? Check out fitting gifts for grads at Yahoo! 
Search http://search.yahoo.com/search?fr=oni_on_mail&p=graduation+gifts&cs=bz From david.huard at gmail.com Thu Aug 9 10:09:40 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 9 Aug 2007 10:09:40 -0400 Subject: [SciPy-user] empirical distribution In-Reply-To: <476686.80712.qm@web31405.mail.mud.yahoo.com> References: <476686.80712.qm@web31405.mail.mud.yahoo.com> Message-ID: <91cf711d0708090709ub0062cao353ea81f425fead@mail.gmail.com> Look at scipy.stats.gaussian_kde 2007/8/9, elton wang : > > Hi All, > Are there functions for empirical > distribution(pdf/cdf/icdf) with gaussian kernel or > other kernels? > > Thanks > > > > ____________________________________________________________________________________ > Luggage? GPS? Comic books? > Check out fitting gifts for grads at Yahoo! Search > http://search.yahoo.com/search?fr=oni_on_mail&p=graduation+gifts&cs=bz > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu Aug 9 22:33:01 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 10 Aug 2007 11:33:01 +0900 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <20070809091514.GH9452@mentat.za.net> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> <20070809091514.GH9452@mentat.za.net> Message-ID: <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > On Thu, Aug 09, 2007 at 12:15:36PM +0900, David Cournapeau wrote: >>> What would have been really cool is if we had an automated packaging >>> system, so that releasing was as simple as tagging and pushing a >>> button. 
>> Well, that's exactly why I started a scipy+numpy project on the openSUSE >> build system: http://software.opensuse.org/download/home:/ashigabou/. >> When the tarballs for the new releases of numpy and scipy are available, >> it will take me a two-line change to build and make available rpms for >> various distributions. > > That's absolutely brilliant, David. Andrew Straw has done the same > (using Launchpad) for Ubuntu. Do you know of a way to do this for > Windows? That platform seems to pose the biggest problem. Building and distributing binaries is really hard work (and not that gratifying :) ). The thing you want is automatic and repeatable builds, which means having the ability to get a specified environment at will. Windows and Mac OS X make this kind of thing inherently difficult; Mac OS X being more sane and stable hardware-wise, you can assume that when it works on one PPC and one Intel machine, it works on most machines. This is only my experience, though. I don't know any way to do things totally automatically for windows, but I am not familiar with this platform. I don't know how installable packages are built on windows (does distutils provide the ability to build installers?). Call me crazy, but I think the best way to distribute binaries would be the ability to do cross compilation; unfortunately, cross compilation for python is not easy at all. I intend to take a look at GUB (http://lilypond.org/~janneke/bzr/gub.darcs/) which is used by lilypond to compile binaries for many platforms, including mac os X, windows, linux and FreeBSD. cheers David From david at ar.media.kyoto-u.ac.jp Thu Aug 9 22:41:04 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 10 Aug 2007 11:41:04 +0900 Subject: [SciPy-user] New scipy release before 8/20? 
(Stefan van der Walt) In-Reply-To: References: Message-ID: <46BBD040.6030902@ar.media.kyoto-u.ac.jp> Lorenzo Isella wrote: > Dear All, > Sorry if this turns out to be a trivial question: I am running Debian > testing on my box and I would like to avoid building SciPy from source > using cvs. > If SciPy is built under Ubuntu Launchpad, does that mean that the same > package is more or less simultaneously available in Debian testing? > If by available you mean in official repositories, then no. What can work though is to use the debian subdir which contains all the scripts, and then rebuild the package: this may work. There are official deb packages of scipy in unstable and maybe even testing now, though. But honestly, debian is maybe the easiest platform to install numpy and scipy on: you just do: apt-get install g77 atlas3-base-dev atlas3-sse2-dev make gcc python-dev subversion And then fetch the numpy and scipy sources, and do python setup.py install. Debian provides all the dependencies for you, making it really easy. David From david at ar.media.kyoto-u.ac.jp Thu Aug 9 23:03:49 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 10 Aug 2007 12:03:49 +0900 Subject: [SciPy-user] memory usage In-Reply-To: References: <80c99e790708090604l29d47d5dgcbd47320eef58887@mail.gmail.com> Message-ID: <46BBD595.6010308@ar.media.kyoto-u.ac.jp> Emanuele Zattin wrote: > yeah i noticed that as well. > i think i'm on the right way now... i think this is all due to the > heavy use of broadcasting i did in order to optimize performance... i > might as well try to go with some inline C code with nested for loops. In cases where broadcasting is expensive memory-wise, ctypes is really a good option, too, especially for basic numerical work done in a few lines of C (typically, you allocate your data in python with numpy.empty, and pass those arrays to C functions). Concerning memory usage, I have just discovered a fantastic tool: massif. 
It is part of valgrind, and it gives you graphs like this: http://valgrind.org/docs/manual/ms-manual.html Your scripts will run extremely slowly (10-20x slower), but it is extremely useful, at least for me. David From david at ar.media.kyoto-u.ac.jp Thu Aug 9 23:27:36 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 10 Aug 2007 12:27:36 +0900 Subject: [SciPy-user] How to make NumPy-1.0.3 work with SciPy-0.5.2 ? In-Reply-To: <46BB0B6E.9040000@airtelbroadband.in> References: <46BA9170.7030603@airtelbroadband.in> <46BA9F21.1020508@ar.media.kyoto-u.ac.jp> <46BB0B6E.9040000@airtelbroadband.in> Message-ID: <46BBDB28.6030901@ar.media.kyoto-u.ac.jp> R S Ananda Murthy wrote: > Zenwalk-4.6.1 is a derivative of Slackware. This has the following: > > gcc-4.1.2, gcc-gfortran-4.1.2, lapack-3.1.1 which also contains BLAS, > suitesparse-3.0.0. I have also created a link /usr/include/umfpack to > /usr/include/suitesparse so that SciPy and NumPy can locate UMFPACK > header files. I applied all the patches you have applied in your > package. But SciPy build stopped in the middle. > Well, without more details, I cannot help you :) Could you make the build log available somewhere? (both numpy and scipy if possible). > Does the version of LAPACK and SuiteSparse matter? > Yes, but you have the good one for LAPACK. I don't know what SuiteSparse is, but I guess this is Ok. This all depends on the error you got, David From peridot.faceted at gmail.com Fri Aug 10 00:31:43 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 10 Aug 2007 00:31:43 -0400 Subject: [SciPy-user] memory usage In-Reply-To: References: <80c99e790708090604l29d47d5dgcbd47320eef58887@mail.gmail.com> Message-ID: On 09/08/07, Emanuele Zattin wrote: > yeah i noticed that as well. > i think i'm on the right way now... i think this is all due to the > heavy use of broadcasting i did in order to optimize performance... 
i > might as well try to go with some inline C code with nested for loops. If memory is your problem, rather than speed, I'd take a careful look at how your numpy code is written. Minor rewriting of expressions can often save a great deal of temporary space. (Memory allocation is actually extremely fast, so unless you're *running out* of memory, don't worry too much about it.) It's worth distinguishing between temporary memory use - that is, where the intermediate values in your calculation fill up all your RAM but then disappear once the calculation is over - and memory use by arrays you actually want. Are the arrays you actually care about huge? if you've got 2000x2000x3 channels of numpy floats, that's 96 MB; I expect in a comet detection routine you're shuffling a number of such things around? If this is the case, you're going to need to think about your algorithm: can you get away with float32s, which are half as big? can you keep fewer images in memory at once? For temporaries, there are various fairly easy tricks to cut down on the creation of temporaries. Bracketing expressions can make a huge difference: 2*(3*bigarray) makes more temporaries than (2*3)*bigarray (numpy has extremely limited scope for expression optimization). The package numexpr is essentially an expression compiler that will help with this sort of thing. You can also use the output arguments of ufuncs: for example, you can apply sine to an array in place, or add one array into an existing array. Broadcasting isn't necessarily wasteful; A[...,newaxis] doesn't take any more space than A does. Of course, doing something like (A[:,newaxis]*B[newaxis,:])*C does create a big array, which you may be able to avoid doing. Particularly, if you don't know about dot, you should: dot(A,B) is roughly sum(A*B) (it is a matrix product, in fact, for rank-2 arrays, and something analogous for all other ranks). 
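Anne's suggestions about in-place ufuncs and dot can be sketched as follows (toy sizes; numpy assumed available):

```python
import numpy as np

a = np.linspace(0.0, 1.0, 5)
b = np.ones(5)

# In-place ufuncs: the `out` argument reuses a's buffer instead of
# allocating a temporary result array.
np.sin(a, out=a)
np.add(a, b, out=a)          # same effect as a += b, no intermediate array

# dot(x, y) equals sum(x*y) for 1-D arrays but never materializes x*y,
# and it is backed by optimized BLAS/ATLAS routines.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
assert np.dot(x, y) == np.sum(x * y) == 32.0
```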
Not only does it avoid a large temporary array, it's implemented using fast routines from BLAS/ATLAS. Anne From topengineer at gmail.com Fri Aug 10 01:25:33 2007 From: topengineer at gmail.com (HuiChang Moon) Date: Fri, 10 Aug 2007 14:25:33 +0900 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <46B88A4B.2010505@tnw.utwente.nl> References: <46B88A4B.2010505@tnw.utwente.nl> Message-ID: <296323b50708092225w58286b57t1d5df248cc7ea387@mail.gmail.com> Thank you for your help. Thanks to your advice, the warning messages are no longer shown to me. Cheers Moon 2007/8/8, Chris Lee : > > An alternative is to edit the appropriate __init__.py file, it is two > small changes > > so in C:\python25\Lib\site-packages\scipy\misc open the __init__.py > file and near the bottom change > from numpy.testing import ScipyTest > test = ScipyTest().test > > to > > from numpy.testing import NumpyTest > test = NumpyTest().test > > then copy that file across all the machines in the lab and provide it > for students to copy onto their own machines. > > Cheers > Chris > > Ryan Krauss wrote: > > I would like to encourage my students to use Python in my class this > > Fall. The first day of class is 8/20. I have had mediocre luck > > building my own windows installers and I would prefer that the start > > up warnings about scipy.test now being called numpy.test not be > > displayed. > > > > I know that I am basically a consumer here asking for other people's > > time, but what are the odds of a new scipy release before 8/20? Or at > > least a windows installer that fixes the scipy.test warning messages? > > I assume that the svn version of scipy has these messages fixed. I > > could build my own windows installer from svn, but there are always > > test failures when I do it. 
I don't know why :) > > > > Thanks, > > > > Ryan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- > ********************************************** > * Chris Lee * > * Laser physics and nonlinear optics group * > * MESA+ Institute * > * University of Twente * > * Phone: ++31 (0)53 489 3968 * > * fax: ++31 (0) 53 489 1102 * > ********************************************** > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- The master course student, Nano SOI Process Lab. Hanyang University, Korea. Contacts; smartmoon at hanyang.ac.kr HeeChang.Moon at Gmail.com +82-2-2220-0247 +82-10-6455-7444 ________________________________________ Dream but be awake. ________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Aug 10 02:49:45 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 10 Aug 2007 08:49:45 +0200 Subject: [SciPy-user] How to make NumPy-1.0.3 work with SciPy-0.5.2 ? In-Reply-To: <46BBDB28.6030901@ar.media.kyoto-u.ac.jp> References: <46BA9170.7030603@airtelbroadband.in> <46BA9F21.1020508@ar.media.kyoto-u.ac.jp> <46BB0B6E.9040000@airtelbroadband.in> <46BBDB28.6030901@ar.media.kyoto-u.ac.jp> Message-ID: <46BC0A89.7080403@iam.uni-stuttgart.de> David Cournapeau wrote: > R S Ananda Murthy wrote: > >> Zenwalk-4.6.1 is a derivative of Slackware. This has the following: >> >> gcc-4.1.2, gcc-gfortran-4.1.2, lapack-3.1.1 which also contains BLAS, >> suitesparse-3.0.0. I have also created a link /usr/include/umfpack to >> /usr/include/suitesparse so that SciPy and NumPy can locate UMFPACK >> header files. I applied all the patches you have applied in your >> package. But SciPy build stopped in the middle. 
>> >> > Well, without more details, I cannot help you :) Could you make the > build log available somewhere ? (both numpy and scipy if possible). > >> Does the version of LAPACK and SuiteSparse matter? >> >> > Yes, but you have the good one for LAPACK. I don't know what SuiteSparse > is, but I guess this is Ok. This all depends on the error you got, > > See the homepage of Tim Davis for details --> http://www.cise.ufl.edu/~davis/ "The SuiteSparse is a new name for my meta-package, containing all the sparse matrix codes I've authored or co-authored. Most of this meta-package appears as built-in functions in MATLAB (CHOLMOD, UMFPACK, and soon CSparse), Mathematica (UMFPACK), and commercial circuit simulation tools (KLU, BTF)." Cheers, Nils From rsamurti at airtelbroadband.in Fri Aug 10 07:41:04 2007 From: rsamurti at airtelbroadband.in (R S Ananda Murthy) Date: Fri, 10 Aug 2007 17:11:04 +0530 Subject: [SciPy-user] How to make NumPy-1.0.3 work with SciPy-0.5.2 ? In-Reply-To: <46BBDB28.6030901@ar.media.kyoto-u.ac.jp> References: <46BA9170.7030603@airtelbroadband.in><46BA9F21.1020508@ar.media. kyoto-u.ac.jp><46BB0B6E.9040000@airtelbroadband.in> <46BBDB28.6030901@ar.media.kyoto-u.ac.jp> Message-ID: <46BC4ED0.7000305@airtelbroadband.in> Hello David, Please pardon me for writing to you directly. Can I send you the build log since I may not be able to post it to the mailing list? Regards, Anand David Cournapeau wrote: > R S Ananda Murthy wrote: > >> Zenwalk-4.6.1 is a derivative of Slackware. This has the following: >> >> gcc-4.1.2, gcc-gfortran-4.1.2, lapack-3.1.1 which also contains BLAS, >> suitesparse-3.0.0. I have also created a link /usr/include/umfpack to >> /usr/include/suitesparse so that SciPy and NumPy can locate UMFPACK >> header files. I applied all the patches you have applied in your >> package. But SciPy build stopped in the middle. >> >> > Well, without more details, I cannot help you :) Could you make the > build log available somewhere ? 
(both numpy and scipy if possible). > >> Does the version of LAPACK and SuiteSparse matter? >> >> > Yes, but you have the good one for LAPACK. I don't know what SuiteSparse > is, but I guess this is Ok. This all depends on the error you got, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From fredmfp at gmail.com Fri Aug 10 08:10:57 2007 From: fredmfp at gmail.com (fred) Date: Fri, 10 Aug 2007 14:10:57 +0200 Subject: [SciPy-user] movies on wiki ? Message-ID: <46BC55D1.3040003@gmail.com> Hi all, Is it possible to add movies in the wiki pages ? If yes, which format ? TIA. Cheers, -- http://scipy.org/FredericPetit From zunzun at zunzun.com Fri Aug 10 09:29:42 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Fri, 10 Aug 2007 09:29:42 -0400 Subject: [SciPy-user] movies on wiki ? In-Reply-To: <46BC55D1.3040003@gmail.com> References: <46BC55D1.3040003@gmail.com> Message-ID: <20070810132942.GA26220@zunzun.com> On Fri, Aug 10, 2007 at 02:10:57PM +0200, fred wrote: > > Is it possible to add movies in the wiki pages ? Perhaps if a few multi-petabyte drives were added... James From lbolla at gmail.com Fri Aug 10 09:37:31 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 10 Aug 2007 15:37:31 +0200 Subject: [SciPy-user] movies on wiki ? In-Reply-To: <20070810132942.GA26220@zunzun.com> References: <46BC55D1.3040003@gmail.com> <20070810132942.GA26220@zunzun.com> Message-ID: <80c99e790708100637x6afc66c4y4a4aa3260d756388@mail.gmail.com> have you tried with animated-gifs? not very nice, but small... On 8/10/07, zunzun at zunzun.com wrote: > > On Fri, Aug 10, 2007 at 02:10:57PM +0200, fred wrote: > > > > Is it possible to add movies in the wiki pages ? > > Perhaps if a few multi-petabyte drives were added... 
> > James > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Fri Aug 10 09:42:33 2007 From: fredmfp at gmail.com (fred) Date: Fri, 10 Aug 2007 15:42:33 +0200 Subject: [SciPy-user] movies on wiki ? In-Reply-To: <80c99e790708100637x6afc66c4y4a4aa3260d756388@mail.gmail.com> References: <46BC55D1.3040003@gmail.com> <20070810132942.GA26220@zunzun.com> <80c99e790708100637x6afc66c4y4a4aa3260d756388@mail.gmail.com> Message-ID: <46BC6B49.9040608@gmail.com> lorenzo bolla a ?crit : > have you tried with animated-gifs? > not very nice, but small... Well, I'll give it a try if I can (I make my movies with xvidcap). Thx. -- http://scipy.org/FredericPetit From emanuelez at gmail.com Fri Aug 10 10:10:48 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Fri, 10 Aug 2007 16:10:48 +0200 Subject: [SciPy-user] movies on wiki ? In-Reply-To: <46BC6B49.9040608@gmail.com> References: <46BC55D1.3040003@gmail.com> <20070810132942.GA26220@zunzun.com> <80c99e790708100637x6afc66c4y4a4aa3260d756388@mail.gmail.com> <46BC6B49.9040608@gmail.com> Message-ID: what about saving the videos on stage6.com (much better quality than youtube!) and linking them in the wiki? Actually the wiki already seems to have problems... am i the only one getting errors loading the cookbook page? On 8/10/07, fred wrote: > lorenzo bolla a ?crit : > > have you tried with animated-gifs? > > not very nice, but small... > Well, I'll give it a try if I can > (I make my movies with xvidcap). > > Thx. > > -- > http://scipy.org/FredericPetit > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. 
I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From fredmfp at gmail.com Fri Aug 10 10:17:51 2007 From: fredmfp at gmail.com (fred) Date: Fri, 10 Aug 2007 16:17:51 +0200 Subject: [SciPy-user] movies on wiki ? In-Reply-To: References: <46BC55D1.3040003@gmail.com> <20070810132942.GA26220@zunzun.com> <80c99e790708100637x6afc66c4y4a4aa3260d756388@mail.gmail.com> <46BC6B49.9040608@gmail.com> Message-ID: <46BC738F.8060802@gmail.com> Emanuele Zattin a ?crit : > what about saving the videos on stage6.com (much better quality than > youtube!) and linking them in the wiki? Actually the wiki already > seems to have problems... am i the only one getting errors loading the > cookbook page? > Got problem too (slowness, or server error), as I'm modifying a few of my pages. BTW, thanks for the hint. -- http://scipy.org/FredericPetit From gruben at bigpond.net.au Fri Aug 10 11:03:07 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 11 Aug 2007 01:03:07 +1000 Subject: [SciPy-user] movies on wiki ? In-Reply-To: References: <46BC55D1.3040003@gmail.com> <20070810132942.GA26220@zunzun.com> <80c99e790708100637x6afc66c4y4a4aa3260d756388@mail.gmail.com> <46BC6B49.9040608@gmail.com> Message-ID: <46BC7E2B.5080309@bigpond.net.au> I've also been getting errors trying to access the cookbook on and off for about a week, and other errors with search for a bit longer. Gary R. Emanuele Zattin wrote: > what about saving the videos on stage6.com (much better quality than > youtube!) and linking them in the wiki? Actually the wiki already > seems to have problems... am i the only one getting errors loading the > cookbook page? From chris.lasher at gmail.com Fri Aug 10 13:28:35 2007 From: chris.lasher at gmail.com (Chris Lasher) Date: Fri, 10 Aug 2007 13:28:35 -0400 Subject: [SciPy-user] movies on wiki ? 
In-Reply-To: <46BC55D1.3040003@gmail.com> References: <46BC55D1.3040003@gmail.com> Message-ID: <128a885f0708101028i5575ae8x7102b554eb724695@mail.gmail.com> On 8/10/07, fred wrote: > Hi all, > > Is it possible to add movies in the wiki pages ? > > If yes, which format ? Hi Fred, I suggest uploading the movies to another site and linking them in the Wiki. I'm really fond of ShowMeDo.com as a host for instructional movies; they have very good feedback widgets. I've produced two series for Software Carpentry that they kindly host: http://showmedo.com/videos/series?id=94 http://showmedo.com/videos/series?id=95 Chris From fredmfp at gmail.com Fri Aug 10 16:37:52 2007 From: fredmfp at gmail.com (fred) Date: Fri, 10 Aug 2007 22:37:52 +0200 Subject: [SciPy-user] movies on wiki ? In-Reply-To: <128a885f0708101028i5575ae8x7102b554eb724695@mail.gmail.com> References: <46BC55D1.3040003@gmail.com> <128a885f0708101028i5575ae8x7102b554eb724695@mail.gmail.com> Message-ID: <46BCCCA0.8080506@gmail.com> Chris Lasher a ?crit : > On 8/10/07, fred wrote: > >> Hi all, >> >> Is it possible to add movies in the wiki pages ? >> >> If yes, which format ? >> > > Hi Fred, > > I suggest uploading the movies to another site and linking them in the > Wiki. I'm really fond of ShowMeDo.com as a host for instructional > movies; they have very good feedback widgets. I've produced two series > for Software Carpentry that they kindly host: > http://showmedo.com/videos/series?id=94 > http://showmedo.com/videos/series?id=95 > Thanks all you guys ! -- http://scipy.org/FredericPetit From jstrunk at enthought.com Fri Aug 10 18:31:07 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Fri, 10 Aug 2007 17:31:07 -0500 Subject: [SciPy-user] Wiki errors Message-ID: <200708101731.08013.jstrunk@enthought.com> I think the Moin wiki error are caused by Trac taking too much CPU time since it is running as CGI. Unfortunately, I was unable to solve that today. 
I will return from my vacation on August 27th and work on it then. I'm sorry for the trouble. Thanks, Jeff From fperez.net at gmail.com Fri Aug 10 19:15:00 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 10 Aug 2007 17:15:00 -0600 Subject: [SciPy-user] Testing BOF at scipy'07? In-Reply-To: <20070810212939.GD10496@caltech.edu> References: <20070807191046.GC8313@caltech.edu> <20070808182508.GB27388@caltech.edu> <20070810212939.GD10496@caltech.edu> Message-ID: On 8/10/07, Titus Brown wrote: > Hi, Fernando, > > since our interest group is interested in but not focused on testing, I > think we'll actually just shift the interest group meeting back a week > -- we moved it up so that there'd be something for SciPy attendees to > do, but the testing BoF fills that need nicely! OK, thanks. Sorry for sounding a bit imposing, but I know it's easy to drift out of focus if we put too many things on one table. I hope nobody was bothered. Cheers, f From stefan at sun.ac.za Fri Aug 10 20:26:44 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 11 Aug 2007 02:26:44 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> References: <46B88A4B.2010505@tnw.utwente.nl> <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> <20070809091514.GH9452@mentat.za.net> <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> Message-ID: <20070811002644.GG17460@mentat.za.net> On Fri, Aug 10, 2007 at 11:33:01AM +0900, David Cournapeau wrote: > Building and distributing binaries is really hard work (and not that > grateful :) ). The thing you want is automatic and repeatable builds, > which means having the possibility to get a specified environment at > will. Albert reminded me that we could potentially tweak the buildbot to do this packaging. There are, however, two issues. First, the buildbot currently only supports numpy. 
Compiling multiple projects (residing in different repositories) with buildbot isn't straightforward, but with a bit of hacking it should be possible (the source is Python, how hard can it be ;). Second, those machines all have different configurations of ATLAS, MKL, etc. > Call me crazy, but I think the best thing to distribute binaries would > be the ability to do cross compilation; unfortunately, cross compilation > for python is not easy at all. I intend to take a look at GUB > (http://lilypond.org/~janneke/bzr/gub.darcs/) which is used by lilypond > to compile binaries for many platforms, including mac os X, windows, > linux and FreeBSD. Another option is to build the packages under vmware. If you can obtain a licensed copy of windows somewhere, there are instructions on the internet on how to create a vmware image. Vmplayer is a free download. Regards St?fan From fperez.net at gmail.com Fri Aug 10 20:34:31 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 10 Aug 2007 18:34:31 -0600 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <20070811002644.GG17460@mentat.za.net> References: <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> <20070809091514.GH9452@mentat.za.net> <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> <20070811002644.GG17460@mentat.za.net> Message-ID: On 8/10/07, Stefan van der Walt wrote: > Another option is to build the packages under vmware. If you can > obtain a licensed copy of windows somewhere, there are instructions on > the internet on how to create a vmware image. Vmplayer is a free > download. The player is free but doesn't allow you to create new images, only to run existing ones. 
The vmware SERVER, however, is now also freely available http://www.vmware.com/download/server/ The advantage is that the server product allows the creation of new images, hence is the one you want if you are going to make a fresh WinXP install on a linux box. I've used it successfully under Ubuntu for a while, and I love it. cheers, f From david at ar.media.kyoto-u.ac.jp Sat Aug 11 02:12:28 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 11 Aug 2007 15:12:28 +0900 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: References: <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> <20070809091514.GH9452@mentat.za.net> <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> <20070811002644.GG17460@mentat.za.net> Message-ID: <46BD534C.70202@ar.media.kyoto-u.ac.jp> Fernando Perez wrote: > On 8/10/07, Stefan van der Walt wrote: > >> Another option is to build the packages under vmware. If you can >> obtain a licensed copy of windows somewhere, there are instructions on >> the internet on how to create a vmware image. Vmplayer is a free >> download. > > The player is free but doesn't allow you to create new images, only to > run existing ones. The vmware SERVER, however, is now also freely > available > > http://www.vmware.com/download/server/ > > The advantage is that the server product allows the creation of new > images, hence is the one you want if you are going to make a fresh > WinXP install on a linux box. I've used it successfully under Ubuntu > for a while, and I love it. > You can also run windows under linux with other ways, including qemu, xen, etc... The problem is that I find it really difficult to use for a repetive process. 
Basically, what would be nice, and maybe doable with buildbot, I don't know, is the following: - have a "default" windows image, with nothing installed (equivalent of bootstraping minimal installation in a chrott environement under linux) - being able to script it through some network process to automatically build the thing (where buildbot would help ?) The biggest problem is the licenses problems: you cannot distribute the images for various reasons, etc... David From stefan at sun.ac.za Sat Aug 11 07:30:39 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 11 Aug 2007 13:30:39 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: References: <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> <20070809091514.GH9452@mentat.za.net> <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> <20070811002644.GG17460@mentat.za.net> Message-ID: <20070811113039.GJ17460@mentat.za.net> On Fri, Aug 10, 2007 at 06:34:31PM -0600, Fernando Perez wrote: > On 8/10/07, Stefan van der Walt wrote: > > > Another option is to build the packages under vmware. If you can > > obtain a licensed copy of windows somewhere, there are instructions on > > the internet on how to create a vmware image. Vmplayer is a free > > download. > > The player is free but doesn't allow you to create new images, only to > run existing ones. The vmware SERVER, however, is now also freely > available The images can be created with qemu: qemu-img create -f vmdk imagename.vmdk nG Cheers St?fan From peter.pootytang at gmail.com Mon Aug 13 14:58:05 2007 From: peter.pootytang at gmail.com (Peter PootyTang) Date: Mon, 13 Aug 2007 14:58:05 -0400 Subject: [SciPy-user] Welch's ttest Message-ID: Hello, I am converting some code from 'R' to using scipy, and the ttest results aren't matching. I figured out that the reason is because of the versions of the ttest that are being used. 
In 'R' I am doing a ttest without the assumption that the deviations are the same for both samples. However, I can't find the same test in scipy. tt_ind might work, except my baseline and sample sets have different sizes. Does anyone know how to implement Welch's ttest using scipy? http://en.wikipedia.org/wiki/Welch%27s_t_test Thanks! Peter From amcmorl at gmail.com Mon Aug 13 16:24:20 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Tue, 14 Aug 2007 08:24:20 +1200 Subject: [SciPy-user] Welch's ttest In-Reply-To: References: Message-ID: On 14/08/07, Peter PootyTang wrote: > Hello, > > I am converting some code from 'R' to using scipy, and the ttest > results aren't matching. I figured out that the reason is because of > the versions of the ttest that are being used. In 'R' I am doing a > ttest without the assumption that the deviations are the same for both > samples. However, I can't find the same test in scipy. tt_ind might > work, except my baseline and sample sets have different sizes. > > Does anyone know how to implement Welch's ttest using scipy? > > http://en.wikipedia.org/wiki/Welch%27s_t_test You'll need to check my working, but I did have a go at implementing this sometime back from Sokal and Rohlf's Biometry text. I'm sorry the code is pretty ugly. Please let me know if you decide the implementation is wrong, so I can patch it up. Ideally I (or someone else) should create some unittests for this.
def welchs_approximate_ttest(n1, mean1, sem1, \
                             n2, mean2, sem2, alpha):
    # calculate standard variances of the means
    svm1 = sem1**2 * n1
    svm2 = sem2**2 * n2
    print "standard variance of the mean 1: %0.4f" % svm1
    print "standard variance of the mean 2: %0.4f" % svm2
    print ""
    t_s_prime = (mean1 - mean2)/n.sqrt(svm1/n1+svm2/n2)
    print "t'_s = %0.4f" % t_s_prime
    print ""
    t_alpha_df1 = scipy.stats.t.ppf(1-alpha/2, n1 - 1)
    t_alpha_df2 = scipy.stats.t.ppf(1-alpha/2, n2 - 1)
    print "t_alpha[%d] = %0.4f" % (n1-1, t_alpha_df1)
    print "t_alpha[%d] = %0.4f" % (n2-1, t_alpha_df2)
    print ""
    t_alpha_prime = (t_alpha_df1 * sem1**2 + t_alpha_df2 * sem2**2) / \
                    (sem1**2 + sem2**2)
    print "t'_alpha = %0.4f" % t_alpha_prime
    print ""
    if abs(t_s_prime) > t_alpha_prime:
        print "Significantly different"
        return True
    else:
        print "Not significantly different"
        return False

Angus. -- AJC McMorland, PhD Student Physiology, University of Auckland From fredmfp at gmail.com Mon Aug 13 20:57:52 2007 From: fredmfp at gmail.com (fred) Date: Tue, 14 Aug 2007 02:57:52 +0200 Subject: [SciPy-user] numpy/f2py issue... Message-ID: <46C0FE10.7030501@gmail.com> Hi all, I have just built svn numpy #3964, and compiling my fortran codes does not work anymore. I get the following error message:

Traceback (most recent call last):
  File "/usr/local/bin/f2py", line 22, in ?
    from numpy.f2py import main
ImportError: No module named numpy.f2py

Any clue ? TIA. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Tue Aug 14 06:21:09 2007 From: fredmfp at gmail.com (fred) Date: Tue, 14 Aug 2007 12:21:09 +0200 Subject: [SciPy-user] numpy/f2py issue... In-Reply-To: <46C0FE10.7030501@gmail.com> References: <46C0FE10.7030501@gmail.com> Message-ID: <46C18215.5090104@gmail.com> fred a écrit : > > Any clue ? Sorry, I made a mistake in my Makefile.
-- http://scipy.org/FredericPetit From c.gillespie at ncl.ac.uk Tue Aug 14 06:55:41 2007 From: c.gillespie at ncl.ac.uk (Colin Gillespie) Date: Tue, 14 Aug 2007 11:55:41 +0100 Subject: [SciPy-user] Double integration Message-ID: <46C18A2D.9090007@ncl.ac.uk> Dear all, What would be the best way of solving this double integration using scipy: int_0^1 int_x^1 y dy dx Many thanks Colin -- Dr Colin Gillespie http://www.mas.ncl.ac.uk/~ncsg3/ From gruben at bigpond.net.au Tue Aug 14 07:42:23 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue, 14 Aug 2007 21:42:23 +1000 Subject: [SciPy-user] Double integration In-Reply-To: <46C18A2D.9090007@ncl.ac.uk> References: <46C18A2D.9090007@ncl.ac.uk> Message-ID: <46C1951F.1070508@bigpond.net.au> Hi Colin, Try this: In [8]: import scipy.integrate.quadpack as siq In [9]: siq.dblquad(lambda y,x:y, 0, 1, lambda x:x, lambda x:1) Out[9]: (0.33333333333333331, 3.7007434154171879e-015) Gary R. Colin Gillespie wrote: > Dear all, > > What would be the best way of solving this double integration using scipy: > > int_0^1 int_x^1 y dy dx > > Many thanks > > Colin From emanuelez at gmail.com Tue Aug 14 07:56:33 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Tue, 14 Aug 2007 13:56:33 +0200 Subject: [SciPy-user] returning a vector from weave.inline Message-ID: Hello, i would like to have a vector< vector > as return_val using weave.inline but so far i did not manage to since weave does not seem to know how to convert it to a python datatype. any hint? -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose ? which is the way it really is, as far as I can tell, possibly. 
It doesn't frighten me.- Richard Feynman From mnandris at btinternet.com Tue Aug 14 08:08:45 2007 From: mnandris at btinternet.com (Michael Nandris) Date: Tue, 14 Aug 2007 13:08:45 +0100 (BST) Subject: [SciPy-user] Possible bug in scipy.stats.rv_discrete Message-ID: <738370.79255.qm@web86509.mail.ird.yahoo.com> Hi, I think there may be an issue with the way rv_discrete orders its output when it encounters zeros in the input, causing non-zero probabilities to creep up towards the end of the output. If you are attempting to track the accumulation of states in an n-state Markov chain, this is a problem! I have had a look at the API but can't figure it. Any help at fixing this would be much appreciated. regards M.Nandris

###########
# test.py
from __future__ import division
from scipy.stats import rv_discrete

# looks at output of scipy.stats.rv_discrete
STATES = [0,1,2,3]
SIZE = 10000

def count(inpt):
    opt = dict(zip( STATES, (0,0,0,0) ))
    for i in inpt:
        opt[i]+=1
    while opt:
        k,v = opt.popitem()
        print k, ' : ', v/SIZE

# probability should reflect that of the input distribution
def bugdemo():
    test0 = rv_discrete( name='sample', values=( STATES, [ 0.3, 0.4, 0.2, 0.1 ] ) ).rvs( size=SIZE )
    count(test0)
    print 'Output .3,.4,.2,.1 approximately matches input in the correct order: the problem only occurs when zeros are included in the initial distribution (see below)';print

    test1 = rv_discrete( name='sample', values=( STATES, [ 0.5, 0.4, 0.0, 0.1 ] ) ).rvs( size=SIZE )
    count(test1)
    print 'State 1 and State 2 have been mixed up.'; print

    test2 = rv_discrete( name='sample', values=( STATES, [ 0.6, 0.0, 0.0, 0.4 ] ) ).rvs( size=SIZE )
    count(test2)
    print 'State 2 is sampled with the probability that State 0 should be sampled.'; print

    test3 = rv_discrete( name='sample', values=( STATES, [ 0.0, 1.0, 0.0, 0.0 ] ) ).rvs( size=SIZE )
    count(test3)
    print 'State 3 appears to have the probability State 1 should have.'

if __name__=='__main__':
    bugdemo()

-------------- next part -------------- An HTML attachment was scrubbed... URL: From lev at columbia.edu Tue Aug 14 08:40:30 2007 From: lev at columbia.edu (Lev Givon) Date: Tue, 14 Aug 2007 08:40:30 -0400 Subject: [SciPy-user] Double integration In-Reply-To: <46C18A2D.9090007@ncl.ac.uk> References: <46C18A2D.9090007@ncl.ac.uk> Message-ID: <20070814124029.GD21580@localhost.ee.columbia.edu> Received from Colin Gillespie on Tue, Aug 14, 2007 at 06:55:41AM EDT: > Dear all, > > What would be the best way of solving this double integration using scipy: > > int_0^1 int_x^1 y dy dx > > Many thanks > > Colin Try this:

from scipy.integrate import dblquad
dblquad(lambda y,x:y,0.0,1.0,lambda x:x,lambda x:1.0)

L.G. From aisaac at american.edu Tue Aug 14 09:59:20 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 14 Aug 2007 09:59:20 -0400 Subject: [SciPy-user] Welch's ttest In-Reply-To: References: Message-ID: On Tue, 14 Aug 2007, Angus McMorland apparently wrote: > def welchs_approximate_ttest Just a reminder that nowadays if you post code it is helpful to state explicitly what the license is, even if you think it is simple enough to obviously belong in the public domain. On this list, public domain, BSD, or MIT licensing are particularly welcome, I believe. IMO, there should be an explicit policy that code posted to the list without a licensing statement will be in the public domain. I believe that is the intent of such posts, in general. But this policy should be presented as part of the list registration. Cheers, Alan Isaac From ryanlists at gmail.com Tue Aug 14 10:10:59 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 14 Aug 2007 09:10:59 -0500 Subject: [SciPy-user] formatting python code for html Message-ID: Can anyone recommend a good tool to turn Python code into
I am thinking for something that gets results like the Latex listings package without having to run Latex - and with output that could display natively in a web browser, i.e. not pdf or dvi. Any suggestions would be welcome. A python script that generates nicely formatted html would be the best solution I can think of. Thanks, Ryan From david at ar.media.kyoto-u.ac.jp Tue Aug 14 10:15:06 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 14 Aug 2007 23:15:06 +0900 Subject: [SciPy-user] formatting python code for html In-Reply-To: References: Message-ID: <46C1B8EA.7090902@ar.media.kyoto-u.ac.jp> Ryan Krauss wrote: > Can anyone recommend a good tool to turn Python code into something > nicely formatted to display in html? I am thinking for something that > gets results like the Latex listings package without having to run > Latex - and with output that could display natively in a web browser, > i.e. not pdf or dvi. > > Any suggestions would be welcome. A python script that generates > nicely formatted html would be the best solution I can think of. > http://pygments.org/ is what I used for my own website. It does many languages, and is written in python. David From aisaac at american.edu Tue Aug 14 10:23:46 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 14 Aug 2007 10:23:46 -0400 Subject: [SciPy-user] formatting python code for html In-Reply-To: References: Message-ID: On Tue, 14 Aug 2007, Ryan Krauss apparently wrote: > Can anyone recommend a good tool to turn Python code into > something nicely formatted to display in html? 
http://pygments.org/ http://docutils.sourceforge.net/sandbox/code-block-directive/docs/syntax-highlight.html hth, Alan Isaac From tgrav at mac.com Tue Aug 14 12:26:11 2007 From: tgrav at mac.com (Tommy Grav) Date: Tue, 14 Aug 2007 12:26:11 -0400 Subject: [SciPy-user] Are two distributions different Message-ID: <051EBD13-14ED-4C6C-A2DF-53737F835594@mac.com> I have two binned distributions R and S (both generated using numpy.histogram() with the same bins keyword). I would like to check if these two distributions are different using the chi squared and k-s tests. I know that scipy implements these two tests, but I have been unable to figure out how to use them. Any help is appreciated. Cheers Tommy From scyang at nist.gov Tue Aug 14 13:06:13 2007 From: scyang at nist.gov (Stephen Yang) Date: Tue, 14 Aug 2007 13:06:13 -0400 Subject: [SciPy-user] try/finally exceptions dying Message-ID: <46C1E105.90706@nist.gov> Hello all, In order to churn through a whole bunch of data sets (some good, some bad..) and get all the errors at the end, (the data will be updated to make it all good later on) I implemented a try/finally block with a higher level handler to catch errors before they propagate to the default handler and crash the program. Something like this:

def doStuff(args):
    if(args says we should keep going):
        try:
            stuff
        finally:
            update args
            doStuff(updated args)

def runWithErrorHandling():
    try:
        doStuff(args)
    except (exception1, exception2), data:
        append exception data to a list of errors

runWithErrorHandling()

Unfortunately, this churns through all the data and I only get the last exception that occurred in the error list, not all of them, as I expected. What is going on? I thought finally always propagated the exception all the way up the stack until it was handled. Any thoughts are appreciated. Stephen -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ryanlists at gmail.com Tue Aug 14 13:39:58 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 14 Aug 2007 12:39:58 -0500 Subject: [SciPy-user] formatting python code for html In-Reply-To: References: Message-ID: Pygments is perfect. Thanks to David and Alan. Ryan On 8/14/07, Alan G Isaac wrote: > On Tue, 14 Aug 2007, Ryan Krauss apparently wrote: > > Can anyone recommend a good tool to turn Python code into > > something nicely formatted to display in html? > > http://pygments.org/ > > http://docutils.sourceforge.net/sandbox/code-block-directive/docs/syntax-highlight.html > > hth, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From kwmsmith at gmail.com Tue Aug 14 17:48:51 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Tue, 14 Aug 2007 16:48:51 -0500 Subject: [SciPy-user] Mac OS X(PPC 10.4.10): import scipy gives "Fatal Python error: Interpreter not initialized" In-Reply-To: References: Message-ID: Hi -- Any help is appreciated -- it's probably something simple... relevant versions: gfortran 4.3.0 gcc 4.0.1 mac OS X 10.4.10 (PPC) numpy 1.0.4dev3884 scipy 0.5.3dev3225 python 2.5.1 Built and installed: $ python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build $ sudo python setup.py install No problems. But importing gives fatal error: $ python Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Fatal Python error: Interpreter not initialized (version mismatch?) Abort trap $ I can send the output from the build if you like (it's too long to attach in its entirety). Thanks for your help. 
Kurt From amcmorl at gmail.com Tue Aug 14 19:06:34 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Wed, 15 Aug 2007 11:06:34 +1200 Subject: [SciPy-user] Welch's ttest In-Reply-To: References: Message-ID: On 15/08/07, Alan G Isaac wrote: > On Tue, 14 Aug 2007, Angus McMorland apparently wrote: > > def welchs_approximate_ttest > Just a reminder that nowadays if you post code it is > helpful to state explicitly what the license is, > even if you think it is simple enough to obviously > belong in the public domain. On this list, > public domain, BSD, or MIT licensing are particularly > welcome, I believe. An excellent reminder, thanks Alan. After a quick check to remind myself what these all mean, the BSD licence will do fine for that code. For completeness then, the code becomes: def welchs_approximate_ttest(n1, mean1, sem1, \ n2, mean2, sem2, alpha): '''Welch''s approximate t-test for the difference of two means of heteroscedasctic populations. Implemented from Biometry, Sokal and Rohlf, 3rd ed., 1995, Box 13.4 :Parameters: n1 : int number of variates in sample 1 n2 : int number of variates in sample 2 mean1 : float mean of sample 1 mean2 : float mean of sample 2 sem1 : float standard error of mean1 sem2 : float standard error of mean2 alpha : float desired level of significance of test :Returns: significant : bool True if means are significantly different, else False t_s_prime : float t_prime value for difference of means t_alpha_prime : float critical value of t_prime at given level of significance Copyright (c) 2007, Angus McMorland All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the University of Auckland, New Zealand nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.''' svm1 = sem1**2 * n1 svm2 = sem2**2 * n2 t_s_prime = (mean1 - mean2)/n.sqrt(svm1/n1+svm2/n2) t_alpha_df1 = scipy.stats.t.ppf(1-alpha/2, n1 - 1) t_alpha_df2 = scipy.stats.t.ppf(1-alpha/2, n2 - 1) t_alpha_prime = (t_alpha_df1 * sem1**2 + t_alpha_df2 * sem2**2) / \ (sem1**2 + sem2**2) return abs(t_s_prime) > t_alpha_prime, t_s_prime, t_alpha_prime and a test class as well... 
class TestBiometry(NumpyTestCase): def test_welchs_approximate_ttest(self): chimpanzees = (37, 0.115, 0.017) # n, mean, sem gorillas = (6, 0.511, 0.144) case1 = welchs_approximate_ttest(chimpanzees[0], \ chimpanzees[1], \ chimpanzees[2], \ gorillas[0], \ gorillas[1], \ gorillas[2], \ 0.05) self.assertTrue( case1[0] ) self.assertAlmostEqual( case1[1], -2.73, 2 ) self.assertAlmostEqual( case1[2], 2.564, 2 ) female = (10, 8.5, n.sqrt(3.6)/n.sqrt(10)) male = (10, 4.8, n.sqrt(0.9)/n.sqrt(10)) case2 = welchs_approximate_ttest(female[0], \ female[1], \ female[2], \ male[0], \ male[1], \ male[2], 0.001) self.assertTrue( case2[0] ) self.assertAlmostEqual( case2[1], 5.52, 2 ) self.assertAlmostEqual( case2[2], 4.781, 2 ) In case it's useful to anyone the standard form of the BSD licence can be found here: http://www.opensource.org/licenses/bsd-license.php > IMO, there should be an explicit policy that code > posted to the list without a licensing statement > will be in the public domain. I believe that is > the intent of such posts, in general. But this policy > should be presented as part of the list registration. Sounds like a good plan to me. > Cheers, > Alan Isaac -- AJC McMorland, PhD Student Physiology, University of Auckland From aisaac at american.edu Tue Aug 14 22:17:19 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 14 Aug 2007 22:17:19 -0400 Subject: [SciPy-user] Welch's ttest In-Reply-To: References: Message-ID: On Wed, 15 Aug 2007, Angus McMorland apparently wrote: > class TestBiometry(NumpyTestCase): lowercase 't' needed, I think. 
Cheers, Alan Isaac From peridot.faceted at gmail.com Tue Aug 14 23:59:25 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 14 Aug 2007 23:59:25 -0400 Subject: [SciPy-user] Are two distributions different In-Reply-To: <051EBD13-14ED-4C6C-A2DF-53737F835594@mac.com> References: <051EBD13-14ED-4C6C-A2DF-53737F835594@mac.com> Message-ID: On 14/08/07, Tommy Grav wrote: > I have two binned distributions R and S (both generated using the > numpy.histogram() with the same bins keyword. I would like to > check if these two distributions are different using the chi squared > and k-s tests. I know that scipy implements these two tests, > but I have been unable to figure out how to use them. > Any help is appreciated. For the K-S test, don't bin the samples: it works directly on unbinned data. (Generally, don't bin things unless you have to, it tends to introduce artificial features which are hard to understand.) IIRC, if you simply supply kstwo with two samples (that is, arrays of numbers, each representing a sample), it will return the probability that samples this different could be drawn from the same distribution, and it will also return some internal number you don't care about. I haven't used the chisquared test in numpy. Anne From pablo_mitchell at yahoo.com Wed Aug 15 00:23:19 2007 From: pablo_mitchell at yahoo.com (pablo mitchell) Date: Tue, 14 Aug 2007 21:23:19 -0700 (PDT) Subject: [SciPy-user] TimeSeries Package on Mac OS X Message-ID: <587141.40951.qm@web30805.mail.mud.yahoo.com> I have a a Fink installation of Python 2.5 and SciPy. Following the TimeSeries install instructions appear to work fine (svn download look fine and python setup.py install seem to work fine). The trace starting up the relevant modules follows (please advise on a possible fix). The ScipyTest/NumpyTest deprecation msgs don't worry me too much. Thanks -- P $ python Python 2.5 (r25:51908, Aug 11 2007, 14:49:19) [GCC 4.0.1 (Apple Computer, Inc. 
build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy /sw/lib/python2.5/site-packages/scipy/misc/__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test >>> import maskedarray >>> import timeseries /sw/lib/python2.5/site-packages/scipy/linalg/__init__.py:32: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test /sw/lib/python2.5/site-packages/scipy/special/__init__.py:22: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test /sw/lib/python2.5/site-packages/scipy/optimize/__init__.py:17: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test /sw/lib/python2.5/site-packages/scipy/interpolate/__init__.py:15: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test /sw/lib/python2.5/site-packages/scipy/integrate/__init__.py:16: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test Traceback (most recent call last): File "", line 1, in File "/sw/lib/python2.5/site-packages/timeseries/__init__.py", line 42, in from lib import filters, interpolate, moving_funcs File "/sw/lib/python2.5/site-packages/timeseries/lib/filters.py", line 18, in from scipy.signal import convolve, get_window File "/sw/lib/python2.5/site-packages/scipy/signal/__init__.py", line 12, in from signaltools import * File "/sw/lib/python2.5/site-packages/scipy/signal/signaltools.py", line 7, in from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2 File "/sw/lib/python2.5/site-packages/scipy/fftpack/__init__.py", line 10, in from basic import * File "/sw/lib/python2.5/site-packages/scipy/fftpack/basic.py", line 13, in import _fftpack as fftpack ImportError: dlopen(/sw/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so, 2): Symbol not found: 
_fftc8_4 Referenced from: /sw/lib/python2.5/site-packages/scipy/fftpack/_fftpack.so Expected in: dynamic lookup -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Aug 15 00:37:47 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 15 Aug 2007 00:37:47 -0400 Subject: [SciPy-user] TimeSeries Package on Mac OS X In-Reply-To: <587141.40951.qm@web30805.mail.mud.yahoo.com> References: <587141.40951.qm@web30805.mail.mud.yahoo.com> Message-ID: <200708150037.48009.pgmdevlist@gmail.com> On Wednesday 15 August 2007 00:23:19 pablo mitchell wrote: Pablo, Thanks for giving TimeSeries a try. Please note that this package was developed partly on Windows w/ Python 2.5, partly on x86_64 w/ Python 2.4. None of us (Matt Knox and myself) have access to Mac machines, which will make it difficult to test what's going on. The problem seems related to the scipy.fftpack module. Is this module properly installed on your machine? Does it run OK? Do the tests pass? Apart from that, it's fairly minor: you probably won't be able to use the moving functions to process your data for a while, but you can still use the rest of the package. Let us know how it goes: we need your feedback! Thanks again P. From fperez.net at gmail.com Wed Aug 15 01:38:27 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 14 Aug 2007 23:38:27 -0600 Subject: [SciPy-user] Testing BOF room scheduling? Message-ID: Hi all, how should we go about scheduling a room for the BOF tomorrow? T. Vaught suggested the Powell-Booth room where the sprints have been taking place (parallel to the tutorials), but if Titus or any other Caltech-er have a better suggestion, I'm all ears. 
Cheers, f From haase at msg.ucsf.edu Wed Aug 15 04:31:32 2007 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 15 Aug 2007 10:31:32 +0200 Subject: [SciPy-user] Welch's ttest In-Reply-To: References: Message-ID: Hi, my two cents: The license should not go full text in the function documentation string. Too much text - makes my IDE (PyShell) explode ... ;-) I would suggest one line: Author: ..... (BSD-style license) Cheers, Sebastian Haase On 8/15/07, Alan G Isaac wrote: > On Wed, 15 Aug 2007, Angus McMorland apparently wrote: > > class TestBiometry(NumpyTestCase): > > lowercase 't' needed, I think. > > Cheers, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From c.j.lee at tnw.utwente.nl Wed Aug 15 06:29:34 2007 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Wed, 15 Aug 2007 12:29:34 +0200 Subject: [SciPy-user] histogramdd question Message-ID: <46C2D58E.6030307@tnw.utwente.nl> Hi All, I have been using histogramdd to generate histograms of a 4-D data set. A histogram should, I believe, return integers but histogramdd returns doubles. Is there a reason for this? Cheers Chris -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From david.huard at gmail.com Wed Aug 15 10:01:47 2007 From: david.huard at gmail.com (David Huard) Date: Wed, 15 Aug 2007 10:01:47 -0400 Subject: [SciPy-user] histogramdd question In-Reply-To: <46C2D58E.6030307@tnw.utwente.nl> References: <46C2D58E.6030307@tnw.utwente.nl> Message-ID: <91cf711d0708150701g6a04950bl1168e8f096e13056@mail.gmail.com> Hi Chris, The output of histogramdd will consist of floats when it is normalized or when non-integer weights are given. 
I thought it was preferable to return floats in all cases so the output is consistent no matter what. It could be changed if there is a compelling argument, but I'd rather not risk the chance of breaking someone's code. Cheers, David 2007/8/15, Chris Lee : > > Hi All, > > I have been using histogramdd to generate histograms of a 4-D data set. > A histogram should, I believe, return integers but histogramdd returns > doubles. Is there are reason for this? > > Cheers > Chris > > -- > ********************************************** > * Chris Lee * > * Laser physics and nonlinear optics group * > * MESA+ Institute * > * University of Twente * > * Phone: ++31 (0)53 489 3968 * > * fax: ++31 (0) 53 489 1102 * > ********************************************** > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prabhu at aero.iitb.ac.in Wed Aug 15 12:56:56 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Wed, 15 Aug 2007 09:56:56 -0700 Subject: [SciPy-user] Testing BOF room scheduling? In-Reply-To: References: Message-ID: <18115.12376.869486.410227@gargle.gargle.HOWL> >>>>> "Fernando" == Fernando Perez writes: Fernando> Hi all, how should we go about scheduling a room for the Fernando> BOF tomorrow? T. Vaught suggested the Powell-Booth Fernando> room where the sprints have been taking place (parallel Fernando> to the tutorials), but if Titus or any other Caltech-er Fernando> have a better suggestion, I'm all ears. I'm all confused about the BoF's on Thursday. I am not sure how we should coordinate the Thursday BoFs. Any ideas? 
cheers, prabhu From prabhu at aero.iitb.ac.in Wed Aug 15 13:12:05 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Wed, 15 Aug 2007 10:12:05 -0700 Subject: [SciPy-user] BoF page updates Message-ID: <18115.13285.638830.792782@gargle.gargle.HOWL> Hello, I've updated the BOF page at the scipy wiki with tentative timings and location for the BoFs on both days. It would be great if the page is updated by the moderators. http://www.scipy.org/SciPy2007/BoFs cheers, Prabhu From millman at berkeley.edu Wed Aug 15 13:15:09 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 15 Aug 2007 10:15:09 -0700 Subject: [SciPy-user] Fwd: NumPy 1.0.3.x and SciPy 0.5.2.x In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Jarrod Millman Date: Aug 15, 2007 9:22 AM Subject: NumPy 1.0.3.x and SciPy 0.5.2.x To: Discussion of Numerical Python , SciPy Developers List Hello, I am hoping to release NumPy 1.0.3.1 and SciPy 0.5.2.1 this weekend. These releases will work with each other and get rid of the annoying deprecation warning about SciPyTest. They are both basically ready to release. If you have some time, please build and install the stable branches and let me know if you have any errors. You can check out the code here: svn co http://svn.scipy.org/svn/numpy/branches/1.0.3.x svn co http://svn.scipy.org/svn/scipy/branches/0.5.2.x Below is a list of the changes I have made. 
NumPy 1.0.3.1 ============ * Adds back get_path to numpy.distutils.misc_utils SciPy 0.5.2.1 ========== * Replaces ScipyTest with NumpyTest * Fixes mio5.py as per revision 2893 * Adds missing test definition in scipy.cluster as per revision 2941 * Synchs odr module with trunk since odr is broken in 0.5.2 * Updates for SWIG > 1.3.29 and fixes memory leak of type 'void *' Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From chanley at stsci.edu Wed Aug 15 13:18:04 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 15 Aug 2007 13:18:04 -0400 Subject: [SciPy-user] BoF page updates In-Reply-To: <18115.13285.638830.792782@gargle.gargle.HOWL> References: <18115.13285.638830.792782@gargle.gargle.HOWL> Message-ID: <46C3354C.4040007@stsci.edu> How do I arrange for a room to meet in? The time will be 8 PM after the dinner. Chris From strang at nmr.mgh.harvard.edu Wed Aug 15 17:14:18 2007 From: strang at nmr.mgh.harvard.edu (Gary Strangman) Date: Wed, 15 Aug 2007 17:14:18 -0400 (EDT) Subject: [SciPy-user] scipy _fftpack.so build failure, centos4 Message-ID: Hi all, I'm trying to build scipy0.5.2 in a non-root location on centos4, as part of an entirely-separate-from-root python install (python2.4.3, numpy1.0b5, gcc3.4.6). I'm having link(?) problems--lots of undefined references--with _fftpack.so. The problem appears similar to the reported problems for Mac users but unsetenv'ing LDFLAGS and CPPFLAGS didn't help. The tail of the build is below. Anyone have any ideas? It doesn't seem to matter whether I do "python setup.py install" or "python setup.py build_src build_clib build_ext build" as I saw for one suggestion .......... 
[snip] build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o(.text+0x1 adf):build/src.linux-i686-2.4/fortranobject.c:218: undefined reference to `PyExc_AttributeError' build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o(.text+0x1 ae7):build/src.linux-i686-2.4/fortranobject.c:218: undefined reference to `PyErr_SetString' /usr/lib/gcc/i386-redhat-linux/3.4.6/libfrtbegin.a(frtbegin.o)(.text+0x35): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -L/usr/X11R6/lib -L/space/nsg/8/users/lib -L/usr/lib -L/usr/local/lib build/temp.linux-i686-2.4/build/src.linux-i686-2.4/Lib/fftpack/_fftpackmodul e.o build/temp.linux-i686-2.4/Lib/fftpack/src/zfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/drfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/zrfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/zfftnd.o build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o -L/space/nsg/8/users/lib -Lbuild/temp.linux-i686-2.4 -ldfftpack -lfftw3 -lg2c -o build/lib.linux-i686-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 /space/nsg/8/users/scipy-0.5.2> Gary From rsamurti at airtelbroadband.in Wed Aug 15 22:03:43 2007 From: rsamurti at airtelbroadband.in (R S Ananda Murthy) Date: Thu, 16 Aug 2007 07:33:43 +0530 Subject: [SciPy-user] Fwd: NumPy 1.0.3.x and SciPy 0.5.2.x In-Reply-To: References: Message-ID: <46C3B07F.7010600@airtelbroadband.in> Hello J. Millman, Both NumPy 1.0.3.1 and SciPy 0.5.2.1 are working properly on Zenwalk-4.6.1. I am ready to build the packages and upload them to Zenwalk repository once you announce stable release of these versions. Thanks for your effort and time. 
Anand Jarrod Millman wrote: > ---------- Forwarded message ---------- > From: Jarrod Millman > Date: Aug 15, 2007 9:22 AM > Subject: NumPy 1.0.3.x and SciPy 0.5.2.x > To: Discussion of Numerical Python , SciPy > Developers List > > > Hello, > > I am hoping to release NumPy 1.0.3.1 and SciPy 0.5.2.1 this weekend. > These releases will work with each other and get rid of the annoying > deprecation warning about SciPyTest. > > They are both basically ready to release. If you have some time, > please build and install the stable branches and let me know if you > have any errors. > > You can check out the code here: > svn co http://svn.scipy.org/svn/numpy/branches/1.0.3.x > svn co http://svn.scipy.org/svn/scipy/branches/0.5.2.x > > Below is a list of the changes I have made. > > NumPy 1.0.3.1 > ============ > * Adds back get_path to numpy.distutils.misc_utils > > SciPy 0.5.2.1 > ========== > * Replaces ScipyTest with NumpyTest > * Fixes mio5.py as per revision 2893 > * Adds missing test definition in scipy.cluster as per revision 2941 > * Synchs odr module with trunk since odr is broken in 0.5.2 > * Updates for SWIG > 1.3.29 and fixes memory leak of type 'void *' > > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > > > From c.j.lee at tnw.utwente.nl Thu Aug 16 04:50:14 2007 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 16 Aug 2007 10:50:14 +0200 Subject: [SciPy-user] histogramdd question In-Reply-To: <91cf711d0708150701g6a04950bl1168e8f096e13056@mail.gmail.com> References: <46C2D58E.6030307@tnw.utwente.nl> <91cf711d0708150701g6a04950bl1168e8f096e13056@mail.gmail.com> Message-ID: <46C40FC6.6030802@tnw.utwente.nl> Thanks David, That makes sense. How difficult would it be to put in an optional parameter to only use integers? 
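Short of patching histogramdd itself, a cast on the returned array may already be enough — a sketch, assuming no normalization and no non-integer weights, so the float bin counts are exact whole numbers (the sample data here is made up):

```python
import numpy as np

np.random.seed(0)
data = np.random.rand(1000, 4)       # a 4-D data set, as in the thread
hist, edges = np.histogramdd(data, bins=8)

counts = hist.astype(np.intp)        # integer histogram; values unchanged
assert hist.dtype == np.float64      # what histogramdd hands back
assert counts.sum() == len(data)     # every sample landed in some bin
```

The cast does allocate a second array, so for histograms near memory limits it trades one extra copy for the integer dtype.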
In some sense this doesn't matter too much since I can (in pathological cases) end up with a 700 GB histogram but normally It clocks in between 0.8-2 GB. In the end type casting isn't going to stop me from grabbing all the memory I can. David Huard wrote: > Hi Chris, > > histogramdd will consist of floats when it is normalized or when > non-integer weights are given. I thought it was preferable to return > floats in all cases so the output is consistent no matter what. It > could be changed if there is a compelling argument, but I'd rather not > risk the chance of breaking someone's code. > > Cheers, > > David > > 2007/8/15, Chris Lee >: > > Hi All, > > I have been using histogramdd to generate histograms of a 4-D data > set. > A histogram should, I believe, return integers but histogramdd returns > doubles. Is there are reason for this? > > Cheers > Chris > > -- > ********************************************** > * Chris Lee * > * Laser physics and nonlinear optics group * > * MESA+ Institute * > * University of Twente * > * Phone: ++31 (0)53 489 3968 * > * fax: ++31 (0) 53 489 1102 * > ********************************************** > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- ********************************************** * Chris Lee * * Laser physics and nonlinear optics group * * MESA+ Institute * * University of Twente * * Phone: ++31 (0)53 489 3968 * * fax: ++31 (0) 53 489 1102 * ********************************************** From david.huard at gmail.com Thu Aug 16 09:58:54 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 16 Aug 2007 09:58:54 -0400 Subject: [SciPy-user] histogramdd question In-Reply-To: <46C40FC6.6030802@tnw.utwente.nl> References: <46C2D58E.6030307@tnw.utwente.nl> <91cf711d0708150701g6a04950bl1168e8f096e13056@mail.gmail.com> <46C40FC6.6030802@tnw.utwente.nl> Message-ID: 
<91cf711d0708160658j4d8fcf48u1f89c84eb835bd62@mail.gmail.com> 2007/8/16, Chris Lee : > > Thanks David, > > That makes sense. How difficult would it be to put in an optional > parameter to only use integers? Not difficult at all, in fact, but I'm guessing there are not many people around with the kind of needs you have! I can't even begin to understand how a computer can handle a 700G object. In any case, my suggestion would be for you to copy and paste the histogramdd function from numpy and tweak it to your heart's content. Better yet for such large arrays would be for you to rewrite the function in Fortran and wrap it using f2py. I'd also advise using a "block" approach, where you feed the histogram function chunks of the array instead of the whole thing. This avoids swapping memory to the hard drive when you exceed your RAM capacity. If you want to give it a try, I'll send you some drafts to get you started. Cheers, David In some sense this doesn't matter too much since I can (in pathological > cases) end up with a 700 GB histogram but normally It clocks in between > 0.8-2 GB. In the end type casting isn't going to stop me from grabbing > all the memory I can. > > > > David Huard wrote: > > Hi Chris, > > > > histogramdd will consist of floats when it is normalized or when > > non-integer weights are given. I thought it was preferable to return > > floats in all cases so the output is consistent no matter what. It > > could be changed if there is a compelling argument, but I'd rather not > > risk the chance of breaking someone's code. > > > > Cheers, > > > > David > > > > 2007/8/15, Chris Lee > >: > > > > Hi All, > > > > I have been using histogramdd to generate histograms of a 4-D data > > set. > > A histogram should, I believe, return integers but histogramdd > returns > > doubles. Is there are reason for this? 
> > > > Cheers > > Chris > > > > -- > > ********************************************** > > * Chris Lee * > > * Laser physics and nonlinear optics group * > > * MESA+ Institute * > > * University of Twente * > > * Phone: ++31 (0)53 489 3968 * > > * fax: ++31 (0) 53 489 1102 * > > ********************************************** > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > ********************************************** > * Chris Lee * > * Laser physics and nonlinear optics group * > * MESA+ Institute * > * University of Twente * > * Phone: ++31 (0)53 489 3968 * > * fax: ++31 (0) 53 489 1102 * > ********************************************** > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smithsm at samuelsmith.org Thu Aug 16 13:28:13 2007 From: smithsm at samuelsmith.org (Samuel M. Smith) Date: Thu, 16 Aug 2007 11:28:13 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc Message-ID: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> When building from source on mac os x 10.4 ppc either from the tarball 0.5.2 or svn, I get the following error, after installing gfortran and fftw as per the scipy web instructions svn co http://svn.scipy.org/svn/scipy/trunk scipysvn cd scipysvn export MACOSX_DEPLOYMENT_TARGET=10.4 sudo python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build snip..... 
/usr/bin/ld: flag: -undefined dynamic_lookup can't be used with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 collect2: ld returned 1 exit status error: Command "/usr/local/bin/gfortran -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.4-ppc-2.5/build/ src.macosx-10.4-ppc-2.5/Lib/fftpack/_fftpackmodule.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zfft.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/drfft.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple- darwin8.9.0/4.3.0 -Lbuild/temp.macosx-10.4-ppc-2.5 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.4-ppc-2.5/scipy/fftpack/ _fftpack.so" failed with exit status 1 Looking in the archives on scipy.org I found a couple of other people who had this problem in June but there was not mentioned a fix. anyone have a fix for this? Sam ********************************************************************** Samuel M. Smith Ph.D. 2966 Fort Hill Road Eagle Mountain, Utah 84005-4108 801-768-2768 voice 801-768-2769 fax ********************************************************************** "The greatest source of failure and unhappiness in the world is giving up what we want most for what we want at the moment" ********************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Aug 16 14:15:00 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Aug 2007 11:15:00 -0700 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> Message-ID: <46C49424.8000001@gmail.com> Samuel M. 
Smith wrote: > When building from source on mac os x 10.4 ppc > either from the tarball 0.5.2 or svn, I get the following error, > > after installing gfortran and fftw as per the scipy web instructions > > svn co http://svn.scipy.org/svn/scipy/trunk scipysvn > cd scipysvn > export MACOSX_DEPLOYMENT_TARGET=10.4 > sudo python setup.py build_src build_clib --fcompiler=gnu95 build_ext > --fcompiler=gnu95 build > > snip..... > > /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with > MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 > collect2: ld returned 1 exit status > error: Command "/usr/local/bin/gfortran -Wall -undefined dynamic_lookup > -bundle > build/temp.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/Lib/fftpack/_fftpackmodule.o > build/temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zfft.o > build/temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/drfft.o > build/temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zrfft.o > build/temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zfftnd.o > build/temp.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/fortranobject.o > -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple-darwin8.9.0/4.3.0 > -Lbuild/temp.macosx-10.4-ppc-2.5 -ldfftpack -lfftw3 -lgfortran -o > build/lib.macosx-10.4-ppc-2.5/scipy/fftpack/_fftpack.so" failed with > exit status 1 > > Looking in the archives on scipy.org I found a couple of other people > who had this problem in June but there was > not mentioned a fix. > > anyone have a fix for this? I'd *highly* recommend not using sudo for building. It may be changing your environment, and it just gets in the way for no benefit; all of your build files won't be writable by the regular user. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From smithsm at samuelsmith.org Thu Aug 16 19:05:13 2007 From: smithsm at samuelsmith.org (Samuel M. 
Smith) Date: Thu, 16 Aug 2007 17:05:13 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C49424.8000001@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> Message-ID: <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> I erased the svn directory rechecked out and rebuilt without the sudo, same error. /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 collect2: ld returned 1 exit status /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 collect2: ld returned 1 exit status error: Command "/usr/local/bin/gfortran -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.4-ppc-2.5/build/ src.macosx-10.4-ppc-2.5/Lib/fftpack/_fftpackmodule.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zfft.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/drfft.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.4-ppc-2.5/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple- darwin8.9.0/4.3.0 -Lbuild/temp.macosx-10.4-ppc-2.5 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.4-ppc-2.5/scipy/fftpack/ _fftpack.so" failed with exit status 1 On 16 Aug 2007, at 12:15 , Robert Kern wrote: > > I'd *highly* recommend not using sudo for building. It may be > changing your > environment, and it just gets in the way for no benefit; all of > your build files > won't be writable by the regular user. > > -- > Robert Kern ********************************************************************** Samuel M. Smith Ph.D. 
2966 Fort Hill Road Eagle Mountain, Utah 84005-4108 801-768-2768 voice 801-768-2769 fax ********************************************************************** "The greatest source of failure and unhappiness in the world is giving up what we want most for what we want at the moment" ********************************************************************** From fredmfp at gmail.com Thu Aug 16 19:09:40 2007 From: fredmfp at gmail.com (fred) Date: Fri, 17 Aug 2007 01:09:40 +0200 Subject: [SciPy-user] [ipython] confirm_exit issue with scipy profile... Message-ID: <46C4D934.20809@gmail.com> Hi all, I have set confirm_exit 0 in my ipythonrc file. So I type Ctrl+D in ipython, it exits as expected. But if I use the scipy profile ipython -p scipy, confirmation is asked when I want to exit. Any clue ? TIA. Cheers, PS: my ipythonrc-scipy file only contains include ipythonrc # import ... # Load SciPy by itself so that 'help scipy' works import_mod scipy # from ... import ... from scipy.io.numpyio import fread, fwrite # Now we load all of SciPy # from ... import * import_all scipy # code execute print 'Welcome to the SciPy Scientific Computing Environment.' -- http://scipy.org/FredericPetit From robert.kern at gmail.com Fri Aug 17 02:21:21 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Aug 2007 23:21:21 -0700 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> Message-ID: <46C53E61.102@gmail.com> Samuel M. Smith wrote: > I erased the svn directory rechecked out and rebuilt without the > sudo, same error. Huh. Does your environment include MACOSX_DEPLOYMENT_TARGET anywhere? Which compilers, gcc and gfortran, are you using? 
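Robert's two questions can be checked mechanically. The following is my own illustration (bash assumed), not something from the thread: it reports whether the deployment target is set and which compilers are on the PATH, then pins the target for the rest of the session so distutils and the linker agree.

```shell
# Report whether the variable is set anywhere in this environment.
if printenv MACOSX_DEPLOYMENT_TARGET > /dev/null; then
    echo "MACOSX_DEPLOYMENT_TARGET=$MACOSX_DEPLOYMENT_TARGET"
else
    echo "MACOSX_DEPLOYMENT_TARGET is not set"
fi

# Report which gcc and gfortran are on the PATH, if any.
for tool in gcc gfortran; do
    if command -v "$tool" > /dev/null; then
        "$tool" --version | head -n 1
    else
        echo "$tool: not found"
    fi
done

# Pin the target explicitly for this shell session.
export MACOSX_DEPLOYMENT_TARGET=10.4
```

Note that `export` only affects the current shell; a stray setting in `~/.profile` or `~/.MacOSX/environment.plist` would also show up in the `printenv` check above.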
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From matthieu.brucher at gmail.com Fri Aug 17 10:22:55 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 17 Aug 2007 16:22:55 +0200 Subject: [SciPy-user] [ndimage] Interpolation question Message-ID: Hi, I wondered if someone knew whether the interpolation for geometric transformation is done in the correct way, that is, for each result pixel, looking up where its origin lies in the input matrix. This sounds logical to do (and is), but there is no mention of it in the documentation. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL:

From wjdandreta at att.net Fri Aug 17 10:50:06 2007 From: wjdandreta at att.net (Bill Dandreta) Date: Fri, 17 Aug 2007 10:50:06 -0400 Subject: [SciPy-user] Convert a date string to a date object Message-ID: <46C5B59E.3080309@att.net>

date=dt.date.fromtimestamp(time.mktime(time.strptime('2007-08-17', "%Y-%m-%d")))

is very, very slow! Is this the recommended Python way to convert a date string to a date object? What is the fastest way to do this conversion? -- Bill wjdandreta at att.net Gentoo Linux X86_64 2.6.20-gentoo-r8 Reclaim Your Inbox with http://www.mozilla.org/products/thunderbird/ All things cometh to he who waiteth as long as he who waiteth worketh like hell while he waiteth.

From wjdandreta at att.net Fri Aug 17 10:51:42 2007 From: wjdandreta at att.net (Bill Dandreta) Date: Fri, 17 Aug 2007 10:51:42 -0400 Subject: [SciPy-user] SQLite columns to scipy arrays Message-ID: <46C5B5FE.7050308@att.net> What is the fastest/best way to put the columns of an SQLite database into scipy arrays? I've been doing it with the following pseudo-code, but it is kind of slow for large blocks of data.

col1=[]
col2=[]
...
for r in cursor:
    col1.append(r['col1'])
    col2.append(r['col2'])
...
acol1=S.array(col1)
acol2=S.array(col2)
...

-- Bill wjdandreta at att.net Gentoo Linux X86_64 2.6.20-gentoo-r8 Reclaim Your Inbox with http://www.mozilla.org/products/thunderbird/ All things cometh to he who waiteth as long as he who waiteth worketh like hell while he waiteth.

From pgmdevlist at gmail.com Fri Aug 17 11:03:51 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 17 Aug 2007 11:03:51 -0400 Subject: [SciPy-user] Convert a date string to a date object In-Reply-To: <46C5B59E.3080309@att.net> References: <46C5B59E.3080309@att.net> Message-ID: <200708171103.52820.pgmdevlist@gmail.com> On Friday 17 August 2007 10:50:06 Bill Dandreta wrote: > Is this the recommended Python way to convert a date string to a date > object? You can check the packages dateutil (http://labix.org/python-dateutil) and its parser, mx.Datetime (http://www.egenix.com/files/python/mxDateTime.html) and its parser, as well as the TimeSeriesPackage available on the Scipy SVN sandbox.

From lxander.m at gmail.com Fri Aug 17 11:36:21 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Fri, 17 Aug 2007 11:36:21 -0400 Subject: [SciPy-user] Convert a date string to a date object In-Reply-To: <46C5B59E.3080309@att.net> References: <46C5B59E.3080309@att.net> Message-ID: <525f23e80708170836y3eba42cdre519c2353f3f63a5@mail.gmail.com> On 8/17/07, Bill Dandreta wrote: > date=dt.date.fromtimestamp(time.mktime(time.strptime('2007-08-17', > "%Y-%m-%d"))) > > Is very very slow! > > Is this the recommended Python way to convert a date string to a date > object? > > What is the fastest way to do this conversion? The time.mktime call is the primary culprit here, but it is still faster to parse the string yourself if that works for you.
import datetime
import time
import timeit

def str2date1(s):
    return datetime.date.fromtimestamp(time.mktime(time.strptime(s, '%Y-%m-%d')))

def str2date2(s):
    return datetime.date(*time.strptime(s, '%Y-%m-%d')[:3])

def str2date3(s):
    return datetime.date(*[int(p) for p in s.split('-')])

def test(f):
    return '%s: %.2f usec/pass' % (f, 1000000 * timeit.Timer(f + "('2007-08-17')",
                                   'from __main__ import ' + f).timeit(100)/100)

>>> print test('str2date1')
str2date1: 451.57 usec/pass
>>> print test('str2date2')
str2date2: 36.10 usec/pass
>>> print test('str2date3')
str2date3: 6.06 usec/pass

From zpincus at stanford.edu Fri Aug 17 11:53:18 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 17 Aug 2007 11:53:18 -0400 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C53E61.102@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> Message-ID: <47E7E2CB-35C5-4819-80E7-8997F8F7E5FA@stanford.edu> Hello all, > Samuel M. Smith wrote: >> I erased the svn directory rechecked out and rebuilt without the >> sudo, same error. > > Huh. Does your environment include MACOSX_DEPLOYMENT_TARGET > anywhere? Which > compilers, gcc and gfortran, are you using? I have also repeatedly run into the MACOSX_DEPLOYMENT_TARGET error on OS X PPC (cf. my earlier messages to this list on the subject). In my experience, the issue has been the gfortran version. The latest version of the gfortran compiler from hpc.sourceforge.net causes this error for me (regardless of any environment settings for MACOSX_DEPLOYMENT_TARGET). The previous revision of the compiler from that site works fine, and the compiler at r.research.att.com/tools works properly in this regard in my hands as well. (The r.research.att.com/tools one can build proper universal binaries, too, and is now my preference.) I'd be curious to see if this is Dr.
Smith's problem as well. I'm not sure if the error is triggered by any specific scipy version, or if it's due just to some gremlin in the hpc.sourceforge.net compiler -- I've never managed to track it down with any more specificity than what I reported above. Zach Pincus

From wjdandreta at att.net Fri Aug 17 12:09:34 2007 From: wjdandreta at att.net (Bill Dandreta) Date: Fri, 17 Aug 2007 12:09:34 -0400 Subject: [SciPy-user] Convert a date string to a date object In-Reply-To: <200708171103.52820.pgmdevlist@gmail.com> References: <46C5B59E.3080309@att.net> <200708171103.52820.pgmdevlist@gmail.com> Message-ID: <46C5C83E.1060303@att.net> Thanks for the links. mxDateTime only distributes binaries, but I looked at the dateutil source and realized right away that any general datestring parsing algorithm will have to be much slower than a direct approach, as Alexander's post demonstrates. Pierre GM wrote: > On Friday 17 August 2007 10:50:06 Bill Dandreta wrote: > >> Is this the recommended Python way to convert a date string to a date >> object? >> > > You can check the packages dateutil (http://labix.org/python-dateutil) and its > parser, mx.Datetime (http://www.egenix.com/files/python/mxDateTime.html) and > its parser, as well as the TimeSeriesPackage available on the Scipy SVN > sandbox. > > -- Bill wjdandreta at att.net Gentoo Linux X86_64 2.6.20-gentoo-r8 Reclaim Your Inbox with http://www.mozilla.org/products/thunderbird/ All things cometh to he who waiteth as long as he who waiteth worketh like hell while he waiteth. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smithsm at samuelsmith.org Fri Aug 17 12:10:14 2007 From: smithsm at samuelsmith.org (Samuel M.
Smith) Date: Fri, 17 Aug 2007 10:10:14 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C53E61.102@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> Message-ID: <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> listing env variables from shell prompt env there is no definition for MACOSX_DEPLOYMENT_TARGET gcc_select Current default compiler: gcc version 4.0.1 (Apple Computer, Inc. build 5367) gfortran from http://prdownloads.sourceforge.net/hpc/gfortran-bin.tar.gz?download as per scipy web page I don't have a ~/.MacOSX/environment.plist file here is a report of the same problem http://projects.scipy.org/pipermail/scipy-user/2007-June/012542.html On 17 Aug 2007, at 00:21 , Robert Kern wrote: > Samuel M. Smith wrote: >> I erased the svn directory rechecked out and rebuilt without the >> sudo, same error. > > Huh. Does your environment include MACOSX_DEPLOYMENT_TARGET > anywhere? Which > compilers, gcc and gfortran, are you using? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ********************************************************************** Samuel M. Smith Ph.D. 
2966 Fort Hill Road Eagle Mountain, Utah 84005-4108 801-768-2768 voice 801-768-2769 fax ********************************************************************** "The greatest source of failure and unhappiness in the world is giving up what we want most for what we want at the moment" ********************************************************************** From wjdandreta at att.net Fri Aug 17 12:16:28 2007 From: wjdandreta at att.net (Bill Dandreta) Date: Fri, 17 Aug 2007 12:16:28 -0400 Subject: [SciPy-user] Convert a date string to a date object In-Reply-To: <525f23e80708170836y3eba42cdre519c2353f3f63a5@mail.gmail.com> References: <46C5B59E.3080309@att.net> <525f23e80708170836y3eba42cdre519c2353f3f63a5@mail.gmail.com> Message-ID: <46C5C9DC.6050608@att.net> Thanks for the demo. I added a 4th test for what I have used to speed things up: def str2date4(s): return datetime.date(int(s[:4]),int(s[5:7]),int(s[8:10])) My results: str2date1: 136.35 usec/pass str2date2: 37.95 usec/pass str2date3: 5.53 usec/pass str2date4: 3.45 usec/pass It is interesting to note that I ran the tests on a faster machine than you did. My results for #1 were more than 3 X faster than yours but my results for #3 were less than 10% faster! Alexander Michael wrote: > On 8/17/07, Bill Dandreta wrote: > >> date=dt.date.fromtimestamp(time.mktime(time.strptime('2007-08-17', >> "%Y-%m-%d"))) >> >> Is very very slow! >> >> Is this the recommended Python way to convert a date string to a date >> object? >> >> What is the fastest way to do this conversion? >> > > The time.mktime function is the primary culprit here, but is still > faster to parse yourself if that works for you. 
> > import datetime > import time > import timeit > > def str2date1(s): > return datetime.date.fromtimestamp(time.mktime(time.strptime(s, > '%Y-%m-%d'))) > > def str2date2(s): > return datetime.date(*time.strptime(s, '%Y-%m-%d')[:3]) > > def str2date3(s): > return datetime.date(*[int(s) for s in '2007-08-17'.split('-')]) > > def test(f): > return '%s: %.2f usec/pass' % (f, 1000000 * timeit.Timer(f+"('2007-08-17')", > 'from __main__ import ' + f).timeit(100)/100) > > >>>> print test('str2date1') >>>> > str2date1: 451.57 usec/pass > > >>>> print test('str2date2') >>>> > str2date2: 36.10 usec/pass > > >>>> print test('str2date3') >>>> > str2date3: 6.06 usec/pass > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Bill wjdandreta at att.net Gentoo Linux X86_64 2.6.20-gentoo-r8 Reclaim Your Inbox with http://www.mozilla.org/products/thunderbird/ All things cometh to he who waiteth as long as he who waiteth worketh like hell while he waiteth. From smithsm at samuelsmith.org Fri Aug 17 12:17:39 2007 From: smithsm at samuelsmith.org (Samuel M. Smith) Date: Fri, 17 Aug 2007 10:17:39 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <47E7E2CB-35C5-4819-80E7-8997F8F7E5FA@stanford.edu> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <47E7E2CB-35C5-4819-80E7-8997F8F7E5FA@stanford.edu> Message-ID: <75B587A2-C80C-4450-8202-DBDA888A7C66@samuelsmith.org> I am downloading the other gfortran to try it right now. On 17 Aug 2007, at 09:53 , Zachary Pincus wrote: > Hello all, > >> Samuel M. Smith wrote: >>> I erased the svn directory rechecked out and rebuilt without the >>> sudo, same error. >> >> Huh. Does your environment include MACOSX_DEPLOYMENT_TARGET >> anywhere? 
Which >> compilers, gcc and gfortran, are you using? > > I have also repeatedly run into the MACOSX_DEPLOYMENT_TARGET error on > OS X PPC (cf. my earlier messages to this list on the subject). > > In my experience, the issue has been the gfortran version. The latest > version of the gfortran compiler from hpc.sourceforge.net causes this > error for me (regardless of any environment settings for > MACOSX_DEPLOYMENT_TARGET). The previous revision of the compiler from > that site works fine, and the compiler at r.research.att.com/tools > works properly in this regard in my hands as well. (The > r.research.att.com/tools one can build proper universal binaries, > too, and is now my preference.) > > I'd be curious to see if this is Dr. Smith's problem as well. > > I'm not sure if the error is triggered by any specific scipy version, > or if it's due just to some gremlin in the hpc.sourceforge.net > compiler -- I've never managed to track it down with any more > specificity than what I reported above. > > Zach Pincus > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ********************************************************************** Samuel M. Smith Ph.D. 
2966 Fort Hill Road Eagle Mountain, Utah 84005-4108 801-768-2768 voice 801-768-2769 fax ********************************************************************** "The greatest source of failure and unhappiness in the world is giving up what we want most for what we want at the moment" ********************************************************************** From robert.kern at gmail.com Fri Aug 17 12:20:00 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 17 Aug 2007 09:20:00 -0700 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> Message-ID: <46C5CAB0.7040106@gmail.com> Samuel M. Smith wrote: > listing env variables from shell prompt > env > there is no definition for MACOSX_DEPLOYMENT_TARGET > > gcc_select > Current default compiler: > gcc version 4.0.1 (Apple Computer, Inc. build 5367) > > gfortran from > http://prdownloads.sourceforge.net/hpc/gfortran-bin.tar.gz?download > as per scipy web page Okay. I would recommend following Zach's advice and trying the gfortran from http://r.research.att.com/tools/. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From pgmdevlist at gmail.com Fri Aug 17 12:24:01 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 17 Aug 2007 12:24:01 -0400 Subject: [SciPy-user] Convert a date string to a date object In-Reply-To: <46C5C83E.1060303@att.net> References: <46C5B59E.3080309@att.net> <200708171103.52820.pgmdevlist@gmail.com> <46C5C83E.1060303@att.net> Message-ID: <200708171224.03563.pgmdevlist@gmail.com> On Friday 17 August 2007 12:09:34 Bill Dandreta wrote: > mxDateTime only distributes binaries Scroll down for the sources. > but I looked at the dateutil source > and realized right away that any general datestring parsing algorithm will > have to be much slower than a direct approach as Alexander's post > demonstrates. But far more flexible at the same time. If you only use the same YYYY-MM-DD format, Alexander's split-based method will work as long as the format is respected...

From w.richert at gmx.net Fri Aug 17 14:21:41 2007 From: w.richert at gmx.net (Willi Richert) Date: Fri, 17 Aug 2007 20:21:41 +0200 Subject: [SciPy-user] Finding min/max of a B-Spline Message-ID: <200708172021.41692.w.richert@gmx.net> Hi, I have a sequence of n-dim points for which I approximate a spline via splrep and splev. For splev I can specify der=1 to get the first derivative evaluated. And there is sproot, which finds the roots of the originally created spline. However, with these methods I cannot find the roots of the first derivative to get the minima and maxima of the spline. How can that be achieved? As an example take the 2-dim case (scipy 0.5.2):

=========================
from pylab import *
from scipy.interpolate import splrep, splev, sproot, spalde
import pylab

x = linspace(0, 2*pi, 100)
y = sin(linspace(0, 2*pi, 100))

rep = splrep(x, y, k=3, s=1)
ynew = splev(x, rep)

raw_points = plot(x, y)
spline_points = plot(x, ynew)

y_der = splev(x, rep, der=1)
p_der = plot(x, y_der)

# x,y_der now contain the evaluated data for the first derivative.
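Picking up at this point in the example, one hedged way to get the extrema without refitting anything (my own sketch, not part of the original mail) is to skip the derivative entirely and hand the fitted spline to a bounded scalar minimizer: scipy.optimize.fminbound on splev locates the minimum, and on its negation the maximum.

```python
# Sketch: locate spline extrema with a bounded minimizer instead of
# root-finding on the derivative. Rebuilds the same fit as in the example.
from numpy import linspace, sin, pi
from scipy.interpolate import splrep, splev
from scipy.optimize import fminbound

x = linspace(0, 2 * pi, 100)
rep = splrep(x, sin(x), k=3, s=1)

x_min = fminbound(lambda t: splev(t, rep), 0, 2 * pi)   # for sin, near 3*pi/2
x_max = fminbound(lambda t: -splev(t, rep), 0, 2 * pi)  # for sin, near pi/2
```

fminbound only promises a local optimum, so on a spline with several bumps one would bracket each candidate interval first (for instance between sign changes of y_der) and minimize per interval.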
# The only chance to get the root would be to again fit a polynomial
# to the data. However, that might be overkill. What else?

legend([raw_points, spline_points, p_der], \
       ["raw", "spline", "derivative"], "upper left")

grid()
show()
=========================

Best regards, wr

From openopt at ukr.net Fri Aug 17 14:32:02 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 17 Aug 2007 21:32:02 +0300 Subject: [SciPy-user] Finding min/max of a B-Spline In-Reply-To: <200708172021.41692.w.richert@gmx.net> References: <200708172021.41692.w.richert@gmx.net> Message-ID: <46C5E9A2.3060905@ukr.net> You'd better tell us the exact n and your timing demands. As for me, I would try a smooth or non-smooth optimization solver; however, I don't know what your speed requirements are. Regards, D. P.S. I can't run the example because each time I install scipy it has problems with scipy.interpolate._fitpack.so (unknown unicode symbol). Willi Richert wrote:
> Hi,
>
> I have a sequence of n-dim points for which I approximate a spline via splrep
> and splev. For splev I can specify der=1 to get the first derivative
> evaluated. And there is sproot, which finds the roots of the originally
> created spline. However, with these methods I cannot find the roots of the
> first derivative to get the minima and maxima of the spline. How can that be
> achieved?
>
> As an example take the 2-dim case (scipy 0.5.2):
>
> =========================
> from pylab import *
> from scipy.interpolate import splrep, splev, sproot, spalde
> import pylab
>
> x=linspace(0,2*pi, 100)
> y=sin(linspace(0,2*pi, 100))
>
> rep=splrep(x, y, k=3, s=1)
> ynew = splev(x, rep)
>
> raw_points = plot(x,y)
> spline_points = plot(x, ynew)
>
> y_der = splev(x, rep, der=1)
> p_der = plot(x, y_der)
>
> # x,y_der now contain the evaluated data for the first derivative.
> # The only chance to get the root would be to again fit a polynomial
> # to the data. However, that might be overkill. What else?
> legend([raw_points, spline_points, p_der], \
> ["raw", "spline", "derivative"], "upper left")
>
> grid()
> show()
> =========================
>
> Best regards,
> wr
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From smithsm at samuelsmith.org Fri Aug 17 15:14:50 2007 From: smithsm at samuelsmith.org (Samuel M. Smith) Date: Fri, 17 Aug 2007 13:14:50 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C5CAB0.7040106@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> Message-ID:

Zach, Robert

Changing to the other fortran compiler did the trick. Yeah! Thank you! I still have to explicitly set MACOSX_DEPLOYMENT_TARGET to 10.4, or else it builds by default for 10.3.

I don't know who the maintainer of the scipy web page for os x installation is, but I suggest it be updated to use a fortran compiler that works. Also, the link for the g77 compiler is broken; it points to the g95 compiler, both on the hpc site.

***************
In case anyone is interested, here is what I did to install scipy from source on a G4 powerbook:
os x 10.4.10
xcode 2.4.1 xcode_2.4.1_8m1910_6936315.dmg
gcc 4.01

Python and numpy from http://www.pythonmac.org/packages/py25-fat/index.html
python-2.5.1-macosx.dmg
wxPython2.8-osx-unicode-2.8.3.0-universal10.4-py2.5.dmg
pytz-2006g-py2.5-macosx10.4.dmg
numpy-1.0.3-py2.5-macosx10.4.mpkg

$ echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/texbin:/Library/Frameworks/Python.framework/Versions/Current/bin:/opt/local/bin:/opt/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/Users/samuel/bin

*** check gcc version
$ gcc_select
Current default compiler:
gcc version 4.0.1 (Apple Computer, Inc. build 5367)

to change if not 4.0x
$ sudo gcc_select 4.0

*** Install subversion 1.4.4-2 from pkg installer to get scipy from svn
http://www.open.collab.net/servlets/OCNDownload?id=CSVNMACC
Subversion 1.4.4-2 Universal.dmg

**** Install gfortran
get gfortran from below instead of links on scipy page
http://r.research.att.com/tools/
http://r.research.att.com/gfortran-4.2.1.dmg

*** Install fftw
get fftw
http://fftw.org/fftw-3.1.2.tar.gz
$ tar -xvzf fftw-3.1.2.tar.gz
$ cd fftw-3.1.2
$ ./configure
$ make
$ sudo make install
$ sudo ln -s /usr/local/lib/libfftw3.a /usr/local/lib/libfftw.a
$ sudo ln -s /usr/local/lib/libfftw3.la /usr/local/lib/libfftw.la
$ sudo ln -s /usr/local/include/fftw3.h /usr/local/include/fftw.h

*** Build and Install scipy from svn
$ cd /Volumes/Archive/Install/Python/MacPython/Python2.5.x/scipy/
$ svn co http://svn.scipy.org/svn/scipy/trunk scipysvn
Checked out revision 3245.

$ cd scipysvn

*** must set environment variable or will build for 10.3 not 10.4
$ export MACOSX_DEPLOYMENT_TARGET=10.4

$ python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build
$ sudo python setup.py install

*** to test
$ python
>>> import scipy
>>> scipy.test(1,10)

I had 3 failures

...

****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack.
****************************************************************
.....
====================================================================== FAIL: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/io/tests/test_mio.py", line 80, in _check_case self._check_level(k_label, expected, matdict[k]) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/io/tests/test_mio.py", line 63, in _check_level decimal = 5) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal test sparse; file /Library/Frameworks/Python.framework/Versions/2.5/ lib/python2.5/site-packages/scipy/io/tests/./data/ testsparse_6.5.1_GLNX86.mat, variable testsparse (mismatch 46.6666666667%) x: array([[ 3.03865194e-319, 3.16202013e-322, 1.04346664e-320, 2.05531309e-320, 2.56123631e-320], [ 3.16202013e-322, 0.00000000e+000, 0.00000000e+000,... 
y: array([[ 1., 2., 3., 4., 5.], [ 2., 0., 0., 0., 0.], [ 3., 0., 0., 0., 0.]]) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.998772144317627+5.1307056773842221e-37j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.998772144317627+5.1063549216060798e-37j) DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1706 tests in 15.697s FAILED (failures=3) From robert.kern at gmail.com Fri Aug 17 15:32:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 17 Aug 2007 12:32:11 -0700 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> 
<9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> Message-ID: <46C5F7BB.9090802@gmail.com> Samuel M. Smith wrote: > Zach, Robert > > Changing to the other fortran compiler did the trick. Yeah! Thank you! > Still have to explicitly set the MACOSX_DEPLOYMENT_TARGET to 10.4 or > else it builds by default for 10.3 You should leave off the MACOSX_DEPLOYMENT_TARGET unless if you really need to use a feature that is only available in 10.4. > I don't know who the maintainer of the scipy web page for os x > installation is but I suggest it be updated > to use a fortran compiler that works. Also the link for the g77 > compiler is broken it points to the g95 > compiler both on the hpc site. It's a wiki, so the set of maintainers contains you, too. :-) In particular, people who have found problems and solved them are often the best people to write documentation about how to avoid such problems. If you have the time, we would appreciate your fixes to that page. > *************** > In case anyone is interested here is what I did to install scipy from > source on a G4 powerbook > os x 10.4.10 > xcode 2.4.1 xcode_2.4.1_8m1910_6936315.dmg > gcc 4.01 > > Python and numpy from http://www.pythonmac.org/packages/py25-fat/ > index.html > python-2.5.1-macosx.dmg > wxPython2.8-osx-unicode-2.8.3.0-universal10.4-py2.5.dmg > pytz-2006g-py2.5-macosx10.4.dmg > numpy-1.0.3-py2.5-macosx10.4.mpkg > > $ echo $PATH > /usr/local/bin:/usr/local/sbin:/usr/texbin:/Library/Frameworks/ > Python.framework/Versions/Current/bin:/opt/local/bin:/opt/local/sbin:/ > bin:/sbin:/usr/bin:/usr/sbin:/Users/samuel/bin > > > *** check gcc version > $ gcc_select > Current default compiler: > gcc version 4.0.1 (Apple Computer, Inc. 
build 5367) > > to change if not 4.0x > $ sudo gcc_select 4.0 > > *** Install subversion 1.4.4-2 from pkg installer to get scipy from svn > http://www.open.collab.net/servlets/OCNDownload?id=CSVNMACC > Subversion 1.4.4-2 Universal.dmg > > **** Install gfortran > get gfortran from below instead of links on scipy page > http://r.research.att.com/tools/ > http://r.research.att.com/gfortran-4.2.1.dmg > > *** Install fftw > get fftw > http://fftw.org/fftw-3.1.2.tar.gz > $ tar -xvzf fftw-3.1.2.tar.gz > $ cd fftw-3.1.2 > $ ./configure > $ make > $ sudo make install > $ sudo ln -s /usr/local/lib/libfftw3.a /usr/local/lib/libfftw.a > $ sudo ln -s /usr/local/lib/libfftw3.la /usr/local/lib/libfftw.la > $ sudo ln -s /usr/local/include/fftw3.h /usr/local/include/fftw.h > > *** Build and Install scipy from svn > $ cd /Volumes/Archive/Install/Python/MacPython/Python2.5.x/scipy/ > $ svn co http://svn.scipy.org/svn/scipy/trunk scipysvn > Checked out revision 3245. > > $ cd scipysvn > > ***must set environment variable or will build for 10.3 not 10.4 > $ export MACOSX_DEPLOYMENT_TARGET=10.4 > > $ python setup.py build_src build_clib --fcompiler=gnu95 build_ext -- > fcompiler=gnu95 build > $sudo python setup.py install > > *** to test > $ python > > >>> import scipy > >>> scipy.test(1,10) > > I had 3 failures > > ... > > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses flapack instead of clapack. > **************************************************************** > ..... I may have fixed that in numpy SVN; I'm not sure. The check_dot failures are known and we have a fix thanks to David Cournapeau that will probably make its way in soon. 
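As a side note, the expected value those check_dot failures are testing against can be verified independently of the Fortran BLAS wrappers. This is my own sketch using numpy's C-level dot, not the scipy test itself:

```python
# check_dot asserts dot([3j, -4, 3-4j], [2, 3, 1]) == -9+2j; by hand:
# 3j*2 + (-4)*3 + (3-4j)*1 = 6j - 12 + 3 - 4j = -9 + 2j  (unconjugated dot).
import numpy as np

result = np.dot(np.array([3j, -4, 3 - 4j]), np.array([2, 3, 1]))
```

If this numpy-only computation agrees with -9+2j while the scipy wrapper returns garbage, that points the finger at the Fortran compiler ABI rather than the test data.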
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwmsmith at gmail.com Fri Aug 17 16:34:54 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Fri, 17 Aug 2007 15:34:54 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> Message-ID: On 8/17/07, Samuel M. Smith wrote: > Zach, Robert > > Changing to the other fortran compiler did the trick. Yeah! Thank you! > Still have to explicitly set the MACOSX_DEPLOYMENT_TARGET to 10.4 or > else it builds by default for 10.3 > > I don't know who the maintainer of the scipy web page for os x > installation is but I suggest it be updated > to use a fortran compiler that works. Also the link for the g77 > compiler is broken it points to the g95 > compiler both on the hpc site. > > *************** > In case anyone is interested here is what I did to install scipy from > source on a G4 powerbook > os x 10.4.10 > xcode 2.4.1 xcode_2.4.1_8m1910_6936315.dmg > gcc 4.01 > > Python and numpy from http://www.pythonmac.org/packages/py25-fat/ > index.html > python-2.5.1-macosx.dmg > wxPython2.8-osx-unicode-2.8.3.0-universal10.4-py2.5.dmg > pytz-2006g-py2.5-macosx10.4.dmg > numpy-1.0.3-py2.5-macosx10.4.mpkg > > $ echo $PATH > /usr/local/bin:/usr/local/sbin:/usr/texbin:/Library/Frameworks/ > Python.framework/Versions/Current/bin:/opt/local/bin:/opt/local/sbin:/ > bin:/sbin:/usr/bin:/usr/sbin:/Users/samuel/bin > > > *** check gcc version > $ gcc_select > Current default compiler: > gcc version 4.0.1 (Apple Computer, Inc. 
build 5367) > > to change if not 4.0x > $ sudo gcc_select 4.0 > > *** Install subversion 1.4.4-2 from pkg installer to get scipy from svn > http://www.open.collab.net/servlets/OCNDownload?id=CSVNMACC > Subversion 1.4.4-2 Universal.dmg > > **** Install gfortran > get gfortran from below instead of links on scipy page > http://r.research.att.com/tools/ > http://r.research.att.com/gfortran-4.2.1.dmg > > *** Install fftw > get fftw > http://fftw.org/fftw-3.1.2.tar.gz > $ tar -xvzf fftw-3.1.2.tar.gz > $ cd fftw-3.1.2 > $ ./configure > $ make > $ sudo make install > $ sudo ln -s /usr/local/lib/libfftw3.a /usr/local/lib/libfftw.a > $ sudo ln -s /usr/local/lib/libfftw3.la /usr/local/lib/libfftw.la > $ sudo ln -s /usr/local/include/fftw3.h /usr/local/include/fftw.h > > *** Build and Install scipy from svn > $ cd /Volumes/Archive/Install/Python/MacPython/Python2.5.x/scipy/ > $ svn co http://svn.scipy.org/svn/scipy/trunk scipysvn > Checked out revision 3245. > > $ cd scipysvn > > ***must set environment variable or will build for 10.3 not 10.4 > $ export MACOSX_DEPLOYMENT_TARGET=10.4 > > $ python setup.py build_src build_clib --fcompiler=gnu95 build_ext -- > fcompiler=gnu95 build > $sudo python setup.py install > > *** to test > $ python > > >>> import scipy > >>> scipy.test(1,10) Thanks for the post -- unfortunately I still cannot get things to compile on my PPC G5 OS X 10.4.10. I followed your directions to a "T"; result when I loaded scipy: ksmith:ksmith [381]> python Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Fatal Python error: Interpreter not initialized (version mismatch?) Abort trap ksmith:ksmith [382]> Version info: Python 2.5.1 gcc 4.0.1 scipy from svn numpy 1.0.4 from svn gfortran 4.2.1 Any advice, pointers? I've been stuck without a working scipy for weeks. 
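A "version mismatch?" abort like the one Kurt reports usually means an extension module was built against a different Python or numpy than the one actually running. One quick sanity check is to print where each package would be imported from and compare it with the interpreter's own prefix. A small illustrative sketch (the helper name is made up, and it is written for a modern Python 3 -- `importlib.util.find_spec` did not exist in the Python 2.5 of this thread):

```python
import importlib.util
import sys

def import_location(name):
    """Return the file a module would be imported from, or None if
    the module cannot be found on sys.path at all."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None) if spec else None

# A stale build left over from another interpreter typically shows up
# as a path under a different prefix than sys.prefix.
print(sys.prefix)
print(import_location("os"))                 # stdlib module, always present
print(import_location("not_a_real_module"))  # None when the module is absent
```

Comparing these paths for python itself, numpy, and scipy often reveals two parallel installations fighting each other.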
Thanks, Kurt

From vincent.nijs at gmail.com Fri Aug 17 18:40:53 2007 From: vincent.nijs at gmail.com (Vincent) Date: Fri, 17 Aug 2007 22:40:53 -0000 Subject: [SciPy-user] SQLite columns to scipy arrays In-Reply-To: <46C5B5FE.7050308@att.net> References: <46C5B5FE.7050308@att.net> Message-ID: <1187390453.538489.66330@i13g2000prf.googlegroups.com>

I looked into this a few weeks back and posted about it on the numpy list and the sqlite list.

http://projects.scipy.org/pipermail/numpy-discussion/2007-July/028584.html
http://lists.initd.org/pipermail/pysqlite/2007-July/001084.html

Below is a test program that shows probably the fastest way to get data from an sqlite database into a numpy recarray; even so, it is still slow for large blocks of data. You might want to take a look at pytables (http://www.pytables.org/moin): it is almost as fast as cPickle and has some of the same advantages as sql databases.

Best,

Vincent

def test_save_sqlite(fname, table = 'data'):
    # saving recarray to an sqlite file
    conn = sqlite3.connect('%s.sqlite' % fname)
    c = conn.cursor()
    # getting the variable names
    varnm = data.dtype.names
    nr_var = len(varnm)
    # transform to types sqlite knows
    types = []
    for i in data[0]:
        if type(i) == N.string_: types.append('text')
        if type(i) == N.float_: types.append('real')
        if type(i) == N.int_: types.append('integer')
    create_string = ",".join(["%s %s" % (v,t) for v,t in zip(varnm,types)])
    # create a table if it doesn't exist yet
    try:
        c.execute('drop table %s' % table)
    except sqlite3.OperationalError:
        pass
    c.execute('create table %s (%s)' % (table,create_string))
    # putting the data into the table
    exec_string = 'insert into %s values %s' % (table, '(%s)' % (','.join(('?')*nr_var)))
    c.executemany(exec_string, data)
    # committing the data to the database
    conn.commit()
    # closing the connection
    conn.close()

def test_load_sqlite(fname, table = 'data', str_length = 20):
    conn = sqlite3.connect('%s.sqlite' % fname)
    c = conn.cursor()
    # get all data
    c.execute('select * from %s' % table)
    # getting data types
    types = []
    for i in c.fetchone():
        if type(i) == unicode: types.append('S%s' % str_length)
        if type(i) == float: types.append('float')
        if type(i) == int: types.append('int')
    # variable names
    varnm = [i[0] for i in c.description]
    # autodetected dtype
    dtype = zip(varnm,types)
    data = N.fromiter(c, dtype = dtype)
    # closing the connection
    conn.close()

if __name__ == '__main__':
    from timeit import Timer
    import numpy as N
    import os, sqlite3

    # making a directory to store simulated data
    if not os.path.exists('./data'):
        os.mkdir('./data')

    # creating simulated data and variable labels
    varnm = ['id','a','b','c','d','e','f','g','h','i','j'] # variable labels
    nobs = 500000
    data1 = N.random.randn(nobs,5)
    data2 = N.random.randint(-100, high = 100, size = (nobs,5))

    # adding a string variable
    id = [('id'+str(i)) for i in range(nobs)]
    data1 = [i for i in data1.T]
    data2 = [i for i in data2.T]
    d = []
    d.append(N.array(id))
    d.extend(data1)
    d.extend(data2)
    descr = [(varnm[i],d[i].dtype) for i in xrange(len(varnm))]
    data = N.rec.fromarrays(d, dtype=descr)

    n = 20
    fname = './data/data'

    # testing sqlite save
    t2 = Timer('test_save_sqlite("%s")' % fname, 'from __main__ import test_save_sqlite')
    print "\n\nTest saving recarray with sqlite\n"
    print "%.6f sec/pass" % (t2.timeit(number=n)/n)

    # testing sqlite load
    t4 = Timer('test_load_sqlite("%s")' % fname, 'from __main__ import test_load_sqlite')
    print "\n\nTest loading recarray with sqlite\n"
    print "%.6f sec/pass" % (t4.timeit(number=n)/n)

On Aug 17, 9:51 am, Bill Dandreta wrote:
> What is the fastest/best way to put the columns of an SQLite database
> into scipy arrays?
>
> I've been doing it with the following pseudo-code but it is kind of slow
> for large blocks of data.
>
> col1=[]
> col2=[]
> ...
> for r in cursor:
>     col1.append(r['col1'])
>     col2.append(r['col2'])
> ...
> acol1=S.array(col1)
> acol2=S.array(col2)
> ...
>
> --
> Bill
>
> wjdandr...
at att.net > > Gentoo Linux X86_64 2.6.20-gentoo-r8 > > Reclaim Your Inbox with http://www.mozilla.org/products/thunderbird/ > > All things cometh to he who waiteth as long as he who waiteth worketh like hell while he waiteth. > > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From dominique.orban at gmail.com Fri Aug 17 19:45:16 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 17 Aug 2007 19:45:16 -0400 Subject: [SciPy-user] Finding min/max of a B-Spline In-Reply-To: <200708172021.41692.w.richert@gmx.net> References: <200708172021.41692.w.richert@gmx.net> Message-ID: <8793ae6e0708171645x44a639ccq3a9509ac592af785@mail.gmail.com> On 8/17/07, Willi Richert wrote: > I have a sequence of n-dim points for which I approximate a spline via > splrep > and splev. For splev I can specify der=1 to get the first derivative > evaluated. And there is sproot, which finds the roots of the originally > created spline. However, with these methods I cannot find the roots of the > first derivative to get the minima and maxima of the spline. How can that > be > achieved? > I am not sure how the code you used works, but typically, the branches of your spline are represented implicitly. Each branch being a polynomial of degree 3, you are able to work out by hand their constrained extrema (they must lie between your interpolation points). Because the polynomials are of degree 3, they have inflexion points, so it is not sufficient to find zeros of the first derivative. Moreover, since they are constrained, the extrema may occur at a point where the derivative doesn't vanish. Since there is a finite number of interpolation intervals, you can find your global extrema by comparing the spline values across all the minima and maxima that you found. There must already be existing routines for this. Any good numerical analysis book explains how to do this.
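Dominique's recipe -- per branch, gather the interval endpoints plus any interior zeros of the derivative, then compare values -- can be sketched for a single cubic branch with plain numpy. The helper below and its example polynomial are purely illustrative (this is not the scipy spline API; with splrep/splev one would work per knot interval, e.g. by evaluating splev(x, tck, der=1)):

```python
import numpy as np

def cubic_branch_extrema(coeffs, a, b):
    """Min and max of a cubic polynomial (coefficients, highest power
    first) on [a, b].  Candidates are the two endpoints plus any real
    roots of the derivative lying strictly inside the interval."""
    deriv = np.polyder(np.poly1d(coeffs))
    crit = [r.real for r in np.atleast_1d(deriv.roots)
            if abs(r.imag) < 1e-12 and a < r.real < b]
    candidates = np.array([a, b] + crit)
    values = np.polyval(coeffs, candidates)
    return values.min(), values.max()

# p(x) = x^3 - 3x on [-2, 2]: interior critical points at x = -1 and x = 1
pmin, pmax = cubic_branch_extrema([1.0, 0.0, -3.0, 0.0], -2.0, 2.0)
print(pmin, pmax)  # -2.0 2.0
```

The global extrema of the full spline then come from comparing these per-branch values across all knot intervals, exactly as described above.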
Good luck, Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sat Aug 18 03:18:31 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 18 Aug 2007 16:18:31 +0900 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> Message-ID: <46C69D47.4090806@ar.media.kyoto-u.ac.jp> Kurt Smith wrote: > On 8/17/07, Samuel M. Smith wrote: > >> Zach, Robert >> >> Changing to the other fortran compiler did the trick. Yeah! Thank you! >> Still have to explicitly set the MACOSX_DEPLOYMENT_TARGET to 10.4 or >> else it builds by default for 10.3 >> >> I don't know who the maintainer of the scipy web page for os x >> installation is but I suggest it be updated >> to use a fortran compiler that works. Also the link for the g77 >> compiler is broken it points to the g95 >> compiler both on the hpc site. >> >> *************** >> In case anyone is interested here is what I did to install scipy from >> source on a G4 powerbook >> os x 10.4.10 >> xcode 2.4.1 xcode_2.4.1_8m1910_6936315.dmg >> gcc 4.01 >> >> Python and numpy from http://www.pythonmac.org/packages/py25-fat/ >> index.html >> python-2.5.1-macosx.dmg >> wxPython2.8-osx-unicode-2.8.3.0-universal10.4-py2.5.dmg >> pytz-2006g-py2.5-macosx10.4.dmg >> numpy-1.0.3-py2.5-macosx10.4.mpkg >> >> $ echo $PATH >> /usr/local/bin:/usr/local/sbin:/usr/texbin:/Library/Frameworks/ >> Python.framework/Versions/Current/bin:/opt/local/bin:/opt/local/sbin:/ >> bin:/sbin:/usr/bin:/usr/sbin:/Users/samuel/bin >> >> >> *** check gcc version >> $ gcc_select >> Current default compiler: >> gcc version 4.0.1 (Apple Computer, Inc. 
build 5367)
>>
>> [...snip...]
>>
>> >>> import scipy
>> >>> scipy.test(1,10)
>>
>
> Thanks for the post -- unfortunately I still cannot get things to
> compile on my PPC G5 OS X 10.4.10.
>
> I followed your directions to a "T"; result when I loaded scipy:
>
> ksmith:ksmith [381]> python
> Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
> [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import scipy
> Fatal Python error: Interpreter not initialized (version mismatch?)
> Abort trap
> ksmith:ksmith [382]>
>
> Version info:
>
> Python 2.5.1
> gcc 4.0.1
> scipy from svn
> numpy 1.0.4 from svn
> gfortran 4.2.1
>
> Any advice, pointers?
I've been stuck without a working scipy for weeks. > > Ok, I will take a look at it, I still have an old ppc minimac with mac os X on it. I will see if I can reproduce your problem. Could you open a trac ticket describing your problem (just paste this email) on scipy trac, so that we can track what's going on ? David From stefan at sun.ac.za Sat Aug 18 07:21:54 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 18 Aug 2007 13:21:54 +0200 Subject: [SciPy-user] Possible bug in scipy.stats.rv_discrete In-Reply-To: <738370.79255.qm@web86509.mail.ird.yahoo.com> References: <738370.79255.qm@web86509.mail.ird.yahoo.com> Message-ID: <20070818112154.GN2977@mentat.za.net> Hi Michael Thanks for the report. This should be fixed in SVN revision 3246. Cheers Stéfan On Tue, Aug 14, 2007 at 01:08:45PM +0100, Michael Nandris wrote: > Hi, > > I think there may be an issue with the way rv_discrete orders its output when > it encounters zeros in the input, causing non-zero probabilities to creep up > towards the end of the output. If you are attempting to track the accumulation > of states in an n-state Markov chain, this is a problem! > > I have had a look at the API but can't figure it. Any help at fixing this would > be much appreciated. > > regards > > M.Nandris From stefan at sun.ac.za Sat Aug 18 07:24:58 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 18 Aug 2007 13:24:58 +0200 Subject: [SciPy-user] formatting python code for html In-Reply-To: <46C1B8EA.7090902@ar.media.kyoto-u.ac.jp> References: <46C1B8EA.7090902@ar.media.kyoto-u.ac.jp> Message-ID: <20070818112458.GO2977@mentat.za.net> On Tue, Aug 14, 2007 at 11:15:06PM +0900, David Cournapeau wrote: > Ryan Krauss wrote: > > Can anyone recommend a good tool to turn Python code into something > > nicely formatted to display in html?
I am thinking for something that > > gets results like the Latex listings package without having to run > > Latex - and with output that could display natively in a web browser, > > i.e. not pdf or dvi. > > > > Any suggestions would be welcome. A python script that generates > > nicely formatted html would be the best solution I can think of. > > > http://pygments.org/ is what I used for my own website. It does many > languages, and is written in python. Using emacs, htmlize-buffer will convert the current buffer (with syntax highlighting) to html. Cheers Stéfan From kwmsmith at gmail.com Sat Aug 18 13:26:34 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Sat, 18 Aug 2007 12:26:34 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C69D47.4090806@ar.media.kyoto-u.ac.jp> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C69D47.4090806@ar.media.kyoto-u.ac.jp> Message-ID: On 8/18/07, David Cournapeau wrote: > Kurt Smith wrote: > > On 8/17/07, Samuel M. Smith wrote: > > > >> Zach, Robert > >> > >> Changing to the other fortran compiler did the trick. Yeah! Thank you! > >> Still have to explicitly set the MACOSX_DEPLOYMENT_TARGET to 10.4 or > >> else it builds by default for 10.3 > >> > >> I don't know who the maintainer of the scipy web page for os x > >> installation is but I suggest it be updated > >> to use a fortran compiler that works. Also the link for the g77 > >> compiler is broken it points to the g95 > >> compiler both on the hpc site.
> >> [...snip...]
> >
> > Thanks for the post -- unfortunately I still cannot get things to
> > compile on my PPC G5 OS X 10.4.10.
> >
> > I followed your directions to a "T"; result when I loaded scipy:
> >
> > [...snip...]
> >
> > Any advice, pointers? I've been stuck without a working scipy for weeks.
>
> Ok, I will take a look at it, I still have an old ppc minimac with mac
> os X on it. I will see if I can reproduce your problem. Could you open a
> trac ticket describing your problem (just paste this email) on scipy
> trac, so that we can track what's going on ?
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

Thanks for looking into it -- I'll open a trac ticket. I presume I send the email to scipy-tickets at scipy.org, if not, let me know where to send it.
Thanks, Kurt From millman at berkeley.edu Sat Aug 18 13:35:45 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 18 Aug 2007 10:35:45 -0700 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C69D47.4090806@ar.media.kyoto-u.ac.jp> Message-ID: On 8/18/07, Kurt Smith wrote: > Thanks for looking into it -- I'll open a trac ticket. I presume I > send the email to scipy-tickets at scipy.org, if not, let me know where > to send it. Hey Kurt, You need to create an account on the trac site: http://projects.scipy.org/scipy/scipy/account Then login to the trac site and you will be able to create a "new ticket". Good luck, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From kwmsmith at gmail.com Sat Aug 18 13:41:37 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Sat, 18 Aug 2007 12:41:37 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C69D47.4090806@ar.media.kyoto-u.ac.jp> Message-ID: On 8/18/07, Kurt Smith wrote: > On 8/18/07, David Cournapeau wrote: > > Kurt Smith wrote: > > > On 8/17/07, Samuel M. Smith wrote: > > > > > >> Zach, Robert > > >> > > >> Changing to the other fortran compiler did the trick. Yeah! Thank you! 
> > >> Still have to explicitly set the MACOSX_DEPLOYMENT_TARGET to 10.4 or
> > >> else it builds by default for 10.3
> > >>
> > >> [...snip...]
> > >
> > > Thanks for the post -- unfortunately I still cannot get things to
> > > compile on my PPC G5 OS X 10.4.10.
> > >
> > > I followed your directions to a "T"; result when I loaded scipy:
> > >
> > > [...snip...]
> > >
> > > Any advice, pointers? I've been stuck without a working scipy for weeks.
> >
> > Ok, I will take a look at it, I still have an old ppc minimac with mac
> > os X on it. I will see if I can reproduce your problem. Could you open a
> > trac ticket describing your problem (just paste this email) on scipy
> > trac, so that we can track what's going on ?
> >
> > David
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> Thanks for looking into it -- I'll open a trac ticket. I presume I
> send the email to scipy-tickets at scipy.org, if not, let me know where
> to send it.
>
> Thanks,
>
> Kurt

David -- Sorry; I'm at a loss as to how to open a new trac (I'm not familiar with the process). Do I have to download the Trac software, or is there a way to issue a new one from the scipy trac wiki?

Thanks,

Kurt

From kwmsmith at gmail.com Sat Aug 18 13:50:07 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Sat, 18 Aug 2007 12:50:07 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C69D47.4090806@ar.media.kyoto-u.ac.jp> Message-ID:

> > David --
> > Sorry; I'm at a loss as to how to open a new trac (I'm not familiar
> > with the process). Do I have to download the Trac software, or is
> > there a way to issue a new one from the scipy trac wiki?
> > Thanks, > > Kurt > Thanks Jarrod for the pointers. - Kurt From millman at berkeley.edu Sat Aug 18 13:52:09 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 18 Aug 2007 10:52:09 -0700 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C69D47.4090806@ar.media.kyoto-u.ac.jp> Message-ID: On 8/18/07, Kurt Smith wrote: > Thanks Jarrod for the pointers. - Kurt No problem. Thanks for your patience. Please let me know if you have any problems with getting registered and opening a new ticket. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From faltet at carabos.com Sat Aug 18 16:04:39 2007 From: faltet at carabos.com (Francesc Altet) Date: Sat, 18 Aug 2007 22:04:39 +0200 Subject: [SciPy-user] Fwd: Request for Use Cases - h5import and text data Message-ID: <200708182204.39674.faltet@carabos.com> Hi, This has been sent to the hdf-forum at hdfgroup.org list, but it should be of interest to the NumPy/SciPy lists too. Remember that you can access most HDF5 files from Python by using PyTables. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" ----------- Original message ------------------------------------- Request for Use Cases - h5import and text data h5import is an HDF5 tool that converts floating point or integer data stored in ASCII or binary files into the HDF5 format. Currently h5import only processes numeric data. The HDF Group plans to add support for importing text data into HDF5 using h5import. We are now soliciting use cases that will guide the design of the text to dataset import feature in h5import.
Please consider text you might want to import and how you would want to access that text once it is in the HDF5 file, and send your use cases to help at hdfgroup.org before September 17, 2007. Thank you for your help as we strive to improve our tools. _______________________________________________ Pytables-users mailing list Pytables-users at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/pytables-users ------------------------------------------------------- -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From stefan at sun.ac.za Sat Aug 18 17:31:39 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 18 Aug 2007 23:31:39 +0200 Subject: [SciPy-user] [ndimage] Interpolation question In-Reply-To: References: Message-ID: <20070818213139.GP2977@mentat.za.net> Hi Matthieu On Fri, Aug 17, 2007 at 04:22:55PM +0200, Matthieu Brucher wrote: > I wondered if someone knew if the interpolation for geometric transformation is > done in the correct way, that is looking for each result where the origin is in > the input matrix. > This sounds logical to do (and is), but there is no mention of it in the > documentation. If it weren't, you would be seeing black holes in the output images.
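Stefan's point -- such geometric transforms map each *output* pixel back to a source location in the input and interpolate there, so every output pixel receives a value and no holes can appear -- can be illustrated without scipy. A toy nearest-neighbour backward warp in plain numpy (the function and its names are hypothetical, not the ndimage API):

```python
import numpy as np

def backward_warp(img, inverse_map):
    """For every OUTPUT pixel, ask the inverse mapping where that pixel
    came from in the input and sample there (nearest neighbour).
    Forward mapping -- pushing input pixels onto the output -- could
    leave unfilled gaps; backward mapping cannot."""
    out = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            sr, sc = inverse_map(r, c)            # output -> input coords
            sr, sc = int(round(sr)), int(round(sc))
            if 0 <= sr < rows and 0 <= sc < cols:
                out[r, c] = img[sr, sc]
    return out

img = np.arange(16.0).reshape(4, 4)
# shift the image content down one row: output row r samples input row r-1
shifted = backward_warp(img, lambda r, c: (r - 1, c))
```

Pixels whose pre-image falls outside the input (the first row here) are filled with a constant rather than left undefined, which is the behaviour Matthieu observed.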
My google-fu has failed me and an > identical error reported on the mailing list some time ago went unanswered. > > Symptoms: First I installed weave from svn into Python2.5 and ran > weave.test(): > > Found 1 tests for weave.ast_tools > [...snip...] > Found 26 tests for weave.catalog > building extensions here: > /Users/agapow/.python25_compiled/m3 > [...snip...] > Found 3 tests for weave.standard_array_spec > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_wx_spec.py:16: > DeprecationWarning: The wxPython compatibility package is no longer > automatically generated or activly maintained. Please switch to the wx > package as soon as possible. > import wxPython > Found 0 tests for weave.wx_spec > Found 0 tests for __main__ > ...warning: specified build_dir '_bad_path_' does not exist or is not > writable. Trying default locations > .....warning: specified build_dir '_bad_path_' does not exist or is not > writable. Trying default locations > ............................removing > '/tmp/tmptBN1Qxcat_test' (and everything under it) > .removing '/tmp/tmpY2WiLfcat_test' (and everything under it) > > ..............................F..F............................................................. 
> > ====================================================================== > FAIL: check_1d_3 > (weave.tests.test_size_check.test_dummy_array_indexing) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 168, in check_1d_3 > self.generic_1d('a[-11:]') > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 135, in generic_1d > self.generic_wrap(a,expr) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 127, in generic_wrap > self.generic_test(a,expr,desired) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 123, in generic_test > assert_array_equal(actual,desired, expr) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 223, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not equal > a[-11:] > (mismatch 100.0%) > x: array([1]) > y: array([10]) > > > ====================================================================== > FAIL: check_1d_6 > (weave.tests.test_size_check.test_dummy_array_indexing) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 174, in check_1d_6 > self.generic_1d('a[:-11]') > File > 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 135, in generic_1d > self.generic_wrap(a,expr) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 127, in generic_wrap > self.generic_test(a,expr,desired) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > line 123, in generic_test > assert_array_equal(actual,desired, expr) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 223, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not equal > a[:-11] > (mismatch 100.0%) > x: array([9]) > y: array([0]) > > I'm uncertain if the "__bad_path__" message is important, but the two errors > may reflect issues with numpy (v1.0.1). I installed weave into Python2.4 with > identical symptoms. I then installed the whole Scipy package just to be sure > (weave 0.4.9, numpy 1.0.4.dev3882). No change. 
> > Along the way - mindful that maybe the tests were broken - I tried out a > simple line of weave:: > > >>> a = 1; weave.inline('printf("%d\\n",a);',['a']) > > which gave:: > > Traceback (most recent call last): > File "", line 1, in > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/inline_tools.py", > line 325, in inline > results = > attempt_function_call(code,local_dict,global_dict) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/inline_tools.py", > line 375, in attempt_function_call > function_list = > function_catalog.get_functions(code,module_dir) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/catalog.py", > line 611, in get_functions > function_list = self.get_cataloged_functions(code) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/catalog.py", > line 524, in get_cataloged_functions > cat = get_catalog(path,mode) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/catalog.py", > line 294, in get_catalog > or os.path.exists(catalog_file): > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/posixpath.py", > line 171, in exists > st = os.stat(path) > TypeError: coercing to Unicode: need string or buffer, NoneType found > > > Any ideas on what to try next? (Technical details: MacOSX 10.4.10 Intel > macBook, gcc 4.0.1.) > > > -- > Dr Paul-Michael Agapow: VieDigitale / Inst. for Animal Health > pma at viedigitale.com / paul-michael.agapow at bbsrc.ac.uk Was a resolution to these problems ever found? I'm having exactly the same problems on a debian box running numpy 1.0.3 and scipy 0.5.3.dev2698, and python 2.4.4. Any suggestions to get things going would be appreciated. Thanks, Angus. 
-- AJC McMorland, PhD Student Physiology, University of Auckland From amcmorl at gmail.com Sat Aug 18 21:20:17 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Sun, 19 Aug 2007 13:20:17 +1200 Subject: [SciPy-user] Weave woes In-Reply-To: References: Message-ID: On 19/08/07, Angus McMorland wrote: > Hi all, > > On 19/07/07, Paul-Michael Agapow wrote: > > I've got errors installing and using weave, that persist across different > > installations and Python versions. My google-fu has failed me and an > > identical error reported on the mailing list some time ago went unanswered. > > > > Symptoms: First I installed weave from svn into Python2.5 and ran > > weave.test(): > > > > Found 1 tests for weave.ast_tools > > [...snip...] > > Found 26 tests for weave.catalog > > building extensions here: > > /Users/agapow/.python25_compiled/m3 > > [...snip...] > > Found 3 tests for weave.standard_array_spec > > > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_wx_spec.py:16: > > DeprecationWarning: The wxPython compatibility package is no longer > > automatically generated or activly maintained. Please switch to the wx > > package as soon as possible. > > import wxPython > > Found 0 tests for weave.wx_spec > > Found 0 tests for __main__ > > ...warning: specified build_dir '_bad_path_' does not exist or is not > > writable. Trying default locations > > .....warning: specified build_dir '_bad_path_' does not exist or is not > > writable. Trying default locations > > ............................removing > > '/tmp/tmptBN1Qxcat_test' (and everything under it) > > .removing '/tmp/tmpY2WiLfcat_test' (and everything under it) > > > > ..............................F..F............................................................. 
> > > > ====================================================================== > > FAIL: check_1d_3 > > (weave.tests.test_size_check.test_dummy_array_indexing) > > > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 168, in check_1d_3 > > self.generic_1d('a[-11:]') > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 135, in generic_1d > > self.generic_wrap(a,expr) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 127, in generic_wrap > > self.generic_test(a,expr,desired) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 123, in generic_test > > assert_array_equal(actual,desired, expr) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > > line 223, in assert_array_equal > > verbose=verbose, header='Arrays are not equal') > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > > line 215, in assert_array_compare > > assert cond, msg > > AssertionError: > > Arrays are not equal > > a[-11:] > > (mismatch 100.0%) > > x: array([1]) > > y: array([10]) > > > > > > ====================================================================== > > FAIL: check_1d_6 > > (weave.tests.test_size_check.test_dummy_array_indexing) > > > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 174, in check_1d_6 > > 
self.generic_1d('a[:-11]') > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 135, in generic_1d > > self.generic_wrap(a,expr) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 127, in generic_wrap > > self.generic_test(a,expr,desired) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/tests/test_size_check.py", > > line 123, in generic_test > > assert_array_equal(actual,desired, expr) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > > line 223, in assert_array_equal > > verbose=verbose, header='Arrays are not equal') > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > > line 215, in assert_array_compare > > assert cond, msg > > AssertionError: > > Arrays are not equal > > a[:-11] > > (mismatch 100.0%) > > x: array([9]) > > y: array([0]) > > > > I'm uncertain if the "__bad_path__" message is important, but the two errors > > may reflect issues with numpy (v1.0.1). I installed weave into Python2.4 with > > identical symptoms. I then installed the whole Scipy package just to be sure > > (weave 0.4.9, numpy 1.0.4.dev3882). No change. 
> > > > Along the way - mindful that maybe the tests were broken - I tried out a > > simple line of weave:: > > > > >>> a = 1; weave.inline('printf("%d\\n",a);',['a']) > > > > which gave:: > > > > Traceback (most recent call last): > > File "", line 1, in > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/inline_tools.py", > > line 325, in inline > > results = > > attempt_function_call(code,local_dict,global_dict) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/inline_tools.py", > > line 375, in attempt_function_call > > function_list = > > function_catalog.get_functions(code,module_dir) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/catalog.py", > > line 611, in get_functions > > function_list = self.get_cataloged_functions(code) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/catalog.py", > > line 524, in get_cataloged_functions > > cat = get_catalog(path,mode) > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/weave/catalog.py", > > line 294, in get_catalog > > or os.path.exists(catalog_file): > > File > > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/posixpath.py", > > line 171, in exists > > st = os.stat(path) > > TypeError: coercing to Unicode: need string or buffer, NoneType found > > > > > > Any ideas on what to try next? (Technical details: MacOSX 10.4.10 Intel > > macBook, gcc 4.0.1.) > > > > > > -- > > Dr Paul-Michael Agapow: VieDigitale / Inst. for Animal Health > > pma at viedigitale.com / paul-michael.agapow at bbsrc.ac.uk > > Was a resolution to these problems ever found? I'm having exactly the > same problems on a debian box running numpy 1.0.3 and scipy > 0.5.3.dev2698, and python 2.4.4. Any suggestions to get things going > would be appreciated. 
> > Thanks, Never mind, I've answered my own question. In case anyone else is in need of the solution, the problem is that weave is expecting to find a folder ~/.python24_compiled/m1, but this doesn't exist and doesn't get created automatically. After manually creating the folder, the inline print command ran okay. The other two failed tests are still present, but clearly represent a separate problem. Cheers, A. -- AJC McMorland, PhD Student Physiology, University of Auckland From matthieu.brucher at gmail.com Sun Aug 19 03:53:24 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 19 Aug 2007 09:53:24 +0200 Subject: [SciPy-user] [ndimage] Interpolation question In-Reply-To: <20070818213139.GP2977@mentat.za.net> References: <20070818213139.GP2977@mentat.za.net> Message-ID: > > If it weren't, you would be seeing black holes in the output images. > Thanks, that was what I thought, but I couldn't find a reference where it says it in black and white. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From pepe_kawumi at yahoo.co.uk Mon Aug 20 07:44:34 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Mon, 20 Aug 2007 11:44:34 +0000 (GMT) Subject: [SciPy-user] finding number of elements in a vector Message-ID: <568110.43053.qm@web27712.mail.ukl.yahoo.com> Hi, say i have a vector [3,5,0,9,0,8,9] is there any method in python i can use to find out how many non-zero vector elements are in this list? ___________________________________________________________ NEW Yahoo! Cars - sell your car and browse thousands of new and used cars online! http://uk.cars.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robfalck at gmail.com Mon Aug 20 08:04:10 2007 From: robfalck at gmail.com (Rob Falck) Date: Mon, 20 Aug 2007 08:04:10 -0400 Subject: [SciPy-user] finding number of elements in a vector In-Reply-To: <568110.43053.qm@web27712.mail.ukl.yahoo.com> References: <568110.43053.qm@web27712.mail.ukl.yahoo.com> Message-ID: You could use a list comprehension to return a list containing only the non-zero elements of the list, and then use the built-in len function to return its length. mylist = [3,5,0,9,0,8,9] num_non_zero = len([x for x in mylist if x != 0]) Or the count method of a list could be used to get the number of elements that do equal zero. num_non_zero = len(mylist) - mylist.count(0) On 8/20/07, Perez Kawumi wrote: > > Hi, > say i have a vector [3,5,0,9,0,8,9] > is there any method in python i can use to find out how many non-zero > vector elements are in this list? > > ------------------------------ > To help you stay safe and secure online, we've developed the all new *Yahoo! > Security Centre* > . > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Mon Aug 20 09:39:14 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 20 Aug 2007 15:39:14 +0200 Subject: [SciPy-user] finding number of elements in a vector In-Reply-To: <568110.43053.qm@web27712.mail.ukl.yahoo.com> References: <568110.43053.qm@web27712.mail.ukl.yahoo.com> Message-ID: <46C99982.5040800@gmx.net> Perez Kawumi wrote: > Hi, > say i have a vector [3,5,0,9,0,8,9] > is there any method in python i can use to find out how many non-zero > vector elements are in this list? 
> from numpy import array x = array([3,5,0,9,0,8,9]) x[x!=0].shape[0] or len(x[x!=0]) or x.nonzero()[0].shape[0] -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From openopt at ukr.net Mon Aug 20 09:54:59 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 20 Aug 2007 16:54:59 +0300 Subject: [SciPy-user] finding number of elements in a vector In-Reply-To: <46C99982.5040800@gmx.net> References: <568110.43053.qm@web27712.mail.ukl.yahoo.com> <46C99982.5040800@gmx.net> Message-ID: <46C99D33.1030609@ukr.net> Steve Schmerler wrote: > Perez Kawumi wrote: > >> Hi, >> say i have a vector [3,5,0,9,0,8,9] >> is there any method in python i can use to find out how many non-zero >> vector elements are in this list? >> >> > > from numpy import array > > x = array([3,5,0,9,0,8,9]) > > x[x!=0].shape[0] > or > len(x[x!=0]) > or > x.nonzero()[0].shape[0] > one more: x[x!=0].size D. From pepe_kawumi at yahoo.co.uk Mon Aug 20 10:13:05 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Mon, 20 Aug 2007 14:13:05 +0000 (GMT) Subject: [SciPy-user] (no subject) Message-ID: <680938.19835.qm@web27714.mail.ukl.yahoo.com> Hi all, Is there a command in python that sorts the elements of a vector in ascending order? Thanks ___________________________________________________________ To help you stay safe and secure online, we've developed the all new Yahoo! Security Centre. http://uk.security.yahoo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pepe_kawumi at yahoo.co.uk Mon Aug 20 10:17:05 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Mon, 20 Aug 2007 14:17:05 +0000 (GMT) Subject: [SciPy-user] error messages using ''from scipy import*'' Message-ID: <985995.48831.qm@web27701.mail.ukl.yahoo.com> Hi, I'm new to python. Just want to find out why I get a lot of messages (sort of like error messages) in red each time I run my program. 
The program runs and compiles fine but just want to know if there is a way I can stop that. Thanks Perez ___________________________________________________________ Want ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Mon Aug 20 10:16:38 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 20 Aug 2007 17:16:38 +0300 Subject: [SciPy-user] (no subject) In-Reply-To: <680938.19835.qm@web27714.mail.ukl.yahoo.com> References: <680938.19835.qm@web27714.mail.ukl.yahoo.com> Message-ID: <46C9A246.4070701@ukr.net> Perez Kawumi wrote: > Hi all, > Is there a command in python that sorts the elements of a vector in > ascending order? > Thanks > > ------------------------------------------------------------------------ > Yahoo! Messenger > > NEW - crystal clear PC to PC calling worldwide with voicemail > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > x = array([3,5,0,9,0,8,9]) x.sort() >>> x array([0, 0, 3, 5, 8, 9, 9]) D From jr at sun.ac.za Mon Aug 20 10:19:46 2007 From: jr at sun.ac.za (Johann Rohwer) Date: Mon, 20 Aug 2007 16:19:46 +0200 Subject: [SciPy-user] (no subject) In-Reply-To: <680938.19835.qm@web27714.mail.ukl.yahoo.com> References: <680938.19835.qm@web27714.mail.ukl.yahoo.com> Message-ID: <200708201619.46133.jr@sun.ac.za> On Monday, 20 August 2007, Perez Kawumi wrote: > Hi all, > Is there a command in python that sorts the elements of a vector in > ascending order? 
Thanks In [9]: a=numpy.array([3.,2,4,1]) In [10]: a Out[10]: array([ 3., 2., 4., 1.]) In [11]: a.sort() In [12]: a Out[12]: array([ 1., 2., 3., 4.]) Regards Johann From skraelings001 at gmail.com Mon Aug 20 10:33:15 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Mon, 20 Aug 2007 09:33:15 -0500 Subject: [SciPy-user] error messages using ''from scipy import*'' In-Reply-To: <985995.48831.qm@web27701.mail.ukl.yahoo.com> References: <985995.48831.qm@web27701.mail.ukl.yahoo.com> Message-ID: <46C9A62B.2060705@gmail.com> Perez Kawumi wrote: > Hi, > I'm new to python. Just want to find out why I get a lot of > messages (sort of like error messages) in red each time I run my > program. The program runs and compiles fine but just want to know if > there is a way I can stop that. > Thanks Perez Hi Perez, Would you mind giving us more information, so we can help you out? Maybe you could post the error messages you get, the code you are running. Also, why don't you read this too: http://catb.org/~esr/faqs/smart-questions.html Reynaldo From elcorto at gmx.net Mon Aug 20 10:42:27 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 20 Aug 2007 16:42:27 +0200 Subject: [SciPy-user] sorting [WAS: (no subject)] In-Reply-To: <680938.19835.qm@web27714.mail.ukl.yahoo.com> References: <680938.19835.qm@web27714.mail.ukl.yahoo.com> Message-ID: <46C9A853.5000601@gmx.net> Perez Kawumi wrote: > Hi all, > Is there a command in python that sorts the elements of a vector in > ascending order? > Thanks > In plain Python, if you use lists you can use the list method sort(), as mentioned in the Python tutorial: http://docs.python.org/tut/node7.html#l2h-12 lst = [2,4,1,6,2,3] lst.sort() However, for numerical operations, you want to use numpy arrays rather than lists (much faster). There is a lot of documentation at http://www.scipy.org/Documentation. A good starting point is e.g. http://www.scipy.org/Numpy_Example_List. 
With the text search function of your web browser, you can search for "sort" and will find examples on the usage of e.g. numpy.sort(). If you use the IPython interactive shell (http://ipython.scipy.org/moin/), you can do numpy.*sort*? to find all functions which have the pattern "sort" in their name, or simply browse the online help of numpy/scipy with help(numpy) or help(scipy) in any Python interactive session. PS: Please use a Mail Subject :) -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From smithsm at samuelsmith.org Mon Aug 20 11:25:07 2007 From: smithsm at samuelsmith.org (Samuel M. Smith) Date: Mon, 20 Aug 2007 09:25:07 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C5F7BB.9090802@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C5F7BB.9090802@gmail.com> Message-ID: <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> > > It's a wiki, so the set of maintainers contains you, too. :-) In > particular, > people who have found problems and solved them are often the best > people to > write documentation about how to avoid such problems. If you have > the time, we > would appreciate your fixes to that page. > After spending about 10 minutes I couldn't find anywhere to get a password that lets me edit the wiki. My mailing list user/password doesn't work? 
From skraelings001 at gmail.com Mon Aug 20 11:31:03 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Mon, 20 Aug 2007 10:31:03 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C5F7BB.9090802@gmail.com> <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> Message-ID: <46C9B3B7.8040304@gmail.com> Samuel M. Smith wrote: >> It's a wiki, so the set of maintainers contains you, too. :-) In >> particular, >> people who have found problems and solved them are often the best >> people to >> write documentation about how to avoid such problems. If you have >> the time, we >> would appreciate your fixes to that page. >> >> > > After spending about 10 minutes I couldn't find anywhere to get a > password that lets me > edit the wiki. My mailing list user/password doesn't work? > > http://www.scipy.org/UserPreferences From skraelings001 at gmail.com Mon Aug 20 11:33:10 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Mon, 20 Aug 2007 10:33:10 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C5F7BB.9090802@gmail.com> <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> Message-ID: <46C9B436.6040302@gmail.com> Samuel M. Smith wrote: >> It's a wiki, so the set of maintainers contains you, too. 
:-) In >> particular, >> people who have found problems and solved them are often the best >> people to >> write documentation about how to avoid such problems. If you have >> the time, we >> would appreciate your fixes to that page. >> >> > > After spending about 10 minutes I couldn't find anywhere to get a > password that lets me > edit the wiki. My mailing list user/password doesn't work? > > Sorry for the previous post, maybe this: http://projects.scipy.org/scipy/scipy/register Cheers, Reynaldo From skraelings001 at gmail.com Mon Aug 20 11:40:55 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Mon, 20 Aug 2007 10:40:55 -0500 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C9B3B7.8040304@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C5F7BB.9090802@gmail.com> <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> <46C9B3B7.8040304@gmail.com> Message-ID: <46C9B607.80600@gmail.com> Reynaldo Baquerizo wrote: > Samuel M. Smith wrote: > >>> It's a wiki, so the set of maintainers contains you, too. :-) In >>> particular, >>> people who have found problems and solved them are often the best >>> people to >>> write documentation about how to avoid such problems. If you have >>> the time, we >>> would appreciate your fixes to that page. >>> >>> >>> >> After spending about 10 minutes I couldn't find anywhere to get a >> password that lets me >> edit the wiki. My mailing list user/password doesn't work? 
>> >> This one is for Scipy wiki : > http://www.scipy.org/UserPreferences > and this one for Scipy Development wiki : > http://projects.scipy.org/scipy/scipy/register Sorry for my confusion Reynaldo From roger.herikstad at gmail.com Mon Aug 20 12:23:24 2007 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Tue, 21 Aug 2007 00:23:24 +0800 Subject: [SciPy-user] Sphericity test..or some such Message-ID: hi all! I was wondering if there is an implementation in python of a sphericity test? Basically, I have a bunch of eigenvalues, some corresponding to noise, some to signals, and I need a fast, automatic algorithm to decide which are which. Anyone have any advice? ~ Roger From smithsm at samuelsmith.org Mon Aug 20 13:35:56 2007 From: smithsm at samuelsmith.org (Samuel M. Smith) Date: Mon, 20 Aug 2007 11:35:56 -0600 Subject: [SciPy-user] building scipy from source on Mac Os X 10.4 ppc In-Reply-To: <46C9B607.80600@gmail.com> References: <47BF11B0-202D-4D97-8D65-C1D8A2051F9E@samuelsmith.org> <46C49424.8000001@gmail.com> <9413F644-77F4-4C55-8D59-3064950A8376@samuelsmith.org> <46C53E61.102@gmail.com> <3FB649F4-6C8D-4E8E-83E7-397747774665@samuelsmith.org> <46C5CAB0.7040106@gmail.com> <46C5F7BB.9090802@gmail.com> <48865490-501B-4174-929E-A56269DD373C@samuelsmith.org> <46C9B3B7.8040304@gmail.com> <46C9B607.80600@gmail.com> Message-ID: I posted changes to the Wiki with link to the r gfortran. On 20 Aug 2007, at 09:40 , Reynaldo Baquerizo wrote: > Reynaldo Baquerizo wrote: >> Samuel M. Smith wrote: >> >>>> It's a wiki, so the set of maintainers contains you, too. :-) In >>>> particular, >>>> people who have found problems and solved them are often the best >>>> people to >>>> write documentation about how to avoid such problems. If you have >>>> the time, we >>>> would appreciate your fixes to that page. >>>> >>>> >>>> >>> After spending about 10 minutes I couldn't find anywhere to get a >>> password that lets me >>> edit the wiki. 
My mailing list user/password doesn't work? >>> >>> > This one is for Scipy wiki : >> http://www.scipy.org/UserPreferences >> > and this one for Scipy Development wiki : >> http://projects.scipy.org/scipy/scipy/register > > Sorry for my confusion > > Reynaldo > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ********************************************************************** Samuel M. Smith Ph.D. 2966 Fort Hill Road Eagle Mountain, Utah 84005-4108 801-768-2768 voice 801-768-2769 fax ********************************************************************** "The greatest source of failure and unhappiness in the world is giving up what we want most for what we want at the moment" ********************************************************************** From pepe_kawumi at yahoo.co.uk Tue Aug 21 08:08:15 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Tue, 21 Aug 2007 12:08:15 +0000 (GMT) Subject: [SciPy-user] Boolean Algebra Message-ID: <972246.5680.qm@web27713.mail.ukl.yahoo.com> Hi, Can someone please tell me where I can find any information regarding how boolean algebra is used in python. This is the piece of code im trying to write I have just used dummy variables for the matrices and the other different values. Just want to know if there is anything I should watch out for when I get the actual values for the different matrices coz they will most probably be floating point numbers. And I'm trying to return dof_flag at the end. I initially want all the elements of the vector dof_flag = 1. 
Is this the right way of doing this or is there any other smarter way of doing this coz I want to return dof_flag as a vector at the end Thanks dof_flag = [] dof_flag = ones((1,5)) NUM_EDGES = 2 b=1 print dof_flag node1 =[] print node1 node2 =[] print node2 EDGES =ones((3,4)) NODE_COORD =ones((3,4)) eps = 3 print EDGES print NODE_COORD for i_edge in range(0, NUM_EDGES): node1 = EDGES[i_edge,1] print node1 node2 = EDGES[i_edge,2] print node2 #i.e. y =0 if (abs(NODE_COORD[node1,2]) < eps) & (abs(NODE_COORD[node2,2]) < eps): dof_flag[i_edge] = 0 #i.e. y =b if abs((NODE_COORD[node1,2])-b) < eps & abs((NODE_COORD[node2,2])-b) < eps: dof_flag[i_edge] = 0 #i.e. x =0 if abs(NODE_COORD[node1,1]) < eps & abs(NODE_COORD[node2,1]) < eps: dof_flag[i_edge] = 0 #i.e. x =a if abs((NODE_COORD(node1,1))-a) < eps & abs((NODE_COORD(node2,1))-a) < eps: dof_flag[i_edge] = 0 From pepe_kawumi at yahoo.co.uk Tue Aug 21 08:16:34 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Tue, 21 Aug 2007 12:16:34 +0000 (GMT) Subject: [SciPy-user] Boolean Algebra Message-ID: <372223.76767.qm@web27707.mail.ukl.yahoo.com> Hi, sorry, forgot to declare a value for a. Thanks ___________________________________________________________ Want ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmarais at sun.ac.za Tue Aug 21 08:41:11 2007 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 21 Aug 2007 12:41:11 +0000 (UTC) Subject: [SciPy-user] Interpolation polynomials Message-ID: Hi, I'm looking for functions that evaluate to the Lagrangian interpolation polynomials, i.e. L_i = \prod_{j=0..p, j!=i}(x-Xj)/(Xi-Xj) where Xi are the p+1 interpolation points. I.e., I'd like to pass in p+1 interpolation points and get back p+1 polynomials L_i that are each zero at all interpolation points except Xi where the value is 1. Is there an easy way to construct these polynomials? 
Thanks Neilen From matthieu.brucher at gmail.com Tue Aug 21 09:05:39 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 21 Aug 2007 15:05:39 +0200 Subject: [SciPy-user] Interpolation polynomials In-Reply-To: References: Message-ID: Hi, You can create p+1 polynomials with the poly() function, and then scale them by evaluating each of them at the remaining point. Matthieu 2007/8/21, Neilen Marais : > > Hi, > > I'm looking for functions that evaluate to the Lagrangian interpolation > polynomials, i.e. > > L_i = \prod_{j=0..p, j!=i}(x-Xj)/(Xi-Xj) > > where Xi are the p+1 interpolation points. I.e., I'd like to pass in p+1 > interpolation points and get back p+1 polynomials L_i that are each zero > at > all interpolation points except Xi where the value is 1. Is there an easy > way to construct these polynomials? > > Thanks > Neilen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brainerd at MIT.EDU Tue Aug 21 09:28:20 2007 From: brainerd at MIT.EDU (Andrew Brainerd) Date: Tue, 21 Aug 2007 09:28:20 -0400 Subject: [SciPy-user] Mathieu Function Bug? Message-ID: <003b01c7e3f7$24771640$5305fea9@ANDREW> Hi, I've been working with the angular Mathieu functions in scipy.special, and I've come across what seems like a bug. For n = 2 and q < 0.0025, mathieu_cem(n,q,x) seems to be evaluating the n = 0 case instead. The following code makes the bug clear: import numpy import pylab from scipy.special import mathieu_cem s = numpy.arange(0,180,1) t = mathieu_cem(0,0.001,s)[0] pylab.plot(s,t) t = mathieu_cem(2,0.001,s)[0] pylab.plot(s,t) t = mathieu_cem(2,0.003,s)[0] pylab.plot(s,t) pylab.show() It would seem like the second and third plots there should be almost the same (slight variations of cos(2x)) but instead, the first and second plots are almost the same. 
Is this a known bug, something that has been fixed recently, some bug in my own installation, or what? I've managed to get around the problem in my case by just using a series approximation for small q and n = 2, so I don't need an immediate fix or anything like that. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnandris at btinternet.com Tue Aug 21 10:00:45 2007 From: mnandris at btinternet.com (Michael Nandris) Date: Tue, 21 Aug 2007 15:00:45 +0100 (BST) Subject: [SciPy-user] Bug in scipy.stats.rv_discrete is on line 3218 In-Reply-To: <20070818112154.GN2977@mentat.za.net> Message-ID: <955846.49290.qm@web86510.mail.ird.yahoo.com> Stefan, I have traced the bug to line 3218 of distributions.py. The problem is with the operation of argmax in the _drv_ppf handler. It can be solved by pulling the correct state out of self.Finv (using a probability, as generated by mtrand.random_sample), in an order-preserving manner. As soon as Trac starts working I'll post some more demo test code. Cheers, Michael Stefan van der Walt wrote: Hi Michael Thanks for the report. This should be fixed in SVN revision 3246. Cheers Stéfan On Tue, Aug 14, 2007 at 01:08:45PM +0100, Michael Nandris wrote: > Hi, > > I think there may be an issue with the way rv_discrete orders its output when > it encounters zeros in the input, causing non-zero probabilities to creep up > towards the end of the output. If you are attempting to track the accumulation > of states in an n-state Markov chain, this is a problem! > > I have had a look at the API but can't figure it. Any help at fixing this would > be much appreciated. > > regards > > M.Nandris _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mhearne at usgs.gov Tue Aug 21 11:20:22 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 21 Aug 2007 09:20:22 -0600 Subject: [SciPy-user] colormap model Message-ID: <3DB2D107-BF98-4619-B5D6-99AF6927E3A9@usgs.gov> Is there any documentation anywhere for matplotlib's colormap model? I'm confused, for example, by the "jet" entry in _cm.py: Why are there 5 tuples describing red and blue, but 6 describing green? I'd like to create my own colormaps, but I'm uncertain what the data describing the existing ones actually means. Thanks, Mike Hearne ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Tue Aug 21 11:27:18 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 21 Aug 2007 09:27:18 -0600 Subject: [SciPy-user] Boolean Algebra In-Reply-To: <972246.5680.qm@web27713.mail.ukl.yahoo.com> References: <972246.5680.qm@web27713.mail.ukl.yahoo.com> Message-ID: <5E4CEFB8-38CF-4CC7-B3AB-8836160CE331@usgs.gov> Perez - I actually can't find any decent on-line tutorials describing this, but booleans in Python are not done using "&" and "|" symbols, but the words "and" and "or". The library reference has this page: http://docs.python.org/lib/boolean.html You may want to buy an introductory Python book - I have OReilly's "Learning Python", and it's pretty good for figuring out the basics. --Mike Hearne On Aug 21, 2007, at 6:08 AM, Perez Kawumi wrote: > Hi, > Can someone please tell me where I can find any information > regarding how boolean algebra is used in python. This is the piece > of code im trying to write I have just used dummy variables for the > matrices and the other different values. 
Just want to know if there > is anything I should watch out for when I get the actual values for > the different matrices coz they will most probably be floating > point numbers. > > And I'm trying to return dof_flag at the end. I initially want all > the elements of the vector dof_flag = 1. Is this the right way of > doing this or is there any other smarter way of doing this coz I > want to return dof_flag as a vector at the end > Thanks > > dof_flag = [] > dof_flag = ones((1,5)) > NUM_EDGES = 2 > b=1 > print dof_flag > node1 =[] > print node1 > node2 =[] > print node2 > EDGES =ones((3,4)) > NODE_COORD =ones((3,4)) > eps = 3 > print EDGES > print NODE_COORD > > for i_edge in range(0, NUM_EDGES): > node1 = EDGES[i_edge,1] > print node1 > node2 = EDGES[i_edge,2] > print node2 > #i.e. y =0 > if (abs(NODE_COORD[node1,2]) < eps) & (abs(NODE_COORD[node2,2]) < eps): > dof_flag[i_edge] = 0 > > #i.e. y =b > if abs((NODE_COORD[node1,2])-b) < eps & abs((NODE_COORD[node2,2])-b) < eps: > dof_flag[i_edge] = 0 > #i.e. x =0 > if abs(NODE_COORD[node1,1]) < eps & abs(NODE_COORD[node2,1]) < eps: > dof_flag[i_edge] = 0 > #i.e. x =a > if abs((NODE_COORD(node1,1))-a) < eps & abs((NODE_COORD(node2,1))-a) < eps: > dof_flag[i_edge] = 0 > > > > > To help you stay safe and secure online, we've developed the all > new Yahoo! Security Centre. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From aisaac at american.edu Tue Aug 21 11:45:18 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 21 Aug 2007 11:45:18 -0400 Subject: [SciPy-user] Boolean Algebra In-Reply-To: <5E4CEFB8-38CF-4CC7-B3AB-8836160CE331@usgs.gov> References: <972246.5680.qm@web27713.mail.ukl.yahoo.com><5E4CEFB8-38CF-4CC7-B3AB-8836160CE331@usgs.gov> Message-ID: On Tue, 21 Aug 2007, Michael Hearne apparently wrote: > I actually can't find any decent on-line tutorials > describing this, but booleans in Python are not done using > "&" and "|" symbols, but the words "and" and "or". This might not be the best answer to the OP's question. See below. Note that numpy's boolean matrices also behave very nicely. Cheers, Alan Isaac >>> x= numpy.random.random((10,)) >>> y= numpy.random.random((10,)) >>> z1=x>0.1 >>> z2=x<0.2 >>> z1|z2 array([True, True, True, True, True, True, True, True, True, True], dtype=bool) >>> z1 & z2 array([True, False, False, False, False, False, False, False, False, False], dtype=bool) >>> z1 or z2 Traceback (most recent call last): File "", line 1, in ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() >>> From fredmfp at gmail.com Tue Aug 21 11:49:10 2007 From: fredmfp at gmail.com (fred) Date: Tue, 21 Aug 2007 17:49:10 +0200 Subject: [SciPy-user] colormap model In-Reply-To: <3DB2D107-BF98-4619-B5D6-99AF6927E3A9@usgs.gov> References: <3DB2D107-BF98-4619-B5D6-99AF6927E3A9@usgs.gov> Message-ID: <46CB0976.9040406@gmail.com> Michael Hearne wrote: > Is there any documentation anywhere for matplotlib's colormap model? > I'm confused, for example, by the "jet" entry in _cm.py: > > Why are there 5 tuples describing red and blue, but 6 describing green? > > I'd like to create my own colormaps, but I'm uncertain what the data > describing the existing ones actually means. > You can look at the scipy cookbook, in the section matplotlib IIRC, but I can't access it for the moment (scipy.org down?)
-- http://scipy.org/FredericPetit From skraelings001 at gmail.com Tue Aug 21 12:00:47 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Tue, 21 Aug 2007 11:00:47 -0500 Subject: [SciPy-user] colormap model In-Reply-To: <46CB0976.9040406@gmail.com> References: <3DB2D107-BF98-4619-B5D6-99AF6927E3A9@usgs.gov> <46CB0976.9040406@gmail.com> Message-ID: <46CB0C2F.5090709@gmail.com> fred wrote: > Michael Hearne wrote: > >> Is there any documentation anywhere for matplotlib's colormap model? >> I'm confused, for example, by the "jet" entry in _cm.py: >> >> Why are there 5 tuples describing red and blue, but 6 describing green? >> >> I'd like to create my own colormaps, but I'm uncertain what the data >> describing the existing ones actually means. >> >> > You can look at the scipy cookbook, in the section matplotlib IIRC, > but I can't access it for the moment (scipy.org down ?) > Yes, it's down. Check it on the cached page http://64.233.169.104/search?q=cache:2Sj6bSG_7hUJ:www.scipy.org/Cookbook/Matplotlib/Show_colormaps+show+colormaps,+matplotlib&hl=en&ct=clnk&cd=1 From pwang at enthought.com Tue Aug 21 12:12:32 2007 From: pwang at enthought.com (Peter Wang) Date: Tue, 21 Aug 2007 11:12:32 -0500 Subject: [SciPy-user] colormap model In-Reply-To: <3DB2D107-BF98-4619-B5D6-99AF6927E3A9@usgs.gov> References: <3DB2D107-BF98-4619-B5D6-99AF6927E3A9@usgs.gov> Message-ID: <53B253D4-07C6-49DA-841D-E59EA9141755@enthought.com> On Aug 21, 2007, at 10:20 AM, Michael Hearne wrote: > Is there any documentation anywhere for matplotlib's colormap > model? I'm confused, for example, by the "jet" entry in _cm.py: > > Why are there 5 tuples describing red and blue, but 6 describing > green? > > I'd like to create my own colormaps, but I'm uncertain what the > data describing the existing ones actually means. These are documented in the docstrings for makeMappingArray (colors.py:332) and in Colormap.__call__ (colors.py:409).
Basically each color channel (red, green, blue) has its own set of linearly interpolated segments, with control points spanning the range (0.0, 1.0). In the case of LinearSegmentedColormap, these control points are used to build a 256-entry table with an RGB color in each row. -Peter From nmarais at sun.ac.za Wed Aug 22 08:50:51 2007 From: nmarais at sun.ac.za (Neilen Marais) Date: Wed, 22 Aug 2007 12:50:51 +0000 (UTC) Subject: [SciPy-user] Interpolation polynomials References: Message-ID: Hi On Tue, 21 Aug 2007 15:05:39 +0200, Matthieu Brucher wrote: > Hi, > > You can create p+1 polynomials with the poly() function, and then scale > them by evaluating each of them at the remaining point. I'm generating them like this at the moment:

def gen_lagrange_polys(points):
    def make_poly(int_pt, zero_pts):
        return N.poly1d(N.poly(zero_pts)/N.multiply.reduce(
            [int_pt - p for p in zero_pts]))
    return [make_poly(pi, [pz for pz in points if pz != pi]) for pi in points]

This gives the correct scaling, which means N.poly is exactly equivalent to the product of (x - Zi) where Zi are the desired zeros. The problem I have is that because the polynomials are represented i.t.o. monomial coefficients, they don't evaluate to exactly zero at Zi which is quite important for what I want to do. Does scipy/numpy have an alternate polynomial representation based on the product of zeros rather than monomial coefficients? If not, is there a better way to do this than generating code to do this?
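One option that avoids monomial coefficients altogether is to evaluate the product form directly; by construction such a basis function is exactly 0.0 at every other node (one factor is exactly zero) and exactly 1.0 at its own node (every factor is exactly one). A minimal plain-Python sketch, not an existing numpy API:

```python
def lagrange_basis(points, i, x):
    """Evaluate the i-th Lagrange basis polynomial at x using the
    product form L_i(x) = prod_{j != i} (x - X_j) / (X_i - X_j)."""
    val = 1.0
    for j, pj in enumerate(points):
        if j != i:
            val *= (x - pj) / (points[i] - pj)
    return val
```

At x == points[j] with j != i, the factor (x - pj) is exactly zero, so the result is exactly 0.0 in floating point, with no cancellation error from expanded coefficients.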
Thanks Neilen > Matthieu From matthieu.brucher at gmail.com Wed Aug 22 09:04:52 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 22 Aug 2007 15:04:52 +0200 Subject: [SciPy-user] Interpolation polynomials In-Reply-To: References: Message-ID: > > def gen_lagrange_polys(points): > def make_poly(int_pt, zero_pts): > return N.poly1d(N.poly(zero_pts)/N.multiply.reduce( > [int_pt - p for p in zero_pts])) > return [make_poly(pi, [pz for pz in points if pz != pi]) for pi in > points] I suppose you can do better this way (not sure though):

def make_poly(int_pt, zero_pts):
    p = N.poly(zero_pts)
    return N.poly1d(N.polydiv(p, N.polyval(p, int_pt))[0])

This gives the correct scaling, which means N.poly is exactly equivalent to > the product of (x - Zi) where Zi are the desired zeros. The problem I have > is that because the polynomials are represented i.t.o. monomial > coefficients, they don't evaluate to exactly zero at Zi which is quite > important for what I want to do. > > Does scipy/numpy have an alternate > polynomial representation based on the product of zeros rather than > monomial coefficients? If not, is there a better way to do this than > generating code to do this? The poly1d returns a new object with the roots, but the value is computed with polyval :( I don't know if there is another function to evaluate a polynomial. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Wed Aug 22 11:49:04 2007 From: fredmfp at gmail.com (fred) Date: Wed, 22 Aug 2007 17:49:04 +0200 Subject: [SciPy-user] choose at random all elements in a array... In-Reply-To: References: <46B4513C.5080108@gmail.com> Message-ID: <46CC5AF0.5070405@gmail.com> Emanuele Zattin wrote: > mmm... what about something like: > > random.shuffle(A.ravel()) > Hi all, I would like to show how I use this, with a result, which seems to be strange to me. I have a float array, with dimensions of 501x501.
The values in this array are the scalar values at the point (x=i*dx, y=j*dy) with i=0..500 & j=0..500. I flatten this array and get 15000 random (with the random.permutation method) values from this array and thus get 15000 points. Then I want to find out the number of neighbours of each of these points, in a neighbourhood of (2*20+1)x(2*20+1). Thus, I get an array with coords and the number of neighbours. x0 y0 nb0 x1 y1 nb1 ... I display the result with mayavi2, and get the following snapshot: http://fredantispam.free.fr/snapshot.png Structures clearly appear. (I don't consider side effects, of course). What could be the meaning of this? If it was "totally random", should I not get no structure at all, something like white noise? What am I doing wrong? How can I get totally random scatter points? TIA. Cheers, -- http://scipy.org/FredericPetit From peridot.faceted at gmail.com Wed Aug 22 13:50:16 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 22 Aug 2007 13:50:16 -0400 Subject: [SciPy-user] Interpolation polynomials In-Reply-To: References: Message-ID: On 22/08/07, Matthieu Brucher wrote: > > Does scipy/numpy have an alternate > > polynomial representation based on the product of zeros rather than > > monomial coefficients? If not, is there a better way to do this than > > generating code to do this? > > The poly1d returns a new object with the roots, but the value is computed > with polyval :( I don't know if there is another function to evaluate a > polynomial. In this particular context - the interpolating polynomial - there is an algorithm for evaluating it more-or-less directly. In Numerical Recipes in C (which used to be freely available; they now need some kind of peculiar plugin to read their book), they describe Neville's algorithm for computing values of the interpolating polynomial.
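For reference, Neville's algorithm evaluates the interpolating polynomial at a point directly from the data points, combining lower-order interpolants pairwise, without ever forming monomial coefficients. A minimal sketch (my own transcription of the textbook scheme, not Numerical Recipes' code):

```python
def neville(xs, ys, x):
    """Value at x of the polynomial interpolating the points (xs[i], ys[i])."""
    p = list(ys)   # p[i] starts as the degree-0 interpolant through point i
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            # combine the interpolants over xs[i:i+k+1] (Neville's recurrence)
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]
```

For example, interpolating (0,0), (1,1), (2,4) recovers x**2, and evaluating at a node returns the tabulated value exactly.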
Anne From lbolla at gmail.com Thu Aug 23 03:46:32 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 23 Aug 2007 09:46:32 +0200 Subject: [SciPy-user] bug in ARPACK from scipy.sandbox? Message-ID: <80c99e790708230046r35fc6a69v845e755e317f5d1f@mail.gmail.com> I have problems with the ARPACK wrappers in scipy.sandbox. take a look at this snippet of code. --------------------------------------------------------------------------------------------------------- In [295]: A = numpy.array([[1,2,3,4],[0,2,3,4],[0,0,3,4],[0,0,0,4]], dtype=float) In [296]: A Out[296]: array([[ 1., 2., 3., 4.], [ 0., 2., 3., 4.], [ 0., 0., 3., 4.], [ 0., 0., 0., 4.]]) In [297]: [w,v] = arpack.eigen(A,2) In [298]: w Out[298]: array([ 4.+0.j, 3.+0.j, 0.+0.j]) --> WRONG: I get 3 eigenvalues instead of two! In [299]: v Out[299]: array([[ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j]]) --> WRONG: all the eigenvectors are null! In [300]: [w,v] = arpack.eigen(A.astype(numpy.complex),2) In [301]: w Out[301]: array([ 4. -2.41126563e-15j, 3. +1.34425147e-15j]) --> CORRECT: casting the matrix to complex type gives the correct result and the correct numbers of eigenvalues In [302]: v Out[302]: array([[ -1.37221970e-01-0.75187207j, -7.50019180e-01-0.32694452j], [ -1.02916477e-01-0.56390405j, -5.00012787e-01-0.21796301j], [ -5.14582387e-02-0.28195202j, -1.66670929e-01-0.07265434j], [ -1.28645597e-02-0.07048801j, 2.49800181e-16+0.j ]]) --> MAYBE: and the eigenvectors are not null, but... In [303]: [w,v] = arpack.eigen(A.astype(numpy.complex128),2) In [304]: w Out[304]: array([ 4. +7.28583860e-16j, 3. 
+2.23881966e-16j]) In [305]: v Out[305]: array([[ -6.65958925e-01 -3.75020242e-01j, 8.08076904e-01 +1.28192062e-01j], [ -4.99469194e-01 -2.81265182e-01j, 5.38717936e-01 +8.54613743e-02j], [ -2.49734597e-01 -1.40632591e-01j, 1.79572645e-01 +2.84871248e-02j], [ -6.24336492e-02 -3.51581477e-02j, -5.20417043e-16 -2.39391840e-16j]]) --> WRONG: casting to a complex128 changes the values of the eigenvectors!!! --------------------------------------------------------------------------------------------------------- in any case, the result for the eigenvectors are different than Matlab (while the eigenvalues are ok): v = -8.181818181818171e-001 7.642914835078907e-001 -5.454545454545460e-001 5.732186126309180e-001 -1.818181818181832e-001 2.866093063154587e-001 -6.938893903907228e-018 7.165232657886386e-002 w = 3.000000000000010e+000 0 0 3.999999999999995e+000 Can someone explain me what's wrong? Thanks in advance, lorenzo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Thu Aug 23 04:42:19 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 23 Aug 2007 10:42:19 +0200 Subject: [SciPy-user] bug in ARPACK from scipy.sandbox? In-Reply-To: <80c99e790708230046r35fc6a69v845e755e317f5d1f@mail.gmail.com> References: <80c99e790708230046r35fc6a69v845e755e317f5d1f@mail.gmail.com> Message-ID: <80c99e790708230142t4e3b7108kabd51a39d92d5f3b@mail.gmail.com> sorry for the noise, but I think I've found the bug... this is what I changed in arpack.py to get the correct results (see the test files attached). should we commit the change to the CVS? 
-------------------------------------------------------------- $> diff -c arpack_original.py /usr/local/lib/python2.5/site-packages/scipy/sandbox/arpack/arpack.py *** arpack_original.py Thu Aug 23 10:30:44 2007 --- /usr/local/lib/python2.5/site-packages/scipy/sandbox/arpack/arpack.py Thu Aug 23 10:39:59 2007 *************** *** 201,216 **** dr=sb.zeros(k+1,typ) di=sb.zeros(k+1,typ) zr=sb.zeros((n,k+1),typ) ! dr,di,z,info=\ eigextract(rvec,howmny,sselect,sigmar,sigmai,workev, bmat,which,k,tol,resid,v,iparam,ipntr, workd,workl,info) ! # make eigenvalues complex ! d=dr+1.0j*di # futz with the eigenvectors: # complex are stored as real,imaginary in consecutive columns ! z=zr.astype(typ.upper()) for i in range(k): # fix c.c. pairs if di[i] > 0 : z[:,i]=zr[:,i]+1.0j*zr[:,i+1] --- 201,216 ---- dr=sb.zeros(k+1,typ) di=sb.zeros(k+1,typ) zr=sb.zeros((n,k+1),typ) ! dr,di,zr,info=\ eigextract(rvec,howmny,sselect,sigmar,sigmai,workev, bmat,which,k,tol,resid,v,iparam,ipntr, workd,workl,info) ! # make eigenvalues complex ! d=(dr+1.0j*di)[:k] # futz with the eigenvectors: # complex are stored as real,imaginary in consecutive columns ! z=zr.astype(typ.upper())[:,:k] for i in range(k): # fix c.c. pairs if di[i] > 0 : z[:,i]=zr[:,i]+1.0j*zr[:,i+1] -------------------------------------------------------------- lorenzo. On 8/23/07, lorenzo bolla wrote: > > I have problems with the ARPACK wrappers in scipy.sandbox. > take a look at this snippet of code. > > > --------------------------------------------------------------------------------------------------------- > > In [295]: A = numpy.array([[1,2,3,4],[0,2,3,4],[0,0,3,4],[0,0,0,4]], > dtype=float) > > In [296]: A > Out[296]: > array([[ 1., 2., 3., 4.], > [ 0., 2., 3., 4.], > [ 0., 0., 3., 4.], > [ 0., 0., 0., 4.]]) > > In [297]: [w,v] = arpack.eigen(A,2) > In [298]: w > Out[298]: array([ 4.+0.j, 3.+0.j, 0.+0.j]) > > --> WRONG: I get 3 eigenvalues instead of two! 
> > In [299]: v > Out[299]: > array([[ 0.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 0.+0.j]]) > --> WRONG: all the eigenvectors are null! > > In [300]: [w,v] = arpack.eigen(A.astype(numpy.complex),2) > > In [301]: w > Out[301]: array([ 4. -2.41126563e-15j, 3. +1.34425147e-15j]) > --> CORRECT: casting the matrix to complex type gives the correct result > and the correct numbers of eigenvalues > > In [302]: v > Out[302]: > array([[ -1.37221970e-01-0.75187207j, -7.50019180e-01-0.32694452j], > [ -1.02916477e-01-0.56390405j, -5.00012787e-01-0.21796301j], > [ -5.14582387e-02-0.28195202j, -1.66670929e-01-0.07265434j ], > [ -1.28645597e-02-0.07048801j, 2.49800181e-16+0.j ]]) > --> MAYBE: and the eigenvectors are not null, but... > > In [303]: [w,v] = arpack.eigen(A.astype(numpy.complex128),2) > > In [304]: w > Out[304]: array([ 4. +7.28583860e-16j, 3. +2.23881966e-16j]) > > In [305]: v > Out[305]: > array([[ -6.65958925e-01 -3.75020242e-01j, > 8.08076904e-01 +1.28192062e-01j], > [ -4.99469194e-01 -2.81265182e-01j, > 5.38717936e-01 +8.54613743e-02j], > [ - 2.49734597e-01 -1.40632591e-01j, > 1.79572645e-01 +2.84871248e-02j], > [ -6.24336492e-02 -3.51581477e-02j, > -5.20417043e-16 -2.39391840e-16j]]) > --> WRONG: casting to a complex128 changes the values of the > eigenvectors!!! > > > --------------------------------------------------------------------------------------------------------- > > in any case, the result for the eigenvectors are different than Matlab > (while the eigenvalues are ok): > > v = > > -8.181818181818171e-001 7.642914835078907e-001 > -5.454545454545460e-001 5.732186126309180e-001 > -1.818181818181832e-001 2.866093063154587e-001 > -6.938893903907228e-018 7.165232657886386e-002 > > > w = > > 3.000000000000010e+000 0 > 0 3.999999999999995e+000 > > Can someone explain me what's wrong? > Thanks in advance, > lorenzo. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- import numpy from scipy import sparse from scipy.sandbox import arpack A = numpy.array([[1.,2,3,4],[0,2,3,4],[0,0,3,4],[0,0,0,4]]) [w,v] = arpack.eigen(A.astype('f'),2) print w print v [w,v] = arpack.eigen(A.astype('d'),2) print w print v [w,v] = arpack.eigen(A.astype('F'),2) print w print v print numpy.abs(v) print numpy.angle(v) [w,v] = arpack.eigen(A.astype('D'),2) print w print v print numpy.abs(v) print numpy.angle(v) -------------- next part -------------- [ 3.99998975+0.j 3.00000763+0.j] [[ 7.64292002e-01+0.j 8.18181396e-01+0.j] [ 5.73218405e-01+0.j 5.45454800e-01+0.j] [ 2.86608338e-01+0.j 1.81819022e-01+0.j] [ 7.16515183e-02+0.j 4.24683094e-07+0.j]] [ 4.+0.j 3.+0.j] [[ 7.64291484e-01+0.j 8.18181818e-01+0.j] [ 5.73218613e-01+0.j 5.45454545e-01+0.j] [ 2.86609306e-01+0.j 1.81818182e-01+0.j] [ 7.16523266e-02+0.j -1.11022302e-15+0.j]] [ 4.00000143 -2.02525530e-06j 3.00000048 +3.33409253e-06j] [[ -6.78678036e-01 +3.51479203e-01j 6.90856159e-01 -4.38337117e-01j] [ -5.09008467e-01 +2.63609469e-01j 4.60570931e-01 -2.92224437e-01j] [ -2.54504293e-01 +1.31804958e-01j 1.53523907e-01 -9.74078998e-02j] [ -6.36259913e-02 +3.29513997e-02j 3.53902578e-08 +1.07567757e-07j]] [[ 7.64291525e-01 8.18181932e-01] [ 5.73218584e-01 5.45454562e-01] [ 2.86609471e-01 1.81818292e-01] [ 7.16523677e-02 1.13239977e-07]] [[ 2.6637373 -0.56539017] [ 2.66373706 -0.56538951] [ 2.66373658 -0.56538761] [ 2.66373396 1.25294697]] [ 4. +6.69343053e-15j 3. 
-9.84463195e-15j] [[ -6.78176623e-01 +3.52445654e-01j 6.90719756e-01 -4.38551828e-01j] [ -5.08632467e-01 +2.64334241e-01j 4.60479838e-01 -2.92367885e-01j] [ -2.54316234e-01 +1.32167120e-01j 1.53493279e-01 -9.74559617e-02j] [ -6.35790584e-02 +3.30417801e-02j 9.28944421e-16 -1.13017234e-15j]] [[ 7.64291484e-01 8.18181818e-01] [ 5.73218613e-01 5.45454545e-01] [ 2.86609306e-01 1.81818182e-01] [ 7.16523266e-02 1.46295156e-15]] [[ 2.66231271 -0.56570106] [ 2.66231271 -0.56570106] [ 2.66231271 -0.56570106] [ 2.66231271 -0.88281419]] From rgold at lsw.uni-heidelberg.de Thu Aug 23 05:58:22 2007 From: rgold at lsw.uni-heidelberg.de (rgold at lsw.uni-heidelberg.de) Date: Thu, 23 Aug 2007 11:58:22 +0200 (CEST) Subject: [SciPy-user] Legendre Polynomials Message-ID: <57811.147.142.111.40.1187863102.squirrel@srv0.lsw.uni-heidelberg.de> Hi everybody, I am using: Python 2.4 and Scipy 5.0.1-3ubuntu2 I have run into problems trying to expand a function into Legendre-Polynomials: The calculated coefficients start to explode from l=33 on... The problem is that (for some reason) the evaluation of the Legendre Polynomials near x <= -1 starts to become wrong from l=33 on! 
See for example: >>> scipy.special.legendre(21)(-1) -0.99999988137941176 >>> scipy.special.legendre(31)(-1) -1.0034823356832285 >>> scipy.special.legendre(33)(-1) -1.2927111532386935 >>> scipy.special.legendre(35)(-1) -3.8408406352908409 >>> scipy.special.legendre(45)(-1) 129887.55400302292 Moreover the xrange of that error grows with l: >>> scipy.special.legendre(35)(-0.8) 0.15667302303740524 >>> scipy.special.legendre(45)(-0.8) 185.85665104241005 >>> scipy.special.legendre(45)(-0.5) 0.1229019272524782 The problem occures also for the even Polynomials but interestingly again for the negative arguments only as you can see here: >>> scipy.special.legendre(36)(-1) 11.445346667471807 >>> x=arange(-1,1,0.01) >>> plot(x,scipy.special.legendre(36)(x)) [] >>> show() A first quick (but not satisfying) idea was to use the values calculated for x>=0 and copy them (of course correcting for the minus sign if l is odd). Problems remaining: -> WHY is the evaluation wrong for x close to -1? -> Can I trust the routine at all? As a test I tried to expand cos(x) over Legendre-Polynomials because I know the result: Coefficient 1 should be 1 and the others as small as possible! It works quite fine as long as l<33! Please help! Roman From fredmfp at gmail.com Thu Aug 23 09:01:00 2007 From: fredmfp at gmail.com (fred) Date: Thu, 23 Aug 2007 15:01:00 +0200 Subject: [SciPy-user] [f2py] bool array... Message-ID: <46CD850C.5060404@gmail.com> Hi, I want to return a boolean array from my fortran code, but I get a int32 ? What am I doing wrong ? Any clue ? I could still convert it to bool, of course... Please see the CME attached that shows the trick. TIA. Cheers, -- http://scipy.org/FredericPetit -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: f2py_test.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: f2py_test.py Type: text/x-python Size: 297 bytes Desc: not available URL: From pepe_kawumi at yahoo.co.uk Thu Aug 23 10:19:41 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Thu, 23 Aug 2007 14:19:41 +0000 (GMT) Subject: [SciPy-user] 3 dimensional arrays Message-ID: <892013.56584.qm@web27712.mail.ukl.yahoo.com> Hi, I'm trying to create arrays in three-d, and then break them down into planes (e.g. h(:,:,1) below). I have attached a piece of Matlab code and how it runs. I have also attached what I have done in Python, but I am having problems verifying if it's doing exactly what I want. Just wondering if anyone can help me do this in Python. Thanks Perez Matlab code >> b= rand(3,3,2)*10 b(:,:,1) = 5.3960 6.7735 3.1040 6.2339 8.7683 7.7908 6.8589 0.1289 3.0730 b(:,:,2) = 9.2668 0.7067 5.1625 6.7872 0.1193 4.5820 0.7432 2.2715 7.0320 Python code >>> b = random.random((3,3,2))*10 >>> b array([[[ 2.90701573, 5.43543163], [ 3.23435142, 6.3642547 ], [ 4.63504825, 4.4083266 ]], [[ 6.21107005, 8.40185679], [ 1.25017666, 6.25796949], [ 8.69343579, 0.48324745]], [[ 3.19007467, 2.65135416], [ 5.8586314 , 6.72327567], [ 8.14780457, 2.39946201]]]) >>> m = b[:,:,1] >>> m array([[ 5.43543163, 6.3642547 , 4.4083266 ], [ 8.40185679, 6.25796949, 0.48324745], [ 2.65135416, 6.72327567, 2.39946201]]) ___________________________________________________________ Win a BlackBerry device from O2 with Yahoo!. Enter now. http://www.yahoo.co.uk/blackberry -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Thu Aug 23 10:36:16 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 23 Aug 2007 16:36:16 +0200 Subject: [SciPy-user] 3 dimensional arrays In-Reply-To: <892013.56584.qm@web27712.mail.ukl.yahoo.com> References: <892013.56584.qm@web27712.mail.ukl.yahoo.com> Message-ID: Hi, Don't forget that Python begins array indexing at 0, not at 1 as Matlab does.
Once this is known, b[:,:,1] gets the second element in your display, which is coherent. Matthieu 2007/8/23, Perez Kawumi : > > Hi , > I'm trying to create arrays in three-d. And then break them down into > planes(e.g. h(:,:,1) below). I have attached a piece of matlab code and > how it runs. I have also attached what I have done in python but having > problems verifying if it's doing exactly what I want. Just wondering if > anyone can help me do this in python. I > Thanks Perez > > *Matlab code > *>> b= rand(3,3,2)*10 > b(:,:,1) = > 5.3960 6.7735 3.1040 > 6.2339 8.7683 7.7908 > 6.8589 0.1289 3.0730 > > b(:,:,2) = > 9.2668 0.7067 5.1625 > 6.7872 0.1193 4.5820 > 0.7432 2.2715 7.0320 > > *Python code* > >>> b = random.random((3,3,2))*10 > >>> b > array([[[ 2.90701573, 5.43543163], > [ 3.23435142, 6.3642547 ], > [ 4.63504825, 4.4083266 ]], > [[ 6.21107005, 8.40185679], > [ 1.25017666, 6.25796949], > [ 8.69343579, 0.48324745]], > [[ 3.19007467, 2.65135416], > [ 5.8586314 , 6.72327567], > [ 8.14780457, 2.39946201]]]) > >>> m = b[:,:,1] > >>> m > array([[ 5.43543163, 6.3642547 , 4.4083266 ], > [ 8.40185679, 6.25796949, 0.48324745], > [ 2.65135416, 6.72327567, 2.39946201]]) > > ------------------------------ > To help you stay safe and secure online, we've developed the all new *Yahoo! > Security Centre* > . > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Thu Aug 23 10:59:53 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 23 Aug 2007 16:59:53 +0200 Subject: [SciPy-user] 3 dimensional arrays In-Reply-To: <892013.56584.qm@web27712.mail.ukl.yahoo.com> References: <892013.56584.qm@web27712.mail.ukl.yahoo.com> Message-ID: <46CDA0E9.8060202@gmx.net> Perez Kawumi wrote: > Hi , > I'm trying to create arrays in three-d. 
And then break them down into planes(e.g. h(:,:,1) below). I have attached a piece of matlab code and how it runs. I have also attached what I have done in python but having problems verifying if it's doing exactly what I want. Just wondering if anyone can help me do this in python. I > Thanks Perez > > Matlab code >>> b= rand(3,3,2)*10 > b(:,:,1) = > 5.3960 6.7735 3.1040 > 6.2339 8.7683 7.7908 > 6.8589 0.1289 3.0730 > > b(:,:,2) = > 9.2668 0.7067 5.1625 > 6.7872 0.1193 4.5820 > 0.7432 2.2715 7.0320 > > Python code >>>> b = random.random((3,3,2))*10 >>>> b > array([[[ 2.90701573, 5.43543163], > [ 3.23435142, 6.3642547 ], > [ 4.63504825, 4.4083266 ]], > [[ 6.21107005, 8.40185679], > [ 1.25017666, 6.25796949], > [ 8.69343579, 0.48324745]], > [[ 3.19007467, 2.65135416], > [ 5.8586314 , 6.72327567], > [ 8.14780457, 2.39946201]]]) >>>> m = b[:,:,1] >>>> m > array([[ 5.43543163, 6.3642547 , 4.4083266 ], > [ 8.40185679, 6.25796949, 0.48324745], > [ 2.65135416, 6.72327567, 2.39946201]]) > Have you checked these? 
http://www.scipy.org/Tentative_NumPy_Tutorial http://www.scipy.org/NumPy_for_Matlab_Users I think you want to do this: In [14]: a = randn(2,3,3)*10 In [15]: a Out[15]: array([[[ -9.2518157 , -4.64420333, -16.10449661], [ -0.29522799, 0.55422247, -17.0803633 ], [ 3.36247204, 11.06595503, -1.54147594]], [[ -1.20231878, 7.46063533, -5.28938933], [ 0.60121811, -8.04018663, 0.74826369], [ 6.78279031, 2.4766683 , 10.34622062]]]) In [16]: a[0,:,:] Out[16]: array([[ -9.2518157 , -4.64420333, -16.10449661], [ -0.29522799, 0.55422247, -17.0803633 ], [ 3.36247204, 11.06595503, -1.54147594]]) In [17]: a[0,::] Out[17]: array([[ -9.2518157 , -4.64420333, -16.10449661], [ -0.29522799, 0.55422247, -17.0803633 ], [ 3.36247204, 11.06595503, -1.54147594]]) In [18]: a[1,::] Out[18]: array([[ -1.20231878, 7.46063533, -5.28938933], [ 0.60121811, -8.04018663, 0.74826369], [ 6.78279031, 2.4766683 , 10.34622062]]) In [24]: a.shape Out[24]: (2, 3, 3) So here you have a 3d-array which consists of two 2d arrays of shape 3x3 ("planes" if you like). -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From gary.pajer at gmail.com Thu Aug 23 12:19:56 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 23 Aug 2007 12:19:56 -0400 Subject: [SciPy-user] skew Gaussian distribution Message-ID: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> Do any of the distributions in scipy.stats make a skew Gaussian? Ditto, a Gaussian with kurtosis? thanks, -gary -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From David.L.Goldsmith at noaa.gov Thu Aug 23 13:27:38 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Thu, 23 Aug 2007 10:27:38 -0700 Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> References: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> Message-ID: <46CDC38A.9010408@noaa.gov> Please educate me: what are "skew" and "kurtotic" Gaussians? (What I learned: Gaussians are - by definition - devoid of any higher-than-second moments; wouldn't (shouldn't) a "Gaussian" with higher-than-second moments be called something else entirely?) DG Gary Pajer wrote: > Do any of the distributions in scipy.stats make a skew Gaussian? > Ditto, a Gaussian with kurtosis? > > thanks, > -gary > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ERD/ORR/NOS/NOAA From David.L.Goldsmith at noaa.gov Thu Aug 23 13:33:35 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Thu, 23 Aug 2007 10:33:35 -0700 Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <46CDC38A.9010408@noaa.gov> References: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> <46CDC38A.9010408@noaa.gov> Message-ID: <46CDC4EF.3070209@noaa.gov> Sorry, please ignore, should have google-d first. :-[ DG David Goldsmith wrote: > Please educate me: what are "skew" and "kurtotic" Gaussians? (What I > learned: Gaussians are - by definition - devoid of any > higher-than-second moments; wouldn't (shouldn't) a "Gaussian" with > higher-than-second moments be called something else entirely?) > > DG > > Gary Pajer wrote: > >> Do any of the distributions in scipy.stats make a skew Gaussian? >> Ditto, a Gaussian with kurtosis?
>> >> thanks, >> -gary >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > -- ERD/ORR/NOS/NOAA From gary.pajer at gmail.com Thu Aug 23 13:57:58 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 23 Aug 2007 13:57:58 -0400 Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <46CDC38A.9010408@noaa.gov> References: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> <46CDC38A.9010408@noaa.gov> Message-ID: <88fe22a0708231057w7c93ee96qa8197c9a674ec11a@mail.gmail.com> On 8/23/07, David Goldsmith wrote: > > Please educate me: what are a "skew" and "kurtotid" Gaussians? (What I > learned: Gaussians are - by definition - devoid of any > higher-than-second moments; wouldn't (shouldn't) a "Gaussian" with > higher-than-second moments be called something else entirely?) Exactly. So I could rephrase my question: what is the name of the distribution that is similar to a normal distribution, but has a variable amount of skewness? Or is such a distribution a special case of one of the many distributions in scipy? I *think* the pdf of a "skew normal" should be proportional to exp(-a*x**2 - b*x**3) but that comes from a quick google search. Better yet: what I'm trying to do is simulate the spectrum of an optical emitter that I have. The spectrum is nearly normal, but it is not symmetric. I'm looking for a model distribution. I started by looking for something quick and dirty. If "quick" doesn't happen, I might actually have to figure out if there is already a theoretically-expected distribution for my case. But in the meantime, quick and dirty still sounds good. That's the real question. thanks for helping me express what I want :) -gary DG > > Gary Pajer wrote: > > Do any of the distributions in scipy.stats make a skew Gaussian? 
> > Ditto, a Gaussian with kurtosis? > > > > thanks, > > -gary > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- > ERD/ORR/NOS/NOAA > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From David.L.Goldsmith at noaa.gov Thu Aug 23 14:21:16 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Thu, 23 Aug 2007 11:21:16 -0700 Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <88fe22a0708231057w7c93ee96qa8197c9a674ec11a@mail.gmail.com> References: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> <46CDC38A.9010408@noaa.gov> <88fe22a0708231057w7c93ee96qa8197c9a674ec11a@mail.gmail.com> Message-ID: <46CDD01C.3030805@noaa.gov> Ah, then I may be able to help you in two ways. First, my Google search turned up: http://azzalini.stat.unipd.it/SN/ Visit the "very brief account" link thereon, but also note on the main page the passage titled "A pioneer" which hints at a possible "physical" basis for these distributions in nature; I didn't dig any deeper than that, but it seems worthwhile to assess whether the family of distributions he is discussing might "properly" represent your experimental population. Second, I used to work for an Astronomer/Applied Optics Engineer, and was thus exposed, albeit superficially, to physical derivations of empirical optical particle count distributions; I didn't learn enough to help you, but he probably could - if you're interested, I can forward you his email, or if your reticent about making a "cold" contact, I can forward your email to him to see if he thinks he can help. 
DG Gary Pajer wrote: > On 8/23/07, *David Goldsmith* > wrote: > > Please educate me: what are a "skew" and "kurtotid" > Gaussians? (What I > learned: Gaussians are - by definition - devoid of any > higher-than-second moments; wouldn't (shouldn't) a "Gaussian" with > higher-than-second moments be called something else entirely?) > > > Exactly. So I could rephrase my question: what is the name of the > distribution that is similar to a normal distribution, but has a > variable amount of skewness? Or is such a distribution a special > case of one of the many distributions in scipy? > > I *think* the pdf of a "skew normal" should be proportional to > exp(-a*x**2 - b*x**3) but that comes from a quick google search. > > Better yet: what I'm trying to do is simulate the spectrum of an > optical emitter that I have. The spectrum is nearly normal, but it is > not symmetric. I'm looking for a model distribution. I started by > looking for something quick and dirty. If "quick" doesn't happen, I > might actually have to figure out if there is already a > theoretically-expected distribution for my case. But in the meantime, > quick and dirty still sounds good. > > That's the real question. > > thanks for helping me express what I want :) > -gary > > > > DG > > Gary Pajer wrote: > > Do any of the distributions in scipy.stats make a skew Gaussian? > > Ditto, a Gaussian with kurtosis? 
> > > > thanks, > > -gary > > > ------------------------------------------------------------------------ > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- > ERD/ORR/NOS/NOAA > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ERD/ORR/NOS/NOAA From ahala2000 at yahoo.com Thu Aug 23 14:31:59 2007 From: ahala2000 at yahoo.com (elton wang) Date: Thu, 23 Aug 2007 11:31:59 -0700 (PDT) Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <46CDD01C.3030805@noaa.gov> Message-ID: <671295.98631.qm@web31408.mail.mud.yahoo.com> I use R (http://cran.r-project.org/) for distribution fitting. skewed-normal/skew-t or what ever distributions. very ease to use. I also have RDcom server/client installed(http://sunsite.univie.ac.at/rcom/). So R can be called in python as below (windows XP) (from website above) >>> from win32com.client import Dispatch >>> sc=Dispatch("StatConnectorSrv.StatConnector") >>> sc.Init("R") >>> print(sc.Evaluate("2+2")) 4.0 # COMMENT- R can do arithmetic and can tell python about it! >>> --- David Goldsmith wrote: > Ah, then I may be able to help you in two ways. 
> First, my Google search > turned up: > > http://azzalini.stat.unipd.it/SN/ > > Visit the "very brief account" link thereon, but > also note on the main > page the passage titled "A pioneer" which hints at a > possible "physical" > basis for these distributions in nature; I didn't > dig any deeper than > that, but it seems worthwhile to assess whether the > family of > distributions he is discussing might "properly" > represent your > experimental population. > > Second, I used to work for an Astronomer/Applied > Optics Engineer, and > was thus exposed, albeit superficially, to physical > derivations of > empirical optical particle count distributions; I > didn't learn enough to > help you, but he probably could - if you're > interested, I can forward > you his email, or if your reticent about making a > "cold" contact, I can > forward your email to him to see if he thinks he can > help. > > DG > > Gary Pajer wrote: > > On 8/23/07, *David Goldsmith* > > > wrote: > > > > Please educate me: what are a "skew" and > "kurtotid" > > Gaussians? (What I > > learned: Gaussians are - by definition - > devoid of any > > higher-than-second moments; wouldn't > (shouldn't) a "Gaussian" with > > higher-than-second moments be called something > else entirely?) > > > > > > Exactly. So I could rephrase my question: what > is the name of the > > distribution that is similar to a normal > distribution, but has a > > variable amount of skewness? Or is such a > distribution a special > > case of one of the many distributions in scipy? > > > > I *think* the pdf of a "skew normal" should be > proportional to > > exp(-a*x**2 - b*x**3) but that comes from a quick > google search. > > > > Better yet: what I'm trying to do is simulate the > spectrum of an > > optical emitter that I have. The spectrum is > nearly normal, but it is > > not symmetric. I'm looking for a model > distribution. I started by > > looking for something quick and dirty. 
If > "quick" doesn't happen, I > > might actually have to figure out if there is > already a > > theoretically-expected distribution for my case. > But in the meantime, > > quick and dirty still sounds good. > > > > That's the real question. > > > > thanks for helping me express what I want :) > > -gary > > > > > > > > DG > > > > Gary Pajer wrote: > > > Do any of the distributions in scipy.stats > make a skew Gaussian? > > > Ditto, a Gaussian with kurtosis? > > > > > > thanks, > > > -gary > > > > > > ------------------------------------------------------------------------ > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > > ERD/ORR/NOS/NOAA > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- > ERD/ORR/NOS/NOAA > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________ Be a better Globetrotter. Get better travel answers from someone who knows. Yahoo! Answers - Check it out. 
http://answers.yahoo.com/dir/?link=list&sid=396545469 From elcorto at gmx.net Thu Aug 23 15:08:44 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 23 Aug 2007 21:08:44 +0200 Subject: [SciPy-user] 3d arrays In-Reply-To: <656739.4634.qm@web27711.mail.ukl.yahoo.com> References: <656739.4634.qm@web27711.mail.ukl.yahoo.com> Message-ID: <46CDDB3C.6010707@gmx.net> Perez Kawumi wrote: > Hi , > Sorry got myself very confused before I sent out that email. When i solve this equation I get a 1*2 matrix as shown. > > b(1)*a(1,:) + b(2)*a(2,:) + b(3)*a(3,:) = [25 18] As mentioned earlier, it would be really, really nice if you'd learn some numpy basics and terminology (especially numpy array indexing) from the docs at scipy.org. This would make it a lot easier to communicate you problems. Questions like "please translate my -code to numpy" is in general not a very kind thing to do, even though many people on this list are very fluent with Matlab which increases your chances. My own Matlab days are some time ago but I guess I can help you out. > > I want to assign all x and y coordinates in the first plane to have the value 25 and all the x and y coordinates in the second plane to the value 18. Which as you notice are the first and second elements of the eqn above. > > Do you know how to do this in python? Sorry it was badly phrased ealier. Hope this makes more sense. > Thanks Perez > If I translate correctly, you have an array of shape (2,N,M), that is, two NxM arrays "stacked behind one another", like that (here: N = M = 3): In [75]: a = ones((2,3,3)) In [76]: a Out[76]: array([[[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]], [[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]]) In [81]: a.shape Out[81]: (2, 3, 3) Along the 1st axis (of length 2), you have two 2d-arrays, of which each spans across the 2nd and 3rd axis (your "x and y coordinates"). Then you want all elements of the first 3x3 array to be 25: In [82]: a[0,...] 
= 25 In [83]: a Out[83]: array([[[ 25., 25., 25.], [ 25., 25., 25.], [ 25., 25., 25.]], [[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]]) This is the same as a[0,:,:] = 25 and a[0,::] = 25. The 0 is an index into the first axis, and there the first element, which is the first 3x3 array. The ... or :,: say "take all of the remaining axes", as you know from Matlab. For the second 3x3 array: In [84]: a[1,...] = 18 > P.S. Can you create a 3 dimensional matrix say by just doing this > d = random.random(3,3,2)*10 > You mean without the ()'s arround (3,3,2)? No. The argument you give must be a tuple determining the shape of the desired array. Also: Make sure that you understand that array != matrix in numpy! numpy.matrix converts arrays into objects which behave much more like Matlab matrices: you can have row and column vectors, which are treated as matrices (shape (1,N), (N,1)) and much more. For example, see http://scipy.org/Tentative_NumPy_Tutorial#head-926a6c9b68b752eed8a330636c41829e6358b1d3 http://scipy.org/NumPy_for_Matlab_Users#head-e9a492daa18afcd86e84e07cd2824a9b1b651935 for matrix and http://scipy.org/Numpy_Example_List#newaxis for a way to turn a numpy 1d-array of shape (N,) into a 2d one with shape, say (N, 1) to also emulate Matlab. > Matlab > >>> a = ([2,3;4,6;5,1]) > a = > 2 3 > 4 6 > 5 1 >>> b = ([1,2,3]) > b = > 1 2 3 > >>> c(1,1,:)= b(1)*a(1,:) + b(2)*a(2,:) + b(3)*a(3,:) > c(:,:,1) = > 25 > > c(:,:,2) = > 18 -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. 
-- Douglas Adams From gary.pajer at gmail.com Thu Aug 23 15:25:46 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 23 Aug 2007 15:25:46 -0400 Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <671295.98631.qm@web31408.mail.mud.yahoo.com> References: <46CDD01C.3030805@noaa.gov> <671295.98631.qm@web31408.mail.mud.yahoo.com> Message-ID: <88fe22a0708231225g6ef0846pdff35b7514ef91d2@mail.gmail.com> On 8/23/07, elton wang wrote: > > I use R (http://cran.r-project.org/) for distribution > fitting. skewed-normal/skew-t or what ever > distributions. very ease to use. > I also have RDcom server/client > installed(http://sunsite.univie.ac.at/rcom/). So R can > be called in python as below (windows XP) (from > website above) > > >>> from win32com.client import Dispatch > >>> sc=Dispatch("StatConnectorSrv.StatConnector") > >>> sc.Init("R") > >>> print(sc.Evaluate("2+2")) > 4.0 # COMMENT- R can do arithmetic and can tell python > about it! > >>> That's a pretty big hammer, but it might be the easiest to use. I tried writing my own and ran into some unexpected difficulties. thanks, g -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.pajer at gmail.com Thu Aug 23 15:53:18 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 23 Aug 2007 15:53:18 -0400 Subject: [SciPy-user] skew Gaussian distribution In-Reply-To: <46CDD01C.3030805@noaa.gov> References: <88fe22a0708230919i754cac8bode5198e4027ba5c4@mail.gmail.com> <46CDC38A.9010408@noaa.gov> <88fe22a0708231057w7c93ee96qa8197c9a674ec11a@mail.gmail.com> <46CDD01C.3030805@noaa.gov> Message-ID: <88fe22a0708231253p3625e490yd3264de1db818468@mail.gmail.com> On 8/23/07, David Goldsmith wrote: > > Ah, then I may be able to help you in two ways. 
First, my Google search > turned up: > > http://azzalini.stat.unipd.it/SN/ > > Visit the "very brief account" link thereon, Indeed, on the "very brief account page" a skewed normal is defined as norm.pdf(x) * norm.cdf(a*x) which seems to work quite nicely. thanks but also note on the main > page the passage titled "A pioneer" which hints at a possible "physical" > basis for these distributions in nature; I didn't dig any deeper than > that, but it seems worthwhile to assess whether the family of > distributions he is discussing might "properly" represent your > experimental population. > > Second, I used to work for an Astronomer/Applied Optics Engineer, and > was thus exposed, albeit superficially, to physical derivations of > empirical optical particle count distributions; I didn't learn enough to > help you, but he probably could - if you're interested, I can forward > you his email, or if your reticent about making a "cold" contact, I can > forward your email to him to see if he thinks he can help. > > DG > > Gary Pajer wrote: > > On 8/23/07, *David Goldsmith* > > wrote: > > > > Please educate me: what are a "skew" and "kurtotid" > > Gaussians? (What I > > learned: Gaussians are - by definition - devoid of any > > higher-than-second moments; wouldn't (shouldn't) a "Gaussian" with > > higher-than-second moments be called something else entirely?) > > > > > > Exactly. So I could rephrase my question: what is the name of the > > distribution that is similar to a normal distribution, but has a > > variable amount of skewness? Or is such a distribution a special > > case of one of the many distributions in scipy? > > > > I *think* the pdf of a "skew normal" should be proportional to > > exp(-a*x**2 - b*x**3) but that comes from a quick google search. > > > > Better yet: what I'm trying to do is simulate the spectrum of an > > optical emitter that I have. The spectrum is nearly normal, but it is > > not symmetric. I'm looking for a model distribution. 
I started by > > looking for something quick and dirty. If "quick" doesn't happen, I > > might actually have to figure out if there is already a > > theoretically-expected distribution for my case. But in the meantime, > > quick and dirty still sounds good. > > > > That's the real question. > > > > thanks for helping me express what I want :) > > -gary > > > > > > > > DG > > > > Gary Pajer wrote: > > > Do any of the distributions in scipy.stats make a skew Gaussian? > > > Ditto, a Gaussian with kurtosis? > > > > > > thanks, > > > -gary > > > > > > ------------------------------------------------------------------------ > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > > ERD/ORR/NOS/NOAA > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- > ERD/ORR/NOS/NOAA > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Aug 23 17:40:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 23 Aug 2007 16:40:23 -0500 Subject: [SciPy-user] [f2py] bool array... In-Reply-To: <46CD850C.5060404@gmail.com> References: <46CD850C.5060404@gmail.com> Message-ID: <46CDFEC7.7030705@gmail.com> fred wrote: > Hi, > > I want to return a boolean array from my fortran code, > but I get a int32 ? > > What am I doing wrong ? > > Any clue ? 
Fortran LOGICAL arrays are usually stored as integers, usually of the same width as Fortran INTEGER. numpy bool arrays are stored as bytes. The correct mapping between Fortran LOGICALs and numpy arrays is a numpy int array of some width.

> I could still convert it to bool, of course...

Yup.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From fredmfp at gmail.com Thu Aug 23 18:27:48 2007
From: fredmfp at gmail.com (fred)
Date: Fri, 24 Aug 2007 00:27:48 +0200
Subject: [SciPy-user] [f2py] bool array...
In-Reply-To: <46CDFEC7.7030705@gmail.com>
References: <46CD850C.5060404@gmail.com> <46CDFEC7.7030705@gmail.com>
Message-ID: <46CE09E4.5080108@gmail.com>

Robert Kern a écrit :
> Fortran LOGICAL arrays are usually stored as integers, usually of the same width
> as Fortran INTEGER. numpy bool arrays are stored as bytes. The correct mapping
> between Fortran LOGICALs and numpy arrays is a numpy int array of some width.

Ok, I did not know that (I was thinking that booleans were stored as bytes).

Thanks for the clarification.

-- 
http://scipy.org/FredericPetit

From fredmfp at gmail.com Fri Aug 24 03:04:25 2007
From: fredmfp at gmail.com (fred)
Date: Fri, 24 Aug 2007 09:04:25 +0200
Subject: [SciPy-user] mapping bool array...
In-Reply-To: <46CE09E4.5080108@gmail.com>
References: <46CD850C.5060404@gmail.com> <46CDFEC7.7030705@gmail.com> <46CE09E4.5080108@gmail.com>
Message-ID: <46CE82F9.3090609@gmail.com>

Hi,

I continue my thread...

My bool array is a mask array: it contains false & true. The "shape" which contains true inside this mask is an ellipse. I want to create a float array with the same dimensions as the mask: inside the ellipsoid, I want to put scalars from another array whose dimensions fit those of the ellipse, and outside, I want to put 0.

How can I do this in a numpy/scipy way? (Read: without loops, if possible; if not, it's straightforward.)

TIA.

Cheers,

-- 
http://scipy.org/FredericPetit

From gael.varoquaux at normalesup.org Fri Aug 24 03:44:13 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 24 Aug 2007 09:44:13 +0200
Subject: [SciPy-user] mapping bool array...
In-Reply-To: <46CE82F9.3090609@gmail.com>
References: <46CD850C.5060404@gmail.com> <46CDFEC7.7030705@gmail.com> <46CE09E4.5080108@gmail.com> <46CE82F9.3090609@gmail.com>
Message-ID: <20070824074413.GA11989@clipper.ens.fr>

On Fri, Aug 24, 2007 at 09:04:25AM +0200, fred wrote:
> My bool array is a mask array: it contains false & true.
> The "shape" which contains true inside this mask is an ellipse.
> I want to create a float array with the same dimensions
> as the mask: inside the ellipsoid, I want to put scalars from another array
> whose dimensions fit those of the ellipse,
> and outside, I want to put 0.
> How can I do this in a numpy/scipy way?

Something like (not tested):

a[b == 0] = c[b == 0]

Get the idea ?

Gaël

From openopt at ukr.net Fri Aug 24 04:07:08 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 24 Aug 2007 11:07:08 +0300
Subject: [SciPy-user] best way of matrix G => L, D: G = L D L^T
Message-ID: <46CE91AC.2020709@ukr.net>

Hi all,

I need the best way of getting, from a matrix G, the matrices L and D such that G = L D L^T (L a unit lower triangular matrix, D diagonal). (IIRC it's related to LU decomposition.) It's for implementing the Gill-Murray stable Newton's method. Then I need to solve the equation

L^T d = e[t]

for d, where e[t] is a unit vector with the t-th component of e[t] being 1.

What's the best way of doing this via numpy/scipy? Does using something like colamd before LU make any sense here?

Regards,
D.
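[For reference, a minimal sketch of one way to get the requested L D L^T factors from a plain Cholesky factorization. It assumes G is symmetric positive definite; `solve_triangular` comes from a newer scipy than the 0.5.x releases discussed in this thread, and the matrix G below is just example data.]

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Example symmetric positive definite matrix (hypothetical data).
G = np.array([[4.0, 2.0, 1.0],
              [2.0, 4.0, 1.0],
              [1.0, 1.0, 4.0]])

# Plain Cholesky: G = C C^T with C lower triangular.
C = cholesky(G, lower=True)

# C = L sqrt(D), so D holds the squared diagonal entries of C,
# and dividing column j of C by C[j, j] gives the unit-diagonal L.
d = np.diag(C) ** 2
L = C / np.diag(C)

# Sanity check: G == L D L^T  (L * d scales column j of L by d[j]).
assert np.allclose(G, np.dot(L * d, L.T))

# Solve L^T x = e[t] for the t-th unit vector by back-substitution.
t = 1
e = np.zeros(G.shape[0])
e[t] = 1.0
x = solve_triangular(L.T, e, lower=False)
assert np.allclose(np.dot(L.T, x), e)
```

On scipy 0.5.x, the last step can be replaced by a short hand-written back-substitution loop over the rows of L.T.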
From matthieu.brucher at gmail.com Fri Aug 24 04:42:39 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 24 Aug 2007 10:42:39 +0200 Subject: [SciPy-user] New scipy release before 8/20? In-Reply-To: <20070811002644.GG17460@mentat.za.net> References: <46B9A22C.7080308@unibo.it> <46B9D84B.5040004@unibo.it> <20070808153939.GB29100@mentat.za.net> <46BA86D8.90706@ar.media.kyoto-u.ac.jp> <20070809091514.GH9452@mentat.za.net> <46BBCE5D.9050408@ar.media.kyoto-u.ac.jp> <20070811002644.GG17460@mentat.za.net> Message-ID: > > First, the buildbot currently only supports numpy. Compiling multiple > projects (residing in different repositories) with buildbot isn't > straightforward, but with a bit of hacking it should be possible (the > source is Python, how hard can it be ;). > On the buildbot ML, this question was recently raised and a patch is available and it seems to work for multiple SVN repositories. Perhaps it is a solution ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Fri Aug 24 05:03:09 2007 From: fredmfp at gmail.com (fred) Date: Fri, 24 Aug 2007 11:03:09 +0200 Subject: [SciPy-user] mapping bool array... In-Reply-To: <20070824074413.GA11989@clipper.ens.fr> References: <46CD850C.5060404@gmail.com> <46CDFEC7.7030705@gmail.com> <46CE09E4.5080108@gmail.com> <46CE82F9.3090609@gmail.com> <20070824074413.GA11989@clipper.ens.fr> Message-ID: <46CE9ECD.4060208@gmail.com> Gael Varoquaux a ?crit : > Something like (not tested): > > a[b == 0] = c[b == 0] > > Get the idea ? > Quite ;-) This does the trick: c[a==True] = b.ravel() Thanks for the idea ! Cheers, -- http://scipy.org/FredericPetit From unpingco at osc.edu Fri Aug 24 08:19:28 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Fri, 24 Aug 2007 08:19:28 -0400 Subject: [SciPy-user] linalg.qr factorization so slow... Message-ID: <46CE6A60.AA84.0083.0@osc.edu> I have a 1000 x 1000 matrix. 
To apply a QR factorization in Matlab takes less than three seconds. however, when I do scipy.linalg.qr on the same matrix, it takes 25+ minutes. Why is this? I thought they were running the same LAPACK/ATLAS/BLAS libraries underneath. I am running Enthought's install on winXP; 2 GB RAM. Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 www.osc.edu/~unpingco -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Fri Aug 24 10:17:04 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 24 Aug 2007 17:17:04 +0300 Subject: [SciPy-user] modified Cholesky factorization Message-ID: <46CEE860.8010708@ukr.net> Hi all, I need a way of obtaining modified Cholesky factorization: G + E = L D L^T in the algorithm that I'm trying to implement there is a reference to other algorithm, that obtains L and D according to Gill-Murray method. However, I think using simple Cholesky factorization would be enough (if scipy hasn't Gill-Murray method implemented). Could anyone inform me what's the bast way to obtain the matrices L and D in scipy? Regards, D. From stefan at sun.ac.za Fri Aug 24 12:58:57 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 24 Aug 2007 18:58:57 +0200 Subject: [SciPy-user] linalg.qr factorization so slow... In-Reply-To: <46CE6A60.AA84.0083.0@osc.edu> References: <46CE6A60.AA84.0083.0@osc.edu> Message-ID: <20070824165857.GB12402@mentat.za.net> On Fri, Aug 24, 2007 at 08:19:28AM -0400, Jose Unpingco wrote: > I have a 1000 x 1000 matrix. To apply a QR factorization in Matlab takes less > than three seconds. however, when I do scipy.linalg.qr on the same matrix, it > takes 25+ minutes. > > Why is this? I thought they were running the same LAPACK/ATLAS/BLAS libraries > underneath. It is because Enthought's install is not linked to your ATLAS. You may have to compile SciPy yourself. 
Cheers St?fan From travis at enthought.com Fri Aug 24 16:43:24 2007 From: travis at enthought.com (Travis Vaught) Date: Fri, 24 Aug 2007 15:43:24 -0500 Subject: [SciPy-user] linalg.qr factorization so slow... In-Reply-To: <20070824165857.GB12402@mentat.za.net> References: <46CE6A60.AA84.0083.0@osc.edu> <20070824165857.GB12402@mentat.za.net> Message-ID: <9B2F8459-771A-4C63-A0F1-125B9BAB1D20@enthought.com> On Aug 24, 2007, at 11:58 AM, Stefan van der Walt wrote: > On Fri, Aug 24, 2007 at 08:19:28AM -0400, Jose Unpingco wrote: >> I have a 1000 x 1000 matrix. To apply a QR factorization in >> Matlab takes less >> than three seconds. however, when I do scipy.linalg.qr on the same >> matrix, it >> takes 25+ minutes. >> >> Why is this? I thought they were running the same LAPACK/ATLAS/ >> BLAS libraries >> underneath. > > It is because Enthought's install is not linked to your ATLAS. You > may have to compile SciPy yourself. > > Cheers > St?fan I think Stefan's on to something--although ATLAS should be distributed with the Enthought Edition, I believe. On Windows XP: In [14]: import time In [15]: def time_qr(size=1000): ....: """ time qr for array of shape (size,size) """ ....: from numpy import random ....: from scipy.linalg import qr ....: a = random.random((size,size)) ....: t1 = time.time() ....: b, c = qr(a, mode='qr') ....: print "size: %s, time: %s" % (size, time.time()-t1) ....: In [16]: time_qr() size: 1000, time: 1.83299994469 In [17]: time_qr(2000) size: 2000, time: 12.0770001411 In [18]: time_qr(100) size: 100, time: 0.00999999046326 In [19]: time_qr(500) size: 500, time: 0.380999803543 In [20]: time_qr(1000) size: 1000, time: 1.74199986458 In [21]: time_qr(1500) size: 1500, time: 5.24699997902 In [22]: import scipy In [23]: scipy.__version__ Out[23]: '0.5.3.dev3173' In [24]: import numpy In [25]: numpy.__version__ Out[25]: '1.0.3' ...snip. some looking for the right config call... just try this one... 
In [27]: numpy.__config__.show() atlas_threads_info: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['d:\\test_atlas'] language = f77 blas_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['d:\\test_atlas'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c atlas_blas_threads_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['d:\\test_atlas'] language = c lapack_opt_info: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['d:\\test_atlas'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = f77 lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE In [28]: From dominique.orban at gmail.com Fri Aug 24 22:35:23 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 24 Aug 2007 22:35:23 -0400 Subject: [SciPy-user] best way of matrix G => L, D: G = L D L^T In-Reply-To: <46CE91AC.2020709@ukr.net> References: <46CE91AC.2020709@ukr.net> Message-ID: <8793ae6e0708241935w4470d123h7e488b7fa8ffb494@mail.gmail.com> On 8/24/07, dmitrey wrote: > I need best way of getting from matrix G matrices L, D: G = L D L^T (L - > unit lower triangular matrix, D - diagonal) > (IIRC it's related to LU-decomposition) The decomposition that you mention is called a Cholesky factorization. It is only applicable to *symmetric* and *positive definite* matrices. Assuming you are not using sparse matrices, you can obtain L and D from numpy.linalg.cholesky: In [4]: A = numpy.matrix( [ [4, 2, 1], [2, 4, 1], [1, 1, 4] ], dtype=float ) In [5]: A Out[5]: matrix([[ 4., 2., 1.], [ 2., 4., 1.], [ 1., 1., 4.]]) In [6]: numpy.linalg.cholesky(A) Out[6]: matrix([[ 2. , 0. , 0. ], [ 1. , 1.73205081, 0. ], [ 0.5 , 0.28867513, 1.91485422]]) Here you get L and D combined (there is no need for the function to return the transpose of L... it is trivial to write a function that solves triangular systems with L and its transpose). 
This decomposition is usually denoted L L^t. If the decomposition that you want is L D L^t, the L you are looking for is the above matrix with its diagonal elements replaced by 1, and D is the diagonal matrix whose diagonal elements are the squares of the diagonal elements in the above matrix.

In SciPy, there is also scipy.linalg.cholesky, scipy.linalg.cho_factor and scipy.linalg.cho_solve. I don't know if those work with sparse matrices though.

> (it's for implementing Gill-Murray Stable Newton's Method)
> then I need to solve the equation
>
> L^T d = e[t]
> for d, where e[t] is a unit vector with the t-th component of e[t] being 1
>
> What's the best way of doing it via numpy/scipy?
>
> Has using something like colamd before LU any sense here?

Approximate minimum degree ordering is really relevant for large and sparse matrices. If that's what you're working with, you'll need to have a sparse Cholesky hooked up. I don't know if there is one in SciPy. For instance, Cholmod (from the same author as Umfpack) is a great code. There are also a couple of simpler ones on the same author's webpage (Tim Davis at the University of Florida, Gainesville) called "ldl" and another one in his "CSparse" library. Typically, a sparse factorization code will do some ordering beforehand. However, ldl doesn't (because it is meant to be as concise as possible).

Dominique

From amcmorl at gmail.com Sat Aug 25 07:37:46 2007
From: amcmorl at gmail.com (Angus McMorland)
Date: Sat, 25 Aug 2007 23:37:46 +1200
Subject: [SciPy-user] How to free memory allocated to array in weave.inline?
Message-ID:

Hi all,

I'm trying to debug a memory leak in my code, which I suspect comes from some weave.inline code I have written. Unfortunately, I don't know anything about debugging memory leaks, and only marginally more about writing in C, and would appreciate some advice from the list. I needed to switch to C to speed up my code - and that bit worked, with a 10x improvement over my numpy implementation.
The code generates a 3-D array, and loops through the array assigning
values based on the co-ordinate position. In the final program this array
generation has to be done many (ideally 1000s) of times, but when I try
this my memory consumption increases. I've narrowed down the problem to the
following code snippet:

from scipy import weave

def build_mem():
    '''an implementation of spine_model_new using weave.inline'''
    code = '''
    npy_intp dims[3] = {200,200,200};
    PyArrayObject* ar = (PyArrayObject *)PyArray_ZEROS(3, &dims[0], NPY_BOOL, 0);
    return_val = PyArray_Return(ar);
    '''
    return weave.inline( code )

if __name__ == "__main__":
    sp = build_mem()

def loop_test():
    for i in xrange(150):
        sp = build_mem()

Running loop_test uses up some 1.5 GB of memory, which is consistent with
the array size * 150 iterations of the loop, and this is confirmed by
valgrind's memcheck (if I understand the output correctly), which reports
the following when run on the above module:

==14571== 8,000,000 bytes in 1 blocks are possibly lost in loss record 45 of 45
==14571==    at 0x40244B0: malloc (vg_replace_malloc.c:149)
==14571==    by 0x48FFCA0: (within /usr/lib/python2.4/site-packages/numpy/core/multiarray.so)
==14571==    by 0x4901EF3: (within /usr/lib/python2.4/site-packages/numpy/core/multiarray.so)
==14571==    by 0x606B9A5: compiled_func(_object*, _object*) (sc_333f8aabfa93fdf594860ccb438bae6d0.cpp:661)
==14571==    by 0x805A506: PyObject_Call (in /usr/bin/python2.4)
==14571==    by 0x80B458B: PyEval_CallObjectWithKeywords (in /usr/bin/python2.4)
==14571==    by 0x80B0523: (within /usr/bin/python2.4)
==14571==    by 0x80B9C49: PyEval_EvalFrame (in /usr/bin/python2.4)
==14571==    by 0x80B99C3: PyEval_EvalFrame (in /usr/bin/python2.4)
==14571==    by 0x80BB0E4: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==14571==
==14571== LEAK SUMMARY:
==14571==    definitely lost: 40 bytes in 1 blocks.
==14571==    indirectly lost: 24 bytes in 1 blocks.
==14571==      possibly lost: 8,028,594 bytes in 37 blocks.
==14571==    still reachable: 3,795,173 bytes in 2,906 blocks.
==14571==         suppressed: 0 bytes in 0 blocks.

So, it looks pretty well like I'm not freeing the array correctly at the C
level. I've looked through the numpy book and the wiki
(http://www.scipy.org/Cookbook/C_Extensions/NumPy_arrays) but couldn't
determine any solution from those. What do I need to do to free the array
memory from each iteration of the loop?

Thanks,

Angus.
--
AJC McMorland, PhD Student
Physiology, University of Auckland

From gael.varoquaux at normalesup.org  Sat Aug 25 07:45:20 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 25 Aug 2007 13:45:20 +0200
Subject: [SciPy-user] How to free memory allocated to array in weave.inline?
In-Reply-To:
References:
Message-ID: <20070825114520.GB22291@clipper.ens.fr>

On Sat, Aug 25, 2007 at 11:37:46PM +1200, Angus McMorland wrote:
> The code generates a 3-D array, and loops through the array assigning
> values based on the co-ordinate position. In the final program this
> array generation has to be done many (ideally 1000s) of times, but
> when I try this my memory consumption increases.

My rule of thumb when I create arrays in C code, is not to create arrays in
C code. Your problem is most probably that your memory does not get freed,
as it is not registered in Python's garbage collector. You could learn how
to do this (I don't know how). The other option is to use numpy.empty to
create an empty array, and pass it to your code that will populate it. That
way you don't have to deal with all this. It also makes your code more
readable: no PyArrayObject* and co. in it.
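The allocate-in-Python pattern can be demonstrated without any C at all.
In this sketch the in-place fill is a pure-NumPy stand-in for what the
inline kernel would do; the fill rule itself is an invented placeholder,
not the poster's actual computation:

```python
import numpy as np

def fill_kernel(ar):
    """Stand-in for the compiled kernel: writes values based on the
    coordinate position, modifying ar in place. It never allocates,
    so ownership (and deallocation) stays with Python's refcounting."""
    ix, iy, iz = np.indices(ar.shape)
    ar[...] = (ix + iy + iz) % 2      # placeholder coordinate-based rule

sp = np.zeros((50, 50, 50), dtype=np.uint8)   # Python owns this buffer
for _ in range(150):                           # reuse one buffer per loop
    fill_kernel(sp)                            # no per-iteration leak
```

Because the buffer is created and owned by Python, nothing in the kernel
ever calls an allocator, so repeated iterations cannot leak.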
My 2 cents, Ga?l From aisaac at american.edu Sat Aug 25 16:12:38 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 25 Aug 2007 16:12:38 -0400 Subject: [SciPy-user] modified Cholesky factorization In-Reply-To: <46CEE860.8010708@ukr.net> References: <46CEE860.8010708@ukr.net> Message-ID: On Fri, 24 Aug 2007, dmitrey apparently wrote: > I need a way of obtaining modified Cholesky factorization: > G + E = L D L^T > in the algorithm that I'm trying to implement there is a reference to > other algorithm, that obtains L and D according to Gill-Murray method. http://artsci.wustl.edu/~jgill/papers/gmchol.g http://www.cs.colorado.edu/~bobby/papers/chol2.ps hth, Alan Isaac From amcmorl at gmail.com Sat Aug 25 17:54:21 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Sun, 26 Aug 2007 09:54:21 +1200 Subject: [SciPy-user] Passing boolean array to weave.inline Message-ID: Hi all, I'm having trouble passing a boolean array into a weave.inline block. This works- def build(): ar = n.empty((200,200,200)) return weave.inline( '', ['ar'] ) but this doesn't- def build(): ar = n.empty((200,200,200), dtype=bool) return weave.inline( '', ['ar'] ) failing with KeyError: '?', and I understand '?' to be the character code for a boolean array. Is this deliberate, or perhaps a missed type translation case in weave? Thanks, A. -- AJC McMorland, PhD Student Physiology, University of Auckland From amcmorl at gmail.com Sat Aug 25 19:25:37 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Sun, 26 Aug 2007 11:25:37 +1200 Subject: [SciPy-user] How to free memory allocated to array in weave.inline? In-Reply-To: <20070825114520.GB22291@clipper.ens.fr> References: <20070825114520.GB22291@clipper.ens.fr> Message-ID: On 25/08/07, Gael Varoquaux wrote: > On Sat, Aug 25, 2007 at 11:37:46PM +1200, Angus McMorland wrote: > > The code generates a 3-D array, and loops through the array assigning > > values based on the co-ordinate position. 
In the final program this > > array generation has to be done many (ideally 1000s) of times, but > > when I try this my memory consumption increases. > > My rule of thumb when I create arrays in C code, is not to create arrays > in C code. Your problem is most probably that your memory does not get > freed, as it it not registered in Python's garbage collector. You could > learn how to do this (I don't know how). The other option is to use > numpy.empty to create an empty array, and pass it to your code that will > populate it. That way you don't have to deal with all this. It alos makes > your cod emore readable: no PyArrayObject* and co. in it. Thanks Ga?l, that sounds like a very sensible suggestion, but I'm having trouble getting it to work. The following causes a segfault. Am I doing something stupid? If I understand the manual correctly, the weave code should alter the array in-place, so I don't need to use return_val for that one. Is that correct? def build(): xsz, ysz, zsz = (200,200,200) ar = n.empty((xsz, ysz, zsz), dtype=n.uint8) code = ''' int i,j,k; npy_ubyte *curpos; for (i = 0; i References: <20070825114520.GB22291@clipper.ens.fr> Message-ID: <20070826123410.GA23042@clipper.ens.fr> On Sun, Aug 26, 2007 at 11:25:37AM +1200, Angus McMorland wrote: > Thanks Ga?l, that sounds like a very sensible suggestion, but I'm > having trouble getting it to work. The following causes a segfault. Am > I doing something stupid? > If I understand the manual correctly, the weave code should alter the > array in-place, so I don't need to use return_val for that one. Is > that correct? Hell, I don't use inline much and I have never read the manual (Uh Oh, may I should keep that secret). Here is code that works for me: """ from scipy import weave from numpy import * def my_array(shape): nx, ny = shape # ar = empty(shape) ar = ones(shape) code = """ for (int i=1; i Hello Everyone, I have been using Scipy/NumPy/Matplotlib for a little bit now. 
I would like to move a few of my Matlab and Igor routines over to Scipy/Numpy. But before I attempt to do so, I wanted to try to better understand the relationship of Scipy and Numpy and the numerical base libraries used for Scipy/Numpy. In reading the documentation I found around scipy.org and internet, it appears that the Scipy/Numpy libraries use the Blas/Lapack/Atlas libraries. Also, that Scipy/Numpy will either use the native Blas/Lapack/Atlas library on the system or a *_lite version included with Scipy/Numpy. If this is correct, does Scipy and/or Numpy provide a complete interface to the Blas and Lapack libraries for either scenario? Also, if I understand it correctly, Scipy -depends- on Numpy. But I noticed that both Scipy and Numpy have some overlap in functionality/routines (i.e. fft). Is this intentional? Which is better to use? Is there a list that identifies duplicate functions? Any information or pointers to sites would be greatly appreciated. Thanks. -anthony. From fperez.net at gmail.com Sun Aug 26 13:14:11 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 26 Aug 2007 11:14:11 -0600 Subject: [SciPy-user] How to free memory allocated to array in weave.inline? In-Reply-To: <20070826123410.GA23042@clipper.ens.fr> References: <20070825114520.GB22291@clipper.ens.fr> <20070826123410.GA23042@clipper.ens.fr> Message-ID: On 8/26/07, Gael Varoquaux wrote: > What really bothers me is that it does not work if I use empty instead of > ones. It must be due to a type problem, probably the blitz converters not > guessing properly the type of the input array, but I'd really appreciate > some better explaination of that, and some info on the right way to use > empty with inline. It would probably be a good idea to open this as a ticket against weave. I've used this pattern a lot but in my case I'd always needed zeroed-out memory, so I'd used zeros(). 
But I had always assumed empty() would work just as well, and if it
doesn't, it should probably be considered a weave bug. Have you had a look
at the auto-generated code for the two cases? A quick diff might be
revealing...

Cheers,

f

From gael.varoquaux at normalesup.org  Sun Aug 26 13:26:11 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 26 Aug 2007 19:26:11 +0200
Subject: [SciPy-user] How to free memory allocated to array in weave.inline?
In-Reply-To:
References: <20070825114520.GB22291@clipper.ens.fr>
	<20070826123410.GA23042@clipper.ens.fr>
Message-ID: <20070826172611.GC23042@clipper.ens.fr>

On Sun, Aug 26, 2007 at 11:14:11AM -0600, Fernando Perez wrote:
> It would probably be a good idea to open this as a ticket against
> weave.

I'll do so if people confirm this is indeed a bug, and not a case of my
brain-deadness (remember, I am an experimental physicist, I deal with a
soldering iron, not computers :->).

> Have you had a look at the auto-generated
> code for the two cases? A quick diff might be revealing...

It does not get recompiled, so I guess the autogenerated code is the same.

I don't understand this problem, but this is really extending far beyond
what I have ever actually used.

Gaël

From william.ratcliff at gmail.com  Sun Aug 26 13:47:05 2007
From: william.ratcliff at gmail.com (william ratcliff)
Date: Sun, 26 Aug 2007 13:47:05 -0400
Subject: [SciPy-user] How to free memory allocated to array in weave.inline?
In-Reply-To: <20070826172611.GC23042@clipper.ens.fr>
References: <20070825114520.GB22291@clipper.ens.fr>
	<20070826123410.GA23042@clipper.ens.fr>
	<20070826172611.GC23042@clipper.ens.fr>
Message-ID: <827183970708261047r1530649ci46709ee961c1d172@mail.gmail.com>

I must confess that I use windows rather than linux--but has anyone tried
using programs like valgrind (Linux) or Rational Purify (Windows) to track
down the memory leaks? I haven't used either.
William On 8/26/07, Gael Varoquaux wrote: > > On Sun, Aug 26, 2007 at 11:14:11AM -0600, Fernando Perez wrote: > > It would probably be a good idea to open this as a ticket against > > weave. > > I'll do if people confirm this is indeed a bug, and not a case of my > brain-deadness (remember, I am an experimental physicist, I deal with a > soldering iron, not computers :->). > > > Have you had a look at the auto-generated > > code for the two cases? A quick diff might be revealing... > > It does not get recompiled, so I guess the autogenerated code is the > same. > > I don't understand this problem, but this is really extending far beyong > what I have ever actually used. > > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sun Aug 26 15:50:20 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 26 Aug 2007 13:50:20 -0600 Subject: [SciPy-user] How to free memory allocated to array in weave.inline? In-Reply-To: <20070826123410.GA23042@clipper.ens.fr> References: <20070825114520.GB22291@clipper.ens.fr> <20070826123410.GA23042@clipper.ens.fr> Message-ID: On 8/26/07, Gael Varoquaux wrote: > What really bothers me is that it does not work if I use empty instead of > ones. It must be due to a type problem, probably the blitz converters not > guessing properly the type of the input array, but I'd really appreciate > some better explaination of that, and some info on the right way to use > empty with inline. Well, since I opened my mouth, I figured I better finish. With both current SVN and Jarrod's release, I can't see any problems. It works fine for me with both empty() and ones(). Note that blitz arrays are 0-offset, that may have confused you. Here's my code: n [4]: !cat sciweave.py """Simple weave test. 
""" from scipy import weave from numpy import empty,ones def my_array(shape,alloc): nx, ny = shape ar = alloc(shape) code = """ for (int i=0; i References: <20070825114520.GB22291@clipper.ens.fr> <20070826123410.GA23042@clipper.ens.fr> Message-ID: <20070826200130.GD23042@clipper.ens.fr> On Sun, Aug 26, 2007 at 01:50:20PM -0600, Fernando Perez wrote: > Well, since I opened my mouth, I figured I better finish. Yup, thanks ! > It works fine for me with both empty() and ones(). Note that blitz > arrays are 0-offset, Yes, this was where the problem was lying. I wasn't checking my ouput well enough and was looking only at the first lines. I don't know why I put ones in there, I remeber thinking that it was weird. It is indeed working as it should be. Weave rocks. Thanks for getting things straight, I told you I was a clueless user of weave. Ga?l From robert.vergnes at yahoo.fr Mon Aug 27 03:32:12 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Mon, 27 Aug 2007 09:32:12 +0200 (CEST) Subject: [SciPy-user] How to free unused memory by Python Message-ID: <274793.23942.qm@web27409.mail.ukl.yahoo.com> Hello, This is not a scipy issue - albeit I do use scipy for my app- and that array() creation seems to crash once I reached my upper Physical Memory limit. The question is general, How to free unused memory by Python: Te following small test demonstrates the issue: Before starting the test my UsedPhysicalMemory(PF): 555Mb >>>tf=range(0,10000000) PF: 710Mb ( so 155Mb for my List) >>> tf=[0,1,2,3,4,5] PF: 672Mb (Why? Why the remaining 117Mb is not freed?) >>> del tf PF: 672Mb ( Nothing happens) So changing the list contents and/or deleting the list changes nothing... This is a problem as I have several applications/threads competing for memory share. So how can I force Python to clean the memory and free the memory that is not used? Any idea on how to free the unused memory ? Thanks for help. 
Robert

---------------------------------
Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fdu.xiaojf at gmail.com  Mon Aug 27 04:57:18 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Mon, 27 Aug 2007 16:57:18 +0800
Subject: [SciPy-user] scipy.optimize.leastsq failure
Message-ID: <46D291EE.3000706@gmail.com>

Hi,

I get an ier flag of 2 when running the following code:

----------------------------------------------------------
from scipy.optimize import leastsq
from numpy import array

x = array([1.2, 3.4, 5.6, 7.8])
y = (x-1.234)**2+3.456

def obj_f(pp, xx, yy):
    return pp[0] * (xx-pp[1])**2 + pp[2] - yy

b = leastsq(obj_f, (1., 1., 2.), args=(x,y), full_output=2)
print b
print obj_f(b[0], x, y)
----------------------------------------------------------

The output is:
----------------------------------------------------------
(array([ 1.   ,  1.234,  3.456]), array([[ 0.01067209,  0.03485503,  0.04927042],
[ 0.03485503,  0.12416711,  0.22839652],
[ 0.04927042,  0.22839652,  0.91824454]]), {'qtf': array([
-5.72325346e-10, -3.96833152e-10, -2.98440470e-11]), 'nfev': 13, 'fjac':
array([[-47.37134237,  0.09903785,  0.40239425,  0.9100936 ],
[ 15.89408601,  3.85306656,  0.61349297, -0.32983294],
[ -1.41155008, -0.95837977,  1.04356817,  0.82836952]]), 'fvec':
array([ -8.88178420e-16,  0.00000000e+00,  0.00000000e+00,
-7.10542736e-15]), 'ipvt': array([1, 2, 3])}, 'The relative error
between two consecutive iterates is at most 0.000000', 2)
[ -8.88178420e-16  0.00000000e+00  0.00000000e+00  -7.10542736e-15]
----------------------------------------------------------

It seems that the result is rather accurate, so why do I get an ier flag
of 2? I don't quite understand the error message.
Regards,

From elcorto at gmx.net  Mon Aug 27 05:25:52 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Mon, 27 Aug 2007 11:25:52 +0200
Subject: [SciPy-user] scipy.optimize.leastsq failure
In-Reply-To: <46D291EE.3000706@gmail.com>
References: <46D291EE.3000706@gmail.com>
Message-ID: <46D298A0.5020203@gmx.net>

fdu.xiaojf at gmail.com wrote:
> Hi,
>
> I get an ier flag of 2 when running the following code:
>
> ----------------------------------------------------------
>>from scipy.optimize import leastsq
>>from numpy import array
>
> x = array([1.2, 3.4, 5.6, 7.8])
> y = (x-1.234)**2+3.456
>
> def obj_f(pp, xx, yy):
>     return pp[0] * (xx-pp[1])**2 + pp[2] - yy
>
> b = leastsq(obj_f, (1., 1., 2.), args=(x,y), full_output=2)
> print b
> print obj_f(b[0], x, y)
> ----------------------------------------------------------
>
> The output is:
> ----------------------------------------------------------
> (array([ 1.   ,  1.234,  3.456]), array([[ 0.01067209,  0.03485503,  0.04927042],
> [ 0.03485503,  0.12416711,  0.22839652],
> [ 0.04927042,  0.22839652,  0.91824454]]), {'qtf': array([
> -5.72325346e-10, -3.96833152e-10, -2.98440470e-11]), 'nfev': 13, 'fjac':
> array([[-47.37134237,  0.09903785,  0.40239425,  0.9100936 ],
> [ 15.89408601,  3.85306656,  0.61349297, -0.32983294],
> [ -1.41155008, -0.95837977,  1.04356817,  0.82836952]]), 'fvec':
> array([ -8.88178420e-16,  0.00000000e+00,  0.00000000e+00,
> -7.10542736e-15]), 'ipvt': array([1, 2, 3])}, 'The relative error
> between two consecutive iterates is at most 0.000000', 2)
> [ -8.88178420e-16  0.00000000e+00  0.00000000e+00  -7.10542736e-15]
> ----------------------------------------------------------
>
> It seems that the result is rather accurate, so why do I get an ier flag of 2?
> I don't quite understand the error message.
>

leastsq() uses the underlying Fortran routines from MINPACK: LMDIF if you
don't provide a Jacobian (Dfun=None) or LMDER otherwise. The message tells
you that the routine met one of its convergence criteria.
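In other words, an ier of 2 is one of MINPACK's success codes (1-4 all
indicate convergence), not an error. As an independent, numpy-only sanity
check (a sketch, not part of the original thread): the model in the
question is quadratic in x, so the same parameters can also be recovered
with a plain linear least-squares solve:

```python
import numpy as np

x = np.array([1.2, 3.4, 5.6, 7.8])
y = (x - 1.234)**2 + 3.456

# p0*(x - p1)**2 + p2 expands to a*x**2 + b*x + c, which is linear
# in (a, b, c), so np.linalg.lstsq fits it directly.
A = np.vander(x, 3)                        # columns: x**2, x, 1
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)

p0 = a
p1 = -b / (2 * a)
p2 = c - a * p1**2
print(p0, p1, p2)   # ~ 1.0, 1.234, 3.456, matching the leastsq result
```

Agreement between the two fits confirms the leastsq answer is the true
minimizer, consistent with ier=2 being a convergence message.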
See the source of leastsq() in optimize/minpack.py and the docstrings of the Fortran routines ('info' flag): http://www.netlib.org/minpack/lmdif.f http://www.netlib.org/minpack/lmder.f -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From unpingco at osc.edu Mon Aug 27 08:18:28 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Mon, 27 Aug 2007 08:18:28 -0400 Subject: [SciPy-user] linalg.qr factorization so slow... (Unpingco) In-Reply-To: References: Message-ID: <46D25E9E.AA84.0083.0@osc.edu> Travis: Below is the corresponding output for the configuration for numpy. These refer to directories and files I do not have. Do I have to get these separately? Will they automatically integrate with my existing installation scipy? ================ In [3]: numpy.__config__.show() atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\ATLAS-3.6.0-bin'] define_macros = [('NO_ATLAS_INFO', 2)] language = c atlas_blas_threads_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\ATLAS-3.6.0-bin'] define_macros = [('NO_ATLAS_INFO', 2)] language = f77 atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\ATLAS-3.6.0-bin'] language = f77 lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\ATLAS-3.6.0-bin'] language = c mkl_info: NOT AVAILABLE Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Mon Aug 27 08:43:53 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 27 Aug 2007 14:43:53 +0200 Subject: [SciPy-user] How to free unused memory by Python In-Reply-To: <274793.23942.qm@web27409.mail.ukl.yahoo.com> References: <274793.23942.qm@web27409.mail.ukl.yahoo.com> Message-ID: <20070827124353.GQ14395@mentat.za.net> On Mon, Aug 27, 2007 at 09:32:12AM +0200, Robert VERGNES wrote: > Hello, > > This is not a scipy issue - albeit I do use scipy for my app- and that array() > creation seems to crash once I reached my upper Physical Memory limit. > > The question is general, How to free unused memory by Python: > > Te following small test demonstrates the issue: > > Before starting the test my UsedPhysicalMemory(PF): 555Mb > > >>>tf=range(0,10000000) PF: 710Mb ( so 155Mb for my List) > >>> tf=[0,1,2,3,4,5] PF: 672Mb (Why? Why the remaining 117Mb is not > freed?) > >>> del tf PF: 672Mb ( Nothing happens) Does it help if you manually run garbage collection? import gc gc.collect() St?fan From robert.vergnes at yahoo.fr Mon Aug 27 09:17:30 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Mon, 27 Aug 2007 15:17:30 +0200 (CEST) Subject: [SciPy-user] RE : Re: How to free unused memory by Python In-Reply-To: <20070827124353.GQ14395@mentat.za.net> Message-ID: <773556.7257.qm@web27404.mail.ukl.yahoo.com> unfortunately not... Stefan van der Walt a ?crit : On Mon, Aug 27, 2007 at 09:32:12AM +0200, Robert VERGNES wrote: > Hello, > > This is not a scipy issue - albeit I do use scipy for my app- and that array() > creation seems to crash once I reached my upper Physical Memory limit. > > The question is general, How to free unused memory by Python: > > Te following small test demonstrates the issue: > > Before starting the test my UsedPhysicalMemory(PF): 555Mb > > >>>tf=range(0,10000000) PF: 710Mb ( so 155Mb for my List) > >>> tf=[0,1,2,3,4,5] PF: 672Mb (Why? Why the remaining 117Mb is not > freed?) 
> >>> del tf PF: 672Mb ( Nothing happens) Does it help if you manually run garbage collection? import gc gc.collect() St?fan _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user --------------------------------- Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon Aug 27 09:28:59 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 27 Aug 2007 15:28:59 +0200 Subject: [SciPy-user] RE : Re: How to free unused memory by Python In-Reply-To: <773556.7257.qm@web27404.mail.ukl.yahoo.com> References: <20070827124353.GQ14395@mentat.za.net> <773556.7257.qm@web27404.mail.ukl.yahoo.com> Message-ID: <20070827132859.GD14668@clipper.ens.fr> Could it be due to some stored variable. In the Python interpretor "_" is the last answer, so the last answer does not get garbage collected. There might be other "jokes" lying around. HTH, Ga?l On Mon, Aug 27, 2007 at 03:17:30PM +0200, Robert VERGNES wrote: > unfortunately not... > Stefan van der Walt a ecrit : > On Mon, Aug 27, 2007 at 09:32:12AM +0200, Robert VERGNES wrote: > > Hello, > > This is not a scipy issue - albeit I do use scipy for my app- and that > array() > > creation seems to crash once I reached my upper Physical Memory limit. > > The question is general, How to free unused memory by Python: > > Te following small test demonstrates the issue: > > Before starting the test my UsedPhysicalMemory(PF): 555Mb > > >>>tf=range(0,10000000) PF: 710Mb ( so 155Mb for my List) > > >>> tf=[0,1,2,3,4,5] PF: 672Mb (Why? Why the remaining 117Mb is not > > freed?) > > >>> del tf PF: 672Mb ( Nothing happens) > Does it help if you manually run garbage collection? 
> import gc > gc.collect() From robert.vergnes at yahoo.fr Mon Aug 27 09:59:13 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Mon, 27 Aug 2007 15:59:13 +0200 (CEST) Subject: [SciPy-user] RE : Re: RE : Re: How to free unused memory by Python In-Reply-To: <20070827132859.GD14668@clipper.ens.fr> Message-ID: <360574.37221.qm@web27402.mail.ukl.yahoo.com> something like lists not being carbage collected ? strange... Gael Varoquaux a ?crit : Could it be due to some stored variable. In the Python interpretor "_" is the last answer, so the last answer does not get garbage collected. There might be other "jokes" lying around. HTH, Ga?l On Mon, Aug 27, 2007 at 03:17:30PM +0200, Robert VERGNES wrote: > unfortunately not... > Stefan van der Walt a ecrit : > On Mon, Aug 27, 2007 at 09:32:12AM +0200, Robert VERGNES wrote: > > Hello, > > This is not a scipy issue - albeit I do use scipy for my app- and that > array() > > creation seems to crash once I reached my upper Physical Memory limit. > > The question is general, How to free unused memory by Python: > > Te following small test demonstrates the issue: > > Before starting the test my UsedPhysicalMemory(PF): 555Mb > > >>>tf=range(0,10000000) PF: 710Mb ( so 155Mb for my List) > > >>> tf=[0,1,2,3,4,5] PF: 672Mb (Why? Why the remaining 117Mb is not > > freed?) > > >>> del tf PF: 672Mb ( Nothing happens) > Does it help if you manually run garbage collection? > import gc > gc.collect() _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user --------------------------------- Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gael.varoquaux at normalesup.org  Mon Aug 27 10:02:35 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 27 Aug 2007 16:02:35 +0200
Subject: [SciPy-user] RE : Re: RE : Re: How to free unused memory by Python
In-Reply-To: <360574.37221.qm@web27402.mail.ukl.yahoo.com>
References: <20070827132859.GD14668@clipper.ens.fr>
	<360574.37221.qm@web27402.mail.ukl.yahoo.com>
Message-ID: <20070827140232.GF14668@clipper.ens.fr>

On Mon, Aug 27, 2007 at 03:59:13PM +0200, Robert VERGNES wrote:
> something like lists not being carbage collected ? strange...

No, just references lying around because of magic variables. Can you run
your tests in a script to check for this, rather than an interactive
environment (I would do so, but I do not know how you test for used
memory).

Gaël

> Gael Varoquaux a ecrit :
> Could it be due to some stored variable. In the Python interpretor "_"
> is
> the last answer, so the last answer does not get garbage collected.
> There
> might be other "jokes" lying around.

From grante at visi.com  Mon Aug 27 11:38:28 2007
From: grante at visi.com (Grant Edwards)
Date: Mon, 27 Aug 2007 15:38:28 +0000 (UTC)
Subject: [SciPy-user] Entought 1.0.0: scipy fails to import
Message-ID:

I can't seem to figure out how to get scipy to work using Enthoguht 1.0.0.
A simple "import scypy" crashes:

$ python
Python 2.4.3 - Enthought Edition 1.0.0 (#69, Aug 2 2006, 12:09:59) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy c:\python24\lib\site-packages\numpy\testing\numpytest.py:634: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code DeprecationWarning) Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) RuntimeError: module compiled against version 90907 of C-API but this version of numpy is 1000009 Fatal Python error: numpy.core.multiarray failed to import... exiting. This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Any ideas what's wrong? -- Grant Edwards grante Yow! OVER the underpass! at UNDER the overpass! visi.com Around the FUTURE and BEYOND REPAIR!! From grante at visi.com Mon Aug 27 12:25:20 2007 From: grante at visi.com (Grant Edwards) Date: Mon, 27 Aug 2007 16:25:20 +0000 (UTC) Subject: [SciPy-user] Entought 1.0.0: scipy fails to import References: Message-ID: On 2007-08-27, Grant Edwards wrote: > I can't seem to figure out how to get scipy to work using > Enthoguht 1.0.0. A simple "import scypy" crashes: > > $ python > Python 2.4.3 - Enthought Edition 1.0.0 (#69, Aug 2 2006, 12:09:59) [MSC v.1310 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > c:\python24\lib\site-packages\numpy\testing\numpytest.py:634: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code DeprecationWarning) > Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) > Overwriting who= from scipy.misc.common (was from numpy.lib.utils) > Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) > RuntimeError: module compiled against version 90907 of C-API but this version of numpy is 1000009 > Fatal Python error: numpy.core.multiarray failed to import... exiting. 
> > This application has requested the Runtime to terminate it in an unusual way. > Please contact the application's support team for more information. I'm beginning to suspect that somehow a new version of numpy got installed. Though I'm not sure how. I think all the stuff dated Mar 1 is the original Enthuoght install, and numpy is too new... $ pwd /cygdrive/c/Python24/Lib/site-packages $ ls -ltr G at grants-virtual /cygdrive/c/Python24/Lib/site-packages total 4103 -rwxrwxrwx 1 Administrators None 119 Mar 31 2000 README.txt -rwxrwxrwx 1 Administrators None 2687 Jun 15 2003 roman.py -rwxrwxrwx 1 Administrators None 141 Oct 31 2003 pythoncom.py -rwxrwxrwx 1 Administrators None 69 Oct 10 2004 pywin32.pth -rwxrwxrwx 1 Administrators None 33616 Dec 21 2004 pydot.py [...] -rwxrwxrwx 1 Administrators None 9 Aug 2 2006 Numeric.pth -rwxrwxrwx 1 Administrators None 5 Aug 2 2006 pywin32.version.txt -rwxrwxrwx 1 Administrators None 32 Aug 2 2006 setuptools.pth drwxrwxrwx+ 4 Administrators None 0 Mar 1 12:54 wx-2.6-msw-unicode-enthought drwxrwxrwx+ 8 Administrators None 4096 Mar 1 12:54 vtk drwxrwxrwx+ 2 Administrators None 16384 Mar 1 12:54 vtk_python drwxrwxrwx+ 8 Administrators None 4096 Mar 1 12:54 mayavi drwxrwxrwx+ 3 Administrators None 4096 Mar 1 12:54 vtkPipeline drwxrwxrwx+ 2 Administrators None 8192 Mar 1 12:56 PyCrust drwxrwxrwx+ 3 Administrators None 4096 Mar 1 12:57 BTrees drwxrwxrwx+ 3 Administrators None 4096 Mar 1 12:57 persistent drwxrwxrwx+ 2 Administrators None 4096 Mar 1 12:57 ThreadedAsync drwxrwxrwx+ 3 Administrators None 4096 Mar 1 12:57 transaction drwxrwxrwx+ 4 Administrators None 4096 Mar 1 12:57 ZConfig drwxrwxrwx+ 3 Administrators None 4096 Mar 1 12:57 zdaemon drwxrwxrwx+ 5 Administrators None 12288 Mar 1 12:57 ZEO drwxrwxrwx+ 4 Administrators None 20480 Mar 1 12:57 ZODB drwxrwxrwx+ 4 Administrators None 0 Mar 1 12:58 zope [...] 
drwxrwxrwx+ 13 Administrators None 4096 May 21 10:23 Scientific drwxrwxrwx+ 3 G None 8192 May 21 14:01 Gnuplot drwxrwxrwx+ 2 Administrators None 4096 Aug 15 15:07 serial drwxrwxrwx+ 20 Administrators None 8192 Aug 27 10:21 scipy drwxrwxrwx+ 2 Administrators None 32768 Aug 27 10:21 PIL -- Grant Edwards grante Yow! An air of FRENCH FRIES at permeates my nostrils!! visi.com From unpingco at osc.edu Mon Aug 27 12:31:15 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Mon, 27 Aug 2007 12:31:15 -0400 Subject: [SciPy-user] pylab pyCrust/pythonWin crash Message-ID: <46D2C413020000830003D750@gw.osc.edu> I am using the Enthought Windows XP distribution of scipy. >From pyCrust or pythonWin I create a simple plot as in : >> from pylab import * >> a =arange(10) >>plot(a,a) >> show() * I close the window using the mouse * >> plot(a,a) * window crashes with the following error* --------------------------- Microsoft Visual C++ Runtime Library --------------------------- Runtime Error! Program: C:\Python24\pythonw.exe This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. --------------------------- OK --------------------------- =========================== Any help is appreciated. Thanks! Jose Unpingco, Ph.D. (619)553-2922 unpingco at osc.edu From gael.varoquaux at normalesup.org Mon Aug 27 12:38:20 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 27 Aug 2007 18:38:20 +0200 Subject: [SciPy-user] pylab pyCrust/pythonWin crash In-Reply-To: <46D2C413020000830003D750@gw.osc.edu> References: <46D2C413020000830003D750@gw.osc.edu> Message-ID: <20070827163820.GG14668@clipper.ens.fr> On Mon, Aug 27, 2007 at 12:31:15PM -0400, Jose Unpingco wrote: > I am using the Enthought Windows XP distribution of scipy. 
> >From pyCrust or pythonWin I create a simple plot as in : > >> from pylab import * > >> a =arange(10) > >>plot(a,a) > >> show() > * I close the window using the mouse * > >> plot(a,a) > * window crashes with the following error* > --------------------------- > Microsoft Visual C++ Runtime Library > --------------------------- > Runtime Error! > Program: C:\Python24\pythonw.exe This is a FAQ :-> (I asked it a bit more than a year ago). See http://matplotlib.sourceforge.net/interactive.html about this. I strongly suggest that you use ipython with pylab. It comes with a lot of extra goodies (like easy use of the debugger, or dimensioned physics variables; see http://showmedo.com/videos/series?name=CnluURUTV for more info). HTH, Gaël From robert.kern at gmail.com Mon Aug 27 12:49:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 27 Aug 2007 11:49:41 -0500 Subject: [SciPy-user] How to free unused memory by Python In-Reply-To: <274793.23942.qm@web27409.mail.ukl.yahoo.com> References: <274793.23942.qm@web27409.mail.ukl.yahoo.com> Message-ID: <46D300A5.3010902@gmail.com> Robert VERGNES wrote: > Hello, > > This is not a scipy issue - albeit I do use scipy for my app- and that > array() creation seems to crash once I reached my upper Physical Memory > limit. > > The question is general, How to free unused memory by Python: > > The following small test demonstrates the issue: > > Before starting the test my UsedPhysicalMemory(PF): 555Mb > >>>>tf=range(0,10000000) PF: 710Mb ( so 155Mb for my List) >>>> tf=[0,1,2,3,4,5] PF: 672Mb (Why? Why the remaining 117Mb is > not freed?) >>>> del tf PF: 672Mb ( Nothing happens) > > So changing the list contents and/or deleting the list changes nothing... > > This is a problem as I have several applications/threads competing for > memory share. > > So how can I force Python to clean the memory and free the memory that > is not used?
Pythons previous to 2.5 do not release memory back to the OS even after all of your objects have been completely decrefed. C.f. http://evanjones.ca/python-memory-part3.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Mon Aug 27 14:56:04 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 27 Aug 2007 21:56:04 +0300 Subject: [SciPy-user] simpliest way to determine is matrix positive - definite? Message-ID: <46D31E44.5080204@ukr.net> hi all, what's the simplest way to determine whether a matrix is positive definite? (using numpy/scipy of course) Also, do you know the asymptotic complexity of the function? i.e. O(log n * n^2) or the like. Thank you in advance, D. From aisaac at american.edu Mon Aug 27 15:23:26 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 27 Aug 2007 15:23:26 -0400 Subject: [SciPy-user] =?utf-8?q?simpliest_way_to_determine_is_matrix_posit?= =?utf-8?q?ive_-=09definite=3F?= In-Reply-To: <46D31E44.5080204@ukr.net> References: <46D31E44.5080204@ukr.net> Message-ID: On Mon, 27 Aug 2007, dmitrey apparently wrote: > what's the simplest way to determine whether a matrix is > positive definite? (using numpy/scipy of course) Look at the eigenvalues of the symmetric part? (Did you mean simplest conceptually or computationally?) Is the matrix known to be real or symmetric? If so, attempt a Cholesky factorization? (Simplest.) Look at the principal minors? Cheers, Alan Isaac From openopt at ukr.net Mon Aug 27 15:39:09 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 27 Aug 2007 22:39:09 +0300 Subject: [SciPy-user] simpliest way to determine is matrix positive - definite? In-Reply-To: References: <46D31E44.5080204@ukr.net> Message-ID: <46D3285D.9030906@ukr.net> Yes, matrix is real and symmetric.
However, I believed there was a simpler way to obtain that single bit of info than doing a Cholesky factorization with O(n^3) operations. Ok, I will use the latter. D. Alan G Isaac wrote: > On Mon, 27 Aug 2007, dmitrey apparently wrote: > >> what's the simplest way to determine whether a matrix is >> positive definite? (using numpy/scipy of course) >> > > Look at the eigenvalues of the symmetric part? > (Did you mean simplest conceptually or computationally?) > Is the matrix known to be real or symmetric? > If so, attempt a Cholesky factorization? (Simplest.) > Look at the principal minors? > > Cheers, > Alan Isaac > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From dominique.orban at gmail.com Mon Aug 27 17:21:51 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Mon, 27 Aug 2007 17:21:51 -0400 Subject: [SciPy-user] simpliest way to determine is matrix positive - definite? In-Reply-To: <46D3285D.9030906@ukr.net> References: <46D31E44.5080204@ukr.net> <46D3285D.9030906@ukr.net> Message-ID: <8793ae6e0708271421g5d0974e8jdaf44044a65fc097@mail.gmail.com> On 8/27/07, dmitrey wrote: > > Yes, matrix is real and symmetric. > However, I believed there was a simpler way to obtain that > single bit of info than doing a Cholesky factorization with O(n^3) operations. > Ok, I will use the latter. > D. That's a difficult question from a numerical point of view. What will you do if you find that the smallest eigenvalue is 1.0e-16? Is the matrix really positive definite or is it positive semi-definite? Or worse, is it indefinite with an eigenvalue equal to -1.0e-16, and numerical errors occurred? What you want may be a procedure to identify when a matrix is NOT positive definite. For instance, if Cholesky fails, it would be the case. A cheaper way might be to use a plain conjugate gradient iteration and stop if you find a direction p with p^T A p <= 0.
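The try-Cholesky test discussed in this thread can be sketched as follows (a minimal illustration assuming numpy; the helper name `is_positive_definite` is mine, not from the thread):

```python
import numpy as np

def is_positive_definite(A):
    """Return True if the symmetric matrix A is positive definite.

    numpy.linalg.cholesky raises LinAlgError when the factorization
    fails; for a symmetric matrix that means A is not positive definite.
    """
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

spd = np.array([[2.0, 1.0], [1.0, 2.0]])    # eigenvalues 1 and 3
indef = np.array([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues -1 and 3

print(is_positive_definite(spd))    # True
print(is_positive_definite(indef))  # False
```

As Dominique points out, this only gives a clean yes/no away from the borderline; eigenvalues at the scale of machine precision can flip the answer either way.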
Or you could use any type of iterative method that estimates the smallest eigenvalue... Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Mon Aug 27 23:23:29 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Mon, 27 Aug 2007 21:23:29 -0600 Subject: [SciPy-user] really basic where() function question Message-ID: I'm trying to put together a presentation at work on Python, and I'm confused about the where() numpy function. The documentation, which is scant, indicates that where() requires three input arguments: where(condition,x,y) returns an array shaped like condition and has elements of x and y where condition is respectively true or false First, I have to admit that I don't understand what x and y are for here. Second, I want to use where() like find() in Matlab - namely: a = array([3,5,7,9]) i = where(a <= 6) => should return (array([0, 1]),) (see http:// www.scipy.org/ Numpy_Example_List#head-7de97cb88f064612d2f339e9713a949cd7f2f804) instead, I get this error: ------------------------------------------------------------------------ --- Traceback (most recent call last) /Users/mhearne/scipy/ in () : where() takes exactly 3 arguments (1 given) What am I doing wrong? Or did the usage of where() change since the cookbook example was written? --Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From as8ca at virginia.edu Mon Aug 27 23:44:05 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Mon, 27 Aug 2007 23:44:05 -0400 Subject: [SciPy-user] really basic where() function question In-Reply-To: References: Message-ID: <20070828034405.GA16864@virginia.edu> On 27/08/07: 21:23, Michael Hearne wrote: > I'm trying to put together a presentation at work on Python, and I'm > confused about the where() numpy function. 
> The documentation, which is scant, indicates that where() requires three > input arguments: > where(condition,x,y) returns an array shaped like condition and has > elements of x and y where condition is respectively true or false > First, I have to admit that I don't understand what x and y are for here. > Second, I want to use where() like find() in Matlab - namely: > a = array([3,5,7,9]) > i = where(a <= 6) => should return (array([0, 1]),) > (see http://www.scipy.org/Numpy_Example_List#head-7de97cb88f064612d2f339e9713a949cd7f2f804) > instead, I get this error: > --------------------------------------------------------------------------- > Traceback (most recent call > last) > /Users/mhearne/scipy/ in () > : where() takes exactly 3 arguments (1 given) > What am I doing wrong? Or did the usage of where() change since the > cookbook example was written? What is the version of numpy you are running? I get: >>> import numpy >>> numpy.__version__ '1.0.4.dev4015' >>> a = numpy.array([3,5,7,9]) >>> i = numpy.where(a <= 6) >>> i (array([0, 1]),) >>> a[i] array([3, 5]) You could also try: >>> i = numpy.where(a <= 6, 1, 0).astype('Bool') >>> i array([ True, True, False, False], dtype=bool) >>> a[i] array([3, 5]) To answer your other question, if x and y are supplied, numpy.where() returns an array of the same shape as the array in the first parameter. The returned array has value x where the condition is True, and value y where it is False: >>> i = numpy.where(a <= 6, 1, 0) >>> i array([1, 1, 0, 0]) -Alok -- Alok Singhal * * Graduate Student, dept. 
of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From robert.kern at gmail.com Mon Aug 27 23:50:40 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 27 Aug 2007 22:50:40 -0500 Subject: [SciPy-user] really basic where() function question In-Reply-To: References: Message-ID: <46D39B90.9050409@gmail.com> Michael Hearne wrote: > I'm trying to put together a presentation at work on Python, and I'm > confused about the where() numpy function. > > The documentation, which is scant, indicates that where() requires three > input arguments: It doesn't require 3; it can take either 1 or 3. It does that find()-like behavior when given only one argument. > where(condition,x,y) returns an array shaped like condition and has > elements of x and y where condition is respectively true or false > > First, I have to admit that I don't understand what x and y are for here. They provide the actual values that will be in the result. There are a number of use cases. For example, let's say you wanted to rectify an image: each pixel with a value less than a threshold will become 0 and those at or above will be 255: img = where(img < theshold, 0, 255) Or, let's say you wanted to leave the above-threshold values alone: img = where(img < threshold, 0, img) > Second, I want to use where() like find() in Matlab - namely: > a = array([3,5,7,9]) > i = where(a <= 6) => should return (array([0, 1]),) > (see http://www.scipy.org/Numpy_Example_List#head-7de97cb88f064612d2f339e9713a949cd7f2f804) > > instead, I get this error: > --------------------------------------------------------------------------- > Traceback (most recent call last) > > /Users/mhearne/scipy/ in () > > : where() takes exactly 3 arguments (1 given) > > What am I doing wrong? Or did the usage of where() change since the > cookbook example was written? It works for me. Exactly what code did you type in? Copy-and-paste the whole thing along with the traceback in context. 
In [1]: from numpy import * In [2]: a = array([3,5,7,9]) In [3]: i = where(a <= 6) In [4]: i Out[4]: (array([0, 1]),) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Tue Aug 28 00:23:14 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 28 Aug 2007 00:23:14 -0400 Subject: [SciPy-user] Legendre Polynomials In-Reply-To: <57811.147.142.111.40.1187863102.squirrel@srv0.lsw.uni-heidelberg.de> References: <57811.147.142.111.40.1187863102.squirrel@srv0.lsw.uni-heidelberg.de> Message-ID: On 23/08/07, rgold at lsw.uni-heidelberg.de wrote: > A first quick (but not satisfying) idea was to use the values calculated > for x>=0 and copy them (of course correcting for the minus sign if l is > odd). > Problems remaining: > -> WHY is the evaluation wrong for x close to -1? > -> Can I trust the routine at all? > > As a test I tried to expand cos(x) over Legendre-Polynomials because I > know the result: Coefficient 1 should be 1 and the others as small as > possible! > > It works quite fine as long as l<33! All of scipy's orthogonal polynomials are implemented using recurrence relations. These are a fairly good way to address the problem in general, but they can go berserk for high orders. I ran into a similar problem when trying to evaluate high-order orthogonal polynomials of several types (to construct my own with custom weights). In some cases there are other approaches for evaluating them accurately (for example the nth Chebyshev polynomial is cos(n*arccos(X))); you might check Abramowitz and Stegun for some helpful analytic relations. I think scipy.special might also have some legendre functions, implemented using cephes, that might be more reliably accurate. 
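As a concrete instance of the closed forms Anne mentions, the Chebyshev identity T_n(x) = cos(n*arccos(x)) can be checked against the usual three-term recurrence (a sketch assuming numpy; the function names are mine):

```python
import numpy as np

def cheb_closed(n, x):
    """T_n(x) via the closed form cos(n * arccos(x)), valid on [-1, 1]."""
    return np.cos(n * np.arccos(x))

def cheb_recurrence(n, x):
    """T_n(x) via the three-term recurrence T_{k+1} = 2 x T_k - T_{k-1}."""
    t_prev, t_cur = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

x = np.linspace(-1.0, 1.0, 101)
print(np.allclose(cheb_closed(10, x), cheb_recurrence(10, x)))  # True
```

Chebyshev is a special case here; most families, Legendre included, have no equally simple trigonometric form.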
In terms of improving scipy, I don't think there's a better way to handle arbitrary families of orthogonal polynomials. But it would be nice to have certain classes of orthogonal polynomial - chebyshev, for example - implemented using special-case, more reliable, methods. I haven't looked to see whether the orthogonal polynomial class (which is in pure python) admits this kind of extensibility. Anne From kwmsmith at gmail.com Tue Aug 28 02:18:39 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Tue, 28 Aug 2007 01:18:39 -0500 Subject: [SciPy-user] Legendre Polynomials In-Reply-To: References: <57811.147.142.111.40.1187863102.squirrel@srv0.lsw.uni-heidelberg.de> Message-ID: On 8/27/07, Anne Archibald wrote: > On 23/08/07, rgold at lsw.uni-heidelberg.de wrote: > > > A first quick (but not satisfying) idea was to use the values calculated > > for x>=0 and copy them (of course correcting for the minus sign if l is > > odd). > > Problems remaining: > > -> WHY is the evaluation wrong for x close to -1? > > -> Can I trust the routine at all? > > [snippage] > > In terms of improving scipy, I don't think there's a better way to > handle arbitrary families of orthogonal polynomials. But it would be > nice to have certain classes of orthogonal polynomial - chebyshev, for > example - implemented using special-case, more reliable, methods. I > haven't looked to see whether the orthogonal polynomial class (which > is in pure python) admits this kind of extensibility. > > Anne I took a stab at Anne's suggestion and tried a direct implementation of Legendre's polys in terms of a series, not a recurrence. It certainly isn't as general as the orthogonal poly class, but it seems to solve the OP's accuracy problems. For implementation, see http://mathworld.wolfram.com/LegendrePolynomial.html equation #33. I don't claim this is the best series to implement for the problem, just the first one I tried. Use at your own risk!!!
import numpy as np from scipy import factorial as fac def nCk(n,k): return fac(n,1) / fac(n-k,1) / fac(k,1) def P_l(n): """ P_l(n) -> legendre poly. of order n. returns a (vectorized) function that evaluates the legendre poly. of order n. Able to handle x = -1.0 faithfully for large order. Reference: http://mathworld.wolfram.com/LegendrePolynomial.html, eqn. 33 """ if n < 0: raise ValueError("n must be >= 0") front = 1.0/2.**n coeff_arr = np.array([nCk(n,k)**2 for k in range(n+1)]) def inner_func(x): if not -1.0 <= x <= 1.0: #perhaps do without this check raise ValueError("x outside bounds") xm1,xp1 = x-1., x+1. val_arr = np.array( [xm1**(n-k)*xp1**k for k in range(n+1)]) return front * np.dot(coeff_arr, val_arr) return np.vectorize(inner_func) from scipy.special import legendre as ssl err = [ np.abs(ssl(k)(xx)-P_l(k)(xx)).sum() for k in range(33)] # max(err) = 0.70110163376646795 for k == 32; error small for smaller k, diverges for larger k... endpts = np.array([P_l(k)(-1.0) for k in range(200)]) print endpts (endpts[::2] == 1.0).all() # True. (endpts[1::2] == -1.0).all() # True. From kwmsmith at gmail.com Tue Aug 28 02:26:42 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Tue, 28 Aug 2007 01:26:42 -0500 Subject: [SciPy-user] really basic where() function question In-Reply-To: <46D39B90.9050409@gmail.com> References: <46D39B90.9050409@gmail.com> Message-ID: On 8/27/07, Robert Kern wrote: > Michael Hearne wrote: > > I'm trying to put together a presentation at work on Python, and I'm > > confused about the where() numpy function. > > > > The documentation, which is scant, indicates that where() requires three > > input arguments: The pylab-numpy inconsistency bug strikes again! There is a pylab "where" that is different from the numpy "where." 
import pylab as pl import numpy as np a = np.array([3,5,7,9]) np.where(a<=6) # yields (array([0, 1]),) pl.where(a<=6) # Traceback (most recent call last): # File "<stdin>", line 1, in <module> # TypeError: where() takes exactly 3 arguments (1 given) Moral: be careful about importing * from pylab (or running $ ipython -pylab)! Its functions aren't the same as numpy/scipy. Kurt From openopt at ukr.net Tue Aug 28 03:25:12 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 10:25:12 +0300 Subject: [SciPy-user] solving Ax=b, known L: A = L L^T Message-ID: <46D3CDD8.3020803@ukr.net> Hi all, suppose I have a system Ax=b and I know (Cholesky) triangular L: A = L L^T what's the simplest way to obtain x now? (I meant python code) is it solve(L^T, solve(L,b))? or should I somehow inform solve() that L is triangular or, maybe, use another func? Documentation of numpy.linalg.solve doesn't say anything. Afaik MATLAB's '\' determines the matrix type automatically. Regards, D. From fredmfp at gmail.com Tue Aug 28 03:25:30 2007 From: fredmfp at gmail.com (fred) Date: Tue, 28 Aug 2007 09:25:30 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points [Was: choose at random all elements in a array...] In-Reply-To: <46CC5AF0.5070405@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> Message-ID: <46D3CDEA.8010102@gmail.com>
> > I display the result with mayavi2, and get the following snapshot: > http://fredantispam.free.fr/snapshot.png > > Structures clearly appear. > (I don't consider side effects, of course). > > What could be the meaning of this ? > If it was "totally random", should I not get no structure at all, > something like a white noise ? > > What am I doing wrong ? > > How can I get a totally random scatter points ? No one can help me on this issue ? TIA. Cheers, -- http://scipy.org/FredericPetit From matthieu.brucher at gmail.com Tue Aug 28 03:36:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 28 Aug 2007 09:36:23 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points [Was: choose at random all elements in a array...] In-Reply-To: <46D3CDEA.8010102@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> Message-ID: Hi, Considering your window and your points, it doesn't surprise me much. What do you want exactly as a result ? Matthieu 2007/8/28, fred : > > fred a ?crit : > > Hi all, > > > > I would like to show how I use this, > > with a result, which seems to be strange to me. > > > > I have a float array, with dimensions of 501x501. > > The values in this array are the scalar values at the point (x=i*dx, > > y=j*dy,) > > with i=0..500 & j=0..500. > > > > I flatten this array and get 15000 random (with random.permuation > > method) values from this array > > and thus get 15000 points. > > Then I want to find out the number of neighboors from each of theses > > points, > > in a neighboorhood of (2*20+1)x(2*20+1). > > > > Thus, I get an array with coords and the number of neighboors. > > x0 y0 nb0 > > x1 y1 nb1 > > ... > > > > I display the result with mayavi2, and get the following snapshot: > > http://fredantispam.free.fr/snapshot.png > > > > Structures clearly appear. > > (I don't consider side effects, of course). > > > > What could be the meaning of this ? 
> > If it was "totally random", should I not get no structure at all, > > something like a white noise ? > > > > What am I doing wrong ? > > > > How can I get a totally random scatter points ? > No one can help me on this issue ? > > TIA. > > Cheers, > > -- > http://scipy.org/FredericPetit > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Tue Aug 28 04:14:26 2007 From: fredmfp at gmail.com (fred) Date: Tue, 28 Aug 2007 10:14:26 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> Message-ID: <46D3D962.5090006@gmail.com> Matthieu Brucher a ?crit : > Hi, > > Considering your window and your points, it doesn't surprise me much. > What do you want exactly as a result ? Something more uniform, no ? Why do I get these "structures" ? Can you explain me a little bit more ? Cheers, -- http://scipy.org/FredericPetit From elcorto at gmx.net Tue Aug 28 05:06:22 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 28 Aug 2007 11:06:22 +0200 Subject: [SciPy-user] solving Ax=b, known L: A = L L^T In-Reply-To: <46D3CDD8.3020803@ukr.net> References: <46D3CDD8.3020803@ukr.net> Message-ID: <46D3E58E.3050709@gmx.net> dmitrey wrote: > Hi all, > suppose I have a system Ax=b and I know (Cholesky) triangular L: A = L L^T > what's the simplest way to obtain x now? > (I meant python code) > is it solve(L^T, solve(L,b))? > or I should somehow inform solve() about L is triangular or, maybe, use > other func? > Documentation of numpy.linalg.solve doesn't say anything. > Afaik MATLAB's '\' determ type of matrix automatically. > Regards, D. > I don't do linalg much, but I know of linalg.cho_factor() and linalg.cho_solve(). 
In [42]: A Out[42]: array([[ 5., 2., 1.], [ 2., 5., 2.], [ 1., 2., 5.]]) In [43]: b Out[43]: array([ 1., 2., 3.]) In [44]: linalg.solve(A,b, sym_pos=0) Out[44]: array([ 0.02272727, 0.18181818, 0.52272727]) In [45]: linalg.solve(A,b, sym_pos=1) Out[45]: array([ 0.02272727, 0.18181818, 0.52272727]) In [46]: linalg.cho_solve(linalg.cho_factor(A), b) Out[46]: array([ 0.02272727, 0.18181818, 0.52272727]) -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From massimo.sandal at unibo.it Tue Aug 28 05:50:17 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Tue, 28 Aug 2007 11:50:17 +0200 Subject: [SciPy-user] kernel density estimation in scipy? Message-ID: <46D3EFD9.1050607@unibo.it> Hi, Just a little suggestion. I recently found this package ( http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/python/Statistics/ ) that allows, among other things, for easy kernel density estimation of a data set. The software works nicely, but it doesn't seem very well maintained (it says it has not been checked with NumPy, for example). Does a similar functionality exist on SciPy? If not, why not considering including it in scipy? Are there other KDE and related statistical packages for Python? m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From pgmdevlist at gmail.com Tue Aug 28 08:31:51 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 28 Aug 2007 08:31:51 -0400 Subject: [SciPy-user] kernel density estimation in scipy? 
In-Reply-To: <46D3EFD9.1050607@unibo.it> References: <46D3EFD9.1050607@unibo.it> Message-ID: <200708280831.51799.pgmdevlist@gmail.com> On Tuesday 28 August 2007 05:50:17 massimo sandal wrote: > The software works nicely, but it doesn't seem very well maintained (it > says it has not been checked with NumPy, for example). Does a similar > functionality exist on SciPy? If not, why not considering including it > in scipy? Are there other KDE and related statistical packages for Python? If you're willing to be a beta-tester, I'm working on adapting SiZer maps to Python. http://www.stat.unc.edu/faculty/marron/DataAnalyses/SiZer_Intro.html The module presents some functions to compute Gaussian KDEs, find the optimal bandwidth according to different algorithms (Terrell, Sheather-Jones, Rupert-Sheather-Wand). A couple of features are still missing (such as handling missing/censored values), but I should be able to send you the sources nevertheless. In the next few days (couple of weeks) I hope to be able to propose it as a Scikit From mhearne at usgs.gov Tue Aug 28 09:21:31 2007 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 28 Aug 2007 07:21:31 -0600 Subject: [SciPy-user] really basic where() function question In-Reply-To: References: <46D39B90.9050409@gmail.com> Message-ID: Kurt got it in one - I was using (without knowing it), the pylab version of "where", which I hadn't known existed. I'm using ipython with the -pylab option, which generally speaking I really like. Are there any other gotcha's like this that users should know about? --Mike On Aug 28, 2007, at 12:26 AM, Kurt Smith wrote: > On 8/27/07, Robert Kern wrote: >> Michael Hearne wrote: >>> I'm trying to put together a presentation at work on Python, and I'm >>> confused about the where() numpy function. >>> >>> The documentation, which is scant, indicates that where() >>> requires three >>> input arguments: > > The pylab-numpy inconsistency bug strikes again! 
> > There is a pylab "where" that is different from the numpy "where." > > import pylab as pl > import numpy as np > > a = np.array([3,5,7,9]) > np.where(a<=6) > # yields (array([0, 1]),) > pl.where(a<=6) > # Traceback (most recent call last): > # File "", line 1, in > # TypeError: where() takes exactly 3 arguments (1 given) > > Moral: be careful about importing * from pylab (or running $ ipython > -pylab)! Its functions aren't the same as numpy/scipy. > > Kurt > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominique.orban at gmail.com Tue Aug 28 09:25:33 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Tue, 28 Aug 2007 09:25:33 -0400 Subject: [SciPy-user] solving Ax=b, known L: A = L L^T In-Reply-To: <46D3E58E.3050709@gmx.net> References: <46D3CDD8.3020803@ukr.net> <46D3E58E.3050709@gmx.net> Message-ID: <8793ae6e0708280625p55afae73h6144ce8f90e75050@mail.gmail.com> > dmitrey wrote: > > Hi all, > > suppose I have a system Ax=b and I know (Cholesky) triangular L: A = L > L^T > > what's the simplest way to obtain x now? > > (I meant python code) > > is it solve(L^T, solve(L,b))? > > or I should somehow inform solve() about L is triangular or, maybe, use > > other func? You have to use the fact that L is triangular. Writing a solve function with triangular L is a basic numerical linear algebra 101 exercise. You may want to get a good book too. Dominique -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openopt at ukr.net Tue Aug 28 09:34:59 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 16:34:59 +0300 Subject: [SciPy-user] solving Ax=b, known L: A = L L^T In-Reply-To: <8793ae6e0708280625p55afae73h6144ce8f90e75050@mail.gmail.com> References: <46D3CDD8.3020803@ukr.net> <46D3E58E.3050709@gmx.net> <8793ae6e0708280625p55afae73h6144ce8f90e75050@mail.gmail.com> Message-ID: <46D42483.2070907@ukr.net> Yes, of course, I understand how to solve a triangular linear system, but I just wanted to check whether numpy.linalg.solve does it by itself - whether it takes the matrix type (upper/lower triangular, symmetric, etc.) into account, like MATLAB's '\' does automatically or MATLAB's linsolve does with special flags. I have no time to reinvent the wheel twice.. Regards, D. Thank you for your answers. Dominique Orban wrote: > > dmitrey wrote: > > Hi all, > > suppose I have a system Ax=b and I know (Cholesky) triangular L: > A = L L^T > > what's the simplest way to obtain x now? > > (I meant python code) > > is it solve(L^T, solve(L,b))? > > or I should somehow inform solve() about L is triangular or, > maybe, use > > other func? > > > You have to use the fact that L is triangular. Writing a solve > function with triangular L is a basic numerical linear algebra 101 > exercise. You may want to get a good book too. > > Dominique > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From openopt at ukr.net Tue Aug 28 09:40:06 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 16:40:06 +0300 Subject: [SciPy-user] most negative eigenvalue Message-ID: <46D425B6.8090800@ukr.net> hi all, what's the best way to solve the problem: A - symmetric real matrix.
I need to find whether A is positive definite, and if not, I need to know its most negative eigenvalue. Thank you in advance, D. From openopt at ukr.net Tue Aug 28 09:43:01 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 16:43:01 +0300 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: <46D425B6.8090800@ukr.net> References: <46D425B6.8090800@ukr.net> Message-ID: <46D42665.6020703@ukr.net> I don't require high precision for the eigenvalue; a relative error of +/- 20-30% is OK dmitrey wrote: > hi all, > what's the best way to solve the problem: > > A - symmetric real matrix. > I need to find is A positive-definite, and if not, I need to know what's > the most negative eigenvalue does it has? > > Thank you in advance, D. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From nwagner at iam.uni-stuttgart.de Tue Aug 28 09:47:13 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Aug 2007 15:47:13 +0200 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: <46D425B6.8090800@ukr.net> References: <46D425B6.8090800@ukr.net> Message-ID: On Tue, 28 Aug 2007 16:40:06 +0300 dmitrey wrote: > hi all, > what's the best way to solve the problem: > > A - symmetric real matrix. > I need to find is A positive-definite, and if not, I >need to know what's > the most negative eigenvalue does it has? > > Thank you in advance, D. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user You may use linalg.cholesky. If A is not spd it will return an error. Then you can use symeig to compute the leftmost eigenvalue.
Cheers, Nils From openopt at ukr.net Tue Aug 28 09:57:01 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 16:57:01 +0300 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: References: <46D425B6.8090800@ukr.net> Message-ID: <46D429AD.2040402@ukr.net> Yes, I already use try: cholesky(A) except: ... but it makes impossible to use debugger, that is very bad. is the symeig a part of scipy? I didn't see that one... D. Nils Wagner wrote: > On Tue, 28 Aug 2007 16:40:06 +0300 > dmitrey wrote: > >> hi all, >> what's the best way to solve the problem: >> >> A - symmetric real matrix. >> I need to find is A positive-definite, and if not, I >> need to know what's >> the most negative eigenvalue does it has? >> >> Thank you in advance, D. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > You may use linalg.cholesky. > If A is not spd it will return an error. > Then you can use symeig to compute the leftmost > eigenvalue. > > Cheers, > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From aisaac at american.edu Tue Aug 28 10:04:17 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 28 Aug 2007 10:04:17 -0400 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: <46D425B6.8090800@ukr.net> References: <46D425B6.8090800@ukr.net> Message-ID: On Tue, 28 Aug 2007, dmitrey apparently wrote: > hi all, > what's the best way to solve the problem: > A - symmetric real matrix. > I need to find is A positive-definite, and if not, I need to know what's > the most negative eigenvalue does it has? I know you mean, what is the best SciPy way, but you may be interested in the PETSc routines. There is some minimum eigenvalue stuff in there. 
(Some of these may also be useful to openopt; note that the license is NOT GPL: http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/copyright.html) Cheers, Alan Isaac From aisaac at american.edu Tue Aug 28 10:08:09 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 28 Aug 2007 10:08:09 -0400 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: References: <46D425B6.8090800@ukr.net> Message-ID: I think Nils is referring to http://mdp-toolkit.sourceforge.net/symeig.html hth, Alan Isaac From openopt at ukr.net Tue Aug 28 10:07:10 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 17:07:10 +0300 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: References: <46D425B6.8090800@ukr.net> Message-ID: <46D42C0E.2000602@ukr.net> yes, first of all I'm interested in tools that are provided by numpy (I see numpy.linalg has some routines for eigenvalues). Matthieu and I are trying to avoid any additional openopt dependencies except numpy, especially for a single routine out of dozens of similar ones (I mean the Goldfeld step here). I don't think the disadvantages of numpy vs others will be significant in the case involved, so I decided to use eigvalsh from numpy.linalg. D. Alan G Isaac wrote: > I know you mean, what is the best SciPy way, > but you may be interested in the PETSc routines. > There is some minimum eigenvalue stuff in there.
> (Some of these may also be useful to openopt; > note that the license is NOT GPL: > http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/copyright.html) > > Cheers, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From lbolla at gmail.com Tue Aug 28 11:24:15 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 28 Aug 2007 17:24:15 +0200 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: References: <46D425B6.8090800@ukr.net> Message-ID: <80c99e790708280824h1d219430k5a3107dbc5931d4b@mail.gmail.com> you might be interested in ARPACK, too. it can find the first n eigvals with minimum/maximum real/imag/abs value. it can handle also symmetric matrices. its python interface is in scipy.sandbox.arpack. hth, lorenzo. On 8/28/07, Alan G Isaac wrote: > > On Tue, 28 Aug 2007, dmitrey apparently wrote: > > > hi all, > > what's the best way to solve the problem: > > > A - symmetric real matrix. > > I need to find is A positive-definite, and if not, I need to know what's > > the most negative eigenvalue does it has? > > I know you mean, what is the best SciPy way, > but you may be interested in the PETSc routines. > There is some minimum eigenvalue stuff in there. > (Some of these may also be useful to openopt; > note that the license is NOT GPL: > http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/copyright.html) > > Cheers, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hudzik at gmail.com Tue Aug 28 11:26:52 2007 From: hudzik at gmail.com (Chris Hudzik) Date: Tue, 28 Aug 2007 10:26:52 -0500 Subject: [SciPy-user] Linear Constrained Quadratic Optimization Problem Message-ID: I am trying to solve a quadratic optimization problem: minimize (x - y*p)^T * V * (x - y*p) over x,y s.t. x^T * t = b where x, t, and p are vectors; V is a symmetric matrix; and y and b are scalars. I am new to scipy. Can anybody point me to a scipy module to solve this? Thanks, Chris From ryanlists at gmail.com Tue Aug 28 11:59:08 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 28 Aug 2007 10:59:08 -0500 Subject: [SciPy-user] data fitting question Message-ID: I need to fit some data using the form: Ydata = a[0]*vect1+a[1]*vect2+a[2]*vect3+..... where Ydata might be the experimental data and vect1, vect2, and vect3 are known and constant (i.e. they aren't changing during the optimization). a is a vector of the unknown coefficients I am trying to find. The length of a and the number of constant vectors vectN might change. Is this a form that is already implemented using some existing optimization or least squares function, or do I just need to do something with fmin (for example) or optimize.leastsq? I guess it could be reformulated as Y=Ax where A would be a matrix with vect1, vect2, vect3, ... as its columns and x would be a column vector of the unknown a's. I think this is a very standard form and it is a linear set of equations. So, I think there is some simple way to do this, but it is eluding me at the moment. I don't think I can just use linalg.solve because A wouldn't be square. The matrix A might be 100x5 for example and Y would be 100x1 and x would be 5x1, so that I am trying to find a least squares solution of the 5 unknowns for the 100 equations. 
I think the more complicated way would be to do something like this: fitfunc = lambda p, x: p[0]*cos(2*pi/p[1]*x+p[2]) + p[3]*x # Target function errfunc = lambda p, x, y: fitfunc(p,x) -y # Distance to the target function p0 = [-15., 0.8, 0., -1.] # Initial guess for the parameters p1,success = optimize.leastsq(errfunc, p0[:], args = (Tx, tX)) from http://scipy.org/Cookbook/FittingData. This could work and I would just redefine fitfunc if the number of terms in my fit increased. But I don't think this is necessary because my system is linear in the coefficients. What is the easiest/best/cleanest way? Thanks, Ryan From ryanlists at gmail.com Tue Aug 28 12:01:50 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 28 Aug 2007 11:01:50 -0500 Subject: [SciPy-user] data fitting question In-Reply-To: References: Message-ID: Sorry to waste your time, I think I found my solution about 15 seconds after I sent my last email. I am 95% certain that the solution to my problem is linalg.lstsq: def lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0): """ lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0) -> x,resids,rank,s Return least-squares solution of a * x = b. Inputs: a -- An M x N matrix. b -- An M x nrhs matrix or M vector. ...""" On 8/28/07, Ryan Krauss wrote: > I need to fit some data using the form: > Ydata = a[0]*vect1+a[1]*vect2+a[2]*vect3+..... > > where Ydata might be the experimental data and vect1, vect2, and vect3 > are known and constant (i.e. they aren't changing during the > optimization). a is a vector of the unknown coefficients I am trying > to find. The length of a and the number of constant vectors vectN > might change. Is this a form that is already implemented using some > existing optimization or least squares function, or do I just need to > do something with fmin (for example) or optimize.leastsq? > > I guess it could be reformulated as > Y=Ax > where A would be a matrix with vect1, vect2, vect3, ... 
as its columns > and x would be a column vector of the unknown a's. I think this is > a very standard form and it is a linear set of equations. So, I think > there is some simple way to do this, but it is eluding me at the > moment. I don't think I can just use linalg.solve because A wouldn't > be square. The matrix A might be 100x5 for example and Y would be > 100x1 and x would be 5x1, so that I am trying to find a least squares > solution of the 5 unknowns for the 100 equations. > > > I think the more complicated way would be to do something like this: > > fitfunc = lambda p, x: p[0]*cos(2*pi/p[1]*x+p[2]) + p[3]*x # Target function > errfunc = lambda p, x, y: fitfunc(p,x) -y # Distance to the > target function > p0 = [-15., 0.8, 0., -1.] # Initial guess for > the parameters > p1,success = optimize.leastsq(errfunc, p0[:], args = (Tx, tX)) > > from http://scipy.org/Cookbook/FittingData. This could work and I > would just redefine fitfunc if the number of terms in my fit > increased. But I don't think this is necessary because my system is > linear in the coefficients. > > What is the easiest/best/cleanest way? > > Thanks, > > Ryan > From stefan at sun.ac.za Tue Aug 28 12:20:11 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 28 Aug 2007 18:20:11 +0200 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: <46D429AD.2040402@ukr.net> References: <46D425B6.8090800@ukr.net> <46D429AD.2040402@ukr.net> Message-ID: <20070828162011.GG14395@mentat.za.net> On Tue, Aug 28, 2007 at 04:57:01PM +0300, dmitrey wrote: > Yes, I already use > try: > cholesky(A) > except: > ... > but it makes impossible to use debugger, that is very bad. Blind catches are dangerous. How about try: cholesky(A) except LinAlgError: ... or try: cholesky(A) except LinAlgError,e: if not 'positive definite' in e.message: raise e ... 
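For completeness, the two-line answer Ryan found upthread, spelled out on made-up data: scipy.linalg.lstsq solves the non-square system A a = Ydata directly.

```python
import numpy as np
from scipy.linalg import lstsq

# Made-up example data: 100 observations, 5 known basis vectors
# stacked as the columns of A (Ryan's vect1 ... vect5).
rng = np.random.RandomState(0)
A = rng.rand(100, 5)
a_true = np.array([2.0, -1.0, 0.5, 3.0, 1.5])   # coefficients to recover
Ydata = np.dot(A, a_true)                        # noiseless for clarity

# Least-squares solution of the (non-square) system A a = Ydata.
a_fit, resids, rank, s = lstsq(A, Ydata)
```

With noiseless data and a full-rank A, a_fit recovers a_true to machine precision; with noisy data it gives the usual least-squares estimate.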
Cheers Stéfan From peridot.faceted at gmail.com Tue Aug 28 12:23:16 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 28 Aug 2007 12:23:16 -0400 Subject: [SciPy-user] most negative eigenvalue In-Reply-To: <46D429AD.2040402@ukr.net> References: <46D425B6.8090800@ukr.net> <46D429AD.2040402@ukr.net> Message-ID: On 28/08/07, dmitrey wrote: > Yes, I already use > try: > cholesky(A) > except: > ... > but it makes impossible to use debugger, that is very bad. This is a general python question. You should almost never use a bare "except", for the reason you discovered. What you should do is find out what exception cholesky (or cho_factor) throws (it's not in the documentation, though it really should be, but a quick test shows it throws LinAlgError) and catch only that one: try: cholesky(A) except LinAlgError: # something went wrong. You can go a step further and check the extra information returned by the exception, but in this case you want to investigate further anyway, so I recommend simply using eigvalsh() and looking at the eigenvalues. If any of them are too close to zero (say 10**-14 times the largest) the matrix is indefinite, at least numerically. For more on dealing with exceptions, see http://docs.python.org/tut/node10.html and maybe also http://www.diveintopython.org/file_handling/index.html Anne From rmay at ou.edu Tue Aug 28 09:43:14 2007 From: rmay at ou.edu (Ryan May) Date: Tue, 28 Aug 2007 08:43:14 -0500 Subject: [SciPy-user] Problems with ACML Message-ID: <46D42672.8090203@ou.edu> Hi, Does anyone here use the AMD Core Math Libraries (ACML) as their underlying libraries for BLAS/LAPACK/etc. in SciPy? I have problems with (at least) scipy.linalg.eigvals (Fernando discovered this at SciPy on my laptop).
For instance, the following crashes reliably with ACML, but works fine with ATLAS versions of BLAS/LAPACK: >>>from scipy.linalg import eigvals >>>from numpy.random import rand >>>a = rand(100,100) >>>eigvals(a) Anyone else have this problem? Is ACML known to be a bad thing to use with SciPy? Thanks, Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From peridot.faceted at gmail.com Tue Aug 28 12:26:10 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 28 Aug 2007 12:26:10 -0400 Subject: [SciPy-user] kernel density estimation in scipy? In-Reply-To: <46D3EFD9.1050607@unibo.it> References: <46D3EFD9.1050607@unibo.it> Message-ID: On 28/08/07, massimo sandal wrote: > Hi, > > Just a little suggestion. I recently found this package ( > http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/python/Statistics/ ) > that allows, among other things, for easy kernel density estimation of a > data set. > > The software works nicely, but it doesn't seem very well maintained (it > says it has not been checked with NumPy, for example). Does a similar > functionality exist on SciPy? If not, why not considering including it > in scipy? Are there other KDE and related statistical packages for Python? There is scipy.stats.gaussian_kde, which does gaussian kernel density estimation in arbitrary dimensions, including bandwidth estimation. I think it is not extensively used, so it may be kind of rough around the edges, but it does exist. 
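A minimal example of the class Anne mentions, on synthetic one-dimensional data (the bandwidth is chosen automatically, by Scott's rule by default):

```python
import numpy as np
from scipy import stats

# Synthetic 1-D sample; gaussian_kde also accepts (d, N) arrays
# for d-dimensional data.
rng = np.random.RandomState(42)
sample = rng.normal(loc=0.0, scale=1.0, size=1000)

kde = stats.gaussian_kde(sample)        # bandwidth picked automatically
grid = np.linspace(-3.0, 3.0, 61)
density = kde(grid)                     # estimated density on the grid
```

The resulting estimate peaks near the true mean and integrates to one, which kde.integrate_box_1d can confirm.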
Anne From robert.kern at gmail.com Tue Aug 28 14:00:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 13:00:42 -0500 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D3D962.5090006@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> Message-ID: <46D462CA.7080006@gmail.com> fred wrote: > Matthieu Brucher a ?crit : >> Hi, >> >> Considering your window and your points, it doesn't surprise me much. >> What do you want exactly as a result ? > Something more uniform, no ? No. > Why do I get these "structures" ? > > Can you explain me a little bit more ? White noise isn't all that uniform. It does clump, and some kinds of displays may show that off more than others. The orthogonal striations are a side effect of your square definition of "neighborhood" that you used to plot and not anything intrinsic to the data. If you do need something more spatially uniform than what real pseudorandomness gives you, then you should look at low-discrepancy sequences. http://en.wikipedia.org/wiki/Low-discrepancy_sequence -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Aug 28 14:12:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 13:12:33 -0500 Subject: [SciPy-user] Problems with ACML In-Reply-To: <46D42672.8090203@ou.edu> References: <46D42672.8090203@ou.edu> Message-ID: <46D46591.9020703@gmail.com> Ryan May wrote: > Hi, > > Does anyone here use the AMD Core Math Libraries (ACML) as their > underlying libraries for BLAS/LAPACK/etc. in SciPy? I have problems > with (at least) scipy.linalg.eigvals (Fernando discovered this at SciPy > on my laptop). 
For instance, the following crashes reliably with ACML, > but works fine with ATLAS versions of BLAS/LAPACK: > >>> >from scipy.linalg import eigvals >>> >from numpy.random import rand >>>> a = rand(100,100) >>>> eigvals(a) > > Anyone else have this problem? Is ACML known to be a bad thing to use > with SciPy? No, but it's not an option that's as thoroughly tested as ATLAS, either. If you could supply a gdb backtrace from the crash, that would help locate the problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Tue Aug 28 14:29:22 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 21:29:22 +0300 Subject: [SciPy-user] Linear Constrained Quadratic Optimization Problem In-Reply-To: References: Message-ID: <46D46982.5040608@ukr.net> hi Chris, AFAIK scipy doesn't have an appropriate solver. The only one that handles anything other than lb-ub constraints is scipy.optimize.fmin_cobyla, but you have to supply each constraint separately, i.e. Ax <= b or Aeq x = beq can't be passed as matrices. Also, cobyla can't handle user-supplied gradients. You can try scikits.openopt, solver lincher. It's far from the best for now, but, at least, it's better than nothing. svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt sudo python setup.py install from scikits.openopt import NLP help(NLP) You should use the lincher solver; alternatively, you can assign penalties to your linear constraints and use the unconstrained ralg solver, which is capable of handling rather huge penalties. I hope to enhance the lincher solver from time to time. Also, I hope ALGENCAN will be connected soon, and/or NLPy, when it finishes migrating from Numeric to numpy. Also, I hope that some months or weeks from now a ralg-based QP/QPQC solver will be ready; it will be able to solve your problem even with quadratic constraints. Regards, D.
Chris Hudzik wrote: > I am trying to solve a quadratic optimization problem: > > minimize (x - y*p)^T * V * (x - y*p) over x,y > s.t. > x^T * t = b > > where x, t, and p are vectors; V is a symmetric matrix; and y and b are scalars. > > I am new to scipy. Can anybody point me to a scipy module to solve this? > > Thanks, > Chris > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From gnurser at googlemail.com Tue Aug 28 16:14:19 2007 From: gnurser at googlemail.com (George Nurser) Date: Tue, 28 Aug 2007 21:14:19 +0100 Subject: [SciPy-user] Problems with ACML In-Reply-To: <46D42672.8090203@ou.edu> References: <46D42672.8090203@ou.edu> Message-ID: <1d1e6ea70708281314o49297eccne687f07f638ecd39@mail.gmail.com> On 28/08/07, Ryan May wrote: > Hi, > > Does anyone here use the AMD Core Math Libraries (ACML) as their > underlying libraries for BLAS/LAPACK/etc. in SciPy? I have problems > with (at least) scipy.linalg.eigvals (Fernando discovered this at SciPy > on my laptop). For instance, the following crashes reliably with ACML, > but works fine with ATLAS versions of BLAS/LAPACK: > > >>>from scipy.linalg import eigvals > >>>from numpy.random import rand > >>>a = rand(100,100) > >>>eigvals(a) > > Anyone else have this problem? Is ACML known to be a bad thing to use > with SciPy? Works for me, no problems. I use the gnu 64 bit version of acml; I think it's v 3.6.0 If you wish i can give a description of how I installed it. -George. 
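For what it's worth, Chris's problem also has a direct solution: with a single linear equality constraint, the KKT conditions reduce to one linear system in the stacked variable z = (x, y) and one multiplier. A sketch with made-up data (my own construction, not a scipy or openopt routine):

```python
import numpy as np

# Problem data (made up): minimize (x - y*p)^T V (x - y*p)
# subject to t^T x = b, over the stacked variable z = (x, y).
rng = np.random.RandomState(0)
n = 5
B = rng.rand(n, n)
V = np.dot(B, B.T) + n * np.eye(n)    # symmetric positive definite
p, t, b = rng.rand(n), rng.rand(n), 1.0

# Objective as z^T Q z, with M z = x - y*p.
M = np.hstack([np.eye(n), -p[:, np.newaxis]])   # shape (n, n+1)
Q = np.dot(M.T, np.dot(V, M))                    # PSD; null direction (p, 1)
a = np.concatenate([t, [0.0]])                   # constraint a^T z = b

# KKT system: [[2Q, a], [a^T, 0]] [z, lam] = [0, b].
K = np.zeros((n + 2, n + 2))
K[:n + 1, :n + 1] = 2.0 * Q
K[:n + 1, -1] = a
K[-1, :n + 1] = a
rhs = np.zeros(n + 2)
rhs[-1] = b
sol = np.linalg.solve(K, rhs)
x, y, lam = sol[:n], sol[n], sol[-1]
```

This assumes t^T p != 0, so the objective's null direction (p, 1) is infeasible and the KKT matrix is nonsingular; if V is merely semi-definite or t^T p = 0, the problem can be unbounded below, as Dominique warns below.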
From dominique.orban at gmail.com Tue Aug 28 16:23:05 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Tue, 28 Aug 2007 16:23:05 -0400 Subject: [SciPy-user] Linear Constrained Quadratic Optimization Problem In-Reply-To: References: Message-ID: <8793ae6e0708281323w41f3e3f8m16da6555b3a88258@mail.gmail.com> On 8/28/07, Chris Hudzik wrote: > > I am trying to solve a quadratic optimization problem: > > minimize (x - y*p)^T * V * (x - y*p) over x,y > s.t. > x^T * t = b > > where x, t, and p are vectors; V is a symmetric matrix; and y and b are > scalars. > > I am new to scipy. Can anybody point me to a scipy module to solve this? Hi Chris, I don't think there is anything specifically for quadratic programs in SciPy. In NLPy, you can solve it using the projected conjugate gradient algorithm (ppcg). I will commit an update tonight so you can try it out. Note that if V is not positive semi-definite, your problem may be unbounded below. Cheers, Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: From bernhard.voigt at gmail.com Tue Aug 28 17:03:54 2007 From: bernhard.voigt at gmail.com (Bernhard Voigt) Date: Tue, 28 Aug 2007 23:03:54 +0200 Subject: [SciPy-user] minimization returns values never used in residuum function Message-ID: <21a270aa0708281403p151eba9fr645b37fe39a44b0@mail.gmail.com> Dear list, I've defined a residuum function based on angles which is cyclic. I'm adding pi for values smaller zero and taking the module for values larger pi: def res(self, event, x=None, y=None, z=None, zenith=None, azimuth=None, energy=None, level=10): if zenith < 0: zenith += math.pi zenith = zenith % math.pi print 'zen: %f' % zenith if azimuth < 0: azimuth += math.pi * 2 azimuth = azimuth % (math.pi * 2) print 'az: %f' % azimuth The minimizer (I was using leastsq and fmin_bfgs for now) returns negative values and values beyond the boundaries, though it never calls the residuum function with these values. 
Here's the output when calling the minimization: .... fmin_bfgs example....... zen: 0.634435 az: 1.198491 zen: 0.641435 az: 1.198491 zen: 0.634435 az: 1.205491 zen: 0.634435 az: 1.198491 Result: array([ 57.18310254, 7.48167597]) ....... leastsq example ........... zen: 3.116993 az: 0.359548 zen: 3.116993 az: 0.359548 zen: 3.116993 az: 0.359548 Result: array([-0.02459993, 0.35954834]) Where in the result the numbers are zenith and azimuth, respectively. Can someone explain what's going on? Thanks, Bernhard -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Aug 28 17:47:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 16:47:16 -0500 Subject: [SciPy-user] minimization returns values never used in residuum function In-Reply-To: <21a270aa0708281403p151eba9fr645b37fe39a44b0@mail.gmail.com> References: <21a270aa0708281403p151eba9fr645b37fe39a44b0@mail.gmail.com> Message-ID: <46D497E4.7040609@gmail.com> Bernhard Voigt wrote: > Can someone explain what's going on? We would need more code to understand what you were trying to do. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cscheid at sci.utah.edu Tue Aug 28 18:09:45 2007 From: cscheid at sci.utah.edu (Carlos Scheidegger) Date: Tue, 28 Aug 2007 16:09:45 -0600 Subject: [SciPy-user] minimization returns values never used in residuum function In-Reply-To: <21a270aa0708281403p151eba9fr645b37fe39a44b0@mail.gmail.com> References: <21a270aa0708281403p151eba9fr645b37fe39a44b0@mail.gmail.com> Message-ID: <46D49D29.1030200@sci.utah.edu> I would bet that if you move those prints so that they're before the changes, you're going to see the values beyond the boundaries. fmin_bfgs has no way to know you're optimizing within the periodic boundaries. 
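Carlos's point, in code, on a made-up smooth periodic cost (the thread's real residuum isn't shown in full): let fmin_bfgs work unconstrained, then map the answer back into [0, pi) x [0, 2*pi).

```python
import numpy as np
from scipy.optimize import fmin_bfgs

# Toy periodic cost standing in for the residuum; its minimum is
# at zenith = 1.0, azimuth = 2.0 (values chosen for illustration).
def cost(params):
    zen, az = params
    return -np.cos(zen - 1.0) - np.cos(az - 2.0)

raw = fmin_bfgs(cost, np.array([0.5, 1.5]), disp=False)

# Wrap only the *result* -- the optimizer itself works on the
# unconstrained parameters and never sees the modulo.
zen = raw[0] % np.pi
az = raw[1] % (2.0 * np.pi)
```

Wrapping inside the residuum, as in the original code, changes only local variables and is invisible to the optimizer; wrapping the returned parameters afterwards is what puts them back in range.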
-carlos > Dear list, > > I've defined a residuum function based on angles which is cyclic. I'm adding > pi for values smaller zero and taking the module for values larger pi: > > > def res(self, event, x=None, y=None, z=None, > zenith=None, azimuth=None, energy=None, level=10): > > if zenith < 0: > zenith += math.pi > zenith = zenith % math.pi > print 'zen: %f' % zenith > > if azimuth < 0: > azimuth += math.pi * 2 > azimuth = azimuth % (math.pi * 2) > print 'az: %f' % azimuth > > > The minimizer (I was using leastsq and fmin_bfgs for now) returns negative > values and values beyond the boundaries, though it never calls the residuum > function with these values. Here's the output when calling the minimization: > .... fmin_bfgs example....... > zen: 0.634435 > az: 1.198491 > zen: 0.641435 > az: 1.198491 > zen: 0.634435 > az: 1.205491 > zen: 0.634435 > az: 1.198491 > Result: array([ 57.18310254, 7.48167597]) > > ....... leastsq example ........... > zen: 3.116993 > az: 0.359548 > zen: 3.116993 > az: 0.359548 > zen: 3.116993 > az: 0.359548 > Result: array([-0.02459993, 0.35954834]) > > Where in the result the numbers are zenith and azimuth, respectively. > > Can someone explain what's going on? 
> > Thanks, Bernhard > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From stefan at sun.ac.za Tue Aug 28 19:00:30 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 29 Aug 2007 01:00:30 +0200 Subject: [SciPy-user] intp failure on win x86_64 Message-ID: <20070828230029.GN14395@mentat.za.net> Hi all, A numpy regression test is failing on Windows x86_64: http://buildbot.scipy.org/Windows%20XP%20x86_64%20MSVC/builds/108/step-shell_2/0 (don't let the "green" deceive you, that buildclient isn't setup properly) ====================================================================== ERROR: Ticket #99 ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\numpy-buildbot\numpy\b11\install\Lib\site-packages\numpy\core\tests\test_regression.py", line 197, in check_intp N.intp('0x' + 'f'*i_width,16) TypeError: function takes at most 1 argument (2 given) ====================================================================== Looks to be working on all other platforms (including 64-bit Linux). I'm off to bed now, so I'll hand this over to the day-shift :) Cheers St?fan From Bernhard.Voigt at desy.de Wed Aug 29 03:30:12 2007 From: Bernhard.Voigt at desy.de (Bernhard Voigt) Date: Wed, 29 Aug 2007 09:30:12 +0200 Subject: [SciPy-user] minimization returns values never used in residuum function In-Reply-To: <46D49D29.1030200@sci.utah.edu> References: <21a270aa0708281403p151eba9fr645b37fe39a44b0@mail.gmail.com> <46D49D29.1030200@sci.utah.edu> Message-ID: <21a270aa0708290030v5133bae4nd893bff7b182dfdf@mail.gmail.com> On 8/29/07, Carlos Scheidegger wrote: > > I would bet that if you move those prints so that they're before the > changes, > you're going to see the values beyond the boundaries. 
fmin_bfgs has no way > to > know you're optimizing within the periodic boundaries. Ah, yes! Thought if I change the parameter inside the residuum function the minimizer will use the new value in succeeding calls. Thinking about it, that's stupid, how should it notice these changes... Anyway I just need to apply the parameter transformation to the result, this should be fine. Thanks! Bernhard -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.reid at mail.cryst.bbk.ac.uk Wed Aug 29 11:13:14 2007 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Wed, 29 Aug 2007 16:13:14 +0100 Subject: [SciPy-user] Error in scipy.stats.poisson Message-ID: Hi, Should the following code work? import scipy.stats as S r=S.poisson(4) r.pdf(2) I get the following error: C:\apps\Python25\Lib\site-packages\scipy\stats\distributions.py in pdf(self, x) 104 self.dist = dist 105 def pdf(self,x): --> 106 return self.dist.pdf(x,*self.args,**self.kwds) 107 def cdf(self,x): 108 return self.dist.cdf(x,*self.args,**self.kwds) : poisson_gen instance has no attribute 'pdf' S.poisson does have a pmf function not a pdf function. Is this the cause? John. From massimo.sandal at unibo.it Wed Aug 29 12:05:27 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 29 Aug 2007 18:05:27 +0200 Subject: [SciPy-user] kernel density estimation in scipy? In-Reply-To: References: <46D3EFD9.1050607@unibo.it> Message-ID: <46D59947.9080506@unibo.it> Anne Archibald ha scritto: > There is scipy.stats.gaussian_kde, which does gaussian kernel density > estimation in arbitrary dimensions, including bandwidth estimation. I > think it is not extensively used, so it may be kind of rough around > the edges, but it does exist. Thanks! (I failed to find it in my googling, I don't know why, probably because it isn't very much used...) m. 
-- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From robert.kern at gmail.com Wed Aug 29 12:18:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 29 Aug 2007 11:18:42 -0500 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: References: Message-ID: <46D59C62.1090103@gmail.com> John Reid wrote: > Hi, > > Should the following code work? > > import scipy.stats as S > r=S.poisson(4) > r.pdf(2) > > > I get the following error: > C:\apps\Python25\Lib\site-packages\scipy\stats\distributions.py in > pdf(self, x) > 104 self.dist = dist > 105 def pdf(self,x): > --> 106 return self.dist.pdf(x,*self.args,**self.kwds) > 107 def cdf(self,x): > 108 return self.dist.cdf(x,*self.args,**self.kwds) > > : poisson_gen instance has no > attribute 'pdf' > > S.poisson does have a pmf function not a pdf function. Is this the cause? Yes. Discrete distributions have Probability Mass Functions, not Probability Density Functions. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Wed Aug 29 12:27:05 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 29 Aug 2007 18:27:05 +0200 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: References: Message-ID: <20070829162705.GW14395@mentat.za.net> Hi John On Wed, Aug 29, 2007 at 04:13:14PM +0100, John Reid wrote: > Should the following code work? > > import scipy.stats as S > r=S.poisson(4) > r.pdf(2) The poisson distribution is discrete, so maybe you are looking for the probability mass function, r.pmf? 
Cheers Stéfan From j.reid at mail.cryst.bbk.ac.uk Wed Aug 29 14:51:51 2007 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Wed, 29 Aug 2007 19:51:51 +0100 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: <20070829162705.GW14395@mentat.za.net> References: <20070829162705.GW14395@mentat.za.net> Message-ID: I was looking for the probability of the given realisation of the random variable given the frozen parameters. There is no way to achieve this with the current API AFAICS. John. Stefan van der Walt wrote: > Hi John > > On Wed, Aug 29, 2007 at 04:13:14PM +0100, John Reid wrote: >> Should the following code work? >> >> import scipy.stats as S >> r=S.poisson(4) >> r.pdf(2) > > The poisson distribution is discrete, so maybe you are looking for the > probability mass function, r.pmf? > > Cheers > Stéfan From robert.kern at gmail.com Wed Aug 29 14:57:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 29 Aug 2007 13:57:27 -0500 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: References: <20070829162705.GW14395@mentat.za.net> Message-ID: <46D5C197.5090905@gmail.com> John Reid wrote: > I was looking for the probability of the given realisation of the random > variable given the frozen parameters. There is no way to achieve this > with the current API AFAICS. The PMF does give you that. In [1]: from scipy import stats In [2]: rv = stats.poisson(4) In [3]: rv.pmf(2) Out[3]: array(0.14652511110987343) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From grante at visi.com Wed Aug 29 18:00:06 2007 From: grante at visi.com (Grant Edwards) Date: Wed, 29 Aug 2007 22:00:06 +0000 (UTC) Subject: [SciPy-user] Windows installer that includes delaunay module?
Message-ID: I'm trying to port a Python program that uses Delaunay triangulation to the Win32 platform, and the delaunay module I'm currently using doesn't seem to be available for Win32. My Win32 machine is running the latest Enthought Python release which includes SciPy 0.5.0.2033 -- which doesn't include the sandbox/delaunay module. Do the newer scipy installers include the delaunay module? If not, can anybody point me to a Delaunay triangulation module for Enthought Python? I don't have the toolchain required to build Python extensions, so a pure Python module may be the easiest option. I realize it's going to be slow, but it only has to deal with a couple hundred points, and it's OK if it takes a few minutes to run. -- Grant Edwards grante at visi.com Yow! FUN is never having to say you're SUSHI!! From fredmfp at gmail.com Wed Aug 29 19:26:26 2007 From: fredmfp at gmail.com (fred) Date: Thu, 30 Aug 2007 01:26:26 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D462CA.7080006@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> <46D462CA.7080006@gmail.com> Message-ID: <46D600A2.5060304@gmail.com> Robert Kern a écrit : > White noise isn't all that uniform. It does clump, and some kinds of displays > may show that off more than others. The orthogonal striations are a side effect > of your square definition of "neighborhood" that you used to plot and not > anything intrinsic to the data. > > If you do need something more spatially uniform than what real pseudorandomness > gives you, then you should look at low-discrepancy sequences. > > http://en.wikipedia.org/wiki/Low-discrepancy_sequence > Sorry, I don't understand. 1) My algorithm to enumerate the number of neighbours in the neighbourhood does work fine and has been validated. 2) http://fredantispam.free.fr/rect.png shows the result for a rectangular neighbourhood.
That's ok, there are striations, but this is not the problem. 3) http://fredantispam.free.fr/circ.png shows the result for a circular neighbourhood. There are no more striations. The question is: why do I get (for the circular case) around 90 neighbours at il = xl = 450 and around 40 at il = xl = 200, i.e. less than half, if it is completely random? I was expecting something more "uniform": the mean is about 70 and the std about 10. Cheers, -- http://scipy.org/FredericPetit From robert.kern at gmail.com Wed Aug 29 19:48:38 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 29 Aug 2007 18:48:38 -0500 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D600A2.5060304@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> <46D462CA.7080006@gmail.com> <46D600A2.5060304@gmail.com> Message-ID: <46D605D6.8070308@gmail.com> fred wrote: > Robert Kern a écrit : >> White noise isn't all that uniform. It does clump, and some kinds of displays >> may show that off more than others. The orthogonal striations are a side effect >> of your square definition of "neighborhood" that you used to plot and not >> anything intrinsic to the data. >> >> If you do need something more spatially uniform than what real pseudorandomness >> gives you, then you should look at low-discrepancy sequences. >> >> http://en.wikipedia.org/wiki/Low-discrepancy_sequence >> > Sorry, I don't understand. > > 1) My algorithm to enumerate the number of neighbours in the > neighbourhood does work fine > and has been validated. > 2) http://fredantispam.free.fr/rect.png > shows the result for a rectangular neighbourhood. That's ok, > there are striations, but this is not the problem. > 3) http://fredantispam.free.fr/circ.png > shows the result for a circular neighbourhood. There are no more striations.
> > The question is: why do I get (for the circular case) around 90 neighbours > at il = xl = 450 and around 40 at il = xl = 200, i.e. less than half, if > it is > completely random? Show us your code, if you think there is a problem in it. Looking at images is a very poor way to judge randomness; intuition isn't very good. I'm trying to verify your numbers quantitatively, but without knowing how you implemented the circular neighborhood, I can't. If you picked the most dense spot and the least dense, then those values appear reasonable. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredmfp at gmail.com Thu Aug 30 03:24:26 2007 From: fredmfp at gmail.com (fred) Date: Thu, 30 Aug 2007 09:24:26 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D605D6.8070308@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> <46D462CA.7080006@gmail.com> <46D600A2.5060304@gmail.com> <46D605D6.8070308@gmail.com> Message-ID: <46D670AA.6020009@gmail.com> Robert Kern a écrit : > > Show us your code, if you think there is a problem in it. I really think there is no problem in my code; it has already been validated. My 2D data array is a n=501x501 array. If I get n points from it, the neighbourhood is uniform; I think this is a problem for nobody ;-) In fact, I don't get n points, but far fewer, say 15000. If these points were uniformly distributed, I think I could not see these structures: these structures are not an artifact. > Looking at images is a > very poor way to judge randomness; Yes, but one can see structures. The question is: why can I see these structures? Do they have any meaning? I was expecting to see no structure at all, in fact. But maybe I'm wrong.
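Robert's point about clumping can be made quantitative without any plotting: if n points are dropped uniformly at random, the number landing in a disc that covers a fraction p of the domain is binomially distributed, Binomial(n, p), so its standard deviation is sqrt(n*p*(1-p)), roughly sqrt(mean). For a mean of ~70 that predicts fluctuations of ~8, so a spread like 70 ± 10 is exactly what pure randomness produces. A stdlib-only sketch — the 2000 points and 0.1 radius are made-up illustration values, not the 501x501 data discussed above:

```python
import math
import random

def neighbour_count(points, cx, cy, r):
    """Number of points inside the disc of radius r centred on (cx, cy)."""
    return sum(1 for x, y in points if (x - cx) ** 2 + (y - cy) ** 2 <= r * r)

random.seed(42)
n, r = 2000, 0.1            # illustration values: points per trial, disc radius
p = math.pi * r * r         # fraction of the unit square the disc covers

counts = []
for _ in range(300):        # fresh uniform scatter on each trial
    pts = [(random.random(), random.random()) for _ in range(n)]
    counts.append(neighbour_count(pts, 0.5, 0.5, r))

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)

print(mean, var ** 0.5)                   # empirical mean and std of the count
print(n * p, math.sqrt(n * p * (1 - p)))  # binomial prediction: ~62.8 and ~7.8
```

The empirical std comes out close to sqrt(mean), i.e. counts varying by tens around the mean are intrinsic to uniform randomness, not a sign of a bug.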
I understand the trick like this: if I get 90 neighbours in a neighbourhood, the density of points is much higher than in a neighbourhood where I get only 40 neighbours per point. So, for me, it is not uniformly distributed. > intuition isn't very good. I'm trying to > verify your numbers quantitatively, but without knowing how you implemented the > circular neighborhood, I can't. If you picked the most dense spot and the least > dense, then those values appear reasonable. > Quite straightforward ;-) I use something like this: nbx, nby are the neighbourhood dimensions; x0, y0 are the point coordinates; x, y are the neighbourhood point coordinates.

For a rectangular neighbourhood, I use

    for all (x0, y0) points
        for all (x, y) points
            if abs(x-x0) <= nbx and abs(y-y0) <= nby
                nb0 = nb0 + 1

For a circular neighbourhood, I use

    for all (x0, y0) points
        for all (x, y) points
            if ((x-x0)/nbx)**2 + ((y-y0)/nby)**2 <= 1
                nb0 = nb0 + 1

BTW, I did not look at your URL yet. Cheers, -- http://scipy.org/FredericPetit From openopt at ukr.net Thu Aug 30 03:40:01 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 10:40:01 +0300 Subject: [SciPy-user] what is simpliest way to create this boolean array Message-ID: <46D67451.6090305@ukr.net> hi all, what's the easiest way to create bool array that contains m1 True, m2 False, m3 True, m4 False? Or (alternatively) creating same Python list instead. Regards, D. From lbolla at gmail.com Thu Aug 30 03:53:41 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 30 Aug 2007 09:53:41 +0200 Subject: [SciPy-user] bug in ARPACK from scipy.sandbox?
In-Reply-To: <80c99e790708230142t4e3b7108kabd51a39d92d5f3b@mail.gmail.com> References: <80c99e790708230046r35fc6a69v845e755e317f5d1f@mail.gmail.com> <80c99e790708230142t4e3b7108kabd51a39d92d5f3b@mail.gmail.com> Message-ID: <80c99e790708300053qf6fd343y55d2f76830fb6191@mail.gmail.com> hi all, I've attached a patch for the problem here: http://scipy.org/scipy/scipy/ticket/231 maybe, some of the developer will look into it and commit it to cvs, if correct. hth, lorenzo. On 8/23/07, lorenzo bolla wrote: > > sorry for the noise, but I think I've found the bug... > this is what I changed in arpack.py to get the correct results (see the > test files attached). > > should we commit the change to the CVS? > > -------------------------------------------------------------- > > $> diff -c arpack_original.py > /usr/local/lib/python2.5/site-packages/scipy/sandbox/arpack/arpack.py > > *** arpack_original.py Thu Aug 23 10:30:44 2007 > --- /usr/local/lib/python2.5/site-packages/scipy/sandbox/arpack/arpack.py > Thu Aug 23 10:39:59 2007 > *************** > *** 201,216 **** > dr= sb.zeros(k+1,typ) > di=sb.zeros(k+1,typ) > zr=sb.zeros((n,k+1),typ) > ! dr,di,z,info=\ > eigextract(rvec,howmny,sselect,sigmar,sigmai,workev, > bmat,which,k,tol,resid,v,iparam,ipntr, > workd,workl,info) > ! > # make eigenvalues complex > ! d=dr+1.0j*di > # futz with the eigenvectors: > # complex are stored as real,imaginary in consecutive columns > ! z=zr.astype(typ.upper()) > for i in range(k): # fix c.c. pairs > if di[i] > 0 : > z[:,i]=zr[:,i]+1.0j*zr[:,i+1] > --- 201,216 ---- > dr=sb.zeros(k+1,typ) > di=sb.zeros(k+1,typ) > zr=sb.zeros((n,k+1),typ) > ! dr,di,zr,info=\ > eigextract(rvec,howmny,sselect,sigmar,sigmai,workev, > bmat,which,k,tol,resid,v,iparam,ipntr, > workd,workl,info) > ! > # make eigenvalues complex > ! d=(dr+1.0j*di)[:k] > # futz with the eigenvectors: > # complex are stored as real,imaginary in consecutive columns > ! z=zr.astype(typ.upper())[:,:k] > for i in range(k): # fix c.c. 
pairs > if di[i] > 0 : > z[:,i]=zr[:,i]+1.0j*zr[:,i+1] > > -------------------------------------------------------------- > > > lorenzo. > > > On 8/23/07, lorenzo bolla wrote: > > > > I have problems with the ARPACK wrappers in scipy.sandbox. > > take a look at this snippet of code. > > > > > > --------------------------------------------------------------------------------------------------------- > > > > In [295]: A = numpy.array([[1,2,3,4],[0,2,3,4],[0,0,3,4],[0,0,0,4]], > > dtype=float) > > > > In [296]: A > > Out[296]: > > array([[ 1., 2., 3., 4.], > > [ 0., 2., 3., 4.], > > [ 0., 0., 3., 4.], > > [ 0., 0., 0., 4.]]) > > > > In [297]: [w,v] = arpack.eigen(A,2) > > In [298]: w > > Out[298]: array([ 4.+0.j, 3.+0.j, 0.+0.j]) > > > > --> WRONG: I get 3 eigenvalues instead of two! > > > > In [299]: v > > Out[299]: > > array([[ 0.+0.j, 0.+0.j, 0.+0.j], > > [ 0.+0.j, 0.+0.j, 0.+0.j], > > [ 0.+0.j, 0.+0.j, 0.+0.j], > > [ 0.+0.j, 0.+0.j, 0.+0.j]]) > > --> WRONG: all the eigenvectors are null! > > > > In [300]: [w,v] = arpack.eigen(A.astype(numpy.complex),2) > > > > In [301]: w > > Out[301]: array([ 4. -2.41126563e-15j, 3. +1.34425147e-15j]) > > --> CORRECT: casting the matrix to complex type gives the correct result > > and the correct numbers of eigenvalues > > > > In [302]: v > > Out[302]: > > array([[ -1.37221970e-01-0.75187207j, -7.50019180e-01-0.32694452j], > > [ -1.02916477e-01-0.56390405j, -5.00012787e-01-0.21796301j], > > [ -5.14582387e-02-0.28195202j, -1.66670929e-01-0.07265434j ], > > [ -1.28645597e-02-0.07048801j, 2.49800181e-16+0.j ]]) > > --> MAYBE: and the eigenvectors are not null, but... > > > > In [303]: [w,v] = arpack.eigen(A.astype(numpy.complex128),2) > > > > In [304]: w > > Out[304]: array([ 4. +7.28583860e-16j, 3. 
+2.23881966e-16j]) > > > > In [305]: v > > Out[305]: > > array([[ -6.65958925e-01 -3.75020242e-01j, > > 8.08076904e-01 +1.28192062e-01j], > > [ -4.99469194e-01 -2.81265182e-01j, > > 5.38717936e-01 +8.54613743e-02j], > > [ - 2.49734597e-01 -1.40632591e-01j, > > 1.79572645e-01 +2.84871248e-02j], > > [ -6.24336492e-02 -3.51581477e-02j, > > -5.20417043e-16 -2.39391840e-16j]]) > > --> WRONG: casting to a complex128 changes the values of the > > eigenvectors!!! > > > > > > --------------------------------------------------------------------------------------------------------- > > > > in any case, the result for the eigenvectors are different than Matlab > > (while the eigenvalues are ok): > > > > v = > > > > -8.181818181818171e-001 7.642914835078907e-001 > > -5.454545454545460e-001 5.732186126309180e-001 > > -1.818181818181832e-001 2.866093063154587e-001 > > -6.938893903907228e-018 7.165232657886386e-002 > > > > > > w = > > > > 3.000000000000010e+000 0 > > 0 3.999999999999995e+000 > > > > Can someone explain me what's wrong? > > Thanks in advance, > > lorenzo. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amcmorl at gmail.com Thu Aug 30 04:30:16 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Thu, 30 Aug 2007 20:30:16 +1200 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: <46D67451.6090305@ukr.net> References: <46D67451.6090305@ukr.net> Message-ID: On 30/08/2007, dmitrey wrote: > hi all, > what's the easiest way to create bool array that contains m1 True, m2 > False, m3 True, m4 False? > Or (alternatively) creating same Python list instead. How about "n.array([bool(1 - (x % 2)) for x in xrange(length)])"? A. 
-- AJC McMorland, PhD Student Physiology, University of Auckland From stefan at sun.ac.za Thu Aug 30 04:41:53 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 30 Aug 2007 10:41:53 +0200 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: <46D67451.6090305@ukr.net> References: <46D67451.6090305@ukr.net> Message-ID: <20070830084153.GB14395@mentat.za.net> On Thu, Aug 30, 2007 at 10:40:01AM +0300, dmitrey wrote: > what's the easiest way to create bool array that contains m1 True, m2 > False, m3 True, m4 False? Don't know whether there is any absolute easiest way, but this should work: In [6]: x = N.ones(30).astype(bool) In [7]: x[1::2] = False Cheers Stéfan From matthieu.brucher at gmail.com Thu Aug 30 04:46:54 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 30 Aug 2007 10:46:54 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D670AA.6020009@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> <46D462CA.7080006@gmail.com> <46D600A2.5060304@gmail.com> <46D605D6.8070308@gmail.com> <46D670AA.6020009@gmail.com> Message-ID: > > If these points were uniformly distributed, > I think I could not see these structures: these structures are not an > artifact. > If you draw n points from a gaussian distribution, it won't fit a gaussian distribution exactly; it's the same for a uniform distribution. Besides, if you draw n points for your array, it won't be uniform either; you should draw 100*n points and then you should have something almost uniform. If you really want to have no structures, use a weak generator (a linear congruential one) with n as its period. But then, it won't be random. Matthieu -------------- next part -------------- An HTML attachment was scrubbed...
URL: From openopt at ukr.net Thu Aug 30 04:59:26 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 11:59:26 +0300 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: References: <46D67451.6090305@ukr.net> Message-ID: <46D686EE.3060500@ukr.net> No, I meant a = array((True, True, ... True (m1 numbers), False, False,... False(m2 numbers), True, ...(m3 numbers), False, ...(m4 numbers) )) If I get nothing better than what Stéfan proposes, I'll use the way a = array(ones(m1+m2+m3+m4), bool) a[m1:m1+m2]=False a[m1+m2+m3:]=False D Angus McMorland wrote: > On 30/08/2007, dmitrey wrote: > >> hi all, >> what's the easiest way to create bool array that contains m1 True, m2
> >> Or (alternatively) creating same Python list instead. > >> > > > > How about "n.array([bool(1 - (x % 2)) for x in xrange(length)])"? > > > > A. > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- AJC McMorland, PhD Student Physiology, University of Auckland From sgarcia at olfac.univ-lyon1.fr Thu Aug 30 05:42:27 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 30 Aug 2007 11:42:27 +0200 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: <46D67451.6090305@ukr.net> References: <46D67451.6090305@ukr.net> Message-ID: <46D69103.3030602@olfac.univ-lyon1.fr> And this ? array( [True]*m1 + [False]*m2 + [True]*m3 + [False]*m4) sam dmitrey a ?crit : > hi all, > what's the easiest way to create bool array that contains m1 True, m2 > False, m3 True, m4 False? > Or (alternatively) creating same Python list instead. > Regards, D. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logistique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From stefan at sun.ac.za Thu Aug 30 07:14:37 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 30 Aug 2007 13:14:37 +0200 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: References: <46D67451.6090305@ukr.net> <46D686EE.3060500@ukr.net> Message-ID: <20070830111437.GC14395@mentat.za.net> On Thu, Aug 30, 2007 at 09:33:29PM +1200, Angus McMorland wrote: > On 30/08/2007, dmitrey wrote: > > No, I meant a = array((True, True, ... 
True (m1 numbers), False, > > False,... False(m2 numbers), True, ...(m3 numbers), False, ...(m4 > > numbers) )) > > Of course, that makes more sense. A cool generic list comprehension solution is: > > def make_bool(*args): > a = [] > [[a.append(k) for k in [not bool((i % 2)) for y in xrange(x)]] for > i, x in enumerate(args)] > return a > > Still slightly faster than the numpy way (by ~25% I think). Or with only one for-loop: a = [] for i,x in enumerate(m): a.extend([i%2]*x) return ~N.array(a,bool) where m contains [m1,m2,m3]. Cheers St?fan From stefan at sun.ac.za Thu Aug 30 07:21:41 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 30 Aug 2007 13:21:41 +0200 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: <20070830111437.GC14395@mentat.za.net> References: <46D67451.6090305@ukr.net> <46D686EE.3060500@ukr.net> <20070830111437.GC14395@mentat.za.net> Message-ID: <20070830112141.GD14395@mentat.za.net> On Thu, Aug 30, 2007 at 01:14:37PM +0200, Stefan van der Walt wrote: > On Thu, Aug 30, 2007 at 09:33:29PM +1200, Angus McMorland wrote: > > On 30/08/2007, dmitrey wrote: > > > No, I meant a = array((True, True, ... True (m1 numbers), False, > > > False,... False(m2 numbers), True, ...(m3 numbers), False, ...(m4 > > > numbers) )) > > > > Of course, that makes more sense. A cool generic list comprehension solution is: > > > > def make_bool(*args): > > a = [] > > [[a.append(k) for k in [not bool((i % 2)) for y in xrange(x)]] for > > i, x in enumerate(args)] > > return a > > > > Still slightly faster than the numpy way (by ~25% I think). > > Or with only one for-loop: > > a = [] > for i,x in enumerate(m): > a.extend([i%2]*x) > return ~N.array(a,bool) > > where m contains [m1,m2,m3]. Or without any for-loops! 
x = N.ones(len(m),bool) x[1::2] = False x.repeat(m) Cheers Stéfan From openopt at ukr.net Thu Aug 30 07:44:16 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 14:44:16 +0300 Subject: [SciPy-user] what is simpliest way to create this boolean array In-Reply-To: <46D69103.3030602@olfac.univ-lyon1.fr> References: <46D67451.6090305@ukr.net> <46D69103.3030602@olfac.univ-lyon1.fr> Message-ID: <46D6AD90.3020100@ukr.net> I don't need much speed here, so the way array( [True]*m1 + [False]*m2 + [True]*m3 + [False]*m4) seems the most appropriate for me, and the code is very obvious for those who will look through it. Thanks all. D. Samuel GARCIA wrote: > And this? > array( [True]*m1 + [False]*m2 + [True]*m3 + [False]*m4) > sam > > > dmitrey a écrit : > >> hi all, >> what's the easiest way to create bool array that contains m1 True, m2 >> False, m3 True, m4 False? >> Or (alternatively) creating same Python list instead. >> Regards, D. >> >> > > From j.reid at mail.cryst.bbk.ac.uk Thu Aug 30 08:37:14 2007 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Thu, 30 Aug 2007 13:37:14 +0100 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: <46D5C197.5090905@gmail.com> References: <20070829162705.GW14395@mentat.za.net> <46D5C197.5090905@gmail.com> Message-ID: That's strange, I get the following error when I call rv.pmf(2) --------------------------------------------------------------------------- Traceback (most recent call last) C:\Dev\MyProjects\ in () : 'rv_frozen' object has no attribute 'pmf' I have scipy '0.5.2.1' Any clues? John. Robert Kern wrote: > John Reid wrote: >> I was looking for the probability of the given realisation of the random >> variable given the frozen parameters. There is no way to achieve this >> with the current API AFAICS. > > The PMF does give you that.
> > In [1]: from scipy import stats > > In [2]: rv = stats.poisson(4) > > In [3]: rv.pmf(2) > Out[3]: array(0.14652511110987343) > From openopt at ukr.net Thu Aug 30 08:58:22 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 15:58:22 +0300 Subject: [SciPy-user] howto extract positions and values of non-zeros from array (flat)? Message-ID: <46D6BEEE.7090100@ukr.net> I have a one-dimensional array with zeros and non-zeros, like a = array((1,0,2,3,4,0,5,0)) I need to obtain the positions of the non-zeros and the corresponding values ind = array((0, 2, 3, 4, 6)) val = array((1,2,3,4,5)) what's the simplest way, that uses only numpy? in MATLAB it would look like [ind, val] = find(a) Regards, D. From sgarcia at olfac.univ-lyon1.fr Thu Aug 30 09:00:07 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 30 Aug 2007 15:00:07 +0200 Subject: [SciPy-user] howto extract positions and values of non-zeros from array (flat)? In-Reply-To: <46D6BEEE.7090100@ukr.net> References: <46D6BEEE.7090100@ukr.net> Message-ID: <46D6BF57.1080504@olfac.univ-lyon1.fr> The function is where: ind, = where(a) for a 1d array; then val = a[ind] dmitrey a écrit : > I have one-dimensional array with zeros and non-zeros, like > a = array((1,0,2,3,4,0,5,0)) > > I need to obtain positions of non-zeros and corresponding values > ind = array((0, 2, 3,4, 6)) > val = array((1,2,3,4,5)) > > what's the simplest way, that uses only numpy? > > in MATLAB it would look like [ind, val] = find(a) > > Regards, D. > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition.
CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logistique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE Tél : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From unpingco at osc.edu Thu Aug 30 09:57:27 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Thu, 30 Aug 2007 09:57:27 -0400 Subject: [SciPy-user] SciPy ATLAS vs. Matlab 7.2 linear algebra benchmarks Message-ID: <46D66A53.AA84.0083.0@osc.edu> For this experiment, I create a square matrix with random entries of size 500, 1000, 1500, 2000 and then apply a variety of factorizations to those matrices. Naturally, each of these experiments is run on exactly the same workstation. The times are wall times in seconds. The bottom line is that Matlab is still substantially faster than either of the ATLAS library versions. However, the newer developer version (3.7.37) of the ATLAS library is about one third faster than the previous version (3.6). Note that Enthought SciPy distribution includes ATLAS 3.6. There is a graph summarizing the results at the following link: http://www.osc.edu/~unpingco/Benchmarks30Aug2007.jpg Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lxander.m at gmail.com Thu Aug 30 10:00:54 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Thu, 30 Aug 2007 10:00:54 -0400 Subject: [SciPy-user] howto extract positions and values of non-zeros from array (flat)?
In-Reply-To: <46D6BEEE.7090100@ukr.net> References: <46D6BEEE.7090100@ukr.net> Message-ID: <525f23e80708300700p4b3c0ab6o69888511ff430160@mail.gmail.com> On 8/30/07, dmitrey wrote: > I have one-dimensional array with zeros and non-zeros, like > a = array((1,0,2,3,4,0,5,0)) > > I need to obtain positions of non-zeros and corresponding values > ind = array((0, 2, 3,4, 6)) > val = array((1,2,3,4,5)) > > what's the simplest way, that uses only numpy? > > in MATLAB it would look like [ind, val] = find(a) There's also nonzero: From ryanlists at gmail.com Thu Aug 30 10:02:21 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 30 Aug 2007 09:02:21 -0500 Subject: [SciPy-user] SciPy ATLAS vs. Matlab 7.2 linear algebra benchmarks In-Reply-To: <46D66A53.AA84.0083.0@osc.edu> References: <46D66A53.AA84.0083.0@osc.edu> Message-ID: Your graph doesn't show up when I click on the link. On 8/30/07, Jose Unpingco wrote: > > > > > For this experiment, I create a square matrix with random entries of size > 500, 1000, 1500, 2000 and then apply a variety of factorizations to those > matrices. Naturally, each of these experiments is run on exactly the same > workstation. The times are wall times in seconds. > > The bottom line is that Matlab is still substantially faster than either of > the ATLAS library versions. However, the newer developer version (3.7.37) of > the ATLAS library is about one third faster than the previous version (3.6). > Note that Enthought SciPy distribution includes ATLAS 3.6. > > There is a graph summarizing the results at the following link: > > http://www.osc.edu/~unpingco/Benchmarks30Aug2007.jpg > > > > > Please contact me if you have questions or need more information. > > > Thanks! > > Jose Unpingco, Ph.D. 
> (619)553-2922 From Joris.DeRidder at ster.kuleuven.be Thu Aug 30 10:09:38 2007 From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder) Date: Thu, 30 Aug 2007 16:09:38 +0200 Subject: [SciPy-user] SciPy ATLAS vs. Matlab 7.2 linear algebra benchmarks In-Reply-To: References: <46D66A53.AA84.0083.0@osc.edu> Message-ID: <4EF80FA3-78F7-41D6-8812-2EE2CCEB4507@ster.kuleuven.be> On 30 Aug 2007, at 16:02, Ryan Krauss wrote: > Your graph doesn't show up when I click on the link. It does for me... Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From wkerzendorf at googlemail.com Thu Aug 30 10:43:49 2007 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Fri, 31 Aug 2007 00:43:49 +1000 Subject: [SciPy-user] Fit a function with errors Message-ID: <46D6D7A5.3020103@gmail.com> I am trying to fit a function and get errors for the fit parameters a and b: a simple example would be: y=a*x+b. I don't want to get only the rms but I want to get errors for the parameters a and b. Neither the linregress nor the lstsq method does that. I also want to fit x's and y's that have errors and then get the appropriate error for a and b. To complicate matters ;-): I want to fit a function u = a*v + b*w + c*x + d*y + e*z and want to fit the parameters a,b,c,d,e and know their errors, for values u,v,w,x,y,z both when those values have errors and when they do not. I know this is a lot, but thanks in advance for help. Wolfgang From bart.vandereycken at cs.kuleuven.be Thu Aug 30 10:54:53 2007 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Thu, 30 Aug 2007 16:54:53 +0200 Subject: [SciPy-user] SciPy ATLAS vs.
Matlab 7.2 linear algebra benchmarks In-Reply-To: <46D66A53.AA84.0083.0@osc.edu> References: <46D66A53.AA84.0083.0@osc.edu> Message-ID: Jose Unpingco wrote: > For this experiment, I create a square matrix with random entries of > size 500, 1000, 1500, 2000 and then apply a variety of factorizations to > those matrices. Naturally, each of these experiments is run on exactly > the same workstation. The times are wall times in seconds. > > The bottom line is that Matlab is still substantially faster than either > of the ATLAS library versions. However, the newer developer version > (3.7.37) of the ATLAS library is about one third faster than the > previous version (3.6). Note that Enthought SciPy distribution includes > ATLAS 3.6. I think you compared the wrong timings. If I compare the svd of scipy (atlas 3.6) and matlab 7.2 on my machine I get:

n        500   1000   1500
scipy    0.4    3.9     13
matlab   0.4    3.9     13

There is no difference between scipy and matlab. Matlab script:

tic; s = svd(A); toc

Scipy script:

import numpy as NY
import scipy.linalg as LA
import time
t = time.time()
T = LA.svd(A,compute_uv=0)
t = time.time() - t
print t

As you can see the compute_uv=0 is important. The matlab command "s = svd(A)" only computes the singular values. If you want the full output with singular vectors, I get this:

n        500   1000   1500
scipy    1.3     10     31
matlab   3.0     32    107

Now scipy is significantly faster! This is probably because matlab's output for S is a full matrix and the V is not transposed. Scipy just gives you the raw output of lapack svd routines, which is good enough. Matlab script:

tic; [U,S,V] = svd(A); toc

Scipy script:

import numpy as NY
import scipy.linalg as LA
import time
t = time.time()
T = LA.svd(A)
t = time.time() - t
print t

I suspect the same has happened to the lu and qr routines. It would help if you include the benchmark code.
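A side note on methodology: single wall-clock runs like the tic/toc and time.time() measurements above are noisy, and the usual remedy — what the stdlib timeit module's documentation suggests — is to repeat the measurement and keep the minimum, which is the run least polluted by other processes. A small sketch of such a harness; the sorting workload here is only a stand-in for the svd/lu/qr calls being compared, and time.perf_counter is the modern replacement for the time.time() used in the 2007-era scripts:

```python
import random
import time

def bench(fn, repeats=5):
    """Run fn() several times and return the best (minimum) wall-clock time.

    Taking the minimum over repeats discards runs that were slowed down
    by unrelated system activity, giving a tighter lower bound.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        elapsed = time.perf_counter() - start
        best = min(best, elapsed)
    return best

# Stand-in workload; in the thread above this would be the svd/lu/qr call.
data = [random.random() for _ in range(100000)]
t = bench(lambda: sorted(data))
print("best of 5: %.4f s" % t)
```

Reporting the same statistic for both implementations under comparison (and making sure both compute the same thing, e.g. singular values only vs. full SVD) removes most of the ambiguity in cross-package benchmarks like the one above.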
BTW I didn't know atlas 3.7 was that much faster :) -- bart From stefan at sun.ac.za Thu Aug 30 11:08:02 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 30 Aug 2007 17:08:02 +0200 Subject: [SciPy-user] SciPy ATLAS vs. Matlab 7.2 linear algebra benchmarks In-Reply-To: <46D66A53.AA84.0083.0@osc.edu> References: <46D66A53.AA84.0083.0@osc.edu> Message-ID: <20070830150802.GE14395@mentat.za.net> On Thu, Aug 30, 2007 at 09:57:27AM -0400, Jose Unpingco wrote: > For this experiment, I create a square matrix with random entries of size 500, > 1000, 1500, 2000 and then apply a variety of factorizations to those matrices. > Naturally, each of these experiments is run on exactly the same workstation. > The times are wall times in seconds. > > The bottom line is that Matlab is still substantially faster than either of the > ATLAS library versions. However, the newer developer version (3.7.37) of the > ATLAS library is about one third faster than the previous version (3.6). Note > that Enthought SciPy distribution includes ATLAS 3.6. > > There is a graph summarizing the results at the following link: > > http://www.osc.edu/~unpingco/Benchmarks30Aug2007.jpg Would you mind forwarding your code to the list so that we can verify those results? Regards Stéfan From lxander.m at gmail.com Thu Aug 30 11:20:24 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Thu, 30 Aug 2007 11:20:24 -0400 Subject: [SciPy-user] Fit a function with errors In-Reply-To: <46D6D7A5.3020103@gmail.com> References: <46D6D7A5.3020103@gmail.com> Message-ID: <525f23e80708300820m7076fc93s70241d3b197d642c@mail.gmail.com> On 8/30/07, Wolfgang Kerzendorf wrote: > I am trying to fit a function and get errors for the fit parameters a > and b: > a simple example would be: y=a*x+b. I don't want to get only the rms but > I want to get errors for the parameters a and b.
neither linregress nor > lstsq method do that, I also want to fit x's and y's that have errors > and then get the appropriate error for a and b. > To complicate matters;-) : > I want to fit a function > : u=a*v+b*w+c*x+d*y +e*z and want to fit the parameters a,b,c,d,e and > know their errors to values u,v,w,x,y,z in the case of them having > errors and them having not errors. > I know this is a lot, but thanks in advance for help. Have you looked at the scipy cookbooks example OLS? From openopt at ukr.net Thu Aug 30 11:34:12 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 18:34:12 +0300 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') Message-ID: <46D6E374.5080406@ukr.net> hi all, during debug I obtain >>> x array([0.5, 1.5, 2.5, 3.5], 'd') What does this 'd' mean? How can I construct a = array(something) to obtain array of same type? Thank you in advance, D. From matthieu.brucher at gmail.com Thu Aug 30 11:46:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 30 Aug 2007 17:46:09 +0200 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: <46D6E374.5080406@ukr.net> References: <46D6E374.5080406@ukr.net> Message-ID: 2007/8/30, dmitrey : > > hi all, > during debug I obtain > >>> x > array([0.5, 1.5, 2.5, 3.5], 'd') > > What does this 'd' mean? It's the type of the array (I think), and in this case, it indicates a double array IIRC. How can I construct a = array(something) to obtain array of same type? numpy.array([0.5, 1.5, 2.5, 3.5], dtype = 'd') Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fredmfp at gmail.com Thu Aug 30 11:47:58 2007 From: fredmfp at gmail.com (fred) Date: Thu, 30 Aug 2007 17:47:58 +0200 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: <46D6E374.5080406@ukr.net> References: <46D6E374.5080406@ukr.net> Message-ID: <46D6E6AE.6070401@gmail.com> dmitrey a écrit : > hi all, > during debug I obtain > >>> x > array([0.5, 1.5, 2.5, 3.5], 'd') > > What does this 'd' mean? > It means "double", aka float64. 'f' for float32, etc. Please try array? under ipython for more information. -- http://scipy.org/FredericPetit From openopt at ukr.net Thu Aug 30 11:51:04 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 18:51:04 +0300 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: References: <46D6E374.5080406@ukr.net> Message-ID: <46D6E768.4040107@ukr.net> >>> array([0.5, 1.5, 2.5, 3.5], dtype = 'd') array([ 0.5, 1.5, 2.5, 3.5]) so no 'd' is shown But I write this for another reason than showing 'd' of course the problem is that the array doesn't have 'ndim' attribute, and that makes error in my program D Matthieu Brucher wrote: > > > 2007/8/30, dmitrey >: > > hi all, > during debug I obtain > >>> x > array([0.5, 1.5, 2.5, 3.5], 'd') > > What does this 'd' mean?
> > > numpy.array([0.5, 1.5, 2.5, 3.5], dtype = 'd') > > Matthieu > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From elcorto at gmx.net Thu Aug 30 12:03:35 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 30 Aug 2007 18:03:35 +0200 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: <46D6E374.5080406@ukr.net> References: <46D6E374.5080406@ukr.net> Message-ID: <46D6EA57.7000809@gmx.net> dmitrey wrote: > hi all, > during debug I obtain > >>> x > array([0.5, 1.5, 2.5, 3.5], 'd') > What does this 'd' mean? http://www.scipy.org/Tentative_NumPy_Tutorial#head-6a1bc005bd80e1b19f812e1e64e0d25d50f99fe2 http://www.scipy.org/Numpy_Example_List#typeDict http://www.hjcb.nl/python/Arrays.html#.dtype -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From openopt at ukr.net Thu Aug 30 12:24:46 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 30 Aug 2007 19:24:46 +0300 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: <46D6EA57.7000809@gmx.net> References: <46D6E374.5080406@ukr.net> <46D6EA57.7000809@gmx.net> Message-ID: <46D6EF4E.5090403@ukr.net> Unfortunately noone of the links give answer to my questions: 1. What should I type in command line to obtain Python print output array([0.5, 1.5, 2.5, 3.5], 'd') if I just type print array([0.5, 1.5, 2.5, 3.5], double) I get >>> print array([0.5, 1.5, 2.5, 3.5], double) [ 0.5 1.5 2.5 3.5] >>> print x array([0.5, 1.5, 2.5, 3.5], 'd') 2. 
Why the x (that "print x" yields array([0.5, 1.5, 2.5, 3.5], 'd')) has some attributes as ordinary array (dir(x) = ['__copy__', '__deepcopy__', 'astype', 'byteswapped', 'copy', 'iscontiguous', 'itemsize', 'resize', 'savespace', 'spacesaver', 'tolist', 'toscalar', 'tostring', 'typecode']), x.shape works((4,)),but x.size yields error? dir(ordinary array) yields much more fields: >>> dir(a) ['T', '__abs__', '__add__', '__and__', '__array__', '__array_finalize__', '__array_interface__', '__array_priority__', '__array_struct__', '__array_wrap__', '__class__', '__contains__', '__copy__', '__deepcopy__', '__delattr__', '__delitem__', '__delslice__', '__div__', '__divmod__', '__doc__', '__eq__', '__float__', '__floordiv__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__hex__', '__iadd__', '__iand__', '__idiv__', '__ifloordiv__', '__ilshift__', '__imod__', '__imul__', '__index__', '__init__', '__int__', '__invert__', '__ior__', '__ipow__', '__irshift__', '__isub__', '__iter__', '__itruediv__', '__ixor__', '__le__', '__len__', '__long__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__setitem__', '__setslice__', '__setstate__', '__str__', '__sub__', '__truediv__', '__xor__', 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', 'itemset', 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', 'nonzero', 'prod', 'ptp', 'put', 'ravel', 'real', 'repeat', 'reshape', 
'resize', 'round', 'searchsorted', 'setfield', 'setflags', 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', 'view'] It seems it's important - I got this x from fortran code, that uses numpy (ALGENCAN). >>> type(x) >>> x.shape (4,) D. Steve Schmerler wrote: > dmitrey wrote: > >> hi all, >> during debug I obtain >> >>> x >> array([0.5, 1.5, 2.5, 3.5], 'd') >> What does this 'd' mean? >> > > http://www.scipy.org/Tentative_NumPy_Tutorial#head-6a1bc005bd80e1b19f812e1e64e0d25d50f99fe2 > http://www.scipy.org/Numpy_Example_List#typeDict > http://www.hjcb.nl/python/Arrays.html#.dtype > > From robert.kern at gmail.com Thu Aug 30 12:28:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 30 Aug 2007 11:28:25 -0500 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D670AA.6020009@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> <46D462CA.7080006@gmail.com> <46D600A2.5060304@gmail.com> <46D605D6.8070308@gmail.com> <46D670AA.6020009@gmail.com> Message-ID: <46D6F029.20001@gmail.com> fred wrote: > Robert Kern a écrit : >> Show us your code, if you think there is a problem in it. > I really think there is no problem in my code, > it has been already validated. > > My 2D data array is a n=501x501 array. > If I get n points from it, the neighbourhood is uniform, > I think this is a problem for nobody ;-) > > In fact, I don't get n points, but far less, say 15000. > > If these points were uniformly distributed, > I think I could not see these structures: these structures are not an > artifact. You *will* see the things that you call structures if the sampling is correct. The sample will only be really uniform in the limit as you get near total sampling. As is, you are only drawing a sample about 6% of the total. You *will* get fluctuations.
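Robert's claim about expected fluctuations can be checked numerically. The sketch below uses the thread's grid (501x501) and sample size (15000), but assumes a hypothetical neighbourhood covering 1000 grid points; the count of sampled points landing in such a neighbourhood is hypergeometric:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 501 * 501        # total grid points (from the thread)
nsample = 15000      # sampled points (from the thread), about 6% of N
K = 1000             # grid points covered by one neighbourhood (assumed)

# Sampling without replacement: the number of sampled points that fall
# inside a K-point neighbourhood follows a hypergeometric distribution.
counts = rng.hypergeometric(K, N - K, nsample, size=10000)
print(counts.mean(), counts.std())  # mean ~ nsample*K/N ~ 60, std ~ 7.5
print(counts.min(), counts.max())   # a wide spread, with perfectly uniform sampling
```

Counts ranging from the 40s to the 80s around a mean of about 60 are entirely consistent with uniform random sampling, which is Robert's point.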
>> Looking at images is a
>> very poor way to judge randomness;
> Yes, but one can see structures.
> The question is : why I can see these structures.

There are no structures. Just random fluctuations that are accentuated by your neighborhood scheme and your visual system looking for patterns.

> Do they have any meaning ?

Plotting the points in the neighborhood essentially takes the raw data and convolves it with a kernel. That broadens the effect of each point, so you see more low-frequency "structure" than you otherwise would. Other than that, no meaning.

> I was expecting to see no structure at all, in fact.
> But may be I'm wrong.

Your intuition is wrong. Please accept that fact.

> I understand the trick like this : if I get 90 neighbours
> in a neighbourhood, the density of points is much higher
> than in a neighbourhood where I get only 40 neighbours per points.
> So, for me, it is not uniformly distributed.

In fact, that is *exactly* what you should see if the sampling was random. Fluctuations of that size are expected for the parameters you've given. Only if the sampling were non-random, like the low-discrepancy sequences, would you see a tighter spread.

If you want to double-check the sampling, you can try a slower, but easier-to-verify method of sampling without replacement than the shuffle method:

def sample_noreplace(nurn, nsample):
    # assumes numpy's names are in scope, e.g.: from numpy import array, random
    sampled = []
    while len(sampled) != nsample:
        i = random.randint(nurn)
        if i not in sampled:
            sampled.append(i)
    return array(sampled, dtype=int)

nside = 501
nsample = 15000
assert xy.shape == (nside*nside, 2)
xy_sampled = xy[sample_noreplace(nside*nside, nsample)]

You will see similar results.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Thu Aug 30 12:39:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 30 Aug 2007 11:39:26 -0500 Subject: [SciPy-user] SciPy ATLAS vs. Matlab 7.2 linear algebra benchmarks In-Reply-To: <46D66A53.AA84.0083.0@osc.edu> References: <46D66A53.AA84.0083.0@osc.edu> Message-ID: <46D6F2BE.5040809@gmail.com> Jose Unpingco wrote: > For this experiment, I create a square matrix with random entries of > size 500, 1000, 1500, 2000 and then apply a variety of factorizations to > those matrices. Naturally, each of these experiments is run on exactly > the same workstation. The times are wall times in seconds. > > The bottom line is that Matlab is still substantially faster than either > of the ATLAS library versions. Please be careful with such statements. Matlab is not the thing that's faster; it's the BLAS and LAPACK underlying it. I see that in 7.2, they've started using the Intel MKL or the ACML (depending on the processor) by default. That's the source of the timing difference. Note that scipy can use either of these libraries, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Aug 30 12:42:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 30 Aug 2007 11:42:24 -0500 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: <46D6EF4E.5090403@ukr.net> References: <46D6E374.5080406@ukr.net> <46D6EA57.7000809@gmx.net> <46D6EF4E.5090403@ukr.net> Message-ID: <46D6F370.9060301@gmail.com> dmitrey wrote: > It seems it's important - I got this x from fortran code, that uses > numpy (ALGENCAN). > >>> type(x) > > >>> x.shape > (4,) That looks like a Numeric array, not a numpy one. Although, Numeric didn't print out the 'd', either. 
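If the object really is a legacy Numeric array, one workaround (a sketch, not from the thread; `to_ndarray` is a hypothetical helper) is to convert it with `numpy.asarray`, after which attributes such as `.size` and `.ndim` exist:

```python
import numpy as np

def to_ndarray(x):
    # Hypothetical helper: Numeric arrays expose enough of the sequence/buffer
    # protocol that numpy.asarray can turn them into a true numpy ndarray.
    return np.asarray(x, dtype='d')

a = to_ndarray([0.5, 1.5, 2.5, 3.5])  # a list stands in for the Numeric array here
print(a.ndim, a.size)                 # -> 1 4 (attributes missing on Numeric arrays)
```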
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Aug 30 12:44:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 30 Aug 2007 11:44:31 -0500 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: References: <20070829162705.GW14395@mentat.za.net> <46D5C197.5090905@gmail.com> Message-ID: <46D6F3EF.2080909@gmail.com> John Reid wrote: > That's strange, I get the following error when I call rv.pmf(2) > > --------------------------------------------------------------------------- > Traceback (most recent call last) > > C:\Dev\MyProjects\ in () > > : 'rv_frozen' object has no attribute > 'pmf' > > I have scipy '0.5.2.1' > > Any clues? Ah yes, that might have been something I fixed that didn't get into the release. 0.6 will be out relatively soon, and will have the fix. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From elcorto at gmx.net Thu Aug 30 12:59:58 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 30 Aug 2007 18:59:58 +0200 Subject: [SciPy-user] what does 'd' mean in array([0.0, 1.0, 2.0, 3.0], 'd') In-Reply-To: <46D6EF4E.5090403@ukr.net> References: <46D6E374.5080406@ukr.net> <46D6EA57.7000809@gmx.net> <46D6EF4E.5090403@ukr.net> Message-ID: <46D6F78E.5090905@gmx.net> dmitrey wrote: > Unfortunately noone of the links give answer to my questions: > 1. What should I type in command line to obtain Python print output > array([0.5, 1.5, 2.5, 3.5], 'd') Nothing. Just use .dtype to the the dtype of the array. 
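Steve's suggestion as a runnable sketch (modern numpy shown; dmitrey's shell output came from a legacy Numeric array, which prints differently):

```python
import numpy as np

x = np.array([0.5, 1.5, 2.5, 3.5], dtype='d')
print(x.dtype)       # float64 -- 'd' is the one-character type code for C double
print(x.dtype.char)  # d
```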
> if I just type > print array([0.5, 1.5, 2.5, 3.5], double) > I get > >>> print array([0.5, 1.5, 2.5, 3.5], double) > [ 0.5 1.5 2.5 3.5] Me too, with numpy arrays and the Ipython shell (and 'double' instead of double). > >>> print x > array([0.5, 1.5, 2.5, 3.5], 'd') > Hmmm. Maybe your shell behaves differently with when you print x. > 2. Why the x (that "print x" yields array([0.5, 1.5, 2.5, 3.5], 'd')) > has some attributes as ordinary array (dir(x) = ['__copy__', > '__deepcopy__', 'astype', 'byteswapped', 'copy', 'iscontiguous', > 'itemsize', 'resize', 'savespace', 'spacesaver', 'tolist', 'toscalar', > 'tostring', 'typecode']), x.shape works((4,)),but x.size yields error? > dir(ordinary array) yields much more fields: > >>> dir(a) > ['T', '__abs__', '__add__', '__and__', '__array__', > '__array_finalize__', '__array_interface__', '__array_priority__', > '__array_struct__', '__array_wrap__', '__class__', '__contains__', > '__copy__', '__deepcopy__', '__delattr__', '__delitem__', > '__delslice__', '__div__', '__divmod__', '__doc__', '__eq__', > '__float__', '__floordiv__', '__ge__', '__getattribute__', > '__getitem__', '__getslice__', '__gt__', '__hash__', '__hex__', > '__iadd__', '__iand__', '__idiv__', '__ifloordiv__', '__ilshift__', > '__imod__', '__imul__', '__index__', '__init__', '__int__', > '__invert__', '__ior__', '__ipow__', '__irshift__', '__isub__', > '__iter__', '__itruediv__', '__ixor__', '__le__', '__len__', '__long__', > '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', > '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', > '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', > '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', > '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', > '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__setitem__', > '__setslice__', '__setstate__', '__str__', '__sub__', '__truediv__', > '__xor__', 'all', 'any', 'argmax', 'argmin', 
'argsort', 'astype', > 'base', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', > 'copy', 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', > 'dump', 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', > 'item', 'itemset', 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', > 'newbyteorder', 'nonzero', 'prod', 'ptp', 'put', 'ravel', 'real', > 'repeat', 'reshape', 'resize', 'round', 'searchsorted', 'setfield', > 'setflags', 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', > 'swapaxes', 'take', 'tofile', 'tolist', 'tostring', 'trace', > 'transpose', 'var', 'view'] > > It seems it's important - I got this x from fortran code, that uses > numpy (ALGENCAN). > >>> type(x) > > >>> x.shape > (4,) > I'm not familiar with this code, but from a little playing arround, I suspect that this code gives you back a Numeric array rather than a numpy array: In [16]: import numpy In [17]: import Numeric In [18]: a = numpy.array([1,2,3], 'd') In [19]: a Out[19]: array([ 1., 2., 3.]) In [20]: type(a) Out[20]: In [21]: print dir(a) ['T', '__abs__', '__add__', '__and__', '__array__', '__array_finalize__', '__array_interface__', '__array_priority__', '__array_struct__', '__array_wrap__', '__class__', '__contains__', '__copy__', '__deepcopy__', '__delattr__', '__delitem__', '__delslice__', '__div__', '__divmod__', '__doc__', '__eq__', '__float__', '__floordiv__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__hex__', '__iadd__', '__iand__', '__idiv__', '__ifloordiv__', '__ilshift__', '__imod__', '__imul__', '__init__', '__int__', '__invert__', '__ior__', '__ipow__', '__irshift__', '__isub__', '__iter__', '__itruediv__', '__ixor__', '__le__', '__len__', '__long__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', 
'__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__setitem__', '__setslice__', '__setstate__', '__str__', '__sub__', '__truediv__', '__xor__', 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', 'itemset', 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', 'nonzero', 'prod', 'ptp', 'put', 'ravel', 'real', 'repeat', 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', 'view'] In [22]: a.size Out[22]: 3 In [23]: x = Numeric.array([1,2,3], 'd') In [24]: x Out[24]: array([ 1., 2., 3.]) In [25]: type(x) Out[25]: In [26]: print dir(x) ['__copy__', '__deepcopy__', 'astype', 'byteswapped', 'copy', 'iscontiguous', 'itemsize', 'resize', 'savespace', 'spacesaver', 'tolist', 'toscalar', 'tostring', 'typecode'] In [27]: x.size --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/schmerler/work/pycts/ AttributeError: size But even with Numeric arrays, the 'd' isn't printed for me, neither with IPython, nor with the normal Python shell. -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From william.ratcliff at gmail.com Thu Aug 30 14:55:38 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 30 Aug 2007 14:55:38 -0400 Subject: [SciPy-user] SciPy ATLAS vs. 
Matlab 7.2 linear algebra benchmarks In-Reply-To: <46D6F2BE.5040809@gmail.com> References: <46D66A53.AA84.0083.0@osc.edu> <46D6F2BE.5040809@gmail.com> Message-ID: <827183970708301155p7361762dm5221aad76c4cc05f@mail.gmail.com> Has the cookbook been updated for using the MKL libraries with scipy and mingw under windows? I installed it for numpy, but in scipy ran into some problems awhile back because of problems using mingw with mkl. Thanks, William On 8/30/07, Robert Kern wrote: > > Jose Unpingco wrote: > > For this experiment, I create a square matrix with random entries of > > size 500, 1000, 1500, 2000 and then apply a variety of factorizations to > > those matrices. Naturally, each of these experiments is run on exactly > > the same workstation. The times are wall times in seconds. > > > > The bottom line is that Matlab is still substantially faster than either > > of the ATLAS library versions. > > Please be careful with such statements. Matlab is not the thing that's > faster; > it's the BLAS and LAPACK underlying it. I see that in 7.2, they've started > using > the Intel MKL or the ACML (depending on the processor) by default. That's > the > source of the timing difference. Note that scipy can use either of these > libraries, too. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaonary at free.fr Thu Aug 30 16:10:38 2007 From: jaonary at free.fr (Jaonary Rabarisoa) Date: Thu, 30 Aug 2007 22:10:38 +0200 Subject: [SciPy-user] Need help for installing numpy & scipy on linux with MKL Message-ID: <7C509802-D985-4F39-A2C0-42584B30DB11@free.fr> Hi all, I'm trying to install scipy on a linux distribution with intel MKL. When I run the command

python setup.py config

I get this output

F2PY Version 2_4024
blas_opt_info:
blas_mkl_info:
libraries mkl,vml not found in /usr/local/lib
libraries mkl,vml not found in /usr/lib
NOT AVAILABLE

It seems that it couldn't find the MKL library. I've created a site.cfg file with

[mkl]
library_dirs = /certis/nosave2/rabariso/local/intel/mkl/64
mkl_libs = mkl,vml
include_dirs = /certis/nosave2/rabariso/local/intel/mkl/include

as said in the installation instructions on the scipy web site. I put this file in the distutils directory (or in my numpy root directory) and even with that it didn't find MKL at all. Your help will be appreciated, Best regards, Jaonary From j.reid at mail.cryst.bbk.ac.uk Thu Aug 30 17:37:52 2007 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Thu, 30 Aug 2007 22:37:52 +0100 Subject: [SciPy-user] Error in scipy.stats.poisson In-Reply-To: <46D6F3EF.2080909@gmail.com> References: <20070829162705.GW14395@mentat.za.net> <46D5C197.5090905@gmail.com> <46D6F3EF.2080909@gmail.com> Message-ID: thx Robert Kern wrote: > John Reid wrote: >> That's strange, I get the following error when I call rv.pmf(2) >> >> --------------------------------------------------------------------------- >> Traceback (most recent call last) >> >> C:\Dev\MyProjects\ in () >> >> : 'rv_frozen' object has no attribute >> 'pmf' >> >> I have scipy '0.5.2.1' >> >> Any clues? > > Ah yes, that might have been something I fixed that didn't get into the release. > 0.6 will be out relatively soon, and will have the fix.
From fredmfp at gmail.com Thu Aug 30 18:16:55 2007 From: fredmfp at gmail.com (fred) Date: Fri, 31 Aug 2007 00:16:55 +0200 Subject: [SciPy-user] neighbourhood of randomly scattered points In-Reply-To: <46D6F029.20001@gmail.com> References: <46B4513C.5080108@gmail.com> <46CC5AF0.5070405@gmail.com> <46D3CDEA.8010102@gmail.com> <46D3D962.5090006@gmail.com> <46D462CA.7080006@gmail.com> <46D600A2.5060304@gmail.com> <46D605D6.8070308@gmail.com> <46D670AA.6020009@gmail.com> <46D6F029.20001@gmail.com> Message-ID: <46D741D7.5020906@gmail.com> Robert Kern a écrit : > You will see similar results. > Ok. I admit :-) Thanks.. for your help & patience. Cheers, -- http://scipy.org/FredericPetit From wkerzendorf at googlemail.com Thu Aug 30 22:04:38 2007 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Fri, 31 Aug 2007 12:04:38 +1000 Subject: [SciPy-user] weighted least squares fit Message-ID: <46D77736.2010003@gmail.com> Is there a weighted least squares fit in scipy. As far as I have seen there's none. thanks in advance Wolfgang From peridot.faceted at gmail.com Thu Aug 30 22:18:04 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 30 Aug 2007 22:18:04 -0400 Subject: [SciPy-user] weighted least squares fit In-Reply-To: <46D77736.2010003@gmail.com> References: <46D77736.2010003@gmail.com> Message-ID: On 30/08/2007, Wolfgang Kerzendorf wrote: > Is there a weighted least squares fit in scipy. As far as I have seen > there's none. > thanks in advance You can get a weighted fit by simply scaling your coefficient matrix and result vector: If you want to find x making (Mx-b) as small as possible in the least-squares sense, you can use scipy.linalg.lstsq. If you want to weight row i by w[i], just multiply b[i] and M[i,:] by w[i].
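Anne's row-scaling trick can be sketched as a small runnable example (the data, weights, and model here are made up for illustration):

```python
import numpy as np
from scipy import linalg

# Fit y = a*x + b with per-point weights w (all numbers here are made up).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.2, 1.9, 3.2, 3.9])
w = np.array([1.0, 1.0, 0.5, 2.0, 1.0])   # e.g. w[i] = 1/sigma[i]

M = np.column_stack([x, np.ones_like(x)])  # design matrix for a*x + b
# Weight row i by w[i]: multiply M[i, :] and y[i] by w[i], then solve as usual.
coeffs, res, rank, sv = linalg.lstsq(M * w[:, None], y * w)
a, b = coeffs
print(a, b)
```

This minimizes sum(w[i]**2 * (a*x[i] + b - y[i])**2), the same objective that `numpy.polyfit(x, y, 1, w=w)` uses, so the two agree.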
If you want nonlinear least-squares (scipy.optimize.leastsq), the same trick works but you need to put it in your objective function. Anne From pepe_kawumi at yahoo.co.uk Fri Aug 31 03:29:21 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Fri, 31 Aug 2007 07:29:21 +0000 (GMT) Subject: [SciPy-user] Python equivalent of triplot in matlab Message-ID: <436120.80309.qm@web27706.mail.ukl.yahoo.com> Hi, trying to convert a matlab program to python. Just wondering if anyone knows if there is an equivalent to the triplot command in matlab. Thanks, Perez ___________________________________________________________ Win a BlackBerry device from O2 with Yahoo!. Enter now. http://www.yahoo.co.uk/blackberry -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Fri Aug 31 05:50:51 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 31 Aug 2007 10:50:51 +0100 Subject: [SciPy-user] SciPy ATLAS vs. Matlab 7.2 linear algebra benchmarks In-Reply-To: <46D66A53.AA84.0083.0@osc.edu> References: <46D66A53.AA84.0083.0@osc.edu> Message-ID: <1e2af89e0708310250s5c01c3a8n56a55f1f5fd107ec@mail.gmail.com> Hi, > There is a graph summarizing the results at the following link: > > http://www.osc.edu/~unpingco/Benchmarks30Aug2007.jpg It would be very good if we could keep track of this automatically; can I also ask if you would consider forwarding your code? Thanks a lot, Matthew From stefan at sun.ac.za Fri Aug 31 07:13:20 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 31 Aug 2007 13:13:20 +0200 Subject: [SciPy-user] weighted least squares fit In-Reply-To: References: <46D77736.2010003@gmail.com> Message-ID: <20070831111320.GI14395@mentat.za.net> On Thu, Aug 30, 2007 at 10:18:04PM -0400, Anne Archibald wrote: > If you want a more sophisticated linear least-squares solver, look at > ODR (which I don't know anything about). 
If you want nonlinear > least-squares (scipy.optimize.leastsq), the same trick works but you > need to put it in your objective function. Here's a short summary of orthogonal distance regression, http://mentat.za.net/phd-wiki-web/OrthogonalDistanceRegression.html based on Paul T. Boggs and Janet E. Rogers. Orthogonal Distance Regression. In P.J. Brown and Wayne A. Fuller, editor, Contemporary Mathematics. American Mathematical Society, Providence, Rhode Island, 1990. Regards Stéfan From robert.vergnes at yahoo.fr Fri Aug 31 07:49:10 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Fri, 31 Aug 2007 13:49:10 +0200 (CEST) Subject: [SciPy-user] RE : Re: RE : Re: RE : Re: How to free unused memory by Python In-Reply-To: <20070827140232.GF14668@clipper.ens.fr> Message-ID: <95368.4273.qm@web27414.mail.ukl.yahoo.com> Used memory in linux or windows is displayed by the windows task manager (win) (ctrl+alt+del) or by the system memory manager (or Task Manager) (depending on your linux version I think). So you can see how much of your physical memory is used while running progs. So apparently gc cannot redeem memory to the OS... so it seems without solution for the moment - apart from out-processing the task which loads too much memory, and killing it when it has done its work so the memory is given back to the OS. Any other ideas ? Gael Varoquaux a écrit : On Mon, Aug 27, 2007 at 03:59:13PM +0200, Robert VERGNES wrote: > something like lists not being garbage collected ? strange... No, just references lying around because of magic variables. Can you run your tests in a script to check for this, rather than an interactive environment (I would do so, but I do not know how you test for used memory). Gaël > Gael Varoquaux a écrit : > Could it be due to some stored variable. In the Python interpreter "_" > is > the last answer, so the last answer does not get garbage collected. > There > might be other "jokes" lying around.
_______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user --------------------------------- Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Aug 31 07:51:25 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 31 Aug 2007 13:51:25 +0200 Subject: [SciPy-user] RE : Re: RE : Re: RE : Re: How to free unused memory by Python In-Reply-To: <95368.4273.qm@web27414.mail.ukl.yahoo.com> References: <20070827140232.GF14668@clipper.ens.fr> <95368.4273.qm@web27414.mail.ukl.yahoo.com> Message-ID: <20070831115125.GA15718@clipper.ens.fr> On Fri, Aug 31, 2007 at 01:49:10PM +0200, Robert VERGNES wrote: > Used memory in linux or windows is displayed on by the windows task > manager ( win) (ctrl+alt+del) or by the system memory manager (or Task > Manager) ( depending on your linux version i Think). So you can see how > much ofyour physical memory is used while running progs. > So apprently gc cannot redeem memory to the OS... so it seems without > solution for the moment - apart from out-process the task which load > memory too much. And kill it each it when it has done its work so the > memory is given back to the OS. > Any other ideas ? Use python 2.5, where this problem is solved ? 
Gaël From peridot.faceted at gmail.com Fri Aug 31 15:44:11 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 31 Aug 2007 15:44:11 -0400 Subject: [SciPy-user] RE : Re: RE : Re: RE : Re: How to free unused memory by Python In-Reply-To: <95368.4273.qm@web27414.mail.ukl.yahoo.com> References: <20070827140232.GF14668@clipper.ens.fr> <95368.4273.qm@web27414.mail.ukl.yahoo.com> Message-ID: On 31/08/2007, Robert VERGNES wrote: > Used memory in linux or windows is displayed by the windows task manager > (win) (ctrl+alt+del) or by the system memory manager (or Task Manager) > (depending on your linux version I think). So you can see how much of your > physical memory is used while running progs. > > So apparently gc cannot redeem memory to the OS... so it seems without > solution for the moment - apart from out-processing the task which loads > too much memory, and killing it when it has done its work so the memory is > given back to the OS. > > Any other ideas ? Make sure you have lots of swap space. If python has freed some memory, python will reuse that before requesting more from the OS, so there's no problem of memory use growing without bound. If you don't reuse the memory, it will just sit there unused. If you run into memory pressure from other applications, the OS (well, most OSes) will page it out to disk until you actually use it again. So a python process that has a gigabyte allocated but is only using a hundred megabytes of that will, if something else wants to use some of the physical RAM in your machine, simply occupy nine hundred megabytes in your swap file. Who cares? Also worth knowing is that even on old versions of python, on some OSes (probably all) numpy arrays suffer from this problem to a much lesser degree. When you allocate a numpy array, there's a relatively small python object describing it, and a chunk of memory to contain the values. This chunk of memory is allocated with malloc().
The malloc() implementation on Linux (and probably on other systems) provides big chunks by requesting them directly from the operating system, so that they can be returned to the OS when done. Even if you're using many small arrays, you should be aware that the memory needed by numpy array data is allocated by malloc() and not python's allocators, so whether it is freed back to the system is a separate question from whether the memory needed by python objects goes back to the system. Anne