From wierob83 at googlemail.com Mon Jun 1 05:26:38 2009
From: wierob83 at googlemail.com (wierob)
Date: Mon, 01 Jun 2009 11:26:38 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
In-Reply-To: <20090531124151.GA9852@phare.normalesup.org>
References: <4A22630B.1080909@googlemail.com> <20090531121019.GA21468@phare.normalesup.org> <20090531124151.GA9852@phare.normalesup.org>
Message-ID: <4A239ECE.6090509@googlemail.com>

Hi,

thanks for your help. Unfortunately, your example does not work for me. The line

    histo = np.histogram(z.ravel(), bins=r_[Z.ravel(),2*n**2])

produces the following error message:

Traceback (most recent call last):
  File "/mnt/VBoxShare/eg.py", line 15, in
    histo = np.histogram(z.ravel(), bins=r_[Z.ravel(),2*n**2])
NameError: name 'r_' is not defined

I'm very new to Scipy and have no idea what you intended to do there.

What I'm trying to do is the following:

from scipy import polyval, zeros
import pylab

a, b = fetch_data(...)
pylab.plot(a, b, "g.")  # scatter plot

# regression line
regression = regression_analysis(...)
xr = polyval([regression[0], regression[1]], b)
pylab.plot(b, xr, "r-")
pylab.gca().set_xlim([0, max(b)])
pylab.gca().set_ylim([0, max(a)])

# calculate grid (10x10)
xlim = pylab.gca().get_xlim()[1]
ylim = pylab.gca().get_ylim()[1]
block_x = int(xlim / 10.0 + 1)
block_y = int(ylim / 10.0 + 1)
grid_x = [ block_x * i for i in range(11) ]
grid_y = [ block_y * i for i in range(11) ]

density_map = zeros((10, 10))  # matrix for points per cell
inc = 1.0 / number_of_data_points
for i in range(10):
    for j in range(10):
        cell = [ grid_x[i], grid_x[i+1], grid_y[j], grid_y[j+1] ]
        density_map[j][i] += points_in(cell) * inc

# plot the 'density map'
pylab.pcolor(density_map, cmap=pylab.get_cmap("hot"))
pylab.show()

This only creates the scatter plot and the regression line.

kind regards
robert
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gael.varoquaux at normalesup.org Mon Jun 1 05:36:33 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 1 Jun 2009 11:36:33 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
In-Reply-To: <4A239ECE.6090509@googlemail.com>
References: <4A22630B.1080909@googlemail.com> <20090531121019.GA21468@phare.normalesup.org> <20090531124151.GA9852@phare.normalesup.org> <4A239ECE.6090509@googlemail.com>
Message-ID: <20090601093632.GC9994@phare.normalesup.org>

On Mon, Jun 01, 2009 at 11:26:38AM +0200, wierob wrote:
> thanks for your help. Unfortunately, your example does not work for me. The line
> histo = np.histogram(z.ravel(), bins=r_[Z.ravel(),2*n**2])
> produces the following error message:
> Traceback (most recent call last):
> File "/mnt/VBoxShare/eg.py", line 15, in
> histo = np.histogram(z.ravel(), bins=r_[Z.ravel(),2*n**2])
> NameError: name 'r_' is not defined

That was a typo: replace 'r_' by 'np.r_'.

Gaël

PS: IPython devs: is there a pylab mode without the dreaded 'from pylab import *'? We need to advertise such a workflow, rather than 'ipython -pylab', which pollutes the namespace with almost 900 entries.

From dmitrey15 at ukr.net Mon Jun 1 05:51:27 2009
From: dmitrey15 at ukr.net (Dmitrey)
Date: Mon, 01 Jun 2009 12:51:27 +0300
Subject: [SciPy-user] projection of a point to a set defined by linear constraints
Message-ID: <4A23A49F.2010807@ukr.net>

hi all,
Suppose I have a set of linear constraints b1 <= Ax <= b2 and a point X = [x1, ..., xn] outside the set.
A is an m x n matrix; b1, b2 are vectors of length m (some of the b1, b2 coordinates can be +/- inf or equal). What is the best way to find the projection of the point X onto the set? I.e. ||X-x||_2 -> min, s.t. b1 <= Ax <= b2.

I intended to use a constrained LLSP solver, something like ACM TOMS 587. But:

* Using f2py yields an error, as I have mentioned here: http://groups.google.com/group/scipy-user/browse_thread/thread/bb3ad277e9213d3e
* Since the problem has an eye (identity) matrix (||Cx-d||^2 -> min, C = I), I thought maybe there are more efficient ways (and/or software) to do it?

Could you recommend me another forum/google group where I should search, an article, or maybe some code (preferably Python or MATLAB)? I was recommended to involve ODRPack, and I know it is included in scipy - can this somehow help?

BTW I intend to use it to speed up my NLP/NSP solver ralg for problems with lots of constraints other than only box-bounded ones.

Thank you in advance, D.

From emmanuelle.gouillart at normalesup.org Mon Jun 1 07:20:21 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Mon, 1 Jun 2009 13:20:21 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
In-Reply-To: <4A239ECE.6090509@googlemail.com>
References: <4A22630B.1080909@googlemail.com> <20090531121019.GA21468@phare.normalesup.org> <20090531124151.GA9852@phare.normalesup.org> <4A239ECE.6090509@googlemail.com>
Message-ID: <20090601112021.GA12731@phare.normalesup.org>

Hi Robert,

> histo = np.histogram(z.ravel(), bins=r_[Z.ravel(),2*n**2])
> produces the following error message:
> Traceback (most recent call last):
> File "/mnt/VBoxShare/eg.py", line 15, in
> histo = np.histogram(z.ravel(), bins=r_[Z.ravel(),2*n**2])
> NameError: name 'r_' is not defined

As Gaël said, it should be np.r_ instead of r_. It's just that I executed my code in ipython -pylab, which enables the interactive use of matplotlib, but also loads a lot of pylab and numpy features into the namespace. Sorry about the typo! You should try to execute the code again.

I used np.r_ to concatenate the array Z.ravel() with 2*n**2 in order to add the upper edge of the last bin for the histogram (note: if you don't use a recent version of numpy, histogram may return an error).

I read your code below quickly; here are a few comments:

* I guess one of the problems might be that you're using two different scales for the data and for your grid. pcolor(density_map) plots the color levels corresponding to density_map on a y-scale (x-scale) between 0 and max_index_along_first_direction - 1 (that does not correspond to the values of the data). That may explain why your data and your density_map do not superpose. You should define X and Y coordinates of the grid (e.g. using np.mgrid as in my example) and plot pcolor(X, Y, density_map).

* Rather than grid_x = [ block_x * i for i in range(11) ], use np.linspace(0, block_x*11, 11, endpoint=False) or np.arange(0, block_x*11, block_x).

* You can easily avoid your for loop using numpy.histogram2d, which does just what you want for putting your points inside the bins of the grid. Try np.histogram2d(a, b, bins=11, range=[[0, xlim], [0, ylim]]) (check the documentation first).

Hope this helps,

Emmanuelle

PS: actually this discussion should rather be on the numpy-discussion list. I would advise you to subscribe to this list and -- if you have further questions -- post your reply on the numpy-discussion list instead.

> I'm very new to Scipy and have no idea what you intended to do there.
> What I'm trying to do is the following:
>
> from scipy import polyval, zeros
> import pylab
>
> a, b = fetch_data(...)
> pylab.plot(a, b, "g.")  # scatter plot
>
> # regression line
> regression = regression_analysis(...)
> xr = polyval([regression[0], regression[1]], b)
> pylab.plot(b, xr, "r-")
> pylab.gca().set_xlim([0, max(b)])
> pylab.gca().set_ylim([0, max(a)])
>
> # calculate grid (10x10)
> xlim = pylab.gca().get_xlim()[1]
> ylim = pylab.gca().get_ylim()[1]
> block_x = int(xlim / 10.0 + 1)
> block_y = int(ylim / 10.0 + 1)
> grid_x = [ block_x * i for i in range(11) ]
> grid_y = [ block_y * i for i in range(11) ]
>
> density_map = zeros((10, 10))  # matrix for points per cell
> inc = 1.0 / number_of_data_points
> for i in range(10):
>     for j in range(10):
>         cell = [ grid_x[i], grid_x[i+1], grid_y[j], grid_y[j+1] ]
>         density_map[j][i] += points_in(cell) * inc
>
> # plot the 'density map'
> pylab.pcolor(density_map, cmap=pylab.get_cmap("hot"))
> pylab.show()
>
> This only creates the scatter plot and the regression line.
> kind regards
> robert
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From R.Springuel at umit.maine.edu Mon Jun 1 13:16:57 2009
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Mon, 01 Jun 2009 13:16:57 -0400
Subject: [SciPy-user] Import statements
Message-ID: <4A240D09.8020602@umit.maine.edu>

I recently installed the Superpack on some of the "server" (they don't run Mac OS Server) computers in my group in order to use them to run data analysis programs in the background overnight while they continue to perform their normal file sharing purposes. However, the Superpack build of scipy seems to create issues with one of my home made packages that is dependent on scipy. In particular, the problem seems to be an "import scipy.stats" statement in my home package. Said statement also fails in the command line interpreter with an error that reports that scipy has no stats module (attribute), which contradicts what's in the help statements for scipy. Has there been a recent change in the structure of scipy that hasn't made it into the help documentation yet?

--
R. Padraic Springuel
Research Assistant
Department of Physics and Astronomy
University of Maine
Bennett 309
Office Hours: By appointment only
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 5632 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From robert.kern at gmail.com Mon Jun 1 13:24:54 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 1 Jun 2009 12:24:54 -0500
Subject: [SciPy-user] Import statements
In-Reply-To: <4A240D09.8020602@umit.maine.edu>
References: <4A240D09.8020602@umit.maine.edu>
Message-ID: <3d375d730906011024p4b0bf3e9x772fb98e1ea03904@mail.gmail.com>

On Mon, Jun 1, 2009 at 12:16, R. Padraic Springuel wrote:
> I recently installed the Superpack on some of the "server" (they don't run
> Mac OS Server) computers in my group in order to use them to run data
> analysis programs in the background overnight while they continue to perform
> their normal file sharing purposes. However, the Superpack build of scipy
> seems to create issues with one of my home made packages that is dependent
> on scipy. In particular, the problem seems to be an "import scipy.stats"
> statement in my home package.
> Said statement also fails in the command line interpreter with an error
> that reports that scipy has no stats module (attribute), which
> contradicts what's in the help statements for scipy.

"import scipy.stats" should work. "import scipy; scipy.stats" should not.

Please copy-and-paste error messages along with the code that created them rather than trying to describe them.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From josef.pktd at gmail.com Mon Jun 1 16:25:05 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 1 Jun 2009 16:25:05 -0400
Subject: [SciPy-user] software review: econometrics with python
Message-ID: <1cd32cbb0906011325s2f38dc2cj6b315c8cf6ba02bd@mail.gmail.com>

I just found this

http://www3.interscience.wiley.com/journal/122363228/abstract

Josef

From wierob83 at googlemail.com Mon Jun 1 16:23:59 2009
From: wierob83 at googlemail.com (wierob)
Date: Mon, 01 Jun 2009 22:23:59 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
In-Reply-To: <20090601112021.GA12731@phare.normalesup.org>
References: <4A22630B.1080909@googlemail.com> <20090531121019.GA21468@phare.normalesup.org> <20090531124151.GA9852@phare.normalesup.org> <4A239ECE.6090509@googlemail.com> <20090601112021.GA12731@phare.normalesup.org>
Message-ID: <4A2438DF.4030208@googlemail.com>

Hi,

> * I guess one of the problems might be that you're using two different
> scales for the data and for your grid.

that's it. Thanks a lot.

kind regards
robert

From dwf at cs.toronto.edu Mon Jun 1 17:42:54 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 1 Jun 2009 17:42:54 -0400
Subject: [SciPy-user] software review: econometrics with python
In-Reply-To: <1cd32cbb0906011325s2f38dc2cj6b315c8cf6ba02bd@mail.gmail.com>
References: <1cd32cbb0906011325s2f38dc2cj6b315c8cf6ba02bd@mail.gmail.com>
Message-ID: <64A5C15A-FBD7-48C8-BE1B-32884482985D@cs.toronto.edu>

On 1-Jun-09, at 4:25 PM, josef.pktd at gmail.com wrote:
> I just found this
>
> http://www3.interscience.wiley.com/journal/122363228/abstract

Vaguely relevant, also: http://johnstachurski.net/lectures/

Tons of scipy-based computational economics example code.

David

From cohen at lpta.in2p3.fr Tue Jun 2 01:35:43 2009
From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann)
Date: Tue, 02 Jun 2009 07:35:43 +0200
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
Message-ID: <4A24BA2F.8030907@lpta.in2p3.fr>

Hello,
I would be interested in quaternions, spherical rotations, and interpolations thereof, a la slerp..... I quickly looked at the online docs of scipy/numpy, and did not find any mention of these.
Can someone point me to such functionalities in either package?

thanks,
Johann

From robert.kern at gmail.com Tue Jun 2 01:41:11 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 2 Jun 2009 00:41:11 -0500
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
In-Reply-To: <4A24BA2F.8030907@lpta.in2p3.fr>
References: <4A24BA2F.8030907@lpta.in2p3.fr>
Message-ID: <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com>

On Tue, Jun 2, 2009 at 00:35, Cohen-Tanugi Johann wrote:
>
> Hello,
> I would be interested in quaternions, spherical rotations, and
> interpolations thereof, a la slerp..... I quickly looked at the online docs
> of scipy/numpy, and did not find any mention of these. Can someone point
> me to such functionalities in either package?

We don't have them.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From cohen at lpta.in2p3.fr Tue Jun 2 02:13:59 2009
From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann)
Date: Tue, 02 Jun 2009 08:13:59 +0200
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
In-Reply-To: <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com>
References: <4A24BA2F.8030907@lpta.in2p3.fr> <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com>
Message-ID: <4A24C327.8090003@lpta.in2p3.fr>

is there an interest in them?
JCT

Robert Kern wrote:
> On Tue, Jun 2, 2009 at 00:35, Cohen-Tanugi Johann wrote:
>
>> Hello,
>> I would be interested in quaternions, spherical rotations, and
>> interpolations thereof, a la slerp..... I quickly looked at the online docs
>> of scipy/numpy, and did not find any mention of these. Can someone point
>> me to such functionalities in either package?
>>
>
> We don't have them.
>
>

From robert.kern at gmail.com Tue Jun 2 02:25:06 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 2 Jun 2009 01:25:06 -0500
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
In-Reply-To: <4A24C327.8090003@lpta.in2p3.fr>
References: <4A24BA2F.8030907@lpta.in2p3.fr> <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com> <4A24C327.8090003@lpta.in2p3.fr>
Message-ID: <3d375d730906012325kf95fb4fy4e64eab61e3e14b9@mail.gmail.com>

On Tue, Jun 2, 2009 at 01:13, Cohen-Tanugi Johann wrote:
> is there an interest in them?

Sure!

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From Johann.COHEN-TANUGI at LPTA.in2p3.fr Tue Jun 2 09:09:03 2009
From: Johann.COHEN-TANUGI at LPTA.in2p3.fr (Johann Cohen-Tanugi)
Date: Tue, 02 Jun 2009 15:09:03 +0200
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
In-Reply-To: <3d375d730906012325kf95fb4fy4e64eab61e3e14b9@mail.gmail.com>
References: <4A24BA2F.8030907@lpta.in2p3.fr> <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com> <4A24C327.8090003@lpta.in2p3.fr> <3d375d730906012325kf95fb4fy4e64eab61e3e14b9@mail.gmail.com>
Message-ID: <4A25246F.1090301@lpta.univ-montp2.fr>

well, I found
https://scicompforge.org/tracker/scicompforge/file/trunk/crystallography/usage/test/output/python/Quaternion.py?rev=2139

and
http://cgkit.sourceforge.net/doc2/quat.html

I hope that helps others in the list..... it would be nice to have this in scipy though....

best,
JCT

Robert Kern wrote:
> On Tue, Jun 2, 2009 at 01:13, Cohen-Tanugi Johann wrote:
>
>> is there an interest in them?
>>
>
> Sure!
>
>

From devicerandom at gmail.com Tue Jun 2 12:22:32 2009
From: devicerandom at gmail.com (ms)
Date: Tue, 02 Jun 2009 17:22:32 +0100
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
Message-ID: <4A2551C8.20408@gmail.com>

Hello,

I am trying to integrate a large (50-200 equations) system of chemical kinetics ODEs using scipy.integrate.odeint.

The qualitative behaviour of the system looks sensible, but mass is not conserved at all - it increases a lot over time, sometimes wildly, depending on parameters.
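Schematically, my code has the structure below. This is only a toy two-species sketch for illustration - the species, rate constants and initial values are made up, and my real system is far larger and stiffer:

import numpy
from scipy.integrate import odeint

# made-up rate constants for a reversible reaction A <-> B
k1, k2 = 1.0e3, 1.0

def dydt(y, t):
    a, b = y
    return [-k1 * a + k2 * b,
             k1 * a - k2 * b]

t = numpy.linspace(0.0, 10.0, 1000)
y = odeint(dydt, [1.0, 0.0], t)
print y.sum(axis=1)  # total mass; should stay at 1.0 for every time point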
This puzzles me, and I am *very new* to this kind of problem; however reading here and there it seems that stiff ODEs can show this kind of artefact. I checked the equation values and it looks stiff; infodict['mused'] is always 1... so odeint sees it as stiff too. This does not look good.

I wonder what one can do to tame this beast. Any hint?

thanks!
m.

From peridot.faceted at gmail.com Tue Jun 2 14:29:06 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 2 Jun 2009 14:29:06 -0400
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: <4A2551C8.20408@gmail.com>
References: <4A2551C8.20408@gmail.com>
Message-ID: 

2009/6/2 ms :
> Hello,
>
> I am trying to integrate a large (50-200 equations) system of chemical
> kinetics ODEs using scipy.integrate.odeint.
>
> The qualitative behaviour of the system looks sensible, but mass is not
> conserved at all - it increases a lot over time, sometimes wildly,
> depending on parameters.
>
> This puzzles me, and I am *very new* to this kind of problem; however
> reading here and there it seems that stiff ODEs can show this kind of
> artefact. I checked the equation values and it looks stiff;
> infodict['mused'] is always 1... so odeint sees it as stiff too. This
> does not look good.
>
> I wonder what one can do to tame this beast. Any hint?

In terms of software, you could take a look at the pydstool package, which has several more modern C-level solvers (which may particularly help with stiff systems), as well as various other useful tools for dealing with dynamical systems.

In terms of your specific problem, you could (using the above tool) impose mass conservation as an algebraic constraint. This should, I think, help the solver slow down in situations where mass conservation would otherwise be violated by the (necessarily approximate) solution.

On the other hand, it may be more valuable to keep the total mass as a free parameter so that you can judge the quality of your solutions by looking at how much it varies. After all, the total mass is only one direction in which your approximate solution can deviate from the true solution.

Which of these two approaches is more useful probably depends on your problem. If it is one where, like some problems in planetary dynamics, even if the solution is somewhat wrong in detail, it's still useful as long as the energy (and angular momentum) don't change, then it's useful to impose the constraint (or, in the case of planetary dynamics, use a solver that has that constraint built in). On the other hand, if total mass does not have a special status beyond the fact you know it shouldn't change, so that errors in other directions are just as important as errors in mass, it's probably more useful to keep it as an error-monitoring mechanism.

It's also, of course, possible there's a bug in your derivative function - it's worth checking that the derivative vector is always orthogonal to the gradient of mass-as-a-function-of-your-parameters. But you are probably right, and this is probably just a tough ODE.

Anne

> thanks!
> m.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From devicerandom at gmail.com Tue Jun 2 14:47:32 2009
From: devicerandom at gmail.com (ms)
Date: Tue, 02 Jun 2009 19:47:32 +0100
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: 
References: <4A2551C8.20408@gmail.com>
Message-ID: <4A2573C4.9040706@gmail.com>

Hi Anne,

Thanks for the suggestions! I didn't know about pydstool and I will definitely try it.

> On the other hand, it may be more valuable to keep the total mass as a
> free parameter so that you can judge the quality of your solutions by
> looking at how much it varies. After all, the total mass is only one
> direction in which your approximate solution can deviate from the true
> solution.

True; however we want to use the model to predict somehow the concentrations of species in a chemical system, and if mass is not conserved (it goes up like 10 times) such prediction won't look good :)

> It's also, of course, possible there's a bug in your derivative
> function - it's worth checking that the derivative vector is always
> orthogonal to the gradient of mass-as-a-function-of-your-parameters.

Thanks for the hint, I will check it.

m.

From rjchacko at gmail.com Tue Jun 2 16:05:40 2009
From: rjchacko at gmail.com (Ranjit Chacko)
Date: Tue, 2 Jun 2009 16:05:40 -0400
Subject: [SciPy-user] can this be vectorized?
Message-ID: 

I have a square array A and I want to produce a second square array B of the same dimension where each element of B is the sum of a square neighborhood of each element of A. Is there a way to do this without loops in numpy, or do I have to use for loops?

Thanks,
Ranjit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Tue Jun 2 16:11:36 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 2 Jun 2009 15:11:36 -0500
Subject: [SciPy-user] can this be vectorized?
In-Reply-To: 
References: 
Message-ID: <3d375d730906021311j14937f5ax7f861fd4f89c1b1e@mail.gmail.com>

On Tue, Jun 2, 2009 at 15:05, Ranjit Chacko wrote:
> I have a square array A and I want to produce a second square array B of the
> same dimension where each element of B is the sum of a square neighborhood
> of each element of A. Is there a way to do this without loops in numpy, or
> do I have to use for loops?

scipy.ndimage.convolve() with a square array of 1s the size of the desired neighborhood as the weights parameter.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From rob.clewley at gmail.com Tue Jun 2 16:17:50 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 2 Jun 2009 16:17:50 -0400
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: <4A2573C4.9040706@gmail.com>
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com>
Message-ID: 

FYI if you conserve something explicitly in your system using an algebraic constraint then you have to remember to reduce your list of differential equations accordingly. Imposing one constraint will eliminate one D.E. when you substitute its effect into them. You'll have to work that out symbolically in advance of setting it up in any integrator. You will need to use Radau in pydstool to solve the resulting differential-algebraic eqn (DAE), and there are one or more Radau-based constraint examples in the pydstool/tests directory that is part of the download. In particular, I know there's one called DAE_example.py.

-Rob

On Tue, Jun 2, 2009 at 2:47 PM, ms wrote:
> Hi Anne,
>
> Thanks for the suggestions! I didn't know about pydstool and I will
> definitely try it.
>
>> On the other hand, it may be more valuable to keep the total mass as a
>> free parameter so that you can judge the quality of your solutions by
>> looking at how much it varies. After all, the total mass is only one
>> direction in which your approximate solution can deviate from the true
>> solution.
>
> True; however we want to use the model to predict somehow the
> concentrations of species in a chemical system, and if mass is not
> conserved (it goes up like 10 times) such prediction won't look good :)
>
>> It's also, of course, possible there's a bug in your derivative
>> function - it's worth checking that the derivative vector is always
>> orthogonal to the gradient of mass-as-a-function-of-your-parameters.
>
> Thanks for the hint, I will check it.
>
> m.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Robert H. Clewley, Ph.D.
Assistant Professor
Department of Mathematics and Statistics and Neuroscience Institute
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA

tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/

From R.Springuel at umit.maine.edu Tue Jun 2 16:25:09 2009
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Tue, 02 Jun 2009 16:25:09 -0400
Subject: [SciPy-user] Import statements
Message-ID: <4A258AA5.5080601@umit.maine.edu>

Sorry about not including error messages. I was writing the email from a different computer than the one giving the problem and so I didn't have the exact text with me at the time.

I did figure out what the problem was, however. One of the computers I was working on is a PPC Mac and so was complaining about me using the Intel version of the Superpack. I downloaded the PPC version and that error was resolved. Obvious mistake. I should have caught it faster.

--
R. Padraic Springuel
Research Assistant
Department of Physics and Astronomy
University of Maine
Bennett 309
Office Hours: By appointment only
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 5632 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From peridot.faceted at gmail.com Tue Jun 2 16:43:24 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 2 Jun 2009 16:43:24 -0400
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: <4A2573C4.9040706@gmail.com>
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com>
Message-ID: 

2009/6/2 ms :
> Hi Anne,
>
> Thanks for the suggestions! I didn't know about pydstool and I will
> definitely try it.
>
>> On the other hand, it may be more valuable to keep the total mass as a
>> free parameter so that you can judge the quality of your solutions by
>> looking at how much it varies. After all, the total mass is only one
>> direction in which your approximate solution can deviate from the true
>> solution.
>
> True; however we want to use the model to predict somehow the
> concentrations of species in a chemical system, and if mass is not
> conserved (it goes up like 10 times) such prediction won't look good :)

Indeed not, but it seems to me that there's a risk that enforcing mass conservation will avoid that problem but then leave you with answers that are just as wrong but not in an obvious way.
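Concretely, the kind of monitoring I have in mind is only a couple of lines (a sketch, assuming `y` is the (ntimes, nspecies) array returned by odeint and that the conserved quantity is the plain sum of the concentrations):

import numpy as np

def mass_drift(y):
    # maximum relative drift of the conserved total along the trajectory
    total = y.sum(axis=1)
    return np.abs(total - total[0]).max() / total[0]

If that number shrinks as you tighten odeint's rtol/atol, you are probably just looking at integration error; if it stays put, I would suspect the right-hand side itself.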
Anne

>> It's also, of course, possible there's a bug in your derivative
>> function - it's worth checking that the derivative vector is always
>> orthogonal to the gradient of mass-as-a-function-of-your-parameters.
>
> Thanks for the hint, I will check it.
>
> m.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From joshua.stults at gmail.com Tue Jun 2 21:31:17 2009
From: joshua.stults at gmail.com (Joshua Stults)
Date: Tue, 2 Jun 2009 21:31:17 -0400
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: 
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com>
Message-ID: 

I think the standard way to "fix" stiff problems is to go to implicit time-stepping.

Here's a nice write-up about using implicit time-stepping for chemical kinetics:
http://www.osti.gov/bridge/servlets/purl/45627-eDTnun/webviewable/45627.pdf

It's over a decade old, but might give you some hints on how to solve your problem.

If you are integrating the system accurately you should be conserving mass, even with big implicit time-steps, your production and loss terms should balance at each step.

On Tue, Jun 2, 2009 at 4:43 PM, Anne Archibald wrote:
> 2009/6/2 ms :
>> Hi Anne,
>>
>> Thanks for the suggestions! I didn't know about pydstool and I will
>> definitely try it.
>>
>>> On the other hand, it may be more valuable to keep the total mass as a
>>> free parameter so that you can judge the quality of your solutions by
>>> looking at how much it varies. After all, the total mass is only one
>>> direction in which your approximate solution can deviate from the true
>>> solution.
>>
>> True; however we want to use the model to predict somehow the
>> concentrations of species in a chemical system, and if mass is not
>> conserved (it goes up like 10 times) such prediction won't look good :)
>
> Indeed not, but it seems to me that there's a risk that enforcing mass
> conservation will avoid that problem but then leave you with answers
> that are just as wrong but not in an obvious way.
>
> Anne

I agree with Anne, mass conservation is a good diagnostic to help you catch errors (modeling, coding or otherwise).

>
>>> It's also, of course, possible there's a bug in your derivative
>>> function - it's worth checking that the derivative vector is always
>>> orthogonal to the gradient of mass-as-a-function-of-your-parameters.
>>
>> Thanks for the hint, I will check it.
>>
>> m.
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Joshua Stults
Website: http://j-stults.blogspot.com

From sebastian.walter at gmail.com Wed Jun 3 03:18:30 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Wed, 3 Jun 2009 09:18:30 +0200
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: 
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com>
Message-ID: 

On Wed, Jun 3, 2009 at 3:31 AM, Joshua Stults wrote:
> I think the standard way to "fix" stiff problems is to go to implicit
> time-stepping.
>
> Here's a nice write-up about using implicit time-stepping for chemical kinetics:
> http://www.osti.gov/bridge/servlets/purl/45627-eDTnun/webviewable/45627.pdf
>
> It's over a decade old, but might give you some hints on how to solve
> your problem.
>
> If you are integrating the system accurately you should be conserving
> mass, even with big implicit time-steps, your production and loss
> terms should balance at each step.
>
> On Tue, Jun 2, 2009 at 4:43 PM, Anne Archibald wrote:
>> 2009/6/2 ms :
>>> Hi Anne,
>>>
>>> Thanks for the suggestions! I didn't know about pydstool and I will
>>> definitely try it.
>>>
>>>> On the other hand, it may be more valuable to keep the total mass as a
>>>> free parameter so that you can judge the quality of your solutions by
>>>> looking at how much it varies. After all, the total mass is only one
>>>> direction in which your approximate solution can deviate from the true
>>>> solution.
>>>
>>> True; however we want to use the model to predict somehow the
>>> concentrations of species in a chemical system, and if mass is not
>>> conserved (it goes up like 10 times) such prediction won't look good :)
>>
>> Indeed not, but it seems to me that there's a risk that enforcing mass
>> conservation will avoid that problem but then leave you with answers
>> that are just as wrong but not in an obvious way.
>>
>> Anne
>
> I agree with Anne, mass conservation is a good diagnostic to help you
> catch errors (modeling, coding or otherwise).

Well, I don't know. Omitting crucial information about the model is unlikely to improve the overall quality of the solution. I'd go with a BDF method to solve the DAE system with enforced mass conservation. For example with:

https://computation.llnl.gov/casc/sundials/main.html
or
http://www.iwr.uni-heidelberg.de/~Jan.Albersmeyer/solvind/index.php?t=0

Both packages have the advantage that they can not only compute the solution in a robust way but also give you derivative information that you need if you want to solve an optimization problem with a DAE constraint.

>
>>
>>>> It's also, of course, possible there's a bug in your derivative
>>>> function - it's worth checking that the derivative vector is always
>>>> orthogonal to the gradient of mass-as-a-function-of-your-parameters.
>>>
>>> Thanks for the hint, I will check it.
>>>
>>> m.
>>> _______________________________________________
>>> SciPy-user mailing list
>>> SciPy-user at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> --
> Joshua Stults
> Website: http://j-stults.blogspot.com
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From devicerandom at gmail.com Wed Jun 3 10:07:15 2009
From: devicerandom at gmail.com (ms)
Date: Wed, 03 Jun 2009 15:07:15 +0100
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: 
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com>
Message-ID: <4A268393.6000608@gmail.com>

Joshua Stults ha scritto:
> I think the standard way to "fix" stiff problems is to go to implicit
> time-stepping.
>
> Here's a nice write-up about using implicit time-stepping for chemical kinetics:
> http://www.osti.gov/bridge/servlets/purl/45627-eDTnun/webviewable/45627.pdf
>
> It's over a decade old, but might give you some hints on how to solve
> your problem.
>
> If you are integrating the system accurately you should be conserving
> mass, even with big implicit time-steps, your production and loss
> terms should balance at each step.

I'm confused, this thing is getting much worse than I could imagine. Does it mean I have to rewrite the integrator myself and that odeint is not good in this case?

I would use CHEMSODE, but it seems unavailable on the web; I can only find the paper.

I'll give pydstool a spin, then let's see...

m.

From joshua.stults at gmail.com Wed Jun 3 10:16:58 2009
From: joshua.stults at gmail.com (Joshua Stults)
Date: Wed, 3 Jun 2009 10:16:58 -0400
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: <4A268393.6000608@gmail.com>
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com> <4A268393.6000608@gmail.com>
Message-ID: 

I didn't mean for you to write your own integrator, just linked to the paper to provide a little inspiration on how folks have solved similar problems before. Sorry for increasing rather than decreasing the confusion.

On Wed, Jun 3, 2009 at 10:07 AM, ms wrote:
> Joshua Stults ha scritto:
>> I think the standard way to "fix" stiff problems is to go to implicit
>> time-stepping.
>>
>> Here's a nice write-up about using implicit time-stepping for chemical kinetics:
>> http://www.osti.gov/bridge/servlets/purl/45627-eDTnun/webviewable/45627.pdf
>>
>> It's over a decade old, but might give you some hints on how to solve
>> your problem.
>>
>> If you are integrating the system accurately you should be conserving
>> mass, even with big implicit time-steps, your production and loss
>> terms should balance at each step.
>
> I'm confused, this thing is getting much worse than I could imagine.
> Does it mean I have to rewrite the integrator myself and that odeint is
> not good in this case?
>
> I would use CHEMSODE, but it seems unavailable on the web; I can only
> find the paper.
>
> I'll give pydstool a spin, then let's see...
>
> m.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Joshua Stults
Website: http://j-stults.blogspot.com

From devicerandom at gmail.com Wed Jun 3 10:27:05 2009
From: devicerandom at gmail.com (ms)
Date: Wed, 03 Jun 2009 15:27:05 +0100
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: 
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com>
Message-ID: <4A268839.3010701@gmail.com>

Rob Clewley ha scritto:
> FYI if you conserve something explicitly in your system using an
> algebraic constraint then you have to remember to reduce your list of
> differential equations accordingly. Imposing one constraint will
> eliminate one D.E. when you substitute its effect into them. You'll
> have to work that out symbolically in advance of setting it up in any
> integrator.

Right, trying to work it out.

> You will need to use Radau in pydstool to solve the
> resulting differential-algebraic eqn (DAE),

Uh? Why a DAE?

thanks!
M.
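PS: trying to make the substitution concrete for myself - if the total mass m0 = x1 + x2 + x3 is conserved, I can eliminate x3 = m0 - x1 - x2 and integrate only two ODEs, along these lines (a made-up three-species cycle for illustration, not my real system):

import numpy
from scipy.integrate import odeint

k1, k2, k3 = 2.0, 0.5, 1.0  # made-up rate constants for X1 -> X2 -> X3 -> X1
m0 = 1.0                    # total mass, fixed by the initial condition

def dydt(y, t):
    x1, x2 = y
    x3 = m0 - x1 - x2  # the conservation law replaces the third ODE
    return [-k1 * x1 + k3 * x3,
             k1 * x1 - k2 * x2]

t = numpy.linspace(0.0, 20.0, 500)
x1, x2 = odeint(dydt, [m0, 0.0], t).T
x3 = m0 - x1 - x2  # recovered algebraically, exactly conserved by construction

Is this the kind of reduction you mean?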
From devicerandom at gmail.com Wed Jun 3 10:34:59 2009
From: devicerandom at gmail.com (ms)
Date: Wed, 03 Jun 2009 15:34:59 +0100
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: 
References: <4A2551C8.20408@gmail.com> <4A2573C4.9040706@gmail.com> <4A268393.6000608@gmail.com>
Message-ID: <4A268A13.4000505@gmail.com>

Joshua Stults ha scritto:
> I didn't mean for you to write your own integrator, just linked to the
> paper to provide a little inspiration on how folks have solved similar
> problems before. Sorry for increasing rather than decreasing the
> confusion.

Don't worry, thanks for all the suggestions. It's only that I'm really new to this stuff and I'm not that good at math - I am a biologist "borrowed" by biophysics, so I run into obstacles here and there.

Anyway, chemsode looks cool, but it seems to describe its own integration routine; that's why I wondered if using its algorithms means rewriting the integrator myself. I would use chemsode directly, but unfortunately again I can find no download whatsoever of it.

A more general question arises, anyway: since there were many warnings on how keeping the mass constant could nonetheless skew the results in other directions, how do I check the correctness of the solutions of my system?

thanks,
m.

From hasslerjc at comcast.net Wed Jun 3 10:57:39 2009
From: hasslerjc at comcast.net (John Hassler)
Date: Wed, 03 Jun 2009 10:57:39 -0400
Subject: [SciPy-user] integrate.odeint , stiff chemical equations and mass conservation -any hint?
In-Reply-To: <4A2551C8.20408@gmail.com>
References: <4A2551C8.20408@gmail.com>
Message-ID: <4A268F63.9030200@comcast.net>

An HTML attachment was scrubbed...
URL: 

From rjchacko at gmail.com Wed Jun 3 11:20:22 2009
From: rjchacko at gmail.com (Ranjit Chacko)
Date: Wed, 3 Jun 2009 11:20:22 -0400
Subject: [SciPy-user] can this be vectorized?
In-Reply-To: <3d375d730906021311j14937f5ax7f861fd4f89c1b1e@mail.gmail.com>
References: <3d375d730906021311j14937f5ax7f861fd4f89c1b1e@mail.gmail.com>
Message-ID: 

Thanks, that worked but I guess I was actually asking the wrong question. Here's some code with loops that I want to speed up. I have an example where someone used weave to speed it up, but I'd like to see if there's a way to speed this up using just numpy.

def oneMCS(self, s, beta):
    r = numpy.random.random((self.N, self.N))
    for i in range(self.N):
        for j in range(self.N):
            field = s[(i+self.N+1)%self.N][j] + s[i-1][j] + s[i][(j+self.N+1)%self.N] + s[i][j-1]
            boltzmann_factor = numpy.exp(-beta*field*s[i][j])
            if boltzmann_factor > r[i][j]:
                s[i][j] = -s[i][j]
    return s

Thanks,
-Ranjit

On Tue, Jun 2, 2009 at 4:11 PM, Robert Kern wrote:
> On Tue, Jun 2, 2009 at 15:05, Ranjit Chacko wrote:
> > I have a square array A and I want to produce a second square array B of the
> > same dimension where each element of B is the sum of a square neighborhood
> > of each element of A. Is there a way to do this without loops in numpy, or
> > do I have to use for loops?
>
> scipy.ndimage.convolve() with a square array of 1s the size of the
> desired neighborhood as the weights parameter.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za Wed Jun 3 15:02:36 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Wed, 3 Jun 2009 21:02:36 +0200
Subject: [SciPy-user] can this be vectorized?
In-Reply-To: 
References: <3d375d730906021311j14937f5ax7f861fd4f89c1b1e@mail.gmail.com>
Message-ID: <9457e7c80906031202x31a5a171gb30a7061de0eaeee@mail.gmail.com>

Hi Ranjit

2009/6/3 Ranjit Chacko :
> Thanks, that worked but I guess I was actually asking the wrong question.
> Here's some code with loops that I want to speed up. I have an example where
> someone used weave to speed it up, but I'd like to see if there's a way to
> speed this up using just numpy.

Have a look at http://thread.gmane.org/gmane.comp.python.numeric.general/24508

HTH,
Cheers
Stéfan

From jeremy at jeremysanders.net Wed Jun 3 15:52:59 2009
From: jeremy at jeremysanders.net (Jeremy Sanders)
Date: Wed, 03 Jun 2009 20:52:59 +0100
Subject: [SciPy-user] ANN: Veusz 1.4
Message-ID: 

Veusz 1.4
---------
Velvet Ember Under Sky Zenith
-----------------------------
http://home.gna.org/veusz/

Veusz is Copyright (C) 2003-2009 Jeremy Sanders
Licenced under the GPL (version 2 or greater).

Veusz is a Qt4 based scientific plotting package. It is written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF output. The user interface aims to be simple, consistent and powerful.

Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets.

Changes in 1.4:
* Dates can be plotted on axes
* Bar graph component, support bars in groups and stacked bars with error bars
* Improved import
  - text lines can be ignored in imported files
  - prefix and suffix can be added to dataset names
  - more robust import dialog
* Markers can be "thinned" for large datasets
* Further LaTeX support, including \frac for fractions and \\ for line breaks.
* Keys show error bars on datasets with errors
* Axes can scale plotted data by a factor

More minor changes
* Mathematical expressions can be entered in many places where numbers are entered (e.g. axis minima)
* Many more latex symbols
* Text labels can also be placed outside graphs directly on pages
* Dataset expressions can be edited
* Data can be copied out of data edit dialog. Rows can be inserted or deleted.
* Mac format line terminators are allowed in import files
* Preview window resizes properly in import dialog

Features of package:
* X-Y plots (with errorbars)
* Line and function plots
* Contour plots
* Images (with colour mappings and colorbars)
* Stepped plots (for histograms)
* Bar graphs
* Plotting dates
* Fitting functions to data
* Stacked plots and arrays of plots
* Plot keys
* Plot labels
* Shapes and arrows on plots
* LaTeX-like formatting for text
* EPS/PDF/PNG/SVG export
* Scripting interface
* Dataset creation/manipulation
* Embed Veusz within other programs
* Text, CSV and FITS importing

Requirements:
Python (2.4 or greater required)
  http://www.python.org/
Qt >= 4.3 (free edition)
  http://www.trolltech.com/products/qt/
PyQt >= 4.3 (SIP is required to be installed first)
  http://www.riverbankcomputing.co.uk/pyqt/
  http://www.riverbankcomputing.co.uk/sip/
numpy >= 1.0
  http://numpy.scipy.org/

Optional:
Microsoft Core Fonts (recommended for nice output)
  http://corefonts.sourceforge.net/
PyFITS >= 1.1 (optional for FITS import)
  http://www.stsci.edu/resources/software_hardware/pyfits

For documentation on using Veusz, see the "Documents" directory. The manual is in pdf, html and text format (generated from docbook).

Issues with the current version:
* Due to Qt, hatched regions sometimes look rather poor when exported to PostScript or PDF.
* Due to a bug in Qt, some long lines, or using log scales, can lead to very slow plot times under X11. This problem is seen with dashed/dotted lines. It is fixed by upgrading to Qt-4.5.1 (the Veusz binary version includes this Qt version).
* Can be very slow to plot large datasets if antialiasing is enabled. Right click on graph and disable antialias to speed up output. This is mostly a problem with older Qt versions, however.

If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the SVN repository.

Jeremy Sanders

From koepsell at gmail.com Wed Jun 3 17:39:56 2009
From: koepsell at gmail.com (killian koepsell)
Date: Wed, 3 Jun 2009 14:39:56 -0700
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
In-Reply-To: <4A25246F.1090301@lpta.univ-montp2.fr>
References: <4A24BA2F.8030907@lpta.in2p3.fr> <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com> <4A24C327.8090003@lpta.in2p3.fr> <3d375d730906012325kf95fb4fy4e64eab61e3e14b9@mail.gmail.com> <4A25246F.1090301@lpta.univ-montp2.fr>
Message-ID: 

Hi,

I have a related question. I have programmed a couple of quaternion-valued functions and used arrays with a customized numpy dtype for it. Something like

  dtype = [('r',np.double),('i',np.double),('j',np.double),('k',np.double)]

Now, I am wondering if there is an easy way (e.g. using cython) to define a custom quaternion scalar type and make numpy aware of it. I found some information in the numpy book about this topic but couldn't find any example code.

If anyone has working example code, that would be great -- as I said, ideally using cython, but c code would be fine too.

Thanks,
Kilian

On Tue, Jun 2, 2009 at 6:09 AM, Johann Cohen-Tanugi wrote:
> well, I found
> https://scicompforge.org/tracker/scicompforge/file/trunk/crystallography/usage/test/output/python/Quaternion.py?rev=2139
>
> and
> http://cgkit.sourceforge.net/doc2/quat.html
>
> I hope that helps others in the list..... it would be nice to have this
> in scipy though....
> best, > JCT > > Robert Kern wrote: >> On Tue, Jun 2, 2009 at 01:13, Cohen-Tanugi Johann wrote: >> >>> is there an interest in them? >>> >> >> Sure! >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david_baddeley at yahoo.com.au Wed Jun 3 17:48:53 2009 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 3 Jun 2009 14:48:53 -0700 (PDT) Subject: [SciPy-user] can this be vectorized? In-Reply-To: References: Message-ID: <42904.26504.qm@web33003.mail.mud.yahoo.com> Hi Ranjit, I think this is going to be hard to vectorise your code as it is, as the array s (which is used to calculate field) changes within the loop. r = numpy.random.random((self.N,self.N)) field= scipy.ndimage.convolve(s, numpy.array([[0,1,0],[1,0,1],[0,1,0]]), 'same') boltzmann_factor=numpy.exp(-beta*field*s) s = s*2*(0.5 - (boltzmann_factor>r)) would give you similar results with a few subtle differences (the 'field' is calculated with the initial s rather than the constantly updated s) which may or may not be important depending on how large you expect your boltzmann factor to be. Also note that the convolution the way I've written it doesn't wrap at the edges - you could achieve this by circularly padding s before the convolution and then cropping back down to the original size. cheers, David ----- Original Message ---- Date: Wed, 3 Jun 2009 11:20:22 -0400 From: Ranjit Chacko Subject: Re: [SciPy-user] can this be vectorized? To: Robert Kern Cc: SciPy Users List Message-ID: Content-Type: text/plain; charset="iso-8859-1" Thanks, that worked but I guess I was actually asking the wrong question. Here's some code with loops that I want to speed up. I have an example where someone used weave to speed it up, but I'd like to see if there's a way to speed this up using just numpy. def oneMCS(self,s,beta): r = numpy.random.random((self.N,self.N)) for i in range(self.N): for j in range(self.N): field=s[(i+self.N+1)%self.N][j]+s[i-1][j]+s[i][(j+self.N+1 )%self.N]+s[i][j-1] boltzmann_factor=numpy.exp(-beta*field*s[i][j]) if(boltzmann_factor>r[i][j]): s[i][j]=-s[i][j] return s Thanks, -Ranjit On Tue, Jun 2, 2009 at 4:11 PM, Robert Kern wrote: > On Tue, Jun 2, 2009 at 15:05, Ranjit Chacko wrote: > > I have a square array A and I want to produce a second square array B of > the > > same dimension where each element of B is the sum of a square > neighborhood > > of each element of A. Is there a way to do this without loops in numpy, > or > > do I have to use for loops? > > scipy.ndimage.convolve() with a square array of 1s the size of the > desired neighborhood as the weights parameter. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20090603/66fc9cdf/attachment-0001.html

From robert.kern at gmail.com Wed Jun 3 17:56:40 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 3 Jun 2009 16:56:40 -0500
Subject: [SciPy-user] quaternion and slerp interpolator, and all that
In-Reply-To: 
References: <4A24BA2F.8030907@lpta.in2p3.fr> <3d375d730906012241h2d9ab41fs29ba58a2b8e40716@mail.gmail.com> <4A24C327.8090003@lpta.in2p3.fr> <3d375d730906012325kf95fb4fy4e64eab61e3e14b9@mail.gmail.com> <4A25246F.1090301@lpta.univ-montp2.fr>
Message-ID: <3d375d730906031456l32a4ce38nc14f9ecfcbfb16fb@mail.gmail.com>

On Wed, Jun 3, 2009 at 16:39, killian koepsell wrote:
> Hi,
>
> I have a related question. I have programmed a couple of
> quaternion-valued functions and used arrays with a customized numpy
> dtype for it. Something like
>
>   dtype = [('r',np.double),('i',np.double),('j',np.double),('k',np.double)]
>
> Now, I am wondering if there is an easy way (e.g. using cython) to
> define a custom quaternion scalar type and make numpy aware of it. I
> found some information in the numpy book about this topic but couldn't
> find any example code.
>
> If anyone has working example code, that would be great -- as I said,
> ideally using cython, but c code would be fine too.

Look at the docs/newdtype_example/ directory.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From josephsmidt at gmail.com Wed Jun 3 22:12:08 2009
From: josephsmidt at gmail.com (Joseph Smidt)
Date: Wed, 3 Jun 2009 19:12:08 -0700
Subject: [SciPy-user] Error estimates with leastsq?
Message-ID: <142682e10906031912i61841e01hc7e52e9851fa0288@mail.gmail.com>

Hello,

I am trying to best fit data with theory using leastsq. It works, in that the best fit curve fits the data fairly well. I was wondering how I could find the error bars on the parameters.

Is this what cov_x is for that leastsq returns? (See http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html#scipy.optimize.leastsq) What is meant by "estimate of the jacobian around the solution"? Is this related to the error bars? It says "see curve_fit", but I couldn't find that page.

For example, for output I get the best fit parameters are [10.8138327, 25.18203823] with

cov_x = [[  773.42733539, -1791.83769517],
         [-1791.83769517,  5203.77670479]]

Is this saying the best fit for parameter 1 is 10.81 +/- sqrt(773) and for parameter 2 = 25.18 +/- sqrt(5203)? Thanks.

Joseph Smidt

--
------------------------------------------------------------------------
Joseph Smidt 

Physics and Astronomy
4129 Frederick Reines Hall
Irvine, CA 92697-4575
Office: 949-824-3269

From josef.pktd at gmail.com Wed Jun 3 22:43:52 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 3 Jun 2009 22:43:52 -0400
Subject: [SciPy-user] Error estimates with leastsq?
In-Reply-To: <142682e10906031912i61841e01hc7e52e9851fa0288@mail.gmail.com>
References: <142682e10906031912i61841e01hc7e52e9851fa0288@mail.gmail.com>
Message-ID: <1cd32cbb0906031943xcc76bc3x38d83097a3e3f96@mail.gmail.com>

On Wed, Jun 3, 2009 at 10:12 PM, Joseph Smidt wrote:
> Hello,
>
> I am trying to best fit data with theory using leastsq. It works,
> in that the best fit curve fits the data fairly well. I was wondering
> how I could find the error bars on the parameters.
>
> Is this what cov_x is for that leastsq returns? (See
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html#scipy.optimize.leastsq)
> What is meant by "estimate of the jacobian around the solution"? Is
> this related to the error bars? It says "see curve_fit", but I
> couldn't find that page.
>
> For example, for output I get the best fit parameters are [10.8138327, 25.18203823] with
>
> cov_x = [[  773.42733539, -1791.83769517],
>          [-1791.83769517,  5203.77670479]]
>
> Is this saying the best fit for parameter 1 is 10.81 +/- sqrt(773)
> and for parameter 2 = 25.18 +/- sqrt(5203)? Thanks.

No, cov_x is not the correct covariance matrix for the parameter estimates. It needs to be multiplied by the SSE.

curve_fit source is here:
http://projects.scipy.org/scipy/browser/trunk/scipy/optimize/minpack.py#L331

or try

>>> from scipy import optimize
>>> help(optimize.curve_fit)

You can also look for the discussion 4 months ago (Feb 12 scipy-user) when curve_fit got introduced and we were checking the correct normalizations.

Josef

From david at ar.media.kyoto-u.ac.jp Thu Jun 4 06:24:32 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 04 Jun 2009 19:24:32 +0900
Subject: [SciPy-user] Scipy 0.7.1rc1 released
Message-ID: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>

Hi,

The RC1 for 0.7.1 scipy release has just been tagged. This is a bug-only release, see below for the release notes. More information can also be found on the trac website:

http://projects.scipy.org/scipy/milestone/0.7.1

Please test it !

The scipy developers

--

=========================
SciPy 0.7.1 Release Notes
=========================

.. contents::

SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0.

scipy.signal
============

Several memory leaks in lfilter have been fixed, and the support for array object has been fixed as well.

scipy.sparse
============

scipy.io
========

Some performance regressions in 0.7.0 have been fixed in 0.7.1 (#882 and #885).

Windows binaries for python 2.6
===============================

python 2.6 binaries for windows are now included. Python 2.6 binaries require numpy 1.3.0 or above, other binaries require numpy 1.2.0 or above.

Universal build for scipy
=========================

Mac OS X binary installer is now a universal build, and does not require gfortran to be installed. The binary requires numpy 1.2.0 or above and the python found on python.org.
Checksums
=========

1632c340651ad097967921dd7539f0f0  release/installers/scipy-0.7.1rc1.zip
66d18c5557014ba0a839ca3c22c0f191  release/installers/scipy-0.7.1rc1-win32-superpack-python2.5.exe
64a007f88619c2ce75d8a2a7e558b4d4  release/installers/scipy-0.7.1rc1-win32-superpack-python2.6.exe
be5697925454f2b5c9da0dd092fdcd03  release/installers/scipy-0.7.1rc1.tar.gz

From matthew.brett at gmail.com Thu Jun 4 11:11:26 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 4 Jun 2009 11:11:26 -0400
Subject: [SciPy-user] [Numpy-discussion] Scipy 0.7.1rc1 released
In-Reply-To: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>
References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>
Message-ID: <1e2af89e0906040811g2671142j987922ac69ccd909@mail.gmail.com>

Hi,

> The RC1 for 0.7.1 scipy release has just been tagged. This is a
> bug-only release

I feel (y)our pain, but don't you mean 'bug-fix only release'? ;-)

Matthew

From rjchacko at gmail.com Thu Jun 4 11:47:26 2009
From: rjchacko at gmail.com (Ranjit Chacko)
Date: Thu, 4 Jun 2009 11:47:26 -0400
Subject: [SciPy-user] SciPy-user Digest, Vol 70, Issue 6
In-Reply-To: 
References: 
Message-ID: 

Thanks David, that's kind of what I figured but I wanted to make sure I wasn't missing anything. It does make a difference to not use the constantly updated array, so it seems weave is the only way to really speed this up.

Is there a built in function for doing this circular padding?

>
> ------------------------------
>
> Message: 4
> Date: Wed, 3 Jun 2009 14:48:53 -0700 (PDT)
> From: David Baddeley 
> Subject: Re: [SciPy-user] can this be vectorized?
> To: scipy-user at scipy.org
> Message-ID: <42904.26504.qm at web33003.mail.mud.yahoo.com>
> Content-Type: text/plain; charset=utf-8
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From R.Springuel at umit.maine.edu Thu Jun 4 15:21:14 2009
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Thu, 04 Jun 2009 15:21:14 -0400
Subject: [SciPy-user] Interpreter behavior
Message-ID: <4A281EAA.2000604@umit.maine.edu>

I just updated my installation of numpy/scipy using the Superpack for Mac OS 10.5. However, doing so seems to have disabled interactive input editing and history substitution in the python interpreter (that's the feature that allows the use of up and down keys to scroll through recent commands, as well as other line editing features). Does anyone know how to re-enable this feature and if it's possible to prevent future updates from disabling it again?

--
R.
Padraic Springuel
Research Assistant
Department of Physics and Astronomy
University of Maine
Bennett 309
Office Hours: By appointment only

From josef.pktd at gmail.com Thu Jun 4 16:47:28 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 4 Jun 2009 16:47:28 -0400
Subject: Re: [SciPy-user] [Numpy-discussion] Scipy 0.7.1rc1 released
In-Reply-To: References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>
Message-ID: <1cd32cbb0906041347l7b5573f3r275e6239db87ff61@mail.gmail.com>

On Thu, Jun 4, 2009 at 4:17 PM, Pauli Virtanen wrote:
> Thu, 04 Jun 2009 19:24:32 +0900, David Cournapeau wrote:
> [clip]
>> =========================
>> SciPy 0.7.1 Release Notes
>> =========================
>>
>> .. contents::
>>
>> SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0.
>
> scipy.special
> =============
>
> Several bugs of varying severity were fixed in the special functions:
> (snip: the full list appears verbatim in the 0.7.1rc2 release notes below)
>
> Also, when ``scipy.special.errprint(1)`` has been enabled, warning
> messages are now issued as Python warnings instead of printing them to
> stderr.
>
>     ***
>
> I added this to 0.7.1-notes.rst.

I should also add the bugfixes to stats (especially linregress)

Josef

From david at ar.media.kyoto-u.ac.jp Thu Jun 4 19:42:39 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 05 Jun 2009 08:42:39 +0900
Subject: Re: [SciPy-user] [Numpy-discussion] Scipy 0.7.1rc1 released
In-Reply-To: <1e2af89e0906040811g2671142j987922ac69ccd909@mail.gmail.com>
References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp> <1e2af89e0906040811g2671142j987922ac69ccd909@mail.gmail.com>
Message-ID: <4A285BEF.6040502@ar.media.kyoto-u.ac.jp>

Matthew Brett wrote:
> Hi,
>
>> The RC1 for 0.7.1 scipy release has just been tagged. This is a
>> bug-only release
>
> I feel (y)our pain, but don't you mean 'bug-fix only release'? ;-)

Actually, there is one big bug on python 2.6 for mac os x, so maybe the
bug-only is appropriate :)

cheers,

David

From cournape at gmail.com Fri Jun 5 07:09:45 2009
From: cournape at gmail.com (David Cournapeau)
Date: Fri, 5 Jun 2009 20:09:45 +0900
Subject: [SciPy-user] scipy 0.7.1rc2 released
Message-ID: <5b8d13220906050409u30286931w7bd9aac1e01b9ebf@mail.gmail.com>

Hi,

The RC2 for 0.7.1 scipy release has just been tagged. This is a
bug-fixes only release, see below for the release notes. More
information can also be found on the trac website:

http://projects.scipy.org/scipy/milestone/0.7.1

The only code change compared to the RC1 is one fix which is essential
for the mac os x/python 2.6 combination. Tarballs, windows and mac os x
binaries are available.
Please test it ! I am particularly interested in results for scipy
binaries on mac os x (do they work on ppc).

The scipy developers

--

=========================
SciPy 0.7.1 Release Notes
=========================

.. contents::

SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0.

scipy.io
========

Bugs fixed:

- Several fixes in Matlab file IO

scipy.odr
=========

Bugs fixed:

- Work around a failure with Python 2.6

scipy.signal
============

Memory leaks in lfilter have been fixed, as well as support for array
objects.

Bugs fixed:

- #880, #925: lfilter fixes
- #871: bicgstab fails on Win32

scipy.sparse
============

Bugs fixed:

- #883: scipy.io.mmread with scipy.sparse.lil_matrix broken
- lil_matrix and csc_matrix now reject unexpected sequences, cf.
  http://thread.gmane.org/gmane.comp.python.scientific.user/19996

scipy.special
=============

Several bugs of varying severity were fixed in the special functions:

- #503, #640: iv: problems at large arguments fixed by new implementation
- #623: jv: fix errors at large arguments
- #679: struve: fix wrong output for v < 0
- #803: pbdv produces invalid output
- #804: lqmn: fix crashes on some input
- #823: betainc: fix documentation
- #834: exp1 strange behavior near negative integer values
- #852: jn_zeros: more accurate results for large s, also in
  jnp/yn/ynp_zeros
- #853: jv, yv, iv: invalid results for non-integer v < 0, complex x
- #854: jv, yv, iv, kv: return nan more consistently when out-of-domain
- #927: ellipj: fix segfault on Windows
- #946: ellipj: fix segfault on Mac OS X/python 2.6 combination.
- ive, jve, yve, kv, kve: with real-valued input, return nan for
  out-of-domain instead of returning only the real part of the result.

Also, when ``scipy.special.errprint(1)`` has been enabled, warning
messages are now issued as Python warnings instead of printing them to
stderr.

scipy.stats
===========

- linregress, mannwhitneyu, describe: errors fixed
- kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist,
  truncexpon, planck: improvements to numerical accuracy in distributions

Windows binaries for python 2.6
===============================

python 2.6 binaries for windows are now included. The binary for python
2.5 requires numpy 1.2.0 or above, and the one for python 2.6 requires
numpy 1.3.0 or above.

Universal build for scipy
=========================

Mac OS X binary installer is now a proper universal build, and does not
depend on gfortran anymore (libgfortran is statically linked). The
python 2.5 version of scipy requires numpy 1.2.0 or above, the python
2.6 version requires numpy 1.3.0 or above.
Checksums
=========

08cdf8d344535fcb5407dafd9f120b9b  release/installers/scipy-0.7.1rc2.tar.gz
93595ca9f0b5690a6592c9fc43e9253d  release/installers/scipy-0.7.1rc2-py2.6-macosx10.5.dmg
fc8f434a9b4d76f1b38b7025f425127b  release/installers/scipy-0.7.1rc2.zip
8cdc2472f3282f08a703cdcca5c92952  release/installers/scipy-0.7.1rc2-win32-superpack-python2.5.exe
15c4c45de931bd7f13e4ce24bd59579e  release/installers/scipy-0.7.1rc2-win32-superpack-python2.6.exe
e42853e39b3b4f590824e3a262863ef6  release/installers/scipy-0.7.1rc2-py2.5-macosx10.5.dmg

From jgomezdans at gmail.com Fri Jun 5 07:50:10 2009
From: jgomezdans at gmail.com (Jose Gómez-Dans)
Date: Fri, 5 Jun 2009 12:50:10 +0100
Subject: [SciPy-user] Efficient update of dictionary holding arrays
Message-ID: <200906051250.11040.jgomezdans@gmail.com>

Hi,
This is probably something that is very python-centric and not very
scipy-centric, but maybe the folk here can shed light on it. I am
storing numpy arrays in dictionaries. The keys to such dictionaries are
tuples, and hence fairly complex. The dictionaries are huge (they are
around 100000 elements, but potentially I'd like to process around 6M
elements). Each dictionary element, in itself, points to another
dictionary, with some 10 keys, where each of them is a numpy array.

What I want to achieve is to "trim" these numpy arrays (remove the
first TRIM elements for all the arrays). My attempt goes like this:

for k in self.data_dictionary.iterkeys():
    for w in self.data_dictionary[k].iterkeys():
        if w <> 'array2d':
            self.data_dictionary[k][w] = self.data_dictionary[k][w][TRIM:]
        else:
            for b in xrange(7):
                self.data_dictionary[k][w][b] = self.data_dictionary[k][w][b][TRIM:]

This is very slow. I guess that I have two nested loops, and then a
further test. Is there some way to speed it up?

Thanks!
Jose

From ferrell at diablotech.com Fri Jun 5 09:17:21 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Fri, 5 Jun 2009 07:17:21 -0600
Subject: [SciPy-user] Append a date to a DateArray
Message-ID: <94777D2F-332A-45D0-8B69-505BB6578455@diablotech.com>

Is there a canonical way to append a date to a DateArray in timeseries?
The way I'm doing it is kind of kludgy (convert to ndarray, append,
convert back to DateArray). The DateArray isn't huge, so I'm not
particularly concerned about speed, just looking for something more
elegant.

thanks,
-robert

From rmay31 at gmail.com Fri Jun 5 10:35:38 2009
From: rmay31 at gmail.com (Ryan May)
Date: Fri, 5 Jun 2009 09:35:38 -0500
Subject: Re: [SciPy-user] Efficient update of dictionary holding arrays
In-Reply-To: <200906051250.11040.jgomezdans@gmail.com>
References: <200906051250.11040.jgomezdans@gmail.com>
Message-ID:

2009/6/5 Jose Gómez-Dans
> Hi,
> This is probably something that is very python-centric and not very
> scipy-centric, but maybe the folk here can shed light on it. (snip)
>
> What I want to achieve is to "trim" these numpy arrays (remove the
> first TRIM elements for all the arrays).
> My attempt goes like this:
>
> for k in self.data_dictionary.iterkeys():
>     for w in self.data_dictionary[k].iterkeys():
>         if w <> 'array2d':
>             self.data_dictionary[k][w] = self.data_dictionary[k][w][TRIM:]
>         else:
>             for b in xrange(7):
>                 self.data_dictionary[k][w][b] = self.data_dictionary[k][w][b][TRIM:]

I haven't put much thought here on approaches, but one thing that should
help is using this for the outer loop:

for k, value in self.data_dictionary.iteritems():  # gives key, value pairs from the dictionary

And then replace everywhere you use self.data_dictionary[k] with value.
That cuts down on having to look up data_dictionary[k] all over the
place.
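Spelled out for your full loop - an untested sketch, assuming the
structure from your post (TRIM and the dictionary itself are yours):

for k, value in self.data_dictionary.iteritems():
    for w, arr in value.iteritems():
        if w != 'array2d':
            value[w] = arr[TRIM:]   # slice once, reassign through the inner dict
        else:
            for b in xrange(7):
                arr[b] = arr[b][TRIM:]

The slicing itself is cheap (numpy slices are views); the win is mostly
in avoiding the repeated dictionary lookups.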
Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From rpyle at post.harvard.edu Fri Jun 5 11:04:13 2009
From: rpyle at post.harvard.edu (Robert Pyle)
Date: Fri, 05 Jun 2009 11:04:13 -0400
Subject: Re: [SciPy-user] Scipy 0.7.1rc1 released
In-Reply-To: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>
References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>
Message-ID: <3DB1C83D-186C-4CB2-BAB2-28A8506AEEBF@post.harvard.edu>

Hi David,

I've got a dual G5 PPC mac, running 10.5.7.

The binary installer for 0.7.1rc2 worked without a hitch, but running
scipy.test() resulted in a segfault. I'm attaching what I got in
Terminal.

Bob Pyle

On Jun 4, 2009, at 6:24 AM, David Cournapeau wrote:
> Hi,
>
> The RC1 for 0.7.1 scipy release has just been tagged. This is a
> bug-only release, see below for the release notes.
> (snip)
> Please test it !

From david at ar.media.kyoto-u.ac.jp Fri Jun 5 11:42:34 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 06 Jun 2009 00:42:34 +0900
Subject: Re: [SciPy-user] Scipy 0.7.1rc1 released
In-Reply-To: <3DB1C83D-186C-4CB2-BAB2-28A8506AEEBF@post.harvard.edu>
References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp> <3DB1C83D-186C-4CB2-BAB2-28A8506AEEBF@post.harvard.edu>
Message-ID: <4A293CEA.5050306@ar.media.kyoto-u.ac.jp>

Robert Pyle wrote:
> Hi David,
>
> I've got a dual G5 PPC mac, running 10.5.7.
>
> The binary installer for 0.7.1rc2 worked without a hitch, but running
> scipy.test() resulted in a segfault. I'm attaching what I got in
> Terminal.

Hm, I got a segfault as well on my intel-based macbook when forcing PPC
emulation, but I thought (hoped would be more appropriate I guess) it
was a rosetta problem. The good news is that I can reproduce the bug, I
guess :)

cheers,

David

From josephsmidt at gmail.com Fri Jun 5 12:15:50 2009
From: josephsmidt at gmail.com (Joseph Smidt)
Date: Fri, 5 Jun 2009 09:15:50 -0700
Subject: [SciPy-user] Avoiding For Loops Question
Message-ID: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com>

Hi,

I know that avoiding large for loops is a good way to speed up your
program. I know you can avoid a for loop like this:

for i in xrange(len(x)):
    a[i] = 2.0*x[i]

as:

a[:] = 2.0*x[:]

Now, is it possible to write get around these types of for loops using
any tools from scipy?

for i in xrange(len(x)):
    a[i] = i*(i+1)/2*x[i]

or this:

for i in xrange(y.shape[0]):
    for k in xrange(y.shape[1]):
        a[i] += x[i] + y[i][k]

or similarly:

for i in xrange(y.shape[0]):
    for k in xrange(y.shape[1]):
        a[i][k] = x[i] + y[i][k]

Thanks.

Joseph Smidt

--
------------------------------------------------------------------------
Joseph Smidt

Physics and Astronomy
4129 Frederick Reines Hall
Irvine, CA 92697-4575
Office: 949-824-3269

From tim.whitcomb at nrlmry.navy.mil Fri Jun 5 12:37:22 2009
From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim)
Date: Fri, 5 Jun 2009 09:37:22 -0700
Subject: Re: [SciPy-user] Avoiding For Loops Question
In-Reply-To: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com>
References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com>
Message-ID:

> Now, is it possible to write get around these types of for
> loops using any tools from scipy?

Numpy, yes.

> for i in xrange(len(x)):
>     a[i] = i*(i+1)/2*x[i]

The values of i are just indices, and those can be precomputed
beforehand:

i = numpy.arange(len(x))
a[:] = i[:]*(i[:]+1)/2*x[:]

> for i in xrange(y.shape[0]):
>     for k in xrange(y.shape[1]):
>         a[i] += x[i] + y[i][k]

Break the sum into two pieces - the x component is just repeated
y.shape[1] times, and y is added up along the second axis:

a[:] = x[:]*y.shape[1] + y[:].sum(axis=1)

> for i in xrange(y.shape[0]):
>     for k in xrange(y.shape[1]):
>         a[i][k] = x[i] + y[i][k]

Here, you are copying y into a, then adding the same value of x across
an entire axis. Use array broadcasting to make x be the same shape as
y, but with each column the same value:

a[:,:] = x[:, numpy.newaxis] + y[:,:]

I don't know what the style standard is regarding using the colons to
indicate entire arrays (i.e. a = x[:,numpy.newaxis] + y instead), but
these should work for you.
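A quick way to sanity-check vectorized rewrites like these is to compare
them against the original loops on small random inputs - an untested
sketch with made-up shapes:

import numpy

x = numpy.random.rand(4)
y = numpy.random.rand(4, 3)

a_loop = numpy.zeros((4, 3))
for i in xrange(y.shape[0]):
    for k in xrange(y.shape[1]):
        a_loop[i][k] = x[i] + y[i][k]

a_vec = x[:, numpy.newaxis] + y   # broadcast version
print numpy.allclose(a_loop, a_vec)   # should print True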
Tim

From josephsmidt at gmail.com Fri Jun 5 13:03:46 2009
From: josephsmidt at gmail.com (Joseph Smidt)
Date: Fri, 5 Jun 2009 10:03:46 -0700
Subject: Re: [SciPy-user] Avoiding For Loops Question
In-Reply-To: References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com>
Message-ID: <142682e10906051003l31157807lcf9b05d3f925aacc@mail.gmail.com>

Hey, thanks a lot, you guys are the best.

On Fri, Jun 5, 2009 at 9:37 AM, Whitcomb, Mr. Tim wrote:
> (snip)
> I don't know what the style standard is regarding using the colons to
> indicate entire arrays (i.e. a = x[:,numpy.newaxis] + y instead), but
> these should work for you.
>
> Tim
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
------------------------------------------------------------------------
Joseph Smidt

Physics and Astronomy
4129 Frederick Reines Hall
Irvine, CA 92697-4575
Office: 949-824-3269

From josef.pktd at gmail.com Fri Jun 5 13:26:50 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 5 Jun 2009 13:26:50 -0400
Subject: Re: [SciPy-user] Avoiding For Loops Question
In-Reply-To: References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com>
Message-ID: <1cd32cbb0906051026l2ea07fefh6430fdaae78fd94@mail.gmail.com>

On Fri, Jun 5, 2009 at 12:37 PM, Whitcomb, Mr. Tim wrote:
>> Now, is it possible to write get around these types of for
>> loops using any tools from scipy?
>
> Numpy, yes.
>
>> for i in xrange(len(x)):
>>     a[i] = i*(i+1)/2*x[i]
>
> The values of i are just indices, and those can be precomputed
> beforehand:
> i = numpy.arange(len(x))
> a[:] = i[:]*(i[:]+1)/2*x[:]
>
>> for i in xrange(y.shape[0]):
>>     for k in xrange(y.shape[1]):
>>
a[i] += x[i] + y[i][k] >> >> Break the sum into two pieces - the x component is just repeated >> y.shape[1] times, and y is added up along the second axis: >> a[:] = x[:]*y.shape[1] + y[:].sum(axis=1) >> >>> for i in xrange(y.shape[0]): >>> ? ? for k in xrange(y.shape[1]): >>> ? ? ? ? a[i][k] = x[i] + y[i][k] >> >> Here, you are copying y into a, then adding the same value of x across >> an entire axis. ?Use array broadcasting to make x be the same shape as >> y, but with each column the same value: >> a[:,:] = x[:, numpy.newaxis] + y[:,:] >> >> I don't know what the style standard is regarding using the colons to >> indicate entire arrays (i.e. >> a = x[:,numpy.newaxis] + y instead), but these should work for you. > > these 2 are two different operations > > ?a[:,:] = x[:, numpy.newaxis] + y[:,:] > > on right side: ?y[:,:] is the same as y, [:,:] is redundant > on the left side a[:,:] = ? ?assigns the content of the right side to > existing array `a` > ? ? ?if the dimensions don't agree, then you get an exception > > a = x[:,numpy.newaxis] + y > this assigns the temporary result of the right side to the name `a`, > no matter what `a` was before > > so, in the examples above, I think, you can drop all [:], [:,:] > > Josef > > >> >> Tim > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From tim.whitcomb at nrlmry.navy.mil Fri Jun 5 13:48:59 2009 From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim) Date: Fri, 5 Jun 2009 10:48:59 -0700 Subject: [SciPy-user] Avoiding For Loops Question In-Reply-To: <1cd32cbb0906051026l2ea07fefh6430fdaae78fd94@mail.gmail.com> References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> <1cd32cbb0906051026l2ea07fefh6430fdaae78fd94@mail.gmail.com> Message-ID: > > I don't know what the style standard is regarding using the > colons to > > indicate entire arrays (i.e. > > a = x[:,numpy.newaxis] + y instead), but these should work for you. > > > so, in the examples above, I think, you can drop all [:], [:,:] > > Josef You can indeed - I checked and made sure. What is the generally accepted practice for their inclusion? Sometimes in my Fortran code I use (:) to make it clear that these are array operations, but is the Numpy/Scipy convention to leave them out if they are not necessary? Tim From ivo.maljevic at gmail.com Fri Jun 5 13:56:45 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Fri, 5 Jun 2009 13:56:45 -0400 Subject: [SciPy-user] Avoiding For Loops Question In-Reply-To: References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> Message-ID: <826c64da0906051056t57cb8e22m54cc189e699efb35@mail.gmail.com> Tim, Just out of curiosity, why: > i = numpy.arange(len(x)) > a[:] = i[:]*(i[:]+1)/2*x[:] and not: > i = numpy.arange(len(x)) > a = i*(i+1)/2*x Ivo 2009/6/5 Whitcomb, Mr. Tim : >> ? ?Now, is it possible to write get around these types of for >> loops using any tools from scipy? > > Numpy, yes. > >> for i in xrange(len(x)): >> ? ? ?a[i] = i*(i+1)/2*x[i] >> > > The values of i are just indices, and those can be precomputed > beforehand: > i = numpy.arange(len(x)) > a[:] = i[:]*(i[:]+1)/2*x[:] > >> for i in xrange(y.shape[0]): >> ? ? for k in xrange(y.shape[1]): >> ? ? ? ? 
a[i] += x[i] + y[i][k] > > Break the sum into two pieces - the x component is just repeated > y.shape[1] times, and y is added up along the second axis: > a[:] = x[:]*y.shape[1] + y[:].sum(axis=1) > >> for i in xrange(y.shape[0]): >> ? ? for k in xrange(y.shape[1]): >> ? ? ? ? a[i][k] = x[i] + y[i][k] > > Here, you are copying y into a, then adding the same value of x across > an entire axis. ?Use array broadcasting to make x be the same shape as > y, but with each column the same value: > a[:,:] = x[:, numpy.newaxis] + y[:,:] > > I don't know what the style standard is regarding using the colons to > indicate entire arrays (i.e. > a = x[:,numpy.newaxis] + y instead), but these should work for you. > > Tim > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Fri Jun 5 13:58:34 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 5 Jun 2009 13:58:34 -0400 Subject: [SciPy-user] Avoiding For Loops Question In-Reply-To: References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> <1cd32cbb0906051026l2ea07fefh6430fdaae78fd94@mail.gmail.com> Message-ID: <1cd32cbb0906051058u9b2184di547474c2c7cd4234@mail.gmail.com> On Fri, Jun 5, 2009 at 1:48 PM, Whitcomb, Mr. Tim wrote: >> > I don't know what the style standard is regarding using the >> colons to >> > indicate entire arrays (i.e. >> > a = x[:,numpy.newaxis] + y instead), but these should work for you. >> >> >> so, in the examples above, I think, you can drop all [:], [:,:] >> >> Josef > > You can indeed - I checked and made sure. ?What is the generally > accepted practice for their inclusion? ?Sometimes in my Fortran code I > use (:) to make it clear that these are array operations, but is the > Numpy/Scipy convention to leave them out if they are not necessary? > On the left hand side they have different meaning, so it's important to decide whether they are necessary or not. On the right hand side, they are redundant and I didn't see much code that would use them. Also, they restrict you in the shape of the array that you can use in the expression, e.g. using [:,:] will raise an exception if the array is only one dimensional and will have different meaning if the array is 3d. So to write flexible code it is better to leave them off if they are not necessary. Josef From tim.whitcomb at nrlmry.navy.mil Fri Jun 5 14:02:17 2009 From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim) Date: Fri, 5 Jun 2009 11:02:17 -0700 Subject: [SciPy-user] Avoiding For Loops Question In-Reply-To: <826c64da0906051056t57cb8e22m54cc189e699efb35@mail.gmail.com> References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> <826c64da0906051056t57cb8e22m54cc189e699efb35@mail.gmail.com> Message-ID: > Tim, > Just out of curiosity, why: > > > i = numpy.arange(len(x)) > > a[:] = i[:]*(i[:]+1)/2*x[:] > > and not: > > > i = numpy.arange(len(x)) > > a = i*(i+1)/2*x > > Ivo > Because I'm used to doing this in some of my Fortran 90 code, and I wasn't sure what the "right" way is to do it in Numpy/Scipy. See Josef's notes on this thread - he writes "[t]o write flexible code it is better to leave [the [:] parts] off if they are not necessary." Given that, the second version is the "better" of the functionally equivalent pair. 
Tim

From ivo.maljevic at gmail.com Fri Jun 5 14:05:22 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Fri, 5 Jun 2009 14:05:22 -0400
Subject: Re: [SciPy-user] Avoiding For Loops Question
In-Reply-To: References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> <826c64da0906051056t57cb8e22m54cc189e699efb35@mail.gmail.com>
Message-ID: <826c64da0906051105n1a64c0b1i8a5852a5bb72531c@mail.gmail.com>

I know, sorry, you have already responded to this question. I fired my
question off too fast.

Thanks,
Ivo

2009/6/5 Whitcomb, Mr. Tim :
> (snip)
> Because I'm used to doing this in some of my Fortran 90 code, and I
> wasn't sure what the "right" way is to do it in Numpy/Scipy. See
> Josef's notes on this thread - he writes "[t]o write flexible code it
> is better to leave [the [:] parts] off if they are not necessary."
> Given that, the second version is the "better" of the functionally
> equivalent pair.
>
> Tim

From tim.whitcomb at nrlmry.navy.mil Fri Jun 5 14:15:23 2009
From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim)
Date: Fri, 5 Jun 2009 11:15:23 -0700
Subject: Re: [SciPy-user] Avoiding For Loops Question
In-Reply-To: <142682e10906051047l2420a00dsdbe095d12457b903@mail.gmail.com>
References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> <1cd32cbb0906051026l2ea07fefh6430fdaae78fd94@mail.gmail.com> <142682e10906051047l2420a00dsdbe095d12457b903@mail.gmail.com>
Message-ID:

> All right guys, last one:
>
> for l in xrange(1,1000):
>     for m in xrange(0,l+1):
>         Alm[l][m] = alm[l][m]*cl[l]
>         Blm[l][m] = alm[l][m]*cl[l]
>
> Here is one where the second index depends on the value of the first.

In this case, you are still summing across columns (i.e. the first
index of all parts of the expression is constant). The only variation
is that in row N, you only want the first N columns. The output pattern
that you get from your for loop looks like:

x
x x
x x x
...

which, in a rectangular matrix, means that it's lower triangular, so
you can do something like:

A = numpy.tril(alm + cl[:,numpy.newaxis])

Tim

From tim.whitcomb at nrlmry.navy.mil Fri Jun 5 14:19:59 2009
From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim)
Date: Fri, 5 Jun 2009 11:19:59 -0700
Subject: Re: [SciPy-user] Avoiding For Loops Question
Message-ID:

> (snip)
> which, in a rectangular matrix, means that it's lower
> triangular, so you can do something like:

Whoops - got the signs mixed up.
This should be A = numpy.tril(alm * cl[:,numpy.newaxis]) Tim From dwf at cs.toronto.edu Fri Jun 5 14:53:01 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 5 Jun 2009 14:53:01 -0400 Subject: [SciPy-user] Avoiding For Loops Question In-Reply-To: References: <142682e10906050915n47bf6c98gd14d03d53cda1579@mail.gmail.com> <1cd32cbb0906051026l2ea07fefh6430fdaae78fd94@mail.gmail.com> Message-ID: <10D7E473-DF18-46FF-8E3B-CC56FC3D555D@cs.toronto.edu> On 5-Jun-09, at 1:48 PM, Whitcomb, Mr. Tim wrote: > You can indeed - I checked and made sure. What is the generally > accepted practice for their inclusion? Sometimes in my Fortran code I > use (:) to make it clear that these are array operations, but is the > Numpy/Scipy convention to leave them out if they are not necessary? As Josef hinted at but (as far as I can see) never explicitly spelled out, [:] on the left hand side indicates e.g. that for foo[:] = bar the contents of bar should be copied into the location already pointed to by foo. foo = bar is basically a reference assignment. In [1]: x = zeros(50) In [2]: y = arange(50) In [3]: x[:] = y In [4]: x[0] = -500 # won't affect y, since previous line was copy In [5]: y[0] Out[5]: 0 In [6]: x = y In [7]: x[0] = -500 In [8]: y[0] Out[8]: -500 In [9]: x[:] = arange(100) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /Users/dwf/ in () ValueError: shape mismatch: objects cannot be broadcast to a single shape In [10]: x = arange(100) # succeeds In [11]: len(x) Out[11]: 100 From Christopher.Chang at nrel.gov Fri Jun 5 20:04:15 2009 From: Christopher.Chang at nrel.gov (Chang, Christopher) Date: Fri, 5 Jun 2009 18:04:15 -0600 Subject: [SciPy-user] Broken dmg? Message-ID: Hi, After building Python 2.6.2 with the system gcc on a MacBook Pro (OS X 10.5.7, Intel Core 2 Duo) and numpy with gcc + gfortran from http://r.research.att.com/tools/, I tried to install the binary scipy-0.7.1rc2-py2.6-macosx10.5. When it comes time to select a destination, I get "You cannot install scipy 0.7.1rc2 on this volume. scipy requires System Python 2.6 to install" If the installer is just checking Python from the environment, it should be picking up the new version, not the system 2.5. At any rate, the same thing happens with the py2.5 scipy package. What exactly is the installer checking for and not finding? Thanks, Chris From textdirected at gmail.com Fri Jun 5 23:18:42 2009 From: textdirected at gmail.com (HEMMI, Shigeru) Date: Sat, 6 Jun 2009 12:18:42 +0900 Subject: [SciPy-user] Scipy 0.7.1rc1 released In-Reply-To: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp> References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp> Message-ID: Hello all, 2009/6/4 David Cournapeau : (snip) > Please test it ! I tried to build scipy-0.7.1rc2 on MAC OS X (10.3) ppc, but compilation failed. See the following lines for details: $ pwd /Users/zgzgu/scipy-0.7.1rc2 $ uname -a Darwin hagi.local 7.9.0 Darwin Kernel Version 7.9.0: Wed Mar 30 20:11:17 PST 2005; root:xnu/xnu-517.12.7.obj~1/RELEASE_PPC Power Macintosh powerpc $ which python /Users/zgzg/bin/python $ python Python 2.5.4 (r254:67916, May 2 2009, 21:23:05) [GCC 3.3 20030304 (Apple Computer, Inc. build 1671)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.3.0' >>> ^D $ which gcc /usr/bin/gcc $ gcc --version gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1671) Copyright (C) 2002 Free Software Foundation, Inc. 
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. $ python setup.py build (SNIP MANY LINES) creating build/temp.macosx-10.3-ppc-2.5/scipy/sparse/linalg/eigen creating build/temp.macosx-10.3-ppc-2.5/scipy/sparse/linalg/eigen/arpack creating build/temp.macosx-10.3-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK creating build/temp.macosx-10.3-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS compile options: '-Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Users/zgzg/lib/python2.5/site-packages/numpy/core/include -c' gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:4: warning: type defaults to `int' in declaration of `complex' scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:4: error: parse error before "float" scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:6: warning: function declaration isn't a prototype scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: In function `veclib_cdotc_': scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: `N' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: (Each undeclared identifier is reported only once scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: for each function it appears in.) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: `X' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: `incX' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: `Y' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: `incY' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:7: error: `dotc' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: At top level: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:10: warning: type defaults to `int' in declaration of `complex' scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:10: error: parse error before "float" scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:12: warning: function declaration isn't a prototype scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: In function `veclib_cdotu_': scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:13: error: `N' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:13: error: `X' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:13: error: `incX' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:13: error: `Y' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:13: error: `incY' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:13: error: `dotu' undeclared (first use in this function) scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: At top level: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:16: error: parse error before 
'*' token
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:18: warning: function declaration isn't a prototype
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: In function `veclib_zdotc_':
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:19: error: `N' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:19: error: `X' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:19: error: `incX' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:19: error: `Y' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:19: error: `incY' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:19: error: `dotu' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: At top level:
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:21: error: parse error before '*' token
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:23: warning: function declaration isn't a prototype
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c: In function `veclib_zdotu_':
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:24: error: `N' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:24: error: `X' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:24: error: `incX' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:24: error: `Y' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:24: error: `incY' undeclared (first use in this function)
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:24: error: `dotu' undeclared (first use in this function)
(the same block of compiler errors appears a second time in the log; snipped)
error: Command "gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Users/zgzg/lib/python2.5/site-packages/numpy/core/include -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c -o build/temp.macosx-10.3-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o" failed with exit status 1

Best regards,

From textdirected at gmail.com Fri Jun 5 23:36:39 2009
From: textdirected at gmail.com (HEMMI, Shigeru)
Date: Sat, 6 Jun 2009 12:36:39 +0900
Subject: Re: [SciPy-user] Scipy 0.7.1rc1 released
In-Reply-To: References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp>
Message-ID:

Sorry, let me cancel this post. I remembered that I have to insert an
#include line into 3 files:

1. ./scipy/lib/blas/fblaswrap_veclib_c.c.src
2. ./scipy/linalg/src/fblaswrap_veclib_c.c
3. ./scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c

regards,

2009/6/6 HEMMI, Shigeru :
> Hello all,
>
> 2009/6/4 David Cournapeau :
> (snip)
>> Please test it !
>
> I tried to build scipy-0.7.1rc2 on MAC OS X (10.3) ppc, but
> compilation failed. See the following lines for details:
> (snip: environment details and the full build log, quoted from the
> original message above)
From david at ar.media.kyoto-u.ac.jp Sat Jun 6 01:22:46 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 06 Jun 2009 14:22:46 +0900
Subject: Re: [SciPy-user] Broken dmg?
In-Reply-To: References: Message-ID: <4A29FD26.8080005@ar.media.kyoto-u.ac.jp>

Chang, Christopher wrote:
> Hi,
>
> After building Python 2.6.2 with the system gcc on a MacBook Pro (OS X
> 10.5.7, Intel Core 2 Duo) and numpy with gcc + gfortran from
> http://r.research.att.com/tools/, I tried to install the binary
> scipy-0.7.1rc2-py2.6-macosx10.5. When it comes time to select a
> destination, I get
>
> "You cannot install scipy 0.7.1rc2 on this volume. scipy requires
> System Python 2.6 to install"
>
> If the installer is just checking Python from the environment, it
> should be picking up the new version

It can't, or at least I don't know a way to do it - the installer can
only install for the python it was built with.

> At any rate, the same thing happens with the py2.5 scipy package. What
> exactly is the installer checking for and not finding?

A python installed from python.org (the "System Python" message is
misleading here). Anything else is unlikely to work - and anyway, if
you build your own python, I think you should not expect binaries to
work on it (not only numpy/scipy).

cheers,

David

From cournape at gmail.com Sat Jun 6 05:53:13 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 6 Jun 2009 18:53:13 +0900
Subject: Re: [SciPy-user] Scipy 0.7.1rc1 released
In-Reply-To: <3DB1C83D-186C-4CB2-BAB2-28A8506AEEBF@post.harvard.edu>
References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp> <3DB1C83D-186C-4CB2-BAB2-28A8506AEEBF@post.harvard.edu>
Message-ID: <5b8d13220906060253u108641b1j52e30fee12f51f24@mail.gmail.com>

On Sat, Jun 6, 2009 at 12:04 AM, Robert Pyle wrote:
> Hi David,
>
> I've got a dual G5 PPC mac, running 10.5.7.
>
> The binary installer for 0.7.1rc2 worked without a hitch, but running
> scipy.test() resulted in a segfault. I'm attaching what I got in
> Terminal.

Would you mind trying this one ?

http://www.ar.media.kyoto-u.ac.jp/members/david/archives/scipy-0.7.1rc3.dev-py2.5-python.org.dmg

This one did not crash under rosetta, and only two test failures
happen, of which at least one is caused by rosetta.
thanks,

David

From giorgio.luciano at inwind.it Sat Jun 6 06:34:21 2009
From: giorgio.luciano at inwind.it (giorgio.luciano at inwind.it)
Date: Sat, 6 Jun 2009 12:34:21 +0200
Subject: [SciPy-user] Jcamp format read
Message-ID:

Hello to all,
I've written a script that imports all the spectra files in a directory and merges them into one matrix. The imported files are dx files. The bad part is that the script is in Matlab and it requires a function from the Bioinformatics Toolbox (jcampread). Now I want to do the same in Python. I expect no problem translating the script itself, but I don't think I have the time (or the capabilities) to rewrite something like jcampread. The JCAMP-DX format is quite common among scientists, so can anyone share a script/function for importing such files in Python? (I guess an R routine could also do the trick, but I would prefer to use Python.)
Thanks in advance to all
Giorgio

From rpyle at post.harvard.edu Sat Jun 6 11:22:25 2009
From: rpyle at post.harvard.edu (Robert Pyle)
Date: Sat, 06 Jun 2009 11:22:25 -0400
Subject: [SciPy-user] Scipy 0.7.1rc1 released
In-Reply-To: <5b8d13220906060253u108641b1j52e30fee12f51f24@mail.gmail.com>
References: <4A27A0E0.1060903@ar.media.kyoto-u.ac.jp> <3DB1C83D-186C-4CB2-BAB2-28A8506AEEBF@post.harvard.edu> <5b8d13220906060253u108641b1j52e30fee12f51f24@mail.gmail.com>
Message-ID: <805376EA-1EBD-4E26-9041-4E77A05EE817@post.harvard.edu>

On Jun 6, 2009, at 5:53 AM, David Cournapeau wrote:

> On Sat, Jun 6, 2009 at 12:04 AM, Robert Pyle
> wrote:
>> Hi David,
>>
>> I've got a dual G5 PPC mac, running 10.5.7.
>>
>> The binary installer for 0.7.1rc2 worked without a hitch, but running
>> scipy.test() resulted in a segfault. I'm attaching what I got in
>> Terminal.
>
> Would you mind trying this one?
>
> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/scipy-0.7.1rc3.dev-py2.5-python.org.dmg

This time, scipy.test() ran to completion with 1 failure. I'm attaching the output from Terminal.

Bob

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: scipy_test.txt
URL:
-------------- next part --------------

From pgmdevlist at gmail.com Sat Jun 6 14:57:24 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 6 Jun 2009 14:57:24 -0400
Subject: [SciPy-user] Append a date to a DateArray
In-Reply-To: <94777D2F-332A-45D0-8B69-505BB6578455@diablotech.com>
References: <94777D2F-332A-45D0-8B69-505BB6578455@diablotech.com>
Message-ID:

On Jun 5, 2009, at 9:17 AM, Robert Ferrell wrote:
> Is there a canonical way to append a date to a DateArray in
> timeseries? The way I'm doing it is kind of kludgy (convert to
> ndarray, append, convert back to DateArray). The DateArray isn't
> huge, so I'm not particularly concerned about speed, just looking for
> something more elegant.

Sorry for the delayed answer. No, not at this point. However, it is indeed a useful feature. I'll try to come up with something for a next version.

From ivo.maljevic at gmail.com Sat Jun 6 15:42:50 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Sat, 6 Jun 2009 15:42:50 -0400
Subject: [SciPy-user] Cookbook addition?
Message-ID: <826c64da0906061242n20ffd5feje9b41a5389c4e9ee@mail.gmail.com>

Hi,
I wanted to add something to the cookbook, but I'm not quite familiar with this tool. I can edit the main page, but I do not dare save changes for fear of overwriting the main page, and I don't see how to create "child" pages. Anyway, if you guys think this would be useful, you can either post it, or tell me how to do it.
Here is what I generated:

----------------------------
Raw text is below
---------------------------

These two examples illustrate a simple simulation of a digital BPSK modulated communication system where only one sample per symbol is used, and the signal is affected only by AWGN noise.

In the first example, we cycle through different signal-to-noise values, and the signal length is a function of the theoretical probability of error. As a rule of thumb, we want to count about 100 errors for each SNR value, which determines the length of the signal (and noise) vector(s).

{{{
#!python numbers=disable
#!/usr/bin/python
# BPSK digital modulation example
# by Ivo Maljevic

from numpy import *
from scipy.special import erfc
import matplotlib.pyplot as plt

SNR_MIN  = 0
SNR_MAX  = 9
Eb_No_dB = arange(SNR_MIN,SNR_MAX+1)
SNR      = 10**(Eb_No_dB/10.0)  # linear SNR

Pe  = empty(shape(SNR))
BER = empty(shape(SNR))

loop = 0
for snr in SNR:  # SNR loop
    Pe[loop] = 0.5*erfc(sqrt(snr))
    # vector length is a function of Pe (int() so it can be used as a size)
    VEC_SIZE = int(ceil(100/Pe[loop]))

    # signal vector, new vector for each SNR value
    s = 2*random.randint(0,high=2,size=VEC_SIZE)-1

    # linear power of the noise; average signal power = 1
    No = 1.0/snr

    # noise
    n = sqrt(No/2)*random.randn(VEC_SIZE)

    # signal + noise
    x = s + n

    # decode received signal + noise
    y = sign(x)

    # find erroneous symbols
    err = where(y != s)
    error_sum = float(len(err[0]))
    BER[loop] = error_sum/VEC_SIZE
    print 'Eb_No_dB=%4.2f, BER=%10.4e, Pe=%10.4e' % \
          (Eb_No_dB[loop], BER[loop], Pe[loop])
    loop += 1

#plt.semilogy(Eb_No_dB, Pe,'r',Eb_No_dB, BER,'s')
plt.semilogy(Eb_No_dB, Pe,'r',linewidth=2)
plt.semilogy(Eb_No_dB, BER,'-s')
plt.grid(True)
plt.legend(('analytical','simulation'))
plt.xlabel('Eb/No (dB)')
plt.ylabel('BER')
plt.show()
}}}

In the second, slightly modified example, the problem of signal length growth is solved by breaking the signal into frames.

{{{
#!python numbers=disable
#!/usr/bin/python
# BPSK digital modulation: modified example
# by Ivo Maljevic

from scipy import *
from math import sqrt, ceil  # scalar calls are faster
from scipy.special import erfc
import matplotlib.pyplot as plt

rand   = random.rand
normal = random.normal

SNR_MIN   = 0
SNR_MAX   = 10
FrameSize = 10000
Eb_No_dB  = arange(SNR_MIN,SNR_MAX+1)
Eb_No_lin = 10**(Eb_No_dB/10.0)  # linear SNR

# Allocate memory
Pe  = empty(shape(Eb_No_lin))
BER = empty(shape(Eb_No_lin))

# signal vector (for faster exec we can repeat the same frame)
s = 2*random.randint(0,high=2,size=FrameSize)-1

loop = 0
for snr in Eb_No_lin:
    No       = 1.0/snr
    Pe[loop] = 0.5*erfc(sqrt(snr))
    nFrames  = int(ceil(100.0/FrameSize/Pe[loop]))
    error_sum = 0
    scale = sqrt(No/2)

    for frame in arange(nFrames):
        # noise
        n = normal(scale=scale, size=FrameSize)

        # received signal + noise
        x = s + n

        # detection (information is encoded in signal phase)
        y = sign(x)

        # error counting
        err = where(y != s)
        error_sum += len(err[0])

        # end of frame loop
        ##################################################

    # float() keeps the division from truncating now that nFrames is an int
    BER[loop] = error_sum/float(FrameSize*nFrames)  # SNR loop level
    print 'Eb_No_dB=%2d, BER=%10.4e, Pe[loop]=%10.4e' % \
          (Eb_No_dB[loop], BER[loop], Pe[loop])
    loop += 1

plt.semilogy(Eb_No_dB, Pe,'r',linewidth=2)
plt.semilogy(Eb_No_dB, BER,'-s')
plt.grid(True)
plt.legend(('analytical','simulation'))
plt.xlabel('Eb/No (dB)')
plt.ylabel('BER')
plt.show()
}}}

----
. CategoryCookbook
----
CategoryCookbook
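[Editorial aside, not part of Ivo's post: the analytical curve used in both listings is the standard BPSK result Pe = 0.5*erfc(sqrt(Eb/No)). A single point of that curve can be sanity-checked directly, here at 4 dB (the value printed below is approximate):]

from math import sqrt
from scipy.special import erfc

Eb_No = 10**(4/10.0)           # 4 dB converted to linear scale
Pe = 0.5*erfc(sqrt(Eb_No))     # theoretical BPSK bit error probability
print Pe                       # roughly 1.3e-2; the plotted curve should pass through this at 4 dB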
-------------- next part --------------
A non-text attachment was scrubbed...
Name: BPSK_BER.png
Type: image/png
Size: 34816 bytes
Desc: not available
URL:

From gael.varoquaux at normalesup.org Sat Jun 6 15:47:43 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 6 Jun 2009 21:47:43 +0200
Subject: [SciPy-user] Cookbook addition?
In-Reply-To: <826c64da0906061242n20ffd5feje9b41a5389c4e9ee@mail.gmail.com>
References: <826c64da0906061242n20ffd5feje9b41a5389c4e9ee@mail.gmail.com>
Message-ID: <20090606194743.GF10765@phare.normalesup.org>

On Sat, Jun 06, 2009 at 03:42:50PM -0400, Ivo Maljevic wrote:
> Anyway, if you guys think this would be useful, you can either post
> it, or tell me how to do it.

I think this is useful.

Just edit the main page, and create a link to a non-existing page where you want to put your example. Then follow this link, and the wiki will ask you if you want to create a page.

HTH,

Gaël

From ivo.maljevic at gmail.com Sat Jun 6 15:53:17 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Sat, 6 Jun 2009 15:53:17 -0400
Subject: [SciPy-user] Cookbook addition?
In-Reply-To: <20090606194743.GF10765@phare.normalesup.org>
References: <826c64da0906061242n20ffd5feje9b41a5389c4e9ee@mail.gmail.com> <20090606194743.GF10765@phare.normalesup.org>
Message-ID: <826c64da0906061253l73df4317t86b2919308c2a75e@mail.gmail.com>

Thanks, Gaël. That seems unintuitive to me; I sometimes use a "Knowledge" system with similar syntax, but there you have the option to generate "child" pages.
Since there is nothing on Comm theory, I will try to add a few more examples with higher-order modulation schemes and pulse shaping after I return from vacation (in July).

Ivo

2009/6/6 Gael Varoquaux :
> On Sat, Jun 06, 2009 at 03:42:50PM -0400, Ivo Maljevic wrote:
>> Anyway, if you guys think this would be useful, you can either post
>> it, or tell me how to do it.
>
> I think this is useful.
>
> Just edit the main page, and create a link to a non-existing page where
> you want to put your example. Then follow this link, and the wiki will
> ask you if you want to create a page.
>
> HTH,
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From gael.varoquaux at normalesup.org Sat Jun 6 15:58:24 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 6 Jun 2009 21:58:24 +0200
Subject: [SciPy-user] Cookbook addition?
In-Reply-To: <826c64da0906061253l73df4317t86b2919308c2a75e@mail.gmail.com>
References: <826c64da0906061242n20ffd5feje9b41a5389c4e9ee@mail.gmail.com> <20090606194743.GF10765@phare.normalesup.org> <826c64da0906061253l73df4317t86b2919308c2a75e@mail.gmail.com>
Message-ID: <20090606195824.GA5180@phare.normalesup.org>

On Sat, Jun 06, 2009 at 03:53:17PM -0400, Ivo Maljevic wrote:
> Thanks, Gaël. That seems unintuitive to me; I sometimes use a
> "Knowledge" system with similar syntax, but there you have the option
> to generate "child" pages.

> Since there is nothing on Comm theory, I will try to add a few more
> examples with higher-order modulation schemes and pulse shaping after
> I return from vacation (in July).

Excellent. I know nothing about all this. I look forward to reading these pages.

(Off topic): I have a dream of an executable encyclopedia of scientific and numerical methods. It would be fantastic to be able to read an article and execute all the examples, generate all the figures... We are not there yet, of course.
Gaël

From ramercer at gmail.com Sat Jun 6 22:57:36 2009
From: ramercer at gmail.com (Adam Mercer)
Date: Sat, 6 Jun 2009 21:57:36 -0500
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1rc2 released
In-Reply-To: <5b8d13220906050409u30286931w7bd9aac1e01b9ebf@mail.gmail.com>
References: <5b8d13220906050409u30286931w7bd9aac1e01b9ebf@mail.gmail.com>
Message-ID: <799406d60906061957w2b0bd6c9n33fb898a7fc16e28@mail.gmail.com>

On Fri, Jun 5, 2009 at 06:09, David Cournapeau wrote:
> Please test it! I am particularly interested in results for scipy
> binaries on mac os x (do they work on ppc).

Test suite passes on Intel Mac OS X (10.5.7) built from source:

OK (KNOWNFAIL=6, SKIP=21)

Cheers

Adam

From wierob83 at googlemail.com Sun Jun 7 05:45:37 2009
From: wierob83 at googlemail.com (wierob)
Date: Sun, 07 Jun 2009 11:45:37 +0200
Subject: [SciPy-user] FloatingPointError: underflow encountered in stdtr
Message-ID: <4A2B8C41.5080205@googlemail.com>

Hi,

I'm trying to perform a linear regression analysis using scipy.stats.linregress. I get an underflow during the calculation of the p-value:

  File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 2829, in _cdf
    return special.stdtr(df, x)
FloatingPointError: underflow encountered in stdtr

It seems that the error occurs in scipy.special.stdtr(df, x) if df = array([13412]) and x = array([61.88071696]).

>>> from scipy import special
>>> import numpy
>>> df = numpy.array([13412])
>>> x = numpy.array([61.88071696])
>>> special.stdtr(df,x)
array([ 1.])
>>> numpy.seterr(all="raise")
>>> special.stdtr(df,x)
Traceback (most recent call last):
  File "", line 1, in
FloatingPointError: underflow encountered in stdtr

So, is there another function or datatype that can handle this?

kind regards
robert
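[Editorial aside, not a reply from the thread: the underflow inside stdtr is benign here, the tail probability is simply smaller than the smallest representable float, so the cdf rounds to 1. A minimal sketch, assuming Python 2.6 and numpy's errstate context manager, that relaxes only the underflow check for this one call instead of globally:]

import numpy
from scipy import special

df = numpy.array([13412])
x = numpy.array([61.88071696])

numpy.seterr(all="raise")             # as in the report above
with numpy.errstate(under='ignore'):  # locally tolerate the harmless underflow
    p = special.stdtr(df, x)
print p                               # [ 1.]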
From mmueller at python-academy.de Sun Jun 7 18:26:57 2009
From: mmueller at python-academy.de (Mike Müller)
Date: Mon, 08 Jun 2009 00:26:57 +0200
Subject: [SciPy-user] [ANN] EuroSciPy 2009 - Presentation Schedule Published
Message-ID: <4A2C3EB1.60404@python-academy.de>

EuroSciPy 2009 Presentation Schedule Published
==============================================

The schedule of presentations for the EuroSciPy conference is online:
http://www.euroscipy.org/presentations/schedule.html

We have 16 talks from a variety of scientific fields, all about using Python for scientific work.

EuroSciPy 2009
==============

We're pleased to announce the EuroSciPy 2009 Conference, to be held in Leipzig, Germany on July 25-26, 2009. http://www.euroscipy.org

This is the second conference after the successful conference last year. Again, EuroSciPy will be a venue for the European community of users of the Python programming language in science.

Registration
------------

Registration is open. The registration fee is 100.00 € for early registrants and will increase to 150.00 € for late registration after June 15, 2009. Registration will include breakfast, snacks and lunch for Saturday and Sunday.

Please register here: http://www.euroscipy.org/registration.html

Important Dates
---------------

March 21      Registration opens
May 8         Abstract submission deadline
May 15        Acceptance of presentations
May 30        Announcement of conference program
June 15       Early bird registration deadline
July 15       Slides submission deadline
July 20 - 24  Pre-Conference courses
July 25/26    Conference
August 15     Paper submission deadline

Venue
-----

mediencampus
Poetenweg 28
04155 Leipzig
Germany

See http://www.euroscipy.org/venue.html for details.

Help Welcome
------------

Would you like to help make EuroSciPy 2009 a success? Here are some ways you can get involved:

* attend the conference
* submit an abstract for a presentation
* give a lightning talk
* make EuroSciPy known:
  - distribute the press release (http://www.euroscipy.org/media.html) to scientific magazines or other relevant media
  - write about it on your website
  - in your blog
  - talk to friends about it
  - post to local e-mail lists
  - post to related forums
  - spread flyers and posters in your institution
  - make entries in relevant event calendars
  - anything you can think of
* inform potential sponsors about the event
* become a sponsor

If you're interested in volunteering to help organize things or have some other idea that can help the conference, please email us at mmueller at python-academy dot de.

Sponsorship
-----------

Would you like to sponsor the conference? There are several options available:
http://www.euroscipy.org/sponsors/become_a_sponsor.html

Pre-Conference Courses
----------------------

Would you like to learn Python, or about some of the most used scientific libraries in Python? Then the "Python Summer Course" [1] might be for you. There are two parts to this course:

* a two-day course "Introduction to Python" [2] for people with programming experience in other languages, and
* a three-day course "Python for Scientists and Engineers" [3] that introduces some of the most used Python tools for scientists and engineers, such as NumPy, PyTables, and matplotlib

Both courses can be booked individually [4]. Of course, you can attend the courses without registering for EuroSciPy.

[1] http://www.python-academy.com/courses/python_summer_course.html
[2] http://www.python-academy.com/courses/python_course_programmers.html
[3] http://www.python-academy.com/courses/python_course_scientists.html
[4] http://www.python-academy.com/courses/dates.html

From lepto.python at gmail.com Mon Jun 8 04:19:12 2009
From: lepto.python at gmail.com (oyster)
Date: Mon, 8 Jun 2009 16:19:12 +0800
Subject: [SciPy-user] compare scipy and matlab
Message-ID: <6a4f17690906080119u2a8f56b8y55ece910f25321a0@mail.gmail.com>

First of all, I must make it clear that I don't want to stir up any battle between languages! Never! What I want to know is the truth about the availability of scipy for serious work. I have met a scipy bug before, but it was fixed quickly; and now I have met 2 (maybe 1) bugs with sympy, so I regard open-source software (or any software?) with some suspicion.

The following was written by lac.
I hope he will not charge me for my post ;)

####### speed #########

# python

>>> def xperm(L):
    if len(L) <= 1:
        yield L
    else:
        for i in range(len(L)):
            for p in xperm(L[:i]+L[i+1:]):
                yield [L[i]] + p

>>> def test(i):
    for p in xperm(map(lambda x:x+1, range(i))): pass

>>> import timeit
>>> timeit.Timer('test(5)','from __main__ import test').timeit()
1263.9717730891925

# MatLab - timeit.m

function [ret] = timeit(times, repeat)
ret = zeros(1,repeat);
for i = 1:repeat
    tic();
    for j = 1:times
        perms([1:5]);
    end
    ret(i) = toc();
end

>> timeit(1000000,3)
ans = 468.0005 449.7487 466.5325

####### correctness #######

# MatLab
>> A = hilb(256);
>> [L,U,P] = lu(A);
>> norm(P*A - L*U)/norm(A)
ans = 3.2428e-017
>> [Q,R] = qr(A)
>> norm(Q)
ans = 1.0000

# SciPy
>>> import numpy, scipy
>>> import scipy.linalg
>>> Hilb = lambda N: scipy.mat( [ [ 1.0/(i + j + 1) for j in range(N) ] for i in range(N) ] )
>>> A = Hilb(256)
>>> P, L, U = scipy.linalg.lu(A)
>>> scipy.linalg.norm( scipy.mat(P)*A - scipy.mat(L)*scipy.mat(U) ) / scipy.linalg.norm(A)
0.56581114936540999
>>> Q, R = scipy.linalg.qr(A)
>>> scipy.linalg.norm(scipy.mat(Q))
16.000000000000011

From pav at iki.fi Mon Jun 8 07:53:52 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 8 Jun 2009 11:53:52 +0000 (UTC)
Subject: [SciPy-user] compare scipy and matlab
References: <6a4f17690906080119u2a8f56b8y55ece910f25321a0@mail.gmail.com>
Message-ID:

Mon, 08 Jun 2009 16:19:12 +0800, oyster wrote:
[clip]
> What I want to know is the truth about the availability of scipy for
> serious work. I have met a scipy bug before, but it was fixed quickly;
> and now I have met 2 (maybe 1) bugs with sympy, so I regard open-source
> software (or any software?) with some suspicion.
[clip]
> ####### speed #########
[clip]

The algorithms in Matlab perms.m and the one you wrote are not comparable. The Matlab one is vectorized, whereas the one you wrote is a naive implementation, and this should account for much of the performance difference.

> ####### correctness #######
[clip]
>>> import numpy, scipy
>>> Hilb = lambda N: scipy.mat( [ [ 1.0/(i + j + 1) for j in range(N) ] for i in range(N) ] )
>>> A = Hilb(256)
>>> P, L, U = scipy.linalg.lu(A)
>>> scipy.linalg.norm( scipy.mat(P)*A - scipy.mat(L)*scipy.mat(U) ) / scipy.linalg.norm(A)
0.56581114936540999
>>> Q, R = scipy.linalg.qr(A)
>>> scipy.linalg.norm(scipy.mat(Q))
16.000000000000011

The default matrix norm is the Frobenius norm, not the 2-norm. If you write the above using the correct norm, you get the same as from Matlab. Also, note that scipy.linalg.lu uses a different definition of the permutation matrix than Matlab's LU:

>>> scipy.linalg.norm(A - scipy.mat(P)*scipy.mat(L)*scipy.mat(U), 2) / scipy.linalg.norm(A, 2)
1.8016647238506838e-17
>>> scipy.linalg.norm(scipy.mat(Q), 2)
1.0000000000000007

Based on the above, I do not see any problems with either reliability or speed.

--
Pauli Virtanen
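[Editorial sketch, not from the thread, illustrating Pauli's point about the permutation-matrix conventions: scipy.linalg.lu returns P, L, U satisfying A = P*L*U, whereas Matlab's [L,U,P] = lu(A) satisfies P*A = L*U, so the two residuals are formed differently:]

import numpy as np
from scipy import linalg

# small Hilbert matrix, built the same way as in the thread
Hilb = lambda N: np.array([[1.0/(i + j + 1) for j in range(N)] for i in range(N)])
A = Hilb(6)

P, L, U = linalg.lu(A)
# scipy's convention: A equals P*L*U (not P*A = L*U)
print np.allclose(A, np.dot(P, np.dot(L, U)))   # True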
From yosefmel at post.tau.ac.il Mon Jun 8 08:13:21 2009
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Mon, 8 Jun 2009 15:13:21 +0300
Subject: [SciPy-user] compare scipy and matlab
In-Reply-To: <6a4f17690906080119u2a8f56b8y55ece910f25321a0@mail.gmail.com>
References: <6a4f17690906080119u2a8f56b8y55ece910f25321a0@mail.gmail.com>
Message-ID: <200906081513.21697.yosefmel@post.tau.ac.il>

On Monday 08 June 2009 11:19:12 oyster wrote:
> What I want to know is the truth about the availability of scipy for
> serious work. I have met a scipy bug before, but it was fixed quickly;
> and now I have met 2 (maybe 1) bugs with sympy, so I regard open-source
> software (or any software?) with some suspicion.
>
> The following was written by lac. I hope he will not charge me for my post
> ;)

If he does, refuse to pay, for this code is over 8 times slower than it should be! See below.

> ####### speed #########
>
> # python
>
> >>> def xperm(L):
>     if len(L) <= 1:
>         yield L
>     else:
>         for i in range(len(L)):
>             for p in xperm(L[:i]+L[i+1:]):
>                 yield [L[i]] + p
>
> >>> def test(i):
>     for p in xperm(map(lambda x:x+1, range(i))): pass
>
> >>> import timeit
> >>> timeit.Timer('test(5)','from __main__ import test').timeit()
> 1263.9717730891925

from numpy import arange
from numpy.random import permutation

And then:

In [17]: %timeit test(5) # Your version
1000 loops, best of 3: 444 µs per loop

In [18]: %timeit test(5)
1000 loops, best of 3: 444 µs per loop

First get rid of the map (low-hanging fruit):

In [19]: def test2(i):
    for p in xperm(range(1, i+1)):
        pass

In [21]: %timeit test2(5)
1000 loops, best of 3: 429 µs per loop

In [22]: %timeit test2(5)
1000 loops, best of 3: 430 µs per loop

In [23]: %timeit test2(5)
1000 loops, best of 3: 431 µs per loop

Now use numpy and sets:

In [38]: def xperm2(L):
    tried = set()
    # NB: with this bound the generator stops after only len(L) distinct
    # permutations; enumerating all of them would need factorial(len(L))
    while len(tried) < len(L):
        new_perm = tuple(permutation(L))
        if new_perm in tried:
            continue
        tried.add(new_perm)
        yield new_perm

In [40]: def test2(i):
    for p in xperm2(range(1, i+1)):
        pass

In [45]: %timeit test2(5)
10000 loops, best of 3: 86.9 µs per loop

In [46]: %timeit test2(5)
10000 loops, best of 3: 86.2 µs per loop

And finally numpy.arange FTW:

In [47]: def test2(i):
    for p in xperm2(arange(1, i+1)):
        pass

In [51]: %timeit test2(5)
10000 loops, best of 3: 51.9 µs per loop

In [52]: %timeit test2(5)
10000 loops, best of 3: 51.4 µs per loop

Anyone have a faster, vectorized way?
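[Editorial aside, not from the original thread: from Python 2.6 on, the standard library enumerates permutations in C, which is hard to beat from pure Python, and unlike xperm2 above it really walks all factorial(len(L)) orderings:]

from itertools import permutations

def test_itertools(i):
    # iterates over all i! orderings of 1..i at C speed
    for p in permutations(range(1, i + 1)):
        pass

test_itertools(5)   # visits all 120 permutations of 1..5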
From matthieu.brucher at gmail.com Mon Jun 8 09:23:40 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 8 Jun 2009 15:23:40 +0200
Subject: [SciPy-user] Segmentation fault with 0.7
In-Reply-To:
References:
Message-ID:

OK, I found a reference to a related issue on Intel's forum:
http://software.intel.com/en-us/forums/intel-math-kernel-library/topic/62502/

Still the same issue as the one I had some months ago with Intel's decision to change the link stage. I've tried with search_static_first=1, but it still picks up the .so libraries.

Matthieu

2009/5/27 Matthieu Brucher :
> Hi,
>
> I've also tested scipy 0.7 with the MKL (no choice, I don't have atlas
> or refblas installed, and I found a way of using the latest by
> preloading libmkl_core.so), and I got a segmentation fault on a LAPACK
> function:
>
> test_y_bad_size (test_fblas.TestZswap) ... ok
> test_y_stride (test_fblas.TestZswap) ... ok
> test_clapack_dsyev (test_esv.TestEsv) ... SKIP: Skipping test:
> test_clapack_dsyev
> Clapack empty, skip clapack test
> test_clapack_dsyevr (test_esv.TestEsv) ... SKIP: Skipping test:
> test_clapack_dsyevr
> Clapack empty, skip clapack test
> test_clapack_dsyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test:
> test_clapack_dsyevr_ranges
> Clapack empty, skip clapack test
> test_clapack_ssyev (test_esv.TestEsv) ... SKIP: Skipping test:
> test_clapack_ssyev
> Clapack empty, skip clapack test
> test_clapack_ssyevr (test_esv.TestEsv) ... SKIP: Skipping test:
> test_clapack_ssyevr
> Clapack empty, skip clapack test
> test_clapack_ssyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test:
> test_clapack_ssyevr_ranges
> Clapack empty, skip clapack test
> test_dsyev (test_esv.TestEsv) ... ok
> test_dsyevr (test_esv.TestEsv) ... Segmentation fault
>
> Is it a new function or something like that? I don't remember
> encountering this error in previous packages (although I didn't always
> launch the full tests).
>
> Matthieu
> --
> Information System Engineer, Ph.D.
> Website: http://matthieu-brucher.developpez.com/
> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn: http://www.linkedin.com/in/matthieubrucher

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From ferrell at diablotech.com Mon Jun 8 12:38:32 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Mon, 8 Jun 2009 10:38:32 -0600
Subject: [SciPy-user] Append a date to a DateArray
In-Reply-To:
References: <94777D2F-332A-45D0-8B69-505BB6578455@diablotech.com>
Message-ID: <94FC81A7-40E5-4B05-B6EC-26F0F4F9CE6C@diablotech.com>

On Jun 6, 2009, at 12:57 PM, Pierre GM wrote:
>
> On Jun 5, 2009, at 9:17 AM, Robert Ferrell wrote:
>
>> Is there a canonical way to append a date to a DateArray in
>> timeseries? The way I'm doing it is kind of kludgy (convert to
>> ndarray, append, convert back to DateArray). The DateArray isn't
>> huge, so I'm not particularly concerned about speed, just looking for
>> something more elegant.
>
> Sorry for the delayed answer.
> No, not at this point. However, it is indeed a useful feature. I'll
> try to come up with something for a next version.

Thanks for the response. I'll keep an eye out for it in the next version.

-robert

From ferrell at diablotech.com Mon Jun 8 12:59:41 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Mon, 8 Jun 2009 10:59:41 -0600
Subject: [SciPy-user] Masked array question
Message-ID: <5D475494-643F-4BFD-8F93-DD502079DBB7@diablotech.com>

I have a tuple of strings that I want to convert to an array of floats. Some of the strings are empty, so I thought I could use a masked array to mask out the empty strings. (In my application, empty string means no data, so ignore.)

I tried:

np.ma.masked_array(('1.', ' '), mask=(False, True), dtype=(np.float32, np.float32))

but that doesn't work. It returns

/Library/Frameworks/Python.framework/Versions/2.5.2001/lib/python2.5/site-packages/numpy-1.3.0.dev5934-py2.5-macosx-10.3-i386.egg/numpy/ma/core.py in __new__(cls, data, mask, dtype, copy, subok, ndmin, fill_value, keep_mask, hard_mask, flag, shrink, **options)
   1257             shrink = flag
   1258         # Process data............
-> 1259         _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin)
   1260         _baseclass = getattr(data, '_baseclass', type(_data))
   1261         # Check that we'ew not erasing the mask..........

: setting an array element with a sequence.

Is there a way to do this (easily)? What I can do is first convert the strings to floats, and set the empty strings to some fill value, and then pass that to masked_array. Is that the best solution? Or is there some method that I've missed?

thanks,
-robert

From pgmdevlist at gmail.com Mon Jun 8 13:32:09 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 8 Jun 2009 13:32:09 -0400
Subject: [SciPy-user] Masked array question
In-Reply-To: <5D475494-643F-4BFD-8F93-DD502079DBB7@diablotech.com>
References: <5D475494-643F-4BFD-8F93-DD502079DBB7@diablotech.com>
Message-ID:

On Jun 8, 2009, at 12:59 PM, Robert Ferrell wrote:
> I have a tuple of strings that I want to convert to an array of
> floats. Some of the strings are empty, so I thought I could use a
> masked array to mask out the empty strings.
> (In my application, empty
> string means no data, so ignore.)
>
> I tried:
>
> np.ma.masked_array(('1.', ' '), mask=(False, True), dtype=(np.float32,
> np.float32))

As indicated by the error message, it's a problem with numpy: it doesn't know how to process '' as a float. That you're using masked arrays unfortunately doesn't change anything there. An idea is then to convert your '' to 'NaN' beforehand and use np.ma.fix_invalid on the result, or to define a mask as you wanted.

From josef.pktd at gmail.com Mon Jun 8 13:48:56 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 8 Jun 2009 13:48:56 -0400
Subject: [SciPy-user] Masked array question
In-Reply-To:
References: <5D475494-643F-4BFD-8F93-DD502079DBB7@diablotech.com>
Message-ID: <1cd32cbb0906081048j30b8803ep84e3b6594d3b049d@mail.gmail.com>

On Mon, Jun 8, 2009 at 1:32 PM, Pierre GM wrote:
>
> On Jun 8, 2009, at 12:59 PM, Robert Ferrell wrote:
>
>> I have a tuple of strings that I want to convert to an array of
>> floats. Some of the strings are empty, so I thought I could use a
>> masked array to mask out the empty strings. (In my application, empty
>> string means no data, so ignore.)
>>
>> I tried:
>>
>> np.ma.masked_array(('1.', ' '), mask=(False, True), dtype=(np.float32,
>> np.float32))
>
> As indicated by the error message, it's a problem with numpy: it doesn't
> know how to process '' as a float. That you're using masked arrays
> unfortunately doesn't change anything there.
> An idea is then to convert your '' to 'NaN' beforehand and use
> np.ma.fix_invalid on the result, or to define a mask as you wanted.

A very indirect way, that doesn't look worth the trouble in this case, would be

>>> from StringIO import StringIO
>>> np.mafromtxt(StringIO(','.join(['1','','2'])), delimiter=',')
masked_array(data = [1.0 -- 2.0],
             mask = [False  True False],
       fill_value = 1e+020)

>>> np.genfromtxt(StringIO(','.join(['1','','2'])), delimiter=',')
array([  1.,  NaN,   2.])

if I understand the new functions correctly.

Josef

From pgmdevlist at gmail.com Mon Jun 8 13:56:38 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 8 Jun 2009 13:56:38 -0400
Subject: [SciPy-user] Masked array question
In-Reply-To: <1cd32cbb0906081048j30b8803ep84e3b6594d3b049d@mail.gmail.com>
References: <5D475494-643F-4BFD-8F93-DD502079DBB7@diablotech.com> <1cd32cbb0906081048j30b8803ep84e3b6594d3b049d@mail.gmail.com>
Message-ID: <53C0BDD8-0CAA-455A-A11C-CFCC4AF7B11C@gmail.com>

On Jun 8, 2009, at 1:48 PM, josef.pktd at gmail.com wrote:
>
> A very indirect way, that doesn't look worth the trouble in this
> case, would be
>
>>>> np.mafromtxt(StringIO(','.join(['1','','2'])), delimiter=',')
> masked_array(data = [1.0 -- 2.0],
>              mask = [False  True False],
>        fill_value = 1e+020)
>
>>>> np.genfromtxt(StringIO(','.join(['1','','2'])), delimiter=',')
> array([  1.,  NaN,   2.])
>
> if I understand the new functions correctly.

Well, mafromtxt is a shortcut for genfromtxt that sets the usemask keyword to True (it is False by default). If you really want a pure ndarray, use ndfromtxt
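[Editorial sketch pulling the thread's suggestions together, not from the original messages: convert the empty strings to NaN first, then let np.ma.fix_invalid build the mask automatically:]

import numpy as np

raw = ('1.', '')                        # '' means "no data"
vals = np.array([float(s) if s.strip() else np.nan for s in raw],
                dtype=np.float32)
clean = np.ma.fix_invalid(vals)         # NaN entries come back masked
print clean                             # [1.0 --]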
From elmickerino at hotmail.com Mon Jun 8 15:59:08 2009
From: elmickerino at hotmail.com (ElMickerino)
Date: Mon, 8 Jun 2009 12:59:08 -0700 (PDT)
Subject: [SciPy-user] curve_fit step-size and optimal parameters
Message-ID: <23931071.post@talk.nabble.com>

Hello Fellow SciPythonistas,

I've been trying to fit some data with a very simple model of a sine with a constant offset. The data (voltage vs. time) is very clearly sinusoidal (see attached program and data file), yet curve_fit fails to find the optimal parameters. I am able to specify very good initial guesses for the constant offset, the amplitude of the sinusoid and the frequency; the only thing that would be difficult to guess is the phase (I have many, many such datasets, all with random phase). My guess is that since the phase is only defined modulo 2*pi, the minimization package sees that there are many deep minima of chi^2 and so gets confused. Ideally, I'd like to limit the phase to be between 0 and 2*pi to remove this ambiguity.

My question is: how can I get curve_fit to use a very small step-size for the phase, or put in strict limits, and thereby get a robust fit? I don't want to tune the phase by hand for each of my 60+ datasets.

Thanks very much,
Michael

http://www.nabble.com/file/p23931071/fit_example.py fit_example.py
http://www.nabble.com/file/p23931071/data_file.txt data_file.txt
--
View this message in context: http://www.nabble.com/curve_fit-step-size-and-optimal-parameters-tp23931071p23931071.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From robert.kern at gmail.com Mon Jun 8 16:02:15 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 8 Jun 2009 15:02:15 -0500
Subject: [SciPy-user] curve_fit step-size and optimal parameters
In-Reply-To: <23931071.post@talk.nabble.com>
References: <23931071.post@talk.nabble.com>
Message-ID: <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com>

On Mon, Jun 8, 2009 at 14:59, ElMickerino wrote:
[clip]
> My question is: how can I get curve_fit to use a very small step-size for
> the phase, or put in strict limits, and thereby get a robust fit? I
> don't want to tune the phase by hand for each of my 60+ datasets.

You really can't. I recommend the A*sin(w*t)+B*cos(w*t) parameterization rather than the A*sin(w*t+phi) one.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From jh at physics.ucf.edu Mon Jun 8 16:02:36 2009
From: jh at physics.ucf.edu (Joe Harrington)
Date: Mon, 08 Jun 2009 16:02:36 -0400
Subject: [SciPy-user] The SciPy Doc Marathon continues
Message-ID:

Let's Finish Documenting SciPy!

Last year, we began the SciPy Documentation Marathon to write reference pages ("docstrings") for NumPy and SciPy. It was a huge job, bigger than we first imagined, with NumPy alone having over 2,000 functions. We created the doc wiki (now at docs.scipy.org), where you write, review, and proofread docs that then get integrated into the source code. In September, we had over 55% of NumPy in the "first draft" stage, and about 25% to the "needs review" stage.
The PDF NumPy Reference Guide was over 300 pages, nicely formatted by ReST, which makes an HTML version as well. The PDF document now has over 500 pages, with the addition of sections from Travis Oliphant's book Guide to NumPy. That's an amazing amount of work, possible through the contributions of over 30 volunteers. It came back to us as the vastly-expanded help pages in NumPy 1.2, released last September. With your help, WE CAN FINISH! This summer we can: - Write all the "important" NumPy pages to the "Needs Review" stage - Start documenting the SciPy package - Get the SciPy User Manual started - Implement dual review - technical and presentation - on the doc wiki - Get NumPy docs and packaging on a sound financial footing We'll start with the first two. UCF has hired David Goldsmith to lead this summer's doc effort. David will write a lot of docs himself, but more importantly, he will organize our efforts toward completing doc milestones. There will be rewards, T-shirts, and likely other fun stuff for those who contribute the most. David will start the ball rolling shortly. This is a big vision, and it will require YOUR help to make it happen! The main need now is for people to work on the reference pages. Here's how: 1. Go to http://docs.scipy.org/NumPy 2. Read the intro and doc standards, and some docstrings on the wiki 3. Make an account 4. Ask the scipy-dev at scipy.org email list for editor access 5. EDIT! All doc discussions (except announcements like this one) should happen on the scipy-dev at scipy.org email list. You can browse the archives and sign up for the list at http://scipy.org/Mailing_Lists . That's where we will announce sprints on topic areas and so on. We'll also meet online every week, Wednesdays at 4:30pm US Eastern Time, on Skype. David will give the details. Welcome back to the Marathon! --jh-- Prof. Joseph Harrington Planetary Sciences Group Department of Physics MAP 414 4000 Central Florida Blvd. University of Central Florida Orlando, FL 32816-2385 jh at physics.ucf.edu planets.ucf.edu From ferrell at diablotech.com Mon Jun 8 16:11:05 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Mon, 8 Jun 2009 14:11:05 -0600 Subject: [SciPy-user] Masked array question In-Reply-To: References: <5D475494-643F-4BFD-8F93-DD502079DBB7@diablotech.com> Message-ID: On Jun 8, 2009, at 11:32 AM, Pierre GM wrote: > > On Jun 8, 2009, at 12:59 PM, Robert Ferrell wrote: > >> I have a tuple of strings that I want to convert to an array of >> floats. Some of the strings are empty, so I thought I could use a >> masked array to mask out the empty strings. (In my application, >> empty >> string means no data, so ignore.) >> >> I tried: >> >> np.ma.masked_array(('1.', ' '), mask=(False, True), >> dtype=(np.float32, >> np.float32)) > > As indicated by the error message, it's a pb w/ numpy: it doesn't know > how to process the '' for float. That you're using masked arrays > unfortunately doesn't change anything to that > An idea is then to convert your '' to 'NaN' beforehand and use > np.ma.fix_invalid on the results, or to define a mask as you wanted. np.ma.fix_invalid() is helpful. Originally I was wondering if there was a way to get masked_array to ignore (not process) the masked out entries. fix_invalid() helps, since I don't have to make up a mask before creating the masked_array. 
thanks,
-robert

From wierob83 at googlemail.com Mon Jun 8 16:09:37 2009
From: wierob83 at googlemail.com (wierob)
Date: Mon, 08 Jun 2009 22:09:37 +0200
Subject: [SciPy-user] Limits of linregress - underflow encountered in stdtr
Message-ID: <4A2D7001.2010809@googlemail.com>

Hi,

I'm trying to do a regression analysis for a large data set using stats.linregress. Unfortunately, I keep getting strange results or errors. I've tested linregress with the code below and I get the error message:

Traceback (most recent call last):
  File "C:/Users/wierob/Documents/Masterarbeit/underflow.py", line 25, in
    res = stats.linregress(x, y_es)
  File "C:\Python26\lib\site-packages\scipy\stats\stats.py", line 1799, in linregress
    prob = distributions.t.sf(np.abs(t),df)*2
  File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 665, in sf
    place(output,cond,self._sf(*goodargs))
  File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 531, in _sf
    return 1.0-self._cdf(x,*args)
  File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 2829, in _cdf
    res = special.stdtr(df, x)
FloatingPointError: underflow encountered in stdtr

When setting z > 1 (len(x) >= 40), stats.linregress(x, y["dependency"]) fails.
When setting z > 29 (len(x) >= 600), stats.linregress(x, y["dependency"]) and stats.linregress(x, y["dependency_with_noise"]) fail.

from scipy import stats

import numpy
numpy.seterr(all="raise")

# linregress fails -> FloatingPointError: underflow encountered in stdtr
#   for x and y["dependency"] if z > 1
#   for x and y["dependency_with_noise"] if z > 29
z = 30

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]*z
print len(x)

y = {}
y["dependency"] = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]*z

y["dependency_with_noise"] = [0, -1, 4, 3, 8, 10, 12, 14, 10, 25, 20, 22, 24, 27, 28, 17, 32, 34, 36, 40]*z

for key, y_es in y.iteritems():
    print "="*5, key, "="*5

    res = stats.linregress(x, y_es)

    print "slope:", res[0]
    print "intercept:", res[1]
    print "r^2:", res[2]
    print "p-value:", res[3]
    print "stderr:", res[4]

What's my mistake? Are there any restrictions on the use of linregress?

regards
robert

From stefan at sun.ac.za Mon Jun 8 16:15:30 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 8 Jun 2009 22:15:30 +0200
Subject: [SciPy-user] curve_fit step-size and optimal parameters
In-Reply-To: <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com>
References: <23931071.post@talk.nabble.com> <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com>
Message-ID: <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com>

2009/6/8 Robert Kern :
> On Mon, Jun 8, 2009 at 14:59, ElMickerino wrote:
>> My question is: how can I get curve_fit to use a very small step-size for
>> the phase, or put in strict limits, and thereby get a robust fit? I
>> don't want to tune the phase by hand for each of my 60+ datasets.
>
> You really can't. I recommend the A*sin(w*t)+B*cos(w*t)
> parameterization rather than the A*sin(w*t+phi) one.

Could you expand? I can't immediately see why the second parametrisation is bad. Can't a person do this fit using non-linear least-squares? Ah, that's probably why you use the other parametrisation, so that you don't have to use non-linear least squares?
Regards
Stéfan

From robert.kern at gmail.com Mon Jun 8 16:19:43 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 8 Jun 2009 15:19:43 -0500
Subject: [SciPy-user] curve_fit step-size and optimal parameters
In-Reply-To: <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com>
References: <23931071.post@talk.nabble.com> <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com> <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com>
Message-ID: <3d375d730906081319u1e43435pf6c932792e057f96@mail.gmail.com>

2009/6/8 Stéfan van der Walt :
> 2009/6/8 Robert Kern :
>> On Mon, Jun 8, 2009 at 14:59, ElMickerino wrote:
[clip]
>> You really can't. I recommend the A*sin(w*t)+B*cos(w*t)
>> parameterization rather than the A*sin(w*t+phi) one.
>
> Could you expand? I can't immediately see why the second
> parametrisation is bad.

The cyclic nature of phi. It complicates things precisely as the OP describes.

> Can't a person do this fit using non-linear
> least-squares? Ah, that's probably why you use the other
> parametrisation, so that you don't have to use non-linear least
> squares?

If you aren't also fitting the frequency, then yes. If you are fitting for the frequency, too, the problem is still non-linear.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
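[Editorial sketch of Robert's suggestion, with made-up data, not from the thread: with the frequency w treated as known, v = A*sin(w*t) + B*cos(w*t) + C is linear in (A, B, C) and can be fit in one shot with linear least squares; the amplitude and phase then fall out of A and B with no modulo-2*pi ambiguity:]

import numpy as np

# synthetic data standing in for one of the voltage-vs-time sets
t = np.linspace(0.0, 1.0, 200)
w = 2*np.pi*5.0                                    # assumed known frequency
v = 3.0*np.sin(w*t + 1.2) + 0.5 + 0.01*np.random.randn(len(t))

# design matrix for v ~ A*sin(w*t) + B*cos(w*t) + C
M = np.column_stack((np.sin(w*t), np.cos(w*t), np.ones_like(t)))
coef, resid, rank, sv = np.linalg.lstsq(M, v)
A, B, C = coef

amplitude = np.hypot(A, B)        # sqrt(A**2 + B**2)
phase = np.arctan2(B, A)          # phi in amplitude*sin(w*t + phi)
print amplitude, phase, C         # close to 3.0, 1.2, 0.5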
From josef.pktd at gmail.com Mon Jun 8 16:24:53 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 8 Jun 2009 16:24:53 -0400
Subject: [SciPy-user] Limits of linregress - underflow encountered in stdtr
In-Reply-To: <4A2D7001.2010809@googlemail.com>
References: <4A2D7001.2010809@googlemail.com>
Message-ID: <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com>

On Mon, Jun 8, 2009 at 4:09 PM, wierob wrote:
> Hi,
>
> I'm trying to do a regression analysis for a large data set using
> stats.linregress. Unfortunately, I keep getting strange results or
> errors. I've tested linregress with the code below and I get the error
> message:
>
> Traceback (most recent call last):
>   File "C:/Users/wierob/Documents/Masterarbeit/underflow.py", line 25, in
>     res = stats.linregress(x, y_es)
>   File "C:\Python26\lib\site-packages\scipy\stats\stats.py", line 1799, in linregress
>     prob = distributions.t.sf(np.abs(t),df)*2
>   File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 665, in sf
>     place(output,cond,self._sf(*goodargs))
>   File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 531, in _sf
>     return 1.0-self._cdf(x,*args)
>   File "C:\Python26\lib\site-packages\scipy\stats\distributions.py", line 2829, in _cdf
>     res = special.stdtr(df, x)
> FloatingPointError: underflow encountered in stdtr
>
> When setting z > 1 (len(x) >= 40), stats.linregress(x, y["dependency"]) fails.
> When setting z > 29 (len(x) >= 600), stats.linregress(x, y["dependency"])
> and stats.linregress(x, y["dependency_with_noise"]) fail.
>
> from scipy import stats
>
> import numpy
> numpy.seterr(all="raise")
>
> # linregress fails -> FloatingPointError: underflow encountered in stdtr
> #   for x and y["dependency"] if z > 1
> #   for x and y["dependency_with_noise"] if z > 29
> z = 30
>
> x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]*z
> print len(x)
>
> y = {}
> y["dependency"] = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]*z
>
> y["dependency_with_noise"] = [0, -1, 4, 3, 8, 10, 12, 14, 10, 25, 20, 22, 24, 27, 28, 17, 32, 34, 36, 40]*z
>
> for key, y_es in y.iteritems():
>     print "="*5, key, "="*5
>
>     res = stats.linregress(x, y_es)
>
>     print "slope:", res[0]
>     print "intercept:", res[1]
>     print "r^2:", res[2]
>     print "p-value:", res[3]
>     print "stderr:", res[4]
>
> What's my mistake? Are there any restrictions on the use of linregress?
>
> regards
> robert
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

Turn off numpy.seterr(all="raise"), as explained in the reply to your previous messages.

Josef

From wierob83 at googlemail.com Mon Jun 8 17:16:25 2009
From: wierob83 at googlemail.com (wierob)
Date: Mon, 08 Jun 2009 23:16:25 +0200
Subject: [SciPy-user] Limits of linregress - underflow encountered in stdtr
In-Reply-To: <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com>
References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com>
Message-ID: <4A2D7FA9.3060002@googlemail.com>

Hi,

> Turn off numpy.seterr(all="raise"),
> as explained in the reply to your previous messages
>
> Josef

Turning off the error reporting doesn't prevent the error. Thus the result may still be wrong, mightn't it? E.g. a p-value of 0.0 looks suspicious.

regards
robert

From d_l_goldsmith at yahoo.com Mon Jun 8 18:10:43 2009
From: d_l_goldsmith at yahoo.com (David Goldsmith)
Date: Mon, 8 Jun 2009 15:10:43 -0700 (PDT)
Subject: [SciPy-user] Summer NumPy Doc Marathon (Reply-to: scipy-dev@scipy.org)
Message-ID: <621234.10773.qm@web52111.mail.re2.yahoo.com>

Dear SciPy Community Members:

Hi! My name is David Goldsmith. I've been hired for the summer by Joe Harrington to further progress on NumPy documentation and, ultimately, pending funding, SciPy documentation. Joe and I are reviving last summer's enthusiasm in the community for this mission and enlisting as many of you as possible in the effort. On that note, please peruse the NumPy Doc Wiki (http://docs.scipy.org/numpy/Front Page/) and, in particular, the master list of functions/objects ("items") needing work (http://docs.scipy.org/numpy/Milestones/). Our goal is to have every item at the ready-for-first-review stage (or better) by August 18 (i.e., the start of SciPyCon09). To accomplish this, we're forming teams to attack each doc category on the Milestones page. From the Milestones page:

"To speed things up, get more uniformity in the docs, and add a social element, we're attacking these categories as teams. A team lead takes responsibility for getting a category to "Needs review" within one month [we expect that some categories will require less time; please furnish your most 'optimistically realistic' deadline when 'claiming' a category], but no later than 18 August 2009. As leader, you commit to working with anyone who signs up in your category, and vice versa. The scipy-dev mailing list is a great place to recruit helpers.

"Major doc contributors will be listed in NumPy's contributors file, THANKS.txt. Anyone writing more than 1000 words will get a T-shirt (while supplies last, etc.). Teams that reach their goals in time will get special mention in THANKS.txt.

"Of course, you don't have to join a team. If you'd like to work on your own, please choose docstrings from an unclaimed category, and put your name after docstrings you are editing in the list below. If someone later claims that category, please coordinate with them or finish up your current docstrings and move to another category."

Please note that, to edit anything on the Wiki (including the doc itself), you'll need "edit rights"; how you get these is Item 5 under "Before you start" on the "Front Page," but for your convenience, I'll quote that here:

"Register a username on [docs.scipy.org]. Send an e-mail with your username to the scipy-dev mailing list (requires subscribing to the mailing list first, [which can be done at http://mail.scipy.org/mailman/listinfo/scipy-dev]), so that we can give you edit rights. If you are not subscribed to the mailing-list, you can also send an email to gael dot varoquaux at normalesup dot org, but this will take longer [and you'll want to subscribe to scipy-dev anyway, because that's the place to post questions and comments about this whole doc development project]."

Also, I'll be holding a weekly Skype (www.skype.com) telecon, Wednesdays at 4:30pm US Eastern Daylight Time, to review progress and discuss any roadblocks we may have encountered (or anticipate encountering). If you'd like to participate and haven't already downloaded and installed Skype and registered a Skype ID, you should do those things; then, you'll be able to join in simply by "Skyping" me (Skype ID: d.l.goldsmith) and I'll add you to the call.

So, thanks for your time reading this, and please make time this summer to help us meet (or beat) the goal.

Sincerely,
David Goldsmith, Technical Editor
Olympia, WA

From josef.pktd at gmail.com Mon Jun 8 19:13:44 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 8 Jun 2009 19:13:44 -0400
Subject: [SciPy-user] Limits of linregress - underflow encountered in stdtr
In-Reply-To: <4A2D7FA9.3060002@googlemail.com>
References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com> <4A2D7FA9.3060002@googlemail.com>
Message-ID: <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com>

On Mon, Jun 8, 2009 at 5:16 PM, wierob wrote:
> Hi,
>
>> Turn off numpy.seterr(all="raise"),
>> as explained in the reply to your previous messages
>>
>> Josef
>
> Turning off the error reporting doesn't prevent the error. Thus the
> result may still be wrong, mightn't it? E.g. a p-value of 0.0 looks suspicious.

Anything other than a p-value of 0 would be suspicious: you have a perfect fit, and the probability is zero that we would observe a slope equal to the estimated slope under the null hypothesis (that the slope is zero). So (loosely speaking) we can reject the null of zero slope with probability 1. The result is not "maybe" wrong, it is correct: your r_square is 1 and the standard error of the slope estimate is zero.

Floating point calculations with inf are correct (if they don't have a definite answer we get a nan). Dividing a non-zero number by zero has a well-defined result, even if Python raises a ZeroDivisionError.

>>> np.array(1)/0.
inf
>>> 1/(np.array(1)/0.)
0.0
>>> np.seterr(all="raise")
{'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}
>>> 1/(np.array(1)/0.)
Traceback (most recent call last):
  File "", line 1, in
    1/(np.array(1)/0.)
FloatingPointError: divide by zero encountered in divide

Josef

From d_l_goldsmith at yahoo.com Tue Jun 9 02:11:59 2009
From: d_l_goldsmith at yahoo.com (David Goldsmith)
Date: Mon, 8 Jun 2009 23:11:59 -0700 (PDT)
Subject: [SciPy-user] More on Summer NumPy Doc Marathon
Message-ID: <901731.25166.qm@web52108.mail.re2.yahoo.com>

Hi again, folks. I have a special request. Part of the vision for my job is that I'll focus my writing efforts on the docs no one else is gung-ho to work on. So, even if you're not quite ready to commit, if you're leaning toward volunteering to be a team lead for one (or more) categories, please let me know which one(s) (off list, if you prefer) so I can get an initial idea of what the "leftovers" are going to be. Thanks!

DG

From wierob83 at googlemail.com Tue Jun 9 07:57:18 2009
From: wierob83 at googlemail.com (wierob)
Date: Tue, 09 Jun 2009 13:57:18 +0200
Subject: [SciPy-user] Limits of linregress - underflow encountered in stdtr
In-Reply-To: <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com>
References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com> <4A2D7FA9.3060002@googlemail.com> <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com>
Message-ID: <4A2E4E1E.5030403@googlemail.com>

Hi,

for z = 30 my code sample prints

===== dependency_with_noise =====
slope: 2.0022556391
intercept: -0.771428571429
r^2: 0.953601402677
p-value: 0.0
stderr: 0.0258507089053

so I'm just confused that the p-value claims the match is absolutely perfect while it is not (although it's pretty close to perfect). I compared this result to R (www.r-project.org):

> summary(lm(y~x))

Call:
lm(formula = y ~ x)

Residuals:
     Min       1Q   Median       3Q      Max
-12.2624   0.7325   0.7477   0.7635   7.7511

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.77143    0.28728  -2.685  0.00745 **
x            2.00226    0.02585  77.455  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.651 on 598 degrees of freedom
Multiple R-squared: 0.9094,     Adjusted R-squared: 0.9092
F-statistic:  5999 on 1 and 598 DF,  p-value: < 2.2e-16

> summary(lm(y~x))$coefficients
              Estimate Std. Error   t value      Pr(>|t|)
(Intercept) -0.7714286 0.28728036 -2.685281  7.447975e-03
x            2.0022556 0.02585071 77.454574 6.009953e-314

The intercept, slope (x) and stderr values are equal, but the p-value is 6.009953e-314 and r-squared is different. While 6.009953e-314 is small enough to call it 0 and the result highly significant, I just wonder whether SciPy decides it's small enough to return 0.0 or returns 0.0 because it can't actually compute it. If 0.0 is returned deliberately, what's the threshold for this decision? Maybe this behavior should be documented.

regards
robert

josef.pktd at gmail.com wrote:
> On Mon, Jun 8, 2009 at 5:16 PM, wierob wrote:
>> Hi,
>>
>>> Turn off numpy.seterr(all="raise"),
>>> as explained in the reply to your previous messages
>>>
>>> Josef
>>
>> Turning off the error reporting doesn't prevent the error. Thus the
>> result may still be wrong, mightn't it? E.g. a p-value of 0.0 looks suspicious.
>
> Anything other than a p-value of 0 would be suspicious: you have a
> perfect fit, and the probability is zero that we would observe a slope equal
> to the estimated slope under the null hypothesis (that the slope is
> zero). So (loosely speaking) we can reject the null of zero slope with
> probability 1. The result is not "maybe" wrong, it is correct: your
> r_square is 1 and the standard error of the slope estimate is zero.
> > > floating point calculation with inf are correct (if they don't have a > definite answer we get a nan). Dividing a non-zero number by zero has > a well defined result, even if python raises a zerodivisionerror. > > >>>> np.array(1)/0. >>>> > inf > >>>> 1/(np.array(1)/0.) >>>> > 0.0 > >>>> np.seterr(all="raise") >>>> > {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} > >>>> 1/(np.array(1)/0.) >>>> > Traceback (most recent call last): > File "", line 1, in > 1/(np.array(1)/0.) > FloatingPointError: divide by zero encountered in divide > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Jun 9 09:45:49 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 9 Jun 2009 09:45:49 -0400 Subject: [SciPy-user] Limits of linrgress - underflow encountered in stdtr In-Reply-To: <4A2E4E1E.5030403@googlemail.com> References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com> <4A2D7FA9.3060002@googlemail.com> <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com> <4A2E4E1E.5030403@googlemail.com> Message-ID: <1cd32cbb0906090645o50494221u7e2a6c4f2acc5493@mail.gmail.com> On Tue, Jun 9, 2009 at 7:57 AM, wierob wrote: > Hi, > > for z = 30 my code sample prints > > ===== dependency_with_noise ===== > slope: 2.0022556391 > intercept: -0.771428571429 > r^2: 0.953601402677 > p-value: 0.0 > stderr: 0.0258507089053 > > so I'm just confused that the p-value claims the match is absolutely > perfect while it is not (also its pretty close to perfect). If compared > this result to R (www.*r*-project.org) : > >> summary(lm(y~x)) > > Call: > lm(formula = y ~ x) > > Residuals: > ? ? Min ? ? ? 1Q ? Median ? ? ? 3Q ? ? ?Max > -12.2624 ? 0.7325 ? 0.7477 ? 0.7635 ? 7.7511 > > Coefficients: > ? ? ? ? ? ?Estimate Std. Error t value Pr(>|t|) > (Intercept) -0.77143 ? ?0.28728 ?-2.685 ?0.00745 ** > x ? ? ? ? ? ?2.00226 ? ?0.02585 ?77.455 ?< 2e-16 *** > --- > Signif. codes: ?0 ?***? 0.001 ?**? 0.01 ?*? 0.05 ?.? 0.1 ? ? 1 > > Residual standard error: 3.651 on 598 degrees of freedom > Multiple R-squared: 0.9094, ? ? Adjusted R-squared: 0.9092 > F-statistic: ?5999 on 1 and 598 DF, ?p-value: < 2.2e-16 > >> summary(lm(y~x))$coefficients > ? ? ? ? ? ? ?Estimate Std. Error ? t value ? ? ?Pr(>|t|) > (Intercept) -0.7714286 0.28728036 -2.685281 ?7.447975e-03 > x ? ? ? ? ? ?2.0022556 0.02585071 77.454574 6.009953e-314 > > > The intercept, slope (x) and stderr values are equal good but the p-value is > 6.009953e-314 and r-squared is different. While 6.009953e-314 is small > enough to say its 0 and the result is highly significant *highly significant* ??? What significance level do you want to use to accept the Null when you are using the result of R? Note: R initially only reported Pr(>|t|) < 2e-16 *** There are arguments for reporting any statistics only to a few decimals. I wonder why? Josef , I just wonder > if Scipy decides its small enough to return 0.0 or if it returns 0.0 > because it cant actually compute it. If 0.0 is returned deliberately > what's the threshold for this decision. Maybe this behavior should be > documented. 
> > > regards > robert > > josef.pktd at gmail.com schrieb: >> On Mon, Jun 8, 2009 at 5:16 PM, wierob wrote: >> >>> Hi, >>> >>> >>>> turn of numpy.seterr(all="raise") >>>> as explained in the reply to your previous messages >>>> >>>> Josef >>>> >>>> >>> turning of the error reporting doesn't prevent the error. Thus the >>> result may be wrong, doesn't it? E.g. a p-value of 0.0 looks suspicious. >>> >>> >> >> anything else than a p-value of 0 would be suspicious, you have a >> perfect fit and the probability is zero that we observe a slope equal >> to the estimated slope under the null hypothesis( that the slope is >> zero). So (loosely speaking) we can reject the null of zero slope with >> probability 1. >> The result is not "maybe" wrong, it is correct. your r_square is 1, >> the standard error of the slope estimate is zero. >> >> >> floating point calculation with inf are correct (if they don't have a >> definite answer we get a nan). Dividing a non-zero number by zero has >> a well defined result, even if python raises a zerodivisionerror. >> >> >>>>> np.array(1)/0. >>>>> >> inf >> >>>>> 1/(np.array(1)/0.) >>>>> >> 0.0 >> >>>>> np.seterr(all="raise") >>>>> >> {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} >> >>>>> 1/(np.array(1)/0.) >>>>> >> Traceback (most recent call last): >> ? File "", line 1, in >> ? ? 1/(np.array(1)/0.) >> FloatingPointError: divide by zero encountered in divide >> >> Josef >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From saffsd at gmail.com Tue Jun 9 10:11:05 2009 From: saffsd at gmail.com (Marco Lui) Date: Wed, 10 Jun 2009 00:11:05 +1000 Subject: [SciPy-user] numpy.compress and sparse matrices? Message-ID: Hello everyone. I am looking for a sparse matrix implementation that supports the numpy.compress operation, which is akin to "fancy indexing" by a boolean vector. Working in scipy 0.6.0, I have not been able to find such an implementation. I usually get an error like the following: File "/home/mlui/workspace/sparse/modules/hydrat/task/task.py", line 59, in train_vectors return numpy.compress(self.train_indices, data, axis=0) File "/usr/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 794, in compress return _wrapit(a, 'compress', condition, axis, out) File "/usr/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 37, in _wrapit result = getattr(asarray(obj),method)(*args, **kwds) IndexError: index out of range for array The above example works correctly when data is a normal numpy array. Does any implementation of sparse arrays compatible with numpy.compress exist? Thanks in advance Marco -------------- next part -------------- An HTML attachment was scrubbed... URL: From Christopher.Chang at nrel.gov Tue Jun 9 10:11:21 2009 From: Christopher.Chang at nrel.gov (Chang, Christopher) Date: Tue, 9 Jun 2009 08:11:21 -0600 Subject: [SciPy-user] Broken dmg? In-Reply-To: <4A29FD26.8080005@ar.media.kyoto-u.ac.jp> Message-ID: David, Thanks for clearing that up. I managed to get Scipy compiled under non-system (but still python.org source) Python 2.6.2, and Numpy 1.3. 
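As a quick sanity check that the pieces actually match (a generic snippet,
nothing specific to this particular build), one can confirm which
interpreter and libraries get picked up at import time:

import sys, numpy, scipy
print sys.prefix                          # should point at the python.org install
print numpy.__version__, numpy.__file__   # the numpy that was built against
print scipy.__version__, scipy.__file__   # the freshly compiled scipy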
Cheers, Chris On 6/5/09 11:22 PM, "David Cournapeau" wrote: Chang, Christopher wrote: > Hi, > > After building Python 2.6.2 with the system gcc on a MacBook Pro (OS X 10.5.7, Intel Core 2 Duo) and numpy with gcc + gfortran from http://r.research.att.com/tools/, I tried to install the binary scipy-0.7.1rc2-py2.6-macosx10.5. When it comes time to select a destination, I get > > "You cannot install scipy 0.7.1rc2 on this volume. scipy requires System Python 2.6 to install" > > If the installer is just checking Python from the environment, it should be picking up the new version It can't, or at least I don't know a way to do it - the installer can only install for the python it was build with. > At any rate, the same thing happens with the py2.5 scipy package. What exactly is the installer checking for and not finding? A python installed from python.org (the system python message is misleading here). Anything else is unlikely to work - and anyway, if you build your python, I think you should not expect binaries to work on it (not only numpy/scipy). cheers, David _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From d_l_goldsmith at yahoo.com Tue Jun 9 12:31:25 2009 From: d_l_goldsmith at yahoo.com (David Goldsmith) Date: Tue, 9 Jun 2009 09:31:25 -0700 (PDT) Subject: [SciPy-user] [SciPy-dev] More on Summer NumPy Doc Marathon Message-ID: <667868.5819.qm@web52110.mail.re2.yahoo.com> Thanks, Stefan. The lists you suggest already exist (more or less, depending on the "thing," i.e., list of categories, completely, prioritized list of individual items, sort of, at least w/in the categories) on the Milestones page (that's essentially what the Milestones page is) and the list of individual items is far too long to duplicate here, but for everyone's convenience I'll provide the list of categories (at least those for which the goal has not been, or is not close to being, met, which is most of them): Data type investigation Fourier transforms Linear algebra Error handling Financial functions Functional operations Help routines Indexing Input/Output Logic, comparisons etc. Polynomials Random number generation Other random operations Boolean set operations Searching Sorting Statistics Comparison Window functions Sums, interpolation, gradients, etc Arithmetic + basic functions I Arithmetic + basic functions II Arithmetic + basic functions III Masked arrays Masked arrays, II Masked arrays, III Masked arrays, IV Operations on masks Even more MA functions I Even more MA functions II Numpy internals C-types Other math The matrix library Numarray compatibility Numeric compatibility Other array subclasses Matrix subclass Ndarray Ndarray, II Dtypes Ufunc Scalar base class Scalar types Comments: 0) The number of individual items in each of these categories varies from one to a few dozen or so 1) Omitted are a few "meta-categories," e.g., "Routines," "Basic Objects," etc. 2) IMO, there are still too many of these (at least too many to not be intimidating in the manner Stefan has implied); I had it in mind to try to create an intermediate level of organization, i.e., "meso-categories," but I couldn't really justify it on grounds other than there are simply still too many categories to be unintimidating, so I was advised against usage of time in that endeavor. However, if there's an outpouring of support for me doing that, it would fall on sympathetic ears. 
As far as prioritizing individual items, my opinion is that team leads
should do that (or not, as they deem appropriate) - I wouldn't presume to
know enough to do that in most cases. However, if people want to furnish
me with suggested prioritizations, I'd be happy to be the one to edit the
Wiki to reflect these.

DG

--- On Tue, 6/9/09, Stéfan van der Walt wrote:

> From: Stéfan van der Walt
> Subject: Re: [SciPy-dev] More on Summer NumPy Doc Marathon
> To: "SciPy Developers List"
> Date: Tuesday, June 9, 2009, 1:34 AM
> Hi David
>
> 2009/6/9 David Goldsmith :
> >
> > Hi again, folks.  I have a special request.  Part of
> the vision for my job is that I'll focus my writing efforts
> on the docs no one else is gung-ho to work on.  So, even if
> you're not quite ready to commit, if you're leaning toward
> volunteering to be a team lead for one (or more) categories,
> please let me know which one(s) (off list, if you prefer) so
> I can get an initial idea of what the "leftovers" are going
> to be.  Thanks!
>
> That's a pretty wide question.  Maybe you could post a
> list of
> categories and ask who would be willing to mentor and write
> on each?
> For the writing, we could decide on a prioritised list of
> functions,
> publish that list and then document those entries one by
> one (i.e.
> make the tasks small enough so that people don't run away
> screaming).
>
> Cheers
> Stéfan
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

From wangxj.uc at gmail.com  Tue Jun  9 12:35:45 2009
From: wangxj.uc at gmail.com (Xiaojian Wang)
Date: Tue, 9 Jun 2009 09:35:45 -0700
Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy?
Message-ID: 

Hi,
I would like to know if there is any module in Scipy (or Numpy?), which
has capability
of symbolic manipulation and can generate matrix's
eigenvalues/eigenvectors.
My matrix is symbolic, such as a 3x3 matrix:

[ a,     b,   a*b  ]
[a+b,  0,    0     ]
[c,     a,    c+b  ]

a,b,c could be any values in my future applications.
if not, do you guy know any other programs can do this?

Many thanks for your help in advance!

Xiaoijan

From warren.weckesser at gmail.com  Tue Jun  9 12:45:39 2009
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Tue, 9 Jun 2009 11:45:39 -0500
Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy?
In-Reply-To: 
References: 
Message-ID: <114880320906090945p66400e53me69b7a8da06045e2@mail.gmail.com>

No, scipy and numpy do not do symbolic manipulation.

Take a look at sympy: http://code.google.com/p/sympy/

In particular, check out the linear algebra section:
http://docs.sympy.org/modules/matrices.html

Warren

On Tue, Jun 9, 2009 at 11:35 AM, Xiaojian Wang wrote:

> Hi,
> I would like to know if there is any module in Scipy (or Numpy?), which
> has capability
> of symbolic manipulation and can generate matrix's
> eigenvalues/eigenvectors.
> My matrix is symbolic, such as a 3x3 matrix:
>
> [ a, b, a*b ]
> [a+b, 0, 0 ]
> [c, a, c+b]
>
> a,b,c could be any values in my future applications.
> if not, do you guy know any other programs can do this?
>
> Many thanks for your help in advance!
>
> Xiaoijan
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From harald.schilly at gmail.com Tue Jun 9 12:46:20 2009 From: harald.schilly at gmail.com (Harald Schilly) Date: Tue, 9 Jun 2009 18:46:20 +0200 Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy? In-Reply-To: References: Message-ID: <20548feb0906090946x5c3f3eb0l56b8d052f0ec0d94@mail.gmail.com> On Tue, Jun 9, 2009 at 18:35, Xiaojian Wang wrote: > if not, do you guy know any other programs can do this? I don't know it about scipy, but In Sage ( http://www.sagemath.org ) you can do this: sage: var('a b c') (a, b, c) sage: m = matrix([[a, b, a*b], [a + b, 0, 0], [c, a, b + c]]) sage: m_evals = m.eigenvalues() sage: m_evals [-1/18*(-I*sqrt(3) + 1)*(((3*c + 2)*b - c)*a + a^2 + 4*b^2 + 2*b*c + c^2)/(1/54*(27*b + 2)*a^3 + 1/18*((3*c + 2)*b + 9*b^2 - c)*a^2 - 8/27*b^3 - 2/9*b^2*c + 1/9*b*c^2 + 1/27*c^3 + 1/18*((3*c - 4)*b^2 + (3*c^2 - 8*c)*b - c^2)*a + 1/18*sqrt(4*a^6*b - 32*a^2*b^5 - ((a^2 + 4)*b^2 - 2*(a^2 - 2*a)*b + a^2)*c^4 + (27*a^4 - 56*a^3 - 16*a^2)*b^4 + 2*(27*a^5 - 6*a^4 - 8*a^3)*b^3 - 2*((2*a^3 + a^2 + 10*a + 8)*b^3 - (5*a^3 - 5*a^2 - 4*a)*b^2 - a^3 + 2*(a^3 - a^2)*b)*c^3 + (27*a^6 + 16*a^5 - 4*a^4)*b^2 - ((13*a^2 + 40*a + 16)*b^4 - 4*(4*a^3 - 5*a^2 + 4*a)*b^3 + a^4 - (17*a^4 - 8*a^3 + 24*a^2)*b^2 + 4*(a^4 - a^3)*b)*c^2 - 2*(16*a*b^5 - (9*a^3 - 32*a^2 + 16*a)*b^4 - 2*(9*a^4 - 23*a^3 + 4*a^2)*b^3 - (9*a^5 - 27*a^4 - 4*a^3)*b^2 + (3*a^5 + 2*a^4)*b)*c)*sqrt(3))^(1/3) - 1/2*(I*sqrt(3) + 1)*(1/54*(27*b + 2)*a^3 + 1/18*((3*c + 2)*b + 9*b^2 - c)*a^2 - 8/27*b^3 - 2/9*b^2*c + 1/9*b*c^2 + 1/27*c^3 + 1/18*((3*c - 4)*b^2 + (3*c^2 - 8*c)*b - c^2)*a + 1/18*sqrt(4*a^6*b - 32*a^2*b^5 - ((a^2 + 4)*b^2 - 2*(a^2 - 2*a)*b + a^2)*c^4 + (27*a^4 - 56*a^3 - 16*a^2)*b^4 + 2*(27*a^5 - 6*a^4 - 8*a^3)*b^3 - 2*((2*a^3 + a^2 + 10*a + 8)*b^3 - (5*a^3 - 5*a^2 - 4*a)*b^2 - a^3 + 2*(a^3 - a^2)*b)*c^3 + (27*a^6 + 16*a^5 - 4*a^4)*b^2 - ((13*a^2 + 40*a + 16)*b^4 - 4*(4*a^3 - 5*a^2 + 4*a)*b^3 + a^4 - (17*a^4 - 8*a^3 + 24*a^2)*b^2 + 4*(a^4 - a^3)*b)*c^2 - 2*(16*a*b^5 - (9*a^3 - 32*a^2 + 16*a)*b^4 - 2*(9*a^4 - 23*a^3 + 4*a^2)*b^3 - (9*a^5 - 27*a^4 - 4*a^3)*b^2 + (3*a^5 + 2*a^4)*b)*c)*sqrt(3))^(1/3) + 1/3*a + 1/3*b + 1/3*c, -1/18*(I*sqrt(3) + 1)*(((3*c + 2)*b - c)*a + a^2 + 4*b^2 + 2*b*c + c^2)/(1/54*(27*b + 2)*a^3 + 1/18*((3*c + 2)*b + 9*b^2 - c)*a^2 - 8/27*b^3 - 2/9*b^2*c + 1/9*b*c^2 + 1/27*c^3 + 1/18*((3*c - 4)*b^2 + (3*c^2 - 8*c)*b - c^2)*a + 1/18*sqrt(4*a^6*b - 32*a^2*b^5 - ((a^2 + 4)*b^2 - 2*(a^2 - 2*a)*b + a^2)*c^4 + (27*a^4 - 56*a^3 - 16*a^2)*b^4 + 2*(27*a^5 - 6*a^4 - 8*a^3)*b^3 - 2*((2*a^3 + a^2 + 10*a + 8)*b^3 - (5*a^3 - 5*a^2 - 4*a)*b^2 - a^3 + 2*(a^3 - a^2)*b)*c^3 + (27*a^6 + 16*a^5 - 4*a^4)*b^2 - ((13*a^2 + 40*a + 16)*b^4 - 4*(4*a^3 - 5*a^2 + 4*a)*b^3 + a^4 - (17*a^4 - 8*a^3 + 24*a^2)*b^2 + 4*(a^4 - a^3)*b)*c^2 - 2*(16*a*b^5 - (9*a^3 - 32*a^2 + 16*a)*b^4 - 2*(9*a^4 - 23*a^3 + 4*a^2)*b^3 - (9*a^5 - 27*a^4 - 4*a^3)*b^2 + (3*a^5 + 2*a^4)*b)*c)*sqrt(3))^(1/3) - 1/2*(-I*sqrt(3) + 1)*(1/54*(27*b + 2)*a^3 + 1/18*((3*c + 2)*b + 9*b^2 - c)*a^2 - 8/27*b^3 - 2/9*b^2*c + 1/9*b*c^2 + 1/27*c^3 + 1/18*((3*c - 4)*b^2 + (3*c^2 - 8*c)*b - c^2)*a + 1/18*sqrt(4*a^6*b - 32*a^2*b^5 - ((a^2 + 4)*b^2 - 2*(a^2 - 2*a)*b + a^2)*c^4 + (27*a^4 - 56*a^3 - 16*a^2)*b^4 + 2*(27*a^5 - 6*a^4 - 8*a^3)*b^3 - 2*((2*a^3 + a^2 + 10*a + 8)*b^3 - (5*a^3 - 5*a^2 - 4*a)*b^2 - a^3 + 2*(a^3 - a^2)*b)*c^3 + (27*a^6 + 16*a^5 - 4*a^4)*b^2 - ((13*a^2 + 40*a + 16)*b^4 - 4*(4*a^3 - 5*a^2 + 4*a)*b^3 + a^4 - (17*a^4 - 8*a^3 + 24*a^2)*b^2 + 4*(a^4 - a^3)*b)*c^2 - 2*(16*a*b^5 - (9*a^3 - 32*a^2 + 16*a)*b^4 - 2*(9*a^4 - 23*a^3 + 4*a^2)*b^3 - 
(9*a^5 - 27*a^4 - 4*a^3)*b^2 + (3*a^5 + 2*a^4)*b)*c)*sqrt(3))^(1/3) + 1/3*a + 1/3*b + 1/3*c, 1/3*a + 1/3*b + 1/3*c + 1/9*(((3*c + 2)*b - c)*a + a^2 + 4*b^2 + 2*b*c + c^2)/(1/54*(27*b + 2)*a^3 + 1/18*((3*c + 2)*b + 9*b^2 - c)*a^2 - 8/27*b^3 - 2/9*b^2*c + 1/9*b*c^2 + 1/27*c^3 + 1/18*((3*c - 4)*b^2 + (3*c^2 - 8*c)*b - c^2)*a + 1/18*sqrt(4*a^6*b - 32*a^2*b^5 - ((a^2 + 4)*b^2 - 2*(a^2 - 2*a)*b + a^2)*c^4 + (27*a^4 - 56*a^3 - 16*a^2)*b^4 + 2*(27*a^5 - 6*a^4 - 8*a^3)*b^3 - 2*((2*a^3 + a^2 + 10*a + 8)*b^3 - (5*a^3 - 5*a^2 - 4*a)*b^2 - a^3 + 2*(a^3 - a^2)*b)*c^3 + (27*a^6 + 16*a^5 - 4*a^4)*b^2 - ((13*a^2 + 40*a + 16)*b^4 - 4*(4*a^3 - 5*a^2 + 4*a)*b^3 + a^4 - (17*a^4 - 8*a^3 + 24*a^2)*b^2 + 4*(a^4 - a^3)*b)*c^2 - 2*(16*a*b^5 - (9*a^3 - 32*a^2 + 16*a)*b^4 - 2*(9*a^4 - 23*a^3 + 4*a^2)*b^3 - (9*a^5 - 27*a^4 - 4*a^3)*b^2 + (3*a^5 + 2*a^4)*b)*c)*sqrt(3))^(1/3) + (1/54*(27*b + 2)*a^3 + 1/18*((3*c + 2)*b + 9*b^2 - c)*a^2 - 8/27*b^3 - 2/9*b^2*c + 1/9*b*c^2 + 1/27*c^3 + 1/18*((3*c - 4)*b^2 + (3*c^2 - 8*c)*b - c^2)*a + 1/18*sqrt(4*a^6*b - 32*a^2*b^5 - ((a^2 + 4)*b^2 - 2*(a^2 - 2*a)*b + a^2)*c^4 + (27*a^4 - 56*a^3 - 16*a^2)*b^4 + 2*(27*a^5 - 6*a^4 - 8*a^3)*b^3 - 2*((2*a^3 + a^2 + 10*a + 8)*b^3 - (5*a^3 - 5*a^2 - 4*a)*b^2 - a^3 + 2*(a^3 - a^2)*b)*c^3 + (27*a^6 + 16*a^5 - 4*a^4)*b^2 - ((13*a^2 + 40*a + 16)*b^4 - 4*(4*a^3 - 5*a^2 + 4*a)*b^3 + a^4 - (17*a^4 - 8*a^3 + 24*a^2)*b^2 + 4*(a^4 - a^3)*b)*c^2 - 2*(16*a*b^5 - (9*a^3 - 32*a^2 + 16*a)*b^4 - 2*(9*a^4 - 23*a^3 + 4*a^2)*b^3 - (9*a^5 - 27*a^4 - 4*a^3)*b^2 + (3*a^5 + 2*a^4)*b)*c)*sqrt(3))^(1/3)] and then do stuff like sage: v = m_evals[0] sage: v(a=1,b=-1.1, c=0) -1/2*(I*sqrt(3) + 1)*((8.44590139893299e-18 + 0.137932037299462*I)*sqrt(3) + 0.0952962962962964)^(1/3) - 0.202222222222222*(-I*sqrt(3) + 1)/((8.44590139893299e-18 + 0.137932037299462*I)*sqrt(3) + 0.0952962962962964)^(1/3) - 0.0333333333333334 sage: _.n() -0.193821989474223 Harald From emmanuelle.gouillart at normalesup.org Tue Jun 9 13:08:38 2009 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Tue, 9 Jun 2009 19:08:38 +0200 Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy? In-Reply-To: References: Message-ID: <20090609170838.GC27916@phare.normalesup.org> Hi Xiaojian , you can use sympy, a very good Python module for symbolic mathematics >>> from sympy import Symbol, Matrix >>> a = Symbol('a') >>> b = Symbol('b') >>> c = Symbol('c') >>> m = Matrix(([a, b, a*b], [a+b, 0, 0], [c, a, c+b])) >>> m.det() -a*b*c + a**2*b**2 - a*b**2 - c*b**2 - b**3 + b*a**3 >>> m.eigenvals() [... long lines of symbols!!] Check the sympy documentation on http://docs.sympy.org (and http://docs.sympy.org/modules/matrices.html#linear-algebra of linear algebra). Cheers, Emmanuelle On Tue, Jun 09, 2009 at 09:35:45AM -0700, Xiaojian Wang wrote: > Hi, > I would like to know if there is any module in Scipy (or Numpy?),? which > has capability > of symbolic manipulation and can generate matrix's > eigenvalues/eigenvectors. > My matrix is symbolic, such as a 3x3 matrix: > [ a,???? b,?? a*b? ] > [a+b,? 0,? ? 0??? ] > [c,???? a,? ?? c+b] > a,b,c could be any values in my future applications. > if not, do you guy know any other programs can do this? > Many thanks for your help in advance! 
> Xiaoijan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From d_l_goldsmith at yahoo.com  Tue Jun  9 13:31:47 2009
From: d_l_goldsmith at yahoo.com (David Goldsmith)
Date: Tue, 9 Jun 2009 10:31:47 -0700 (PDT)
Subject: [SciPy-user] [SciPy-dev] More on Summer NumPy Doc Marathon
Message-ID: <570350.72787.qm@web52101.mail.re2.yahoo.com>

Thanks, Bruce.

--- On Tue, 6/9/09, Bruce Southey wrote:

> Hi,
> Great.
> Can you provide the actual colors at the start for :
>     * Edit white, light gray, or yellow
>     * Don't edit dark gray
> While not color-blind, not all browsers render the same
> colors on all
> operating systems etc.
> What are you using for light gray or is that meant to be
> blue. If it is
> 'blue' then what does it mean?
> It appears to be the same color used on the Front Page to
> say 'Proofed'.

Yes, by all means, I agree 100% (I'm having the same problem). :-)  In
fact, I think (and have thought) that the light and dark grey and cyan
are all too close to each other - anyone object to me replacing the greys
w/ orange and lavender?

> What does the 'green' color mean?
> The links says 'Reviewed (needs proof)'? but how does
> one say 'proofed'.

Cyan (if I understand you correctly).

> Also the milestone link from the Front Page does not go
> anywhere:
> http://docs.scipy.org/numpy/Front%20Page/#milestones

Oops, accidentally broke that editing the page to make the Milestones
link more prominent - I'll fix it imminently.

Thanks again,
DG

>
> Bruce
>
>
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

From d_l_goldsmith at yahoo.com  Tue Jun  9 13:41:21 2009
From: d_l_goldsmith at yahoo.com (David Goldsmith)
Date: Tue, 9 Jun 2009 10:41:21 -0700 (PDT)
Subject: [SciPy-user] [SciPy-dev] More on Summer NumPy Doc Marathon
Message-ID: <988376.79271.qm@web52101.mail.re2.yahoo.com>

> Also the milestone link from the Front Page does not go
> anywhere:
> http://docs.scipy.org/numpy/Front%20Page/#milestones

Fixed.

DG

From wierob83 at googlemail.com  Tue Jun  9 14:21:04 2009
From: wierob83 at googlemail.com (wierob)
Date: Tue, 09 Jun 2009 20:21:04 +0200
Subject: [SciPy-user] Limits of linrgress - underflow encountered in stdtr
In-Reply-To: <1cd32cbb0906090645o50494221u7e2a6c4f2acc5493@mail.gmail.com>
References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com> <4A2D7FA9.3060002@googlemail.com> <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com> <4A2E4E1E.5030403@googlemail.com> <1cd32cbb0906090645o50494221u7e2a6c4f2acc5493@mail.gmail.com>
Message-ID: <4A2EA810.8090608@googlemail.com>

Hi,

> but the p-value is
>
>> 6.009953e-314 and r-squared is different. While 6.009953e-314 is small
>> enough to say it's 0 and the result is highly significant
>>
>
> *highly significant* ???
>
As far as I understood, a value below a certain confidence value (usually
0.05 or 0.01) means the result is statistically proven.
> What significance level do you want to use to accept the Null when you
> are using the result of R?
>
> Note: R initially only reported Pr(>|t|) < 2e-16 ***
>
> There are arguments for reporting any statistics only to a few
> decimals. I wonder why?
>
I tested again with Scipy 0.7.0 and the p-value was 6.00995334253e-314,
the same as R's.

So, for now I assume that linregress works as it should.
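A minimal sketch that reproduces the effect (made-up data, assuming scipy
0.7.0; this is an illustration, not the script from this thread):

import numpy as np
from scipy import stats

x = np.arange(600.)
y = 2.0*x - 0.77 + 1e-3*np.random.randn(600)   # an almost perfect line

slope, intercept, r, p, stderr = stats.linregress(x, y)
print p   # around 1e-314, or exactly 0.0 once stdtr underflows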
Also, I don't understand why ignoring these errors is ok.

Thanks a lot
regards
robert

From robert.kern at gmail.com  Tue Jun  9 14:33:11 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 9 Jun 2009 13:33:11 -0500
Subject: [SciPy-user] Limits of linrgress - underflow encountered in stdtr
In-Reply-To: <4A2EA810.8090608@googlemail.com>
References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com> <4A2D7FA9.3060002@googlemail.com> <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com> <4A2E4E1E.5030403@googlemail.com> <1cd32cbb0906090645o50494221u7e2a6c4f2acc5493@mail.gmail.com> <4A2EA810.8090608@googlemail.com>
Message-ID: <3d375d730906091133v618c419ewee4e9c163168203f@mail.gmail.com>

On Tue, Jun 9, 2009 at 13:21, wierob wrote:
> Hi,
>
>> but the p-value is
>>
>>> 6.009953e-314 and r-squared is different. While 6.009953e-314 is small
>>> enough to say it's 0 and the result is highly significant
>>>
>>
>> *highly significant* ???
>>
> As far as I understood, a value below a certain confidence value (usually
> 0.05 or 0.01) means the result is statistically proven.
>> What significance level do you want to use to accept the Null when you
>> are using the result of R?
>>
>> Note: R initially only reported Pr(>|t|) < 2e-16 ***
>>
>> There are arguments for reporting any statistics only to a few
>> decimals. I wonder why?
>>
> I tested again with Scipy 0.7.0 and the p-value was 6.00995334253e-314,
> the same as R's.
>
> So, for now I assume that linregress works as it should. Also, I don't
> understand why ignoring these errors is ok.

They aren't really errors in this case. That's why we give you the
ability to control how underflows are handled instead of always raising
an exception.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From wangxj.uc at gmail.com  Tue Jun  9 19:21:54 2009
From: wangxj.uc at gmail.com (Xiaojian Wang)
Date: Tue, 9 Jun 2009 16:21:54 -0700
Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy?
In-Reply-To: <20090609170838.GC27916@phare.normalesup.org>
References: <20090609170838.GC27916@phare.normalesup.org>
Message-ID: 

Hi, I got so many inputs, thanks a lot for your time and help.
Have a nice day!

Xiaojian

ps Andy Fraser, Sorry, I am not the person from Portland State
University. You may know Dan Hammerand, he came from lanl and works here
right now.

On Tue, Jun 9, 2009 at 10:08 AM, Emmanuelle Gouillart <
emmanuelle.gouillart at normalesup.org> wrote:

> Hi Xiaojian ,
>
> you can use sympy, a very good Python module for symbolic mathematics
>
> >>> from sympy import Symbol, Matrix
>
> >>> a = Symbol('a')
>
> >>> b = Symbol('b')
>
> >>> c = Symbol('c')
>
> >>> m = Matrix(([a, b, a*b], [a+b, 0, 0], [c, a, c+b]))
>
> >>> m.det()
> -a*b*c + a**2*b**2 - a*b**2 - c*b**2 - b**3 + b*a**3
>
> >>> m.eigenvals()
> [... long lines of symbols!!]
>
> Check the sympy documentation on http://docs.sympy.org (and
> http://docs.sympy.org/modules/matrices.html#linear-algebra of linear
> algebra).
>
> Cheers,
>
> Emmanuelle
>
> On Tue, Jun 09, 2009 at 09:35:45AM -0700, Xiaojian Wang wrote:
> > Hi,
> > I would like to know if there is any module in Scipy (or Numpy?),
> which
> > has capability
> > of symbolic manipulation and can generate matrix's
> > eigenvalues/eigenvectors.
> > My matrix is symbolic, such as a 3x3 matrix:
>
> > [ a, b, a*b ]
> > [a+b, 0, 0 ]
> > [c, a, c+b]
>
> > a,b,c could be any values in my future applications.
> > if not, do you guy know any other programs can do this?
>
> > Many thanks for your help in advance!
>
> > Xiaoijan
>
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ivo.maljevic at gmail.com  Tue Jun  9 20:04:59 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Tue, 9 Jun 2009 20:04:59 -0400
Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy?
In-Reply-To: 
References: 
Message-ID: <826c64da0906091704s7a7e6e8s34ad6ffb7239498e@mail.gmail.com>

If you need to find a quick result, then the easiest way is to go through
wolfram alpha. Just enter:
eig{{ a, b, a*b ]},{a+b, 0, 0 } ,{c, a, c+b}}

or use the following link:

http://www06.wolframalpha.com/input/?i=eig{{+a%2C+++++b%2C+++a*b++]}%2C{a%2Bb%2C++0%2C++++0++++}+%2C{c%2C+++++a%2C+++++c%2Bb}}

(put the line together if broken).

Ivo

2009/6/9 Xiaojian Wang

> Hi,
> I would like to know if there is any module in Scipy (or Numpy?), which
> has capability
> of symbolic manipulation and can generate matrix's
> eigenvalues/eigenvectors.
> My matrix is symbolic, such as a 3x3 matrix:
>
> [ a, b, a*b ]
> [a+b, 0, 0 ]
> [c, a, c+b]
>
> a,b,c could be any values in my future applications.
> if not, do you guy know any other programs can do this?
>
> Many thanks for your help in advance!
>
> Xiaoijan
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ivo.maljevic at gmail.com  Tue Jun  9 20:12:34 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Tue, 9 Jun 2009 20:12:34 -0400
Subject: [SciPy-user] any symbolic manipulation of eigenvalue in Scipy?
In-Reply-To: <826c64da0906091704s7a7e6e8s34ad6ffb7239498e@mail.gmail.com>
References: <826c64da0906091704s7a7e6e8s34ad6ffb7239498e@mail.gmail.com>
Message-ID: <826c64da0906091712v201767c8vd6c8c181ec9cb0fa@mail.gmail.com>

I had mismatched brackets, which didn't seem to matter. Anyway, a better
way to write your query is to input:

eigenvalue{{ a, b, a*b },{a+b, 0, 0 } ,{c, a, c+b}}

Ivo

2009/6/9 Ivo Maljevic

> If you need to find a quick result, then the easiest way is to go through
> wolfram alpha. Just enter:
> eig{{ a, b, a*b ]},{a+b, 0, 0 } ,{c, a, c+b}}
>
> or use the following link:
>
>
> http://www06.wolframalpha.com/input/?i=eig{{+a%2C+++++b%2C+++a*b++]}%2C{a%2Bb%2C++0%2C++++0++++}+%2C{c%2C+++++a%2C+++++c%2Bb}}
>
> (put the line together if broken).
>
> Ivo
>
> 2009/6/9 Xiaojian Wang
>
>> Hi,
>> I would like to know if there is any module in Scipy (or Numpy?), which
>> has capability
>> of symbolic manipulation and can generate matrix's
>> eigenvalues/eigenvectors.
>>
>> My matrix is symbolic, such as a 3x3 matrix:
>>
>> [ a, b, a*b ]
>> [a+b, 0, 0 ]
>> [c, a, c+b]
>>
>> a,b,c could be any values in my future applications.
>> if not, do you guy know any other programs can do this?
>>
>> Many thanks for your help in advance!
>> >> Xiaoijan >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Tue Jun 9 20:29:45 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 9 Jun 2009 20:29:45 -0400 Subject: [SciPy-user] Limits of linrgress - underflow encountered in stdtr In-Reply-To: <4A2EA810.8090608@googlemail.com> References: <4A2D7001.2010809@googlemail.com> <1cd32cbb0906081324r5e1176b2v46658cab44b9645d@mail.gmail.com> <4A2D7FA9.3060002@googlemail.com> <1cd32cbb0906081613o5bdc19a6t4d043136c60d5d2a@mail.gmail.com> <4A2E4E1E.5030403@googlemail.com> <1cd32cbb0906090645o50494221u7e2a6c4f2acc5493@mail.gmail.com> <4A2EA810.8090608@googlemail.com> Message-ID: <2059E05D-DD6D-4E7E-827F-651489EC302C@cs.toronto.edu> On 9-Jun-09, at 2:21 PM, wierob wrote: > So, for now I assume that linregress works as it should. Also, I don't > understand why ignoring these errors is ok. Because the underflow doesn't change the answer enough to make any conceivable difference. Underflow just means that some quantity that was computed was too tiny to be represented, in which case it is effectively 0. It's notable that R reports "p-value < 2.2e-16", which is machine epsilon in 64-bit floating point, i.e. a bound on the relative error due to rounding. I'd read that as R thinking that the P-value is probably garbage and rounding error dominates (and will vary with the exact formula used), but "less than 2.2e-16" is still small enough to be considered "statistically significant" by any wild stretch of the imagination. David From wnbell at gmail.com Tue Jun 9 23:04:45 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 9 Jun 2009 23:04:45 -0400 Subject: [SciPy-user] numpy.compress and sparse matrices? In-Reply-To: References: Message-ID: On Tue, Jun 9, 2009 at 10:11 AM, Marco Lui wrote: > Hello everyone. > > I am looking for a sparse matrix implementation that supports the > numpy.compress operation, which is akin to "fancy indexing" by a boolean > vector. Working in scipy 0.6.0, I have not been able to find such an > implementation. I usually get an error like the following: > > The above example works correctly when data is a normal numpy array. Does > any implementation of sparse arrays compatible with numpy.compress exist? > The sparse csr_matrix and csc_matrix support fancy row and column indexing. You could just use train_indices.nonzero() to convert the boolean array to an index array and pass that into the normal [] operator as follows: data[train_indices.nonzero()[0], :] (untested) FWIW, CSR should be faster for row extraction and CSC should be faster for columns. 
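For example, a small sketch of that approach (also untested, and assuming
this scipy version's csr_matrix accepts an integer index array for rows):

import numpy as np
from scipy import sparse

data = sparse.csr_matrix(np.arange(12).reshape(4, 3))   # toy sparse matrix
train_indices = np.array([True, False, True, False])    # boolean row mask

rows = train_indices.nonzero()[0]   # boolean mask -> integer row indices
subset = data[rows, :]              # fancy row indexing on CSR
print subset.todense()              # rows 0 and 2 of the original matrix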
-- 
Nathan Bell
wnbell at gmail.com
http://www.wnbell.com/

From sebastian.walter at gmail.com  Wed Jun 10 03:58:36 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Wed, 10 Jun 2009 09:58:36 +0200
Subject: [SciPy-user] curve_fit step-size and optimal parameters
In-Reply-To: <3d375d730906081319u1e43435pf6c932792e057f96@mail.gmail.com>
References: <23931071.post@talk.nabble.com> <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com> <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com> <3d375d730906081319u1e43435pf6c932792e057f96@mail.gmail.com>
Message-ID: 

If you try to fit the frequency with the least-squares distance  the
problem is not only nonlinearity
but rather the fact that the objective functions has many local minimizers.
At least that's what I have observed in a toy example once.

Has anyone experience what to do in that case? (Maybe use L1 norm instead?)

On Mon, Jun 8, 2009 at 10:19 PM, Robert Kern wrote:
> 2009/6/8 Stéfan van der Walt :
>> 2009/6/8 Robert Kern :
>>> On Mon, Jun 8, 2009 at 14:59, ElMickerino wrote:
>>>> My question is, how can I get curve_fit to use a very small step-size for
>>>> the phase, or put in strict limits, and to therefore get a robust fit.  I
>>>> don't want to tune the phase by hand for each of my 60+ datasets.
>>>
>>> You really can't. I recommend the A*sin(w*t)+B*cos(w*t)
>>> parameterization rather than the A*sin(w*t+phi) one.
>>
>> Could you expand?  I can't immediately see why the second
>> parametrisation is bad.
>
> The cyclic nature of phi. It complicates things precisely as the OP describes.
>
>> Can't a person do this fit using non-linear
>> least-squares?  Ah, that's probably why you use the other
>> parametrisation, so that you don't have to use non-linear least
>> squares?
>
> If you aren't also fitting the frequency, then yes. If you are fitting
> for the frequency, too, the problem is still non-linear.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From loris.bennett at fu-berlin.de  Wed Jun 10 09:18:30 2009
From: loris.bennett at fu-berlin.de (Dr. Loris Bennett)
Date: Wed, 10 Jun 2009 15:18:30 +0200
Subject: [SciPy-user] Compile Error: error: expected `)' before 'PRIdPTR'
In-Reply-To: <87octcozqf.fsf@slate.zedat.fu-berlin.de> (Loris Bennett's message of "Fri\, 29 May 2009 14\:33\:28 +0200")
References: <87octcozqf.fsf@slate.zedat.fu-berlin.de>
Message-ID: <8763f4qlax.fsf@slate.zedat.fu-berlin.de>

Hi,

Anyone had any thoughts on this?

Loris

-- 
Dr. Loris Bennett
Computer Centre
Freie Universität Berlin
Berlin, Germany

From cournape at gmail.com  Wed Jun 10 10:38:47 2009
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 10 Jun 2009 23:38:47 +0900
Subject: [SciPy-user] Compile Error: error: expected `)' before 'PRIdPTR'
In-Reply-To: <8763f4qlax.fsf@slate.zedat.fu-berlin.de>
References: <87octcozqf.fsf@slate.zedat.fu-berlin.de> <8763f4qlax.fsf@slate.zedat.fu-berlin.de>
Message-ID: <5b8d13220906100738i7a06be0cg3ae8632322d6465f@mail.gmail.com>

On Wed, Jun 10, 2009 at 10:18 PM, Dr. Loris
Bennett wrote:
> Hi,
>
> Anyone had any thoughts on this?

I guess that inttypes.h (where PRIdPTR is defined in C99) is not
correct for C++ on AIX.
A hack to fix this for you would be to replace PRIdPTR by 'ld' in the *numpy* ndarrayobject.h header. cheers, David From david at ar.media.kyoto-u.ac.jp Wed Jun 10 10:22:31 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 10 Jun 2009 23:22:31 +0900 Subject: [SciPy-user] Compile Error: error: expected `)' before 'PRIdPTR' In-Reply-To: <5b8d13220906100738i7a06be0cg3ae8632322d6465f@mail.gmail.com> References: <87octcozqf.fsf@slate.zedat.fu-berlin.de> <8763f4qlax.fsf@slate.zedat.fu-berlin.de> <5b8d13220906100738i7a06be0cg3ae8632322d6465f@mail.gmail.com> Message-ID: <4A2FC1A7.30609@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > On Wed, Jun 10, 2009 at 10:18 PM, Dr. Loris > Bennett wrote: > >> Hi, >> >> Anyone had any thoughts on this? >> > > I guess that inttypes.h (where PRIdPTR is defined in C99) is not > correct for C++ on AIX. A hack to fix this for you would be to replace > PRIdPTR by 'ld' in the *numpy* ndarrayobject.h header. > Sorry, should be "ld", not 'ld' David From josef.pktd at gmail.com Wed Jun 10 12:12:51 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 10 Jun 2009 12:12:51 -0400 Subject: [SciPy-user] curve_fit step-size and optimal parameters In-Reply-To: References: <23931071.post@talk.nabble.com> <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com> <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com> <3d375d730906081319u1e43435pf6c932792e057f96@mail.gmail.com> Message-ID: <1cd32cbb0906100912q9091a55w2e93cb72621db465@mail.gmail.com> On Wed, Jun 10, 2009 at 3:58 AM, Sebastian Walter wrote: > If you try to fit the frequency with the least-squares distance ?the > problem is not only nonlinearity > but rather the fact that the objective functions has many local minimizers. > At least that's what I have observed in a toy example once. > > Has anyone experience what to do in that case? (Maybe use L1 norm instead?) I would look at estimation in frequency domain, which I know next to nothing about. But for your example, I manged to get the estimate either by adding a bit of noise to your y, or by estimating the constant separately. When I remove the constant, the indeterminacy (?) in the parameter estimate went away. Also if there is a small trend then the estimation worked. The other way, I would try in cases like this would be to use a penalization term (as in Tychonov or Ridge) in the objective function, but I didn't try out how well this would work in your case. Josef > > > > On Mon, Jun 8, 2009 at 10:19 PM, Robert Kern wrote: >> 2009/6/8 St?fan van der Walt : >>> 2009/6/8 Robert Kern : >>>> On Mon, Jun 8, 2009 at 14:59, ElMickerino wrote: >>>>> My question is, how can I get curve_fit to use a very small step-size for >>>>> the phase, or put in strict limits, and to therefore get a robust fit. ?I >>>>> don't want to tune the phase by hand for each of my 60+ datasets. >>>> >>>> You really can't. I recommend the A*sin(w*t)+B*cos(w*t) >>>> parameterization rather than the A*sin(w*t+phi) one. >>> >>> Could you expand? ?I can't immediately see why the second >>> parametrisation is bad. >> >> The cyclic nature of phi. It complicates things precisely as the OP describes. >> >>> Can't a person do this fit using non-linear >>> least-squares? ?Ah, that's probably why you use the other >>> parametrisation, so that you don't have to use non-linear least >>> squares? >> >> If you aren't also fitting the frequency, then yes. If you are fitting >> for the frequency, too, the problem is still non-linear. 
>> >> -- -------------- next part -------------- # author: michael a schmidt # purpose: fit sinusoid with constant offset from numpy import * from scipy.optimize import * def f(x, a, b, c, d): return (b*sin(c*x + d)) + a#/(x+1e2) #a*x**2 # #return (a + b*sin(c*x + d)) #return (a + b*sin(c*x)) + d*cos(e*x) def f2(x, b, c, d): return (b*sin(c*x + d)) def fp(y, x, a, b, c, d): # penalized objective function, not used return np.sum(y - (b*sin(c*x + d))**2) + 1e-4*(a**2 + b**2 + c**2 + d**2) def do_it(): x, y = load_data('data_file.txt') y = (1e6)*y + 0.05*np.random.normal(size=y.shape) f_probe = 5.380 f_pump = 5.408 #Hz f_beat = abs(f_probe - f_pump) w_beat = 2.*pi*f_beat V0 = 0.5*(max(y) + min(y)) V1 = max(y) - V0 d0 = 0.0 #initial guess of zero phase p0=[V0, V1, w_beat, d0] popt, pcov = curve_fit(f, x, y, p0=[V0, V1, w_beat, d0]) print "found: V1 = %f +/- %f" % (popt[1], sqrt(pcov[1][1])) return popt, x,y, p0, pcov def do_it2(): x, y = load_data('data_file.txt') y = (1e6)*y f_probe = 5.380 f_pump = 5.408 #Hz f_beat = abs(f_probe - f_pump) w_beat = 2.*pi*f_beat V0 = 0.5*(max(y) + min(y)) V1 = max(y) - V0 d0 = 0.0 #initial guess of zero phase p0=[V0, V1, w_beat, d0] popt, pcov = curve_fit(f2, x, y-y.mean(), p0=[V1, w_beat, d0]) aest = (y - f2(x, *popt)).mean() popt2 = np.hstack((aest,popt)) print "found: V1 = %f +/- %f" % (popt[1], sqrt(pcov[1][1])) return popt2, x,y, p0, pcov def load_data(in_file): input = open(in_file, 'r') lines = input.readlines() x = array([0.]*len(lines)) y = array([0.]*len(lines)) for i, line in enumerate(lines): v1 = float(line.split()[0]) v2 = float(line.split()[1]) x[i] = v1 y[i] = v2 return (x, y) popt, x,y, p0, pcov = do_it() import matplotlib.pyplot as plt yh = f(x, *popt) plt.figure() plt.plot(x,y,x,yh) pstd = np.sqrt(np.diag(pcov)) print pcov/np.outer(pstd,pstd) popt, x,y, p0, pcov = do_it2() yh = f2(x, *popt[1:]) + popt[0] plt.figure() plt.plot(x,y,x,yh) #plt.show() pstd = np.sqrt(np.diag(pcov)) print pcov/np.outer(pstd,pstd) From josef.pktd at gmail.com Wed Jun 10 12:27:42 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 10 Jun 2009 12:27:42 -0400 Subject: [SciPy-user] curve_fit step-size and optimal parameters In-Reply-To: <1cd32cbb0906100912q9091a55w2e93cb72621db465@mail.gmail.com> References: <23931071.post@talk.nabble.com> <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com> <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com> <3d375d730906081319u1e43435pf6c932792e057f96@mail.gmail.com> <1cd32cbb0906100912q9091a55w2e93cb72621db465@mail.gmail.com> Message-ID: <1cd32cbb0906100927o3950afefsae8172b48f310d43@mail.gmail.com> On Wed, Jun 10, 2009 at 12:12 PM, wrote: > On Wed, Jun 10, 2009 at 3:58 AM, Sebastian > Walter wrote: >> If you try to fit the frequency with the least-squares distance ?the >> problem is not only nonlinearity >> but rather the fact that the objective functions has many local minimizers. >> At least that's what I have observed in a toy example once. >> >> Has anyone experience what to do in that case? (Maybe use L1 norm instead?) > > I would look at estimation in frequency domain, which I know next to > nothing about. > > But for your example, I manged to get the estimate either by adding a > bit of noise to your y, or by estimating the constant separately. When > I remove the constant, the indeterminacy (?) in the parameter estimate > went away. Also if there is a small trend then the estimation worked. 
> > The other way, I would try in cases like this would be to use a > penalization term (as in Tychonov or Ridge) in the objective function, > but I didn't try out how well this would work in your case. > Your example also estimates correctly with a starting value V0 = 0 or small negative V0. In scipy stats, I also found a distribution, where the estimation converges to the correct estimate only from one side of the true parameter (but no idea why) Josef From cournape at gmail.com Wed Jun 10 12:35:49 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 11 Jun 2009 01:35:49 +0900 Subject: [SciPy-user] curve_fit step-size and optimal parameters In-Reply-To: <1cd32cbb0906100912q9091a55w2e93cb72621db465@mail.gmail.com> References: <23931071.post@talk.nabble.com> <3d375d730906081302w7f8158f3sd7c95fc20829eae3@mail.gmail.com> <9457e7c80906081315l6f95f19dl7e63ae4e6cb92a3b@mail.gmail.com> <3d375d730906081319u1e43435pf6c932792e057f96@mail.gmail.com> <1cd32cbb0906100912q9091a55w2e93cb72621db465@mail.gmail.com> Message-ID: <5b8d13220906100935m75383c7djb9587da443eb209b@mail.gmail.com> On Thu, Jun 11, 2009 at 1:12 AM, wrote: > On Wed, Jun 10, 2009 at 3:58 AM, Sebastian > Walter wrote: >> If you try to fit the frequency with the least-squares distance ?the >> problem is not only nonlinearity >> but rather the fact that the objective functions has many local minimizers. >> At least that's what I have observed in a toy example once. >> >> Has anyone experience what to do in that case? (Maybe use L1 norm instead?) > > I would look at estimation in frequency domain, which I know next to > nothing about. Yes, that's how I would do as well. Use a periodogram with a simple peak picking algorithm to get the frequency estimate, and then use the parametrisation suggested by Robert to get A and B (as a linear problem). If the signal has constant frequency/phase/amplitude with uncorrelated noise, this should work very well, even in relatively low SNR situation. cheers, David From ryanlists at gmail.com Wed Jun 10 15:57:58 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 10 Jun 2009 14:57:58 -0500 Subject: [SciPy-user] f2py problem: failed to map segment from shared object: Operation not permitted Message-ID: I am trying to reuse some old code. I am running into a problem importing a module created by f2py. Here is the traceback: /home/ryan/pythonutil/rwkmisc.pyc in my_import(name) 347 348 def my_import(name): --> 349 mod = __import__(name) 350 components = name.split('.') 351 for comp in components[1:]: ImportError: /home/ryan/thesis/sym_control_design/fortran_model_bode1f.so: failed to map segment from shared object: Operation not permitted WARNING: Failure executing file: I just tried recompiling the code. The output of "f2py -c -m fortran_model_bode1f fortran_model_bode1_out.f" is below. Note that g77 doesn't show up in my Ubuntu 9.04 package manager. I have gfortran installed. Can I safely ignore all the messages about not finding various executeables since it eventually finds gfortran, or is this part of the problem? Recompiling did not fix my problem. The fortran file is attached. 
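One more thing that may be worth ruling out (a guess about this class of
error, not something confirmed in this thread): "failed to map segment
from shared object: Operation not permitted" can also be caused by the
.so living on a partition mounted noexec (or by SELinux), independent of
how the module was compiled. A quick check would be something like

mount | grep home    # look for a 'noexec' flag on the partition holding the .so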
Thanks, Ryan ryan at ryan-duo-laptop:~/thesis/sym_control_design$ f2py -c -m fortran_model_bode1f fortran_model_bode1_out.f running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building extension "fortran_model_bode1f" sources f2py options: [] f2py:> /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c creating /tmp/tmp3H0RSG creating /tmp/tmp3H0RSG/src.linux-i686-2.6 Reading fortran codes... Reading file 'fortran_model_bode1_out.f' (format:fix,strict) Post-processing... Block: fortran_model_bode1f Block: bodevect Block: invbodevect Block: invbode Block: zcosh Block: zsinh Block: bode Post-processing (stage 2)... Building modules... Building module "fortran_model_bode1f"... Constructing wrapper function "bodevect"... outvect = bodevect(svect,ucv) Constructing wrapper function "invbodevect"... outvect = invbodevect(svect,ucv) Creating wrapper for Fortran function "invbode"("invbode")... Constructing wrapper function "invbode"... invbode = invbode(s,ucv) Creating wrapper for Fortran function "zcosh"("zcosh")... Constructing wrapper function "zcosh"... zcosh = zcosh(z) Creating wrapper for Fortran function "zsinh"("zsinh")... Constructing wrapper function "zsinh"... zsinh = zsinh(z) Creating wrapper for Fortran function "bode"("bode")... Constructing wrapper function "bode"... bode = bode(s,ucv) Wrote C/API module "fortran_model_bode1f" to file "/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c" Fortran 77 wrappers are saved to "/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f" adding '/tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.c' to sources. adding '/tmp/tmp3H0RSG/src.linux-i686-2.6' to include_dirs. copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmp3H0RSG/src.linux-i686-2.6 copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmp3H0RSG/src.linux-i686-2.6 adding '/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 Could not locate executable pgf77 customize AbsoftFCompiler Could not locate executable f90 customize NAGFCompiler Found executable /usr/bin/f95 customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_ext building 'fortran_model_bode1f' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC creating /tmp/tmp3H0RSG/tmp creating /tmp/tmp3H0RSG/tmp/tmp3H0RSG creating /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6 compile options: '-I/tmp/tmp3H0RSG/src.linux-i686-2.6 -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c' gcc: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c gcc: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.c compiling Fortran sources Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double compile options: '-I/tmp/tmp3H0RSG/src.linux-i686-2.6 -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c' gfortran:f77: fortran_model_bode1_out.f gfortran:f77: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f /usr/bin/gfortran -Wall -Wall -shared /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.o /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.o /tmp/tmp3H0RSG/fortran_model_bode1_out.o /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.o -lgfortran -o ./fortran_model_bode1f.so running scons Removing build directory /tmp/tmp3H0RSG -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fortran_model_bode1_out.f Type: text/x-fortran Size: 6127 bytes Desc: not available URL: From ryanlists at gmail.com Wed Jun 10 17:50:28 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 10 Jun 2009 16:50:28 -0500 Subject: [SciPy-user] f2py problem: failed to map segment from shared object: Operation not permitted In-Reply-To: References: Message-ID: I found this thread from a year ago: http://mail.scipy.org/pipermail/numpy-discussion/2008-June/035107.html The --fcompiler=gnu95 switch cleans up my output, but doesn't get rid of my problem: #------------------------------------ ryan at ryan-duo-laptop:~/thesis/sym_control_design$ f2py -c -m fortran_model_bode1f --fcompiler=gnu95 fortran_model_bode1_out.f running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building extension "fortran_model_bode1f" sources f2py options: [] f2py:> /tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.c creating /tmp/tmpocK64J creating /tmp/tmpocK64J/src.linux-i686-2.6 Reading fortran codes... Reading file 'fortran_model_bode1_out.f' (format:fix,strict) Post-processing... Block: fortran_model_bode1f Block: bodevect Block: invbodevect Block: invbode Block: zcosh Block: zsinh Block: bode Post-processing (stage 2)... Building modules... Building module "fortran_model_bode1f"... Constructing wrapper function "bodevect"... outvect = bodevect(svect,ucv) Constructing wrapper function "invbodevect"... outvect = invbodevect(svect,ucv) Creating wrapper for Fortran function "invbode"("invbode")... Constructing wrapper function "invbode"... invbode = invbode(s,ucv) Creating wrapper for Fortran function "zcosh"("zcosh")... Constructing wrapper function "zcosh"... zcosh = zcosh(z) Creating wrapper for Fortran function "zsinh"("zsinh")... Constructing wrapper function "zsinh"... zsinh = zsinh(z) Creating wrapper for Fortran function "bode"("bode")... Constructing wrapper function "bode"... bode = bode(s,ucv) Wrote C/API module "fortran_model_bode1f" to file "/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.c" Fortran 77 wrappers are saved to "/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f" adding '/tmp/tmpocK64J/src.linux-i686-2.6/fortranobject.c' to sources. adding '/tmp/tmpocK64J/src.linux-i686-2.6' to include_dirs. copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpocK64J/src.linux-i686-2.6 copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpocK64J/src.linux-i686-2.6 adding '/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler using build_ext building 'fortran_model_bode1f' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC creating /tmp/tmpocK64J/tmp creating /tmp/tmpocK64J/tmp/tmpocK64J creating /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6 compile options: '-I/tmp/tmpocK64J/src.linux-i686-2.6 -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c' gcc: /tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.c gcc: /tmp/tmpocK64J/src.linux-i686-2.6/fortranobject.c compiling Fortran sources Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double compile options: '-I/tmp/tmpocK64J/src.linux-i686-2.6 -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c' gfortran:f77: fortran_model_bode1_out.f gfortran:f77: /tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f /usr/bin/gfortran -Wall -Wall -shared /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.o /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6/fortranobject.o /tmp/tmpocK64J/fortran_model_bode1_out.o /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.o -lgfortran -o ./fortran_model_bode1f.so running scons Removing build directory /tmp/tmpocK64J #-------------------------------- In [2]: import fortran_model_bode1f --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /home/ryan/siue/Research/papers/noncolocated_TMM/expgraphs/symbolic_modeling.py in () ----> 1 2 3 4 5 ImportError: /home/ryan/thesis/sym_control_design/fortran_model_bode1f.so: failed to map segment from shared object: Operation not permitted On Wed, Jun 10, 2009 at 2:57 PM, Ryan Krauss wrote: > I am trying to reuse some old code. I am running into a problem importing > a module created by f2py. Here is the traceback: > > /home/ryan/pythonutil/rwkmisc.pyc in my_import(name) > 347 > 348 def my_import(name): > --> 349 mod = __import__(name) > 350 components = name.split('.') > 351 for comp in components[1:]: > > ImportError: /home/ryan/thesis/sym_control_design/fortran_model_bode1f.so: > failed to map segment from shared object: Operation not permitted > WARNING: Failure executing file: > > I just tried recompiling the code. The output of "f2py -c -m > fortran_model_bode1f fortran_model_bode1_out.f" is below. Note that g77 > doesn't show up in my Ubuntu 9.04 package manager. I have gfortran > installed. Can I safely ignore all the messages about not finding various > executeables since it eventually finds gfortran, or is this part of the > problem? Recompiling did not fix my problem. The fortran file is attached. 
> > Thanks, > > Ryan > > ryan at ryan-duo-laptop:~/thesis/sym_control_design$ f2py -c -m > fortran_model_bode1f fortran_model_bode1_out.f > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler > options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > building extension "fortran_model_bode1f" sources > f2py options: [] > f2py:> /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c > creating /tmp/tmp3H0RSG > creating /tmp/tmp3H0RSG/src.linux-i686-2.6 > Reading fortran codes... > Reading file 'fortran_model_bode1_out.f' (format:fix,strict) > Post-processing... > Block: fortran_model_bode1f > Block: bodevect > Block: invbodevect > Block: invbode > Block: zcosh > Block: zsinh > Block: bode > Post-processing (stage 2)... > Building modules... > Building module "fortran_model_bode1f"... > Constructing wrapper function "bodevect"... > outvect = bodevect(svect,ucv) > Constructing wrapper function "invbodevect"... > outvect = invbodevect(svect,ucv) > Creating wrapper for Fortran function "invbode"("invbode")... > Constructing wrapper function "invbode"... > invbode = invbode(s,ucv) > Creating wrapper for Fortran function "zcosh"("zcosh")... > Constructing wrapper function "zcosh"... > zcosh = zcosh(z) > Creating wrapper for Fortran function "zsinh"("zsinh")... > Constructing wrapper function "zsinh"... > zsinh = zsinh(z) > Creating wrapper for Fortran function "bode"("bode")... > Constructing wrapper function "bode"... > bode = bode(s,ucv) > Wrote C/API module "fortran_model_bode1f" to file > "/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c" > Fortran 77 wrappers are saved to > "/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f" > adding '/tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.c' to sources. > adding '/tmp/tmp3H0RSG/src.linux-i686-2.6' to include_dirs. > copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.c -> > /tmp/tmp3H0RSG/src.linux-i686-2.6 > copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.h -> > /tmp/tmp3H0RSG/src.linux-i686-2.6 > adding > '/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f' to > sources. 
> running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize GnuFCompiler > Could not locate executable g77 > Could not locate executable f77 > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize LaheyFCompiler > Could not locate executable lf95 > customize PGroupFCompiler > Could not locate executable pgf90 > Could not locate executable pgf77 > customize AbsoftFCompiler > Could not locate executable f90 > customize NAGFCompiler > Found executable /usr/bin/f95 > customize VastFCompiler > customize GnuFCompiler > customize CompaqFCompiler > Could not locate executable fort > customize IntelItaniumFCompiler > Could not locate executable efort > Could not locate executable efc > customize IntelEM64TFCompiler > customize Gnu95FCompiler > Found executable /usr/bin/gfortran > customize Gnu95FCompiler > customize Gnu95FCompiler using build_ext > building 'fortran_model_bode1f' extension > compiling C sources > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall > -Wstrict-prototypes -fPIC > > creating /tmp/tmp3H0RSG/tmp > creating /tmp/tmp3H0RSG/tmp/tmp3H0RSG > creating /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6 > compile options: '-I/tmp/tmp3H0RSG/src.linux-i686-2.6 > -I/usr/lib/python2.6/dist-packages/numpy/core/include > -I/usr/include/python2.6 -c' > gcc: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c > gcc: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.c > compiling Fortran sources > Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 > -msse -msse3 -fomit-frame-pointer -malign-double > Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC > -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 > -fomit-frame-pointer -malign-double > Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops > -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double > compile options: '-I/tmp/tmp3H0RSG/src.linux-i686-2.6 > -I/usr/lib/python2.6/dist-packages/numpy/core/include > -I/usr/include/python2.6 -c' > gfortran:f77: fortran_model_bode1_out.f > gfortran:f77: > /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f > /usr/bin/gfortran -Wall -Wall -shared > /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.o > /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.o > /tmp/tmp3H0RSG/fortran_model_bode1_out.o > /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.o > -lgfortran -o ./fortran_model_bode1f.so > running scons > Removing build directory /tmp/tmp3H0RSG > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Wed Jun 10 18:03:57 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 10 Jun 2009 17:03:57 -0500 Subject: [SciPy-user] f2py problem: failed to map segment from shared object: Operation not permitted In-Reply-To: References: Message-ID: So, I found a thread on ubuntuforums that says that passing -std=legacy to gfortran allows it to correctly compile files written for g77 or f77. How would I pass such a flag to f2py?
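[A sketch of one way to do this, not verified against this particular file: f2py exposes --f77flags (and --f90flags for free-form sources) and forwards them to the Fortran compiler, so -std=legacy could be passed along the lines of

    f2py -c -m fortran_model_bode1f --fcompiler=gnu95 \
         --f77flags="-std=legacy" fortran_model_bode1_out.f

using the module and file names from this thread.]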
On Wed, Jun 10, 2009 at 4:50 PM, Ryan Krauss wrote: > I found this thread from a year ago: > http://mail.scipy.org/pipermail/numpy-discussion/2008-June/035107.html > > The --fcompiler=gnu95 switch cleans up my output, but doesn't get rid of my > problem: > > > #------------------------------------ > ryan at ryan-duo-laptop:~/thesis/sym_control_design$ f2py -c -m > fortran_model_bode1f --fcompiler=gnu95 fortran_model_bode1_out.f > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler > options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > building extension "fortran_model_bode1f" sources > f2py options: [] > f2py:> /tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.c > creating /tmp/tmpocK64J > creating /tmp/tmpocK64J/src.linux-i686-2.6 > > Reading fortran codes... > Reading file 'fortran_model_bode1_out.f' (format:fix,strict) > Post-processing... > Block: fortran_model_bode1f > Block: bodevect > Block: invbodevect > Block: invbode > Block: zcosh > Block: zsinh > Block: bode > Post-processing (stage 2)... > Building modules... > Building module "fortran_model_bode1f"... > Constructing wrapper function "bodevect"... > outvect = bodevect(svect,ucv) > Constructing wrapper function "invbodevect"... > outvect = invbodevect(svect,ucv) > Creating wrapper for Fortran function "invbode"("invbode")... > Constructing wrapper function "invbode"... > invbode = invbode(s,ucv) > Creating wrapper for Fortran function "zcosh"("zcosh")... > Constructing wrapper function "zcosh"... > zcosh = zcosh(z) > Creating wrapper for Fortran function "zsinh"("zsinh")... > Constructing wrapper function "zsinh"... > zsinh = zsinh(z) > Creating wrapper for Fortran function "bode"("bode")... > Constructing wrapper function "bode"... > bode = bode(s,ucv) > Wrote C/API module "fortran_model_bode1f" to file > "/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.c" > Fortran 77 wrappers are saved to > "/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f" > adding '/tmp/tmpocK64J/src.linux-i686-2.6/fortranobject.c' to sources. > adding '/tmp/tmpocK64J/src.linux-i686-2.6' to include_dirs. > copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.c -> > /tmp/tmpocK64J/src.linux-i686-2.6 > copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.h -> > /tmp/tmpocK64J/src.linux-i686-2.6 > adding > '/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f' to > sources. 
> running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize Gnu95FCompiler > Found executable /usr/bin/gfortran > customize Gnu95FCompiler using build_ext > building 'fortran_model_bode1f' extension > compiling C sources > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall > -Wstrict-prototypes -fPIC > > creating /tmp/tmpocK64J/tmp > creating /tmp/tmpocK64J/tmp/tmpocK64J > creating /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6 > compile options: '-I/tmp/tmpocK64J/src.linux-i686-2.6 > -I/usr/lib/python2.6/dist-packages/numpy/core/include > -I/usr/include/python2.6 -c' > gcc: /tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.c > gcc: /tmp/tmpocK64J/src.linux-i686-2.6/fortranobject.c > compiling Fortran sources > Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 > -msse -msse3 -fomit-frame-pointer -malign-double > Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC > -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 > -fomit-frame-pointer -malign-double > Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops > -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double > compile options: '-I/tmp/tmpocK64J/src.linux-i686-2.6 > -I/usr/lib/python2.6/dist-packages/numpy/core/include > -I/usr/include/python2.6 -c' > gfortran:f77: fortran_model_bode1_out.f > gfortran:f77: > /tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f > /usr/bin/gfortran -Wall -Wall -shared > /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1fmodule.o > /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6/fortranobject.o > /tmp/tmpocK64J/fortran_model_bode1_out.o > /tmp/tmpocK64J/tmp/tmpocK64J/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.o > -lgfortran -o ./fortran_model_bode1f.so > running scons > Removing build directory /tmp/tmpocK64J > > #-------------------------------- > > In [2]: import fortran_model_bode1f > --------------------------------------------------------------------------- > ImportError Traceback (most recent call last) > > /home/ryan/siue/Research/papers/noncolocated_TMM/expgraphs/symbolic_modeling.py > in () > ----> 1 > 2 > 3 > 4 > 5 > > ImportError: /home/ryan/thesis/sym_control_design/fortran_model_bode1f.so: > failed to map segment from shared object: Operation not permitted > > > > > > On Wed, Jun 10, 2009 at 2:57 PM, Ryan Krauss wrote: > >> I am trying to reuse some old code. I am running into a problem importing >> a module created by f2py. Here is the traceback: >> >> /home/ryan/pythonutil/rwkmisc.pyc in my_import(name) >> 347 >> 348 def my_import(name): >> --> 349 mod = __import__(name) >> 350 components = name.split('.') >> 351 for comp in components[1:]: >> >> ImportError: /home/ryan/thesis/sym_control_design/fortran_model_bode1f.so: >> failed to map segment from shared object: Operation not permitted >> WARNING: Failure executing file: >> >> I just tried recompiling the code. The output of "f2py -c -m >> fortran_model_bode1f fortran_model_bode1_out.f" is below. Note that g77 >> doesn't show up in my Ubuntu 9.04 package manager. I have gfortran >> installed. Can I safely ignore all the messages about not finding various >> executeables since it eventually finds gfortran, or is this part of the >> problem? Recompiling did not fix my problem. The fortran file is attached. 
>> >> Thanks, >> >> Ryan >> >> ryan at ryan-duo-laptop:~/thesis/sym_control_design$ f2py -c -m >> fortran_model_bode1f fortran_model_bode1_out.f >> running build >> running config_cc >> unifing config_cc, config, build_clib, build_ext, build commands >> --compiler options >> running config_fc >> unifing config_fc, config, build_clib, build_ext, build commands >> --fcompiler options >> running build_src >> building extension "fortran_model_bode1f" sources >> f2py options: [] >> f2py:> /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c >> creating /tmp/tmp3H0RSG >> creating /tmp/tmp3H0RSG/src.linux-i686-2.6 >> Reading fortran codes... >> Reading file 'fortran_model_bode1_out.f' (format:fix,strict) >> Post-processing... >> Block: fortran_model_bode1f >> Block: bodevect >> Block: invbodevect >> Block: invbode >> Block: zcosh >> Block: zsinh >> Block: bode >> Post-processing (stage 2)... >> Building modules... >> Building module "fortran_model_bode1f"... >> Constructing wrapper function "bodevect"... >> outvect = bodevect(svect,ucv) >> Constructing wrapper function "invbodevect"... >> outvect = invbodevect(svect,ucv) >> Creating wrapper for Fortran function "invbode"("invbode")... >> Constructing wrapper function "invbode"... >> invbode = invbode(s,ucv) >> Creating wrapper for Fortran function "zcosh"("zcosh")... >> Constructing wrapper function "zcosh"... >> zcosh = zcosh(z) >> Creating wrapper for Fortran function "zsinh"("zsinh")... >> Constructing wrapper function "zsinh"... >> zsinh = zsinh(z) >> Creating wrapper for Fortran function "bode"("bode")... >> Constructing wrapper function "bode"... >> bode = bode(s,ucv) >> Wrote C/API module "fortran_model_bode1f" to file >> "/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c" >> Fortran 77 wrappers are saved to >> "/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f" >> adding '/tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.c' to sources. >> adding '/tmp/tmp3H0RSG/src.linux-i686-2.6' to include_dirs. >> copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.c -> >> /tmp/tmp3H0RSG/src.linux-i686-2.6 >> copying /usr/lib/python2.6/dist-packages/numpy/f2py/src/fortranobject.h -> >> /tmp/tmp3H0RSG/src.linux-i686-2.6 >> adding >> '/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f' to >> sources. 
>> running build_ext >> customize UnixCCompiler >> customize UnixCCompiler using build_ext >> customize GnuFCompiler >> Could not locate executable g77 >> Could not locate executable f77 >> customize IntelFCompiler >> Could not locate executable ifort >> Could not locate executable ifc >> customize LaheyFCompiler >> Could not locate executable lf95 >> customize PGroupFCompiler >> Could not locate executable pgf90 >> Could not locate executable pgf77 >> customize AbsoftFCompiler >> Could not locate executable f90 >> customize NAGFCompiler >> Found executable /usr/bin/f95 >> customize VastFCompiler >> customize GnuFCompiler >> customize CompaqFCompiler >> Could not locate executable fort >> customize IntelItaniumFCompiler >> Could not locate executable efort >> Could not locate executable efc >> customize IntelEM64TFCompiler >> customize Gnu95FCompiler >> Found executable /usr/bin/gfortran >> customize Gnu95FCompiler >> customize Gnu95FCompiler using build_ext >> building 'fortran_model_bode1f' extension >> compiling C sources >> C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 >> -Wall -Wstrict-prototypes -fPIC >> >> creating /tmp/tmp3H0RSG/tmp >> creating /tmp/tmp3H0RSG/tmp/tmp3H0RSG >> creating /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6 >> compile options: '-I/tmp/tmp3H0RSG/src.linux-i686-2.6 >> -I/usr/lib/python2.6/dist-packages/numpy/core/include >> -I/usr/include/python2.6 -c' >> gcc: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.c >> gcc: /tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.c >> compiling Fortran sources >> Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form >> -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 >> -msse -msse3 -fomit-frame-pointer -malign-double >> Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC >> -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -msse3 >> -fomit-frame-pointer -malign-double >> Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form >> -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops >> -march=i686 -mmmx -msse2 -msse -msse3 -fomit-frame-pointer -malign-double >> compile options: '-I/tmp/tmp3H0RSG/src.linux-i686-2.6 >> -I/usr/lib/python2.6/dist-packages/numpy/core/include >> -I/usr/include/python2.6 -c' >> gfortran:f77: fortran_model_bode1_out.f >> gfortran:f77: >> /tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.f >> /usr/bin/gfortran -Wall -Wall -shared >> /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1fmodule.o >> /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortranobject.o >> /tmp/tmp3H0RSG/fortran_model_bode1_out.o >> /tmp/tmp3H0RSG/tmp/tmp3H0RSG/src.linux-i686-2.6/fortran_model_bode1f-f2pywrappers.o >> -lgfortran -o ./fortran_model_bode1f.so >> running scons >> Removing build directory /tmp/tmp3H0RSG >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Jun 10 21:46:34 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 11 Jun 2009 10:46:34 +0900 Subject: [SciPy-user] f2py problem: failed to map segment from shared object: Operation not permitted In-Reply-To: References: Message-ID: <5b8d13220906101846i4dfb7c5ej4ea027dd6a3c47ff@mail.gmail.com> On Thu, Jun 11, 2009 at 7:03 AM, Ryan Krauss wrote: > So, I found a thread on ubuntuforums that say that passing -std=legacy to > gfortran allows it to correctly complie files written for g77 or f77.? How > would I pass such a flag to f2py? 
I don't think it has anything to do with g77 vs gfortran. Are you using Fedora by any chance? I think this is an error caused by selinux, and that something is needed to make .so files "dlopenable". I have never understood how this was supposed to work, though. cheers, David From ryanlists at gmail.com Wed Jun 10 23:24:27 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 10 Jun 2009 22:24:27 -0500 Subject: [SciPy-user] f2py problem: failed to map segment from shared object: Operation not permitted In-Reply-To: <5b8d13220906101846i4dfb7c5ej4ea027dd6a3c47ff@mail.gmail.com> References: <5b8d13220906101846i4dfb7c5ej4ea027dd6a3c47ff@mail.gmail.com> Message-ID: Hmm. The other thread from a year ago mentioned selinux as well. I am running a vanilla Ubuntu Jaunty Jackalope 9.04 install. But that got me to thinking. It turns out that these files were on a FAT32 partition. I copied them to an ext3 partition, and things seem to be going better. (I am still battling my old code, but the f2py modules import and seem to be working). Thanks for spurring my thinking and saving me a long, fruitless battle with how to make gfortran act like g77. Ryan On Wed, Jun 10, 2009 at 8:46 PM, David Cournapeau wrote: > On Thu, Jun 11, 2009 at 7:03 AM, Ryan Krauss wrote: > > So, I found a thread on ubuntuforums that says that passing -std=legacy to > > gfortran allows it to correctly compile files written for g77 or f77. How > > would I pass such a flag to f2py? > > I don't think it has anything to do with g77 vs gfortran. Are you > using Fedora by any chance? I think this is an error caused by > selinux, and that something is needed to make .so files "dlopenable". > I have never understood how this was supposed to work, though. > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sahar at cmt.co.il Thu Jun 11 01:16:33 2009 From: sahar at cmt.co.il (Sahar) Date: Thu, 11 Jun 2009 08:16:33 +0300 Subject: [SciPy-user] Save array as tiff image Message-ID: Hi, I would like to save a 2D array as a tif image of 16bit gray scale. Thanks for your help, Sahar -------------- next part -------------- An HTML attachment was scrubbed... URL:
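[Sebastian Haase's answer further down suggests PIL; a minimal sketch of that route, assuming a PIL build whose TIFF plugin handles 16-bit grayscale -- the array and output name here are made up for illustration:

    import numpy as np
    import Image  # PIL

    a = (4095 * np.random.rand(256, 256)).astype(np.uint16)
    # build a 16-bit ('I;16') image from the raw buffer; PIL's size is
    # (width, height), i.e. the reverse of the numpy shape
    im = Image.fromstring('I;16', (a.shape[1], a.shape[0]), a.tostring())
    im.save('gray16.tif')
]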
From paul at rudin.co.uk Thu Jun 11 03:42:22 2009 From: paul at rudin.co.uk (Paul Rudin) Date: Thu, 11 Jun 2009 08:42:22 +0100 Subject: [SciPy-user] scipy.optimize.anneal - problem with lower and upper Message-ID: <87fxe76wtd.fsf@rudin.co.uk> I'm experimenting with scipy.optimize.anneal, but I'm confused about the 'lower' and 'upper' arguments. I was expecting that this would limit the range of values passed to the function being optimized. However this appears not to be the case, as running the snippet below illustrates. You'll soon see values being passed in that fall outside the range given by upper and lower. Have I misunderstood what these arguments are supposed to mean or is there a bug? (Incidentally - I'm also surprised that the first thing printed isn't the value for x0 that's passed in - but that's another question.)

from scipy.optimize import anneal
import numpy

def test(*args):
    print args
    return numpy.random.random()

anneal(test, numpy.ones(3)*0.5, lower=numpy.zeros(3), upper=numpy.ones(3))

From seb.haase at gmail.com Thu Jun 11 03:56:06 2009 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 11 Jun 2009 09:56:06 +0200 Subject: [SciPy-user] Save array as tiff image In-Reply-To: References: Message-ID: On Thu, Jun 11, 2009 at 7:16 AM, Sahar wrote: > Hi, > I would like to save a 2D array as a tif image of 16bit gray scale. > > Thanks for your help, > > Sahar Hi Sahar, You need the Python Imaging Library (PIL) for this. Then it's no problem - there are even "helper functions" in scipy that shorten a "3 liner" into a "one liner" - if I'm not mistaken. HTH Sebastian Haase > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From seb.haase at gmail.com Thu Jun 11 03:58:46 2009 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 11 Jun 2009 09:58:46 +0200 Subject: [SciPy-user] scipy.optimize.anneal - problem with lower and upper In-Reply-To: <87fxe76wtd.fsf@rudin.co.uk> References: <87fxe76wtd.fsf@rudin.co.uk> Message-ID: On Thu, Jun 11, 2009 at 9:42 AM, Paul Rudin wrote: > > I'm experimenting with scipy.optimize.anneal, but I'm confused about the > 'lower' and 'upper' arguments. I was expecting that this would limit the > range of values passed to the function being optimized. However this > appears not to be the case, as running the snippet below > illustrates. You'll soon see values being passed in that fall outside > the range given by upper and lower.
> Have I misunderstood what these > arguments are supposed to mean or is there a bug? > > (Incidentally - I'm also surprised that the first thing printed isn't > the value for x0 that's passed in - but that's another question.) >
> from scipy.optimize import anneal
> import numpy
>
> def test(*args):
>     print args
>     return numpy.random.random()
>
> anneal(test, numpy.ones(3)*0.5, lower=numpy.zeros(3), upper=numpy.ones(3))
> > Hi Paul, As was recently mentioned on this list - the bounds refer to the "end result" being within the given range. But the function itself should "somehow" be able to return values even outside -- you could try to fake some values that "point back into the right region". HTH, Sebastian Haase From scott.sinclair.za at gmail.com Thu Jun 11 04:09:29 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 11 Jun 2009 10:09:29 +0200 Subject: [SciPy-user] scipy.optimize.anneal - problem with lower and upper In-Reply-To: <87fxe76wtd.fsf@rudin.co.uk> References: <87fxe76wtd.fsf@rudin.co.uk> Message-ID: <6a17e9ee0906110109k673c9554na97d4f30494fa4f9@mail.gmail.com> > 2009/6/11 Paul Rudin : > > I'm experimenting with scipy.optimize.anneal, but I'm confused about the > 'lower' and 'upper' arguments. I was expecting that this would limit the > range of values passed to the function being optimized. However this > appears not to be the case If you're trying to put bounds on the parameters, you could try the following in your objective function: http://mail.scipy.org/pipermail/scipy-dev/2009-March/011465.html Cheers, Scott From william.ratcliff at gmail.com Thu Jun 11 05:35:56 2009 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 11 Jun 2009 05:35:56 -0400 Subject: [SciPy-user] scipy.optimize.anneal - problem with lower and upper In-Reply-To: <6a17e9ee0906110109k673c9554na97d4f30494fa4f9@mail.gmail.com> References: <87fxe76wtd.fsf@rudin.co.uk> <6a17e9ee0906110109k673c9554na97d4f30494fa4f9@mail.gmail.com> Message-ID: <827183970906110235w7d072fadseff06d00b277488e@mail.gmail.com> Or if you want, I can post my ansatz again, which will respect bounds on the parameters, which I really think is what most people expect. Basically, if a Monte Carlo step would go outside of the box defined by the upper and lower bounds on the parameters, I stay at the current location and try again the next turn to see if I move. Cheers, William On Thu, Jun 11, 2009 at 4:09 AM, Scott Sinclair wrote: > > 2009/6/11 Paul Rudin : > > > > I'm experimenting with scipy.optimize.anneal, but I'm confused about the > > 'lower' and 'upper' arguments. I was expecting that this would limit the > > range of values passed to the function being optimized. However this > > appears not to be the case > > If you're trying to put bounds on the parameters, you could try the > following in your objective function: > > http://mail.scipy.org/pipermail/scipy-dev/2009-March/011465.html > > Cheers, > Scott > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
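[A sketch of the penalty idea behind the link Scott posted -- an illustration, not the code from that post: out-of-bounds trial points are given a huge cost so the annealer is driven back inside the box.

    import numpy as np
    from scipy.optimize import anneal

    lower, upper = np.zeros(3), np.ones(3)

    def objective(x):
        return np.sum((x - 0.3)**2)   # toy cost function

    def penalized(x):
        # large finite cost rejects out-of-bounds trials
        if np.any(x < lower) or np.any(x > upper):
            return 1e20
        return objective(x)

    anneal(penalized, np.ones(3)*0.5, lower=lower, upper=upper)

William's ansatz differs in that it rejects the out-of-bounds move inside the annealer itself rather than penalizing it in the objective.]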
From paul at rudin.co.uk Thu Jun 11 05:58:47 2009 From: paul at rudin.co.uk (Paul Rudin) Date: Thu, 11 Jun 2009 10:58:47 +0100 Subject: [SciPy-user] scipy.optimize.anneal - problem with lower and upper References: <87fxe76wtd.fsf@rudin.co.uk> <6a17e9ee0906110109k673c9554na97d4f30494fa4f9@mail.gmail.com> <827183970906110235w7d072fadseff06d00b277488e@mail.gmail.com> Message-ID: <87ski75bxk.fsf@rudin.co.uk> william ratcliff writes: > Or if you want, I can post my ansatz again, which will respect bounds on the > parameters, which I really think is what most people expect. Basically, if a > Monte Carlo step would go outside of the box defined by the upper and lower > bounds on the parameters, I stay at the current location and try again the > next turn to see if I move. > Sure, please do. At the moment I'm just experimenting, so it would be interesting to compare different things. Thanks for all the replies so far. From romain.jacquet.dev at free.fr Thu Jun 11 08:40:41 2009 From: romain.jacquet.dev at free.fr (romain.jacquet.dev at free.fr) Date: Thu, 11 Jun 2009 14:40:41 +0200 Subject: [SciPy-user] undefined symbol: slamch_ Message-ID: <1244724041.4a30fb49bfa89@webmail.free.fr> Hello, I'm working on Ubuntu. I'm trying to install scipy on a python installation compiled from source: - python-2.5.4 - lapack-3.2.1 - atlas3.8.3 - numpy-1.2.0 - scipy-0.7.1rc2 Everything is compiled and installed correctly. But it doesn't work: python -c "from scipy.interpolate import interpolate" Traceback (most recent call last): File "", line 1, in File "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 13, in from rbf import Rbf File "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/interpolate/rbf.py", line 47, in from scipy import linalg File "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: /tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/flapack.so: undefined symbol: slamch_ The slamch_ symbol is located in the lapack library: strings /tmp/test2/lapack-3.2.1/lapack_LINUX.a |grep slamch_ | wc -l 269 But the ATLAS dynamic library doesn't have slamch_: strings /tmp/test2/ATLAS/install_dir/lib/liblapack.so |grep slamch_ | wc -l 0 The LD_LIBRARY_PATH is fine: ldd /tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/flapack.so linux-gate.so.1 => (0xb7f9f000) liblapack.so => /tmp/test2/ATLAS/install_dir/lib/liblapack.so (0xb7f1c000) libptf77blas.so => /tmp/test2/ATLAS/install_dir/lib/libptf77blas.so (0xb7f00000) libptcblas.so => /tmp/test2/ATLAS/install_dir/lib/libptcblas.so (0xb7ee1000) libatlas.so => /tmp/test2/ATLAS/install_dir/lib/libatlas.so (0xb7aa8000) libpython2.5.so.1.0 => /tmp/test2/Python-2.5.4/install_dir/lib/libpython2.5.so.1.0 (0xb796f000) libgfortran.so.2 => /usr/lib/libgfortran.so.2 (0xb78be000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7898000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb788d000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb773e000) libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7726000) libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7722000) libutil.so.1 =>
/lib/tls/i686/cmov/libutil.so.1 (0xb771d000) /lib/ld-linux.so.2 (0xb7fa0000) So what is the problem? Where the slamch_ must be find? Thanks in advance. From david at ar.media.kyoto-u.ac.jp Thu Jun 11 08:28:58 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 11 Jun 2009 21:28:58 +0900 Subject: [SciPy-user] undefined symbol: slamch_ In-Reply-To: <1244724041.4a30fb49bfa89@webmail.free.fr> References: <1244724041.4a30fb49bfa89@webmail.free.fr> Message-ID: <4A30F88A.4050107@ar.media.kyoto-u.ac.jp> romain.jacquet.dev at free.fr wrote: > Hello, > > I'm working on Ubuntu. > I'm trying to install scipy on a python installation compiled from source: > - python-2.5.4 > - lapack-3.2.1 > - atlas3.8.3 > - numpy-1.2.0 > - scipy-0.7.1rc2 > > Everything is compiled and installed correctly. But it doesn't work: > > python -c "from scipy.interpolate import interpolate" > Traceback (most recent call last): > File "", line 1, in > File > "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/interpolate/__init__.py", > line 13, in > from rbf import Rbf > File > "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/interpolate/rbf.py", > line 47, in > from scipy import linalg > File > "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/__init__.py", > line 8, in > from basic import * > File > "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/basic.py", > line 17, in > from lapack import get_lapack_funcs > File > "/tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/lapack.py", > line 17, in > from scipy.linalg import flapack > ImportError: > /tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/flapack.so: > undefined symbol: slamch_ > > Ths slamch symbols is located in the lapack library: > strings /tmp/test2/lapack-3.2.1/lapack_LINUX.a |grep slamch_ | wc -l > 269 > > But the ATLAS dynamic library doesn't have slamch_ > strings /tmp/test2/ATLAS/install_dir/lib/liblapack.so |grep slamch_ | wc -l > 0 > > The LD_LIBRARY_PATH is fine: > > ldd > /tmp/test2/Python-2.5.4/install_dir/lib/python2.5/site-packages/scipy/linalg/flapack.so > linux-gate.so.1 => (0xb7f9f000) > liblapack.so => /tmp/test2/ATLAS/install_dir/lib/liblapack.so > (0xb7f1c000) > libptf77blas.so => /tmp/test2/ATLAS/install_dir/lib/libptf77blas.so > (0xb7f00000) > libptcblas.so => /tmp/test2/ATLAS/install_dir/lib/libptcblas.so > (0xb7ee1000) > libatlas.so => /tmp/test2/ATLAS/install_dir/lib/libatlas.so (0xb7aa8000) > libpython2.5.so.1.0 => > /tmp/test2/Python-2.5.4/install_dir/lib/libpython2.5.so.1.0 (0xb796f000) > libgfortran.so.2 => /usr/lib/libgfortran.so.2 (0xb78be000) > libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7898000) > libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb788d000) > libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb773e000) > libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7726000) > libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7722000) > libutil.so.1 => /lib/tls/i686/cmov/libutil.so.1 (0xb771d000) > /lib/ld-linux.so.2 (0xb7fa0000) > > So what is the problem? Where the slamch_ must be find? > It is (optionally) built in lapack - most likely, you did not build lapack correctly. I would advise against using LAPACK 3.2 (3.1.1 is fine), I noticed several problems with it in the past. cheers, David From loris.bennett at fu-berlin.de Thu Jun 11 09:24:07 2009 From: loris.bennett at fu-berlin.de (Dr. 
Loris Bennett) Date: Thu, 11 Jun 2009 15:24:07 +0200 Subject: [SciPy-user] Compile Error: error: expected `)' before 'PRIdPTR' References: <87octcozqf.fsf@slate.zedat.fu-berlin.de> <8763f4qlax.fsf@slate.zedat.fu-berlin.de> <5b8d13220906100738i7a06be0cg3ae8632322d6465f@mail.gmail.com> <4A2FC1A7.30609@ar.media.kyoto-u.ac.jp> Message-ID: <87fxe69a4o.fsf@slate.zedat.fu-berlin.de> David Cournapeau writes: > David Cournapeau wrote: >> On Wed, Jun 10, 2009 at 10:18 PM, Dr. Loris >> Bennett wrote: >> >>> Hi, >>> >>> Anyone had any thoughts on this? >>> >> >> I guess that inttypes.h (where PRIdPTR is defined in C99) is not >> correct for C++ on AIX. A hack to fix this for you would be to replace >> PRIdPTR by 'ld' in the *numpy* ndarrayobject.h header. >> > > Sorry, should be "ld", not 'ld' > > David OK, thanks for the help. That did the trick. -- Dr. Loris Bennett Computer Centre Freie Universität Berlin Berlin, Germany From zane at ideotrope.org Thu Jun 11 16:55:01 2009 From: zane at ideotrope.org (Zane Selvans) Date: Thu, 11 Jun 2009 13:55:01 -0700 Subject: [SciPy-user] Efficiently searching the surface of a sphere Message-ID: <1e723bf50906111355q2c2f66dbhe16bad0a13c924c@mail.gmail.com> Does anyone have a good approach for solving this problem in a computationally efficient way?

* Given a fixed set S of points (~10^6 of them) on a sphere, defined as pairs of latitude/longitude values and
* Given another set of points L on the sphere.
* Find the set of points X in S having the smallest geodesic distances (i.e. distance along the surface of the sphere) to each of the points in L.

The best idea I've been able to come up with so far is to sort S by latitude and longitude, and then search that sorted list for points with lats and lons which are close to those of the points in L, and then calculate the geodesic distances to those few points, and take the nearest one. But it won't behave well near the poles, or near the longitude discontinuity... It's unclear to me at the moment whether I can use the kd-tree data structure for this: http://docs.scipy.org/doc/scipy/reference/spatial.html and somehow define the "distance" metric to be the spherical distance... Thanks for any suggestions! Zane -- Zane A. Selvans Amateur Earthling http://zaneselvans.org +1 303 815 6866 From zane at ideotrope.org Thu Jun 11 17:48:20 2009 From: zane at ideotrope.org (Zane Selvans) Date: Thu, 11 Jun 2009 21:48:20 +0000 (UTC) Subject: [SciPy-user] Efficiently searching the surface of a sphere References: <1e723bf50906111355q2c2f66dbhe16bad0a13c924c@mail.gmail.com> Message-ID: Zane Selvans <zane at ideotrope.org> writes: > It's unclear to me at the moment whether I can use the kd-tree data > structure for this: > > http://docs.scipy.org/doc/scipy/reference/spatial.html > > and somehow define the "distance" metric to be the spherical distance... Hmm. Maybe on second thought, the right thing to do is just convert all the points in question to 3D cartesian coordinates at the beginning, and use the kd-tree as it is. The distances involved will change, but the nearest-neighbors should still be the same points.
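[A sketch of that chord-distance idea, assuming scipy.spatial's cKDTree and radian lat/lon arrays -- the names here are illustrative: chord length is a monotonic function of the great-circle angle, so nearest neighbours in 3D are also nearest neighbours on the sphere, and the geodesic distance can be recovered afterwards.

    import numpy as np
    from scipy.spatial import cKDTree

    def latlon_to_xyz(lat, lon):
        # lat/lon in radians -> points on the unit sphere
        clat = np.cos(lat)
        return np.column_stack((clat * np.cos(lon), clat * np.sin(lon), np.sin(lat)))

    tree = cKDTree(latlon_to_xyz(lat_S, lon_S))     # the ~10^6 fixed points S
    chord, idx = tree.query(latlon_to_xyz(lat_L, lon_L))
    geodesic = 2 * np.arcsin(chord / 2)             # radians on a unit sphere
]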
From robert.kern at gmail.com Thu Jun 11 17:55:22 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Jun 2009 16:55:22 -0500 Subject: [SciPy-user] Efficiently searching the surface of a sphere In-Reply-To: References: <1e723bf50906111355q2c2f66dbhe16bad0a13c924c@mail.gmail.com> Message-ID: <3d375d730906111455k609b28d9jb806632ecd2a4cd0@mail.gmail.com> On Thu, Jun 11, 2009 at 16:48, Zane Selvans wrote: > Zane Selvans <zane at ideotrope.org> writes: > >> It's unclear to me at the moment whether I can use the kd-tree data >> structure for this: >> >> http://docs.scipy.org/doc/scipy/reference/spatial.html >> >> and somehow define the "distance" metric to be the spherical distance... > > Hmm. Maybe on second thought, the right thing to do is just convert all the > points in question to 3D cartesian coordinates at the beginning, and use the > kd-tree as it is. The distances involved will change, but the nearest-neighbors > should still be the same points. That's probably the most straightforward thing to do at this time. You may also consider using a spherical Voronoi diagram. Robert Renka's FORTRAN STRIPACK code would do the trick: http://www.netlib.org/toms/772 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From loris.bennett at fu-berlin.de Fri Jun 12 05:58:16 2009 From: loris.bennett at fu-berlin.de (Dr. Loris Bennett) Date: Fri, 12 Jun 2009 11:58:16 +0200 Subject: [SciPy-user] Numpy on AIX 5.3: numpy.test() failure - no such option 'doctest-extension' Message-ID: <87y6rxlqo7.fsf@slate.zedat.fu-berlin.de> Hi, With a little help from the list I managed to install numpy. However the tests fail with nose.config.ConfigError: Error reading config file 'setup.cfg': no such option 'doctest-extension' I installed nose from the source package without using setuptools. Any help would be much appreciated. Full error below. Cheers Loris Python 2.6.2 (r262:71600, May 20 2009, 15:23:26) [GCC 4.3.3] on aix5 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test() Running unit tests for numpy NumPy version 1.3.0 NumPy is installed in /opt/sw/python/Python-2.6.2/lib/python2.6/site-packages/numpy Python version 2.6.2 (r262:71600, May 20 2009, 15:23:26) [GCC 4.3.3] nose version 0.11.1 Traceback (most recent call last): File "", line 1, in File "/opt/sw/python/Python-2.6.2/lib/python2.6/site-packages/numpy/testing/nosetester.py", line 251, in test t = NumpyTestProgram(argv=argv, exit=False, plugins=plugins) File "nose/core.py", line 113, in __init__ argv=argv, testRunner=testRunner, testLoader=testLoader) File "/opt/sw/python/Python-2.6.2/lib/python2.6/unittest.py", line 816, in __init__ self.parseArgs(argv) File "nose/core.py", line 130, in parseArgs self.config.configure(argv, doc=self.usage()) File "nose/config.py", line 249, in configure options, args = self._parseArgs(argv, cfg_files) File "nose/config.py", line 237, in _parseArgs return parser.parseArgsAndConfigFiles(argv[1:], cfg_files) File "nose/config.py", line 132, in parseArgsAndConfigFiles self._applyConfigurationToValues(self._parser, config, values) File "nose/config.py", line 118, in _applyConfigurationToValues name=name, filename=filename) File "nose/config.py", line 234, in warn_sometimes raise ConfigError(msg) nose.config.ConfigError: Error reading config file 'setup.cfg': no such option 'doctest-extension' -- Dr.
Loris Bennett Computer Centre Freie Universität Berlin Berlin, Germany From david at ar.media.kyoto-u.ac.jp Fri Jun 12 07:54:10 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 12 Jun 2009 20:54:10 +0900 Subject: [SciPy-user] scipy 0.7.1 rc3 Message-ID: <4A3241E2.1010000@ar.media.kyoto-u.ac.jp> Hi, I have uploaded the binaries and source tarballs for 0.7.1rc3. The rc3 fixes some issues in scipy.special, which caused wrong behavior/crashes on some platforms. Hopefully, this will be the 0.7.1 release, cheers, David

=========================
SciPy 0.7.1 Release Notes
=========================

.. contents::

SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0.

scipy.io
========

Bugs fixed:

- Several fixes in Matlab file IO

scipy.odr
=========

Bugs fixed:

- Work around a failure with Python 2.6

scipy.signal
============

A memory leak in lfilter has been fixed, as well as support for array objects.

Bugs fixed:

- #880, #925: lfilter fixes
- #871: bicgstab fails on Win32

scipy.sparse
============

Bugs fixed:

- #883: scipy.io.mmread with scipy.sparse.lil_matrix broken
- lil_matrix and csc_matrix now reject unexpected sequences, cf. http://thread.gmane.org/gmane.comp.python.scientific.user/19996

scipy.special
=============

Several bugs of varying severity were fixed in the special functions:

- #503, #640: iv: problems at large arguments fixed by new implementation
- #623: jv: fix errors at large arguments
- #679: struve: fix wrong output for v < 0
- #803: pbdv produces invalid output
- #804: lqmn: fix crashes on some input
- #823: betainc: fix documentation
- #834: exp1 strange behavior near negative integer values
- #852: jn_zeros: more accurate results for large s, also in jnp/yn/ynp_zeros
- #853: jv, yv, iv: invalid results for non-integer v < 0, complex x
- #854: jv, yv, iv, kv: return nan more consistently when out-of-domain
- #927: ellipj: fix segfault on Windows
- #946: ellpj: fix segfault on Mac OS X/python 2.6 combination.
- ive, jve, yve, kv, kve: with real-valued input, return nan for out-of-domain instead of returning only the real part of the result.

Also, when ``scipy.special.errprint(1)`` has been enabled, warning messages are now issued as Python warnings instead of printing them to stderr.

scipy.stats
===========

- linregress, mannwhitneyu, describe: errors fixed
- kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist, truncexpon, planck: improvements to numerical accuracy in distributions

Windows binaries for python 2.6
===============================

python 2.6 binaries for windows are now included. The binary for python 2.5 requires numpy 1.2.0 or above, and the one for python 2.6 requires numpy 1.3.0 or above.

Universal build for scipy
=========================

Mac OS X binary installer is now a proper universal build, and does not depend on gfortran anymore (libgfortran is statically linked). The python 2.5 version of scipy requires numpy 1.2.0 or above, the python 2.6 version requires numpy 1.3.0 or above.
Checksums
=========

9dd5af43cc26ae6d38a13b373ba430fa release/installers/scipy-0.7.1rc3-py2.6-python.org.dmg
290c2e056fda1f86dfa9f3a76d207a8c release/installers/scipy-0.7.1rc3-win32-superpack-python2.6.exe
d582dff7535d2b64a097fb4bfbc75d09 release/installers/scipy-0.7.1rc3-win32-superpack-python2.5.exe
a19400ccfd65d1a0a5030848af6f78ea release/installers/scipy-0.7.1rc3.tar.gz
d4ebf322c62b09c4ebaad7b67f92d032 release/installers/scipy-0.7.1rc3.zip
a0ea0366b178a7827f10a480f97c3c47 release/installers/scipy-0.7.1rc3-py2.5-python.org.dmg

From jgomezdans at gmail.com Fri Jun 12 08:43:46 2009 From: jgomezdans at gmail.com (Jose Gómez-Dans) Date: Fri, 12 Jun 2009 13:43:46 +0100 Subject: [SciPy-user] f2py and "-m32" flag on X86_64 Message-ID: <200906121343.46430.jgomezdans@gmail.com> Hi, I've got some code I need to wrap using f2py. On my laptop (i686 linux), I need the following line for f2py: f2py --noopt --noarch --debug -m I want to run it on our cluster, which is X86_64 linux. Apart from the flags above, I also need my code to be compiled with the gcc option -m32 (make ints, longs and pointers 4 bytes). However, the -m32 is incompatible with the architecture python has been compiled with. Any hints? From cadamantine at gmail.com Fri Jun 12 10:35:13 2009 From: cadamantine at gmail.com (Caleb Adamantine) Date: Fri, 12 Jun 2009 12:05:13 -0230 Subject: [SciPy-user] Return type inconsistency in polyfit and lstsq Message-ID: SciPy users and developers: It seems to me that using SciPy's polyfit (more precisely, the underlying lstsq function) returns inconsistent results depending on what SciPy modules are loaded. Further, because the lstsq function used in NumPy is dynamically selected to use SciPy's, NumPy's lstsq and thus polyfit will also return inconsistent results if used with SciPy. The inconsistency is that without scipy.stats imported, numpy.polyfit (and scipy.polyfit) returns a vector (1D array) of coefficients, as per the documentation, but when scipy.stats is imported, numpy.polyfit (and scipy.polyfit) may return a 2D array. This is very dangerous. For example, a user using numpy.polyfit or scipy.polyfit will suddenly get an unexpected data type returned by simply importing scipy.stats, even if that import is done in another module (for me, scipy.stats was imported deep in a library, which I didn't even know about). The following examples should demonstrate the problem. Note the changing imports and the printed results:

1. numpy polyfit only (OK)

#from scipy import stats
#from scipy import polyfit
from numpy import polyfit

xs = [67.60750,85.00000, 99.1]
ys = [97.99417,113.00000, 102.34]
print polyfit(xs, ys, 3)

>>> [ -2.72577441e-04 1.72069354e-02 3.01853416e+00 -1.00498891e+02]

2. numpy polyfit with scipy.stats (Not OK)

from scipy import stats
#from scipy import polyfit
from numpy import polyfit

xs = [67.60750,85.00000, 99.1]
ys = [97.99417,113.00000, 102.34]
print polyfit(xs, ys, 3)

>>> [[ -2.72577441e-04]
>>> [ 1.72069354e-02]
>>> [ 3.01853416e+00]
>>> [ -1.00498891e+02]]

Note that using SciPy's polyfit produces the same results. I have searched SciPy's and NumPy's issue trackers and mailing lists. The only reference I can find to this issue is an unanswered post at http://article.gmane.org/gmane.comp.python.scientific.user/7695/match=lstsq and duplicated below. Any comments? Is this a bug? CAdamantine ---------- Forwarded message ---------- From: Hugo van der Merwe gmail.com> Date: May 11, 2006 12:28 PM Subject: linalg.lstsq: inconsistent return "type"?
To: scipy-user scipy.net Consider the attached example, which solves for three parameters, first given four samples (overspecified), then three (precisely specified), then two (underspecified)... In the first two cases lstsq returns a 1D array as expected. In the last case, it returns a 2D array (with size 3x1). Is this correct behaviour? I would have expected 1D return values consistently... Also, replacing "from scipy import linalg" with "from numpy import linalg" fixes the behaviour, thus numpy does the right thing, scipy not. Comments? Thanks, Hugo van der Merwe -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Jun 12 10:41:06 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 12 Jun 2009 14:41:06 +0000 (UTC) Subject: [SciPy-user] Return type inconsistency in polyfit and lstsq References: Message-ID: Fri, 12 Jun 2009 12:05:13 -0230, Caleb Adamantine kirjoitti: > The inconsistency is that without scipy.stats imported, numpy.polyfit > (and scipy.polyfit) returns a vector (1D array) of coefficients, as per > the documentation, but when scipy.stats is imported, numpy.polyfit (and > scipy.polyfit) may return a 2D array. This was probably fixed by r6140, and so the issue should not be present any more in Numpy 1.3.0. -- Pauli Virtanen From cadamantine at gmail.com Fri Jun 12 11:36:08 2009 From: cadamantine at gmail.com (Caleb Adamantine) Date: Fri, 12 Jun 2009 13:06:08 -0230 Subject: [SciPy-user] Return type inconsistency in polyfit and lstsq In-Reply-To: References: Message-ID: On Fri, Jun 12, 2009 at 12:11 PM, Pauli Virtanen wrote: > > Fri, 12 Jun 2009 12:05:13 -0230, Caleb Adamantine kirjoitti: > > The inconsistency is that without scipy.stats imported, numpy.polyfit > > (and scipy.polyfit) returns a vector (1D array) of coefficients, as per > > the documentation, but when scipy.stats is imported, numpy.polyfit (and > > scipy.polyfit) may return a 2D array. > > This was probably fixed by r6140, and so the issue should not be present > any more in Numpy 1.3.0. I upgraded SciPy and NumPy to the latest versions: the issue has been fixed as you suggest. Thanks. CAdamantine From josh.k.lawrence at gmail.com Fri Jun 12 13:14:16 2009 From: josh.k.lawrence at gmail.com (Josh Lawrence) Date: Fri, 12 Jun 2009 13:14:16 -0400 Subject: [SciPy-user] += Operator and Slicing of Arrays Message-ID: Greetings, I am in need of some help. I am trying to use the += operator to sum over edge elements on a triangular mesh. Each edge has an unknown associated with it. After solving for the unknowns, I am trying to compute a quantity at the centroid of all triangles in the mesh. On a flat surface, each edge will be connected to either one or two triangles. The orientation of the edge and the normals of the triangles determines whether each triangle attached to the edge is a "plus" or "minus" triangle for that edge. It is possible for one triangle to be referenced three times as a "plus" triangle, three times as a "minus" triangle or any combination of "plus" and "minus" (1 and 2 or 2 and 1, respectively). I have a variable tri_idx which relates the edges to the "plus" and "minus" triangles. I then compute the quantity at the centroid for the "plus" triangle and "minus" triangle attached to each edge. An example follows: tri_idx_plus = [0 3 6 3 2 4 1 4] tri_idx_minus = [1 2 5 3 6 0 1 4] quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * basis_p[:len(tri_idx_plus),...] quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] 
* basis_m[:len(tri_idx_minus),...] where basis_p and basis_m are basis functions that expand the unknown of each edge into a surface function over the "plus" or "minus" triangle. I am pretty sure the problem I am encountering is that tri_idx_plus mentions indices 3 and 4 twice and tri_idx_minus contains index 1 twice. Is there a way of doing this operation without reverting to looping over each edge (read: not doing this the slow way). Thanks in advance! Josh Lawrence Ph.D. Student Clemson University From romain.jacquet.dev at free.fr Fri Jun 12 13:28:40 2009 From: romain.jacquet.dev at free.fr (Romain JACQUET) Date: Fri, 12 Jun 2009 19:28:40 +0200 Subject: [SciPy-user] undefined symbol: slamch_ References: 1244724041.4a30fb49bfa89@webmail.free.fr Message-ID: <4A329048.1000606@free.fr> Hello David, You're right. Lapack was not compiled correctly. Instead of giving: --with-netlib-lapack=/path/to/lapack/lapack_.a. I gave only --with-netlib-lapack=/path/to/lapack/. So Lapack was not complete :(. Thanks for your help. From emmanuelle.gouillart at normalesup.org Fri Jun 12 14:28:13 2009 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Fri, 12 Jun 2009 20:28:13 +0200 Subject: [SciPy-user] += Operator and Slicing of Arrays In-Reply-To: References: Message-ID: <20090612182813.GA29204@phare.normalesup.org> Hi Josh, what kind of problem do you have exactly ? Do you have trouble implementing the computation you describe, or do you get unexpected results? If I understood well what you want to do, you cannot use directly use += with fancy indexing (quantity[tri_idx_plus,...]) because the repeated elements will be incremented just once (see http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2 for more details). However, I think you can solve your problem by using a weighted histogram. Using your notations weights = edge_unknown[:len(tri_idx_plus),...] * basis_p[:len(tri_idx_plus),...] histogram_values = np.histogram(tri_idx_plus, np.arange(tri_idx_plus.max() +2), weights=weights) unique_plus = np.unique1d(tri_idx_plus) quantity[unique_plus,...] = histogram_values[0][unique_plus] The weighted histogram allows you to make the sums corresponding to each triangle. Here is an example >>> quantity = np.zeros(10) >>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4]) >>> weights = 2*tri_idx_plus + 1 >>> histogram_values = np.histogram(tri_idx_plus, np.arange(tri_idx_plus.max() +2), weights=weights) >>> unique_plus = np.unique1d(tri_idx_plus) >>> quantity[unique_plus,...] = histogram_values[0][unique_plus] Actually, this may only work with positive values of weights (not checked)... Please tell us if this meets your needs or not. Cheers, Emmanuelle On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote: > Greetings, > I am in need of some help. I am trying to use the += operator to sum > over edge elements on a triangular mesh. Each edge has an unknown > associated with it. After solving for the unknowns, I am trying to > compute a quantity at the centroid of all triangles in the mesh. On a > flat surface, each edge will be connected to either one or two > triangles. The orientation of the edge and the normals of the > triangles determines whether each triangle attached to the edge is a > "plus" or "minus" triangle for that edge. It is possible for one > triangle to be referenced three times as a "plus" triangle, three > times as a "minus" triangle or any combination of "plus" and > "minus" (1 and 2 or 2 and 1, respectively). 
> I have a variable tri_idx which relates the edges to the "plus" and > "minus" triangles. I then compute the quantity at the centroid for the > "plus" triangle and "minus" triangle attached to each edge. An example > follows: > tri_idx_plus = [0 3 6 3 2 4 1 4] > tri_idx_minus = [1 2 5 3 6 0 1 4] > quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * > basis_p[:len(tri_idx_plus),...] > quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] * > basis_m[:len(tri_idx_minus),...] > where basis_p and basis_m are basis functions that expand the unknown > of each edge into a surface function over the "plus" or "minus" > triangle. > I am pretty sure the problem I am encountering is that tri_idx_plus > mentions indices 3 and 4 twice and tri_idx_minus contains index 1 > twice. Is there a way of doing this operation without reverting to > looping over each edge (read: not doing this the slow way). > Thanks in advance! > Josh Lawrence > Ph.D. Student > Clemson University > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Fri Jun 12 15:09:33 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 12 Jun 2009 15:09:33 -0400 Subject: [SciPy-user] += Operator and Slicing of Arrays In-Reply-To: <20090612182813.GA29204@phare.normalesup.org> References: <20090612182813.GA29204@phare.normalesup.org> Message-ID: 2009/6/12 Emmanuelle Gouillart : > Hi Josh, > > what kind of problem do you have exactly ? Do you have trouble > implementing the computation you describe, or do you get unexpected > results? > > If I understood well what you want to do, you cannot use directly use += > with fancy indexing (quantity[tri_idx_plus,...]) because the repeated > elements will be incremented just once (see > http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2 > for more details). > > However, I think you can solve your problem by using a weighted > histogram. Using your notations > > weights = edge_unknown[:len(tri_idx_plus),...] * > basis_p[:len(tri_idx_plus),...] > histogram_values = np.histogram(tri_idx_plus, > np.arange(tri_idx_plus.max() +2), weights=weights) > unique_plus = np.unique1d(tri_idx_plus) > quantity[unique_plus,...] = histogram_values[0][unique_plus] > > The weighted histogram allows you to make the sums corresponding to each > triangle. > > Here is an example > >>>> quantity = np.zeros(10) >>>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4]) >>>> weights = 2*tri_idx_plus + 1 >>>> histogram_values = np.histogram(tri_idx_plus, > np.arange(tri_idx_plus.max() +2), weights=weights) >>>> unique_plus = np.unique1d(tri_idx_plus) >>>> quantity[unique_plus,...] = histogram_values[0][unique_plus] > > Actually, this may only work with positive values of weights (not > checked)... This histogram function is intended to support negative weights for just this reason, so if this does not work, please let us know. Anne > Please tell us if this meets your needs or not. > > Cheers, > > Emmanuelle > > On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote: >> Greetings, > >> I am in need of some help. I am trying to use the += operator to sum >> over edge elements on a triangular mesh. Each edge has an unknown >> associated with it. After solving for the unknowns, I am trying to >> compute a quantity at the centroid of all triangles in the mesh. 
On a >> flat surface, each edge will be connected to either one or two >> triangles. The orientation of the edge and the normals of the >> triangles determines whether each triangle attached to the edge is a >> "plus" or "minus" triangle for that edge. It is possible for one >> triangle to be referenced three times as a "plus" triangle, three >> times as a "minus" triangle or any combination of "plus" and >> "minus" (1 and 2 or 2 and 1, respectively). > >> I have a variable tri_idx which relates the edges to the "plus" and >> "minus" triangles. I then compute the quantity at the centroid for the >> "plus" triangle and "minus" triangle attached to each edge. An example >> follows: > >> tri_idx_plus = [0 3 6 3 2 4 1 4] >> tri_idx_minus = [1 2 5 3 6 0 1 4] > >> quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * >> basis_p[:len(tri_idx_plus),...] >> quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] * >> basis_m[:len(tri_idx_minus),...] > >> where basis_p and basis_m are basis functions that expand the unknown >> of each edge into a surface function over the "plus" or "minus" >> triangle. > >> I am pretty sure the problem I am encountering is that tri_idx_plus >> mentions indices 3 and 4 twice and tri_idx_minus contains index 1 >> twice. Is there a way of doing this operation without reverting to >> looping over each edge (read: not doing this the slow way). > >> Thanks in advance! > >> Josh Lawrence >> Ph.D. Student >> Clemson University > >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josh.k.lawrence at gmail.com Fri Jun 12 15:09:47 2009 From: josh.k.lawrence at gmail.com (Josh Lawrence) Date: Fri, 12 Jun 2009 15:09:47 -0400 Subject: [SciPy-user] += Operator and Slicing of Arrays In-Reply-To: <20090612182813.GA29204@phare.normalesup.org> References: <20090612182813.GA29204@phare.normalesup.org> Message-ID: <6a643b830906121209p6160ce6fs192105c3573f0b66@mail.gmail.com> Emmanuelle, Thank you for your speedy reply. I apologize if the list gets this and another message replying to the same thing. Gmail imap seems to be down so I am having to use webmail. Anyways, I get an error telling me that for numpy.histogram, the "weights" variable needs to be the same shape as "a" which I believe corresponds to tri_idx_plus in the code example you gave me. Also, the edge_unkown (and hence quantity) has dtype of complex128 (double valued complex). Does this preclude using numpy.histogram? Thanks! Josh Lawrence On Fri, Jun 12, 2009 at 2:28 PM, Emmanuelle Gouillart wrote: > Hi Josh, > > what kind of problem do you have exactly ? Do you have trouble > implementing the computation you describe, or do you get unexpected > results? > > If I understood well what you want to do, you cannot use directly use += > with fancy indexing (quantity[tri_idx_plus,...]) because the repeated > elements will be incremented just once (see > http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2 > for more details). > > However, I think you can solve your problem by using a weighted > histogram. Using your notations > > weights = edge_unknown[:len(tri_idx_plus),...] * > ? ? ? ?basis_p[:len(tri_idx_plus),...] > histogram_values = np.histogram(tri_idx_plus, > ? ? ? 
>        np.arange(tri_idx_plus.max() +2), weights=weights)
> unique_plus = np.unique1d(tri_idx_plus)
> quantity[unique_plus,...] = histogram_values[0][unique_plus]
>
> The weighted histogram allows you to make the sums corresponding to each
> triangle.
>
> Here is an example
>
>>>> quantity = np.zeros(10)
>>>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4])
>>>> weights = 2*tri_idx_plus + 1
>>>> histogram_values = np.histogram(tri_idx_plus,
>        np.arange(tri_idx_plus.max() +2), weights=weights)
>>>> unique_plus = np.unique1d(tri_idx_plus)
>>>> quantity[unique_plus,...] = histogram_values[0][unique_plus]
>
> Actually, this may only work with positive values of weights (not
> checked)...
>
> Please tell us if this meets your needs or not.
>
> Cheers,
>
> Emmanuelle
>
> On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote:
>> Greetings,
>
>> I am in need of some help. I am trying to use the += operator to sum
>> over edge elements on a triangular mesh. Each edge has an unknown
>> associated with it. After solving for the unknowns, I am trying to
>> compute a quantity at the centroid of all triangles in the mesh. On a
>> flat surface, each edge will be connected to either one or two
>> triangles. The orientation of the edge and the normals of the
>> triangles determines whether each triangle attached to the edge is a
>> "plus" or "minus" triangle for that edge. It is possible for one
>> triangle to be referenced three times as a "plus" triangle, three
>> times as a "minus" triangle or any combination of "plus" and
>> "minus" (1 and 2 or 2 and 1, respectively).
>
>> I have a variable tri_idx which relates the edges to the "plus" and
>> "minus" triangles. I then compute the quantity at the centroid for the
>> "plus" triangle and "minus" triangle attached to each edge. An example
>> follows:
>
>> tri_idx_plus = [0 3 6 3 2 4 1 4]
>> tri_idx_minus = [1 2 5 3 6 0 1 4]
>
>> quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] *
>> basis_p[:len(tri_idx_plus),...]
>> quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] *
>> basis_m[:len(tri_idx_minus),...]
>
>> where basis_p and basis_m are basis functions that expand the unknown
>> of each edge into a surface function over the "plus" or "minus"
>> triangle.
>
>> I am pretty sure the problem I am encountering is that tri_idx_plus
>> mentions indices 3 and 4 twice and tri_idx_minus contains index 1
>> twice. Is there a way of doing this operation without reverting to
>> looping over each edge (read: not doing this the slow way).
>
>> Thanks in advance!
>
>> Josh Lawrence
>> Ph.D. Student
>> Clemson University
>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Josh Lawrence
Ph.D. Student
Clemson University

From josh.k.lawrence at gmail.com Fri Jun 12 15:23:11 2009
From: josh.k.lawrence at gmail.com (Josh Lawrence)
Date: Fri, 12 Jun 2009 15:23:11 -0400
Subject: [SciPy-user] += Operator and Slicing of Arrays
In-Reply-To: <20090612182813.GA29204@phare.normalesup.org>
References: <20090612182813.GA29204@phare.normalesup.org>
Message-ID: <28873EDC-E30C-48C3-84F7-0A901FAEE398@gmail.com>

Emmanuelle,

It tells me that the weights argument needs to be the same shape as tri_idx_plus. Also, edge_unknown is complex valued. So I have +- real and imaginary components.
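For what it's worth, here is a minimal sketch (made-up values, not the actual mesh data) of the same duplicated-index accumulation done with np.bincount; its weights appear to be cast to float, so the real and imaginary parts go in separate passes:

>>> import numpy as np
>>> idx = np.array([0, 3, 6, 3, 2, 4, 1, 4])
>>> w = np.arange(8.) + 1j*np.arange(8.)  # stand-in for the complex edge values
>>> acc = np.bincount(idx, weights=w.real) + 1j*np.bincount(idx, weights=w.imag)
>>> acc[3] == w[1] + w[3]  # index 3 occurs twice; both contributions are summed
True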
I do not know much about numpy.histogram... Does the complex dtype preclude it from use? Cheers, Josh Lawrence Ph.D. Student Clemson University On Jun 12, 2009, at 2:28 PM, Emmanuelle Gouillart wrote: > Hi Josh, > > what kind of problem do you have exactly ? Do you have trouble > implementing the computation you describe, or do you get unexpected > results? > > If I understood well what you want to do, you cannot use directly > use += > with fancy indexing (quantity[tri_idx_plus,...]) because the repeated > elements will be incremented just once (see > http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2 > for more details). > > However, I think you can solve your problem by using a weighted > histogram. Using your notations > > weights = edge_unknown[:len(tri_idx_plus),...] * > basis_p[:len(tri_idx_plus),...] > histogram_values = np.histogram(tri_idx_plus, > np.arange(tri_idx_plus.max() +2), weights=weights) > unique_plus = np.unique1d(tri_idx_plus) > quantity[unique_plus,...] = histogram_values[0][unique_plus] > > The weighted histogram allows you to make the sums corresponding to > each > triangle. > > Here is an example > >>>> quantity = np.zeros(10) >>>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4]) >>>> weights = 2*tri_idx_plus + 1 >>>> histogram_values = np.histogram(tri_idx_plus, > np.arange(tri_idx_plus.max() +2), weights=weights) >>>> unique_plus = np.unique1d(tri_idx_plus) >>>> quantity[unique_plus,...] = histogram_values[0][unique_plus] > > Actually, this may only work with positive values of weights (not > checked)... > > Please tell us if this meets your needs or not. > > Cheers, > > Emmanuelle > > On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote: >> Greetings, > >> I am in need of some help. I am trying to use the += operator to sum >> over edge elements on a triangular mesh. Each edge has an unknown >> associated with it. After solving for the unknowns, I am trying to >> compute a quantity at the centroid of all triangles in the mesh. On a >> flat surface, each edge will be connected to either one or two >> triangles. The orientation of the edge and the normals of the >> triangles determines whether each triangle attached to the edge is a >> "plus" or "minus" triangle for that edge. It is possible for one >> triangle to be referenced three times as a "plus" triangle, three >> times as a "minus" triangle or any combination of "plus" and >> "minus" (1 and 2 or 2 and 1, respectively). > >> I have a variable tri_idx which relates the edges to the "plus" and >> "minus" triangles. I then compute the quantity at the centroid for >> the >> "plus" triangle and "minus" triangle attached to each edge. An >> example >> follows: > >> tri_idx_plus = [0 3 6 3 2 4 1 4] >> tri_idx_minus = [1 2 5 3 6 0 1 4] > >> quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * >> basis_p[:len(tri_idx_plus),...] >> quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] * >> basis_m[:len(tri_idx_minus),...] > >> where basis_p and basis_m are basis functions that expand the unknown >> of each edge into a surface function over the "plus" or "minus" >> triangle. > >> I am pretty sure the problem I am encountering is that tri_idx_plus >> mentions indices 3 and 4 twice and tri_idx_minus contains index 1 >> twice. Is there a way of doing this operation without reverting to >> looping over each edge (read: not doing this the slow way). > >> Thanks in advance! > >> Josh Lawrence >> Ph.D. 
Student
>> Clemson University
>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From emmanuelle.gouillart at normalesup.org Fri Jun 12 16:04:39 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Fri, 12 Jun 2009 22:04:39 +0200
Subject: [SciPy-user] += Operator and Slicing of Arrays
In-Reply-To: <28873EDC-E30C-48C3-84F7-0A901FAEE398@gmail.com>
References: <20090612182813.GA29204@phare.normalesup.org> <28873EDC-E30C-48C3-84F7-0A901FAEE398@gmail.com>
Message-ID: <20090612200439.GB32705@phare.normalesup.org>

> It tells me that the weights argument needs to be the same shape as
> tri_idx_plus. Also, edge_unknown is complex valued. So I have +- real
> and imaginary components. I do not know much about numpy.histogram...
> Does the complex dtype preclude it from use?

Yes, the weight array must have the same shape as the values array ("a" in the help of np.histogram). This is the case in the example I provided. Can you check that the shapes of the two arrays are the same in the code you execute? And if you can't find the origin of the error, can you post a minimal example of code that reproduces the error?

Also, as Anne mentioned (thanks for your answer, Anne!), the weights keyword argument can be used with negative numbers, but also with complex numbers (did you check it before posting?).

>>> a = np.arange(10)
>>> b = 1j*np.ones_like(a)
>>> np.histogram(a, a, weights=b)
(array([ 0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,
        0.+1.j,  0.+2.j]), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))

So you don't need to separate real and imaginary parts, you can use your complex array directly as weights.

Cheers,

Emmanuelle

> Cheers,
> Josh Lawrence
> Ph.D. Student
> Clemson University

> On Jun 12, 2009, at 2:28 PM, Emmanuelle Gouillart wrote:
> > Hi Josh,
> > what kind of problem do you have exactly ? Do you have trouble
> > implementing the computation you describe, or do you get unexpected
> > results?
> > If I understood well what you want to do, you cannot directly use +=
> > with fancy indexing (quantity[tri_idx_plus,...]) because the repeated
> > elements will be incremented just once (see
> > http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2
> > for more details).
> > However, I think you can solve your problem by using a weighted
> > histogram.
Using your notations > > weights = edge_unknown[:len(tri_idx_plus),...] * > > basis_p[:len(tri_idx_plus),...] > > histogram_values = np.histogram(tri_idx_plus, > > np.arange(tri_idx_plus.max() +2), weights=weights) > > unique_plus = np.unique1d(tri_idx_plus) > > quantity[unique_plus,...] = histogram_values[0][unique_plus] > > The weighted histogram allows you to make the sums corresponding to > > each > > triangle. > > Here is an example > >>>> quantity = np.zeros(10) > >>>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4]) > >>>> weights = 2*tri_idx_plus + 1 > >>>> histogram_values = np.histogram(tri_idx_plus, > > np.arange(tri_idx_plus.max() +2), weights=weights) > >>>> unique_plus = np.unique1d(tri_idx_plus) > >>>> quantity[unique_plus,...] = histogram_values[0][unique_plus] > > Actually, this may only work with positive values of weights (not > > checked)... > > Please tell us if this meets your needs or not. > > Cheers, > > Emmanuelle > > On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote: > >> Greetings, > >> I am in need of some help. I am trying to use the += operator to sum > >> over edge elements on a triangular mesh. Each edge has an unknown > >> associated with it. After solving for the unknowns, I am trying to > >> compute a quantity at the centroid of all triangles in the mesh. On a > >> flat surface, each edge will be connected to either one or two > >> triangles. The orientation of the edge and the normals of the > >> triangles determines whether each triangle attached to the edge is a > >> "plus" or "minus" triangle for that edge. It is possible for one > >> triangle to be referenced three times as a "plus" triangle, three > >> times as a "minus" triangle or any combination of "plus" and > >> "minus" (1 and 2 or 2 and 1, respectively). > >> I have a variable tri_idx which relates the edges to the "plus" and > >> "minus" triangles. I then compute the quantity at the centroid for > >> the > >> "plus" triangle and "minus" triangle attached to each edge. An > >> example > >> follows: > >> tri_idx_plus = [0 3 6 3 2 4 1 4] > >> tri_idx_minus = [1 2 5 3 6 0 1 4] > >> quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * > >> basis_p[:len(tri_idx_plus),...] > >> quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] * > >> basis_m[:len(tri_idx_minus),...] > >> where basis_p and basis_m are basis functions that expand the unknown > >> of each edge into a surface function over the "plus" or "minus" > >> triangle. > >> I am pretty sure the problem I am encountering is that tri_idx_plus > >> mentions indices 3 and 4 twice and tri_idx_minus contains index 1 > >> twice. Is there a way of doing this operation without reverting to > >> looping over each edge (read: not doing this the slow way). > >> Thanks in advance! > >> Josh Lawrence > >> Ph.D. 
Student
> >> Clemson University
>
> >> _______________________________________________
> >> SciPy-user mailing list
> >> SciPy-user at scipy.org
> >> http://mail.scipy.org/mailman/listinfo/scipy-user
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From Dharhas.Pothina at twdb.state.tx.us Fri Jun 12 16:15:08 2009
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Fri, 12 Jun 2009 15:15:08 -0500
Subject: [SciPy-user] Some help on pythonic design of time series data processing application
Message-ID: <4A3270FC.63BA.009B.0@twdb.state.tx.us>

Hi All,

This might be slightly off topic but I'd appreciate any tips or references that anyone may have, since I know a lot of you deal with scientific data. Over the last couple of months I have become very familiar with using python/numpy to write lots of small scripts for processing or plotting that I can then string together, but I'm having trouble finding descriptions of how to write larger applications and also how to appropriately use classes.

I'm trying to write an application to semi-automate the process of getting our field data from multiple instrument types into a common format. My idea for the workflow (based loosely on stuff we have already sorta implemented using shell scripts) is the following:

1) read all files in a specified directory
2) parse file names to work out site names and instrument used
3) according to the instrument used, load a plugin/call a script that understands how to read in the data from that instrument. The plugin would be aware of what parameters that type of instrument provides and in what units and be able to convert everything to SI units
4) append data to a site & parameter specific file that contains all data from that site
5) Later I'll be working on a QA/QC application built on top of this.

I have started developing a generic plugin class but I'm not sure of how to implement the plugins. In our old shell script based application we wrote an entirely new script for each new instrument and essentially just had a massive IF statement to choose which script to run. This wasn't very scalable and I'd like to be able to add new instruments without having to change the main routines.

Since I'm not very familiar with python or object-oriented programming I was wondering if anyone had any examples or packages they could recommend I look at for ideas. Or if anyone has done something similar please let me know.

- dharhas

From josh.k.lawrence at gmail.com Fri Jun 12 16:29:31 2009
From: josh.k.lawrence at gmail.com (Josh Lawrence)
Date: Fri, 12 Jun 2009 16:29:31 -0400
Subject: [SciPy-user] += Operator and Slicing of Arrays
In-Reply-To: <20090612200439.GB32705@phare.normalesup.org>
References: <20090612182813.GA29204@phare.normalesup.org> <28873EDC-E30C-48C3-84F7-0A901FAEE398@gmail.com> <20090612200439.GB32705@phare.normalesup.org>
Message-ID: <6a643b830906121329u427e9c12vdbe0726df76c8619@mail.gmail.com>

Emmanuelle,

In the example I gave to begin with, tri_idx_plus has shape (8,) (if tri_idx_plus were a numpy array), edge_unknown has shape (8,...), and basis_p has shape (8,...). In practice, the shape for edge_unknown would be (8,1,n) and basis_p would have shape (8,3,1). Thus, weight = edge_unknown*basis_p would have a shape (8,3,n) in the example.
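That (8,3,n) shape is just numpy broadcasting; a one-line sketch with a made-up n of 5:

>>> import numpy as np
>>> (np.zeros((8, 1, 5)) * np.zeros((8, 3, 1))).shape
(8, 3, 5)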
Below is a better example.

import numpy as np

tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4])
edge_unknown = np.random.rand(8,1,1) + 1.0j*np.random.rand(8,1,1)
basis_p = np.random.rand(8,3,1)
weights = edge_unknown * basis_p
tri_quantity = np.zeros((20,3,1)) + 0j
for i in range(tri_idx_plus.size):
    tri_quantity[tri_idx_plus[i],:,:] += weights[i,:,:]

I believe that does exactly what I want. Now, my desire is to get rid of the for loop. That is why I at first tried to do

tri_quantity[tri_idx_plus,:,:] += weights

but to no avail, as only the last reference in tri_idx_plus to 3 and 4 would be summed to tri_quantity.

I hope this is more clear.

Cheers,

Josh Lawrence

On Fri, Jun 12, 2009 at 4:04 PM, Emmanuelle Gouillart wrote:
>
>
>> It tells me that the weights argument needs to be the same shape as
>> tri_idx_plus. Also, edge_unknown is complex valued. So I have +- real
>> and imaginary components. I do not know much about numpy.histogram...
>> Does the complex dtype preclude it from use?
>
>        Yes, the weight array must have the same shape as the values
> array ("a" in the help of np.histogram). This is the case in the example
> I provided. Can you check that the shapes of the two arrays are the same
> in the code you execute? And if you can't find the origin of the error,
> can you post a minimal example of code that reproduces the error?
>
>        Also, as Anne mentioned (thanks for your answer, Anne!), the
> weights keyword argument can be used with negative numbers, but also with
> complex numbers (did you check it before posting?).
>>>> a = np.arange(10)
>>>> b = 1j*np.ones_like(a)
>>>> np.histogram(a, a, weights=b)
> (array([ 0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,  0.+1.j,
>         0.+1.j,  0.+2.j]), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
> So you don't need to separate real and imaginary parts, you can use
> directly your complex array as weights.
>
>        Cheers,
>
>        Emmanuelle
>
>> Cheers,
>
>> Josh Lawrence
>> Ph.D. Student
>> Clemson University
>
>> On Jun 12, 2009, at 2:28 PM, Emmanuelle Gouillart wrote:
>
>> > Hi Josh,
>
>> > what kind of problem do you have exactly ? Do you have trouble
>> > implementing the computation you describe, or do you get unexpected
>> > results?
>
>> > If I understood well what you want to do, you cannot directly use +=
>> > with fancy indexing (quantity[tri_idx_plus,...]) because the repeated
>> > elements will be incremented just once (see
>> > http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2
>> > for more details).
>
>> > However, I think you can solve your problem by using a weighted
>> > histogram. Using your notations
>
>> > weights = edge_unknown[:len(tri_idx_plus),...] *
>> >     basis_p[:len(tri_idx_plus),...]
>> > histogram_values = np.histogram(tri_idx_plus,
>> >     np.arange(tri_idx_plus.max() +2), weights=weights)
>> > unique_plus = np.unique1d(tri_idx_plus)
>> > quantity[unique_plus,...] = histogram_values[0][unique_plus]
>
>> > The weighted histogram allows you to make the sums corresponding to
>> > each
>> > triangle.
>
>> > Here is an example
>
>> >>>> quantity = np.zeros(10)
>> >>>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4])
>> >>>> weights = 2*tri_idx_plus + 1
>> >>>> histogram_values = np.histogram(tri_idx_plus,
>> >        np.arange(tri_idx_plus.max() +2), weights=weights)
>> >>>> unique_plus = np.unique1d(tri_idx_plus)
>> >>>> quantity[unique_plus,...]
= histogram_values[0][unique_plus] > >> > Actually, this may only work with positive values of weights (not >> > checked)... > >> > Please tell us if this meets your needs or not. > >> > Cheers, > >> > Emmanuelle > >> > On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote: >> >> Greetings, > >> >> I am in need of some help. I am trying to use the += operator to sum >> >> over edge elements on a triangular mesh. Each edge has an unknown >> >> associated with it. After solving for the unknowns, I am trying to >> >> compute a quantity at the centroid of all triangles in the mesh. On a >> >> flat surface, each edge will be connected to either one or two >> >> triangles. The orientation of the edge and the normals of the >> >> triangles determines whether each triangle attached to the edge is a >> >> "plus" or "minus" triangle for that edge. It is possible for one >> >> triangle to be referenced three times as a "plus" triangle, three >> >> times as a "minus" triangle or any combination of "plus" and >> >> "minus" (1 and 2 or 2 and 1, respectively). > >> >> I have a variable tri_idx which relates the edges to the "plus" and >> >> "minus" triangles. I then compute the quantity at the centroid for >> >> the >> >> "plus" triangle and "minus" triangle attached to each edge. An >> >> example >> >> follows: > >> >> tri_idx_plus = [0 3 6 3 2 4 1 4] >> >> tri_idx_minus = [1 2 5 3 6 0 1 4] > >> >> quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * >> >> basis_p[:len(tri_idx_plus),...] >> >> quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] * >> >> basis_m[:len(tri_idx_minus),...] > >> >> where basis_p and basis_m are basis functions that expand the unknown >> >> of each edge into a surface function over the "plus" or "minus" >> >> triangle. > >> >> I am pretty sure the problem I am encountering is that tri_idx_plus >> >> mentions indices 3 and 4 twice and tri_idx_minus contains index 1 >> >> twice. Is there a way of doing this operation without reverting to >> >> looping over each edge (read: not doing this the slow way). > >> >> Thanks in advance! > >> >> Josh Lawrence >> >> Ph.D. Student >> >> Clemson University > >> >> _______________________________________________ >> >> SciPy-user mailing list >> >> SciPy-user at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ >> > SciPy-user mailing list >> > SciPy-user at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Josh Lawrence Ph.D. Student Clemson University From gnurser at googlemail.com Fri Jun 12 18:26:10 2009 From: gnurser at googlemail.com (George Nurser) Date: Fri, 12 Jun 2009 23:26:10 +0100 Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3 In-Reply-To: <4A3241E2.1010000@ar.media.kyoto-u.ac.jp> References: <4A3241E2.1010000@ar.media.kyoto-u.ac.jp> Message-ID: <1d1e6ea70906121526ha5a7e8do8a222effdfdd56be@mail.gmail.com> Hi, I have just two failures on svn r 5835, which I believe is the same as 0.7.1.rc3. OS x 10.5.7,python.org 2.5.2, numpy svn 7044, gfortran 4.3.2, apple gcc 4.0.1. 
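As a sanity check on which checkout actually got installed, the version string distinguishes a tag from the trunk (a hypothetical session; a trunk build would report a dev string such as 0.8.0.dev5835 instead):

$ python -c "import scipy; print scipy.__version__"
0.7.1rc3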
scipy built with

python setup.py build_src build_clib config_fc --fcompiler=gnu95 --f77flags=" -O3 -march=core2" build_ext config_fc --fcompiler=gnu95 --f77flags=" -O3 -march=core2" build > & inst.log &

scipy.test('10')
[snip]
======================================================================
FAIL: test_random_real (test_basic.TestSingleIFFT)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py", line 205, in test_random_real
    assert_array_almost_equal (y1, x)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 537, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 395, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 0.900900900901%)
 x: array([ 0.56936401 -1.61093638e-09j,  0.42778808 -9.66561764e-09j,
        0.52186930 -1.07395759e-09j,  0.09524378 -1.19960097e-12j,
        0.37034556 -1.55349547e-08j,  0.87382299 +7.88580223e-09j,...
 y: array([ 0.56936431,  0.42778817,  0.52186954,  0.09524398,  0.37034538,
        0.87382281,  0.11707421,  0.78889829,  0.92179215,  0.82394433,
        0.23986711,  0.61445147,  0.64488971,  0.77991045,  0.91312933,...

======================================================================
FAIL: test_continuous_basic.test_cont_basic_slow(<scipy.stats.distributions.ksone_gen object at 0x15d6f2d0>, (22,), 'ksone')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/case.py", line 182, in runTest
    self.test(*self.arg)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/tests/test_continuous_basic.py", line 290, in check_cdf_ppf
    ' - cdf-ppf roundtrip')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 262, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 537, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 395, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal
ksone - cdf-ppf roundtrip
(mismatch 33.3333333333%)
 x: array([ 0.   ,  0.5  ,  0.999])
 y: array([ 0.001,  0.5  ,  0.999])

----------------------------------------------------------------------
Ran 4273 tests in 588.327s

--George.

2009/6/12 David Cournapeau :
> Hi,
>
>    I have uploaded the binaries and source tarballs for 0.7.1rc3. The
> rc3 fixes some issues in scipy.special, which caused wrong
> behavior/crashes on some platforms. Hopefully, this will be the 0.7.1
> release,
>
> cheers,
>
> David
>
> =========================
> SciPy 0.7.1 Release Notes
> =========================
>
> .. contents::
>
> SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0.
> > scipy.io > ======== > > Bugs fixed: > > - Several fixes in Matlab file IO > > scipy.odr > ========= > > Bugs fixed: > > - Work around a failure with Python 2.6 > > scipy.signal > ============ > > Memory leak in lfilter have been fixed, as well as support for array object > > Bugs fixed: > > - #880, #925: lfilter fixes > - #871: bicgstab fails on Win32 > > > scipy.sparse > ============ > > Bugs fixed: > > - #883: scipy.io.mmread with scipy.sparse.lil_matrix broken > - lil_matrix and csc_matrix reject now unexpected sequences, > ?cf. http://thread.gmane.org/gmane.comp.python.scientific.user/19996 > > scipy.special > ============= > > Several bugs of varying severity were fixed in the special functions: > > - #503, #640: iv: problems at large arguments fixed by new implementation > - #623: jv: fix errors at large arguments > - #679: struve: fix wrong output for v < 0 > - #803: pbdv produces invalid output > - #804: lqmn: fix crashes on some input > - #823: betainc: fix documentation > - #834: exp1 strange behavior near negative integer values > - #852: jn_zeros: more accurate results for large s, also in > jnp/yn/ynp_zeros > - #853: jv, yv, iv: invalid results for non-integer v < 0, complex x > - #854: jv, yv, iv, kv: return nan more consistently when out-of-domain > - #927: ellipj: fix segfault on Windows > - #946: ellpj: fix segfault on Mac OS X/python 2.6 combination. > - ive, jve, yve, kv, kve: with real-valued input, return nan for > out-of-domain > ?instead of returning only the real part of the result. > > Also, when ``scipy.special.errprint(1)`` has been enabled, warning > messages are now issued as Python warnings instead of printing them to > stderr. > > > scipy.stats > =========== > > - linregress, mannwhitneyu, describe: errors fixed > - kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist, > ?truncexpon, planck: improvements to numerical accuracy in distributions > > Windows binaries for python 2.6 > =============================== > > python 2.6 binaries for windows are now included. The binary for python 2.5 > requires numpy 1.2.0 or above, and and the one for python 2.6 requires numpy > 1.3.0 or above. > > Universal build for scipy > ========================= > > Mac OS X binary installer is now a proper universal build, and does not > depend > on gfortran anymore (libgfortran is statically linked). The python 2.5 > version > of scipy requires numpy 1.2.0 or above, the python 2.6 version requires > numpy > 1.3.0 or above. 
>
> Checksums
> =========
>
> 9dd5af43cc26ae6d38a13b373ba430fa
> release/installers/scipy-0.7.1rc3-py2.6-python.org.dmg
> 290c2e056fda1f86dfa9f3a76d207a8c
> release/installers/scipy-0.7.1rc3-win32-superpack-python2.6.exe
> d582dff7535d2b64a097fb4bfbc75d09
> release/installers/scipy-0.7.1rc3-win32-superpack-python2.5.exe
> a19400ccfd65d1a0a5030848af6f78ea  release/installers/scipy-0.7.1rc3.tar.gz
> d4ebf322c62b09c4ebaad7b67f92d032  release/installers/scipy-0.7.1rc3.zip
> a0ea0366b178a7827f10a480f97c3c47
> release/installers/scipy-0.7.1rc3-py2.5-python.org.dmg
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

From emmanuelle.gouillart at normalesup.org Fri Jun 12 18:26:11 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sat, 13 Jun 2009 00:26:11 +0200
Subject: [SciPy-user] += Operator and Slicing of Arrays
In-Reply-To: <6a643b830906121329u427e9c12vdbe0726df76c8619@mail.gmail.com>
References: <20090612182813.GA29204@phare.normalesup.org> <28873EDC-E30C-48C3-84F7-0A901FAEE398@gmail.com> <20090612200439.GB32705@phare.normalesup.org> <6a643b830906121329u427e9c12vdbe0726df76c8619@mail.gmail.com>
Message-ID: <20090612222611.GC32705@phare.normalesup.org>

Hi Josh,

I forgot to tell you before, but this discussion should rather be on the numpy-discussion mailing-list: if you have similar questions about numpy arrays and indexing, try to post on the numpy list.

I didn't understand that you want to sum multidimensional arrays instead of scalars. I think it is impossible to use np.histogram for your case (unless the arrays to add have indeed a (3,1) shape, in which case I would just use three times np.histogram inside a for loop... Sometimes you should just leave well enough alone instead of removing *ALL* for loops).

I had a look at the source code of np.histogram to understand how the function works, and it is actually fairly simple to use the same algorithm for your case with multidimensional arrays as weights. Here is an example below, you can try to adapt it to your data.

>>> a = np.array([ 3, 9, 9, 7, 5, 1, 9, 3, 5, 10])  # indices
>>> w = np.ones((10, 2))  # weights
>>> w[::2] = 2  # unequal weights
>>> # The code below is inspired by the code of np.histogram
>>> bins = np.arange(12)
>>> sorting_index = np.argsort(a)
>>> sa = a[sorting_index]
>>> sw = w[sorting_index]
>>> cw = np.concatenate((np.array([0,0]).reshape((1,2)),
...     sw.cumsum(axis=0)), axis=0)
>>> bin_index = np.r_[sa.searchsorted(bins[:-1], 'left'),
...     sa.searchsorted(bins[-1], 'right')]
>>> n = cw[bin_index]
>>> n = np.diff(n, axis=0)
>>> a
array([ 3,  9,  9,  7,  5,  1,  9,  3,  5, 10])
>>> w
array([[ 2.,  2.],
       [ 1.,  1.],
       [ 2.,  2.],
       [ 1.,  1.],
       [ 2.,  2.],
       [ 1.,  1.],
       [ 2.,  2.],
       [ 1.,  1.],
       [ 2.,  2.],
       [ 1.,  1.]])
>>> n
array([[ 0.,  0.],
       [ 1.,  1.],
       [ 0.,  0.],
       [ 3.,  3.],
       [ 0.,  0.],
       [ 4.,  4.],
       [ 0.,  0.],
       [ 1.,  1.],
       [ 0.,  0.],
       [ 5.,  5.],
       [ 1.,  1.]])

Cheers,

Emmanuelle

On Fri, Jun 12, 2009 at 04:29:31PM -0400, Josh Lawrence wrote:
> Emmanuelle,

> In the example I gave to begin with, tri_idx_plus has shape (8,) (if
> tri_idx_plus were a numpy array), edge_unknown has shape (8,...), and
> basis_p has shape (8,...). In practice, the shape for edge_unknown
> would be (8,1,n) and basis_p would have shape (8,3,1). Thus, weight =
> edge_unknown*basis_p would have a shape (8,3,n) in the example. Below
> is a better example.
> import numpy as np > tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4]) > edge_unknown = np.random.rand(8,1,1) + 1.0j*np.random.rand(8,1,1) > basis_p = np.random.rand(8,3,1) > weights = edge_unknown * basis_p > tri_quantity = np.zeros((20,3,1)) + 0j > for i in range(tri_idx_plus.size): > tri_quantity[tri_idx_plus[i],:,:] += weights[i,:,:] > I believe that does exactly what I want. Now, my desire is to get rid > of the for loop. That is why I at first tried to do > tri_quantity[tri_idx_plus,:,:] += weights > but to no avail as only the last reference in tri_idx_plus to 3 and 4 > would be summed to tri_quantity. > I hope this is more clear. > Cheers, > Josh Lawrence > On Fri, Jun 12, 2009 at 4:04 PM, Emmanuelle > Gouillart wrote: > >> It tells me that the weights argument needs to be the same shape as > >> tri_idx_plus. Also, edge_unknown is complex valued. So I have +- real > >> and imaginary components. I do not know much about numpy.histogram... > >> Does the complex dtype preclude it from use? > > ? ? ? ?Yes, the weight array must have the same shape as the values > > array ("a" in the help of np.histogram). This is the case in the example > > I provided. Can you check that the shapes of the two arrays are the same > > in the code you execute? And if you can't find the origin of the error, > > can you post a minimal example of code that reproduces the error? > > ? ? ? ?Also, as Anne mentioned (thanks for your answer, Anne!), the > > weights keyword argument can be used with negative numbers, but also with > > complex numbers (did you check it before posting?). > >>>> a = np.arange(10) > >>>> b = 1j*np.ones_like(a) > >>>> np.histogram(a, a, weights=b) > > (array([ 0.+1.j, ?0.+1.j, ?0.+1.j, ?0.+1.j, ?0.+1.j, ?0.+1.j, ?0.+1.j, > > ? ? ? ?0.+1.j, ?0.+2.j]), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])) > > So you don't need to separate real and imaginary parts, you can use > > directly your complex array as weights. > > ? ? ? ?Cheers, > > ? ? ? ?Emmanuelle > >> Cheers, > >> Josh Lawrence > >> Ph.D. Student > >> Clemson University > >> On Jun 12, 2009, at 2:28 PM, Emmanuelle Gouillart wrote: > >> > Hi Josh, > >> > what kind of problem do you have exactly ? Do you have trouble > >> > implementing the computation you describe, or do you get unexpected > >> > results? > >> > If I understood well what you want to do, you cannot use directly > >> > use += > >> > with fancy indexing (quantity[tri_idx_plus,...]) because the repeated > >> > elements will be incremented just once (see > >> > http://www.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2 > >> > for more details). > >> > However, I think you can solve your problem by using a weighted > >> > histogram. Using your notations > >> > weights = edge_unknown[:len(tri_idx_plus),...] * > >> > ? ? basis_p[:len(tri_idx_plus),...] > >> > histogram_values = np.histogram(tri_idx_plus, > >> > ? ? np.arange(tri_idx_plus.max() +2), weights=weights) > >> > unique_plus = np.unique1d(tri_idx_plus) > >> > quantity[unique_plus,...] = histogram_values[0][unique_plus] > >> > The weighted histogram allows you to make the sums corresponding to > >> > each > >> > triangle. > >> > Here is an example > >> >>>> quantity = np.zeros(10) > >> >>>> tri_idx_plus = np.array([0, 3, 6, 3, 2, 4, 1, 4]) > >> >>>> weights = 2*tri_idx_plus + 1 > >> >>>> histogram_values = np.histogram(tri_idx_plus, > >> > ? ? ? ?np.arange(tri_idx_plus.max() +2), weights=weights) > >> >>>> unique_plus = np.unique1d(tri_idx_plus) > >> >>>> quantity[unique_plus,...] 
= histogram_values[0][unique_plus] > >> > Actually, this may only work with positive values of weights (not > >> > checked)... > >> > Please tell us if this meets your needs or not. > >> > Cheers, > >> > Emmanuelle > >> > On Fri, Jun 12, 2009 at 01:14:16PM -0400, Josh Lawrence wrote: > >> >> Greetings, > >> >> I am in need of some help. I am trying to use the += operator to sum > >> >> over edge elements on a triangular mesh. Each edge has an unknown > >> >> associated with it. After solving for the unknowns, I am trying to > >> >> compute a quantity at the centroid of all triangles in the mesh. On a > >> >> flat surface, each edge will be connected to either one or two > >> >> triangles. The orientation of the edge and the normals of the > >> >> triangles determines whether each triangle attached to the edge is a > >> >> "plus" or "minus" triangle for that edge. It is possible for one > >> >> triangle to be referenced three times as a "plus" triangle, three > >> >> times as a "minus" triangle or any combination of "plus" and > >> >> "minus" (1 and 2 or 2 and 1, respectively). > >> >> I have a variable tri_idx which relates the edges to the "plus" and > >> >> "minus" triangles. I then compute the quantity at the centroid for > >> >> the > >> >> "plus" triangle and "minus" triangle attached to each edge. An > >> >> example > >> >> follows: > >> >> tri_idx_plus = [0 3 6 3 2 4 1 4] > >> >> tri_idx_minus = [1 2 5 3 6 0 1 4] > >> >> quantity[tri_idx_plus,...] += edge_unknown[:len(tri_idx_plus),...] * > >> >> basis_p[:len(tri_idx_plus),...] > >> >> quantity[tri_idx_minus,...] += edge_unknown[:len(tri_idx_minus)...] * > >> >> basis_m[:len(tri_idx_minus),...] > >> >> where basis_p and basis_m are basis functions that expand the unknown > >> >> of each edge into a surface function over the "plus" or "minus" > >> >> triangle. > >> >> I am pretty sure the problem I am encountering is that tri_idx_plus > >> >> mentions indices 3 and 4 twice and tri_idx_minus contains index 1 > >> >> twice. Is there a way of doing this operation without reverting to > >> >> looping over each edge (read: not doing this the slow way). > >> >> Thanks in advance! > >> >> Josh Lawrence > >> >> Ph.D. Student > >> >> Clemson University > >> >> _______________________________________________ > >> >> SciPy-user mailing list > >> >> SciPy-user at scipy.org > >> >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > _______________________________________________ > >> > SciPy-user mailing list > >> > SciPy-user at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Fri Jun 12 19:09:03 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Jun 2009 19:09:03 -0400 Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3 In-Reply-To: <1d1e6ea70906121526ha5a7e8do8a222effdfdd56be@mail.gmail.com> References: <4A3241E2.1010000@ar.media.kyoto-u.ac.jp> <1d1e6ea70906121526ha5a7e8do8a222effdfdd56be@mail.gmail.com> Message-ID: <1cd32cbb0906121609h586f3782o1f28c6e73743c81e@mail.gmail.com> On Fri, Jun 12, 2009 at 6:26 PM, George Nurser wrote: > Hi, > I have just two failures on svn r 5835, which I believe is the same as > 0.7.1.rc3. 
Are you sure you are using the tag version and not the trunk version? I think the test failure with ksone was created when I corrected and tightened the test in revision 5831 in the trunk, the precision in 0.7.1 is set low enough that it should not report this as a failure. I didn't catch the error in the changes to the trunk because I skipped the slow tests. This also means there is a bug or imprecision in ksone or special.smirnov Josef > OS x 10.5.7,python.org 2.5.2, numpy svn 7044, gfortran 4.3.2, apple gcc 4.0.1. > scipy built with > python setup.py build_src build_clib config_fc --fcompiler=gnu95 > --f77flags=" -O3 -march=core2" build_ext config_fc --fcompiler=gnu95 > --f77flags=" -O3 -march=core2" build > & inst.log & > > scipy.test('10') > [snip] > ===================================================================== > FAIL: test_random_real (test_basic.TestSingleIFFT) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py", > line 205, in test_random_real > ? ?assert_array_almost_equal (y1, x) > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 537, in assert_array_almost_equal > ? ?header='Arrays are not almost equal') > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 395, in assert_array_compare > ? ?raise AssertionError(msg) > AssertionError: > Arrays are not almost equal > > (mismatch 0.900900900901%) > ?x: array([ 0.56936401 -1.61093638e-09j, ?0.42778808 -9.66561764e-09j, > ? ? ? ?0.52186930 -1.07395759e-09j, ?0.09524378 -1.19960097e-12j, > ? ? ? ?0.37034556 -1.55349547e-08j, ?0.87382299 +7.88580223e-09j,... > ?y: array([ 0.56936431, ?0.42778817, ?0.52186954, ?0.09524398, ?0.37034538, > ? ? ? ?0.87382281, ?0.11707421, ?0.78889829, ?0.92179215, ?0.82394433, > ? ? ? ?0.23986711, ?0.61445147, ?0.64488971, ?0.77991045, ?0.91312933,... > > ====================================================================== > FAIL: test_continuous_basic.test_cont_basic_slow( object at 0x15d6f2d0>, (22,), 'ksone') > ---------------------------------------------------------------------- > Traceback (most recent call last): > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/case.py", > line 182, in runTest > ? ?self.test(*self.arg) > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/tests/test_continuous_basic.py", > line 290, in check_cdf_ppf > ? ?' - cdf-ppf roundtrip') > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 262, in assert_almost_equal > ? ?return assert_array_almost_equal(actual, desired, decimal, err_msg) > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 537, in assert_array_almost_equal > ? ?header='Arrays are not almost equal') > ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 395, in assert_array_compare > ? ?raise AssertionError(msg) > AssertionError: > Arrays are not almost equal > ksone - cdf-ppf roundtrip > (mismatch 33.3333333333%) > ?x: array([ 0. ? 
, ?0.5 ?, ?0.999]) > ?y: array([ 0.001, ?0.5 ?, ?0.999]) > > ---------------------------------------------------------------------- > Ran 4273 tests in 588.327s > > --George. > > > > 2009/6/12 David Cournapeau : >> Hi, >> >> ? ?I have uploaded the binaries and source tarballs for 0.7.1rc3. The >> rc3 fixes some issues in scipy.special, which caused wrong >> behavior/crashes on some platforms. Hopefully, this will be the 0.7.1 >> release, >> >> cheers, >> >> David >> >> ========================= >> SciPy 0.7.1 Release Notes >> ========================= >> >> .. contents:: >> >> SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0. >> >> scipy.io >> ======== >> >> Bugs fixed: >> >> - Several fixes in Matlab file IO >> >> scipy.odr >> ========= >> >> Bugs fixed: >> >> - Work around a failure with Python 2.6 >> >> scipy.signal >> ============ >> >> Memory leak in lfilter have been fixed, as well as support for array object >> >> Bugs fixed: >> >> - #880, #925: lfilter fixes >> - #871: bicgstab fails on Win32 >> >> >> scipy.sparse >> ============ >> >> Bugs fixed: >> >> - #883: scipy.io.mmread with scipy.sparse.lil_matrix broken >> - lil_matrix and csc_matrix reject now unexpected sequences, >> ?cf. http://thread.gmane.org/gmane.comp.python.scientific.user/19996 >> >> scipy.special >> ============= >> >> Several bugs of varying severity were fixed in the special functions: >> >> - #503, #640: iv: problems at large arguments fixed by new implementation >> - #623: jv: fix errors at large arguments >> - #679: struve: fix wrong output for v < 0 >> - #803: pbdv produces invalid output >> - #804: lqmn: fix crashes on some input >> - #823: betainc: fix documentation >> - #834: exp1 strange behavior near negative integer values >> - #852: jn_zeros: more accurate results for large s, also in >> jnp/yn/ynp_zeros >> - #853: jv, yv, iv: invalid results for non-integer v < 0, complex x >> - #854: jv, yv, iv, kv: return nan more consistently when out-of-domain >> - #927: ellipj: fix segfault on Windows >> - #946: ellpj: fix segfault on Mac OS X/python 2.6 combination. >> - ive, jve, yve, kv, kve: with real-valued input, return nan for >> out-of-domain >> ?instead of returning only the real part of the result. >> >> Also, when ``scipy.special.errprint(1)`` has been enabled, warning >> messages are now issued as Python warnings instead of printing them to >> stderr. >> >> >> scipy.stats >> =========== >> >> - linregress, mannwhitneyu, describe: errors fixed >> - kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist, >> ?truncexpon, planck: improvements to numerical accuracy in distributions >> >> Windows binaries for python 2.6 >> =============================== >> >> python 2.6 binaries for windows are now included. The binary for python 2.5 >> requires numpy 1.2.0 or above, and and the one for python 2.6 requires numpy >> 1.3.0 or above. >> >> Universal build for scipy >> ========================= >> >> Mac OS X binary installer is now a proper universal build, and does not >> depend >> on gfortran anymore (libgfortran is statically linked). The python 2.5 >> version >> of scipy requires numpy 1.2.0 or above, the python 2.6 version requires >> numpy >> 1.3.0 or above. 
>> >> Checksums >> ========= >> >> 9dd5af43cc26ae6d38a13b373ba430fa >> release/installers/scipy-0.7.1rc3-py2.6-python.org.dmg >> 290c2e056fda1f86dfa9f3a76d207a8c >> release/installers/scipy-0.7.1rc3-win32-superpack-python2.6.exe >> d582dff7535d2b64a097fb4bfbc75d09 >> release/installers/scipy-0.7.1rc3-win32-superpack-python2.5.exe >> a19400ccfd65d1a0a5030848af6f78ea ?release/installers/scipy-0.7.1rc3.tar.gz >> d4ebf322c62b09c4ebaad7b67f92d032 ?release/installers/scipy-0.7.1rc3.zip >> a0ea0366b178a7827f10a480f97c3c47 >> release/installers/scipy-0.7.1rc3-py2.5-python.org.dmg >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Fri Jun 12 20:14:41 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Jun 2009 20:14:41 -0400 Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3 In-Reply-To: <1cd32cbb0906121609h586f3782o1f28c6e73743c81e@mail.gmail.com> References: <4A3241E2.1010000@ar.media.kyoto-u.ac.jp> <1d1e6ea70906121526ha5a7e8do8a222effdfdd56be@mail.gmail.com> <1cd32cbb0906121609h586f3782o1f28c6e73743c81e@mail.gmail.com> Message-ID: <1cd32cbb0906121714n1f1641e5i3c085cce7000a1db@mail.gmail.com> On Fri, Jun 12, 2009 at 7:09 PM, wrote: > On Fri, Jun 12, 2009 at 6:26 PM, George Nurser wrote: >> Hi, >> I have just two failures on svn r 5835, which I believe is the same as >> 0.7.1.rc3. > > Are you sure you are using the tag version and not the trunk version? > I think the test failure with ksone was created when I corrected and > tightened the test in revision 5831 in the trunk, > > the precision in 0.7.1 is set low enough that it should not report > this as a failure. > > I didn't catch the error in the changes to the trunk because I skipped > the slow tests. This also means there is a bug or imprecision in ksone > or special.smirnov > > Josef > > >> OS x 10.5.7,python.org 2.5.2, numpy svn 7044, gfortran 4.3.2, apple gcc 4.0.1. >> scipy built with >> python setup.py build_src build_clib config_fc --fcompiler=gnu95 >> --f77flags=" -O3 -march=core2" build_ext config_fc --fcompiler=gnu95 >> --f77flags=" -O3 -march=core2" build > & inst.log & >> >> scipy.test('10') >> [snip] >> ===================================================================== >> FAIL: test_random_real (test_basic.TestSingleIFFT) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py", >> line 205, in test_random_real >> ? ?assert_array_almost_equal (y1, x) >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", >> line 537, in assert_array_almost_equal >> ? ?header='Arrays are not almost equal') >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", >> line 395, in assert_array_compare >> ? ?raise AssertionError(msg) >> AssertionError: >> Arrays are not almost equal >> >> (mismatch 0.900900900901%) >> ?x: array([ 0.56936401 -1.61093638e-09j, ?0.42778808 -9.66561764e-09j, >> ? ? ? ?0.52186930 -1.07395759e-09j, ?0.09524378 -1.19960097e-12j, >> ? ? ? 
?0.37034556 -1.55349547e-08j, ?0.87382299 +7.88580223e-09j,... >> ?y: array([ 0.56936431, ?0.42778817, ?0.52186954, ?0.09524398, ?0.37034538, >> ? ? ? ?0.87382281, ?0.11707421, ?0.78889829, ?0.92179215, ?0.82394433, >> ? ? ? ?0.23986711, ?0.61445147, ?0.64488971, ?0.77991045, ?0.91312933,... >> >> ====================================================================== >> FAIL: test_continuous_basic.test_cont_basic_slow(> object at 0x15d6f2d0>, (22,), 'ksone') >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/case.py", >> line 182, in runTest >> ? ?self.test(*self.arg) >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/tests/test_continuous_basic.py", >> line 290, in check_cdf_ppf >> ? ?' - cdf-ppf roundtrip') >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", >> line 262, in assert_almost_equal >> ? ?return assert_array_almost_equal(actual, desired, decimal, err_msg) >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", >> line 537, in assert_array_almost_equal >> ? ?header='Arrays are not almost equal') >> ?File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", >> line 395, in assert_array_compare >> ? ?raise AssertionError(msg) >> AssertionError: >> Arrays are not almost equal >> ksone - cdf-ppf roundtrip >> (mismatch 33.3333333333%) >> ?x: array([ 0. ? , ?0.5 ?, ?0.999]) >> ?y: array([ 0.001, ?0.5 ?, ?0.999]) >> >> ---------------------------------------------------------------------- >> Ran 4273 tests in 588.327s >> >> --George. >> >> >> >> 2009/6/12 David Cournapeau : >>> Hi, >>> >>> ? ?I have uploaded the binaries and source tarballs for 0.7.1rc3. The >>> rc3 fixes some issues in scipy.special, which caused wrong >>> behavior/crashes on some platforms. Hopefully, this will be the 0.7.1 >>> release, >>> >>> cheers, >>> >>> David >>> >>> ========================= >>> SciPy 0.7.1 Release Notes >>> ========================= >>> >>> .. contents:: >>> >>> SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0. >>> >>> scipy.io >>> ======== >>> >>> Bugs fixed: >>> >>> - Several fixes in Matlab file IO >>> >>> scipy.odr >>> ========= >>> >>> Bugs fixed: >>> >>> - Work around a failure with Python 2.6 >>> >>> scipy.signal >>> ============ >>> >>> Memory leak in lfilter have been fixed, as well as support for array object >>> >>> Bugs fixed: >>> >>> - #880, #925: lfilter fixes >>> - #871: bicgstab fails on Win32 >>> >>> >>> scipy.sparse >>> ============ >>> >>> Bugs fixed: >>> >>> - #883: scipy.io.mmread with scipy.sparse.lil_matrix broken >>> - lil_matrix and csc_matrix reject now unexpected sequences, >>> ?cf. 
>>
>> 2009/6/12 David Cournapeau :
>>> Hi,
>>>
>>> I have uploaded the binaries and source tarballs for 0.7.1rc3. The
>>> rc3 fixes some issues in scipy.special, which caused wrong
>>> behavior/crashes on some platforms. Hopefully, this will be the 0.7.1
>>> release.
>>>
>>> cheers,
>>>
>>> David
>>>
>>> =========================
>>> SciPy 0.7.1 Release Notes
>>> =========================
>>>
>>> .. contents::
>>>
>>> SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0.
>>>
>>> scipy.io
>>> ========
>>>
>>> Bugs fixed:
>>>
>>> - Several fixes in Matlab file IO
>>>
>>> scipy.odr
>>> =========
>>>
>>> Bugs fixed:
>>>
>>> - Work around a failure with Python 2.6
>>>
>>> scipy.signal
>>> ============
>>>
>>> A memory leak in lfilter has been fixed, and support for array
>>> objects has been added.
>>>
>>> Bugs fixed:
>>>
>>> - #880, #925: lfilter fixes
>>> - #871: bicgstab fails on Win32
>>>
>>> scipy.sparse
>>> ============
>>>
>>> Bugs fixed:
>>>
>>> - #883: scipy.io.mmread with scipy.sparse.lil_matrix broken
>>> - lil_matrix and csc_matrix now reject unexpected sequences,
>>>   cf. http://thread.gmane.org/gmane.comp.python.scientific.user/19996
>>>
>>> scipy.special
>>> =============
>>>
>>> Several bugs of varying severity were fixed in the special functions:
>>>
>>> - #503, #640: iv: problems at large arguments fixed by new implementation
>>> - #623: jv: fix errors at large arguments
>>> - #679: struve: fix wrong output for v < 0
>>> - #803: pbdv produces invalid output
>>> - #804: lqmn: fix crashes on some input
>>> - #823: betainc: fix documentation
>>> - #834: exp1: strange behavior near negative integer values
>>> - #852: jn_zeros: more accurate results for large s, also in jnp/yn/ynp_zeros
>>> - #853: jv, yv, iv: invalid results for non-integer v < 0, complex x
>>> - #854: jv, yv, iv, kv: return nan more consistently when out-of-domain
>>> - #927: ellipj: fix segfault on Windows
>>> - #946: ellipj: fix segfault on Mac OS X/python 2.6 combination
>>> - ive, jve, yve, kv, kve: with real-valued input, return nan for
>>>   out-of-domain instead of returning only the real part of the result
>>>
>>> Also, when ``scipy.special.errprint(1)`` has been enabled, warning
>>> messages are now issued as Python warnings instead of printing them
>>> to stderr.
>>>
>>> scipy.stats
>>> ===========
>>>
>>> - linregress, mannwhitneyu, describe: errors fixed
>>> - kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist,
>>>   truncexpon, planck: improvements to numerical accuracy in distributions
>>>
>>> Windows binaries for python 2.6
>>> ===============================
>>>
>>> python 2.6 binaries for windows are now included. The binary for python 2.5
>>> requires numpy 1.2.0 or above, and the one for python 2.6 requires numpy
>>> 1.3.0 or above.
>>>
>>> Universal build for scipy
>>> =========================
>>>
>>> Mac OS X binary installer is now a proper universal build and does not
>>> depend on gfortran anymore (libgfortran is statically linked). The python
>>> 2.5 version of scipy requires numpy 1.2.0 or above, the python 2.6 version
>>> requires numpy 1.3.0 or above.
>>>
>>> Checksums
>>> =========
>>> [snip: same list as quoted above]

Why are there now 310 known failures? It was only a few before, as far
as I remember -- no errors or failures:
Josef

Ran 4168 tests in 383.109s
OK (KNOWNFAIL=310, SKIP=29)

WinXP, SSE2, 32bit

>python -c "import scipy;scipy.test('full')"
Running unit tests for scipy
NumPy version 1.3.0
NumPy is installed in c:\programs\python25\lib\site-packages\numpy
SciPy version 0.7.1rc3
SciPy is installed in c:\programs\python25\lib\site-packages\scipy
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
nose version 0.11.1

From gnurser at googlemail.com  Sat Jun 13 05:45:54 2009
From: gnurser at googlemail.com (George Nurser)
Date: Sat, 13 Jun 2009 10:45:54 +0100
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <1cd32cbb0906121609h586f3782o1f28c6e73743c81e@mail.gmail.com>
Message-ID: <1d1e6ea70906130245g7dcd0e70w9dbb85933b2e5bc1@mail.gmail.com>

You're right; I was testing trunk.

--George.

2009/6/13  :
> On Fri, Jun 12, 2009 at 6:26 PM, George Nurser wrote:
>> Hi,
>> I have just two failures on svn r5835, which I believe is the same as
>> 0.7.1rc3.
>
> Are you sure you are using the tag version and not the trunk version?
> [snip: rest of the thread quoted in full, including both test failures
> and the 0.7.1rc3 release notes]
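The tag/trunk mixup above is easy to detect, since svn builds of that
era carry the revision number in the version string. A small sketch:

import scipy

print scipy.version.version  # tag build: e.g. '0.7.1rc3';
                             # trunk build: e.g. '0.8.0.dev5835'
print scipy.__file__         # confirms which install gets imported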
From pauli.virtanen at iki.fi  Sat Jun 13 06:38:54 2009
From: pauli.virtanen at iki.fi (Pauli Virtanen)
Date: Sat, 13 Jun 2009 10:38:54 +0000 (UTC)
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3

On 2009-06-13, josef.pktd at gmail.com wrote:
[clip]
> Why are there now 310 known failures? It was only a few before, as far
> as I remember -- no errors or failures.

Many weave tests were marked as known failures on Windows. Maybe David
can comment on this if necessary.

-- 
Pauli Virtanen

From david at ar.media.kyoto-u.ac.jp  Sat Jun 13 07:03:44 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 13 Jun 2009 20:03:44 +0900
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <1cd32cbb0906121714n1f1641e5i3c085cce7000a1db@mail.gmail.com>
Message-ID: <4A338790.5060201@ar.media.kyoto-u.ac.jp>

josef.pktd at gmail.com wrote:
> [snip: full quote of the thread]
>
> Why are there now 310 known failures? It was only a few before, as far
> as I remember -- no errors or failures.
>
> Josef
>
> Ran 4168 tests in 383.109s
> OK (KNOWNFAIL=310, SKIP=29)
>
> WinXP, SSE2, 32bit

That may be due to the 0.11 version of nose. I have noticed for quite
some time a significant discrepancy between windows and other platforms
in the number of found tests, and this has always bothered me. There
have not been more known failures in 0.7.1 compared to 0.7.0.

cheers,

David
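For reference, the weave tests in question compile small C snippets on
the fly and run them; a minimal sketch of that round trip (it needs a
working C/C++ compiler, e.g. mingw on Windows):

from scipy import weave

a = 3
# weave compiles the C snippet to an extension module, caches it,
# and hands back return_val
result = weave.inline('return_val = 2 * a;', ['a'])
print result   # 6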
From cournape at gmail.com  Sat Jun 13 08:09:31 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 13 Jun 2009 21:09:31 +0900
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <4A338790.5060201@ar.media.kyoto-u.ac.jp>
Message-ID: <5b8d13220906130509k2a971c4crbb6931f5c61898f6@mail.gmail.com>

On Sat, Jun 13, 2009 at 8:03 PM, David Cournapeau wrote:
>
> That may be due to the 0.11 version of nose. I have noticed for quite
> some time a significant discrepancy between windows and other platforms
> in the number of found tests, and this has always bothered me. There
> have not been more known failures in 0.7.1 compared to 0.7.0.

Ok, actually, Pauli was right: of the 312 failures, 306 are from
scipy.weave. Scipy.weave needs quite a lot of work, if someone wants
to work on it.

cheers,

David

From josef.pktd at gmail.com  Sat Jun 13 08:38:25 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 13 Jun 2009 08:38:25 -0400
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <5b8d13220906130509k2a971c4crbb6931f5c61898f6@mail.gmail.com>
Message-ID: <1cd32cbb0906130538g1db930b2r2edcfe72c8594e6@mail.gmail.com>

On Sat, Jun 13, 2009 at 8:09 AM, David Cournapeau wrote:
> Ok, actually, Pauli was right: of the 312 failures, 306 are from
> scipy.weave. Scipy.weave needs quite a lot of work, if someone wants
> to work on it.
On my computer there is a big regression somewhere (or maybe a change
in the test counting?). With my old python 2.4 install I get just a few
failures and errors and no skips (except the crashing one that I
commented out):

>>> import numpy
>>> numpy.version.version
'1.2.0rc2'
>>> import scipy
>>> scipy.version.version
'0.6.0'
>>> scipy.weave.test('full', verbosity=3)
-------------------------------------------------------------
Ran 513 tests in 671.579s
FAILED (failures=2, errors=8)

With 0.7.1 I get the 300 skipped known failures in a recent trunk
version of scipy with numpy 1.3; some of the weave tests don't use
mingw.

For some time now, I have had the problem that some build scripts with
setup.py don't use mingw as the compiler even though I have it
specified in distutils.cfg, and they then fail to find a (non-existent)
Microsoft compiler. But I have no idea whether this is related to numpy
distutils or any other changes that happened to my python 2.5 install.

Do you know what the result for scipy.weave.test('full') is supposed to
be for the current trunk with mingw as compiler? It would help to
figure out where the problem with my setup and install is. Before the
release of scipy 0.7.0 (which is the last time I tested this more
extensively) I could still run most of the scipy.weave test suite with
just a few errors or failures.

Josef

From david at ar.media.kyoto-u.ac.jp  Sat Jun 13 08:30:40 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 13 Jun 2009 21:30:40 +0900
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <1cd32cbb0906130538g1db930b2r2edcfe72c8594e6@mail.gmail.com>
Message-ID: <4A339BF0.4040407@ar.media.kyoto-u.ac.jp>

josef.pktd at gmail.com wrote:
>
> with 0.7.1 I get the 300 skipped known failures
>

I don't remember the details, but there was an intersection of quite a
few problems: another distutils wart (distutils exits the process
through sys.exit() in some weird cases, and there is no way around it
except not executing the corresponding code), some problems with
subprocess on windows, and mingw problems. I don't know why it used to
work on 0.6.0 compared to 0.7.0.

> For some time now, I have had the problem that some build scripts with
> setup.py don't use mingw as the compiler even though I have it
> specified in distutils.cfg, and they then fail to find a (non-existent)
> Microsoft compiler.
> But I have no idea whether this is related to numpy distutils or any
> other changes that happened to my python 2.5 install.

I don't know either. Frankly, all this code to detect compilers is such
a mess in distutils and numpy.distutils, and it depends so much on the
configuration (whether you have some version of Visual Studio or not,
and it of course depends on the python version) that I consider the
problem to be intractable, at least for someone like me who doesn't
spend his time on windows. I don't have a better answer :)

David

From dgorman at berkeley.edu  Sat Jun 13 11:44:59 2009
From: dgorman at berkeley.edu (Dylan Gorman)
Date: Sat, 13 Jun 2009 11:44:59 -0400
Subject: [SciPy-user] linalg.expm() Illegal Instruction Error?
Message-ID: <94742914-A0F7-49A3-B330-60D2F95CCDF4@berkeley.edu>

Hi Folks,

I'm having a bit of a weird problem. linalg.expm() is failing for
matrices larger than size (51, 51), and reports "Illegal instruction"
and forces python to quit. I'm not sure where this is coming from. I
created the following routine:

from numpy import random
from scipy import linalg

for i in range(10, 128):
    A = random.rand(i, i)
    B = linalg.expm(A)
    print i

which fails with "Illegal instruction" when you get to 51. Has anyone
seen this before?

Thanks!
Dylan

From devicerandom at gmail.com  Sat Jun 13 11:49:22 2009
From: devicerandom at gmail.com (ms)
Date: Sat, 13 Jun 2009 16:49:22 +0100
Subject: [SciPy-user] Some help on pythonic design of time series data processing application
In-Reply-To: <4A3270FC.63BA.009B.0@twdb.state.tx.us>
Message-ID: <4A33CA82.5060901@gmail.com>

Hi Dharhas,

> This might be slightly off topic, but I'd appreciate any tips or
> references that anyone may have, since I know a lot of you deal with
> scientific data. Over the last couple of months I have become very
> familiar with using python/numpy to write lots of small scripts for
> processing or plotting that I can then string together, but I'm having
> trouble finding descriptions of how to write larger applications and
> also how to appropriately use classes.
>
> I'm trying to write an application to semi-automate the process of
> getting our field data from multiple instrument types into a common
> format.
>
> My idea for the workflow (based loosely on stuff we have already sorta
> implemented using shell scripts) is the following:
>
> 1) read all files in a specified directory
> 2) parse file names to work out site names and instrument used
> 3) according to the instrument used, load a plugin/call a script that
>    understands how to read in the data from that instrument. The plugin
>    would be aware of what parameters that type of instrument provides
>    and in what units, and be able to convert everything to SI units
> 4) append data to a site- and parameter-specific file that contains all
>    data from that site
> 5) later I'll be working on a QA/QC application built on top of this
>
> I have started developing a generic plugin class, but I'm not sure how
> to implement the plugins. In our old shell script based application we
> wrote an entirely new script for each new instrument and essentially
> just had a massive IF statement to choose which script to run. This
> wasn't very scalable, and I'd like to be able to add new instruments
> without having to change the main routines.
>
> Since I'm not very familiar with python or object oriented programming,
> I was wondering if anyone had any examples or packages they could
> recommend I look at for ideas. Or if anyone has done something similar
> please let me know.

I have done something very similar to what you describe for the
analysis of single molecule force spectroscopy data. Have a look at it:

http://code.google.com/p/hooke

and let me know if I can be of help.

cheers,
M.
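A registry keyed by instrument name is one lightweight way to get rid of
the massive IF statement described above. A minimal sketch -- all the
names here (register_reader, read_sonde, 'sonde') are made up for
illustration, not taken from any existing package:

INSTRUMENT_READERS = {}

def register_reader(name, reader):
    # each plugin module calls this once when it is imported
    INSTRUMENT_READERS[name] = reader

def read_instrument_file(name, filename):
    # the main routine looks the reader up instead of branching
    try:
        reader = INSTRUMENT_READERS[name]
    except KeyError:
        raise ValueError("no plugin registered for instrument %r" % name)
    return reader(filename)

# a plugin module then only defines a reader and registers it:
def read_sonde(filename):
    """Parse one vendor format and return data converted to SI units."""
    return open(filename).read()   # placeholder for real parsing

register_reader('sonde', read_sonde)

New instruments are added by dropping in a new module that calls
register_reader; the main routines never change.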
From josef.pktd at gmail.com  Sat Jun 13 16:41:10 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 13 Jun 2009 16:41:10 -0400
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <4A339BF0.4040407@ar.media.kyoto-u.ac.jp>
Message-ID: <1cd32cbb0906131341x45081c15s95cf9c27339fe488@mail.gmail.com>

On Sat, Jun 13, 2009 at 8:30 AM, David Cournapeau wrote:
> [snip: compiler detection discussion quoted above]

I narrowed down the weave/mingw problems to changes that occurred
during the release process of 0.7.0 that broke mingw detection for
windows users.

If I run 0.7.0b1 (the last version of 0.7.0 that I had fully tested
and still have in my python25 install), I get the good result:

Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy.weave
>>> scipy.weave.test('full')
Running unit tests for scipy.weave
NumPy version 1.3.0
NumPy is installed in c:\programs\python25\lib\site-packages\numpy
SciPy version 0.7.0.dev5180
SciPy is installed in c:\programs\python25\lib\site-packages\scipy
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
nose version 0.11.1

Ran 448 tests in 774.532s
OK (KNOWNFAIL=2, SKIP=7)

With the release version of 0.7.0 and the skip-known-failure decorators
removed, I get the errors because mingw is not used as the compiler.

So the change happened between svn versions 5180 and 5542.

It looks like the problem is not outside of scipy, since I can run
0.7.0b1 in my regular python25 without the problems, with no other
changes in the setup.

(I had the compiler detection problem the first time a while ago with
theano, but I thought it was a problem with theano, since they mention
that it is not tested for windows.)

BTW: is there a way to give the no-skip argument, i.e. also run known
failures, to nose in the call to test, e.g. scipy.test(...)?

Josef
From josef.pktd at gmail.com  Sat Jun 13 18:03:58 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 13 Jun 2009 18:03:58 -0400
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <1cd32cbb0906131341x45081c15s95cf9c27339fe488@mail.gmail.com>
Message-ID: <1cd32cbb0906131503mb8d7d62h634e5fb078cedec8@mail.gmail.com>

On Sat, Jun 13, 2009 at 4:41 PM,  wrote:
> I narrowed down the weave/mingw problems to changes that occurred
> during the release process of 0.7.0 that broke mingw detection for
> windows users.
> [snip: rest of the previous message]

Some bughunting later: if I drop platform_info.py from 0.7.0b1 into
the recent scipy trunk, then everything works ok.

The problem is that the change to subprocess in platform_info.py
introduced a keyword, close_fds, that is not available on windows.
Since it is inside a try/except clause, the error is captured in the
same way as if the shell command were not available. (The joy of broad
try/except clauses.)

So the fix for windows is to remove the unavailable close_fds keyword:

- p = subprocess.Popen([str(name), '-v'], shell=True, close_fds=True,
+ p = subprocess.Popen([str(name), '-v'], shell=True,

The same change was already made for build_tools.py in changesets 5438
and 5439, but not for platform_info.py.

Also, the skip decorator for the test that segfaults (with mingw)
because it uses cout, test_with_include in test_ext_tools.py, has been
removed in the current trunk.

I'm glad it's nothing serious with my setuptools/distutils/... setup.

The new test results for scipy.weave.test('full') are:

>>> scipy.weave.test('full')
Running unit tests for scipy.weave
NumPy version 1.3.0
NumPy is installed in c:\programs\python25\lib\site-packages\numpy
SciPy version 0.8.0.dev5789
SciPy is installed in c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev5789.win32\programs\python25\lib\site-packages\scipy
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
nose version 0.11.1

Ran 440 tests in 789.235s
OK (KNOWNFAIL=2)

Josef
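A portable version of that call would make the platform restriction
explicit instead of relying on the broad try/except. A sketch -- the
'gcc -v' probe merely stands in for the compiler check above, and
whether close_fds is accepted on a given platform depends on the Python
version and on whether the standard streams are redirected:

import subprocess
import sys

kwargs = {}
if sys.platform != 'win32':
    # rejected on windows in this setup, so only pass it elsewhere
    kwargs['close_fds'] = True
p = subprocess.Popen('gcc -v', shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT, **kwargs)
print p.communicate()[0]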
I don't know why it used to >>> work on 0.6.0 compared to 0.7.0. >>> >>>> For some time now, I have the problem that some build scripts with >>>> setup.py don't use mingw as compiler even though I have it specified >>>> in distutils.cfg, and they don't find a non-existing microsoft >>>> compiler. >>>> But I have no idea whether this is related to numpy distutils or any >>>> other changes that happened to my python 2.5 install. >>>> >>> >>> I don't know either. Frankly, all this code to detect compilers is such >>> a mess in distutils and numpy.distutils, and it depends so much on the >>> configuration (whether you have some version of Visual Studio or not, >>> and it of course depends on the python version) that I consider the >>> problem to be intractable, at least for someone like me who don't spend >>> its time on windows. I don't have a better answer :) >>> >> >> I narrowed down the weave/mingw problems to changes that occured >> during the release process of 0.7.0 that broke mingw detection for >> windows users. >> >> If I run 0.7.0b1 (the last version of 0.7.0 that I had fully tested >> and in my python25 install), I get the good result >> >> Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on >> win32 >> Type "help", "copyright", "credits" or "license" for more information. >>>>> import scipy.weave >>>>> scipy.weave.test('full') >> Running unit tests for scipy.weave >> NumPy version 1.3.0 >> NumPy is installed in c:\programs\python25\lib\site-packages\numpy >> SciPy version 0.7.0.dev5180 >> SciPy is installed in c:\programs\python25\lib\site-packages\scipy >> Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Int >> el)] >> nose version 0.11.1 >> >> Ran 448 tests in 774.532s >> OK (KNOWNFAIL=2, SKIP=7) >> >> >> >> With the release version of 0.7.0 and removing the skip known failures >> decorators I get the errors because mingw is not used as the compiler. >> >> so the change happened between svn versions '5180' and '5542' >> >> It looks like the problem is not outside of scipy, since I can run >> 0.7.0b1 in my regular python25 without the problems, with no other >> changes in the setup. >> >> (I had the compiler detection problem the first time a while ago with >> theano, but I thought it was a problem with theano since they mention >> that it is not tested for windows) >> >> BTW: is there a way to give the no-skip argument, i.e. run also known >> failures, to nose in the call to test, e.g. scipy.test(...)? >> >> Josef >> > some bughunting later, if I drop platform_info.py ?from 0.7.0b1 into > the recent scipy trunk, then everything works ok. > > the problem is that the change to subprocess in platform_info.py > introduced a keyword, close_fds, that is not available on windows. > Since it is inside a try except clause, the error is captured in the > same way as if the shell command were not available. (the joy of broad > try except clauses) > > so the fix for windows is to remove the unavailable close_fds keyword > > - p = subprocess.Popen([str(name), '-v'], shell=True, close_fds=True, > + p = subprocess.Popen([str(name), '-v'], shell=True, > > the same change has been made for build_tools.py already in changesets > 5438 and 5439 but not for platform_info.py > > Also the skip decorator for the segfaulting (with mingw) test that > uses cout, has been removed in the current trunk, > test_with_include in test_ext_tools.py > > I'm glad it's nothing serious with my setuptools/distutils/... setup. 
> > > > The new test results for scipy.weave.test('full') are: > >>>> scipy.weave.test('full') > Running unit tests for scipy.weave > NumPy version 1.3.0 > NumPy is installed in c:\programs\python25\lib\site-packages\numpy > SciPy version 0.8.0.dev5789 > SciPy is installed in c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\sc > ipy-0.8.0.dev5789.win32\programs\python25\lib\site-packages\scipy > Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Int > el)] > nose version 0.11.1 > > Ran 440 tests in 789.235s > OK (KNOWNFAIL=2) > > > Josef > fixed in trunk in revision 5838 I think this should be backported to 0.7.1 Josef From robert.kern at gmail.com Sat Jun 13 18:23:56 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 13 Jun 2009 17:23:56 -0500 Subject: [SciPy-user] linalg.expm() Illegal Instruction Error? In-Reply-To: <94742914-A0F7-49A3-B330-60D2F95CCDF4@berkeley.edu> References: <94742914-A0F7-49A3-B330-60D2F95CCDF4@berkeley.edu> Message-ID: <3d375d730906131523g6cf55751n7db53b1feb6388e@mail.gmail.com> On Sat, Jun 13, 2009 at 10:44, Dylan Gorman wrote: > Hi Folks, > > I'm having a bit of a weird problem. linalg.expm() is failing for > matrices larger than size (51,51), and reports "Illegal instruction" > and forces python to quit. I'm not sure where this is coming from. I > created the following routine: > for i in range(10,128): > ? ? ? ?A = random.rand(i,i) > ? ? ? ?B = linalg.expm(A) > ? ? ? ?print i > > which fails with "Illegal instruction" when you get to 51. This happens when you use a numpy binary that was compiled to use an ATLAS library on a CPU with more advanced SSE instructions than your CPU. Exactly which binary did you install (please give a URL)? What is your CPU? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Sat Jun 13 21:51:09 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 14 Jun 2009 10:51:09 +0900 Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3 In-Reply-To: <1cd32cbb0906131511y3b219dc3t5e54d8d4c9d56ab8@mail.gmail.com> References: <4A3241E2.1010000@ar.media.kyoto-u.ac.jp> <1cd32cbb0906121609h586f3782o1f28c6e73743c81e@mail.gmail.com> <1cd32cbb0906121714n1f1641e5i3c085cce7000a1db@mail.gmail.com> <4A338790.5060201@ar.media.kyoto-u.ac.jp> <5b8d13220906130509k2a971c4crbb6931f5c61898f6@mail.gmail.com> <1cd32cbb0906130538g1db930b2r2edcfe72c8594e6@mail.gmail.com> <4A339BF0.4040407@ar.media.kyoto-u.ac.jp> <1cd32cbb0906131341x45081c15s95cf9c27339fe488@mail.gmail.com> <1cd32cbb0906131503mb8d7d62h634e5fb078cedec8@mail.gmail.com> <1cd32cbb0906131511y3b219dc3t5e54d8d4c9d56ab8@mail.gmail.com> Message-ID: <5b8d13220906131851udef3897n569b6ae1d9d10c8@mail.gmail.com> On Sun, Jun 14, 2009 at 7:11 AM, wrote: > fixed in trunk in revision 5838 > > I think this should be backported to 0.7.1 I can backport the change but this won't fix the original problem: I have tons of failures on python 2.6 if VS 2008 is installed. Testing on python 2.5 is not enough - there needs to be at least 4 tested configurations (python 25, python 26, both with and without the corresponding VS installed - 2003 for python 25 and 2008 for python 26). 
David

From josef.pktd at gmail.com  Sat Jun 13 22:20:03 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 13 Jun 2009 22:20:03 -0400
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <5b8d13220906131851udef3897n569b6ae1d9d10c8@mail.gmail.com>
Message-ID: <1cd32cbb0906131920q282cda4bh88156956fc9b95a9@mail.gmail.com>

On Sat, Jun 13, 2009 at 9:51 PM, David Cournapeau wrote:
> I can backport the change, but this won't fix the original problem: I
> have tons of failures on python 2.6 if VS 2008 is installed. Testing
> on python 2.5 is not enough - there need to be at least 4 tested
> configurations: python 2.5 and python 2.6, each with and without the
> corresponding VS installed (2003 for python 2.5 and 2008 for python 2.6).

But it's a trivial fix, to get back the status from before 0.7.0 for
pure mingw users like me.

I don't have any matching python - VS combination, so I'm no help for
that problem. (And "cl" of visual express 2005 or the visual studio
tools is not on the shell/system/windows path by default, so it cannot
interfere with a false detection.)

What I don't understand is why in platform_info.py the function
`choose_compiler` doesn't ask distutils what the default compiler is,
the one specified in distutils.cfg, instead of doing its own
determination.

pyximport in cython had the same problem of not using mingw, but this
is fixed now (or in the next release).

Josef

From cournape at gmail.com  Sat Jun 13 22:51:53 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 14 Jun 2009 11:51:53 +0900
Subject: [SciPy-user] [Numpy-discussion] scipy 0.7.1 rc3
In-Reply-To: <1cd32cbb0906131920q282cda4bh88156956fc9b95a9@mail.gmail.com>
Message-ID: <5b8d13220906131951xce7ee8el7606e6175f0f1dff@mail.gmail.com>

On Sun, Jun 14, 2009 at 11:20 AM,  wrote:
>
> But it's a trivial fix, to get back the status from before 0.7.0 for
> pure mingw users like me.

I meant that it won't help for the known failures.
> What I don't understand is why in platform_info.py the function
> `choose_compiler` doesn't ask distutils what the default compiler is,
> the one specified in distutils.cfg, instead of doing its own
> determination.

weave code is fairly old, so maybe it was coded before
numpy.distutils/distutils had proper mingw support. To know for sure,
you would need to dig into the svn log/blame of numpy and scipy to see
the history.

David

From dgorman at berkeley.edu  Sun Jun 14 00:46:43 2009
From: dgorman at berkeley.edu (Dylan Gorman)
Date: Sat, 13 Jun 2009 21:46:43 -0700
Subject: [SciPy-user] linalg.expm() Illegal Instruction Error?
In-Reply-To: <3d375d730906131523g6cf55751n7db53b1feb6388e@mail.gmail.com>
Message-ID: <864573FB-841D-486D-B16A-9964C62888FE@berkeley.edu>

Thanks, Robert.

The situation is a little tricky. I installed numpy 1.3.0 from source
from, I guess, this sourceforge link:
http://sourceforge.net/project/downloading.php?group_id=1369&filename=numpy-1.3.0.tar.gz&a=58155458

I should perhaps elaborate. I have a small cluster, on which I have
non-root access. I've installed python 2.5.2, numpy 1.3.0, and scipy
0.7rc1 to ~. ATLAS is already configured on the system, so prior to
installing these packages I set

export ATLAS=/usr/local/atlas

in ~/.bash_profile.

I'm just aiming to run non-parallel batch jobs on each node of the
cluster. The code works quite happily on a set of newer nodes in this
cluster, but on the older ones we are getting this Illegal instruction
error.

I'm wondering if there's a way to make numpy use, perhaps, an older
version of ATLAS that will be somewhat slower, but will work on all
the nodes. Can ATLAS be installed to ~?

Thanks,
Dylan

On Jun 13, 2009, at 3:23 PM, Robert Kern wrote:
> [snip]

From robert.kern at gmail.com  Sun Jun 14 00:49:42 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 13 Jun 2009 23:49:42 -0500
Subject: [SciPy-user] linalg.expm() Illegal Instruction Error?
In-Reply-To: <864573FB-841D-486D-B16A-9964C62888FE@berkeley.edu>
Message-ID: <3d375d730906132149k6ff1d5dakeec464ac7a1af6e5@mail.gmail.com>

On Sat, Jun 13, 2009 at 23:46, Dylan Gorman wrote:
> Thanks, Robert.
> The situation is a little tricky. I installed numpy 1.3.0 from source.
> [snip]
> I'm wondering if there's a way to make numpy use, perhaps, an older
> version of ATLAS that will be somewhat slower, but will work on all
> the nodes.

If you're building everything from source, use whichever ATLAS you
like. Or no ATLAS at all.

> Can ATLAS be installed to ~?

Yes.

-- 
Robert Kern

From cournape at gmail.com  Sun Jun 14 01:02:07 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 14 Jun 2009 14:02:07 +0900
Subject: [SciPy-user] linalg.expm() Illegal Instruction Error?
In-Reply-To: <864573FB-841D-486D-B16A-9964C62888FE@berkeley.edu>
Message-ID: <5b8d13220906132202m65db300cp5c91b3cec4216c98@mail.gmail.com>

On Sun, Jun 14, 2009 at 1:46 PM, Dylan Gorman wrote:
> [snip]
> I'm just aiming to run non-parallel batch jobs on each node of the
> cluster. The code works quite happily on a set of newer nodes in this
> cluster, but on the older ones we are getting this Illegal instruction
> error.

The "optimal" solution is to build atlas for each node - this way, you
will get an optimized ATLAS for each node. That's a maintenance
nightmare.

The other solution is to use an ATLAS optimized on the machine with the
lowest common denominator (say a machine with SSE only), and deploy
this everywhere else.

The easiest is to avoid atlas and use pure BLAS/LAPACK.

David

From dgorman at berkeley.edu  Sun Jun 14 02:19:32 2009
From: dgorman at berkeley.edu (Dylan Gorman)
Date: Sat, 13 Jun 2009 23:19:32 -0700
Subject: [SciPy-user] linalg.expm() Illegal Instruction Error?
In-Reply-To: <5b8d13220906132202m65db300cp5c91b3cec4216c98@mail.gmail.com>

David,

Interesting. If I compile an ATLAS build for each node, how would I
let numpy know to use the particular ATLAS for the node it happens to
be running on?

Regards,
Dylan

On Jun 13, 2009, at 10:02 PM, David Cournapeau wrote:
> [snip]
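One way to see, on a given node, what a numpy install was actually built
against -- a diagnostic sketch (show_config reports the build-time
BLAS/LAPACK/ATLAS; with a shared ATLAS, the library actually loaded at
run time is whatever the dynamic linker resolves on that node):

import numpy
import numpy.distutils.system_info as si

numpy.show_config()         # BLAS/LAPACK/ATLAS recorded at build time
print numpy.__file__        # which numpy this node actually imports
print si.get_info('atlas')  # what numpy's build machinery would find now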
If I compile an ATLAS build for each node, how would I let numpy know to use the particular ATLAS for the node it happens to be running on? Regards, Dylan On Jun 13, 2009, at 10:02 PM, David Cournapeau wrote: > On Sun, Jun 14, 2009 at 1:46 PM, Dylan Gorman > wrote: >> Thanks, Robert. >> >> The situation is a little tricky. I installed numpy 1.3.0 from source >> from, I guess, this sourceforge link: http://sourceforge.net/project/downloading.php?group_id=1369&filename=numpy-1.3.0.tar.gz&a=58155458 >> >> I should perhaps elaborate. I have a small cluster, on which I have >> non-root access. I've installed python 2.5.2, numpy 1.3.0, and scipy >> 0.7rc1 to ~. ATLAS is already configured on the system, so prior to >> installing these packages I set >> export ATLAS=/usr/local/atlas >> in ~/.bash_profile >> >> I'm just aiming to run non-parallel batch jobs on each node of the >> cluster. The code works quite happily on a set of newer nodes in this >> cluster, but on the older ones we are getting this Illegal >> instruction >> error. > > The "optimal" solution is to build atlas for each node - this way, you > will get an optimized ATLAS for each node. That's a maintenance > nightmare. > > The other solution is to use ATLAS optimized on the machine with the > lowest common denominator (say a machine with SSE only), and deploy > this everywhere else. > > The easiest is to avoid atlas and use pure BLAS/LAPACK, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Sun Jun 14 02:27:53 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 14 Jun 2009 01:27:53 -0500 Subject: [SciPy-user] linalg.expm() Illegal Instruction Error? In-Reply-To: References: <94742914-A0F7-49A3-B330-60D2F95CCDF4@berkeley.edu> <3d375d730906131523g6cf55751n7db53b1feb6388e@mail.gmail.com> <864573FB-841D-486D-B16A-9964C62888FE@berkeley.edu> <5b8d13220906132202m65db300cp5c91b3cec4216c98@mail.gmail.com> Message-ID: <3d375d730906132327q3d648700ofb7ceec2ecd31435@mail.gmail.com> On Sun, Jun 14, 2009 at 01:19, Dylan Gorman wrote: > David, > > Interesting. If I compile an ATLAS build for each node, how would I > let numpy know to use the particular ATLAS for the node it happens to > be running on? One of two ways: 1. If you compile ATLAS as a shared library and link numpy to it dynamically, just install the correct ATLAS on each node. The OS's dynamic linking mechanism will pick the local ATLAS when numpy gets loaded in the local process. You can build numpy once in this case and run it from a shared filesystem if you like. 2. If you compile ATLAS as a static library, then you will also have to build numpy on each platform and install the correct numpy locally to each node. You cannot share the numpy build on a shared filesystem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mmueller at python-academy.de Sun Jun 14 07:43:23 2009 From: mmueller at python-academy.de (=?ISO-8859-15?Q?Mike_M=FCller?=) Date: Sun, 14 Jun 2009 13:43:23 +0200 Subject: [SciPy-user] [ANN] Reminder: EuroSciPy 2009 - Early Bird Deadline June 15, 2009 Message-ID: <4A34E25B.3000508@python-academy.de> EuroSciPy 2009 - Early Bird Deadline June 15, 2009 ================================================== The early bird deadline for EuroSciPy 2009 is June 15, 2009. 
Please register ( http://www.euroscipy.org/registration.html ) by this
date to take advantage of the reduced early registration rate.

EuroSciPy 2009
==============

We're pleased to announce the EuroSciPy 2009 Conference to be held in
Leipzig, Germany on July 25-26, 2009.

http://www.euroscipy.org

This is the second conference after the successful conference last
year. Again, EuroSciPy will be a venue for the European community of
users of the Python programming language in science.

Presentation Schedule
---------------------

The schedule of presentations for the EuroSciPy conference is online:
http://www.euroscipy.org/presentations/schedule.html

We have 16 talks from a variety of scientific fields. All about using
Python for scientific work.

Registration
------------

Registration is open. The registration fee is 100.00 € for early
registrants and will increase to 150.00 € for late registration after
June 15, 2009. On-site registration and registration after July 23,
2009 will be 200.00 €. Registration will include breakfast, snacks and
lunch for Saturday and Sunday.

Please register here: http://www.euroscipy.org/registration.html

Important Dates
---------------

March 21     Registration opens
May 8        Abstract submission deadline
May 15       Acceptance of presentations
May 30       Announcement of conference program
June 15      Early bird registration deadline
July 15      Slides submission deadline
July 20 - 24 Pre-Conference courses
July 25/26   Conference
August 15    Paper submission deadline

Venue
-----

mediencampus
Poetenweg 28
04155 Leipzig
Germany

See http://www.euroscipy.org/venue.html for details.

Help Welcome
------------

Would you like to help make EuroSciPy 2009 a success? Here are some
ways you can get involved:

* attend the conference
* submit an abstract for a presentation
* give a lightning talk
* make EuroSciPy known:
  - distribute the press release (http://www.euroscipy.org/media.html)
    to scientific magazines or other relevant media
  - write about it on your website
  - in your blog
  - talk to friends about it
  - post to local e-mail lists
  - post to related forums
  - spread flyers and posters in your institution
  - make entries in relevant event calendars
  - anything you can think of
* inform potential sponsors about the event
* become a sponsor

If you're interested in volunteering to help organize things or have
some other idea that can help the conference, please email us at
mmueller at python-academy dot de.

Sponsorship
-----------

Would you like to sponsor the conference? There are several options
available:
http://www.euroscipy.org/sponsors/become_a_sponsor.html

Pre-Conference Courses
----------------------

Would you like to learn Python or about some of the most used
scientific libraries in Python? Then the "Python Summer Course" [1]
might be for you. There are two parts to this course:

* a two-day course "Introduction to Python" [2] for people with
  programming experience in other languages and
* a three-day course "Python for Scientists and Engineers" [3] that
  introduces some of the most used Python tools for scientists and
  engineers such as NumPy, PyTables, and matplotlib

Both courses can be booked individually [4]. Of course, you can attend
the courses without registering for EuroSciPy.
[1] http://www.python-academy.com/courses/python_summer_course.html
[2] http://www.python-academy.com/courses/python_course_programmers.html
[3] http://www.python-academy.com/courses/python_course_scientists.html
[4] http://www.python-academy.com/courses/dates.html

From contact at pythonxy.com Sun Jun 14 16:57:39 2009
From: contact at pythonxy.com (Pierre Raybaut)
Date: Sun, 14 Jun 2009 22:57:39 +0200
Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.14
Message-ID: <4A356443.1020607@pythonxy.com>

Hi all,

Release 2.1.14 is now available on http://www.pythonxy.com:
- All-in-One Installer ("Full Edition"),
- Plugin Installer -- to be downloaded with xyweb,
- Update

(This release was split in two parts: v2.1.13 and v2.1.14 -- only
because the update was 20MB above the Google Code file size limit)

Changes history

Version 2.1.14 (06-14-2009)

* Added:
o gnuplot 1.8 - Complete gnuplot package: includes the popular
open-source plotting program gnuplot and the Python interface
o psyco 1.6 - Specializing compiler which can massively speed up the
execution of any Python code
o formlayout 1.0.1 - Module for creating form dialogs/widgets to edit
various types of parameters without having to write any GUI code
o PyWavelets 0.1.6
o scikits.timeseries 0.91.1

* Updated:
o Pydee 0.4.13
o ITK 3.14
o Photran 4.0.5

Version 2.1.13 (06-14-2009)

* Updated:
o Pydee 0.4.12
o xy 1.0.25
o NumPy 1.3.0
o numexpr 1.3
o Matplotlib 0.98.5.4 (added PyQt4 widget and associated QtDesigner plugin)
o VTK 5.4.0
o Enthought Tool Suite 3.2.0.1
o VPython 5.1
o PyOpenGL 3.0.0
o SymPy 0.6.4
o pydicom 0.9.3
o GDAL 1.6.1
o pyExcelerator 0.6.4.1
o Pywin32 2.13.1 (plugin minor bugfix)
o Cython 0.11.2
o jinja 2.1.1 (plugin major bugfix)
o nose 0.11.1
o winpdb 1.4.6
o Pydev 1.4.6
o StartExplorer 0.5.0
o SWIG 1.3.39
o Following updates are relevant only for a new install of Python(x,y)
(there is absolutely no need to update your current install)
o SciTE 1.78
o PyQt4 4.4.3.7 (minor update: added documentation)
o OpenCV 1.0.0.2
o PyGTK 2.12.1.1
o gettext 0.14.4.1
o console2 2.0.141.9

* Fixed:
o Python(x,y) is now built using a special NSIS build with advanced
logging support *and* long strings support (fixed a - quite rarely
encountered but existing - corrupting PATH issue)

Regards,
Pierre Raybaut

From saffsd at gmail.com Mon Jun 15 09:54:07 2009
From: saffsd at gmail.com (Marco Lui)
Date: Mon, 15 Jun 2009 23:54:07 +1000
Subject: [SciPy-user] Problem building scipy from source
Message-ID: 

Hello everyone

I have followed the instructions at
http://scipy.org/Installing_SciPy/BuildingGeneral

I have successfully built BLAS and LAPACK, but the build for scipy
itself fails as follows:

building extension "scipy.optimize._lbfgsb" sources
f2py options: []
adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources.
adding 'build/src.linux-x86_64-2.5' to include_dirs.
building extension "scipy.optimize.moduleTNC" sources
building extension "scipy.optimize._cobyla" sources
f2py options: []
adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources.
adding 'build/src.linux-x86_64-2.5' to include_dirs.
building extension "scipy.optimize.minpack2" sources
f2py options: []
adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources.
adding 'build/src.linux-x86_64-2.5' to include_dirs.
building extension "scipy.optimize._slsqp" sources
f2py options: []
adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources.
adding 'build/src.linux-x86_64-2.5' to include_dirs.
building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources conv_template:> build/src.linux-x86_64-2.5/scipy/signal/lfilter.inc Traceback (most recent call last): File "setup.py", line 160, in setup_package() File "setup.py", line 152, in setup_package configuration=configuration ) File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", line 176, in setup return old_setup(**new_attr) File "/usr/lib/python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib/python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/lib/python2.5/distutils/command/build.py", line 113, in run self.run_command(cmd_name) File "/usr/lib/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 130, in run self.build_sources() File "/usr/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 147, in build_sources self.build_extension_sources(ext) File "/usr/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 252, in build_extension_sources sources = self.template_sources(sources, ext) File "/usr/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 359, in template_sources outstr = process_c_file(source) File "/usr/lib/python2.5/site-packages/numpy/distutils/conv_template.py", line 185, in process_file % (sourcefile, process_str(''.join(lines)))) File "/usr/lib/python2.5/site-packages/numpy/distutils/conv_template.py", line 150, in process_str newstr[sub[0]:sub[1]], sub[4]) File "/usr/lib/python2.5/site-packages/numpy/distutils/conv_template.py", line 114, in expand_sub for k in range(numsubs): TypeError: range() integer end argument expected, got NoneType. The underlying system is an installation of Ubuntu Hardy with Python 2.5.2. Any suggestions are much appreciated. Thank you, Marco -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey15 at ukr.net Mon Jun 15 10:00:54 2009 From: dmitrey15 at ukr.net (Dmitrey) Date: Mon, 15 Jun 2009 17:00:54 +0300 Subject: [SciPy-user] [ANN][optimization] OpenOpt release 0.24 Message-ID: <4A365416.9080607@ukr.net> Hi all, I'm glad to inform you about new release (0.24) of OpenOpt, a free Python-written universal numerical optimization framework (license: BSD) Our homepage: http://openopt.org Introduction to the framework: http://openopt.org/Foreword All release details here: http://forum.openopt.org/viewtopic.php?id=110 or http://openopt.org/Changelog Regards, D. From aleck at marlboro.edu Mon Jun 15 11:22:44 2009 From: aleck at marlboro.edu (Alec Koumjian) Date: Mon, 15 Jun 2009 11:22:44 -0400 Subject: [SciPy-user] scikits.timeseries module installation Message-ID: <61a4c0ba0906150822k77f938cfq7396accdb52a505c@mail.gmail.com> Hello all, I'm looking for a little installation support for the scikits.timeseries module. The machine is running Ubuntu Jaunty 9.04. 
The module appears to have successfully installed to the following directory: /usr/local/lib/python2.6/dist-packages/scikits.timeseries-0.91.1-py2.6-linux-i686.egg The installation was done using the setuptools setup.py file. However, when I attempt to do the following: "import scikits.timeseries as ts" Python can't find the module. I've tried manually copying the scikits directory to /usr/lib/python2.6/dist-packages/, but it's expecting to find further dependencies elsewhere. Any suggestions? From jsseabold at gmail.com Mon Jun 15 11:37:48 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 15 Jun 2009 11:37:48 -0400 Subject: [SciPy-user] scikits.timeseries module installation In-Reply-To: <61a4c0ba0906150822k77f938cfq7396accdb52a505c@mail.gmail.com> References: <61a4c0ba0906150822k77f938cfq7396accdb52a505c@mail.gmail.com> Message-ID: On Mon, Jun 15, 2009 at 11:22 AM, Alec Koumjian wrote: > Hello all, > I'm looking for a little installation support for the > scikits.timeseries module. ?The machine is running Ubuntu Jaunty 9.04. > ?The module appears to have successfully installed to the following > directory: > /usr/local/lib/python2.6/dist-packages/scikits.timeseries-0.91.1-py2.6-linux-i686.egg > I can't remember whether I had to do this manually for the timeseries scikit on Jaunty, but is the scikits directory in your python path? To check you can import sys sys.path Mine has an entry '/usr/local/lib/python2.6/dist-packages/scikits.timeseries-0.91.1-py2.6-linux-i686.egg', If not, you can append to this list as a session fix, but if you want it to always be in your path you should be able to follow the instructions here or add this line to your .bashrc export PYTHONPATH="/path/to/ts" > The installation was done using the setuptools setup.py file. > However, when I attempt to do the following: > "import scikits.timeseries as ts" > Python can't find the module. ?I've tried manually copying the scikits > directory to /usr/lib/python2.6/dist-packages/, but it's expecting to > find further dependencies elsewhere. > > Any suggestions? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Skipper From akoumjian at gmail.com Mon Jun 15 11:59:12 2009 From: akoumjian at gmail.com (Alec Koumjian) Date: Mon, 15 Jun 2009 11:59:12 -0400 Subject: [SciPy-user] scikits.timeseries module installation In-Reply-To: References: <61a4c0ba0906150822k77f938cfq7396accdb52a505c@mail.gmail.com> Message-ID: <61a4c0ba0906150859k20d96e72v695979fcc28fbf85@mail.gmail.com> Worked great, thank you. An additional note to other Ubuntu users, do not use the numpy or scipy packages in the repository. They are still too old. On Mon, Jun 15, 2009 at 11:37 AM, Skipper Seabold wrote: > On Mon, Jun 15, 2009 at 11:22 AM, Alec Koumjian wrote: >> Hello all, >> I'm looking for a little installation support for the >> scikits.timeseries module. ?The machine is running Ubuntu Jaunty 9.04. >> ?The module appears to have successfully installed to the following >> directory: >> /usr/local/lib/python2.6/dist-packages/scikits.timeseries-0.91.1-py2.6-linux-i686.egg >> > > I can't remember whether I had to do this manually for the timeseries > scikit on Jaunty, but is the scikits directory in your python path? 
> To check you can > > import sys > sys.path > > Mine has an entry > '/usr/local/lib/python2.6/dist-packages/scikits.timeseries-0.91.1-py2.6-linux-i686.egg', > > If not, you can append to this list as a session fix, but if you want > it to always be in your path you should be able to follow the > instructions here or add this line to > your .bashrc > > export PYTHONPATH="/path/to/ts" > >> The installation was done using the setuptools setup.py file. >> However, when I attempt to do the following: >> "import scikits.timeseries as ts" >> Python can't find the module. ?I've tried manually copying the scikits >> directory to /usr/lib/python2.6/dist-packages/, but it's expecting to >> find further dependencies elsewhere. >> >> Any suggestions? >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > Skipper > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cournape at gmail.com Mon Jun 15 12:11:49 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 16 Jun 2009 01:11:49 +0900 Subject: [SciPy-user] Problem building scipy from source In-Reply-To: References: Message-ID: <5b8d13220906150911m6867b999keecf2b92bf5b8936@mail.gmail.com> On Mon, Jun 15, 2009 at 10:54 PM, Marco Lui wrote: > Hello everyone > > I have followed the instructions at > > http://scipy.org/Installing_SciPy/BuildingGeneral You need numpy 1.3.0 or above to build scipy from svn. David From nwagner at iam.uni-stuttgart.de Mon Jun 15 13:01:21 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 15 Jun 2009 19:01:21 +0200 Subject: [SciPy-user] equivalent to Matlab's interp1 Message-ID: Hi all, Is there an equivalent function to Matlab's interp1(x,y,x_new) in scipy.interpolate ? Nils From pav at iki.fi Mon Jun 15 13:47:58 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Jun 2009 17:47:58 +0000 (UTC) Subject: [SciPy-user] equivalent to Matlab's interp1 References: Message-ID: On 2009-06-15, Nils Wagner wrote: > Hi all, > > Is there an equivalent function to Matlab's > interp1(x,y,x_new) in scipy.interpolate ? Yes. >>> scipy.interpolate.interp1d([1,2,3],[4,5,6])([1.5, 2.5]) array([ 4.5, 5.5]) -- Pauli Virtanen From brian.lewis17 at gmail.com Mon Jun 15 13:52:51 2009 From: brian.lewis17 at gmail.com (Brian Lewis) Date: Mon, 15 Jun 2009 10:52:51 -0700 Subject: [SciPy-user] Dot in greater than 2D Message-ID: How is dot defined for matrices with dimension greater than 2? The docstring says: dot(a,b) Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated. which seemed a bit cryptic to me. Is there any documentation on this with an example using arrays with greater than 2 dimensions? On first read, it seemed like only the summed dimensions needed to agree. So, a.shape == (5,3,2) b.shape == (10,2,20) but I get that the objects are not aligned. On second read, it seemed like maybe dot would apply itself recursively. So maybe a.shape == (5,3,2) b.shape == (3,2,5) would work since a[:,:,i] has shape (5,3) and is compatible (via dot) with b[:,i,:] whose shape is (3,5). Not so surprisingly, the objects are also not aligned. Apologies for my naivety. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zachary.pincus at yale.edu Mon Jun 15 14:00:46 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 15 Jun 2009 14:00:46 -0400 Subject: [SciPy-user] equivalent to Matlab's interp1 In-Reply-To: References: Message-ID: <10D59CEA-2508-413F-A34A-E9EF51C73060@yale.edu> There's also numpy.interp: Definition: numpy.interp(x, xp, fp, left=None, right=None) Docstring: One-dimensional linear interpolation. Returns the one-dimensional piecewise linear interpolant to a function with given values at discrete data-points. Parameters ---------- x : array_like The x-coordinates of the interpolated values. xp : 1-D sequence of floats The x-coordinates of the data points, must be increasing. fp : 1-D sequence of floats The y-coordinates of the data points, same length as `xp`. left : float, optional Value to return for `x < xp[0]`, default is `fp[0]`. right : float, optional Value to return for `x > xp[-1]`, defaults is `fp[-1]`. Returns ------- y : {float, ndarray} The interpolated values, same shape as `x`. Raises ------ ValueError If `xp` and `fp` have different length Notes ----- Does not check that the x-coordinate sequence `xp` is increasing. If `xp` is not increasing, the results are nonsense. A simple check for increasingness is:: np.all(np.diff(xp) > 0) Examples -------- >>> xp = [1, 2, 3] >>> fp = [3, 2, 0] >>> np.interp(2.5, xp, fp) 1.0 >>> np.interp([0, 1, 1.5, 2.72, 3.14], xp, fp) array([ 3. , 3. , 2.5, 0.56, 0. ]) >>> UNDEF = -99.0 >>> np.interp(3.14, xp, fp, right=UNDEF) -99.0 Plot an interpolant to the sine function: >>> x = np.linspace(0, 2*np.pi, 10) >>> y = np.sin(x) >>> xvals = np.linspace(0, 2*np.pi, 50) >>> yinterp = np.interp(xvals, x, y) >>> import matplotlib.pyplot as plt >>> plt.plot(x, y, 'o') >>> plt.plot(xvals, yinterp, '-x') >>> plt.show() On Jun 15, 2009, at 1:47 PM, Pauli Virtanen wrote: > On 2009-06-15, Nils Wagner wrote: >> Hi all, >> >> Is there an equivalent function to Matlab's >> interp1(x,y,x_new) in scipy.interpolate ? > > Yes. > >>>> scipy.interpolate.interp1d([1,2,3],[4,5,6])([1.5, 2.5]) > array([ 4.5, 5.5]) > > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From coughlan at ski.org Mon Jun 15 13:32:00 2009 From: coughlan at ski.org (James Coughlan) Date: Mon, 15 Jun 2009 10:32:00 -0700 Subject: [SciPy-user] equivalent to Matlab's interp1 In-Reply-To: References: Message-ID: <4A368590.1020702@ski.org> Nils Wagner wrote: > Hi all, > > Is there an equivalent function to Matlab's > interp1(x,y,x_new) in scipy.interpolate ? 
>
> Nils
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

Hi,

interp1d looks very similar:
http://www.scipy.org/doc/api_docs/SciPy.interpolate.interpolate.interp1d.html

Best,
James

--
-------------------------------------------------------
James Coughlan, Ph.D., Scientist

The Smith-Kettlewell Eye Research Institute

Email: coughlan at ski.org

URL: http://www.ski.org/Rehab/Coughlan_lab/

Phone: 415-345-2146 Fax: 415-345-8455
-------------------------------------------------------

From josef.pktd at gmail.com Mon Jun 15 14:14:13 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 15 Jun 2009 14:14:13 -0400
Subject: [SciPy-user] Dot in greater than 2D
In-Reply-To: 
References: 
Message-ID: <1cd32cbb0906151114q72dba9a0mefb3e38d6a68abc5@mail.gmail.com>

On Mon, Jun 15, 2009 at 1:52 PM, Brian Lewis wrote:
> How is dot defined for matrices with dimension greater than 2? The
> docstring says:
>
>     dot(a,b)
>     Returns the dot product of a and b for arrays of floating point types.
>     Like the generic numpy equivalent the product sum is over
>     the last dimension of a and the second-to-last dimension of b.
>     NB: The first argument is not conjugated.

What dot does this refer to? A namespace would be useful information.

>>> np.dot(np.ones((5,3,2)),np.ones((10,2,20))).shape
(5, 3, 10, 20)

Josef

> which seemed a bit cryptic to me. Is there any documentation on this with
> an example using arrays with greater than 2 dimensions? On first read, it
> seemed like only the summed dimensions needed to agree. So,
>
> a.shape == (5,3,2)
> b.shape == (10,2,20)
>
> but I get that the objects are not aligned. On second read, it seemed like
> maybe dot would apply itself recursively. So maybe
>
> a.shape == (5,3,2)
> b.shape == (3,2,5)
>
> would work since a[:,:,i] has shape (5,3) and is compatible (via dot) with
> b[:,i,:] whose shape is (3,5). Not so surprisingly, the objects are also not
> aligned.
>
> Apologies for my naivety.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From pav at iki.fi Mon Jun 15 14:39:26 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Jun 2009 18:39:26 +0000 (UTC)
Subject: [SciPy-user] Dot in greater than 2D
References: 
Message-ID: 

On 2009-06-15, Brian Lewis wrote:
> How is dot defined for matrices with dimension greater than 2?
[clip]

See http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html

--
Pauli Virtanen

From jeffery.kline at gmail.com Mon Jun 15 15:35:38 2009
From: jeffery.kline at gmail.com (Jeffery Kline)
Date: Mon, 15 Jun 2009 14:35:38 -0500
Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match?
Message-ID: 

This is from an old thread about rfft versions in numpy and scipy.
(http://mail.scipy.org/pipermail/scipy-user/2009-March/020131.html)

I'm writing in support of numpy's rfft output format. In my
application, I am interested in the real portion of the rfft of an
n-dimensional array. With numpy.rfft, the syntax is elegant:

X=npfft.rfft(X, axis=j).real

With scipy.rfft, the syntax is more complicated. It requires something like

index=range(1,n,2)
index.insert(0,0)
X=spfft.rfft(X, axis=j).take(index,axis=j)

Occasionally, for debugging/testing, I recreate the full spectrum with
the resulting output from rfft. To do this with numpy's rfft is
natural and easy to read.
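For reference, here is a minimal sketch of the two output conventions
and their round trips (the length-8 array is illustrative only):

import numpy as np
import scipy.fftpack as spfft

x = np.arange(8.0)

# numpy: complex half-spectrum of length n//2 + 1,
# so the round trip is a single call each way
X_np = np.fft.rfft(x)
x_np = np.fft.irfft(X_np, n=len(x))

# scipy: length-n real array packed as
# [y(0), Re(y(1)), Im(y(1)), Re(y(2)), ...],
# so the real parts must be picked out by index, as above
X_sp = spfft.rfft(x)
x_sp = spfft.irfft(X_sp)

Both round trips recover x, but only the numpy output maps directly
onto the complex spectrum.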
The corresponding code with scipy's rfft output is more involved.
Similar statements hold with respect to the corresponding irfft
functions.

Finally, speed is an issue for me. I currently see no benefit to using
scipy over numpy -- scipy's rfft returns somewhat faster for me, but
after collecting the array elements I need, the total time cost is
larger. Possibly for sufficiently large arrays I will see a benefit,
but the cost in code development is not currently worth it for me.

--Jeff

From brian.lewis17 at gmail.com Mon Jun 15 16:43:32 2009
From: brian.lewis17 at gmail.com (Brian Lewis)
Date: Mon, 15 Jun 2009 13:43:32 -0700
Subject: [SciPy-user] Dot in greater than 2D
In-Reply-To: <1cd32cbb0906151114q72dba9a0mefb3e38d6a68abc5@mail.gmail.com>
References: <1cd32cbb0906151114q72dba9a0mefb3e38d6a68abc5@mail.gmail.com>
Message-ID: 

On Mon, Jun 15, 2009 at 11:14 AM, wrote:
> On Mon, Jun 15, 2009 at 1:52 PM, Brian Lewis
> wrote:
> > How is dot defined for matrices with dimension greater than 2? The
> > docstring says:
> >
> >     dot(a,b)
> >     Returns the dot product of a and b for arrays of floating point
> types.
> >     Like the generic numpy equivalent the product sum is over
> >     the last dimension of a and the second-to-last dimension of b.
> >     NB: The first argument is not conjugated.
>
> What dot does this refer to? A namespace would be useful information.
>
> >>> np.dot(np.ones((5,3,2)),np.ones((10,2,20))).shape
> (5, 3, 10, 20)
>

:( I used reshape() thinking it was in-place. Ok... everything makes sense.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dwf at cs.toronto.edu Mon Jun 15 16:48:52 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 15 Jun 2009 16:48:52 -0400
Subject: [SciPy-user] Serializing LSQUnivariateSpline?
Message-ID: <7561E7F7-F7E0-4B60-BD9B-830B52E8EB37@cs.toronto.edu>

I have a bit of a dilemma: I like the LSQUnivariateSpline wrapper, but
I need to serialize the fitted splines, and be able to reconstruct
them. (I'm using pytables as a backend, by the way).

(t,c,k) pairs are slightly easier to work with in this respect but
less descriptive and less elegant when actually using them.

The pickled description of even a simple one of these objects leans
towards three kilobytes, and I'm going to be working with a lot of
them.
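For comparison, the procedural (t,c,k) route serializes trivially --
a minimal sketch using scipy.interpolate's splrep/splev, where the
fitted data is illustrative only:

import numpy as np
from scipy import interpolate

x = np.linspace(0, 10, 50)
y = np.sin(x)

# knots t, B-spline coefficients c, and spline degree k
t, c, k = interpolate.splrep(x, y)

# t and c are plain 1-d float arrays, easy to store as two HDF5 arrays
y_new = interpolate.splev(np.linspace(0, 10, 200), (t, c, k))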
I'm thinking of going with an HDF5 table of all the contents of _data
as I think I can get an okay read of what sizes and types are required,
but what about deserializing? Is there some obvious way I'm missing?

Any ideas are appreciated,

David

From robert.kern at gmail.com Mon Jun 15 16:54:36 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 15 Jun 2009 15:54:36 -0500
Subject: [SciPy-user] Serializing LSQUnivariateSpline?
In-Reply-To: <7561E7F7-F7E0-4B60-BD9B-830B52E8EB37@cs.toronto.edu>
References: <7561E7F7-F7E0-4B60-BD9B-830B52E8EB37@cs.toronto.edu>
Message-ID: <3d375d730906151354w2fa22805q4267e30ab5704209@mail.gmail.com>

On Mon, Jun 15, 2009 at 15:48, David Warde-Farley wrote:
> I have a bit of a dilemma: I like the LSQUnivariateSpline wrapper, but
> I need to serialize the fitted splines, and be able to reconstruct
> them. (I'm using pytables as a backend, by the way).
>
> (t,c,k) pairs are slightly easier to work with in this respect but
> less descriptive and less elegant when actually using them.
>
> The pickled description of even a simple one of these objects leans
> towards three kilobytes, and I'm going to be working with a lot of
> them. I'm thinking of going with an HDF5 table of all the contents of
> _data as I think I can get an okay read of what sizes and types are
> required, but what about deserializing? Is there some obvious way I'm
> missing?

def deserialize(data):
    self = LSQUnivariateSpline.__new__(LSQUnivariateSpline)
    self._data = data
    self._reset_class()
    return self

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From oliphant at enthought.com Mon Jun 15 16:31:16 2009
From: oliphant at enthought.com (Travis Oliphant)
Date: Mon, 15 Jun 2009 15:31:16 -0500
Subject: [SciPy-user] Join us for the 2nd Scientific Computing with Python Webinar
Message-ID: <9806A7FA-C9F7-4491-8AE1-514DC13851F8@enthought.com>

Hello all Python users:

I am pleased to announce the second installment of a free Webinar
series that discusses using Python for scientific computing. Enthought
hosts this free series which takes place once a month for about 60-90
minutes. The schedule and length may change based on participation
feedback, but for now it is scheduled for the third Friday of every
month. This free webinar should not be confused with the EPD webinar
on the first Friday of each month which is open only to subscribers to
the Enthought Python Distribution at the Basic level or above.

This session's speakers will be me (Travis Oliphant) and Peter Wang.
I will show off a bit of EPDLab which is an interactive Python
environment built using IPython, Traits, and Envisage. Peter Wang will
present a demo of Chaco and provide some examples of interactive
visualizations that can be easily constructed using its classes. If
there is time after the Chaco demo, I will continue the discussion
about Mayavi, but I suspect this will have to wait until the next
session. All of the tools we will show are open-source,
freely-available tools from multiple sources. They can all be
conveniently installed using the Enthought Python Distribution.

This event will take place on Friday, June 19th at 1:00pm CDT and will
last 60 to 90 minutes depending on the questions asked. If you would
like to participate, please register by clicking on the link below or
going to https://www1.gotomeeting.com/register/303689873. There will
be a 15 minute technical help-session prior to the on-line meeting
which you should plan to use if you have never participated in a
GoToWebinar previously. During this time you can test your connection
and audio equipment as well as familiarize yourself with the GoTo
Meeting software (which currently only works with Mac and Windows
systems).

I am looking forward to interacting with many of you again this Friday.

Best regards,

Travis Oliphant
Enthought, Inc.

Enthought is the company that sponsored the creation of SciPy and the
Enthought Tool Suite. It continues to sponsor the SciPy community by
hosting the SciPy mailing list and website and participating in the
development of SciPy and NumPy. Enthought creates custom scientific
and technical software applications and provides training on using
Python for technical computing. Enthought also provides the Enthought
Python Distribution. Learn more at http://www.enthought.com

Bios for Travis Oliphant and Peter Wang can be read at
http://www.enthought.com/company/executive-team.php

--
Travis Oliphant
Enthought Inc.
1-512-536-1057
http://www.enthought.com
oliphant at enthought.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brian.lewis17 at gmail.com Mon Jun 15 18:21:59 2009
From: brian.lewis17 at gmail.com (Brian Lewis)
Date: Mon, 15 Jun 2009 15:21:59 -0700
Subject: [SciPy-user] logexpdot
Message-ID: 

Hi all,

This is my first foray into np.newaxis. I needed a dot product for
logarithm arrays and since I am always working with matrices, I just
did 2D. I'm looking for comments and suggestions for what I've
implemented. Also, is it easy to generalize this beyond 2D?

import numpy as np

def logdotexp2__1(x, y, out=None):
    # horrible
    r, c = x.shape[0], y.shape[1]
    if out is None:
        out = np.empty((r, c))
    for i in xrange(r):
        for j in xrange(c):
            out[i, j] = np.logaddexp2.reduce(x[i, :] + y[:, j])
    return out

def logdotexp2__2(x, y, out=None):
    # less horrible
    r, c = x.shape[0], y.shape[1]
    if out is None:
        out = np.empty((r, c))
    for i in xrange(r):
        np.logaddexp2.reduce(x[i, :, np.newaxis] + y, out=out[i, :])
    return out

def logdotexp2__3(x, y, out=None):
    # good?
    r, c = x.shape[0], y.shape[1]
    if out is None:
        out = np.empty((r, c))
    np.logaddexp2.reduce(x[:, :, np.newaxis] + y[np.newaxis, :, :],
                         axis=1, out=out)
    return out

if __name__ == '__main__':
    k = 10
    a = np.arange(k**2).reshape((k, k))
    b = np.arange(k**2, 2*k**2).reshape((k, k))
    # timeit logdotexp2__1(a,b)
    # timeit logdotexp2__2(a,b)
    # timeit logdotexp2__3(a,b)

Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From saffsd at gmail.com Mon Jun 15 21:09:14 2009
From: saffsd at gmail.com (Marco Lui)
Date: Tue, 16 Jun 2009 11:09:14 +1000
Subject: [SciPy-user] Problem building scipy from source
In-Reply-To: <5b8d13220906150911m6867b999keecf2b92bf5b8936@mail.gmail.com>
References: <5b8d13220906150911m6867b999keecf2b92bf5b8936@mail.gmail.com>
Message-ID: 

> >
> > Hello everyone
> >
> > I have followed the instructions at
> >
> > http://scipy.org/Installing_SciPy/BuildingGeneral
>
> You need numpy 1.3.0 or above to build scipy from svn.
>

Thanks. Turns out this was all I needed. Installed numpy SVN as well
and all is well now.

Cheers
Marco

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Dharhas.Pothina at twdb.state.tx.us Tue Jun 16 08:13:33 2009
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Tue, 16 Jun 2009 07:13:33 -0500
Subject: [SciPy-user] Some help on pythonic design of time series dataprocessing application
In-Reply-To: <4A33CA82.5060901@gmail.com>
References: <4A3270FC.63BA.009B.0@twdb.state.tx.us> <4A33CA82.5060901@gmail.com>
Message-ID: <4A37461D.63BA.009B.0@twdb.state.tx.us>

Thank you, I'll have a look at your code and will probably be in touch
later with questions.

- dharhas

> I have done something very similar to what you describe for the analysis
> of single molecule force spectroscopy data. Have a look at it:
> http://code.google.com/p/hooke
>
> and let me know if I can be of help.
>
> cheers,
> M.

From devicerandom at gmail.com Tue Jun 16 08:45:15 2009
From: devicerandom at gmail.com (ms)
Date: Tue, 16 Jun 2009 13:45:15 +0100
Subject: [SciPy-user] Some help on pythonic design of time series dataprocessing application
In-Reply-To: <4A37461D.63BA.009B.0@twdb.state.tx.us>
References: <4A3270FC.63BA.009B.0@twdb.state.tx.us> <4A33CA82.5060901@gmail.com> <4A37461D.63BA.009B.0@twdb.state.tx.us>
Message-ID: <4A3793DB.7010700@gmail.com>

Dharhas Pothina wrote:
> Thank you, I'll have a look at your code and will probably be in touch
> later with questions.
>
There is a bit of documentation on the website on how the plugin
system has been designed. It is far from perfect but at least could
give some ideas.

m.

From hettling at few.vu.nl Tue Jun 16 09:01:21 2009
From: hettling at few.vu.nl (hettling)
Date: Tue, 16 Jun 2009 15:01:21 +0200
Subject: [SciPy-user] scipy installation on scientific linux
Message-ID: <1245157281.11183.52.camel@killer2000>

Dear all,

I am trying to install scipy on a cluster running scientific linux
(x86_64) from the "ashigabou" repository. I followed the installation
instructions on the official scipy site: I downloaded the
ashigabou.repo file and stored it into /etc/yum.repos.d .

If I try to install, the packages can't be found:

$ yum install python-numpy python-scipy

gives me the following error:

Parsing package install arguments
No Match for argument: python-numpy
No Match for argument: python-scipy

although the repository seems to be found:

$ yum list available | grep ashigabou
refblas3.x86_64 3.0-11.1 home_ashigabou
refblas3.i586 3.0-11.1 home_ashigabou
refblas3-devel.i586 3.0-11.1 home_ashigabou
refblas3-devel.x86_64 3.0-11.1 home_ashigabou
refblas3-test.x86_64 3.0-13.1 home_ashigabou
refblas3-test.i586 3.0-13.1 home_ashigabou

But the only libraries found are the "refblas" libraries; no atlas, no
scipy, no numpy... Maybe this is less a scipy question than a question
about the package manager, though I hope someone can help me...

Thanks in advance,

Hannes

From dwf at cs.toronto.edu Tue Jun 16 13:58:08 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 16 Jun 2009 13:58:08 -0400
Subject: [SciPy-user] Serializing LSQUnivariateSpline?
In-Reply-To: <3d375d730906151354w2fa22805q4267e30ab5704209@mail.gmail.com>
References: <7561E7F7-F7E0-4B60-BD9B-830B52E8EB37@cs.toronto.edu> <3d375d730906151354w2fa22805q4267e30ab5704209@mail.gmail.com>
Message-ID: 

On 15-Jun-09, at 4:54 PM, Robert Kern wrote:
>
> def deserialize(data):
>     self = LSQUnivariateSpline.__new__(LSQUnivariateSpline)
>     self._data = data
>     self._reset_class()
>     return self

Ah. Somehow I wasn't aware of the __new__ magic (I was doing something
clumsy by resetting __class__ manually) and without _reset_class() the
whole thing fails.

Thanks.

David

From stef.mientki at gmail.com Tue Jun 16 14:27:12 2009
From: stef.mientki at gmail.com (Stef Mientki)
Date: Tue, 16 Jun 2009 20:27:12 +0200
Subject: [SciPy-user] [ANN] first full alpha release of PyLab_Works v0.3
Message-ID: <4A37E400.4080305@gmail.com>

hello,

I am pleased to announce the first full alpha release of PyLab_Works,
v0.3.

PyLab_Works is a modular Visual Development Environment, based on
data-flow programming techniques. PyLab_Works is specially aimed at
Education, Engineering and Science. The ideas behind PyLab_Works are
that the final user should not be burdened with programming details
and domain details, whereas the domain expert should be able to
implement the specific domain knowledge without being a fully educated
programmer.

You can always find my notes on PyLab_Works on http://pic.flappie.nl
Most of these pages are also collected in a single pdf document, which
can be found here:
http://pylab-works.googlecode.com/files/pw_manual.pdf

The source code and a one-button-Windows-Installer can be found on
Google Code:
http://code.google.com/p/pylab-works/
The files are rather large, because they contain some data samples.
The Windows-Installer contains everything you need to get started with
PyLab_Works: ConfigObj, gprof2dot, HTTPlib, MatPlotLib, Numpy, Pickle,
Psyco, pyclbr, PyGame, PyLab_Works, PyODBC, Python, RLCompleter, Scipy,
Sendkeys, SQLite3, SQLObject, URLparse, wave, Visual, win32*, wxPython.
Although the PyLab_Works programs are compiled with Py2Exe, all the
source files are explicitly included.

have fun,
Stef Mientki

From aisaac at american.edu Tue Jun 16 16:39:21 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 16 Jun 2009 16:39:21 -0400
Subject: [SciPy-user] [ANN] first full alpha release of PyLab_Works v0.3
In-Reply-To: <4A37E400.4080305@gmail.com>
References: <4A37E400.4080305@gmail.com>
Message-ID: <4A3802F9.9010800@american.edu>

On 6/16/2009 2:27 PM Stef Mientki apparently wrote:
> http://code.google.com/p/pylab-works/

It's worth mentioning the license: New BSD.

Cheers,
Alan Isaac

From jturner at gemini.edu Tue Jun 16 21:02:23 2009
From: jturner at gemini.edu (James Turner)
Date: Tue, 16 Jun 2009 21:02:23 -0400
Subject: [SciPy-user] JOB: Data Process Developer at the Gemini Observatory in Hawaii
Message-ID: <4A38409F.2010207@gemini.edu>

I thought some of you might be interested in the opening we have for
developing scientific Python software in Hawaii:

http://www.gemini.edu/jobs#41
http://members.aas.org/JobReg/JobDetailPage.cfm?JobID=25672

Note that applications should be submitted as outlined at the above
links and not sent to me personally. Sorry I haven't been active here
recently; I'm finding it hard to keep up with the list traffic these
days.

Thanks,

James.

From Dharhas.Pothina at twdb.state.tx.us Wed Jun 17 11:43:45 2009
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 17 Jun 2009 10:43:45 -0500
Subject: [SciPy-user] scikits.timeseries : masking data after it has been read in using tsfromtxt
Message-ID: <4A38C8E1.63BA.009B.0@twdb.state.tx.us>

Hi,

I'm reading in a timeseries using :

SB1S = ts.tsfromtxt('SB1S.txt',freq='T',comments='#',dateconverter=dateconverter,datecols=(0,1,2,3,4),usecols
=(0,1,2,3,4,7,9,8),names='Y,M,D,hh,mm,Temperature,Salinity,WaterLevel')

so SB1S looks like :

In [93]: SB1S
Out[93]:
timeseries([(18.710000000000001, -10000000000.0, 0.27100000000000002)
(18.710000000000001, -10000000000.0, 0.27100000000000002)
(18.920000000000002, -10000000000.0, 0.25900000000000001) ...,
(21.379999999999999, 1.8200000000000001, 0.041000000000000002)
(21.399999999999999, 1.79, 0.040000000000000001)
(21.5, 1.77, 0.035000000000000003)],
dtype = [('Temperature', '<f8'), ('Salinity', '<f8'), ('WaterLevel', '<f8')])

I want to perform some extra masking, ie mask all salinities below 0
and above 100, mask all temperatures = -999 etc.

- dharhas

From fredmfp at gmail.com (fred)
Subject: [SciPy-user] prime factorization...
Message-ID: <4A3915D6.50106@gmail.com>

Hi all,

Is there any function in scipy that gives the prime factorization?

ie 50 -> [5, 5, 2]?

TIA

Cheers,

--
Fred

From dwf at cs.toronto.edu Wed Jun 17 12:39:48 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 17 Jun 2009 12:39:48 -0400
Subject: [SciPy-user] prime factorization...
In-Reply-To: <4A3915D6.50106@gmail.com>
References: <4A3915D6.50106@gmail.com>
Message-ID: 

I don't think so, you should check in the documentation for Sage
though, as it's frequently used by number theorists/cryptographers.

http://sagemath.org

David

On 17-Jun-09, at 12:12 PM, fred wrote:
> Hi all,
>
> Is there any function in scipy that gives the prime factorization?
>
> ie 50 -> [5, 5, 2]?
>
> TIA
>
> Cheers,
>
> --
> Fred
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From jh at physics.ucf.edu Wed Jun 17 13:39:01 2009
From: jh at physics.ucf.edu (Joe Harrington)
Date: Wed, 17 Jun 2009 13:39:01 -0400
Subject: [SciPy-user] prime factorization...
In-Reply-To: (scipy-user-request@scipy.org)
References: 
Message-ID: 

fred :
>Is there any function in scipy that gives the prime factorization?

>ie 50 -> [5, 5, 2]?

There may be a solution in sympy:

http://mail.python.org/pipermail/python-list/2008-March/653138.html

There is also a linux 'factor' program that does numbers up to around 1.8e19:

factor 18440000000000000000
18440000000000000000: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 461

If you're looking for a quick-and-dirty solution, that could be
wrapped. The code is likely GPL, though, so not good for integrating
into numpy/scipy. I'm sure there must be other code out there that
isn't, nor are the algorithms patented.

I've wanted this capability in numpy/scipy for a while. Factoring
seems like a basic thing you'd want, but whenever I suggest adding a
little basic thing, people try to pile on a lot of not-basic stuff
that then makes it inconceivable to add anything.

--jh--

From fredmfp at gmail.com Wed Jun 17 13:45:37 2009
From: fredmfp at gmail.com (fred)
Date: Wed, 17 Jun 2009 19:45:37 +0200
Subject: [SciPy-user] prime factorization...
In-Reply-To: 
References: 
Message-ID: <4A392BC1.4030305@gmail.com>

Joe Harrington wrote:
> fred :
>> Is there any function in scipy that gives the prime factorization?
>
>> ie 50 -> [5, 5, 2]?
>
> There may be a solution in sympy:
>
> http://mail.python.org/pipermail/python-list/2008-March/653138.html
Perfect!

(I have installed sympy and don't plan to install sage only to compute
prime factorization, David; thanks anyway).
>
> There is also a linux 'factor' program that does numbers up to around 1.8e19:
Arg, I forgot this one!

Thanks a lot!

Cheers,

--
Fred

From dwf at cs.toronto.edu Wed Jun 17 13:56:34 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 17 Jun 2009 13:56:34 -0400
Subject: [SciPy-user] prime factorization...
In-Reply-To: <4A392BC1.4030305@gmail.com>
References: <4A392BC1.4030305@gmail.com>
Message-ID: 

On 17-Jun-09, at 1:45 PM, fred wrote:
> (I have installed sympy and don't plan to install sage only to compute
> prime factorization, David; thanks anyway).

Of course not :) Though they do make their whole code repository
browsable, so you can just take what you need:

http://hg.sagemath.org/

David

From pgmdevlist at gmail.com Wed Jun 17 14:44:47 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 17 Jun 2009 14:44:47 -0400
Subject: [SciPy-user] scikits.timeseries : masking data after it has been read in using tsfromtxt
In-Reply-To: <4A38C8E1.63BA.009B.0@twdb.state.tx.us>
References: <4A38C8E1.63BA.009B.0@twdb.state.tx.us>
Message-ID: <74ACF7F1-B521-45B4-9EC2-FE764917696A@gmail.com>

On Jun 17, 2009, at 11:43 AM, Dharhas Pothina wrote:
>
> SB1S = ts.tsfromtxt('SB1S.txt', freq='T', comments='#',
> dateconverter=dateconverter, datecols=(0,1,2,3,4),
> usecols=(0,1,2,3,4,7,9,8),
> names='Y,M,D,hh,mm,Temperature,Salinity,WaterLevel')
>
> I want to perform some extra masking,
ie mask all salinities below 0
> and above 100, mask all temperatures = -999 etc

Dharhas,
Here's an example:

>>> ndtype = [('T', '<f8'), ('S', '<f8')]
>>> series = ts.time_series(zip(np.arange(12),np.arange(12)),
>>>                         dtype=ndtype,
>>>                         start_date=ts.now('M'))
>>> print series
>>> # Mask all the records for which 'T' < 5
>>> series[series['T']<5] = ma.masked
>>> print series
>>> # Mask field 'S' if 'S' is even
>>> s = series['S']
>>> s[s%2==0] = ma.masked
>>> print series

As you can see, you can mask whole records or individual fields,
depending on what you want.

From timmichelsen at gmx-topmail.de Wed Jun 17 15:39:52 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Wed, 17 Jun 2009 21:39:52 +0200
Subject: [SciPy-user] [ANN] first full alpha release of PyLab_Works v0.3
In-Reply-To: <4A3802F9.9010800@american.edu>
References: <4A37E400.4080305@gmail.com> <4A3802F9.9010800@american.edu>
Message-ID: 

have you seen this:
http://code.google.com/p/pydee/

there might be things that overlap or invite for cooperation.

From Dharhas.Pothina at twdb.state.tx.us Wed Jun 17 16:20:35 2009
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 17 Jun 2009 15:20:35 -0500
Subject: [SciPy-user] scikits.timeseries : masking data after it hasbeen read in using tsfromtxt
In-Reply-To: <74ACF7F1-B521-45B4-9EC2-FE764917696A@gmail.com>
References: <4A38C8E1.63BA.009B.0@twdb.state.tx.us> <74ACF7F1-B521-45B4-9EC2-FE764917696A@gmail.com>
Message-ID: <4A3909C3.63BA.009B.0@twdb.state.tx.us>

That worked great. Thanks

- dharhas

>>> Pierre GM 6/17/2009 1:44 PM >>>
On Jun 17, 2009, at 11:43 AM, Dharhas Pothina wrote:
>
> SB1S = ts.tsfromtxt('SB1S.txt', freq='T', comments='#',
> dateconverter=dateconverter, datecols=(0,1,2,3,4),
> usecols=(0,1,2,3,4,7,9,8),
> names='Y,M,D,hh,mm,Temperature,Salinity,WaterLevel')
>
> I want to perform some extra masking. ie mask all salinities below 0
> and above 100, mask all temperatures = -999 etc

Dharhas,
Here's an example:

>>> ndtype = [('T', '<f8'), ('S', '<f8')]
>>> series = ts.time_series(zip(np.arange(12),np.arange(12)),
>>>                         dtype=ndtype,
>>>                         start_date=ts.now('M'))
>>> print series
>>> # Mask all the records for which 'T' < 5
>>> series[series['T']<5] = ma.masked
>>> print series
>>> # Mask field 'S' if 'S' is even
>>> s = series['S']
>>> s[s%2==0] = ma.masked
>>> print series

As you can see, you can mask whole records or individual fields,
depending on what you want.

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From stef.mientki at gmail.com Wed Jun 17 19:21:37 2009
From: stef.mientki at gmail.com (Stef Mientki)
Date: Thu, 18 Jun 2009 01:21:37 +0200
Subject: [SciPy-user] [ANN] first full alpha release of PyLab_Works v0.3
In-Reply-To: 
References: <4A37E400.4080305@gmail.com> <4A3802F9.9010800@american.edu>
Message-ID: <4A397A81.4010008@gmail.com>

Tim Michelsen wrote:
> have you seen this:
> http://code.google.com/p/pydee/
>
> there might be things that overlap or invite for cooperation.
>
Yes, I think python(x,y) is one of the best python packages to replace
MatLab. If I had known about it when I started PyLab_Works, I probably
would never have created PyLab_Works ;-)

There are certainly similarities:
we use the same basic packages: numpy, scipy, etc.
we use the same concepts: concentrate on the job instead of the
programming details

But there are a few differences:
we use a different GUI: PyQt versus wxPython
and the largest problem would be the PyQt license :-(

But if there are possibilities to cooperate, I'd love to.

cheers,
Stef

> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From jsseabold at gmail.com Wed Jun 17 21:43:26 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Wed, 17 Jun 2009 21:43:26 -0400
Subject: [SciPy-user] prime factorization...
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 17, 2009 at 1:39 PM, Joe Harrington wrote:
> fred :
>>Is there any function in scipy that gives the prime factorization?
>
>>ie 50 -> [5, 5, 2]?
>
> There may be a solution in sympy:
>
> http://mail.python.org/pipermail/python-list/2008-March/653138.html
>
> There is also a linux 'factor' program that does numbers up to around 1.8e19:
>
> factor 18440000000000000000
> 18440000000000000000: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 461
>
> If you're looking for a quick-and-dirty solution, that could be
> wrapped. The code is likely GPL, though, so not good for integrating
> into numpy/scipy. I'm sure there must be other code out there that
> isn't, nor are the algorithms patented.
>
> I've wanted this capability in numpy/scipy for a while. Factoring
> seems like a basic thing you'd want, but whenever I suggest adding a
> little basic thing, people try to pile on a lot of not-basic stuff
> that then makes it inconceivable to add anything.
>
> --jh--

I was curious about this and came across Msieve which is (the only
one?) in the public domain.

http://www.boo.net/~jasonp/qs.html

From mark.mahabir at gmail.com Thu Jun 18 07:00:07 2009
From: mark.mahabir at gmail.com (Mark Mahabir)
Date: Thu, 18 Jun 2009 12:00:07 +0100
Subject: [SciPy-user] build trouble
Message-ID: 

I'm trying to build SciPy on a Scientific Linux system running 4.7
with Python 2.5.1 (installed separately from the system default Python
2.3.4) unsuccessfully thus far.

I've attached an install.log file. Every other package installed OK as
far as I know, including ATLAS and NumPy.

I have little to no experience of Python and SciPy so I would
appreciate any pointers!

Many thanks in advance,

Mark

-------------- next part --------------
A non-text attachment was scrubbed...
Name: install.log
Type: application/octet-stream
Size: 17020 bytes
Desc: not available
URL: 

From david at ar.media.kyoto-u.ac.jp Thu Jun 18 06:56:15 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 18 Jun 2009 19:56:15 +0900
Subject: [SciPy-user] build trouble
In-Reply-To: 
References: 
Message-ID: <4A3A1D4F.3040109@ar.media.kyoto-u.ac.jp>

Mark Mahabir wrote:
> I'm trying to build SciPy on a Scientific Linux system running 4.7
> with Python 2.5.1 (installed separately from the system default Python
> 2.3.4) unsuccessfully thus far.
>
> I've attached an install.log file. Every other package installed OK as
> far as I know, including ATLAS and NumPy.
>

For some reason, the Include path of python relative to the python
*sources* is included (the /usr/local/Python-2.5.1/Include), and that
should never happen. Are you executing the installed python (in
/usr/local/bin it seems), or the built python in the python source
tree? If the latter, that's where the error is coming from. Otherwise,
we would need more information I think (the exact steps to install
python, etc...).

Doesn't Scientific Linux have rpms for python 2.4 at least? I strongly
advise using the system python if you want to avoid trouble,

David

From mark.mahabir at gmail.com Thu Jun 18 07:19:07 2009
From: mark.mahabir at gmail.com (Mark Mahabir)
Date: Thu, 18 Jun 2009 12:19:07 +0100
Subject: [SciPy-user] build trouble
In-Reply-To: <4A3A1D4F.3040109@ar.media.kyoto-u.ac.jp>
References: <4A3A1D4F.3040109@ar.media.kyoto-u.ac.jp>
Message-ID: 

2009/6/18 David Cournapeau :
> Mark Mahabir wrote:
>> I'm trying to build SciPy on a Scientific Linux system running 4.7
>> with Python 2.5.1 (installed separately from the system default Python
>> 2.3.4) unsuccessfully thus far.
>>
>> I've attached an install.log file. Every other package installed OK as
>> far as I know, including ATLAS and NumPy.
>>
>
> For some reason, the Include path of python relative to the python
> *sources* is included (the /usr/local/Python-2.5.1/Include), and that
> should never happen. Are you executing the installed python (in
> /usr/local/bin it seems), or the built python in the python source
> tree? If the latter, that's where the error is coming from. Otherwise,
> we would need more information I think (the exact steps to install
> python, etc...).

I'm doing an

alias python /usr/local/Python-2.5.1/python

and then running that version.

> Doesn't Scientific Linux have rpms for python 2.4 at least? I strongly
> advise using the system python if you want to avoid trouble,

It probably does, but my users need a version greater than 2.5,
unfortunately. It'll be a month or two away before I can migrate those
users to a newer operating system.

Mark

From david at ar.media.kyoto-u.ac.jp Thu Jun 18 07:07:15 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 18 Jun 2009 20:07:15 +0900
Subject: [SciPy-user] build trouble
In-Reply-To: 
References: <4A3A1D4F.3040109@ar.media.kyoto-u.ac.jp>
Message-ID: <4A3A1FE3.2030909@ar.media.kyoto-u.ac.jp>

Mark Mahabir wrote:
>
> I'm doing an
>
> alias python /usr/local/Python-2.5.1/python
>
> and then running that version.
>

That's the problem, right there: you should install python, not run it
from its source directory. I am quite baffled that installing numpy
worked, but that certainly explains your build error.

David

From mark.mahabir at gmail.com Thu Jun 18 07:40:54 2009
From: mark.mahabir at gmail.com (Mark Mahabir)
Date: Thu, 18 Jun 2009 12:40:54 +0100
Subject: [SciPy-user] build trouble
In-Reply-To: <4A3A1FE3.2030909@ar.media.kyoto-u.ac.jp>
References: <4A3A1D4F.3040109@ar.media.kyoto-u.ac.jp> <4A3A1FE3.2030909@ar.media.kyoto-u.ac.jp>
Message-ID: 

2009/6/18 David Cournapeau :
> Mark Mahabir wrote:
>>
>> I'm doing an
>>
>> alias python /usr/local/Python-2.5.1/python
>>
>> and then running that version.
>>
>
> That's the problem, right there: you should install python, not run it
> from its source directory. I am quite baffled that installing numpy
> worked, but that certainly explains your build error.

Thank you for your help, that works now. I guess I was just being
over-cautious about screwing something else up.
Mark

From gael.varoquaux at normalesup.org Fri Jun 19 10:01:58 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 19 Jun 2009 16:01:58 +0200
Subject: [SciPy-user] [ANN] SciPy 2009 conference opened up for registration
Message-ID: <20090619140158.GA12012@phare.normalesup.org>

We are finally opening the registration for the SciPy 2009 conference.
It took us time, but the reason is that we made careful budget
estimations to bring the registration cost down. We are very happy to
announce that this year registration to the conference will be only
$150, sprints $100, and students get half price! We made this effort
because we hope it will open up the conference to more people,
especially students, who often have to finance the trip on a small
budget. As a consequence, however, catering at noon is not included.

This does not mean that we are getting a reduced conference. Quite the
contrary: this year we have two keynote speakers. And what speakers:
Peter Norvig and Jon Guyer! Peter Norvig is the director of research
at Google and Jon Guyer is a research scientist at NIST, in the
Thermodynamics and Kinetics Group, where he leads FiPy, a finite
volume PDE solver written in Python.

The SciPy 2009 Conference
==========================

SciPy 2009, the 8th Python in Science conference
(http://conference.scipy.org), will be held from August 18-23, 2009 at
Caltech in Pasadena, CA, USA.

Each year SciPy attracts leading figures in research and scientific
software development with Python from a wide range of scientific and
engineering disciplines. The focus of the conference is both on
scientific libraries and tools developed with Python and on scientific
or engineering achievements using Python.

Call for Papers
================

We welcome contributions from industry as well as the academic world.
Indeed, industrial research and development as well as academic
research face the challenge of mastering IT tools for exploration,
modeling and analysis. We look forward to hearing your recent
breakthroughs using Python!

Please read the full call for papers
(http://conference.scipy.org/call_for_papers).

Important Dates
================

* Friday, June 26: Abstracts Due
* Saturday, July 4: Announce accepted talks, post schedule
* Friday, July 10: Early Registration ends
* Tuesday-Wednesday, August 18-19: Tutorials
* Thursday-Friday, August 20-21: Conference
* Saturday-Sunday, August 22-23: Sprints
* Friday, September 4: Papers for proceedings due

The SciPy 2009 executive committee
-----------------------------------

* Jarrod Millman, UC Berkeley, USA (Conference Chair)
* Gaël Varoquaux, INRIA Saclay, France (Program Co-Chair)
* Stéfan van der Walt, University of Stellenbosch, South Africa
  (Program Co-Chair)
* Fernando Pérez, UC Berkeley, USA (Tutorial Chair)

From gael.varoquaux at normalesup.org Fri Jun 19 12:15:16 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 19 Jun 2009 18:15:16 +0200
Subject: [SciPy-user] [Correction] Re: [ANN] SciPy 2009 conference opened up for registration
In-Reply-To: <20090619140158.GA12012@phare.normalesup.org>
References: <20090619140158.GA12012@phare.normalesup.org>
Message-ID: <20090619161516.GB16549@phare.normalesup.org>

Please excuse me for incorrect information in my announcement:

On Fri, Jun 19, 2009 at 04:01:58PM +0200, Gael Varoquaux wrote:
> We are very happy to announce that this year registration to the
> conference will be only $150, sprints $100, and students get half price!
This should read that the tutorials are $100, not the sprints. The
sprints are actually free, of course. We will be very pleased to see
as many people as possible willing to participate in the sprints and
help the SciPy ecosystem thrive.

Thanks to Travis Oliphant for pointing out the typo.

Gaël Varoquaux

From textdirected at gmail.com Fri Jun 19 23:40:47 2009
From: textdirected at gmail.com (HEMMI, Shigeru)
Date: Sat, 20 Jun 2009 12:40:47 +0900
Subject: [SciPy-user] prime factorization...
In-Reply-To: <4A3915D6.50106@gmail.com>
References: <4A3915D6.50106@gmail.com>
Message-ID:

2009/6/18 fred
> Hi all,
>
> Is there any function in scipy that gives the prime factorization?
>
> ie 50 -> [5, 5, 2]?
>
> TIA
>
> Cheers,
>
> --
> Fred
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

Let me tell you there is another solution: NZMATH
http://tnt.math.metro-u.ac.jp/nzmath/index.html which is written in
Python.

>>> import nzmath.factor.methods as methods
>>> dir(methods)
['DefaultMethod', 'EllipticCurveMethod', 'MPQSMethod', 'PMinusOneMethod',
'RhoMethod', 'TrialDivision', '__builtins__', '__doc__', '__file__',
'__name__', 'arith1', 'bigrange', 'ecm', 'ecmfind', 'factor', 'find',
'mpqs', 'mpqsfind', 'pmom', 'prime', 'rhomethod', 'trialDivision', 'util']
>>> methods.factor(50)
[(2, 1), (5, 2)]
>>> methods.factor(18440000000000000000)
[(2, 18), (5, 16), (461L, 1)]

Regards,

From patrickmarshwx at gmail.com Sat Jun 20 00:54:44 2009
From: patrickmarshwx at gmail.com (Patrick Marsh)
Date: Fri, 19 Jun 2009 23:54:44 -0500
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
Message-ID:

Greetings,

I have run into a problem where I can't build Scipy using the Intel
Fortran compilers and the Intel Math Kernel. I have (at least I
believe so) set up my environment as is directed by Intel, but I can't
seem to get it to work. The install routine finds the ifort
executable, but then fails later. I've spent many hours banging my
head against this, so any help is appreciated. If I can't get this to
work, I guess I'll try to install by building ATLAS and LAPACK myself.

I should also mention that f2py has the same problem. Namely, the
script finds the executable but then tells me it isn't available to
use.
I've included the end of the output of this command used for installing:

python setup.py config --compiler=intele --fcompiler=intele
build_clib --compiler=intele --fcompiler=intele build_ext
--compiler=intele --fcompiler=intele install

***********snip*************

building extension "scipy.ndimage._nd_image" sources
building extension "scipy.stsci.convolve._correlate" sources
building extension "scipy.stsci.convolve._lineshape" sources
building extension "scipy.stsci.image._combine" sources
building data_files sources
Found executable /opt/intel/Compiler/11.0/074/bin/ia64/icc
Could not locate executable ecc
customize IntelItaniumCCompiler
customize IntelItaniumCCompiler using build_clib
customize IntelItaniumFCompiler
Found executable /opt/intel/Compiler/11.0/074/bin/ia64/ifort
Found executable /usr/bin/gfortran
customize IntelItaniumFCompiler using build_clib
running build_ext
customize IntelItaniumCCompiler
customize IntelItaniumCCompiler using build_ext
extending extension 'scipy.sparse.linalg.dsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.sparse.linalg.dsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.sparse.linalg.dsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.sparse.linalg.dsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
customize IntelItaniumCCompiler
customize IntelItaniumCCompiler using build_ext
customize IntelItaniumFCompiler
warning: build_ext: f77_compiler=intele is not available.
building 'scipy.lib.lapack.calc_lwork' extension
error: extension 'scipy.lib.lapack.calc_lwork' has Fortran sources but no Fortran compiler found

Thanks for any possible help.

Patrick

---
Patrick Marsh
Graduate Research Assistant
School of Meteorology
University of Oklahoma
http://www.patricktmarsh.com

From cournape at gmail.com Sat Jun 20 01:17:13 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 20 Jun 2009 14:17:13 +0900
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
In-Reply-To:
References:
Message-ID: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>

On Sat, Jun 20, 2009 at 1:54 PM, Patrick Marsh wrote:
> Greetings,
>
> I have run into a problem where I can't build Scipy using the Intel
> Fortran compilers and the Intel Math Kernel. I have (at least I
> believe so) set up my environment as is directed by Intel, but I can't
> seem to get it to work. The install routine finds the ifort
> executable, but then fails later. I've spent many hours banging my
> head against this, so any help is appreciated.

Unfortunately, Itanium is not a well tested platform, since not many
developers have access to one. From your build log, the problem is not
MKL, but the fortran compiler not being properly set up. But this is
hard to debug without being able to run the build by ourselves,
unfortunately.

From the log, I would first look at the fcompiler customization in the
build_ext command.
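(A quick sanity check here, if I remember the option correctly, is to ask
numpy.distutils directly which Fortran compilers it thinks are usable:

python setup.py config_fc --help-fcompiler

That should list the detected compilers and their versions.)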
In the installed numpy, in numpy/distutils/command/build_ext, there is
the following code in the run method:

        if need_f77_compiler:
            ctype = self.fcompiler
            self._f77_compiler = new_fcompiler(compiler=self.fcompiler,
                                               verbose=self.verbose,
                                               dry_run=self.dry_run,
                                               force=self.force,
                                               requiref90=False,
                                               c_compiler=self.compiler)
            fcompiler = self._f77_compiler
            if fcompiler:
                ctype = fcompiler.compiler_type
                fcompiler.customize(self.distribution)
            if fcompiler and fcompiler.get_version():
                fcompiler.customize_cmd(self)
                fcompiler.show_customization()
            else:
                self.warn('f77_compiler=%s is not available.' %
                          (ctype))

Could you add a print fcompiler, fcompiler.get_version() just before
the "if fcompiler and fcompiler.get_version()"? This should confirm
whether it fails where I think it fails (i.e. the get_version)

cheers,

David

From patrickmarshwx at gmail.com Sat Jun 20 11:02:43 2009
From: patrickmarshwx at gmail.com (Patrick Marsh)
Date: Sat, 20 Jun 2009 10:02:43 -0500
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
In-Reply-To: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>
References: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>
Message-ID:

Here is the result of the print statement:

<...IntelItaniumFCompiler object at 0x2000000002413440> None

Initially, I tried to build using gfortran and the MKL but to no avail
(it failed... error is below). I figured that MKL required the Intel
compilers, which is how I started down this path.

/usr/bin/gfortran -Wall -Wall -shared
build/temp.linux-ia64-2.5/build/src.linux-ia64-2.5/scipy/lib/lapack/calc_lworkmodule.o
build/temp.linux-ia64-2.5/build/src.linux-ia64-2.5/fortranobject.o
build/temp.linux-ia64-2.5/scipy/lib/lapack/calc_lwork.o
-L/opt/intel/Compiler/11.0/069/mkl/lib/64 -Lbuild/temp.linux-ia64-2.5
-lmkl_lapack32 -lmkl_lapack64 -lmkl -lpthread -lgfortran -o
build/lib.linux-ia64-2.5/scipy/lib/lapack/calc_lwork.so
/usr/lib/gcc/ia64-suse-linux/4.1.2/../../../../ia64-suse-linux/bin/ld: cannot find -lmkl_lapack32
collect2: ld returned 1 exit status
/usr/lib/gcc/ia64-suse-linux/4.1.2/../../../../ia64-suse-linux/bin/ld: cannot find -lmkl_lapack32
collect2: ld returned 1 exit status
error: Command "/usr/bin/gfortran -Wall -Wall -shared
build/temp.linux-ia64-2.5/build/src.linux-ia64-2.5/scipy/lib/lapack/calc_lworkmodule.o
build/temp.linux-ia64-2.5/build/src.linux-ia64-2.5/fortranobject.o
build/temp.linux-ia64-2.5/scipy/lib/lapack/calc_lwork.o
-L/opt/intel/Compiler/11.0/069/mkl/lib/64 -Lbuild/temp.linux-ia64-2.5
-lmkl_lapack32 -lmkl_lapack64 -lmkl -lpthread -lgfortran -o
build/lib.linux-ia64-2.5/scipy/lib/lapack/calc_lwork.so" failed with
exit status 1

---
Patrick Marsh
Graduate Research Assistant
School of Meteorology
University of Oklahoma
http://www.patricktmarsh.com

On Sat, Jun 20, 2009 at 12:17 AM, David Cournapeau wrote:
> On Sat, Jun 20, 2009 at 1:54 PM, Patrick Marsh wrote:
>> Greetings,
>>
>> I have run into a problem where I can't build Scipy using the Intel
>> Fortran compilers and the Intel Math Kernel. I have (at least I
>> believe so) set up my environment as is directed by Intel, but I can't
>> seem to get it to work. The install routine finds the ifort
>> executable, but then fails later. I've spent many hours banging my
>> head against this, so any help is appreciated.
>
> Unfortunately, Itanium is not a well tested platform, since not many
> developers have access to one. From your build log, the problem is not
> MKL, but the fortran compiler not being properly set up.
> But this is hard to debug without being able to run the build by
> ourselves, unfortunately.
>
> From the log, I would first look at the fcompiler customization in the
> build_ext command. In the installed numpy, in
> numpy/distutils/command/build_ext, there is the following code in the
> run method:
>
>        if need_f77_compiler:
>            ctype = self.fcompiler
>            self._f77_compiler = new_fcompiler(compiler=self.fcompiler,
>                                               verbose=self.verbose,
>                                               dry_run=self.dry_run,
>                                               force=self.force,
>                                               requiref90=False,
>                                               c_compiler=self.compiler)
>            fcompiler = self._f77_compiler
>            if fcompiler:
>                ctype = fcompiler.compiler_type
>                fcompiler.customize(self.distribution)
>            if fcompiler and fcompiler.get_version():
>                fcompiler.customize_cmd(self)
>                fcompiler.show_customization()
>            else:
>                self.warn('f77_compiler=%s is not available.' %
>                          (ctype))
>
> Could you add a print fcompiler, fcompiler.get_version() just before
> the "if fcompiler and fcompiler.get_version()"? This should confirm
> whether it fails where I think it fails (i.e. the get_version)
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From cournape at gmail.com Sat Jun 20 11:20:59 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 21 Jun 2009 00:20:59 +0900
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
In-Reply-To:
References: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>
Message-ID: <5b8d13220906200820s6943a26brec0edfd04bb68c22@mail.gmail.com>

On Sun, Jun 21, 2009 at 12:02 AM, Patrick Marsh wrote:
> Here is the result of the print statement:
>
> <...IntelItaniumFCompiler object at 0x2000000002413440> None

Ok, that's the problem. What does the following command return:

ifort -FI -V -c foo.f -o foo.o

with foo.f any fortran file which can be compiled, like:

subroutine dummy()
end

cheers,

David

From patrickmarshwx at gmail.com Sat Jun 20 11:31:30 2009
From: patrickmarshwx at gmail.com (Patrick Marsh)
Date: Sat, 20 Jun 2009 10:31:30 -0500
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
In-Reply-To: <5b8d13220906200820s6943a26brec0edfd04bb68c22@mail.gmail.com>
References: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>
	<5b8d13220906200820s6943a26brec0edfd04bb68c22@mail.gmail.com>
Message-ID:

On Sat, Jun 20, 2009 at 10:20 AM, David Cournapeau wrote:
> On Sun, Jun 21, 2009 at 12:02 AM, Patrick Marsh wrote:
> Ok, that's the problem. What does the following command return:
>
> ifort -FI -V -c foo.f -o foo.o
>

Here's the command and output:

in:
ifort -Fl -V -c LMAgrid_standalone.f -o tmp

out:
Intel(R) Fortran IA-64 Compiler Professional for applications running
on IA-64, Version 11.0 Build 20081105 Package ID: l_cprof_p_11.0.074
Copyright (C) 1985-2008 Intel Corporation. All rights reserved.
Intel Fortran 11.0-1558

From cournape at gmail.com Sat Jun 20 12:42:13 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 21 Jun 2009 01:42:13 +0900
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
In-Reply-To:
References: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>
	<5b8d13220906200820s6943a26brec0edfd04bb68c22@mail.gmail.com>
Message-ID: <5b8d13220906200942h3bea58e6t5ab2dbbbaca13b0e@mail.gmail.com>

On Sun, Jun 21, 2009 at 12:31 AM, Patrick Marsh wrote:
> On Sat, Jun 20, 2009 at 10:20 AM, David Cournapeau wrote:
>> On Sun, Jun 21, 2009 at 12:02 AM, Patrick Marsh wrote:
>> Ok, that's the problem. What does the following command return:
>>
>> ifort -FI -V -c foo.f -o foo.o
>>
> Here's the command and output:
>
> in:
> ifort -Fl -V -c LMAgrid_standalone.f -o tmp
>
> out:
> Intel(R) Fortran IA-64 Compiler Professional for applications running
> on IA-64, Version 11.0

I have just updated the numpy trunk, which should fix the problem. Can
you check it (r7070)?

David

From patrickmarshwx at gmail.com Sat Jun 20 13:30:11 2009
From: patrickmarshwx at gmail.com (Patrick Marsh)
Date: Sat, 20 Jun 2009 12:30:11 -0500
Subject: [SciPy-user] SCIPY install with Intel Math Kernel and Intel Fortran
In-Reply-To: <5b8d13220906200942h3bea58e6t5ab2dbbbaca13b0e@mail.gmail.com>
References: <5b8d13220906192217m3fa1ea1cuf569ace4ab774a29@mail.gmail.com>
	<5b8d13220906200820s6943a26brec0edfd04bb68c22@mail.gmail.com>
	<5b8d13220906200942h3bea58e6t5ab2dbbbaca13b0e@mail.gmail.com>
Message-ID:

On Sat, Jun 20, 2009 at 11:42 AM, David Cournapeau wrote:
> On Sun, Jun 21, 2009 at 12:31 AM, Patrick Marsh wrote:
>
> I have just updated the numpy trunk, which should fix the problem. Can
> you check it (r7070)?
>
> David

I just checked it out and installed it. I get the same error as
before. It isn't finding the compiler version.

Patrick

From read.beyond.data at gmx.net Sun Jun 21 13:59:01 2009
From: read.beyond.data at gmx.net (Celvin)
Date: Sun, 21 Jun 2009 19:59:01 +0200
Subject: [SciPy-user] SciPy cubic interpolation coefficients
Message-ID: <1366244265.20090621195901@gmx.net>

Hi all,

I'm currently porting some old FORTRAN code over to Python. The code
makes heavy use of cubic spline coefficients obtained by interpolating
a given signal.

Now, while I know that I can obtain coefficients using
scipy.signal.cspline1d or scipy.interpolate.splrep, all I get is a
1-d array.

I'd like to know how to obtain coefficient arrays a, b, c and d to be
able to use the familiar cubic polynomial a[i]*x*x*x + b[i]*x*x +
c[i]*x + d[i]. I know how to evaluate the resulting spline, but I
actually need those coefficients for several calculations.

Any insights on how to do this using SciPy?

Regards,
Celvin

From vanforeest at gmail.com Sun Jun 21 14:53:48 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Sun, 21 Jun 2009 20:53:48 +0200
Subject: [SciPy-user] building (very) large Markov chains
Message-ID:

Hi,

I am trying to get into contact with somebody who is interested in,
or has experience with, building very large sparse Markov chains (5e5+
states) in python. I tried some different approaches to solve my
problems, but none of these meets my demands. Perhaps one of you has a
better idea.
bye

Nicky

From josef.pktd at gmail.com Sun Jun 21 15:23:27 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 21 Jun 2009 15:23:27 -0400
Subject: [SciPy-user] SciPy cubic interpolation coefficients
In-Reply-To: <1366244265.20090621195901@gmx.net>
References: <1366244265.20090621195901@gmx.net>
Message-ID: <1cd32cbb0906211223k5045e10fq93196a5b7e0f9c4a@mail.gmail.com>

On Sun, Jun 21, 2009 at 1:59 PM, Celvin wrote:
>
> Hi all,
>
> I'm currently porting some old FORTRAN code over to Python. The code
> makes heavy use of cubic spline coefficients obtained by interpolating
> a given signal.
>
> Now, while I know that I can obtain coefficients using
> scipy.signal.cspline1d or scipy.interpolate.splrep, all I get is a
> 1-d array.
>
> I'd like to know how to obtain coefficient arrays a, b, c and d to be
> able to use the familiar cubic polynomial a[i]*x*x*x + b[i]*x*x +
> c[i]*x + d[i]. I know how to evaluate the resulting spline, but I
> actually need those coefficients for several calculations.
>
> Any insights on how to do this using SciPy?
>

Did you try
UnivariateSpline.get_coeffs()
UnivariateSpline.get_knots()

From my experiments with interpolate splines, I would think this
provides what you want. But the documentation is still a bit sparse.

Josef

> Regards,
> Celvin

From dwf at cs.toronto.edu Sun Jun 21 16:00:55 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sun, 21 Jun 2009 16:00:55 -0400
Subject: [SciPy-user] building (very) large Markov chains
In-Reply-To:
References:
Message-ID: <5E70B7B8-F126-40D8-BE00-1ED5F952FE12@cs.toronto.edu>

Can you elaborate on what you're trying to do with the chains and what
problem you're having?

David

On 21-Jun-09, at 2:53 PM, nicky van foreest wrote:

> Hi,
>
> I am trying to get into contact with somebody who is interested in,
> or has experience with, building very large sparse Markov chains (5e5+
> states) in python. I tried some different approaches to solve my
> problems, but none of these meets my demands. Perhaps one of you has a
> better idea.
>
> bye
>
> Nicky
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From read.beyond.data at gmx.net Sun Jun 21 16:13:43 2009
From: read.beyond.data at gmx.net (Celvin)
Date: Sun, 21 Jun 2009 22:13:43 +0200
Subject: [SciPy-user] SciPy cubic interpolation coefficients
In-Reply-To: <1cd32cbb0906211223k5045e10fq93196a5b7e0f9c4a@mail.gmail.com>
References: <1366244265.20090621195901@gmx.net>
	<1cd32cbb0906211223k5045e10fq93196a5b7e0f9c4a@mail.gmail.com>
Message-ID: <1778628852.20090621221343@gmx.net>

josef.pktd at gmail.com @ Jun, 21, 2009 09:23PM

> Did you try
> UnivariateSpline.get_coeffs()
> UnivariateSpline.get_knots()
>
> From my experiments with interpolate splines, I would think this
> provides what you want. But the documentation is still a bit sparse.

Yes, I did try using UnivariateSpline. Apart from being "more off"
than using splrep with a smoothing factor of 0, get_coeffs() also
returns a 1-d array, with far too few coefficients.

Assuming a signal with about 390 data points, I also expect about
390 coefficients (which is consistent with what I get using
signal.cspline1d for example, but I need 4 arrays, not just one).

I would expect coefficients to be a list of shape

coeff = [ [a0, b0, c0, d0],
          [a1, b1, c1, d1],
          [a2, b2, c2, d2],
          ...
          ...
          [an, bn, cn, dn],
        ]

but I am only able to obtain a 1-d array of coefficients, no matter
what function or module I use.
http://www.physics.utah.edu/~detar/phys6720/handouts/cubic_spline/cubic_spline/node1.html

...is basically what I'm looking for. I used to do the matrix
calculations for Si(x) myself using C++, but I'd prefer to somehow get
the coefficients using numpy/scipy and not use an extension.

Any further ideas?

From josef.pktd at gmail.com Sun Jun 21 17:28:15 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 21 Jun 2009 17:28:15 -0400
Subject: [SciPy-user] SciPy cubic interpolation coefficients
In-Reply-To: <1778628852.20090621221343@gmx.net>
References: <1366244265.20090621195901@gmx.net>
	<1cd32cbb0906211223k5045e10fq93196a5b7e0f9c4a@mail.gmail.com>
	<1778628852.20090621221343@gmx.net>
Message-ID: <1cd32cbb0906211428w37cbb98do28e4ef32f77a9f42@mail.gmail.com>

On Sun, Jun 21, 2009 at 4:13 PM, Celvin wrote:
> josef.pktd at gmail.com @ Jun, 21, 2009 09:23PM
>
>> Did you try
>> UnivariateSpline.get_coeffs()
>> UnivariateSpline.get_knots()
>
>> From my experiments with interpolate splines, I would think this
>> provides what you want. But the documentation is still a bit sparse.
> Yes, I did try using UnivariateSpline. Apart from being "more off"
> than using splrep with a smoothing factor of 0, get_coeffs() also
> returns a 1-d array, with far too few coefficients.
>
> Assuming a signal with about 390 data points, I also expect about
> 390 coefficients (which is consistent with what I get using
> signal.cspline1d for example, but I need 4 arrays, not just one).
>
> I would expect coefficients to be a list of shape
>
> coeff = [ [a0, b0, c0, d0],
>           [a1, b1, c1, d1],
>           [a2, b2, c2, d2],
>           ...
>           ...
>           [an, bn, cn, dn],
>         ]
>
> but I am only able to obtain a 1-d array of coefficients, no matter
> what function or module I use.
>
> http://www.physics.utah.edu/~detar/phys6720/handouts/cubic_spline/cubic_spline/node1.html
>
> ...is basically what I'm looking for. I used to do the matrix
> calculations for Si(x) myself using C++, but I'd prefer to somehow get
> the coefficients using numpy/scipy and not use an extension.
>
> Any further ideas?

interpolate.spalde gives you the derivatives as a 1 by 4 array for
each point. Given it's a cubic function, it should be possible to
recover the coefficients,

something like

coeff = np.vstack(interpolate.spalde(x,tck))*[1,1,0.5,1/3.]

I have no idea yet, whether this actually does what you want.
If you find out how it works, it would be good to get an example since
figuring out the splines with the current documentation is pretty
difficult.

Josef

> Regards,
> Celvin

From josef.pktd at gmail.com Sun Jun 21 17:52:27 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 21 Jun 2009 17:52:27 -0400
Subject: [SciPy-user] SciPy cubic interpolation coefficients
In-Reply-To: <1cd32cbb0906211428w37cbb98do28e4ef32f77a9f42@mail.gmail.com>
References: <1366244265.20090621195901@gmx.net>
	<1cd32cbb0906211223k5045e10fq93196a5b7e0f9c4a@mail.gmail.com>
	<1778628852.20090621221343@gmx.net>
	<1cd32cbb0906211428w37cbb98do28e4ef32f77a9f42@mail.gmail.com>
Message-ID: <1cd32cbb0906211452w1690af3cy66bca35d75c6f868@mail.gmail.com>

On Sun, Jun 21, 2009 at 5:28 PM, wrote:
> On Sun, Jun 21, 2009 at 4:13 PM, Celvin wrote:
>> josef.pktd at gmail.com @ Jun, 21, 2009 09:23PM
>>
>>> Did you try
>>> UnivariateSpline.get_coeffs()
>>> UnivariateSpline.get_knots()
>>>
>>> From my experiments with interpolate splines, I would think this
>>> provides what you want. But the documentation is still a bit sparse.
>> Yes, I did try using UnivariateSpline. Apart from being "more off"
>> than using splrep with a smoothing factor of 0, get_coeffs() also
>> returns a 1-d array, with far too few coefficients.
>>
>> Assuming a signal with about 390 data points, I also expect about
>> 390 coefficients (which is consistent with what I get using
>> signal.cspline1d for example, but I need 4 arrays, not just one).
>>
>> I would expect coefficients to be a list of shape
>>
>> coeff = [ [a0, b0, c0, d0],
>>           [a1, b1, c1, d1],
>>           [a2, b2, c2, d2],
>>           ...
>>           ...
>>           [an, bn, cn, dn],
>>         ]
>>
>> but I am only able to obtain a 1-d array of coefficients, no matter
>> what function or module I use.
>>
>> http://www.physics.utah.edu/~detar/phys6720/handouts/cubic_spline/cubic_spline/node1.html
>>
>> ...is basically what I'm looking for. I used to do the matrix
>> calculations for Si(x) myself using C++, but I'd prefer to somehow get
>> the coefficients using numpy/scipy and not use an extension.
>>
>> Any further ideas?
>
> interpolate.spalde gives you the derivatives as a 1 by 4 array for
> each point. Given it's a cubic function, it should be possible to
> recover the coefficients,
>
> something like
>
> coeff = np.vstack(interpolate.spalde(x,tck))*[1,1,0.5,1/3.]

I don't think that makes any sense. (sleep deprivation)

Josef

> I have no idea yet, whether this actually does what you want.
> If you find out how it works, it would be good to get an example since
> figuring out the splines with the current documentation is pretty
> difficult.

From shaibani at ymail.com Sun Jun 21 18:13:50 2009
From: shaibani at ymail.com (Ala Al-Shaibani)
Date: Sun, 21 Jun 2009 15:13:50 -0700 (PDT)
Subject: [SciPy-user] Variable coupled ODE's
Message-ID: <42591.17931.qm@web24615.mail.ird.yahoo.com>

Hello everyone, I am wondering how to solve this issue.

Basically I have a simulator that depends on data files, and the
simulation will be represented through the use of first order coupled
ODEs. The number of coupled ODEs to be solved will vary depending on
the data set. The equation itself is the same throughout, but because
it's coupled, it might need to be solved for 2 variables, 5 variables
or more, with each variable requiring its own equation.

Is it possible to implement this using scipy's odeint? Because as far
as I can see, for coupled ODEs the equations have to be pre-set in the
derivative function.

Any suggestions will be appreciated. Thanks.

From eadrogue at gmx.net Sun Jun 21 18:25:56 2009
From: eadrogue at gmx.net (Ernest Adrogué)
Date: Mon, 22 Jun 2009 00:25:56 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
Message-ID: <20090621222556.GA19846@doriath.local>

Hi all,

I am stuck in an obnoxious optimisation problem.

Essentially, I want to find the local maximum of a multivariate
nonlinear function subject to a linear constraint.

x = (a1, a2, a3, ..., a_n, b1, b2, b3, ..., b_n)

Maximise:    f(x)
Subject to:  sum(a) - sum(b) = 0

No big deal, apparently. The problem is that f(x) is defined only
when x_n > 0 for all n, as it contains lots of

log(a[i] * b[j])

which are undefined when a[i] or b[j] < 0.

I have tried to specify a lower bound for x, but both fmin_l_bfgs_b
and fmin_tnc seem to evaluate the objective function with elements of
x < 0, regardless of the bounds specified, making my programme crash.

fmin_cobyla seems to ignore the constraint altogether.
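For reference, the cobyla attempt looked schematically like this (the
objective below is only a toy stand-in with the same log structure as
my real f, and the equality is written as two inequalities, since
fmin_cobyla only takes constraints of the form g(x) >= 0):

import numpy as np
from scipy.optimize import fmin_cobyla

n = 3

def negf(x):
    # toy stand-in for -f(x); the real f also consists of log(a[i]*b[j]) terms
    a, b = x[:n], x[n:]
    return -(n*np.sum(np.log(a)) + n*np.sum(np.log(b)) - a.sum() - b.sum())

# x_i >= 0, plus sum(a) - sum(b) = 0 expressed as two inequalities
cons = [lambda x, i=i: x[i] for i in range(2*n)]
cons += [lambda x: np.sum(x[:n]) - np.sum(x[n:]),
         lambda x: np.sum(x[n:]) - np.sum(x[:n])]

xopt = fmin_cobyla(negf, np.ones(2*n), cons, rhoend=1e-7)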
I have run out of ideas on how to deal with this.
Any advice?

Thanks.

Ernest

From sebastian.walter at gmail.com Mon Jun 22 03:57:05 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Mon, 22 Jun 2009 09:57:05 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To: <20090621222556.GA19846@doriath.local>
References: <20090621222556.GA19846@doriath.local>
Message-ID:

Hello Ernest,

2009/6/22 Ernest Adrogué:
> Hi all,
>
> I am stuck in an obnoxious optimisation problem.
>
> Essentially, I want to find the local maximum of a multivariate
> nonlinear function subject to a linear constraint.
>
> x = (a1, a2, a3, ..., a_n, b1, b2, b3, ..., b_n)
>
> Maximise:    f(x)
> Subject to:  sum(a) - sum(b) = 0
>
> No big deal, apparently. The problem is that f(x) is defined only
> when x_n > 0 for all n, as it contains lots of
>
> log(a[i] * b[j])
>
> which are undefined when a[i] or b[j] < 0.
>
> I have tried to specify a lower bound for x, but both fmin_l_bfgs_b
> and fmin_tnc seem to evaluate the objective function with elements of
> x < 0, regardless of the bounds specified, making my programme crash.
>
> fmin_cobyla seems to ignore the constraint altogether.
>
> I have run out of ideas on how to deal with this.
> Any advice?

are you sure you can't reformulate the problem?

maybe you should try an interior point method. By definition, all
iterates will be feasible. There is a python wrapper for IPOPT out
there. It's called pyipopt. It worked reasonably well when I tried it.
OPENOPT also interfaces to IPOPT as far as I know, but I have never
used that interface.

Sebastian

> Thanks.
>
> Ernest
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From eadrogue at gmx.net Mon Jun 22 07:13:51 2009
From: eadrogue at gmx.net (Ernest Adrogué)
Date: Mon, 22 Jun 2009 13:13:51 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To:
References: <20090621222556.GA19846@doriath.local>
Message-ID: <20090622111351.GA22607@doriath.local>

Hi Sebastian,

22/06/09 @ 09:57 (+0200), thus spake Sebastian Walter:
>
> are you sure you can't reformulate the problem?

Another approach would be to try to solve the system of equations
resulting from equating the gradient to zero. Such equations are
defined for all x. I have already tried that with fsolve(), but it
only seems to find the obvious, useless solution of x=0. I was going
to try with a Newton-Raphson algorithm, but since this would require
the hessian matrix to be calculated, I'm leaving this option as a
last resort :)

> maybe you should try an interior point method. By definition, all
> iterates will be feasible.
> There is a python wrapper for IPOPT out there. It's called pyipopt. It
> worked reasonably well when I tried it.
> OPENOPT also interfaces to IPOPT as far as I know, but I have never
> used that interface.

Thanks, this looks interesting. I'm going to check out
this pyipopt.

--
Ernest

From sebastian.walter at gmail.com Mon Jun 22 07:54:31 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Mon, 22 Jun 2009 13:54:31 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To: <20090622111351.GA22607@doriath.local>
References: <20090621222556.GA19846@doriath.local>
	<20090622111351.GA22607@doriath.local>
Message-ID:

2009/6/22 Ernest Adrogué:
> Hi Sebastian,
>
> 22/06/09 @ 09:57 (+0200), thus spake Sebastian Walter:
>>
>> are you sure you can't reformulate the problem?
>
> Another approach would be to try to solve the system of equations
> resulting from equating the gradient to zero. Such equations are
> defined for all x. I have already tried that with fsolve(), but it
> only seems to find the obvious, useless solution of x=0. I was going
> to try with a Newton-Raphson algorithm, but since this would require
> the hessian matrix to be calculated, I'm leaving this option as a
> last resort :)

Ermmm, I don't quite get it. You have an NLP with linear equality
constraints and box constraints. Of course you could write down the
Lagrangian for that and define an algorithm that satisfies the first
and second order optimality conditions. But that is not going to be
easy, even if you have the exact hessian: you'll need some
globalization strategy (linesearch, trust-region, ...) to guarantee
global convergence, and implement something like projected gradients
so you stay within the box constraints.

I guess it will be easier to use an existing algorithm...

And I just had a look at fmin_l_bfgs_b: how did you set the equality
constraints for this algorithm? It seems to me that this is an
unconstrained optimization algorithm, which is worthless if you have a
constrained NLP.

remark:
To compute the Hessian you can always use an AD tool. There are
several available in Python. My biased favourite one being pyadolc
( http://github.com/b45ch1/pyadolc ) which is slowly approaching
version 1.0.

Sebastian

>> maybe you should try an interior point method. By definition, all
>> iterates will be feasible.
>> There is a python wrapper for IPOPT out there. It's called pyipopt. It
>> worked reasonably well when I tried it.
>> OPENOPT also interfaces to IPOPT as far as I know, but I have never
>> used that interface.
>
> Thanks, this looks interesting. I'm going to check out
> this pyipopt.
>
> --
> Ernest
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From rob.clewley at gmail.com Mon Jun 22 11:32:09 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Mon, 22 Jun 2009 11:32:09 -0400
Subject: [SciPy-user] Variable coupled ODE's
In-Reply-To: <42591.17931.qm@web24615.mail.ird.yahoo.com>
References: <42591.17931.qm@web24615.mail.ird.yahoo.com>
Message-ID:

On Sun, Jun 21, 2009 at 6:13 PM, Ala Al-Shaibani wrote:
> Hello everyone, I am wondering how to solve this issue.
> Basically I have a simulator that depends on data files, and the
> simulation will be represented through the use of first order coupled
> ODEs. The number of coupled ODEs to be solved will vary depending on
> the data set. The equation itself is the same throughout, but because
> it's coupled, it might need to be solved for 2 variables, 5 variables
> or more, with each variable requiring its own equation.
>
> Is it possible to implement this using scipy's odeint? Because as far
> as I can see, for coupled ODEs the equations have to be pre-set in the
> derivative function.

Hi Ala,

Maybe you can spell out your equations (or their form at least),
explain your data set, and summarize your modeling problem so that we
can see what exactly you have and how to help. For instance, it's not
clear how you mean the number of ODEs will vary with the data set.

PyDSTool lets you include known signals (such as array data loaded
from a file) on the RHS of an ODE, if that's what you mean.
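As for the varying number of equations on its own, that part is not a
problem for odeint: the integrator just sees a flat state vector whose
length you pick at run time, so you can build the right-hand side
generically from whatever the data file specifies. A rough sketch (the
coupling matrix K, the toy equation and the file name are all made up
for illustration):

import numpy as np
from scipy.integrate import odeint

# hypothetical data file holding an n x n coupling matrix K, so n is
# only known at run time; toy system: dy_i/dt = -y_i + sum_j K[i,j]*y_j
K = np.loadtxt('coupling.dat')
n = K.shape[0]

def rhs(y, t, K):
    # y is a length-n vector; odeint does not care what n is
    return -y + np.dot(K, y)

y0 = np.ones(n)
t = np.linspace(0.0, 10.0, 101)
sol = odeint(rhs, y0, t, args=(K,))   # sol has shape (len(t), n)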
I don't think odeint lets you include known input signals like that,
though.

-Rob

From keflavich at gmail.com Mon Jun 22 12:36:34 2009
From: keflavich at gmail.com (Adam)
Date: Mon, 22 Jun 2009 09:36:34 -0700 (PDT)
Subject: [SciPy-user] 64 bit on Mac?
In-Reply-To: <622e6568-afa3-4dd1-8414-4f5640b07b45@p21g2000prn.googlegroups.com>
References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com>
	<6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu>
	<3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com>
	<3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com>
	<97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com>
	<3d375d730905221359s1ec1943chb80d6507678b405f@mail.gmail.com>
	<622e6568-afa3-4dd1-8414-4f5640b07b45@p21g2000prn.googlegroups.com>
Message-ID: <3f4d203f-a8cf-4870-823d-d1fc96d7cb07@f38g2000pra.googlegroups.com>

It turns out... that is not the problem. I still have the error I
reported earlier:

>>> from matplotlib import pyplot
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/pyplot.py", line 6, in <module>
    from matplotlib.figure import Figure, figaspect
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/figure.py", line 19, in <module>
    from axes import Axes, SubplotBase, subplot_class_factory
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axes.py", line 11, in <module>
    import matplotlib.axis as maxis
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axis.py", line 9, in <module>
    import matplotlib.font_manager as font_manager
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 52, in <module>
    from matplotlib import ft2font
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so, 2): Symbol not found: _FT_Attach_File
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so
  Expected in: dynamic lookup

but apparently I DO have a 64 bit version of the FreeType library:

eta ~/Downloads/matplotlib-0.98.5.3$ file /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so: Mach-O universal binary with 4 architectures
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture i386):   Mach-O bundle i386
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture ppc7400): Mach-O bundle ppc
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture ppc64):  Mach-O 64-bit bundle ppc64
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture x86_64): Mach-O 64-bit bundle x86_64

Any ideas? This has been the sticking point for me for quite some
time. A friend with a nearly identical setup was able to get this
working, but we haven't been able to figure out the difference in our
two installs.

Adam

From robert.kern at gmail.com Mon Jun 22 12:39:52 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 22 Jun 2009 11:39:52 -0500
Subject: [SciPy-user] 64 bit on Mac?
In-Reply-To: <3f4d203f-a8cf-4870-823d-d1fc96d7cb07@f38g2000pra.googlegroups.com>
References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com>
	<6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu>
	<3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com>
	<3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com>
	<97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com>
	<3d375d730905221359s1ec1943chb80d6507678b405f@mail.gmail.com>
	<622e6568-afa3-4dd1-8414-4f5640b07b45@p21g2000prn.googlegroups.com>
	<3f4d203f-a8cf-4870-823d-d1fc96d7cb07@f38g2000pra.googlegroups.com>
Message-ID: <3d375d730906220939x4fe3c5a5wa06c7ff7f8836f2d@mail.gmail.com>

On Mon, Jun 22, 2009 at 11:36, Adam wrote:
>
> It turns out... that is not the problem. I still have the error I
> reported earlier:
>
>>>> from matplotlib import pyplot
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
>  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/pyplot.py", line 6, in <module>
>    from matplotlib.figure import Figure, figaspect
>  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/figure.py", line 19, in <module>
>    from axes import Axes, SubplotBase, subplot_class_factory
>  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axes.py", line 11, in <module>
>    import matplotlib.axis as maxis
>  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axis.py", line 9, in <module>
>    import matplotlib.font_manager as font_manager
>  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 52, in <module>
>    from matplotlib import ft2font
> ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so, 2): Symbol not found: _FT_Attach_File
>  Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so
>  Expected in: dynamic lookup
>
> but apparently I DO have a 64 bit version of the FreeType library:
>
> eta ~/Downloads/matplotlib-0.98.5.3$ file /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so: Mach-O universal binary with 4 architectures
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture i386):   Mach-O bundle i386
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture ppc7400): Mach-O bundle ppc
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture ppc64):  Mach-O 64-bit bundle ppc64
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ft2font.so (for architecture x86_64): Mach-O 64-bit bundle x86_64

No, you have a 64-bit build of the ft2font.so extension module, not
the libfreetype.dylib that it links with. Please use "otool -L
ft2font.so" to see what libraries it is trying to load dynamically.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From eadrogue at gmx.net Mon Jun 22 13:15:14 2009
From: eadrogue at gmx.net (Ernest Adrogué)
Date: Mon, 22 Jun 2009 19:15:14 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To:
References: <20090621222556.GA19846@doriath.local>
	<20090622111351.GA22607@doriath.local>
Message-ID: <20090622171514.GA31692@doriath.local>

22/06/09 @ 13:54 (+0200), thus spake Sebastian Walter:
> 2009/6/22 Ernest Adrogué:
> > Hi Sebastian,
> >
> > 22/06/09 @ 09:57 (+0200), thus spake Sebastian Walter:
> >>
> >> are you sure you can't reformulate the problem?
>
> Ermmm, I don't quite get it. You have an NLP with linear equality
> constraints and box constraints. Of course you could write down the
> Lagrangian for that and define an algorithm that satisfies the first
> and second order optimality conditions. But that is not going to be
> easy, even if you have the exact hessian: you'll need some
> globalization strategy (linesearch, trust-region, ...) to guarantee
> global convergence, and implement something like projected gradients
> so you stay within the box constraints.
>
> I guess it will be easier to use an existing algorithm...

Mmmm, yes, but the box constraints are merely to prevent the
algorithm from evaluating f(x) with values of x for which f(x)
is not defined. It's not a "real" constraint, because I know
beforehand that all elements of x are > 0 at the maximum.

> And I just had a look at fmin_l_bfgs_b: how did you set the equality
> constraints for this algorithm? It seems to me that this is an
> unconstrained optimization algorithm, which is worthless if you have a
> constrained NLP.

You're right. I included the equality constraint within the function
itself, so that the function I optimised with fmin_l_bfgs_b had one
parameter less and computed the "missing" parameter internally as a
function of the others. The problem is that this dependent parameter
was unaffected by the box constraint and eventually would take values
< 0.

Fortunately, Siddhardh Chandra has told me the solution, which is to
maximise f(|x|) instead of f(x), with the linear constraint
incorporated into the function, using a simple unconstrained
optimisation algorithm. His message hasn't made it to the list though.

I have just done this and it seems to work! After 10,410 function
evaluations and 8,904 iterations fmin has found the solution and it
looks sound at first sight.

Thanks for your help.

> remark:
> To compute the Hessian you can always use an AD tool. There are
> several available in Python. My biased favourite one being pyadolc
> ( http://github.com/b45ch1/pyadolc ) which is slowly approaching
> version 1.0.

I will have a look.
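For the archives, the reformulation looks schematically like this (the
objective below is only a toy stand-in with the same log structure as
my real f, and the guard on the dependent coordinate is just a safety
net I added):

import numpy as np
from scipy.optimize import fmin

n = 3

def negf(z):
    # evaluate the objective at |z|, so everything the logs see is positive
    z = np.abs(z)
    a, b_free = z[:n], z[n:]
    # the linear constraint sum(a) - sum(b) = 0 is built into the
    # function: the last b is computed from the others
    b_last = np.sum(a) - np.sum(b_free)
    if b_last <= 0:
        return 1e12
    b = np.r_[b_free, b_last]
    # toy stand-in for -f(a, b); maximised at a = b = n
    return -(n*np.sum(np.log(a)) + n*np.sum(np.log(b)) - a.sum() - b.sum())

x0 = np.ones(2*n - 1)
sol = np.abs(fmin(negf, x0))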
Thanks again :)

Ernest

From robert.kern at gmail.com Mon Jun 22 13:20:50 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 22 Jun 2009 12:20:50 -0500
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To: <20090622171514.GA31692@doriath.local>
References: <20090621222556.GA19846@doriath.local>
	<20090622111351.GA22607@doriath.local>
	<20090622171514.GA31692@doriath.local>
Message-ID: <3d375d730906221020r70aa4ce6pe6bbab9a0c0eb505@mail.gmail.com>

2009/6/22 Ernest Adrogué:
> Mmmm, yes, but the box constraints are merely to prevent the
> algorithm from evaluating f(x) with values of x for which f(x)
> is not defined. It's not a "real" constraint, because I know
> beforehand that all elements of x are > 0 at the maximum.

Unfortunately, that's not how constraints work in most optimizers.
Usually, the infeasible region is necessarily also explored.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From josef.pktd at gmail.com Mon Jun 22 15:07:30 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 22 Jun 2009 15:07:30 -0400
Subject: [SciPy-user] SciPy cubic interpolation coefficients
In-Reply-To: <1cd32cbb0906211452w1690af3cy66bca35d75c6f868@mail.gmail.com>
References: <1366244265.20090621195901@gmx.net>
	<1cd32cbb0906211223k5045e10fq93196a5b7e0f9c4a@mail.gmail.com>
	<1778628852.20090621221343@gmx.net>
	<1cd32cbb0906211428w37cbb98do28e4ef32f77a9f42@mail.gmail.com>
	<1cd32cbb0906211452w1690af3cy66bca35d75c6f868@mail.gmail.com>
Message-ID: <1cd32cbb0906221207k345b5a2cr9bdcb6bfbb01ddc6@mail.gmail.com>

On Sun, Jun 21, 2009 at 5:52 PM, wrote:
> On Sun, Jun 21, 2009 at 5:28 PM, wrote:
>> On Sun, Jun 21, 2009 at 4:13 PM, Celvin wrote:
>>> josef.pktd at gmail.com @ Jun, 21, 2009 09:23PM
>>>
>>>> Did you try
>>>> UnivariateSpline.get_coeffs()
>>>> UnivariateSpline.get_knots()
>>>>
>>>> From my experiments with interpolate splines, I would think this
>>>> provides what you want. But the documentation is still a bit sparse.
(sleep deprivation) > > Josef > >> >> I have no idea yet, whether this actually does what you want. >> If you find out how it works, it would be good to get an example since >> figuring out the splines with the current documentation is pretty >> difficult. > Here is a version that seems to work up to a precision of 1e-10. I just wanted to see how it works, and didn't try for numerical precision. I'm not sure it's useful, since the spline code calculates the interpolation faster and with higher precision. deriv2coeff(x2, tck): uses the derivatives of the spline at an array of points to recover the polynomial coefficients by direct calculation Josef """ try_spline_coeff.py """ import numpy as np from scipy import interpolate def cubic(x,a): return np.sum(a*np.c_[np.ones(x.shape), x, x**2, x**3],axis=-1) def cubice(x,a): return (a[:,0]+a[:,1]*x+a[:,2]*x**2+a[:,3]*x**3) npoints = 51 x = np.linspace(0, 20, npoints) y = np.sqrt(x) + np.sin(x) + 0.2* np.random.randn(npoints) tck = interpolate.splrep(x, y) x2 = np.linspace(0, 20, 200) y2 = interpolate.splev(x2, tck) def deriv2coeff(x2, tck): '''get polynomial coefficients for spline at points x2''' coeffr = np.vstack(interpolate.spalde(x2,tck)) coeffr[:,3] /= 6. coeffr[:,2] = (coeffr[:,2] - 6*coeffr[:,3]*x2)/2. coeffr[:,1] = coeffr[:,1] - 2*coeffr[:,2]*x2 - 3*coeffr[:,3]*x2**2 coeffr[:,0] = coeffr[:,0] - coeffr[:,1]*x2 - coeffr[:,2]*x2**2 - coeffr[:,3]*x2**3 return coeffr coeffr = deriv2coeff(x2, tck) print cubic(x2[:10],coeffr[:10,:]) print cubice(x2[:10],coeffr[:10,:]) print y2[:10] y2p = cubice(x2,coeffr) print np.max(np.abs(y2-y2p)) y3s = interpolate.splev(x2+0.01, tck) y3p = cubice(x2+0.01,coeffr) print np.mean(np.abs(y3s-y3p)>1e-10) y3s = interpolate.splev(x2-0.01, tck) y3p = cubice(x2-0.01,coeffr) print np.mean(np.abs(y3s-y3p)>1e-10) x3 = tck[0][3:-3] # check at knots + 0.02 y3s = interpolate.splev(x3+0.02, tck) coeffr0 = deriv2coeff(x3, tck) y3p = cubice(x3+0.02,coeffr0) print np.mean(np.abs(y3s-y3p)>1e-10) From keflavich at gmail.com Mon Jun 22 16:15:48 2009 From: keflavich at gmail.com (Adam) Date: Mon, 22 Jun 2009 13:15:48 -0700 (PDT) Subject: [SciPy-user] 64 bit on Mac? 
In-Reply-To: <3d375d730906220939x4fe3c5a5wa06c7ff7f8836f2d@mail.gmail.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> <97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com> <3d375d730905221359s1ec1943chb80d6507678b405f@mail.gmail.com> <622e6568-afa3-4dd1-8414-4f5640b07b45@p21g2000prn.googlegroups.com> <3f4d203f-a8cf-4870-823d-d1fc96d7cb07@f38g2000pra.googlegroups.com> <3d375d730906220939x4fe3c5a5wa06c7ff7f8836f2d@mail.gmail.com> Message-ID: <393a3a2b-df8c-4177-b474-58865cf5e581@w31g2000prd.googlegroups.com> Ah, good point: otool -L /Library/Frameworks/Python.framework/Versions/2.6/lib/ python2.6/site-packages/matplotlib/ft2font.so /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- packages/matplotlib/ft2font.so: /usr/X11/lib/libfreetype.6.dylib (compatibility version 10.0.0, current version 10.20.0) /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.3) /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.4.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.4) /usr/local/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0) Adam On Jun 22, 10:39?am, Robert Kern wrote: > On Mon, Jun 22, 2009 at 11:36, Adam wrote: > > > It turns out... that is not the problem. ?I still have the error I > > reported earlier: > > >>> from matplotlib import pyplot > > Traceback (most recent call last): > > ?File "", line 1, in > > ?File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > > python2.6/site-packages/matplotlib/pyplot.py", line 6, in > > ? ?from matplotlib.figure import Figure, figaspect > > ?File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > > python2.6/site-packages/matplotlib/figure.py", line 19, in > > ? ?from axes import Axes, SubplotBase, subplot_class_factory > > ?File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > > python2.6/site-packages/matplotlib/axes.py", line 11, in > > ? ?import matplotlib.axis as maxis > > ?File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > > python2.6/site-packages/matplotlib/axis.py", line 9, in > > ? ?import matplotlib.font_manager as font_manager > > ?File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > > python2.6/site-packages/matplotlib/font_manager.py", line 52, in > > > > ? ?from matplotlib import ft2font > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/ > > lib/python2.6/site-packages/matplotlib/ft2font.so, 2): Symbol not > > found: _FT_Attach_File > > ?Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/ > > lib/python2.6/site-packages/matplotlib/ft2font.so > > ?Expected in: dynamic lookup > > > but apparently I DO have a 64 bit version of the FreeType library: > > > eta ~/Downloads/matplotlib-0.98.5.3$ file /Library/Frameworks/ > > Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/ > > ft2font.so > > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- > > packages/matplotlib/ft2font.so: Mach-O universal binary with 4 > > architectures > > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- > > packages/matplotlib/ft2font.so (for architecture i386): ? 
?Mach-O > > bundle i386 > > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- > > packages/matplotlib/ft2font.so (for architecture ppc7400): Mach-O > > bundle ppc > > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- > > packages/matplotlib/ft2font.so (for architecture ppc64): ? Mach-O 64- > > bit bundle ppc64 > > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- > > packages/matplotlib/ft2font.so (for architecture x86_64): ?Mach-O 64- > > bit bundle x86_64 > > No, you have a 64-bit build of the ft2font.so extension module, not > the libfreetype.dylib that it links with. Please use "otool -L > ft2font.so" to see what libraries it is trying to load dynamically. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Mon Jun 22 16:33:35 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 22 Jun 2009 15:33:35 -0500 Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <393a3a2b-df8c-4177-b474-58865cf5e581@w31g2000prd.googlegroups.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> <97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com> <3d375d730905221359s1ec1943chb80d6507678b405f@mail.gmail.com> <622e6568-afa3-4dd1-8414-4f5640b07b45@p21g2000prn.googlegroups.com> <3f4d203f-a8cf-4870-823d-d1fc96d7cb07@f38g2000pra.googlegroups.com> <3d375d730906220939x4fe3c5a5wa06c7ff7f8836f2d@mail.gmail.com> <393a3a2b-df8c-4177-b474-58865cf5e581@w31g2000prd.googlegroups.com> Message-ID: <3d375d730906221333o12a7fe7as1752fc0d6720f48b@mail.gmail.com> On Mon, Jun 22, 2009 at 15:15, Adam wrote: > Ah, good point: > otool -L ?/Library/Frameworks/Python.framework/Versions/2.6/lib/ > python2.6/site-packages/matplotlib/ft2font.so > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site- > packages/matplotlib/ft2font.so: > ? ? ? ?/usr/X11/lib/libfreetype.6.dylib (compatibility version > 10.0.0, current version 10.20.0) Hmm, this is 64-bit on my machine: $ file /usr/X11/lib/libfreetype.6.dylib /usr/X11/lib/libfreetype.6.dylib: Mach-O universal binary with 4 architectures /usr/X11/lib/libfreetype.6.dylib (for architecture ppc7400): Mach-O dynamically linked shared library ppc /usr/X11/lib/libfreetype.6.dylib (for architecture ppc64): Mach-O 64-bit dynamically linked shared library ppc64 /usr/X11/lib/libfreetype.6.dylib (for architecture i386): Mach-O dynamically linked shared library i386 /usr/X11/lib/libfreetype.6.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From keflavich at gmail.com Mon Jun 22 19:04:16 2009 From: keflavich at gmail.com (Adam) Date: Mon, 22 Jun 2009 16:04:16 -0700 (PDT) Subject: [SciPy-user] 64 bit on Mac? 
In-Reply-To: <3d375d730906221333o12a7fe7as1752fc0d6720f48b@mail.gmail.com>

me too:

file /usr/X11/lib/libfreetype.6.dylib
/usr/X11/lib/libfreetype.6.dylib: Mach-O universal binary with 4 architectures
/usr/X11/lib/libfreetype.6.dylib (for architecture ppc7400):    Mach-O
dynamically linked shared library ppc
/usr/X11/lib/libfreetype.6.dylib (for architecture ppc64):      Mach-O
64-bit dynamically linked shared library ppc64
/usr/X11/lib/libfreetype.6.dylib (for architecture i386):       Mach-O
dynamically linked shared library i386
/usr/X11/lib/libfreetype.6.dylib (for architecture x86_64):     Mach-O
64-bit dynamically linked shared library x86_64

I ended up copying over my friend's successfully-installed 64 bit
python, and matplotlib imports fine. There's no obvious difference in
the procedures we used. So my problem has been worked around, but I
didn't find a solution.

Adam

On Jun 22, 2:33 pm, Robert Kern wrote:
> Hmm, this is 64-bit on my machine:
>
> $ file /usr/X11/lib/libfreetype.6.dylib

From vanforeest at gmail.com Tue Jun 23 02:53:27 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Tue, 23 Jun 2009 08:53:27 +0200
Subject: [SciPy-user] building (very) large Markov chains
In-Reply-To: <5E70B7B8-F126-40D8-BE00-1ED5F952FE12@cs.toronto.edu>

Hi David,

Thanks for replying. The problem is like this. The states I have to
deal with represent schedules of jobs to process on a machine. Such
schedules can be quite complicated.
Currently I store these schedules as tuples. Generating the transition
matrix is a pain: I do not have a simple method to enumerate all the
states in the state space, so I use a trick that resembles building a
graph by induction. Specifically, for a given state (that is, a
schedule stored as a tuple) I have a rule to compute the states to
which the Markov chain can move. I implemented this rule in a function
generateRow(state), which returns a list of states to which the chain
can jump and a list of probabilities associated with these states.

The induction works as follows. I apply generateRow to an initial
state. This yields a set of new states. To each of these new states I
apply generateRow, which generates new states and perhaps some old
ones. I add the new states to a set called frontier, and move the
state I have just dealt with from the frontier set to the state space
set. Then I pop another state from the frontier set, move it to the
state space set, apply generateRow to it, put any new states in the
frontier set, and so on. Once the frontier set is empty (due to the
popping of states), I must have visited all reachable states.

Now all these tuples may consume considerable amounts of memory.
Worse yet, two identical tuples may sometimes claim memory twice. To
see this, suppose I generate the tuple (3,4,4,4,4,4,) and do a lot of
computations as sketched above. It may occur during these computations
that this very tuple is generated again. Python may then claim new
memory for the newly generated tuple, even though memory has already
been claimed for the first, identical one.

I am breaking my head over this problem. One way to circumvent it is
to use a dictionary that maps tuples to indices. I include the code
below. However, I do not like to use this dictionary as it means extra
storage. Do you perhaps have a memory-efficient work-around?

Thanks

bye

Nicky

class StateSpace:
    def __init__(self):
        self.state2index = {}
        self.index2state = {}

    def __call__(self, state):
        # Tuples map to indices, indices map back to tuples.
        if isinstance(state, tuple):
            return self.returnIndex(state)
        if isinstance(state, int):
            return self.index2state[state]

    def returnIndex(self, state):
        if state not in self.state2index:
            index = len(self.state2index)
            self.state2index[state] = index
            self.index2state[index] = state
        return self.state2index[state]

2009/6/21 David Warde-Farley
> Can you elaborate on what you're trying to do with the chains and what
> problem you're having?
>
> David
>
> On 21-Jun-09, at 2:53 PM, nicky van foreest wrote:
>
> > I am trying to get into contact with somebody that is interested in,
> > or has experience with, building very large sparse Markov chains (+5e5
> > states) in python. I tried some different approaches to solve my
> > problems, but none of these meet my demands. Perhaps one of you has a
> > better idea.
> >
> > Nicky
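One low-overhead fix for the duplicate-tuple problem nicky describes is to intern each state tuple in a dict keyed by itself, so any later duplicate is discarded in favour of the first-seen copy. A minimal sketch (generateRow and the frontier search follow her description; the interning cache itself is illustrative):

import collections

canonical = {}

def intern_state(state):
    # Return the shared, first-seen copy of this tuple; duplicates
    # produced later by generateRow() are dropped immediately.
    return canonical.setdefault(state, state)

def explore(initial_state, generateRow):
    frontier = set([intern_state(initial_state)])
    state_space = set()
    while frontier:
        state = frontier.pop()
        state_space.add(state)
        next_states, probs = generateRow(state)  # probs would feed the
        for s in next_states:                    # sparse matrix; unused here
            s = intern_state(s)
            if s not in state_space:
                frontier.add(s)
    return state_space

Because dict lookup uses tuple equality, every structurally equal state ends up as one shared object, at the cost of one dict entry per distinct state.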
From sebastian.walter at gmail.com Tue Jun 23 04:00:31 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Tue, 23 Jun 2009 10:00:31 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To: <3d375d730906221020r70aa4ce6pe6bbab9a0c0eb505@mail.gmail.com>

On Mon, Jun 22, 2009 at 7:20 PM, Robert Kern wrote:
> 2009/6/22 Ernest Adrogué :
>
>> Mmmm, yes, but the box constraints are merely to prevent the
>> algorithm from evaluating f(x) with values of x for which f(x)
>> is not defined. It's not a "real" constraint, because I know
>> beforehand that all elements of x are > 0 at the maximum.
>
> Unfortunately, that's not how constraints work in most optimizers.
> Usually, the infeasible region is necessarily also explored.

Do you have a certain solver in mind?
From what I read in the literature, I thought that for simple box
constraints usually some projection to the set of feasible search
directions is performed, and only for the nonlinear constraints does
one use a merit function and an active set strategy for the
inequality constraints, which may yield infeasible steps in the
process.
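For box constraints, the projection Sebastian mentions is literally a clip back into the box after each step. A toy projected-gradient loop under that assumption (the step size, tolerances, and function names are illustrative, not any particular solver's API):

import numpy as np

def projected_gradient(grad, x0, lower, upper,
                       step=1e-3, tol=1e-8, maxit=100000):
    # Plain gradient descent, except each iterate is projected back
    # into the box lower <= x <= upper; for a box the projection is
    # simply np.clip.
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(maxit):
        x_new = np.clip(x - step * grad(x), lower, upper)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: minimize (x - 2)**2 subject to 0 <= x <= 1; the answer is 1.
print(projected_gradient(lambda x: 2 * (x - 2.0), np.array([0.5]), 0.0, 1.0))

Every iterate stays feasible, which is exactly what matters when f(x) is undefined outside the box.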
From sebastian.walter at gmail.com Tue Jun 23 04:10:53 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Tue, 23 Jun 2009 10:10:53 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To: <20090622171514.GA31692@doriath.local>

2009/6/22 Ernest Adrogué :
> Mmmm, yes, but the box constraints are merely to prevent the
> algorithm from evaluating f(x) with values of x for which f(x)
> is not defined. It's not a "real" constraint, because I know
> beforehand that all elements of x are > 0 at the maximum.
>
>> And I just had a look at fmin_l_bfgs_b: how did you set the equality
>> constraints for this algorithm? It seems to me that this is an
>> unconstrained optimization algorithm, which is worthless if you have
>> a constrained NLP.
>
> You're right. I included the equality constraint within the
> function itself, so that the function I optimised with fmin_l_bfgs_b
> had one parameter less and computed the "missing" parameter
> internally as a function of the others.
>
> The problem is that this dependent parameter was unaffected by
> the box constraint and eventually would take values < 0.
>
> Fortunately, Siddhardh Chandra has told me the solution, which
> is to maximise f(|x|) instead of f(x), with the linear
> constraint incorporated into the function, using a simple
> unconstrained optimisation algorithm. His message hasn't made it
> to the list though.
>
> I have just done this and it seems to work! After 10,410
> function evaluations and 8,904 iterations fmin has found the
> solution, and it looks sound at first sight.
>
> Thanks for your help.

I'm curious: could you elaborate how you incorporated the linear
constraints into the objective function?

From eadrogue at gmx.net Tue Jun 23 05:45:03 2009
From: eadrogue at gmx.net (Ernest Adrogué)
Date: Tue, 23 Jun 2009 11:45:03 +0200
Subject: [SciPy-user] nonlinear optimisation with constraints
Message-ID: <20090623094503.GA2291@doriath.local>

23/06/09 @ 10:10 (+0200), thus spake Sebastian Walter:
> I'm curious: could you elaborate how you incorporated the linear
> constraints into the objective function?

There was only one constraint:

max:  f(x)
s.t.: sum(a) - sum(b) = 0        (1)

where 'a' is the first half of vector x, and 'b' the second half.
What this constraint really says is that one function parameter is
dependent on the others, as you can rewrite the constraint as:

a0 = sum(b0, b1 ... b_n) - sum(a1, a2 ... a_n)

Therefore, it is possible to incorporate the constraint into the
function by writing a function g(x) that takes (2*n)-1 parameters,
calculates the dependent parameter, and calls the original f(x) with
2*n parameters. This g(x) function has the constraint "incorporated"
and can be optimised with an unconstrained algorithm.

Ernest
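Ernest's elimination trick, combined with the f(|x|) substitution he credits to Siddhardh Chandra, might be sketched as follows (f, n, and the starting point z0 are placeholders for his actual problem, not code from the thread):

import numpy as np
from scipy.optimize import fmin

def g(z, f, n):
    # z holds the 2*n - 1 free parameters: a[1:] followed by b.
    # a[0] is recovered from the constraint sum(a) == sum(b), and
    # abs() implements the f(|x|) trick that keeps every effective
    # parameter positive.
    a_rest, b = z[:n - 1], z[n - 1:]
    a0 = b.sum() - a_rest.sum()
    x = np.abs(np.concatenate(([a0], a_rest, b)))
    return -f(x)   # fmin minimises, so negate to maximise f

# z0 = np.ones(2 * n - 1)
# z_opt = fmin(g, z0, args=(f, n))

The unconstrained optimiser never sees the constraint at all: feasibility is built into g by construction.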
From dwf at cs.toronto.edu Tue Jun 23 05:45:48 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 23 Jun 2009 05:45:48 -0400
Subject: [SciPy-user] ndimage.convolve behaviour?

Try as I might I can't seem to figure out why this behaviour might be
happening:

In [352]: X.shape
Out[352]: (20, 20)

In [353]: fm.filter.shape
Out[353]: (5, 5)

In [354]: ndimage.convolve(X, fm.filter, mode='constant', cval=0)[2,2]
Out[354]: -1.4177409721087026

In [355]: (X[:5, :5] * fm.filter).sum()  # which should be [2,2] in the convolved image, no?
Out[355]: 0.33535125912538849

I get roughly the same answer with scipy.signal.convolve2d(X,
fm.filter, mode='same'). Am I missing something fundamental? Does my
kernel need to be separable or something like that?

Regards,

David

From stefan at sun.ac.za Tue Jun 23 05:54:20 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 23 Jun 2009 11:54:20 +0200
Subject: [SciPy-user] ndimage.convolve behaviour?
Message-ID: <9457e7c80906230254s30dfca56t6067999f7a8de4c3@mail.gmail.com>

2009/6/23 David Warde-Farley :
> In [354]: ndimage.convolve(X, fm.filter, mode='constant', cval=0)[2,2]
> Out[354]: -1.4177409721087026
>
> In [355]: (X[:5, :5] * fm.filter).sum()  # which should be [2,2] in the
> convolved image, no?
> Out[355]: 0.33535125912538849

Convolution flips the filter around, unlike correlation, so you are
seeing:

(X[:5, :5] * np.fliplr(np.flipud(fm.filter))).sum()

Regards
Stéfan
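The flip is easy to verify numerically: with an asymmetric kernel, ndimage.convolve agrees with ndimage.correlate only after the kernel is reversed along both axes. For example:

import numpy as np
from scipy import ndimage

x = np.arange(25.0).reshape(5, 5)
k = np.zeros((3, 3))
k[0, 1] = 1.0                      # deliberately asymmetric kernel

conv = ndimage.convolve(x, k, mode='constant', cval=0)
corr = ndimage.correlate(x, k, mode='constant', cval=0)

# convolve(x, k) equals correlate(x, k flipped along both axes)
flipped = ndimage.correlate(x, k[::-1, ::-1], mode='constant', cval=0)
print(np.allclose(conv, flipped))   # True
print(np.allclose(conv, corr))      # False for this asymmetric kernel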
From robfalck at gmail.com Tue Jun 23 06:22:05 2009
From: robfalck at gmail.com (Rob Falck)
Date: Tue, 23 Jun 2009 06:22:05 -0400
Subject: [SciPy-user] nonlinear optimisation with constraints
In-Reply-To: <20090623094503.GA2291@doriath.local>

2009/6/23 Ernest Adrogué
> Therefore, it is possible to incorporate the constraint into the
> function by writing a function g(x) that takes (2*n)-1 parameters,
> calculates the dependent parameter, and calls the original f(x) with
> 2*n parameters. This g(x) function has the constraint "incorporated"
> and can be optimised with an unconstrained algorithm.

In my experience fmin_slsqp does not attempt to evaluate regions of
the independent variables outside of the box bounds, but YMMV.

--
- Rob Falck

From nwagner at iam.uni-stuttgart.de Tue Jun 23 12:40:46 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 23 Jun 2009 18:40:46 +0200
Subject: [SciPy-user] subprocess

Hi all,

Sorry if the subject is off-topic.

How can I run gimp from python using subprocess?

I would like to use several arguments, e.g.

gimp --batch-interpreter=plug-in-script-fu-eval -i -d -b
'(script-autocrop "a.png")' -b '(gimp-quit 0)'

Thanks in advance.

Nils

From Jim.Vickroy at noaa.gov Tue Jun 23 12:52:13 2009
From: Jim.Vickroy at noaa.gov (Jim Vickroy)
Date: Tue, 23 Jun 2009 10:52:13 -0600
Subject: [SciPy-user] subprocess
Message-ID: <4A41083D.7070002@noaa.gov>

Nils Wagner wrote:
> How can I run gimp from python using subprocess?

... don't know what Python version you are using, but recent releases
(version 2.4 and later) include the *subprocess* module.

Questions of this sort are best posted to *comp.lang.python*.

From nwagner at iam.uni-stuttgart.de Tue Jun 23 13:00:30 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 23 Jun 2009 19:00:30 +0200
Subject: [SciPy-user] subprocess
In-Reply-To: <4A41083D.7070002@noaa.gov>

On Tue, 23 Jun 2009 10:52:13 -0600, Jim Vickroy wrote:
> ... don't know what Python version you are using, but recent releases
> (version 2.4 and later) include the *subprocess* module.

Thank you for your response.
I know how to import subprocess, but how do I deal with all the quotes
in the argument list of my gimp command?

Nils

From gokhansever at gmail.com Tue Jun 23 13:14:34 2009
From: gokhansever at gmail.com (Gökhan SEVER)
Date: Tue, 23 Jun 2009 12:14:34 -0500
Subject: [SciPy-user] subprocess
Message-ID: <49d6b3500906231014o55b3e62ah97783c369a83c60b@mail.gmail.com>

Nils,

Will a Gimp session executed through that subprocess call behave like
the Python-fu console? I don't know of any other way of scripting the
Gimp with Python (externally, as you are trying to do, or perhaps by
accessing its modules from IPython).

Could you please share your findings? I would also like to learn how
to script Blender once I get going on Gimp.

From Jim.Vickroy at noaa.gov Tue Jun 23 13:20:45 2009
From: Jim.Vickroy at noaa.gov (Jim Vickroy)
Date: Tue, 23 Jun 2009 11:20:45 -0600
Subject: [SciPy-user] subprocess
Message-ID: <4A410EED.1090807@noaa.gov>

Nils Wagner wrote:
> I know how to import subprocess, but how do I deal with all the quotes
> in the argument list of my gimp command?

It is usually a good idea to (at a minimum) post:

    * a small script that demonstrates the error
    * the exact text of the error the script triggers
    * the version of Python you are using
    * the OS you are using

From nwagner at iam.uni-stuttgart.de Tue Jun 23 13:29:48 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 23 Jun 2009 19:29:48 +0200
Subject: [SciPy-user] subprocess
In-Reply-To: <4A410EED.1090807@noaa.gov>

Sure. I am using openSUSE 11.1 on x86_64 GNU/Linux, python2.6.

gimp --version
GNU Image Manipulation Program Version 2.6.2

The script script-autocrop.scm should be placed in ~/.gimp-2.6/scripts.

python -i autocrop.py

(gimp:8196): Gimp-Core-CRITICAL **: gimp_image_opened: assertion
`GIMP_IS_GIMP (gimp)' failed
GIMP-Fehler: »/home/nwagner/)'« konnte nicht geöffnet werden: Datei
oder Verzeichnis nicht gefunden

batch command executed successfully
batch command executed successfully

^CTraceback (most recent call last):
  File "autocrop.py", line 6, in <module>
    assert subprocess.call(cmd)==0, 'Error in cmd: %s' % cmd  # Returncode should be zero
gimp: terminated: Unterbrechung
  File "/usr/lib64/python2.6/subprocess.py", line 444, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib64/python2.6/subprocess.py", line 1137, in wait
    pid, sts = os.waitpid(self.pid, 0)
KeyboardInterrupt
/usr/lib64/gimp/2.0/plug-ins/script-fu terminated: Unterbrechung

"Datei oder Verzeichnis nicht gefunden" means "file or directory not
found".

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: autocrop.py
Type: text/x-python
Size: 273 bytes
-------------- next part --------------
A non-text attachment was scrubbed...
Name: script-autocrop.scm
Type: text/x-scheme
Size: 348 bytes

From gokhansever at gmail.com Tue Jun 23 13:43:38 2009
From: gokhansever at gmail.com (Gökhan SEVER)
Date: Tue, 23 Jun 2009 12:43:38 -0500
Subject: [SciPy-user] subprocess
Message-ID: <49d6b3500906231043m7c4bee0fqb9c2c124dc7a914f@mail.gmail.com>

On Tue, Jun 23, 2009 at 12:29 PM, Nils Wagner wrote:
> python -i autocrop.py
>
> (gimp:8196): Gimp-Core-CRITICAL **: gimp_image_opened: assertion
> `GIMP_IS_GIMP (gimp)' failed
> GIMP-Fehler: »/home/nwagner/)'« konnte nicht geöffnet werden: Datei
> oder Verzeichnis nicht gefunden

Tried the scripts, same error here too...

Python 2.5.2 (r252:60911, Sep 30 2008, 15:41:38)
Linux ccn 2.6.27.19-170.2.35.fc10.i686.PAE (Fedora 10)

[gsever at ccn ~]$ python -i autocrop.py

(gimp:10078): Gimp-Core-CRITICAL **: gimp_image_opened: assertion
`GIMP_IS_GIMP (gimp)' failed
GIMP-Error: Opening '/home/gsever/)'' failed: No such file or directory

batch command executed successfully
batch command executed successfully

^Cgimp: terminated: Interrupt
Traceback (most recent call last):
  File "autocrop.py", line 6, in <module>
    assert subprocess.call(cmd)==0, 'Error in cmd: %s' % cmd  # Returncode should be zero
  File "/usr/lib/python2.5/subprocess.py", line 444, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.5/subprocess.py", line 1122, in wait
    pid, sts = os.waitpid(self.pid, 0)
KeyboardInterrupt
/usr/lib/gimp/2.0/plug-ins/script-fu terminated: Interrupt

(script-fu:10081): LibGimp-WARNING **: script-fu: gimp_flush(): error:
Broken pipe

From nwagner at iam.uni-stuttgart.de Tue Jun 23 14:07:30 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 23 Jun 2009 20:07:30 +0200
Subject: [SciPy-user] subprocess

On Tue, 23 Jun 2009 13:57:13 -0400, "Nuttall, Brandon C" wrote:
> Nils,
>
> Maybe I don't understand what you want to do, but it looks like you
> want to batch process images from Python. Is there a reason you want
> to use GIMP (a wonderful program IMHO) instead of the Python Imaging
> Library (PIL)?
>
> Brandon

Brandon,

I tried PIL before, but I didn't find an autocrop tool in PIL. So I
moved to gimp.

Cheers,
Nils

From nwagner at iam.uni-stuttgart.de Tue Jun 23 14:42:41 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 23 Jun 2009 20:42:41 +0200
Subject: [SciPy-user] subprocess
In-Reply-To: <49d6b3500906231043m7c4bee0fqb9c2c124dc7a914f@mail.gmail.com>

On Tue, 23 Jun 2009 12:43:38 -0500, Gökhan SEVER wrote:
> Tried the scripts, same error here too...

What is the output of

gimp --batch-interpreter=plug-in-script-fu-eval -i -d -b
'(script-autocrop "a.png")' -b '(gimp-quit 0)'

I assume you have a png file called a.png in the current directory.

Nils
From gokhansever at gmail.com Tue Jun 23 14:47:54 2009
From: gokhansever at gmail.com (Gökhan SEVER)
Date: Tue, 23 Jun 2009 13:47:54 -0500
Subject: [SciPy-user] subprocess
Message-ID: <49d6b3500906231147j419f98bbsbd01b965267169be@mail.gmail.com>

On Tue, Jun 23, 2009 at 1:42 PM, Nils Wagner wrote:
> What is the output of
>
> gimp --batch-interpreter=plug-in-script-fu-eval -i -d -b
> '(script-autocrop "a.png")' -b '(gimp-quit 0)'

[gsever at ccn ~]$ gimp --batch-interpreter=plug-in-script-fu-eval -i -d
-b '(script-autocrop "a.png")' -b '(gimp-quit 0)'
batch command executed successfully

Why are we executing script-fu and not python-fu?

> I assume you have a png file called a.png in the current directory.

True, I have a dummy a.png file.
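On Gökhan's question: GIMP also ships a python-fu-eval batch interpreter, so in principle the crop can be driven with Python PDB calls instead of a Script-Fu file. A sketch, assuming GIMP was built with Python support (the PDB procedures gimp-file-load, plug-in-autocrop, gimp-file-save and gimp-quit are the standard ones, but the exact batch setup can vary between installations):

import subprocess

# The same batch job routed through the Python-fu interpreter,
# replacing script-autocrop.scm with inline PDB calls.
code = ('img = pdb.gimp_file_load("a.png", "a.png"); '
        'pdb.plug_in_autocrop(img, img.active_drawable); '
        'pdb.gimp_file_save(img, img.active_drawable, "a.png", "a.png"); '
        'pdb.gimp_quit(0)')

cmd = ["gimp", "-i", "--batch-interpreter=python-fu-eval", "-b", code]
subprocess.call(cmd)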
From dwf at cs.toronto.edu Tue Jun 23 15:35:27 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 23 Jun 2009 15:35:27 -0400
Subject: [SciPy-user] ndimage.convolve behaviour?
In-Reply-To: <9457e7c80906230254s30dfca56t6067999f7a8de4c3@mail.gmail.com>

On 23-Jun-09, at 5:54 AM, Stéfan van der Walt wrote:
> Convolution flips the filter around, unlike correlation, so you are
> seeing:
>
> (X[:5, :5] * np.fliplr(np.flipud(fm.filter))).sum()

Ah, thanks. Is there some reason/convention for this that I'm unaware
of? Should this be documented somewhere?

David

From robert.kern at gmail.com Tue Jun 23 15:38:49 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 23 Jun 2009 14:38:49 -0500
Subject: [SciPy-user] ndimage.convolve behaviour?
Message-ID: <3d375d730906231238q508d9889x7d664456811bc52f@mail.gmail.com>

On Tue, Jun 23, 2009 at 14:35, David Warde-Farley wrote:
> Ah, thanks. Is there some reason/convention for this that I'm unaware
> of?

Yes. Convolution flips the filter around and correlation doesn't.
That's the convention. :-)

> Should this be documented somewhere?

Oh, probably.

--
Robert Kern
From saintmlx at apstat.com Tue Jun 23 17:16:43 2009
From: saintmlx at apstat.com (Xavier Saint-Mleux)
Date: Tue, 23 Jun 2009 17:16:43 -0400
Subject: [SciPy-user] subprocess
Message-ID: <4A41463B.4080501@apstat.com>

Nils Wagner wrote:
> ^CTraceback (most recent call last):
>   File "autocrop.py", line 6, in <module>
>     assert subprocess.call(cmd)==0, 'Error in cmd: %s' % cmd

subprocess.call has shell=False by default, which means that the
parameters are not interpreted by a shell and are sent as is. In your
case, it means that "'(script-autocrop ", the filename, and ")'" are
three separate parameters instead of one; you should concatenate
everything into a single string. Also, the single quotes are
superfluous: with them, '(gimp-quit 0)' is a quoted list, not a
function call, and is always "executed successfully" without doing
anything.

e.g. (I have not installed your scheme script):

>>> cmd=["gimp", "--batch-interpreter=plug-in-script-fu-eval", "-i",
"-d", "-b", "'(gimp-quit 0)'"]
>>> assert subprocess.call(cmd)==0, 'Error in cmd: %s' % cmd
batch command: executed successfully.
[gimp never exits; the session has to be interrupted with Ctrl-C]

>>> cmd=["gimp", "--batch-interpreter=plug-in-script-fu-eval", "-i",
"-d", "-b", "(gimp-quit 0)"]
>>> assert subprocess.call(cmd)==0, 'Error in cmd: %s' % cmd
>>>

Hope this helps,

Xavier

P.S. Sorry if you get this message twice; I sent it to the list but it
is still not showing up after almost 3h.
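Applying Xavier's advice to the original autocrop command gives an argument list like the following (a sketch only; the thread's autocrop.py was an attachment and is not reproduced here):

import subprocess

# One list element per argv entry: the whole Scheme expression is a
# single element, and the shell-style outer quotes are dropped because
# no shell is involved (shell=False).
cmd = ["gimp", "--batch-interpreter=plug-in-script-fu-eval", "-i", "-d",
       "-b", '(script-autocrop "a.png")',
       "-b", "(gimp-quit 0)"]

ret = subprocess.call(cmd)
assert ret == 0, 'Error in cmd: %s' % cmd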
From cwainwri at ucsc.edu Tue Jun 23 20:37:34 2009
From: cwainwri at ucsc.edu (Max Wainwright)
Date: Tue, 23 Jun 2009 17:37:34 -0700
Subject: [SciPy-user] solving an ode with boundary conditions

I have a 1D second-order differential equation that I'm trying to
solve with the following boundary conditions: dy/dr = 0 at r = 0 and
y(r) = 0 at r = infinity. I can solve this by doing guess-and-check
for the initial value of y at r = 0. If I guess too high, then y goes
to negative infinity as r goes to infinity. If I guess too low, y
never reaches zero and instead oscillates about some minimum.
Therefore, to check whether I've overshot or undershot the solution,
all I need to do is stop the integration once either y or -dy/dr goes
negative. Is there any way to do this with scipy? I also tried looking
at PyDSTool, but I had a hard time finding what I need. I don't in
principle know the time-scale of the solution (I'll be doing this for
lots of different parameters), so I'd like to avoid using a fixed
timestep. Thanks.

-Max

From rob.clewley at gmail.com Tue Jun 23 21:57:43 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 23 Jun 2009 21:57:43 -0400
Subject: [SciPy-user] solving an ode with boundary conditions

On Tue, Jun 23, 2009 at 8:37 PM, Max Wainwright wrote:
> Therefore, to check whether I've overshot or undershot the solution,
> all I need to do is stop the integration once either y or -dy/dr goes
> negative. Is there any way to do this with scipy?

It's pretty easy to set up a standard minimizer from scipy to use an
adaptive time step solver of your choice to implement the "shooting
method" for this boundary value problem (BVP), which is simple enough
to permit this method of solution. You just need to measure your
over-/undershoot numerically in such a way that it provides correct
feedback to the optimizer. PyDSTool would make catching the event of y
or -dy/dr going negative very simple, but it does not have a BVP
solver itself either. There are plenty of tutorial and demo scripts
about that online. This would be hard with scipy's solvers. If you set
up an attempt and need more help you can post it here for more
feedback.

Scipy does not have a BVP solver built in either. You can also try
this recent package:
http://www.elisanet.fi/ptvirtan/software/bvp/index.html
but that might be like using a sledgehammer to crack a nut, as we say.

-Rob
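A bare-bones version of the shooting method Rob describes, using scipy's adaptive ode wrapper and Max's two termination tests (the right-hand side rhs, the horizon, the step, and the initial bracket are all placeholders for the poster's actual problem):

import numpy as np
from scipy.integrate import ode

def classify(y0, rhs, t_max=100.0, dt=0.01):
    # Integrate the first-order system s = [y, ydot] from y(0) = y0,
    # ydot(0) = 0 and classify the guess:
    #   +1 -> overshoot  (y went negative on its way to -infinity)
    #   -1 -> undershoot (ydot turned positive: y is heading back up)
    #    0 -> undecided within t_max
    r = ode(rhs).set_integrator('vode')
    r.set_initial_value([y0, 0.0], 0.0)
    while r.successful() and r.t < t_max:
        r.integrate(r.t + dt)
        y, ydot = r.y
        if y < 0.0:
            return +1
        if ydot > 0.0:
            return -1
    return 0

def shoot(rhs, lo, hi, tol=1e-10):
    # Bisect on the unknown initial value; assumes lo undershoots and
    # hi overshoots, so the sought value is bracketed between them.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if classify(mid, rhs) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

The checks inside the integration loop are exactly the "stop once y or -dy/dr goes negative" tests, just applied between adaptive steps rather than as true event detection.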
From cwainwri at ucsc.edu Tue Jun 23 22:27:15 2009
From: cwainwri at ucsc.edu (Max Wainwright)
Date: Tue, 23 Jun 2009 19:27:15 -0700
Subject: [SciPy-user] solving an ode with boundary conditions
Message-ID: <81CCC0DE-F55B-470C-9C01-8E55C9EF66F5@ucsc.edu>

On Jun 23, 2009, at 6:57 PM, Rob Clewley wrote:
> PyDSTool would make catching the event of y or -dy/dr going negative
> very simple, but it does not have a BVP solver itself either.

Could you kindly point me to an example PyDSTool script that does some
sort of variable tracking (i.e., checking when y goes negative)? I'm
completely new to the package, and I can't find it in the wiki.

I don't think I can use the standard odeint or ode from scipy, because
I don't know the point at which the over- or undershooting will become
obvious. It could be at r = 0.1, or it could be at r = 100. I suppose
I could dynamically change the integration range, but this seems kind
of kludgy. Thanks again.

-Max

From rob.clewley at gmail.com Tue Jun 23 22:39:16 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 23 Jun 2009 22:39:16 -0400
Subject: [SciPy-user] solving an ode with boundary conditions
In-Reply-To: <81CCC0DE-F55B-470C-9C01-8E55C9EF66F5@ucsc.edu>

> Could you kindly point me to an example PyDSTool script that does some
> sort of variable tracking (i.e., checking when y goes negative)? I'm
> completely new to the package, and I can't find it in the wiki.

On the wiki it is here:
http://www.cam.cornell.edu/~rclewley/cgi-bin/moin.cgi/Events

If you have any suggestions about navigating the wiki documentation
that might improve the experience for new users, please send them to
me privately.

As for example scripts, you can search for "Events" in the
PyDSTool/tests directory and find it in many of them, but I suggest
you start with vode_event_test1.py, which should contain all you need
for your problem. There, the Event object thresh_ev_term will
terminate integration when the variable w crosses parameter p_thresh
from above (i.e. w - p_thresh goes from positive to negative). Details
of the set-up are explained on the Events wiki page above.

If your RHS is fairly complex, I suggest you define an "auxiliary
function" for it and use that both to define your ODE's RHS and to
define the zero-crossing condition for the event. If your RHS is
fairly simple you can just retype the definition in the event
definition.

> I don't think I can use the standard odeint or ode from scipy, because
> I don't know the point at which the over- or undershooting will become
> obvious. It could be at r = 0.1, or it could be at r = 100. I suppose
> I could dynamically change the integration range, but this seems kind
> of kludgy.

I agree. I think you will be much better off even using the slower
built-in integrator Vode, which has been wrapped in PyDSTool from its
scipy form in order to support event detection. It should be adequate
for your 1D problem without resorting to the other PyDSTool
integrators, which have more installation dependencies.
-Rob

From berthold.hoellmann at gl-group.com Wed Jun 24 11:23:14 2009
From: berthold.hoellmann at gl-group.com (Berthold "Höllmann")
Date: Wed, 24 Jun 2009 17:23:14 +0200
Subject: [SciPy-user] Compiling debug version of extension using numpy.distutils
Message-ID:

I have a set of python extension modules bundled into one project.
Trying to compile a debug version of these extensions I run into a
strange problem:

$ /cygdrive/c/Python25_d/python_d.exe setup.py config_fc --fcompiler=intelv --f77flags=/fpp --f90flags=/fpp --debug build_ext
...
LINK : fatal error LNK1104: Datei 'python25.lib' kann nicht geöffnet werden
...

(the error says: can't open 'python25.lib'; it should use
'python25_d.lib' instead)

$ /cygdrive/c/Python25_d/python_d.exe setup.py config_fc --fcompiler=intelv --f77flags=/fpp --f90flags=/fpp --debug build_ext --debug
...
ifort: command line error: option '/g' is ambiguous
...

$ ifort
Intel(R) Visual Fortran Compiler Professional for applications running on IA-32, Version 11.0 Build 20090318 Package ID: w_cprof_p_11.0.074
Copyright (C) 1985-2009 Intel Corporation. All rights reserved.

$ cl
Microsoft (R) 32-Bit C/C++-Standardcompiler Version 13.10.3077 für 80x86
Copyright (C) Microsoft Corporation 1984-2002. Alle Rechte vorbehalten.

$ python -c "import numpy;print numpy.version.version;import sys; print sys.version"
1.3.0
2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]

What am I supposed to do to get a debug version of my extensions?

Kind regards
Berthold Höllmann
--
Germanischer Lloyd AG
CAE Development
Vorsetzen 35
20459 Hamburg
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellmann at gl-group.com
Internet: http://www.gl-group.com

From rowen at u.washington.edu Wed Jun 24 13:31:14 2009
From: rowen at u.washington.edu (Russell E. Owen)
Date: Wed, 24 Jun 2009 10:31:14 -0700
Subject: [SciPy-user] install ndimage standalone?
Message-ID:

Is there some way to easily obtain and install the ndimage component of
scipy without installing the whole thing? I'm trying to migrate some
old Numarray code that used ndimage and I don't want to have to install
all of scipy just to use ndimage.

-- Russell

From nwagner at iam.uni-stuttgart.de Wed Jun 24 13:31:59 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 24 Jun 2009 19:31:59 +0200
Subject: Re: [SciPy-user] call gimp's autocrop function from python
In-Reply-To: <49d6b3500906231014o55b3e62ah97783c369a83c60b@mail.gmail.com>
References: <4A41083D.7070002@noaa.gov> <49d6b3500906231014o55b3e62ah97783c369a83c60b@mail.gmail.com>
Message-ID:

> Could you please share your findings? I would also like to learn how
> to script Blender once I get going on Gimp.

Finally I made it. Please find enclosed a solution.

Cheers,

Nils

-------------- next part --------------
A non-text attachment was scrubbed...
Name: autocrop.py
Type: text/x-python
Size: 879 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: script-autocrop.scm
Type: text/x-scheme
Size: 359 bytes
Desc: not available
URL:

From gokhansever at gmail.com Wed Jun 24 13:46:59 2009
From: gokhansever at gmail.com (Gökhan SEVER)
Date: Wed, 24 Jun 2009 12:46:59 -0500
Subject: Re: [SciPy-user] call gimp's autocrop function from python
In-Reply-To:
References: <4A41083D.7070002@noaa.gov> <49d6b3500906231014o55b3e62ah97783c369a83c60b@mail.gmail.com>
Message-ID: <49d6b3500906241046sc0d1b7dg29ee066314343150@mail.gmail.com>

On Wed, Jun 24, 2009 at 12:31 PM, Nils Wagner wrote:
>> Could you please share your findings? I would also like to learn how
>> to script Blender once I get going on Gimp.
>
> Finally I made it. Please find enclosed a solution.
>
> Cheers,
>
> Nils

Hey Nils,

With a minute modification I have made it work here, too
("gimp_version == 2.6.6").

Can't we directly access python-fu scripts without using that scm
file? To me it seems easier that way. However, I still don't know how
to externally access python-fu.

Thanks for sharing your results.

Gökhan

From pav at iki.fi Wed Jun 24 14:02:08 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 24 Jun 2009 18:02:08 +0000 (UTC)
Subject: Re: [SciPy-user] install ndimage standalone?
References:
Message-ID:

On 2009-06-24, Russell E. Owen wrote:
> Is there some way to easily obtain and install the ndimage component of
> scipy without installing the whole thing? I'm trying to migrate some
> old Numarray code that used ndimage and I don't want to have to install
> all of scipy just to use ndimage.

There's a setup.py under scipy/ndimage in the source directory, and I
believe you can use it to build the ndimage package separately.

-- Pauli Virtanen

From strozzi2 at llnl.gov Thu Jun 25 01:07:52 2009
From: strozzi2 at llnl.gov (David J Strozzi)
Date: Wed, 24 Jun 2009 22:07:52 -0700
Subject: [SciPy-user] interpolation questions
Message-ID:

I have a few questions about interpolation in scipy. First, it would
be nice if the documentation included a concrete example, for
instance:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.PiecewisePolynomial.html#scipy.interpolate.PiecewisePolynomial

Now for my actual problem: I am specifying some profiles for a
physics simulation, and would like to give a few (x,y) pairs and have
a nice, smooth interpolation. Actually, it is common for me to want
to give extreme points, and get an interpolant that is monotonic and
without overshoot on each interval. Splines (for order above linear)
are well known to be very 'smooth' (lots of continuous derivatives)
but suffer from overshoot and non-monotonicity.

Perhaps there's a good solution to this in piecewise polynomials?
But there's no sample usage, and I got confused about what 'list of
lists' for y to use.

I know of an interpolant, made by some other Livermorons (Frisch?
SIAM JNA ~1980), which maybe is called pchip in matlab (piecewise
cubic hermite interpolating polynomial). It's supposed to be
monotone and better than linear.

Thanks for your help,
David

From vanforeest at gmail.com Thu Jun 25 04:36:03 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Thu, 25 Jun 2009 10:36:03 +0200
Subject: [SciPy-user] using numpy arrays in sets
Message-ID:

Hi,

I need to store numpy arrays in a python set. This is of course not
possible right away as arrays are not hashable. One way to achieve
this is to convert arrays to tuples, and use these tuples instead.
However, tuples need much more memory than arrays, which for my
purposes is undesirable. Does anybody know of a method to store numpy
arrays in a set (or something that gives similar behavior)?

Thanks in advance

Nicky

From josef.pktd at gmail.com Thu Jun 25 04:54:01 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 25 Jun 2009 04:54:01 -0400
Subject: Re: [SciPy-user] interpolation questions
In-Reply-To:
References:
Message-ID: <1cd32cbb0906250154k1826f0adm368d6d3104a98732@mail.gmail.com>

On Thu, Jun 25, 2009 at 1:07 AM, David J Strozzi wrote:
> I have a few questions about interpolation in scipy. First, it would
> be nice if the documentation included a concrete example, for
> instance:
>
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.PiecewisePolynomial.html#scipy.interpolate.PiecewisePolynomial
>
> [snip]

Just to get you started (I don't know anything about this):
help(interpolate.PiecewisePolynomial) is more informative than the
online help. There are some examples in the tests in
scipy\interpolate\tests\test_polyint.py

There have been questions about monotonic interpolation before on the
mailing list, but I've never seen a positive answer.

Josef

From faltet at pytables.org Thu Jun 25 06:42:49 2009
From: faltet at pytables.org (Francesc Alted)
Date: Thu, 25 Jun 2009 12:42:49 +0200
Subject: Re: [SciPy-user] using numpy arrays in sets
In-Reply-To:
References:
Message-ID: <200906251242.49337.faltet@pytables.org>

A Thursday 25 June 2009 10:36:03 nicky van foreest escrigué:
> Hi,
>
> I need to store numpy arrays in a python set. This is of course not
> possible right away as arrays are not hashable. One way to achieve
> this is to convert arrays to tuples, and use these tuples instead.
> However, tuples need much more memory than arrays, which for my
> purposes is undesirable. Does anybody know of a method to store numpy
> arrays in a set (or something that gives similar behavior)?

One possibility is to convert your arrays into strings by using the
`tostring()` method. Then you can reconstruct your arrays with
`fromstring()`. However, you will need to add type and shape
information during the reconstruction process.

Another possibility, in case you want to avoid a copy of the array data
buffer, is to turn your arrays read-only (arr.flags.writeable=False)
and use the data buffer `arr.data` as the entry for the set.
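Something along these lines (an untested sketch; I keep the dtype and
shape next to the buffer so the set entries stay self-describing):

import numpy as np

def freeze(arr):
    # untested sketch: make the array immutable, then build a hashable key
    arr.flags.writeable = False
    return (arr.data, arr.dtype, arr.shape)

def thaw(key):
    # rebuild a (read-only) array from such a key
    buf, dtype, shape = key
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

s = set()
s.add(freeze(np.arange(5)))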
To reconstruct the array, you will have to use `frombuffer()` and add
type and shape information afterwards, as above. That should be pretty
efficient, but the retrieved arrays will remain read-only.

Hope that helps,

--
Francesc Alted

From lorenzo.isella at gmail.com Thu Jun 25 07:01:32 2009
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Thu, 25 Jun 2009 13:01:32 +0200
Subject: [SciPy-user] Ordering and Counting the Repetitions of the Rows of a Matrix
In-Reply-To:
References:
Message-ID: <4A43590C.2080803@gmail.com>

Dear All,
I dug up an old post of mine to this list (the problem was mainly how
to get rid of multiple rows in a matrix while counting the multiple
occurrences of each row).
Now, the problem is slightly more complex. The matrix is of the kind

A = 1 2
    2 3
    9 9
    4 4
    1 2
    3 2

but this time, you consider the row with entries (2 3) equal to the one
with entries (3 2), i.e. this time the ordering of elements within a
row does not matter.
How can I still calculate the repetitions of each row in the sense
explained above and obtain the 'repetition-free' matrix?
Furthermore, suppose that you have the matrix

B = 2 1 2 4
    4 2 3 9
    8 9 9 7
    5 4 4 1
    6 1 2 2
    4 3 2 9

Now, you have extra elements with respect to matrix A, but you consider
two rows equal if the first and fourth entries are coincident and the
second and third entries are the same numbers, possibly swapped (as in
the case of matrix A). E.g. the second and last rows of matrix B would
be considered equal in this case. You still want the number of
occurrences of each row (with the new concept of equal rows) and the
repetition-free matrix.
Any ideas about how this could be efficiently implemented?
Many thanks

Lorenzo

> Date: Sun, 27 Jul 2008 15:46:29 -0400
> From: "Warren Weckesser"
> Subject: Re: [SciPy-user] Ordering and Counting the Repetitions of
> the Rows of a Matrix
> To: "SciPy Users List"
> Message-ID: <114880320807271246x1c922e7cg9539684fbad7bed9 at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Lorenzo,
>
> Given a matrix A like you showed, here is one way to find (and count)
> the unique rows:
> ----------
> d = {}
> for r in A:
>     t = tuple(r)
>     d[t] = d.get(t, 0) + 1
> # The dict d now has the counts of the unique rows of A.
> B = numpy.array(d.keys())    # The unique rows of A
> C = numpy.array(d.values())  # The counts of the unique rows
> ----------
> For a large number of rows (e.g. 10000), this appears to be
> significantly faster than the code that David Kaplan suggested in his
> email earlier today.
>
> Regards,
> Warren
>
> On Sun, Jul 27, 2008 at 12:17 PM, Lorenzo Isella wrote:
>> > Dear All,
>> > Consider an Nx2 matrix of the kind:
>> >
>> > A = 1  2
>> >     3 13
>> >     1  2
>> >     6  8
>> >     3 13
>> >     2  9
>> >     1  1
>> >
>> > The first entry in each row is always smaller or equal than the
>> > second entry in the same row.
>> > Now there are two things I would like to do with this A matrix:
>> > (1) With a sort of n.unique1d (but I have not been very successful
>> > yet), let each row of A appear only once (i.e. get rid of the
>> > repetitions). Therefore one should obtain the matrix:
>> > B = 1  2
>> >     3 13
>> >     6  8
>> >     2  9
>> >     1  1
>> >
>> > (2) At the same time, efficiently count how many times each row of
>> > B appeared in A. I would like to get a C vector counting them as:
>> >
>> > C = 2
>> >     2
>> >     1
>> >     1
>> >     1
>> >
>> > Any suggestions about an efficient way of achieving this?
>> > Many thanks
>> >
>> > Lorenzo

From wierob83 at googlemail.com Thu Jun 25 08:26:47 2009
From: wierob83 at googlemail.com (wierob)
Date: Thu, 25 Jun 2009 14:26:47 +0200
Subject: [SciPy-user] How to perform a multiple linear regression analysis with Scipy?
Message-ID: <4A436D07.7090208@googlemail.com>

Hi,

I'm trying to automate multiple linear regression analysis (i.e. cases
where a straight trend line y = m*x + n does not fit) for a set of
samples.

For the simple case where the trend line adheres to y = m*x + n I found
stats.linregress. polyfit can fit polynomials but does not calculate
statistical values for goodness of fit. I could not find a function
that does a regression analysis for e.g. a parabola.

Specifically, I need:
- the parameters of the fitted function for plotting
- residuals for plotting and evaluation of the correlation
- Q-Q plots?
- statistics for evaluating the goodness of fit:
  - (adjusted) coefficient of determination
  - std errors / deviation
  - F-test / p-values

I managed to get some of these using lstsq as follows:

from pylab import *

# example data taken from http://en.wikipedia.org/wiki/Linear_regression#Example
# Height (m)
height = [1.47, 1.5, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
          1.68, 1.7, 1.73, 1.75, 1.78, 1.8, 1.83]
# Weight (kg)
weight = [52.21, 53.12, 54.48, 55.84, 57.2, 58.57, 59.93,
          61.29, 63.11, 64.47, 66.28, 68.1, 69.92, 72.19, 74.46]

observed = weight  # just to retain the terminology
sample_size = len(height)  # = len(weight)

# assume parabola -> weight = p0 + p1 * height + p2 * height**2
number_of_params = 3  # p0, p1 and p2
X = zeros((len(height), number_of_params), float)
X[:,0] = 1
X[:,1] = height
X[:,2] = [h**2 for h in height]

# params - parameters of the parabola, sse - sum of squared errors ???
params, sse = lstsq(X, weight)[:2]
predicted = [ params[0] + params[1]*h + params[2]*h**2 for h in height ]
predicted_error_u = [ params[0] + params[1]*h + params[2]*h**2 + sse for h in height ]  # ???
predicted_error_l = [ params[0] + params[1]*h + params[2]*h**2 - sse for h in height ]  # ???

# according to http://en.wikipedia.org/wiki/Coefficient_of_determination
mean_observed = mean(observed)
mean_predicted = mean(predicted)
ss_tot = sum([ (o-mean_observed)**2 for o in observed ])
ss_reg = sum([ (p-mean_observed)**2 for p in predicted ])
ss_err = sum([ (o-p)**2 for o, p in zip(observed, predicted) ])  # == sse ??
r_squared = 1 - ss_err / ss_tot  # coefficient of determination

r_squared_adjusted = 1 - (1 - r_squared) * ((sample_size - 1) / (sample_size - number_of_params-1 - 1))  # ??

clf()
plot(height, weight, "b.", label="observed")
plot(height, predicted, 'r', label='predicted')
plot(height, predicted_error_u, 'r:', label='error')
plot(height, predicted_error_l, 'r:')
text(1.565, 77, "weight = %.2f + %.2f*height + %.2f*height^2" % tuple(params))
text(1.565, 76, "R-squared = %f (adjusted = %f)" % (r_squared, r_squared_adjusted))
text(1.565, 75, "ss_tot = %f" % ss_tot)
text(1.565, 74, "ss_reg = %f" % ss_reg)
text(1.565, 73, "ss_err = %f" % ss_err)
title("Observed and predicted data")
xlabel("Height (m)")
ylabel("Weight (kg)")
legend(loc=0)
savefig("regression.png")

residuals = [ o - e for o, e in zip(observed, predicted) ]

clf()
plot(height, residuals, "b.", label="residuals")
plot(gca().get_xlim(), [0, 0], "k")
title("Residual plot")
xlabel("Height (m)")
ylabel("observed Weight - predicted Weight (kg)")
savefig("residual_plot.png")

I compared the results to R:

height <- c(1.47, 1.5, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.7, 1.73, 1.75, 1.78, 1.8, 1.83)
weight <- c(52.21, 53.12, 54.48, 55.84, 57.2, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.1, 69.92, 72.19, 74.46)
h <- height*height
summary(lm(weight ~ height + h))

and they match fairly well, except for the adjusted R-squared, which
comes out equal to R-squared in the python code. However, I have no
idea how to compute the F-statistic.

Does Scipy have a function (or more) that already does this? If not,
can anybody give me a hint on how to compute the missing values
(stderr, F-statistic)?
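My best guess for the F-statistic, pieced together from the ANOVA
decomposition, would be something like this (untested, and exactly the
part I'd like someone to confirm):

from scipy import stats

df_reg = number_of_params - 1            # regressors, not counting the intercept
df_err = sample_size - number_of_params  # residual degrees of freedom
F = (ss_reg / df_reg) / (ss_err / df_err)
p_value = stats.f.sf(F, df_reg, df_err)  # untested -- upper tail of the F distribution

Is that the right way to do it?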
Thanks in advance.

regards
robert

From faltet at pytables.org Thu Jun 25 09:04:36 2009
From: faltet at pytables.org (Francesc Alted)
Date: Thu, 25 Jun 2009 15:04:36 +0200
Subject: Re: [SciPy-user] using numpy arrays in sets
In-Reply-To:
References: <200906251242.49337.faltet@pytables.org>
Message-ID: <200906251504.36743.faltet@pytables.org>

A Thursday 25 June 2009 13:12:44 nicky van foreest escrigué:
> Dear Francesc,
>
> Thanks. Your second suggestion works really nicely! I also thought
> about the first solution, converting arrays to strings, but even
> empty strings come with a memory penalty of 40 bytes (on my machine).

Hmm, now that I think about this, I ask myself if it would not be
useful to implement a functional `__hash__()` method for read-only
arrays. Perhaps there is a show stopper for that, but I can't see it.

> As an aside I am also exploring pytables. This appears really useful
> for me. Hence, another thanks!

Good!

--
Francesc Alted

From bsouthey at gmail.com Thu Jun 25 09:34:41 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 25 Jun 2009 08:34:41 -0500
Subject: Re: [SciPy-user] How to perform a multiple linear regression analysis with Scipy?
In-Reply-To: <4A436D07.7090208@googlemail.com>
References: <4A436D07.7090208@googlemail.com>
Message-ID: <4A437CF1.8020501@gmail.com>

On 06/25/2009 07:26 AM, wierob wrote:
> Hi,
>
> I'm trying to automate multiple linear regression analysis (i.e. cases
> where a straight trend line y = m*x + n does not fit) for a set of
> samples.
>
> [snip]
Hi,

Numerous options to stop reinventing the wheel, which also do
everything in more general 'matrix' terms:
http://www.scipy.org/Cookbook/OLS
http://code.google.com/p/econpy/source/browse/trunk/pytrix/ls.py
Josef's work:
http://thread.gmane.org/gmane.comp.python.scientific.user/18990/
Jonathan Taylor's statistical models, also discussed on the list as
part of Skipper Seabold's GSoC project:
http://article.gmane.org/gmane.comp.python.scientific.devel/10625/

Bruce

From jsseabold at gmail.com Thu Jun 25 09:46:42 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Thu, 25 Jun 2009 09:46:42 -0400
Subject: Re: [SciPy-user] How to perform a multiple linear regression analysis with Scipy?
In-Reply-To: <4A437CF1.8020501@gmail.com>
References: <4A436D07.7090208@googlemail.com> <4A437CF1.8020501@gmail.com>
Message-ID:

On Thu, Jun 25, 2009 at 9:34 AM, Bruce Southey wrote:
> [snip]
> > regards > robert > > ________________________________ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > Hi, > Numerous options to stop reinventing the wheel plus do everything using a > more general 'matrix' terms: > http://www.scipy.org/Cookbook/OLS > http://code.google.com/p/econpy/source/browse/trunk/pytrix/ls.py > Josef's work: > http://thread.gmane.org/gmane.comp.python.scientific.user/18990/ > Jonathan Taylor's statistical models also discussed on the list as part of > Skipper Seabold's GSoC project: > http://article.gmane.org/gmane.comp.python.scientific.devel/10625/ > > > > Bruce > Those examples should get you there, but if you want help understanding where they come from without looking at any code you could take a look at the ANOVA table here and derive the R-Squared, Adj R-Squared, MSE, and F Stat. Cheers, Skipper From robert.kern at gmail.com Thu Jun 25 13:33:27 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 25 Jun 2009 12:33:27 -0500 Subject: [SciPy-user] usings numpy arrays in sets In-Reply-To: <200906251504.36743.faltet@pytables.org> References: <200906251242.49337.faltet@pytables.org> <200906251504.36743.faltet@pytables.org> Message-ID: <3d375d730906251033r781ff5eepb64fb64a3e5d9a6f@mail.gmail.com> On Thu, Jun 25, 2009 at 08:04, Francesc Alted wrote: > A Thursday 25 June 2009 13:12:44 nicky van foreest escrigu?: >> Dear Francesc, >> >> Thanks. Your second suggestion works really nicely! I also thought >> about the first solution, converting arrays to strings, but even emtpy >> strings come with a memory penalty of 40 bytes (on my machine). > > Hmm, now that I think about this, I ask myself if it would not be useful to > implement a functional `__hash__()` method for read-only arrays. ?Perhaps > there is a show stopper for that, but I can't see it. Even if the memory is not writable from your array doesn't mean that it isn't being modified from another. In [1]: a = arange(10) In [2]: b = a[:] In [3]: b.flags.writeable = False In [4]: b Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) In [5]: a[3:7] = 10 In [6]: a Out[6]: array([ 0, 1, 2, 10, 10, 10, 10, 7, 8, 9]) In [7]: b Out[7]: array([ 0, 1, 2, 10, 10, 10, 10, 7, 8, 9]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From faltet at pytables.org Thu Jun 25 13:53:43 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 25 Jun 2009 19:53:43 +0200 Subject: [SciPy-user] usings numpy arrays in sets In-Reply-To: <3d375d730906251033r781ff5eepb64fb64a3e5d9a6f@mail.gmail.com> References: <200906251504.36743.faltet@pytables.org> <3d375d730906251033r781ff5eepb64fb64a3e5d9a6f@mail.gmail.com> Message-ID: <200906251953.43210.faltet@pytables.org> A Thursday 25 June 2009 19:33:27 Robert Kern escrigu?: > > Hmm, now that I think about this, I ask myself if it would not be useful > > to implement a functional `__hash__()` method for read-only arrays. > > ?Perhaps there is a show stopper for that, but I can't see it. > > Even if the memory is not writable from your array doesn't mean that > it isn't being modified from another. 
>
> [snip]

Yep. However, one could also check for the array owning the data, and
if true, then we can be pretty sure that the array is immutable. I like
the idea of having hashable arrays; they can be handy in some
scenarios.

--
Francesc Alted

From robert.kern at gmail.com Thu Jun 25 14:04:11 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Jun 2009 13:04:11 -0500
Subject: Re: [SciPy-user] using numpy arrays in sets
In-Reply-To: <200906251953.43210.faltet@pytables.org>
References: <200906251504.36743.faltet@pytables.org> <3d375d730906251033r781ff5eepb64fb64a3e5d9a6f@mail.gmail.com> <200906251953.43210.faltet@pytables.org>
Message-ID: <3d375d730906251104q2dd8e7c7oa41337948b8150c7@mail.gmail.com>

On Thu, Jun 25, 2009 at 12:53, Francesc Alted wrote:
> A Thursday 25 June 2009 19:33:27 Robert Kern escrigué:
>> [snip]
>
> Yep. However, one could also check for the array owning the data, and
> if true, then we can be pretty sure that the array is immutable. I
> like the idea of having hashable arrays; they can be handy in some
> scenarios.

Is that a challenge? :-)

In [15]: a = np.arange(10)

In [16]: a.flags.writeable = False

In [17]: d = a.__array_interface__.copy()

In [18]: d['data'] = (d['data'][0], False)

In [19]: b = np.asarray(np.lib.stride_tricks.DummyArray(d))

In [20]: b.flags
Out[20]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [21]: b[3:7] = 10

In [22]: a
Out[22]: array([ 0,  1,  2, 10, 10, 10, 10,  7,  8,  9])

I agree that it would be handy, but hashability is not the only
problem. When hashes collide, the objects are then compared by
equality. This is a problem for numpy arrays because we do not return
bools.

The proper fix is to make a set() implementation that allows you to
provide your own hash and equality functions. This is a general
solution to a problem that affects more than just numpy arrays.
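Such a thing could be as small as this (an untested sketch, Python 2;
the class name and key function are illustrative only):

class FuncSet(object):
    # untested sketch: set-like container that hashes and compares
    # its elements via a user-supplied key function
    def __init__(self, key):
        self.key = key   # e.g. lambda arr: arr.tostring()
        self._data = {}
    def add(self, obj):
        self._data[self.key(obj)] = obj
    def __contains__(self, obj):
        return self.key(obj) in self._data
    def __iter__(self):
        return self._data.itervalues()
    def __len__(self):
        return len(self._data)

With key=lambda arr: arr.tostring(), membership would be by value
without ever calling the arrays' own __hash__ or __eq__.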
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From gael.varoquaux at normalesup.org Thu Jun 25 14:17:57 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 25 Jun 2009 20:17:57 +0200
Subject: Re: [SciPy-user] using numpy arrays in sets
In-Reply-To: <3d375d730906251104q2dd8e7c7oa41337948b8150c7@mail.gmail.com>
References: <200906251504.36743.faltet@pytables.org> <3d375d730906251033r781ff5eepb64fb64a3e5d9a6f@mail.gmail.com> <200906251953.43210.faltet@pytables.org> <3d375d730906251104q2dd8e7c7oa41337948b8150c7@mail.gmail.com>
Message-ID: <20090625181757.GF28552@phare.normalesup.org>

On Thu, Jun 25, 2009 at 01:04:11PM -0500, Robert Kern wrote:
> I agree that it would be handy, but hashability is not the only
> problem. When hashes collide, the objects are then compared by
> equality. This is a problem for numpy arrays because we do not return
> bools.

> The proper fix is to make a set() implementation that allows you to
> provide your own hash and equality functions. This is a general
> solution to a problem that affects more than just numpy arrays.

I came up with this problem when I was trying to implement something
like a memoize pattern for functions that were taking in arrays. I came
up with a fairly complex solution that I don't want to expose in detail
here, but it involved using the 'id' of the arrays as a hash, and
actually using this id as a key in the set or dictionary.

That should probably be considered as a band aid, but my experience is
that you can solve a lot of your hashing-related problems with that
band aid, if you take it into account when designing your code (i.e.
you keep in mind that you have mutables, and that id(a) != id(b) does
not mean that they do not share the data).

My 2 cents,

Gaël

From emmanuelle.gouillart at normalesup.org Thu Jun 25 15:27:59 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Thu, 25 Jun 2009 21:27:59 +0200
Subject: Re: [SciPy-user] Ordering and Counting the Repetitions of the Rows of a Matrix
In-Reply-To: <4A43590C.2080803@gmail.com>
References: <4A43590C.2080803@gmail.com>
Message-ID: <20090625192759.GA16545@phare.normalesup.org>

Hi Lorenzo,

if you just sort each row of your array (A.sort(axis=1)), then you're
back to your former problem, right? For your 4-column array you can
sort only the two central columns.

BTW, I had a look at your previous question and here's a solution that
hadn't been proposed - if I read through the thread correctly.

>>> a = np.array([[1, 2], [2, 3], [1, 2], [3, 4], [2, 3]])
>>> dt = a.dtype
>>> newdt = [('', dt)] * 2
>>> b = a.view(newdt)
>>> b = b.ravel()
>>> c = np.unique1d(b)
>>> c
array([(1, 2), (2, 3), (3, 4)],
      dtype=[('f0', '<i4'), ('f1', '<i4')])
>>> c = c.view(dt)
>>> c
array([1, 2, 2, 3, 3, 4])
>>> c = np.c_[c[::2], c[1::2]]
>>> c
array([[1, 2],
       [2, 3],
       [3, 4]])

(not as short as Stefan's solution, though :D). For the occurrence
array, use the optional index arrays returned by np.unique1d:

>>> c1, c2, c3 = np.unique1d(b, return_index=True, return_inverse=True)
>>> occurrence = np.histogram(c3, bins=np.arange(c1.shape[0] + 1))
>>> occurrence
(array([2, 2, 1]), array([0, 1, 2, 3]))
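For your new matrix A, the whole thing would then be something along
these lines (untested on my side, so please check; A is the 6x2 array
from your message):

>>> A = np.array([[1, 2], [2, 3], [9, 9], [4, 4], [1, 2], [3, 2]])
>>> A2 = np.sort(A, axis=1)    # untested: (2, 3) and (3, 2) now look the same
>>> b2 = A2.view([('', A2.dtype)] * 2).ravel()
>>> u1, u2, u3 = np.unique1d(b2, return_index=True, return_inverse=True)
>>> np.histogram(u3, bins=np.arange(u1.shape[0] + 1))

For B, you would first sort only the two central columns in place
(B[:, 1:3].sort(axis=1)) and view the rows as records of four fields
instead of two.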
Cheers,
Emmanuelle

On Thu, Jun 25, 2009 at 01:01:32PM +0200, Lorenzo Isella wrote:
> [snip]
From vanforeest at gmail.com Thu Jun 25 17:18:37 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Thu, 25 Jun 2009 23:18:37 +0200
Subject: Re: [SciPy-user] using numpy arrays in sets
In-Reply-To: <20090625181757.GF28552@phare.normalesup.org>
References: <200906251504.36743.faltet@pytables.org> <3d375d730906251033r781ff5eepb64fb64a3e5d9a6f@mail.gmail.com> <200906251953.43210.faltet@pytables.org> <3d375d730906251104q2dd8e7c7oa41337948b8150c7@mail.gmail.com> <20090625181757.GF28552@phare.normalesup.org>
Message-ID:

Hi,

>> The proper fix is to make a set() implementation that allows you to
>> provide your own hash and equality functions. This is a general
>> solution to a problem that affects more than just numpy arrays.

I do not quite see how to do this in a memory-efficient way, which is
most surely due to my limited numpy knowledge. Nevertheless, with the
solution of Francesc I can control the size of the elements I need to
store, and I can use these elements at the same time in the set itself.
Besides this, I prefer not to think about a good hash function.

For my case at hand, using arrays rather than tuples allows me to use a
state space of 1e6 (or even slightly more) rather than 2e5 (or somewhat
less) states. This extra space is critical to make the referees of one
of my papers happy :-)

bye

Nicky

From joschu at caltech.edu Thu Jun 25 18:01:05 2009
From: joschu at caltech.edu (John Schulman)
Date: Thu, 25 Jun 2009 18:01:05 -0400
Subject: [SciPy-user] installation problems on SUSE 10.2
Message-ID: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com>

I ran
zypper install gcc-fortran blas lapack

I installed numpy from source, and it passes all tests.
I installed scipy from source, and it seems to go through successfully.
But I get errors when I try to import any of the submodules of scipy.
Below I pasted what happens with test(). Maybe someone knows what I
should try.

I may give up on this and put on a new linux distro. On what linux
distro is it easy to get scipy and its dependencies working?
Thanks,
John

>>> import scipy
>>> scipy.test()
Running unit tests for scipy-0.0.0-py2.5-linux-x86_64.egg.scipy
NumPy version 1.4.0.dev7073
NumPy is installed in /usr/local/lib64/python2.5/site-packages/numpy
SciPy version 0.7.1rc3
SciPy is installed in /usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy
Python version 2.5 (r25:51908, Aug 1 2008, 00:36:28) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)]
nose version 0.11.1
/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve
  warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning)
EEEEEEEEEEEE
======================================================================
ERROR: Failure: ImportError (/usr/lib64/libblas.so.3: undefined symbol: _gfortran_st_write_done)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/integrate/__init__.py", line 10, in
    from odepack import *
  File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/integrate/odepack.py", line 7, in
    import _odepack
ImportError: /usr/lib64/libblas.so.3: undefined symbol: _gfortran_st_write_done

[snip: the same ImportError is raised eleven more times, for
scipy.interpolate, scipy.lib.blas, scipy.lib.lapack, scipy.linalg,
scipy.linsolve, scipy.maxentropy, scipy.odr, scipy.optimize,
scipy.signal, scipy.sparse.linalg and scipy.stats]
Skipper

From davidanthonypowell at gmail.com Thu Jun 25 21:06:44 2009 From: davidanthonypowell at gmail.com (David Powell) Date: Fri, 26 Jun 2009 11:06:44 +1000 Subject: [SciPy-user] Performance Python examples Message-ID: <4f8149830906251806p2edb61d8l98e834d317ade349@mail.gmail.com>

Hi all,

I was looking at the code on the PerformancePython wiki page and I had a few questions/comments:

Firstly, the f2py-based example code does not build under Windows. For me it gives an error and asks me to use the -c mingw option, and even if I run setup.py with this option it still doesn't work. Has anyone else had this problem?

Secondly, I noticed that on my computer the "fast inline" option runs a little slower than the "inline" version (not by much, but it seems consistent across re-runs).

Finally, I have recently become aware of the package "numexpr" - it seems to me that this would be a good candidate to add to this page. Is anyone familiar with this package willing to add it, or should I just go ahead and do it myself? (Disclaimer: since I am unfamiliar with this package, I may not know how to use it properly, so I might not give a fair comparison.)

The reason I am looking into this is that I am considering writing some more computationally intensive code in Python and would like to get a handle on the best way to speed up the key pieces of code. Currently I am leaning towards weave.inline because it doesn't seem to need much extra wrapping code, loops can often be parallelised on an SMP machine quite easily using OpenMP directives, and it is fast. On the other hand, I seem to remember reading somewhere that it is not really maintained at the moment - certainly its documentation in the scipy online manual is almost nonexistent. But I am happy to listen to any advice on this matter.

cheers
David

From robert.kern at gmail.com Thu Jun 25 21:35:11 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 25 Jun 2009 20:35:11 -0500 Subject: [SciPy-user] Performance Python examples In-Reply-To: <4f8149830906251806p2edb61d8l98e834d317ade349@mail.gmail.com> References: <4f8149830906251806p2edb61d8l98e834d317ade349@mail.gmail.com> Message-ID: <3d375d730906251835s584ac4dbn6396e3d553aefc@mail.gmail.com>

On Thu, Jun 25, 2009 at 20:06, David Powell wrote:
> Hi all,
>
> I was looking at the code on the PerformancePython wiki page and I had
> a few questions/comments:
>
> Firstly, the f2py-based example code does not build under Windows.
> For me it gives an error and asks me to use the -c mingw option, and
> even if I run setup.py with this option it still doesn't work. Has
> anyone else had this problem?

Please, always copy-and-paste error messages instead of paraphrasing them, or worse, just saying that it doesn't work. What FORTRAN compiler are you using?

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
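For reference, a minimal sketch of the kind of numexpr comparison David proposes adding to the wiki page (this assumes numexpr is installed; the expression and array sizes are purely illustrative):

import numpy as np
import numexpr as ne

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# numexpr compiles the expression string to an internal program and
# evaluates it in cache-sized blocks, avoiding the full-size temporary
# arrays that the equivalent plain-numpy expression allocates
c_numpy = 2 * a**2 + 3 * b
c_numexpr = ne.evaluate("2 * a**2 + 3 * b")
assert np.allclose(c_numpy, c_numexpr)

Timing the two lines (for example with the timeit module) is the fairest way to compare them, since the speedup depends on the expression and on the array sizes.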
From david at ar.media.kyoto-u.ac.jp Fri Jun 26 01:02:13 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 26 Jun 2009 14:02:13 +0900 Subject: [SciPy-user] installation problems on SUSE 10.2 In-Reply-To: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com> References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com> Message-ID: <4A445655.8050807@ar.media.kyoto-u.ac.jp>

John Schulman wrote:
> I ran
> zypper install gcc-fortran blas lapack
>
> I installed numpy from source, and it passes all tests.
> I installed scipy from source, and it seems to go through successfully.
> But I get errors when I try to import any of the submodules of scipy.
> Below I pasted what happens with test(). Maybe someone knows what I
> should try.

Some of the Fortran libraries are built with g77, some with gfortran. That should be avoided. I suspect that you built numpy/scipy with g77, whereas the blas of openSUSE is built with gfortran. Remove/uninstall numpy and scipy, remove the build directories, and install as follows:

python setup.py build --fcompiler=gnu95 install --prefix=yourprefix

for both numpy and scipy.

> I may give up on this and put on a new Linux distro. Which Linux distro
> makes it easy to get scipy and its dependencies working?

There is really no need to use a new distribution. Just make sure you use the same Fortran compiler everywhere (one good way is to uninstall g77 in your case, as gfortran is the default Fortran compiler for SUSE 10.2).

David
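One quick way to test the mixed-runtime diagnosis David describes, without rebuilding anything, is to preload the gfortran runtime and retry the failing import: if the import then succeeds, the undefined _gfortran_* symbol really did come from a gfortran-built libblas being loaded by a g77-built extension. A rough sketch follows; the soname libgfortran.so.1 is a guess and varies between systems, and this is only a diagnostic, not a substitute for the rebuild:

import ctypes

# load the gfortran runtime with RTLD_GLOBAL so that its symbols
# (e.g. _gfortran_st_write_done) become visible to shared libraries
# loaded afterwards, such as the libblas.so.3 that scipy pulls in
ctypes.CDLL("libgfortran.so.1", mode=ctypes.RTLD_GLOBAL)  # assumed soname

import scipy.optimize  # the import that previously failed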
From cournape at gmail.com Fri Jun 26 06:11:04 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 26 Jun 2009 19:11:04 +0900 Subject: [SciPy-user] Compiling debug version of extension using numpy.distutils In-Reply-To: References: Message-ID: <5b8d13220906260311kf3340c2xf682fd1763b9dc04@mail.gmail.com>

2009/6/25 Berthold "Höllmann":
>
> I have a set of python extension modules bundled into one project.
> Trying to compile a debug version of these extensions I run into a
> strange problem:
>
> $ /cygdrive/c/Python25_d/python_d.exe setup.py config_fc --fcompiler=intelv --f77flags=/fpp --f90flags=/fpp --debug build_ext
> ...
> LINK : fatal error LNK1104: Datei 'python25.lib' kann nicht geöffnet werden
> ...
> (the error says: it can't open 'python25.lib'; it should use 'python25_d.lib' instead)
>
> $ /cygdrive/c/Python25_d/python_d.exe setup.py config_fc --fcompiler=intelv --f77flags=/fpp --f90flags=/fpp --debug build_ext --debug
> ...
> ifort: command line error: option '/g' is ambiguous
> ...
>
> $ ifort
> Intel(R) Visual Fortran Compiler Professional for applications running on IA-32, Version 11.0 Build 20090318 Package ID: w_cprof_p_11.0.074
> Copyright (C) 1985-2009 Intel Corporation. All rights reserved.
>
> $ cl
> Microsoft (R) 32-Bit C/C++-Standardcompiler Version 13.10.3077 für 80x86
> Copyright (C) Microsoft Corporation 1984-2002. Alle Rechte vorbehalten.
>
> $ python -c "import numpy;print numpy.version.version;import sys; print sys.version"
> 1.3.0
> 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]
>
> What am I supposed to do to get a debug version of my extensions?

You can get a debug version of your extensions without using a debug version of python - using a debug version of python for your extensions is a lot of work (especially on Windows), and you would most likely have to dig into the code yourself to do it. To build a debug version of an extension with the normal python interpreter, use the -g option, e.g.:

python setup.py build_ext -g

cheers,

David

From faltet at pytables.org Fri Jun 26 06:42:06 2009 From: faltet at pytables.org (Francesc Alted) Date: Fri, 26 Jun 2009 12:42:06 +0200 Subject: [SciPy-user] using numpy arrays in sets In-Reply-To: <3d375d730906251104q2dd8e7c7oa41337948b8150c7@mail.gmail.com> References: <200906251953.43210.faltet@pytables.org> <3d375d730906251104q2dd8e7c7oa41337948b8150c7@mail.gmail.com> Message-ID: <200906261242.06976.faltet@pytables.org>

A Thursday 25 June 2009 20:04:11 Robert Kern escrigué:
> > Yep. However, one could also check for the array owning the data,
> > and if true, then we can be pretty sure that the array is immutable.
> > I like the idea of having hashable arrays; they can be handy in some
> > scenarios.
>
> Is that a challenge? :-)

With you? No way :-)

> In [15]: a = np.arange(10)
>
> In [16]: a.flags.writeable = False
>
> In [17]: d = a.__array_interface__.copy()
>
> In [18]: d['data'] = (d['data'][0], False)
>
> In [19]: b = np.asarray(np.lib.stride_tricks.DummyArray(d))
>
> In [20]: b.flags
> Out[20]:
>   C_CONTIGUOUS : True
>   F_CONTIGUOUS : True
>   OWNDATA : False
>   WRITEABLE : True
>   ALIGNED : True
>   UPDATEIFCOPY : False
>
> In [21]: b[3:7] = 10
>
> In [22]: a
> Out[22]: array([ 0, 1, 2, 10, 10, 10, 10, 7, 8, 9])

Yeah, there are many ways to shoot yourself in the foot if you try hard enough.

> I agree that it would be handy, but hashability is not the only
> problem. When hashes collide, the objects are then compared by
> equality. This is a problem for numpy arrays because we do not return
> bools.

Oops, I think how equality works is a real problem. You win :)

> The proper fix is to make a set() implementation that allows you to
> provide your own hash and equality functions. This is a general
> solution to a problem that affects more than just numpy arrays.

-- Francesc Alted
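Until a set() variant with pluggable hash and equality functions exists, one workable stopgap is a small wrapper object that freezes the array and supplies both protocols itself, collapsing the elementwise comparison to a single bool. A minimal sketch (the class and its design choices are illustrative, not a numpy API):

import numpy as np

class HashableArray(object):
    """Wrap an array so it can be used in sets and as dict keys."""
    def __init__(self, arr):
        self.arr = np.array(arr)            # private copy
        self.arr.flags.writeable = False    # freeze it
        self._hash = hash(self.arr.tostring())
    def __hash__(self):
        return self._hash
    def __eq__(self, other):
        # reduce the elementwise result to one bool, which is exactly
        # what bare arrays do not do when sets compare colliding keys
        return (isinstance(other, HashableArray) and
                self.arr.dtype == other.arr.dtype and
                self.arr.shape == other.arr.shape and
                bool((self.arr == other.arr).all()))
    def __ne__(self, other):
        return not self.__eq__(other)

s = set([HashableArray([1, 2, 3]), HashableArray([1, 2, 3])])
assert len(s) == 1   # duplicates hash and compare equal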
From shaibani at ymail.com Fri Jun 26 08:52:03 2009 From: shaibani at ymail.com (Ala Al-Shaibani) Date: Fri, 26 Jun 2009 05:52:03 -0700 (PDT) Subject: [SciPy-user] Regarding odeint timestep interval Message-ID: <699.53632.qm@web24606.mail.ird.yahoo.com>

Hello all.

I'm wondering: if, instead of passing a timestep interval to odeint, I just pass the start and end of the range (for example, t = [0, 10]), what timestep interval would odeint use by default in this case?

From joschu at caltech.edu Fri Jun 26 10:00:08 2009 From: joschu at caltech.edu (John Schulman) Date: Fri, 26 Jun 2009 10:00:08 -0400 Subject: [SciPy-user] installation problems on SUSE 10.2 In-Reply-To: <4A445655.8050807@ar.media.kyoto-u.ac.jp> References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com> <4A445655.8050807@ar.media.kyoto-u.ac.jp> Message-ID: <185761440906260700g2b442ebt55ed06a879116770@mail.gmail.com>

Thanks for the help. I removed g77, and I removed blas and lapack just in case something went wrong the first time, and tried again. It looks like the installer couldn't find some things. Below I included the results of install and test().

> sudo python setup.py install
root's password:
Warning: No configuration returned, assuming unavailable.
blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib64 libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib64 libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64 libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib64 libraries ptf77blas,ptcblas,atlas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in /usr/local/lib64 libraries f77blas,cblas,atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib64 libraries f77blas,cblas,atlas not found in /usr/lib NOT AVAILABLE /usr/local/lib64/python2.5/site-packages/numpy/distutils/system_info.py:1388: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /usr/local/lib64 libraries blas not found in /usr/local/lib FOUND: libraries = ['blas'] library_dirs = ['/usr/lib64'] language = f77 FOUND: libraries = ['blas'] library_dirs = ['/usr/lib64'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib64 libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib64 libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64 libraries lapack_atlas not found in /usr/local/lib64 libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib64 libraries lapack_atlas not found in /usr/lib64 libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in /usr/local/lib64 libraries lapack_atlas not found in /usr/local/lib64 libraries f77blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib64 libraries lapack_atlas not found in /usr/lib64 libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /usr/local/lib64/python2.5/site-packages/numpy/distutils/system_info.py:1295: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /usr/local/lib64 libraries lapack not found in /usr/local/lib FOUND: libraries = ['lapack'] library_dirs = ['/usr/lib64'] language = f77 FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib64'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 umfpack_info: libraries umfpack not found in /usr/local/lib64 libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib64 libraries umfpack not found in /usr/lib /usr/local/lib64/python2.5/site-packages/numpy/distutils/system_info.py:452: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. adding 'build/src.linux-x86_64-2.5/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. adding 'build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.linux-x86_64-2.5/scipy/lib/blas/cblas.pyf' to sources. 
f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.linux-x86_64-2.5/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.linux-x86_64-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. adding 'build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.linux-x86_64-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.linux-x86_64-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. adding 'build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.linux-x86_64-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. 
building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.sparse.linalg.dsolve._zsuperlu" sources building extension "scipy.sparse.linalg.dsolve._dsuperlu" sources building extension "scipy.sparse.linalg.dsolve._csuperlu" sources building extension "scipy.sparse.linalg.dsolve._ssuperlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. adding 'build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.spatial.ckdtree" sources building extension "scipy.spatial._distance_wrap" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.stats.vonmises_cython" sources building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.linux-x86_64-2.5/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.5' to include_dirs. adding 'build/src.linux-x86_64-2.5/scipy/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building data_files sources running build_py copying scipy/version.py -> build/lib.linux-x86_64-2.5/scipy copying build/src.linux-x86_64-2.5/scipy/__config__.py -> build/lib.linux-x86_64-2.5/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 Could not locate executable pgf77 customize AbsoftFCompiler Could not locate executable f90 customize NAGFCompiler Could not locate executable f95 customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext extending extension 'scipy.sparse.linalg.dsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize IntelFCompiler customize LaheyFCompiler customize PGroupFCompiler customize AbsoftFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using build_ext running scons running install_lib copying build/lib.linux-x86_64-2.5/scipy/cluster/_vq.so -> /usr/local/lib64/python2.5/site-packages/scipy/cluster copying build/lib.linux-x86_64-2.5/scipy/cluster/_hierarchy_wrap.so -> /usr/local/lib64/python2.5/site-packages/scipy/cluster copying build/lib.linux-x86_64-2.5/scipy/fftpack/_fftpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/fftpack copying build/lib.linux-x86_64-2.5/scipy/fftpack/convolve.so -> /usr/local/lib64/python2.5/site-packages/scipy/fftpack copying build/lib.linux-x86_64-2.5/scipy/integrate/_quadpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/integrate copying build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so -> /usr/local/lib64/python2.5/site-packages/scipy/integrate copying build/lib.linux-x86_64-2.5/scipy/integrate/vode.so -> /usr/local/lib64/python2.5/site-packages/scipy/integrate copying build/lib.linux-x86_64-2.5/scipy/interpolate/_fitpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/interpolate copying build/lib.linux-x86_64-2.5/scipy/interpolate/dfitpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/interpolate copying build/lib.linux-x86_64-2.5/scipy/interpolate/_interpolate.so -> /usr/local/lib64/python2.5/site-packages/scipy/interpolate copying build/lib.linux-x86_64-2.5/scipy/io/numpyio.so -> /usr/local/lib64/python2.5/site-packages/scipy/io copying build/lib.linux-x86_64-2.5/scipy/lib/blas/fblas.so -> 
/usr/local/lib64/python2.5/site-packages/scipy/lib/blas copying build/lib.linux-x86_64-2.5/scipy/lib/blas/cblas.so -> /usr/local/lib64/python2.5/site-packages/scipy/lib/blas copying build/lib.linux-x86_64-2.5/scipy/lib/lapack/flapack.so -> /usr/local/lib64/python2.5/site-packages/scipy/lib/lapack copying build/lib.linux-x86_64-2.5/scipy/lib/lapack/clapack.so -> /usr/local/lib64/python2.5/site-packages/scipy/lib/lapack copying build/lib.linux-x86_64-2.5/scipy/lib/lapack/calc_lwork.so -> /usr/local/lib64/python2.5/site-packages/scipy/lib/lapack copying build/lib.linux-x86_64-2.5/scipy/lib/lapack/atlas_version.so -> /usr/local/lib64/python2.5/site-packages/scipy/lib/lapack copying build/lib.linux-x86_64-2.5/scipy/linalg/fblas.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/linalg/cblas.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/linalg/flapack.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/linalg/clapack.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/linalg/_flinalg.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/linalg/calc_lwork.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/linalg/atlas_version.so -> /usr/local/lib64/python2.5/site-packages/scipy/linalg copying build/lib.linux-x86_64-2.5/scipy/odr/__odrpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/odr copying build/lib.linux-x86_64-2.5/scipy/optimize/_minpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/_zeros.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/_lbfgsb.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/moduleTNC.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/_cobyla.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/minpack2.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/_slsqp.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/optimize/_nnls.so -> /usr/local/lib64/python2.5/site-packages/scipy/optimize copying build/lib.linux-x86_64-2.5/scipy/signal/sigtools.so -> /usr/local/lib64/python2.5/site-packages/scipy/signal copying build/lib.linux-x86_64-2.5/scipy/signal/spline.so -> /usr/local/lib64/python2.5/site-packages/scipy/signal copying build/lib.linux-x86_64-2.5/scipy/sparse/linalg/isolve/_iterative.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/linalg/isolve copying build/lib.linux-x86_64-2.5/scipy/sparse/linalg/dsolve/_zsuperlu.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/linalg/dsolve copying build/lib.linux-x86_64-2.5/scipy/sparse/linalg/dsolve/_dsuperlu.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/linalg/dsolve copying build/lib.linux-x86_64-2.5/scipy/sparse/linalg/dsolve/_csuperlu.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/linalg/dsolve copying build/lib.linux-x86_64-2.5/scipy/sparse/linalg/dsolve/_ssuperlu.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/linalg/dsolve copying 
build/lib.linux-x86_64-2.5/scipy/sparse/linalg/eigen/arpack/_arpack.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack copying build/lib.linux-x86_64-2.5/scipy/sparse/sparsetools/_csr.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/sparsetools copying build/lib.linux-x86_64-2.5/scipy/sparse/sparsetools/_csc.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/sparsetools copying build/lib.linux-x86_64-2.5/scipy/sparse/sparsetools/_coo.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/sparsetools copying build/lib.linux-x86_64-2.5/scipy/sparse/sparsetools/_bsr.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/sparsetools copying build/lib.linux-x86_64-2.5/scipy/sparse/sparsetools/_dia.so -> /usr/local/lib64/python2.5/site-packages/scipy/sparse/sparsetools copying build/lib.linux-x86_64-2.5/scipy/spatial/ckdtree.so -> /usr/local/lib64/python2.5/site-packages/scipy/spatial copying build/lib.linux-x86_64-2.5/scipy/spatial/_distance_wrap.so -> /usr/local/lib64/python2.5/site-packages/scipy/spatial copying build/lib.linux-x86_64-2.5/scipy/special/_cephes.so -> /usr/local/lib64/python2.5/site-packages/scipy/special copying build/lib.linux-x86_64-2.5/scipy/special/specfun.so -> /usr/local/lib64/python2.5/site-packages/scipy/special copying build/lib.linux-x86_64-2.5/scipy/stats/statlib.so -> /usr/local/lib64/python2.5/site-packages/scipy/stats copying build/lib.linux-x86_64-2.5/scipy/stats/vonmises_cython.so -> /usr/local/lib64/python2.5/site-packages/scipy/stats copying build/lib.linux-x86_64-2.5/scipy/stats/futil.so -> /usr/local/lib64/python2.5/site-packages/scipy/stats copying build/lib.linux-x86_64-2.5/scipy/stats/mvn.so -> /usr/local/lib64/python2.5/site-packages/scipy/stats copying build/lib.linux-x86_64-2.5/scipy/ndimage/_nd_image.so -> /usr/local/lib64/python2.5/site-packages/scipy/ndimage copying build/lib.linux-x86_64-2.5/scipy/version.py -> /usr/local/lib64/python2.5/site-packages/scipy copying build/lib.linux-x86_64-2.5/scipy/__config__.py -> /usr/local/lib64/python2.5/site-packages/scipy running install_data running install_egg_info Removing /usr/local/lib64/python2.5/site-packages/scipy-0.7.1rc3-py2.5.egg-info Writing /usr/local/lib64/python2.5/site-packages/scipy-0.7.1rc3-py2.5.egg-info >>> import scipy >>> scipy.test() Running unit tests for scipy-0.0.0-py2.5-linux-x86_64.egg.scipy NumPy version 1.4.0.dev7073 NumPy is installed in /usr/local/lib64/python2.5/site-packages/numpy SciPy version 0.7.1rc3 SciPy is installed in /usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy Python version 2.5 (r25:51908, Aug 1 2008, 00:36:28) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] nose version 0.11.1 /usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) EEEEEEEEEEEEEE ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in 
importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/fftpack/__init__.py", line 10, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/fftpack/basic.py", line 13, in import _fftpack as fftpack ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/integrate/__init__.py", line 9, in from quadrature import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/integrate/quadrature.py", line 5, in from scipy.special.orthogonal import p_roots File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/basic.py", line 8, in from _cephes import * ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/interpolate/__init__.py", line 7, in from interpolate import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/interpolate/interpolate.py", line 13, in import scipy.special as spec File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/basic.py", line 8, in from _cephes import * ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError 
(libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/lib/blas/__init__.py", line 9, in import fblas ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/lib/lapack/__init__.py", line 9, in import calc_lwork ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File 
"/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/linsolve/__init__.py", line 6, in from scipy.sparse.linalg.dsolve import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/sparse/linalg/__init__.py", line 5, in from isolve import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/sparse/linalg/isolve/__init__.py", line 4, in from iterative import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/sparse/linalg/isolve/iterative.py", line 5, in import _iterative ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/maxentropy/__init__.py", line 9, in from maxentropy import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/maxentropy/maxentropy.py", line 74, in from scipy import optimize File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/optimize/__init__.py", line 7, in from optimize import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/optimize/optimize.py", line 28, in import linesearch File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/optimize/linesearch.py", line 3, in from scipy.optimize import minpack2 ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/odr/__init__.py", line 11, in import odrpack File 
"/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/odr/odrpack.py", line 103, in from scipy.odr import __odrpack ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/optimize/__init__.py", line 7, in from optimize import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/optimize/optimize.py", line 28, in import linesearch File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/optimize/linesearch.py", line 3, in from scipy.optimize import minpack2 ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/signal/__init__.py", line 9, in from bsplines import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/signal/bsplines.py", line 3, in import scipy.special File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/basic.py", line 8, in from _cephes import * ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File 
"/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/sparse/linalg/__init__.py", line 5, in from isolve import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/sparse/linalg/isolve/__init__.py", line 4, in from iterative import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/sparse/linalg/isolve/iterative.py", line 5, in import _iterative ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/basic.py", line 8, in from _cephes import * ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ====================================================================== ERROR: Failure: ImportError (libg2c.so.0: cannot open shared object file: No such file or directory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/local/lib64/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/stats/stats.py", line 198, in import scipy.special as special File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib64/python2.5/site-packages/scipy-0.0.0-py2.5-linux-x86_64.egg/scipy/special/basic.py", line 8, in from _cephes import * ImportError: libg2c.so.0: cannot open shared object file: No such file or directory ---------------------------------------------------------------------- Ran 14 tests in 0.031s FAILED (errors=14) >>> From joschu at caltech.edu Fri Jun 26 10:02:08 2009 From: joschu at caltech.edu (John Schulman) Date: Fri, 26 Jun 2009 10:02:08 -0400 Subject: [SciPy-user] installation problems on SUSE 10.2 In-Reply-To: 
<185761440906260700g2b442ebt55ed06a879116770@mail.gmail.com> References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com> <4A445655.8050807@ar.media.kyoto-u.ac.jp> <185761440906260700g2b442ebt55ed06a879116770@mail.gmail.com> Message-ID: <185761440906260702h296c015bs93c285442d5ba821@mail.gmail.com>

And I did

rm -r build

before building.

From joschu at caltech.edu Fri Jun 26 11:45:38 2009 From: joschu at caltech.edu (John Schulman) Date: Fri, 26 Jun 2009 11:45:38 -0400 Subject: [SciPy-user] installation problems on SUSE 10.2 In-Reply-To: <185761440906260702h296c015bs93c285442d5ba821@mail.gmail.com> References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com> <4A445655.8050807@ar.media.kyoto-u.ac.jp> <185761440906260700g2b442ebt55ed06a879116770@mail.gmail.com> <185761440906260702h296c015bs93c285442d5ba821@mail.gmail.com> Message-ID: <185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com>

And here's what happens when I try to use YaST (following the instructions at http://www.scipy.org/Installing_SciPy/Linux ).

First of all, the given repository at http://repos.opensuse.org/science/ doesn't have blas and lapack. This one does: http://download.opensuse.org/repositories/science:/ScientificLinux/ David C.'s repository only contains refblas.

When I try to install python-numpy or python-scipy from the ScientificLinux repository, I get this error: "there are no installable providers of libgfortran.so.2". But libgfortran is installed. I removed and reinstalled libgfortran and gcc-fortran just to be sure.

On Fri, Jun 26, 2009 at 10:02 AM, John Schulman wrote:
> And I did
> rm -r build
> before building.

From hbabcock at mac.com Fri Jun 26 14:23:12 2009 From: hbabcock at mac.com (Hazen Babcock) Date: Fri, 26 Jun 2009 14:23:12 -0400 Subject: [SciPy-user] fitwarp2d? Message-ID: <4A451210.3060508@mac.com>

Hello,

Is there a scipy equivalent to the IDL function fitwarp2d? This is a function that determines the best-fit 2D polynomial for mapping a set of points in one image to a set of points in another image:

xf = a1 + b1 xi + c1 yi + ...
yf = a2 + b2 xi + c2 yi + ...

Given a bunch of (xi, yi), (xf, yf) pairs and a polynomial order, it would return the coefficients a1, b1, etc.

-Hazen
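There is no single scipy function for this, but the fit Hazen describes is ordinary linear least squares over polynomial terms, which numpy handles directly. A hedged sketch for the first-order case (the function name and argument layout are invented for illustration; higher orders just add columns such as xi*yi, xi**2, yi**2 to the design matrix):

import numpy as np

def fit_warp_2d(xi, yi, xf, yf):
    """Least-squares fit of xf = a1 + b1*xi + c1*yi and
    yf = a2 + b2*xi + c2*yi to matched point pairs."""
    xi = np.asarray(xi, dtype=float)
    yi = np.asarray(yi, dtype=float)
    # design matrix: one column per polynomial term [1, xi, yi]
    A = np.column_stack([np.ones(len(xi)), xi, yi])
    coef_x = np.linalg.lstsq(A, np.asarray(xf, dtype=float))[0]
    coef_y = np.linalg.lstsq(A, np.asarray(yf, dtype=float))[0]
    return coef_x, coef_y

# an exact affine map should be recovered exactly
xi = np.array([0.0, 1.0, 0.0, 1.0])
yi = np.array([0.0, 0.0, 1.0, 1.0])
cx, cy = fit_warp_2d(xi, yi, 2 + 3*xi - yi, 1 + 0.5*yi)
assert np.allclose(cx, [2.0, 3.0, -1.0]) and np.allclose(cy, [1.0, 0.0, 0.5])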
From rob.clewley at gmail.com  Fri Jun 26 16:33:21 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 26 Jun 2009 16:33:21 -0400
Subject: [SciPy-user] Regarding odeint timestep interval
In-Reply-To: <699.53632.qm@web24606.mail.ird.yahoo.com>
References: <699.53632.qm@web24606.mail.ird.yahoo.com>
Message-ID:

On Fri, Jun 26, 2009 at 8:52 AM, Ala Al-Shaibani wrote:
> Hello all.
> I'm wondering, if instead of passing a timestep interval to odeint, I
> just pass the start and end of the range (for example: t = [0, 10]),
> what timestep interval would odeint use as default in this case?

The external integrator code for odeint (namely the fortran code lsoda,
see below) will select its own initial time step by default. This
should be controllable with the optional parameters you can pass
through scipy's wrapper. See
http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html
for those, in particular the parameter h0. If you want to know more
explicitly how it calculates the initial time step, you need to read
the original lsoda code to see how it works this out. See here at
around the tagged line 160:

http://www.netlib.org/alliant/ode/prog/lsoda.f

-Rob
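For reference, h0 (and hmax) are real odeint keywords; a minimal sketch
of suggesting the first internal step, where the decaying right-hand
side is just a made-up stand-in:

    import numpy as np
    from scipy.integrate import odeint

    def rhs(y, t):
        # toy system, only here to have something to integrate
        return -0.5 * y

    t = np.linspace(0, 10, 101)
    # h0 suggests the size of the first internal step;
    # hmax caps the steps lsoda takes afterwards
    y = odeint(rhs, 1.0, t, h0=1e-4, hmax=0.1)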
From gael.varoquaux at normalesup.org  Sat Jun 27 03:09:34 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 27 Jun 2009 09:09:34 +0200
Subject: [SciPy-user] SciPy abstract submission deadline extended
Message-ID: <20090627070934.GA6149@phare.normalesup.org>

Greetings,

The conference committee is extending the deadline for abstract
submission for the SciPy 2009 conference by one week. On Friday,
July 3rd, at midnight Pacific, we will turn off the abstract submission
on the conference site. Up to then, you can modify an already-submitted
abstract, or submit new abstracts.

Submitting Papers
-----------------

The program features tutorials, contributed papers, lightning talks,
and birds-of-a-feather sessions. We are soliciting talks and
accompanying papers (either formal academic or magazine-style articles)
that discuss topics which center around scientific computing using
Python. These include applications, teaching, future development
directions, and research. A collection of peer-reviewed articles will
be published as part of the proceedings.

Proposals for talks are submitted as extended abstracts. There are two
categories of talks:

Paper presentations
  These talks are 35 minutes in duration (including questions). A
  one-page abstract of no less than 500 words (excluding figures and
  references) should give an outline of the final paper. Proceedings
  papers are due two weeks after the conference, and may be in a formal
  academic style, or in a more relaxed magazine-style format.

Rapid presentations
  These talks are 10 minutes in duration. An abstract of between 300
  and 700 words should describe the topic and motivate its relevance
  to scientific computing.

In addition, there will be an open session for lightning talks, during
which any attendee willing to do so is invited to give a
couple-of-minutes-long presentation.

If you wish to present a talk at the conference, please create an
account on the website (http://conference.scipy.org). You may then
submit an abstract by logging in, clicking on your profile and
following the "Submit an abstract" link.

Submission Guidelines

* Submissions should be uploaded via the online form.
* Submissions whose main purpose is to promote a commercial product or
  service will be refused.
* All accepted proposals must be presented at the SciPy conference by
  at least one author.
* Authors of an accepted proposal can provide a final paper for
  publication in the conference proceedings. Final papers are limited
  to 7 pages, including diagrams, figures, references, and appendices.
  The papers will be reviewed to help ensure the high quality of the
  proceedings.

For further information, please visit the conference homepage:
http://conference.scipy.org.

The SciPy 2009 executive committee
----------------------------------

* Jarrod Millman, UC Berkeley, USA (Conference Chair)
* Gaël Varoquaux, INRIA Saclay, France (Program Co-Chair)
* Stéfan van der Walt, University of Stellenbosch, South Africa
  (Program Co-Chair)
* Fernando Pérez, UC Berkeley, USA (Tutorial Chair)

From ryanlists at gmail.com  Sat Jun 27 09:12:35 2009
From: ryanlists at gmail.com (Ryan Krauss)
Date: Sat, 27 Jun 2009 08:12:35 -0500
Subject: [SciPy-user] poly1d mul bug
Message-ID:

I am multiplying a poly1d by a floating point number and getting
different output types depending on whether I multiply on the right or
the left:

ipdb> (zero**2)*poly1d([1, 2*zeta_p*pole, pole**2, 0])
array([ 208.84082913, 2296.32485094, 100997.6419477 , 0. ])
ipdb> poly1d([1, 2*zeta_p*pole, pole**2, 0])*(zero**2)
poly1d([ 208.84082913, 2296.32485094, 100997.6419477 , 0. ])
ipdb> zeta_p
0.25
ipdb> pole
21.991148575128552
ipdb> zero
14.451326206513047
ipdb> type(pole)
ipdb> type(zeta_p)

Ryan
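The asymmetry is easy to reproduce; a minimal sketch (the numpy-scalar
explanation in the comments is an educated guess, not a confirmed
diagnosis):

    import numpy as np

    p = np.poly1d([1.0, 2.0, 0.0])
    s = np.float64(3.0)      # numpy scalar, as pole**2 produces above

    print type(p * s)        # poly1d, as expected
    print type(s * p)        # plain ndarray: the scalar's __mul__
                             # appears to treat the poly1d as its
                             # coefficient array

    # workaround until fixed: keep the poly1d on the left, or demote
    # the scalar so poly1d.__rmul__ gets a chance to run
    r = float(s) * p         # poly1d again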
From johannes at zellner.org  Sat Jun 27 16:03:13 2009
From: johannes at zellner.org (Dr. Johannes Zellner)
Date: Sat, 27 Jun 2009 22:03:13 +0200
Subject: [SciPy-user] 2d interpolation -- comparison of different methods
Message-ID: <103840bf0906271303i3ee403f0nf164fca982bbd34f@mail.gmail.com>

Hi,

I'd like to do 2d interpolation on f(x, y) scattered input data (no
order, no regular grid). The interpolation should go through the
original data values. I'm confused about the different methods I found,
e.g.

    SmoothBivariateSpline, RectBivariateSpline, BivariateSpline,
    griddata, interp2, bisplrep / bisplev, Rbf

1. Does anyone have a comparison between these (and maybe more)?
2. Which of these are appropriate to do the interpolation I've
   described above?

Any help much appreciated.

--
Johannes Zellner
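Of the options listed, Rbf is one that takes unordered scattered points
directly and, with its default smooth=0, passes through the input
values; a minimal sketch with made-up data:

    import numpy as np
    from scipy.interpolate import Rbf

    # scattered samples of some surface (synthetic, for illustration)
    x = np.random.rand(50)
    y = np.random.rand(50)
    z = np.sin(6 * x) * np.cos(6 * y)

    rbf = Rbf(x, y, z)       # exact interpolant through the samples
    zi = rbf(0.5, 0.5)       # evaluate at any point, on or off a grid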
From emmanuelle.gouillart at normalesup.org  Sat Jun 27 18:06:35 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sun, 28 Jun 2009 00:06:35 +0200
Subject: [SciPy-user] numpy.lookfor and scipy submodules
Message-ID: <20090627220635.GA18118@phare.normalesup.org>

Hi list,

while working on a course about scientific computing with Python, I
found myself playing with numpy.lookfor. To my surprise, I found that
the submodules of scipy are never searched by numpy.lookfor (see the
example below). Apparently, this comes from the fact that the
submodules are not in scipy.__all__. I'm sure this is a very naive
question, but why is it so? It can be quite confusing for people
looking for documentation when they don't find any match for 'optimize'
in scipy... I'm using numpy 1.2.1 and scipy 0.7.

Thanks in advance,

Emmanuelle

Example:

>>> np.lookfor('optimize', module=scipy, import_modules=True)
Search results for 'optimize'
-----------------------------
>>> np.lookfor('interpol', module=scipy, import_modules=True)
Search results for 'interpol'
-----------------------------
scipy.interp         One-dimensional linear interpolation.
scipy.misc.imrotate  Rotate an image counter-clockwise by angle degrees.
scipy.sinc           Return the sinc function.
scipy.polyfit        Least squares polynomial fit.
scipy.bartlett       Return the Bartlett window.
scipy.ma.polyfit     Least squares polynomial fit.
>>> # nothing from scipy.interpolate !!
>>> np.lookfor('filename', os)
Search results for 'filename'
-----------------------------
os.open              Open a file (for low level IO).
os.mknod             Create a filesystem node (file, device special file or named pipe)
os.mkfifo            Create a FIFO (a POSIX named pipe).
os.path.realpath     Return the canonical path of the specified filename, eliminating any
os.walk              Directory tree generator.
os.path.walk         Directory tree walk with callback function.
>>> # With os, submodules are also searched

From pav at iki.fi  Sat Jun 27 20:48:18 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 28 Jun 2009 00:48:18 +0000 (UTC)
Subject: [SciPy-user] numpy.lookfor and scipy submodules
References: <20090627220635.GA18118@phare.normalesup.org>
Message-ID:

On 2009-06-27, Emmanuelle Gouillart wrote:
> while working on a course about scientific computing with Python, I
> found myself playing with numpy.lookfor. To my surprise, I found that
> the submodules of scipy are never searched by numpy.lookfor (see the
> example below). Apparently, this comes from the fact that the
> submodules are not in scipy.__all__. I'm sure this is a very naive
> question, but why is it so? It can be quite confusing for people
> looking for documentation when they don't find any match for
> 'optimize' in scipy... I'm using numpy 1.2.1 and scipy 0.7.

Design oversight. I don't think there's a good reason not to behave
more sensibly.

--
Pauli Virtanen

From shaibani at ymail.com  Sun Jun 28 14:40:29 2009
From: shaibani at ymail.com (Ala Al-Shaibani)
Date: Sun, 28 Jun 2009 11:40:29 -0700 (PDT)
Subject: [SciPy-user] error after using odeint (excess work done + lsoda)
Message-ID: <846519.98298.qm@web24609.mail.ird.yahoo.com>

I used odeint to solve a coupled ODE, and then printed the solution.
The solution is correct, but I'm wondering why I'm receiving the error
before printing the solution, and the lsoda error after printing the
solution (although all I do is print the solution). Would appreciate
any help.

Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
[ 13.9  20.7  79.8  14.6  78.8]

lsoda--  at current t (=r1), mxstep (=i1) steps
         taken on this call before reaching tout
In above message,  I1 = 500
In above message,  R1 = 0.3409980342206E+01

From peridot.faceted at gmail.com  Sun Jun 28 15:36:01 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 28 Jun 2009 15:36:01 -0400
Subject: [SciPy-user] error after using odeint (excess work done + lsoda)
In-Reply-To: <846519.98298.qm@web24609.mail.ird.yahoo.com>
References: <846519.98298.qm@web24609.mail.ird.yahoo.com>
Message-ID:

2009/6/28 Ala Al-Shaibani:
> I used odeint to solve a coupled ODE, and then printed the solution.
> The solution is correct, but I'm wondering why I'm receiving the
> error before printing the solution, and the lsoda error after
> printing the solution (although all I do is print the solution).

odeint tries to adjust the number of function evaluations it does to
produce the level of accuracy you requested. But in order to avoid
infinite loops and wasting time on bogus values, it limits the number
of function evaluations. If it is unable to reach the desired accuracy
in the specified number of function evaluations, it returns what it has
with a warning. So this means that your result is probably not
accurate, presumably because the ODE is difficult to evaluate in this
region.

This can happen with some ODEs; if the solution has some kind of
oscillatory behaviour, it can require many function evaluations to
track (even if you don't actually care about the oscillations). But it
can also happen if your right-hand-side function is returning bogus
values, either because of a bug or because it's being evaluated outside
a region of applicability. I would recommend double-checking the values
your right-hand-side function returns. You might also want to inspect
the shape of the solution, to see if there's a good reason it needs so
many evaluations to track. If there is, you can turn up the maximum
number of evaluations or turn down the required accuracy to avoid the
warnings and get better answers.

If you really need high accuracy from difficult ODEs, you might also
look into pydstool, which provides some smarter ODE solvers, along with
good tools to diagnose difficult behaviour from ODEs.

Anne
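In odeint terms, the knobs Anne mentions are mxstep and the rtol/atol
pair, all real odeint keywords; a minimal sketch, with a made-up
right-hand side standing in for the coupled system:

    import numpy as np
    from scipy.integrate import odeint

    def rhs(y, t):
        # stand-in for the user's coupled ODE system
        return -y * np.cos(t)

    t = np.linspace(0, 10, 200)
    # raise the step budget and/or relax the tolerances
    y, info = odeint(rhs, 1.0, t, mxstep=5000, rtol=1e-6, atol=1e-8,
                     full_output=True)
    # info['message'] reports whether the integration succeeded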
From davidanthonypowell at gmail.com  Sun Jun 28 22:41:49 2009
From: davidanthonypowell at gmail.com (David Powell)
Date: Mon, 29 Jun 2009 12:41:49 +1000
Subject: [SciPy-user] Performance Python examples
Message-ID: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>

> ---------- Forwarded message ----------
> From: Robert Kern
> To: SciPy Users List
> Date: Thu, 25 Jun 2009 20:35:11 -0500
> Subject: Re: [SciPy-user] Performance Python examples
> On Thu, Jun 25, 2009 at 20:06, David Powell wrote:
> > Hi all,
> >
> > I was looking at the code on the PerformancePython wiki page and I
> > had a few questions/comments:
> >
> > Firstly, the f2py based example code does not build under windows.
> > For me it gives an error and asks me to use the -c mingw option,
> > and even if I do run setup.py with this option it still doesn't
> > work. Has anyone else had this problem?
>
> Please, always copy-and-paste error messages instead of paraphrasing
> them, or worse, just saying that it doesn't work. What FORTRAN
> compiler are you using?

Firstly I get the following error:

'f2py' is not recognized as an internal or external command,
operable program or batch file.
C:\Python25\lib\site-packages\numpy\lib\utils.py:108: DeprecationWarning: ('get_numpy_include is deprecated, use get_include',)
  warnings.warn(str1, DeprecationWarning)
running build_ext
error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing "-c mingw32" to setup.py.

However I can easily overcome this by adding f2py.bat to
C:\Python25\Scripts to call f2py.py.

Upon doing this, I then get the following error message:

running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
building extension "flaplace" sources
f2py options: []
f2py:> c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5\flaplacemodule.c
creating c:\docume~1\dap124\locals~1\temp\tmpl4dqj7
creating c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5
Reading fortran codes...
	Reading file 'src/flaplace.f' (format:fix,strict)
Post-processing...
	Block: flaplace
		Block: timestep
Post-processing (stage 2)...
Building modules...
	Building module "flaplace"...
		Constructing wrapper function "timestep"...
		  u,error = timestep(u,dx,dy)
	Wrote C/API module "flaplace" to file "c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5/flaplacemodule.c"
adding 'c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5\fortranobject.c' to sources.
adding 'c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5' to include_dirs.
copying C:\Python25\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5
copying C:\Python25\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\docume~1\dap124\locals~1\temp\tmpl4dqj7\src.win32-2.5
running build_ext
No module named msvccompiler in numpy.distutils; trying from distutils
error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing "-c mingw32" to setup.py.
C:\Python25\lib\site-packages\numpy\lib\utils.py:108: DeprecationWarning: ('get_numpy_include is deprecated, use get_include',)
  warnings.warn(str1, DeprecationWarning)
running build_ext
error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing "-c mingw32" to setup.py.

C:\Documents and Settings\dap124\My Documents\code\perfpy>python setup.py build_ext --inplace
'f2py' is not recognized as an internal or external command,
operable program or batch file.
C:\Python25\lib\site-packages\numpy\lib\utils.py:108: DeprecationWarning: ('get_numpy_include is deprecated, use get_include',)
  warnings.warn(str1, DeprecationWarning)
running build_ext
error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing "-c mingw32" to setup.py.

This happens even if I manually add -c mingw32 to the call to f2py in
setup.py.

My g77 --version gives:
GNU Fortran (GCC) 3.4.5 (mingw-vista special r3)

Anyway, I am not overly concerned about this for my own use because I'm
not really planning on using f2py - I just thought there could be a
problem with this example which is worth reporting, as it prevents
others from running it.

PS: I had a try at writing something with numexpr, but what I came up
with was slower than just using numpy arrays. So I'd probably stick to
using weave.inline. However I just realised that to get the full
benefit of using openmp needs gcc 4.x, but I am not sure which version
of mingw numpy and scipy are built with and whether this will cause
compatibility problems.
From robert.kern at gmail.com  Sun Jun 28 22:47:21 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 28 Jun 2009 21:47:21 -0500
Subject: [SciPy-user] Performance Python examples
In-Reply-To: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>
References: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>
Message-ID: <3d375d730906281947n56aac729y7e3758c1304d4eb3@mail.gmail.com>

On Sun, Jun 28, 2009 at 21:41, David Powell wrote:
> error: Python was built with Visual Studio 2003;
> extensions must be built with a compiler than can generate compatible
> binaries.
> Visual Studio 2003 was not found on this system. If you have Cygwin
> installed,
> you can try compiling with MingW32, by passing "-c mingw32" to
> setup.py.
>
> This happens even if I manually add -c mingw32 to the call to f2py in
> setup.py.

This message means that you should run the setup.py script like so:

python setup.py build_ext -c mingw32 build

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From alan at ajackson.org  Sun Jun 28 23:03:56 2009
From: alan at ajackson.org (alan at ajackson.org)
Date: Sun, 28 Jun 2009 22:03:56 -0500
Subject: [SciPy-user] small bug in random.noncentral_f
Message-ID: <20090628220356.75576e79@ajackson.org>

(pardon me if this shows up twice. I seem to be having list issues)

Currently working on documenting various functions, and I think I found
a buglet with the random.noncentral_f function. Parameters are dfnum,
dfden, and nonc, for the two degrees of freedom and the non-centrality.

If the noncentrality is zero, then this should reduce to simply an F
statistic. But when I tested the function, it does not allow dfnum = 1,
while random.f does.

In [15]: np.random.f(1., 48, 10)
Out[15]:
array([ 0.00858266,  0.03674191,  0.01160252,  0.00440044,  0.00113941,
        0.00145305,  0.07257887,  0.01387694,  0.08171868,  0.00591559])

In [16]: np.random.noncentral_f(1., 48, 0, 10)
ValueError: dfnum <= 1

Interestingly, I tried the equivalent function in R, and it even works
for dfnum < 1, though I'm not clear on what that means.

> rf(10, .9, 48, 0)
 [1] 0.005920147 0.246702423 0.213558149 0.328364697 4.157683259 0.048531606
 [7] 0.463760833 6.817103616 0.058326258 0.813737713

--
-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------
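Until the boundary check is relaxed, the nonc == 0 case can simply be
routed to the central F distribution; a minimal sketch, where the
wrapper name noncentral_f_safe is made up:

    import numpy as np

    def noncentral_f_safe(dfnum, dfden, nonc, size=None):
        # noncentral_f currently rejects dfnum <= 1, even though
        # nonc == 0 should reduce to the plain F distribution
        if nonc == 0:
            return np.random.f(dfnum, dfden, size)
        return np.random.noncentral_f(dfnum, dfden, nonc, size)

    samples = noncentral_f_safe(1., 48, 0, 10)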
From cournape at gmail.com  Sun Jun 28 23:40:52 2009
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 29 Jun 2009 12:40:52 +0900
Subject: [SciPy-user] Performance Python examples
In-Reply-To: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>
References: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>
Message-ID: <5b8d13220906282040j2ce8eb69qea292ecfd04755b1@mail.gmail.com>

On Mon, Jun 29, 2009 at 11:41 AM, David Powell wrote:
>
> PS: I had a try at writing something with numexpr, but what I came up
> with was slower than just using numpy arrays. So I'd probably stick
> to using weave.inline. However I just realised that to get the full
> benefit of using openmp needs gcc 4.x, but I am not sure which
> version of mingw numpy and scipy are built with and whether this will
> cause compatibility problems.

You will not be able to use openmp on windows, because scipy/numpy is
built with g77, and you can't mix g77 (the fortran compiler of gcc 3.x)
and gfortran (the fortran compiler of gcc 4.x). You will have to build
numpy and scipy by yourself in this case.

David

From davidanthonypowell at gmail.com  Sun Jun 28 23:51:37 2009
From: davidanthonypowell at gmail.com (David Powell)
Date: Mon, 29 Jun 2009 13:51:37 +1000
Subject: [SciPy-user] Performance Python examples
In-Reply-To: <5b8d13220906282040j2ce8eb69qea292ecfd04755b1@mail.gmail.com>
References: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>
	<5b8d13220906282040j2ce8eb69qea292ecfd04755b1@mail.gmail.com>
Message-ID: <4f8149830906282051h73fcd166n66aa4f28c08860c5@mail.gmail.com>

2009/6/29 David Cournapeau:
> You will not be able to use openmp on windows, because scipy/numpy is
> built with g77, and you can't mix g77 (the fortran compiler of gcc
> 3.x) and gfortran (the fortran compiler of gcc 4.x). You will have to
> build numpy and scipy by yourself in this case.

But if I am calling C code with weave.inline, then this shouldn't have
anything to do with the fortran compiler.

As an aside, I notice that the windows 64-bit version of numpy 1.3 is
built with mingw64, which as far as I can see uses gcc 4.4. Can we
expect that scipy 0.8 will be too?
From cournape at gmail.com  Sun Jun 28 23:57:08 2009
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 29 Jun 2009 12:57:08 +0900
Subject: [SciPy-user] Performance Python examples
In-Reply-To: <4f8149830906282051h73fcd166n66aa4f28c08860c5@mail.gmail.com>
References: <4f8149830906281941i552e9df6g12e5ebc5d2e2a604@mail.gmail.com>
	<5b8d13220906282040j2ce8eb69qea292ecfd04755b1@mail.gmail.com>
	<4f8149830906282051h73fcd166n66aa4f28c08860c5@mail.gmail.com>
Message-ID: <5b8d13220906282057q7dacdc9bx74719838b38665cc@mail.gmail.com>

On Mon, Jun 29, 2009 at 12:51 PM, David Powell wrote:
> But if I am calling C code with weave.inline, then this shouldn't
> have anything to do with the fortran compiler.

Since you were talking about f2py, I assumed you wanted to use openmp
in Fortran. I don't know if you can mix gcc 4 and gcc 3 even for C
code, especially on windows.

> As an aside, I notice that the windows 64-bit version of numpy 1.3 is
> built with mingw64, which as far as I can see uses gcc 4.4. Can we
> expect that scipy 0.8 will be too?

The 64 bits version has many problems, and is unsuitable for real work
ATM. Official, "production-ready" versions may not be based on gcc 4.4
(or even gcc at all).

David

From millman at berkeley.edu  Mon Jun 29 03:00:16 2009
From: millman at berkeley.edu (Jarrod Millman)
Date: Mon, 29 Jun 2009 00:00:16 -0700
Subject: [SciPy-user] ANN: SciPy 2009 student sponsorship
Message-ID:

I am pleased to announce that the Python Software Foundation is
sponsoring 10 students' travel, registration, and accommodation for the
SciPy 2009 conference (Aug. 18-23). The focus of the conference is both
on scientific libraries and tools developed with Python and on
scientific or engineering achievements using Python. If you're in
college or a graduate program, please check out the details here:

http://conference.scipy.org/student-funding

About the conference
--------------------

SciPy 2009, the 8th Python in Science conference, will be held from
August 18-23, 2009 at Caltech in Pasadena, CA, USA. The conference
starts with two days of tutorials on the scientific Python tools. There
will be two tracks: one introducing the basic tools to beginners, and
one covering more advanced tools. The tutorials will be followed by two
days of talks. Both days of talks will begin with a keynote address.
The first day's keynote will be given by Peter Norvig, the Director of
Research at Google, while the second keynote will be delivered by Jon
Guyer, a Materials Scientist in the Thermodynamics and Kinetics Group
at NIST. The program committee will select the remaining talks from
submissions to our call for papers. All selected talks will be included
in our conference proceedings edited by the program committee.
After the talks each day we will provide several rooms for impromptu
birds-of-a-feather discussions. Finally, the last two days of the
conference will be used for a number of coding sprints on the major
software projects in our community.

For the 8th consecutive year, the conference will bring together the
developers and users of the open source software stack for scientific
computing with Python. Attendees have the opportunity to review the
available tools and how they apply to specific problems. By providing a
forum for developers to share their Python expertise with the wider
commercial, academic, and research communities, this conference fosters
collaboration and facilitates the sharing of software components,
techniques, and a vision for high-level language use in scientific
computing.

For further information, please visit the conference homepage:
http://conference.scipy.org.

Important Dates
---------------

* Friday, July 3: Abstracts due
* Friday, July 10: Announce accepted talks, post schedule
* Friday, July 10: Early registration ends
* Tuesday-Wednesday, August 18-19: Tutorials
* Thursday-Friday, August 20-21: Conference
* Saturday-Sunday, August 22-23: Sprints
* Friday, September 4: Papers for proceedings due

Executive Committee
-------------------

* Jarrod Millman, UC Berkeley, USA (Conference Chair)
* Gaël Varoquaux, INRIA Saclay, France (Program Co-Chair)
* Stéfan van der Walt, University of Stellenbosch, South Africa
  (Program Co-Chair)
* Fernando Pérez, UC Berkeley, USA (Tutorial Chair)

From christian-baehnisch at gmx.de  Mon Jun 29 04:10:30 2009
From: christian-baehnisch at gmx.de (Christian Bähnisch)
Date: Mon, 29 Jun 2009 10:10:30 +0200
Subject: [SciPy-user] scipy.linalg.inv returns array
Message-ID: <200906291010.31210.christian-baehnisch@gmx.de>

Hello,

I noticed that "scipy.linalg.inv" returns the inverse of a matrix as an
array, not as a matrix. I think returning a matrix would be more
convenient, or did I miss something?

greetings, Chris
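One way to keep matrix semantics in the meantime is to re-wrap the
result yourself; a minimal sketch:

    import numpy as np
    from scipy import linalg

    M = np.matrix([[2.0, 0.0],
                   [1.0, 3.0]])
    Minv = linalg.inv(M)                  # comes back as a plain ndarray
    Minv_m = np.asmatrix(linalg.inv(M))   # re-wrapped, so * stays matrix
                                          # multiplication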
Is > this a convention in other packages too? > That's more than a convention: that's the definition of the operations. Although the effect on signals often is similar, the underlying mathematical operation is quite different. For example convolution is a commutative operator, whereas correlation isn't. I guess it depends on the background, but for anyone with a bit of EE background, there is no ambiguity at all I guess. cheers, David From stefan at sun.ac.za Mon Jun 29 06:14:02 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 29 Jun 2009 12:14:02 +0200 Subject: [SciPy-user] ndimage.convolve behaviour? In-Reply-To: <4A4888F1.2030900@ar.media.kyoto-u.ac.jp> References: <9457e7c80906230254s30dfca56t6067999f7a8de4c3@mail.gmail.com> <3d375d730906231238q508d9889x7d664456811bc52f@mail.gmail.com> <4A4888F1.2030900@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80906290314q2cbc28cdl68994f5ceaf6ecf3@mail.gmail.com> 2009/6/29 David Cournapeau : > David Warde-Farley wrote: >> >> So the only difference is which direction the axes are read? Odd. Is >> this a convention in other packages too? >> > > That's more than a convention: that's the definition of the operations. > Although the effect on signals often is similar, the underlying > mathematical operation is quite different. For example convolution is a > commutative operator, whereas correlation isn't. I guess it depends on > the background, but for anyone with a bit of EE background, there is no > ambiguity at all I guess. You can think of a convolution as calculating the response of a system g(x) on a signal f(x). Since the first values of f(x) (i.e., at x=0) are the first values to enter the system g(x), when doing convolution you need to flip f(x) around before moving it across. See http://www.jhu.edu/signals/convolve/ for a nice illustration. Regards St?fan From nwagner at iam.uni-stuttgart.de Mon Jun 29 07:31:09 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 29 Jun 2009 13:31:09 +0200 Subject: [SciPy-user] scipy.linalg.inv returns array In-Reply-To: <200906291010.31210.christian-baehnisch@gmx.de> References: <200906291010.31210.christian-baehnisch@gmx.de> Message-ID: On Mon, 29 Jun 2009 10:10:30 +0200 Christian B?hnisch wrote: > Hello, > > I noticed that "scipy.linalg.inv" returns the inverse of >a matrix as an array > not as a matrix. I think returning a matrix would be >more convenient, or > missed I something? > > greetings, Chris > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Hi Chris, That is a known issue See http://projects.scipy.org/scipy/ticket/585 Cheers, Nils From joschu at caltech.edu Mon Jun 29 15:18:08 2009 From: joschu at caltech.edu (John Schulman) Date: Mon, 29 Jun 2009 15:18:08 -0400 Subject: [SciPy-user] installation problems on SUSE 10.2 In-Reply-To: <185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com> References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com> <4A445655.8050807@ar.media.kyoto-u.ac.jp> <185761440906260700g2b442ebt55ed06a879116770@mail.gmail.com> <185761440906260702h296c015bs93c285442d5ba821@mail.gmail.com> <185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com> Message-ID: <185761440906291218k1051dd0w7a589d97c24ae135@mail.gmail.com> I'm going to post my solution, because it's a really easy general solution to these installation problems, and it isn't mentioned on the scipy webpages. 
From nwagner at iam.uni-stuttgart.de  Mon Jun 29 07:31:09 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 29 Jun 2009 13:31:09 +0200
Subject: [SciPy-user] scipy.linalg.inv returns array
In-Reply-To: <200906291010.31210.christian-baehnisch@gmx.de>
References: <200906291010.31210.christian-baehnisch@gmx.de>
Message-ID:

On Mon, 29 Jun 2009 10:10:30 +0200, Christian Bähnisch wrote:
> I noticed that "scipy.linalg.inv" returns the inverse of a matrix as
> an array, not as a matrix. I think returning a matrix would be more
> convenient, or did I miss something?

Hi Chris,

That is a known issue. See
http://projects.scipy.org/scipy/ticket/585

Cheers,
Nils

From joschu at caltech.edu  Mon Jun 29 15:18:08 2009
From: joschu at caltech.edu (John Schulman)
Date: Mon, 29 Jun 2009 15:18:08 -0400
Subject: [SciPy-user] installation problems on SUSE 10.2
In-Reply-To: <185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com>
References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com>
	<4A445655.8050807@ar.media.kyoto-u.ac.jp>
	<185761440906260700g2b442ebt55ed06a879116770@mail.gmail.com>
	<185761440906260702h296c015bs93c285442d5ba821@mail.gmail.com>
	<185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com>
Message-ID: <185761440906291218k1051dd0w7a589d97c24ae135@mail.gmail.com>

I'm going to post my solution, because it's a really easy general
solution to these installation problems, and it isn't mentioned on the
scipy webpages.

I downloaded and built sage from source. All it requires is a c/c++
compiler. http://www.sagemath.org/download-source.html

This took about 3 hours.

This gives me a totally self-contained local python installation with
scipy in it. It also has ipython, easy_install, and all that other nice
stuff in the bin directory.

When I log on, I go to the sage directory and type in the command

./sage -sh

This sets all the shell variables so the executables in the sage bin
directory become the default, so I just have to type 'ipython' and I've
got my sage ipython.
From dwf at cs.toronto.edu  Mon Jun 29 18:38:49 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 29 Jun 2009 18:38:49 -0400
Subject: [SciPy-user] ndimage.convolve behaviour?
In-Reply-To: <9457e7c80906290314q2cbc28cdl68994f5ceaf6ecf3@mail.gmail.com>
References: <9457e7c80906230254s30dfca56t6067999f7a8de4c3@mail.gmail.com>
	<3d375d730906231238q508d9889x7d664456811bc52f@mail.gmail.com>
	<4A4888F1.2030900@ar.media.kyoto-u.ac.jp>
	<9457e7c80906290314q2cbc28cdl68994f5ceaf6ecf3@mail.gmail.com>
Message-ID: <73E247A2-1916-4CF2-9856-1ADC74DB754F@cs.toronto.edu>

On 29-Jun-09, at 6:14 AM, Stéfan van der Walt wrote:

> You can think of a convolution as calculating the response of a
> system g(x) to a signal f(x). Since the first values of f(x) (i.e.,
> at x=0) are the first values to enter the system g(x), when doing
> convolution you need to flip f(x) around before moving it across. See
>
> http://www.jhu.edu/signals/convolve/

Right; somehow, even though I've seen that integral a thousand times,
the way I was thinking about it in 2D did not quite match up with the
negative sign being there. Apologies for my brain fart.

Regards,

David

From alan at ajackson.org  Mon Jun 29 21:32:08 2009
From: alan at ajackson.org (alan at ajackson.org)
Date: Mon, 29 Jun 2009 20:32:08 -0500
Subject: [SciPy-user] Paper in Science
Message-ID: <20090629203208.64a5e211@ajackson.org>

Anne,

Is that your paper in the June 12 issue of Science?

Alan

--
-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------

From stefan at sun.ac.za  Tue Jun 30 03:12:55 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 30 Jun 2009 09:12:55 +0200
Subject: [SciPy-user] Paper in Science
In-Reply-To: <20090629203208.64a5e211@ajackson.org>
References: <20090629203208.64a5e211@ajackson.org>
Message-ID: <9457e7c80906300012o6c4f1ef3l8bec426301943fca@mail.gmail.com>

2009/6/30 alan at ajackson.org:
> Anne,
>
> Is that your paper in the June 12 issue of Science?

Yes, it is theirs. Well done!

http://www.sciencemag.org/cgi/content/abstract/324/5933/1411

Cheers
Stéfan
From eike.welk at gmx.net  Tue Jun 30 08:43:02 2009
From: eike.welk at gmx.net (Eike Welk)
Date: Tue, 30 Jun 2009 14:43:02 +0200
Subject: [SciPy-user] installation problems on SUSE 10.2
In-Reply-To: <185761440906291218k1051dd0w7a589d97c24ae135@mail.gmail.com>
References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com>
	<185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com>
	<185761440906291218k1051dd0w7a589d97c24ae135@mail.gmail.com>
Message-ID: <200906301443.02960.eike.welk@gmx.net>

On Monday 29 June 2009, John Schulman wrote:
> I'm going to post my solution, because it's a really easy general
> solution to these installation problems, and it isn't mentioned on
> the scipy webpages.
>
> I downloaded and built sage from source. All it requires is a c/c++
> compiler. http://www.sagemath.org/download-source.html
>
> This took about 3 hours.
>
> This gives me a totally self-contained local python installation
> with scipy in it. It also has ipython, easy_install, and all that
> other nice stuff in the bin directory.
>
> When I log on, I go to the sage directory and type in the command
> ./sage -sh
> This sets all the shell variables so the executables in the sage
> bin directory become the default, so I just have to type 'ipython'
> and I've got my sage ipython.

Thanks for telling us about an alternative, painless way to install
Numpy/Scipy/Matplotlib on Suse Linux. Could you put a description of
the installation process into Scipy's Wiki, please?

The "science" repository mentioned in the Wiki worked when I installed
openSuse 11.0 on my computer, about a year ago. (Both repositories that
you talked about, 'science' and 'ScientificLinux', don't contain
packages for your operating system, Suse 10.2, anymore. Therefore these
repositories are of little help for you.)

That there are now two repositories for Numpy/Scipy, and also David
Cournapeau's repository for Atlas, is a bit confusing. Some time ago I
e-mailed some of the maintainers and proposed that they merge these
repositories, but unfortunately it has not happened.

I wrote the section about installing Numpy/Scipy/Matplotlib on Suse
when I had trouble installing them myself. The wording has to be
updated somehow, since there are no longer packages for Suse 10.2 at
the mentioned locations.

Kind regards,
Eike.

From joschu at caltech.edu  Tue Jun 30 09:34:47 2009
From: joschu at caltech.edu (John Schulman)
Date: Tue, 30 Jun 2009 09:34:47 -0400
Subject: [SciPy-user] installation problems on SUSE 10.2
In-Reply-To: <200906301443.02960.eike.welk@gmx.net>
References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com>
	<185761440906260845s103ac83bm6a1fceb9f8b9c9bb@mail.gmail.com>
	<185761440906291218k1051dd0w7a589d97c24ae135@mail.gmail.com>
	<200906301443.02960.eike.welk@gmx.net>
Message-ID: <185761440906300634u1a7e87f0n2bbe07f12be29abf@mail.gmail.com>

Done:
http://www.scipy.org/Installing_SciPy/Linux#head-f4511786c10fc5a608027f22e65df5e5078357b6

On Tue, Jun 30, 2009 at 8:43 AM, Eike Welk wrote:
> Thanks for telling us about an alternative, painless way to install
> Numpy/Scipy/Matplotlib on Suse Linux. Could you put a description of
> the installation process into Scipy's Wiki, please?
From eike.welk at gmx.net  Tue Jun 30 12:58:37 2009
From: eike.welk at gmx.net (Eike Welk)
Date: Tue, 30 Jun 2009 18:58:37 +0200
Subject: [SciPy-user] installation problems on SUSE 10.2
In-Reply-To: <185761440906300634u1a7e87f0n2bbe07f12be29abf@mail.gmail.com>
References: <185761440906251501q153bb3f3vdb89ff2376d4d36e@mail.gmail.com>
	<200906301443.02960.eike.welk@gmx.net>
	<185761440906300634u1a7e87f0n2bbe07f12be29abf@mail.gmail.com>
Message-ID: <200906301858.37386.eike.welk@gmx.net>

On Tuesday 30 June 2009, John Schulman wrote:
> Done:
> http://www.scipy.org/Installing_SciPy/Linux#head-f4511786c10fc5a608027f22e65df5e5078357b6

Great! I changed the wording of the Suse section a little and mentioned
Sage there too. I plan to try out the repositories and update the text
more this week. If you have additional knowledge about Suse, feel free
to edit the Suse section.

Kind regards,
Eike.

From d_l_goldsmith at yahoo.com  Tue Jun 30 14:04:27 2009
From: d_l_goldsmith at yahoo.com (David Goldsmith)
Date: Tue, 30 Jun 2009 11:04:27 -0700 (PDT)
Subject: [SciPy-user] Paper in Science
Message-ID: <507365.34749.qm@web52104.mail.re2.yahoo.com>

Indeed, congratulations!

DG

> Date: Mon, 29 Jun 2009 20:32:08 -0500
> From: alan at ajackson.org
> Subject: [SciPy-user] Paper in Science
>
> Anne,
>
> Is that your paper in the June 12 issue of Science?
>
> Alan

> Date: Tue, 30 Jun 2009 09:12:55 +0200
> From: Stéfan van der Walt
> Subject: Re: [SciPy-user] Paper in Science
>
> Yes, it is theirs. Well done!
>
> http://www.sciencemag.org/cgi/content/abstract/324/5933/1411
>
> Cheers
> Stéfan

From chanley at stsci.edu  Tue Jun 30 16:26:03 2009
From: chanley at stsci.edu (Christopher Hanley)
Date: Tue, 30 Jun 2009 16:26:03 -0400
Subject: [SciPy-user] install ndimage standalone?
In-Reply-To:
References:
Message-ID: <4A4A74DB.50002@stsci.edu>

Pauli Virtanen wrote:
> On 2009-06-24, Russell E. Owen wrote:
>> Is there some way to easily obtain and install the ndimage component
>> of scipy without installing the whole thing? I'm trying to migrate
>> some old Numarray code that used ndimage and I don't want to have to
>> install all of scipy just to use ndimage.
>
> There's a setup.py under scipy/ndimage in the source directory, and I
> believe you can use it to build the ndimage package separately.

I believe some of the newer parts of ndimage depend on other parts of
scipy. However, if you are only interested in replicating functionality
of ndimage that existed in the numarray era, you can grab a standalone
version from here:

http://www.stsci.edu/resources/software_hardware/pyraf/stsci_python/current/download

This version of ndimage is distributed as part of stsci_python. We try
to keep it up to date with the scipy version, sans the parts that
depend on other scipy components.

Chris

--
Christopher Hanley
Senior Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338
From arokem at berkeley.edu  Tue Jun 30 18:16:28 2009
From: arokem at berkeley.edu (Ariel Rokem)
Date: Tue, 30 Jun 2009 15:16:28 -0700
Subject: [SciPy-user] Memory usage of scipy.io.loadmat
Message-ID: <43958ee60906301516s637b180fm35b96d5b61a1b549@mail.gmail.com>

Hi everyone,

I have stumbled on some interesting behavior of scipy.io.loadmat. The
short of it: it looks like loadmat is gobbling up memory in some
unjustified manner and releasing it under some strange circumstances.
Here's the long story.

This is all happening on Mac OS 10.5.7, running EPD 4.0.30001 (Python
2.5.2), but with a relatively new version of scipy (see below). I start
ipython -pylab in one terminal and run 'top' in another, in order to
monitor the memory usage. Here's what I get initially:

PhysMem: 417M wired, 449M active, 183M inactive, 1055M used, 3041M free.

Then:

In [1]: import scipy
In [2]: scipy.__version__
Out[2]: '0.8.0.dev5606'
In [3]: import scipy.io as sio

Here's what it looks like now:

PhysMem: 418M wired, 450M active, 183M inactive, 1058M used, 3038M free.

So far, so good. I read in a large matfile with tons of data in it:

In [4]: a = sio.loadmat('/Users/arokem/Projects/SchizoSpread/Scans/SMR033109_MC/Gray/Original/TSeries/Scan1/tSeries1.mat')

PhysMem: 419M wired, 1024M active, 183M inactive, 1632M used, 2464M free.

So, about 600 MB of memory is taken up by this new variable. Now to the
weirdness:

In [5]: b
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
/Users/arokem/<ipython console> in <module>()
NameError: name 'b' is not defined

Of course, 'b' doesn't exist! But now the memory usage has dramatically
gone down:

PhysMem: 420M wired, 740M active, 183M inactive, 1350M used, 2746M free.

So, just invoking an error in the ipython command line has freed up
300 MB. Where did they come from? I tried different things: assigning
other variables doesn't seem to free up this memory, and neither do
calls to other functions, except "plot()", which does seem to do the
trick for some reason. Interestingly, when I run all this in a python
interactive session (and not ipython), I get a similar memory usage
initially. Calling a non-existent variable does not free up the memory,
but other things do. For example, importing matplotlib.pylab into the
namespace did the trick. Does anyone have any idea what is going on?

Thanks -- Ariel

--
Ariel Rokem
Helen Wills Neuroscience Institute
University of California, Berkeley
http://argentum.ucbso.berkeley.edu/ariel

From saintmlx at apstat.com  Tue Jun 30 18:33:10 2009
From: saintmlx at apstat.com (Xavier Saint-Mleux)
Date: Tue, 30 Jun 2009 18:33:10 -0400
Subject: [SciPy-user] Memory usage of scipy.io.loadmat
In-Reply-To: <43958ee60906301516s637b180fm35b96d5b61a1b549@mail.gmail.com>
References: <43958ee60906301516s637b180fm35b96d5b61a1b549@mail.gmail.com>
Message-ID: <4A4A92A6.9070503@apstat.com>

> So, just invoking an error in the ipython command line has freed up
> 300 MB. Where did they come from? [...] Does anyone have any idea
> what is going on?

It is probably just the garbage collector being invoked. If you invoke
it manually, does it always free memory? e.g.:

import gc
gc.collect()

Xavier Saint-Mleux
If you invoke > it manually, does it always free memory? e.g.: > > import gc > > gc.collect() > > > > Xavier Saint-Mleux > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: