From massimo.sandal at unibo.it Fri Jun 1 06:55:23 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Fri, 01 Jun 2007 12:55:23 +0200
Subject: [SciPy-user] SciPy Journal
In-Reply-To: <465F00F5.7080209@gmail.com>
References: <465E5D58.9030107@ieee.org> <465EE016.3070900@unibo.it> <465EE88B.2070409@unibo.it> <465F00F5.7080209@gmail.com>
Message-ID: <465FFB1B.1000000@unibo.it>

Robert Kern wrote:
> massimo sandal wrote:
>
>> Same here. The problem is, will someone publish it? Will it gain
>> academic respectability? An academic journal revolving around a single
>> software library seems very odd to me -is there a, let's say, "GLIBC
>> Journal" somewhere? Maybe that's just me being ignorant.
>
> Look at the "Journal of Statistical Software". Its name might as well be the
> "Journal of R".
>
> http://www.jstatsoft.org/

LOL. Still, I don't feel convinced it's a worthwhile idea (J.Stat.Softw. doesn't look like one, either). However, if you want to do it, well, do it. :)

It may also be that I'll contribute to it in the future, if you take into consideration entries about software that uses SciPy. I'd also broaden the scope to take into account NumPy, Matplotlib and, in general, all scientific work done in Python.

Another interesting requirement could be that algorithms, software etc. presented must have at least one open-source (as defined by OSI) implementation.

The only thing I am really worried about is the requirement of an article along with documentation for inclusion of code into SciPy. This can have drawbacks.

m.

--
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail: Via Irnerio 48, 40126 Bologna, Italy
email: massimo.sandal at unibo.it
tel: +39-051-2094388
fax: +39-051-2094387

From peridot.faceted at gmail.com Fri Jun 1 16:57:15 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 1 Jun 2007 16:57:15 -0400
Subject: [SciPy-user] hyp2f1 - bad performance for some values of the argument
Message-ID: 

On 30/05/07, Anne Archibald wrote:

> So this is not generally useful, but I have cured my problem through
> an application of one of the "quadratic transformations" (equation
> 15.3.32 in Abramowitz & Stegun - which is online!). It gives me
> accuracies on the order of one part in 10^13, not as good as I was
> hoping but better than the 1 in 10^8 I was getting from the averaging
> shortcut. Good enough to get positive definite matrices out of it
> anyway.

Perhaps I spoke too soon.

It appears that scipy evaluates hyp2f1 very slowly for certain values of the argument:

In [92]: x=0.1; n=1000; m=100;
timeit.Timer("hyp2f1(1/2.,1.,3.2,x)",setup="from scipy.special import
hyp2f1; from numpy import ones;
x=%f*ones(%d)"%(x,m)).timeit(n)/float(n*m)
Out[92]: 2.2619605064392088e-06

In [93]: x=0.999; n=1000; m=100;
timeit.Timer("hyp2f1(1/2.,1.,3.2,x)",setup="from scipy.special import
hyp2f1; from numpy import ones;
x=%f*ones(%d)"%(x,m)).timeit(n)/float(n*m)
Out[93]: 0.00085240376949310298

(uh, no I don't write real code like this; those numbers are seconds for a single function evaluation. I use the vectorized version because for small x the function-call overhead swamps the evaluation time.)

Presumably it works by transforming the function until |x|<1 and then using the series. But the series converges very slowly for |x| close to 1.
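(To see just how slowly, here is a throwaway sketch of mine that counts terms of the raw Gauss series -- nothing to do with the actual cephes code, just the textbook recurrence for the term ratio:)

    # Count the terms of the raw Gauss series for 2F1(a, b; c; x); the ratio
    # of consecutive terms is (a+n)(b+n)/((c+n)(1+n)) * x, which shrinks
    # very slowly when x is near 1. (Throwaway illustration, not library code.)
    def series_terms(a, b, c, x, maxterms=10**6):
        term, total, n = 1.0, 1.0, 0
        while abs(term) > 1e-16 * abs(total) and n < maxterms:
            term *= (a + n) * (b + n) / ((c + n) * (1.0 + n)) * x
            total += term
            n += 1
        return n, total

    print(series_terms(0.5, 1.0, 3.2, 0.1))    # converges in ~20 terms
    print(series_terms(0.5, 1.0, 3.2, 0.999))  # takes tens of thousands of terms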
It is possible to keep transforming the function until |x|<1/2, where the series converges at a respectable speed. Of course these transformations suffer from the same difficulties as the ones used to get |x|<1, namely, they are singular for certain values of the arguments; while there are special-case formulas in A&S for these situations, they do not handle points *near* the singularities (although conceivably a derivative formula could be used there...) Any suggestions on how to efficiently evaluate hyp2f1(0.5,1,a+2,x) for x near 1? It turns out to be the limiting factor for performance of my program (dwarfing the linear algebra on 300 by 300 matrices). Thanks, Anne M. Archibald From wweckesser at mail.colgate.edu Sun Jun 3 13:55:12 2007 From: wweckesser at mail.colgate.edu (Warren Weckesser) Date: Sun, 03 Jun 2007 13:55:12 -0400 Subject: [SciPy-user] Announcement: VFGEN Message-ID: <1180893312.8340.8.camel@localhost.localdomain> Dear SciPy users: I would like to let the users of the SciPy ODE solvers know about a tool that I have developed called VFGEN. VFGEN is a program that takes a specification of a vector field (in other words, a system of differential equations) and generates source code for a wide variety of ODE solvers and other numerical tools. VFGEN includes a command for generating Python code to be used with the SciPy ODEINT function. You can find the program here: http://math.colgate.edu/~wweckesser/software/vfgen Comments, corrections, and requests for enhancements would all be appreciated. Best regards, Warren Weckesser From cookedm at physics.mcmaster.ca Sun Jun 3 15:38:48 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 3 Jun 2007 15:38:48 -0400 Subject: [SciPy-user] hyp2f1 - bad performance for some values of the argument In-Reply-To: References: Message-ID: <20070603193847.GA11046@arbutus.physics.mcmaster.ca> On Fri, Jun 01, 2007 at 04:57:15PM -0400, Anne Archibald wrote: > On 30/05/07, Anne Archibald wrote: > > > So this is not generally useful, but I have cured my problem through > > an application of one of the "quadratic transformations" (equation > > 15.3.32 in Abramowitz & Stegun - which is online!). It gives me > > accuracies on the order of one part in 10^13, not as good as I was > > hoping but better than the 1 in 10^8 I was getting from the averaging > > shortcut. Good enough to get positive definite matrices out of it > > anyway. > > Perhaps I spoke too soon. > > It appears that scipy evaluates hyp2f1 very slowly for certain values > of the argument: > > In [92]: x=0.1; n=1000; m=100; > timeit.Timer("hyp2f1(1/2.,1.,3.2,x)",setup="from scipy.special import > hyp2f1; from numpy import ones; > x=%f*ones(%d)"%(x,m)).timeit(n)/float(n*m) > Out[92]: 2.2619605064392088e-06 > > In [93]: x=0.999; n=1000; m=100; > timeit.Timer("hyp2f1(1/2.,1.,3.2,x)",setup="from scipy.special import > hyp2f1; from numpy import ones; > x=%f*ones(%d)"%(x,m)).timeit(n)/float(n*m) > Out[93]: 0.00085240376949310298 > > (uh, no I don't write real code like this; those numbers are seconds > for a single function evaluation. I use the vectorized version because > for small x the function-call overhead swamps the evaluation time.) > > Presumably it works by transforming the function until |x|<1 and then > using the series. But the series converges very slowly for |x| close > to 1. It is possible to keep transforming the function until |x|<1/2, > where the series converges at a respectable speed. For c < a+b, |x| < 1, it first tries the power series. 
If the accumulated error is too large, it uses the recurrence from 15.2.27 to move the value of c. (I don't know how effective that is). For c > a+b, |x| < 1, it again tries the power series first, then uses 15.3.6 to transform x to 1-x, and tries the power series again.

> Of course these transformations suffer from the same difficulties as
> the ones used to get |x|<1, namely, they are singular for certain
> values of the arguments; while there are special-case formulas in A&S
> for these situations, they do not handle points *near* the
> singularities (although conceivably a derivative formula could be used
> there...)

Except that you need derivatives with respect to the parameters, for which I don't know of any nice expressions except for doing it on the power series.

> Any suggestions on how to efficiently evaluate hyp2f1(0.5,1,a+2,x) for
> x near 1? It turns out to be the limiting factor for performance of my
> program (dwarfing the linear algebra on 300 by 300 matrices).

AMS 15.3.6? That turns hyp2f1(0.5, 1, a+2, x) (for x < 1) into

    GAMMA(a+2)*GAMMA(a+0.5)/(GAMMA(a+1.5)*GAMMA(a+1)) * hyp2f1(0.5, 1, 0.5-a, 1-x)
    + (1-x)^(a+0.5)*GAMMA(a+2)*GAMMA(-0.5-a)/sqrt(pi) * hyp1f0(a+1,1-x)

hyp1f0(a+1,1-x) = x^(-a-1), so the last term is

    (1/x-1)^a * sqrt(1-x)/x * GAMMA(a+2)*GAMMA(-0.5-a)/sqrt(pi)

Further Maple manipulation suggests that the total is

    (2*a+2)/(2*a+1) * hyp2f1(1/2,1,1/2-a, 1-x)
    - sqrt(Pi)*GAMMA(a+2)/GAMMA(a+3/2) * sec(Pi*a) * x^(-a-1)*(1-x)^(a+1/2)

which is probably as good as it's going to get. There is a (removable) singularity at a ~ (2*n+1)/2; I'm not sure how to avoid that except by clever rewriting of the power series for hyp2f1(1/2,1,1/2-a,1-x).
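A quick numerical cross-check of the rearranged expression against scipy's own hyp2f1 (just a sketch; a=0.3 and x=0.999 are arbitrary, with a chosen away from the half-integer singularities where sec(pi*a) blows up):

    import numpy as np
    from scipy.special import hyp2f1, gamma

    def transformed(a, x):
        # the rearranged 15.3.6 above, with sec(pi*a) written as 1/cos(pi*a)
        return ((2*a + 2) / (2*a + 1) * hyp2f1(0.5, 1.0, 0.5 - a, 1 - x)
                - np.sqrt(np.pi) * gamma(a + 2) / gamma(a + 1.5) / np.cos(np.pi * a)
                * x**(-a - 1) * (1 - x)**(a + 0.5))

    a, x = 0.3, 0.999
    print(hyp2f1(0.5, 1.0, a + 2, x), transformed(a, x))  # should agree closely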
--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From rhc28 at cornell.edu Sun Jun 3 22:16:09 2007
From: rhc28 at cornell.edu (Rob Clewley)
Date: Sun, 3 Jun 2007 20:16:09 -0600
Subject: [SciPy-user] Announcement: VFGEN
In-Reply-To: <1180893312.8340.8.camel@localhost.localdomain>
References: <1180893312.8340.8.camel@localhost.localdomain>
Message-ID: 

Hi Warren,

This is nice. I will endeavour to provide an export from PyDSTool to your XML format in the future, as portability is a big issue! I would like to make it easier for the XPP and Matlab community to use SciPy and PyDSTool, for one thing :) I have made a couple of additions to the Scipy wiki pages to reflect this utility for conversion from Matlab.

Cheers,
Rob

On 03/06/07, Warren Weckesser wrote:
> Dear SciPy users:
>
> I would like to let the users of the SciPy ODE solvers know about
> a tool that I have developed called VFGEN. VFGEN is a program
> that takes a specification of a vector field (in other words, a
> system of differential equations) and generates source code for a
> wide variety of ODE solvers and other numerical tools. VFGEN
> includes a command for generating Python code to be used with the
> SciPy ODEINT function.
>
> You can find the program here:
> http://math.colgate.edu/~wweckesser/software/vfgen
>
> Comments, corrections, and requests for enhancements would all be
> appreciated.
>
> Best regards,
>
> Warren Weckesser
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From giorgio.luciano at chimica.unige.it Mon Jun 4 04:59:05 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Mon, 04 Jun 2007 10:59:05 +0200
Subject: [SciPy-user] signal processing chapter for book
Message-ID: <4663D459.8040200@chimica.unige.it>

First of all, sorry for cross-posting.

As I wrote some time ago, we are trying to write a book proposal about the use of python/scipy/numpy in chemometrics and analytical chemistry. So far I've received positive answers from eight authors, and the only "missing" chapter is one about the use of python in digital signal processing (I've contacted some possible authors, but so far they are busy). The schedule will not be too tight, and the chapter doesn't need to be too long.

Hope to hear from you soon

Giorgio

From ryanlists at gmail.com Mon Jun 4 11:24:30 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 4 Jun 2007 10:24:30 -0500
Subject: [SciPy-user] odeint with digital data
Message-ID: 

I have a vector of experimental data that I need to use as part of a system of ode's. I would like to solve this system using integrate.odeint. Can odeint be forced to solve only at the discrete points in time where the experimentally measured signal is available? Or do I need to set up some interpolation function to find the measured signal at any time? If that signal is the only thing that explicitly depends on time and I set up a digital look-up table that returns the same constant value for the range from t to t+dt, would I effectively force odeint to do what I want? Am I making any sense? Is there a better way?

Thanks,

Ryan

From rhc28 at cornell.edu Mon Jun 4 11:50:56 2007
From: rhc28 at cornell.edu (Rob Clewley)
Date: Mon, 4 Jun 2007 09:50:56 -0600
Subject: [SciPy-user] odeint with digital data
In-Reply-To: 
References: 
Message-ID: 

Hi Ryan,

To my knowledge you cannot do this with odeint unless you change a constant value on the RHS (i.e. technically change your system) to reflect the changing input value after every time-step, but it is very easy to do in PyDSTool. There you can also force integration to be only at the discrete mesh points of your input signal (and would linearly interpolate in-between otherwise). I am certainly happy to help you set up your script to do this if you wish to try it in PyDSTool.

HTH!
Rob

On 04/06/07, Ryan Krauss wrote:
> I have a vector of experimental data that I need to use as part of a
> system of ode's. I would like to solve this system using
> integrate.odeint. Can odeint be forced to solve only at the discrete
> points in time where the experimentally measured signal is available?
> Or do I need to set up some interpolation function to find the
> measured signal at any time? If that signal is the only thing that
> explicitly depends on time and I set up a digital look-up table that
> returns the same constant value for the range from t to t+dt, would I
> effectively force odeint to do what I want? Am I making any sense?
> Is there a better way?
> > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Mon Jun 4 13:04:38 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 4 Jun 2007 12:04:38 -0500 Subject: [SciPy-user] odeint with digital data In-Reply-To: References: Message-ID: Thanks Rob. Perhaps the time has finally come. Yes, please help me set this up in PyDSTool. I will send a schematic description of the system in a few minutes. On 6/4/07, Rob Clewley wrote: > Hi Ryan, > > To my knowledge you cannot do this with odeint unless you change a > constant value on the RHS (i.e. technically change your system) to > reflect the changing input value after every time-step, but it is very > easy to do in PyDSTool. There you can also force integration to be > only at the discrete mesh points of your input signal (and would > linearly interpolate in-between otherwise). I am certainly happy to > help you set up your script to do this if you wish to try it in > PyDSTool. > > HTH! > Rob > > On 04/06/07, Ryan Krauss wrote: > > I have a vector of experimental data that I need to use as part of a > > system of ode's. I would like to solve this system using > > integrate.odeint. Can odeint be forced to solve only at the discrete > > points in time where the experimentally measured signal is available? > > Or do I need to set up some interpolation function to find the > > measured signal at any time? If that signal is the only thing that > > explictly depends on time and I set up a digital look-up table that > > returns the same constant value for the range from t to t+dt, would I > > effectively force odeint to do what I want? Am I making any sense? > > Is there a better way? > > > > Thanks, > > > > Ryan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: fbd.jpg Type: image/jpeg Size: 63348 bytes Desc: not available URL: From ryanlists at gmail.com Mon Jun 4 13:23:44 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 4 Jun 2007 12:23:44 -0500 Subject: [SciPy-user] odeint with digital data In-Reply-To: References: Message-ID: So, my system is an impact test machine that is being used for something slightly different than it was designed for and we are having to do some calculations to get the information we really want. There is a foam sample which I am trying to verify a stress-strain curve for. The problem is that the mass of the impactor and the compliance of the load cell lead to ringing in the data. m_1, m_2, and the load cell all initially are moving downward together with the same velocity, which is measured just before impacting the foam. Once the impact occurs, there is some relative motion between m_1 and m_2. F_measure (or F_meas) is the load cell force measured at constant sampling frequency (I think it is 10kHz). The force from the foam specimen should be a (theoretically known) function of the displacement x_1. Sorry, my arrows are showing a bad sign convention for x_1 and x_2. The displacement will be downward at least for the first half of the impact event because the initial velocity is downward. 
The accelerations are shown in the right direction, but are in the opposite direction of the initial velocity and the displacement.

So, I think the states are

    y1 = x1
    y2 = x2
    y3 = x1dot
    y4 = x2dot

and I was planning to set it up like this:

    Fmeasured = Fmeasured(t)  # but only known at discrete time intervals of 10kHz
    Ffoam = Ffoam(y1)         # I have a function to find Ffoam based on y1
                              # using a piecewise linear function

    y1dot = y3
    y2dot = y4
    y3dot = 1/m_1*(Ffoam - Fmeasured)
    y4dot = 1/m_2*Fmeasured

If I could set this up in PyDSTool, properly handling the fact that Fmeasured is known at discrete times, and integrate from one of those times to the next, I would be very happy. A plot of y1dot vs. Ffoam should get me my stress-strain curve back.

Thanks,

Ryan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: system_description.pdf
Type: application/pdf
Size: 52176 bytes
Desc: not available
URL: 

From hasslerjc at comcast.net Mon Jun 4 14:49:52 2007
From: hasslerjc at comcast.net (John Hassler)
Date: Mon, 04 Jun 2007 14:49:52 -0400
Subject: [SciPy-user] odeint with digital data
In-Reply-To: 
References: 
Message-ID: <46645ED0.9060902@comcast.net>

An HTML attachment was scrubbed...
URL: From lxander.m at gmail.com Mon Jun 4 16:17:16 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Mon, 4 Jun 2007 16:17:16 -0400 Subject: [SciPy-user] [Numpy-discussion] SciPy Journal In-Reply-To: <465E5D58.9030107@ieee.org> References: <465E5D58.9030107@ieee.org> Message-ID: <525f23e80706041317y2d6ba31dqc03c54ebacab4b6a@mail.gmail.com> On 5/31/07, Travis Oliphant wrote: > Hi everybody, > > I'm sorry for the cross posting, but I wanted to reach a wide audience > and I know not everybody subscribes to all the lists. > > I've been thinking more about the "SciPy Journal" that we discussed > before and I have some thoughts. > > 1) I'd like to get it going so that we can push out an electronic issue > after the SciPy conference (in September) > > 2) I think it's scope should be limited to papers that describe > algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we > could also accept papers that describe code that depends on NumPy / > SciPy that is also easily available. > > 3) I'd like to make a requirement for inclusion of new code in SciPy > that it have an associated journal article describing the algorithms, > design approach, etc. I don't see this journal article as being > user-interface documentation for the code. I see this is as a place to > describe why the code is organized as it is and to detail any algorithms > that are used. > > 4) The purpose of the journal as I see it is to > > a) provide someplace to document what is actually done in SciPy and > related software. > b) provide a teaching tool of numerical methods with actual "people > use-it" code that would be > useful to researchers, students, and professionals. > c) hopefully clever new algorithms will be developed for SciPy by > people using Python > that could be show-cased here > d) provide a peer-review publication opportunity for people who > contribute to open-source > software > > 5) We obviously need associate editors and people willing to review > submitted articles as well as people willing to submit articles. I > have two articles that can be submitted within the next two months. > What do other people have? > > > As an example of the kind of thing a SciPy Journal would be useful for. > I have recently over-hauled the interpolation.py file for SciPy by > incorporating the B-spline stuff that is partly in fitpack. In the > process I noticed two things: > > 1) I have (what seems to me) a different recursive algorithm for > calculating derivatives of B-splines than I could find in fitpack. > 2) I have developed a different way to determine the K-1 extra degrees > of freedom for Kth-order spline fitting than I have seen before. > > The SciPy Journal would be a great place to document both of these > things while describing the spline interpolation design of scipy.interpolate > > It is true that I could submit this stuff to other journals, but it > seems like that doing that makes the information harder to find in the > future and not easier. I'm also dissatisfied with how information > exclusionary academic journals seem to be. They are catching up, but > they are still not as accessible as other things available on the internet. > > Given the open nature of most scientific research, it is remarkable that > getting access to the information is not as easy as it should be with > modern search engines (if your internet domain does not subscribe to the > e-journal). > > Comments and feedback is welcome. An implementation oriented journal/newsletter in the vain of RNews () would be great. 
[Note: I remember seeing some mentions of the R project in various comments, but I am not sure anyone brought RNews as a model. Please excuse me if it was already brought up.] About R News R News is the newsletter of the R project for statistical computing and features short to medium length articles covering topics that might be of interest to users or developers of R, including * Changes in R: new features of the latest release * Changes on CRAN: new add-on packages, manuals, binary distributions, mirrors,... * Add-on packages: short introductions to or reviews of R extension packages * Programmer's Niche: nifty hints for programming in R (or S) * Hints for newcomers: Explaining sides of R that might not be so obvious from reading the manuals and FAQs. * Applications: Examples of analyzing data with R Of course, any write-up of library code should also be distributed with/in the code (doc strings) as well. Such a publication would provide a great outlet for people to write about how they implemented their research and would make a great companion to the publication of the analysis and results. Additionally, the development of a good document template and commendable examples from other contributors would likely encourage better communication as with leading journals. A lot of the material could be culled from the mailing lists and should be written up in a way (and in a format) that would allow it to be dropped into the wiki (e.g. the cookbook page) as well as included in the publication. From fredmfp at gmail.com Tue Jun 5 05:39:33 2007 From: fredmfp at gmail.com (fred) Date: Tue, 05 Jun 2007 11:39:33 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <465693B9.2000103@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> Message-ID: <46652F55.4020907@gmail.com> Pearu Peterson a ?crit : >> - -xM option does not exist anymore in recent ifort release; thus it >> conflicts >> with other options, such as -xP, -xT, etc... >> > > Do you know which version of the compiler dropped -xM option? Then > we can disable it by checking the value of self.get_version(). > Hi, I just got the answer from intel: -xM was supported in the 7.1 compilers, but support was discontinued in the 8.0 compiler.Please note that the 7.1 compilers might not work on more recent Linux distributions, though they should work on older ones. If you have a Pentium 4 or more recent processor, you should use -xW or similar switch in place of -xM. The SSE2 and later instructions are more powerful than the old MMX. Cheers, -- http://scipy.org/FredericPetit From pearu at cens.ioc.ee Tue Jun 5 05:57:29 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 05 Jun 2007 11:57:29 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <46652F55.4020907@gmail.com> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <46652F55.4020907@gmail.com> Message-ID: <46653389.50009@cens.ioc.ee> fred wrote: > Pearu Peterson a ?crit : >>> - -xM option does not exist anymore in recent ifort release; thus it >>> conflicts >>> with other options, such as -xP, -xT, etc... >>> >> Do you know which version of the compiler dropped -xM option? Then >> we can disable it by checking the value of self.get_version(). 
>>

> Hi,
>
> I just got the answer from intel:
>
> -xM was supported in the 7.1 compilers, but support was discontinued in
> the 8.0 compiler. Please note that the 7.1 compilers might not work on
> more recent Linux distributions, though they should work on older ones.
>
> If you have a Pentium 4 or more recent processor, you should use -xW or
> similar switch in place of -xM. The SSE2 and later instructions are more
> powerful than the old MMX.

Ok, thanks. I will add the following codelet to the intel compiler get_flags_arch() method:

    if v and v <= '7.1':
        if cpu.has_mmx() and (cpu.is_PentiumII() or cpu.is_PentiumIII()):
            opt.append('-xM')

Do you have references how the processor types map to options such as -xP, -xT, etc...?

Pearu

From fredmfp at gmail.com Tue Jun 5 06:02:36 2007
From: fredmfp at gmail.com (fred)
Date: Tue, 05 Jun 2007 12:02:36 +0200
Subject: [SciPy-user] f2py and ifort flags...
In-Reply-To: <46653389.50009@cens.ioc.ee>
References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <46652F55.4020907@gmail.com> <46653389.50009@cens.ioc.ee>
Message-ID: <466534BC.5020100@gmail.com>

Pearu Peterson wrote:
> Do you have references how the processor types map to options such as
> -xP, -xT, etc...?
>
From the manpage:

-x<codes>

(i32 and i32em) Generates specialized and optimized code for the processor that executes your program. The characters K, W, N, B, P, and T denote the processor types (<codes>). The following are -x options:

* -xK  Generates code for Intel Pentium III processors and compatible Intel processors.

* -xW  Generates code for Intel Pentium 4 processors and compatible Intel processors.

* -xN  Generates code for Intel Pentium 4 and compatible Intel processors with Streaming SIMD Extensions 2. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations, including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.

* -xB  Generates code for Intel Pentium M processors and compatible Intel processors. Also enables new optimizations in addition to Intel processor-specific optimizations.

* -xP  Generates code for Intel(R) Core(TM) Duo processors, Intel(R) Core(TM) Solo processors, Intel(R) Pentium(R) 4 processors with Streaming SIMD Extensions 3, and compatible Intel processors with Streaming SIMD Extensions 3. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations, including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.

* -xT  Generates code for Intel(R) Core(TM)2 Duo processors, Intel(R) Core(TM)2 Extreme processors, and the Dual-Core Intel(R) Xeon(R) processor 5100 series. This option also enables new optimizations in addition to Intel processor-specific optimizations, including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.

The only options available on Intel(R) EM64T systems are -xW, -xP, and -xT. On Mac OS systems, the only valid option is -xP. On these systems, it is the default and is always set. If you specify more than one processor value, code is generated for only the highest-performing processor specified. The highest-performing to lowest-performing processor values are: T, P, B, N, W, K.

Cheers,

--
http://scipy.org/FredericPetit

From hazelnusse at gmail.com Tue Jun 5 22:19:26 2007
From: hazelnusse at gmail.com (Luke)
Date: Tue, 5 Jun 2007 19:19:26 -0700
Subject: [SciPy-user] scipy.odeint args question
Message-ID: <99214b470706051919y3a2099fy96075f22db02a5a@mail.gmail.com>

I'm trying to write a tool that needs to be able to integrate n-dimensional systems of 1st order differential equations. I need to be able to plug in various dynamical systems, along with their Jacobians, and any associated parameters that occur on the RHS of the ODE's and may need to be varied. My issue is with how scipy.odeint handles extra function arguments.

My differential equations are of the form:

    def f(x,t,param):
        ...
        return dxdt

    def J(x,t,param):
        ...
        return dfdx

The param argument would be a rank 1 array (or list) that gets used in the function definitions of the RHS. For example, the Lorenz equations have three parameters, sigma, r, and b, but other systems have other numbers of parameters, so it makes sense to just pass this as a vector (or list). If it is not done in this fashion, then systems with different numbers of parameters have to be hard-coded... really annoying.
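To make that concrete, here is the kind of thing I mean, using the Lorenz equations (just a sketch; the variable names are mine):

    # Lorenz equations with param = [sigma, r, b] -- the pattern I'm after
    def f(x, t, param):
        sigma, r, b = param
        return [sigma * (x[1] - x[0]),
                r * x[0] - x[1] - x[0] * x[2],
                x[0] * x[1] - b * x[2]]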
I would like to be able to call scipy.odeint something like:

    y = scipy.odeint(f, x0, t, args = param, Dfun = J)
or
    y = scipy.odeint(f, x0, t, args = (param), Dfun = J)

This is where I can't get things to work -- odeint needs a tuple for the args argument, and I can't figure out how to make it work. The following works, but is way too restrictive for what I need, because I want to be able to make my code modular enough to be able to integrate *any* dynamical system that has an arbitrary number of parameters. Here is what works, for a system with three parameters, a, b, and c:

    def f(x,t,a,b,c):
        ...
        return dxdt

    def J(x,t,a,b,c):
        ...
        return dfdx

    y = scipy.odeint(f, x0, t, args = (a,b,c), Dfun = J)

This above works fine, but again, if you need to evaluate f anywhere, then you have to know how many parameters it takes and call the function in a fashion that explicitly lays out how each parameter gets passed.

Am I overlooking something really simple here that would make this work? I know in matlab's ode45 you can just pass scalars or arrays or matrices of additional parameters and it doesn't really matter -- you just pass them through.

Thanks,
~Luke

From wweckesser at mail.colgate.edu Tue Jun 5 22:28:08 2007
From: wweckesser at mail.colgate.edu (Warren Weckesser)
Date: Tue, 05 Jun 2007 22:28:08 -0400
Subject: [SciPy-user] scipy.odeint args question
In-Reply-To: <99214b470706051919y3a2099fy96075f22db02a5a@mail.gmail.com>
References: <99214b470706051919y3a2099fy96075f22db02a5a@mail.gmail.com>
Message-ID: <1181096888.6934.3.camel@localhost.localdomain>

Pass the list "param" to odeint like this:

    y = scipy.odeint(f, x0, t, args = (param,), Dfun = J)

Note the extra comma: (param) is just param with parentheses around it, while (param,) is a one-element tuple. Then, for example, f might start like this:

    def f(x,t,param):
        a = param[0]
        b = param[1]
        c = param[2]
        ...
        return dxdt

This works for me.
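For completeness, here is the whole pattern in one self-contained script (the ODE and the numbers are made up, just to show the wiring):

    import numpy as np
    from scipy.integrate import odeint

    def f(x, t, param):
        # damped linear oscillator; param = [omega, zeta] (made-up example)
        omega, zeta = param
        return [x[1], -omega**2 * x[0] - 2.0 * zeta * omega * x[1]]

    param = [2.0, 0.1]
    t = np.linspace(0.0, 10.0, 101)
    y = odeint(f, [1.0, 0.0], t, args=(param,))  # note the one-element tuple
    print(y[-1])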
Warren Weckesser

From elcorto at gmx.net Wed Jun 6 03:02:52 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Wed, 06 Jun 2007 09:02:52 +0200
Subject: [SciPy-user] scipy.odeint args question
In-Reply-To: <1181096888.6934.3.camel@localhost.localdomain>
References: <99214b470706051919y3a2099fy96075f22db02a5a@mail.gmail.com> <1181096888.6934.3.camel@localhost.localdomain>
Message-ID: <46665C1C.7040102@gmx.net>

Warren Weckesser wrote:
> Pass the list "param" to odeint like this:
>
> y = scipy.odeint(f, x0, t, args = (param,), Dfun = J)
>
> Note the extra comma.
>
> Then, for example, f might start like this:
>
> def f(x,t,param):
>     a = param[0]
>     b = param[1]
>     c = param[2]
>     ...
>     return dxdt
>

See also http://docs.python.org/tut/node7.html#SECTION007300000000000000000

--
cheers,
steve

I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams

From robert.vergnes at yahoo.fr Wed Jun 6 06:40:26 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Wed, 6 Jun 2007 12:40:26 +0200 (CEST)
Subject: [SciPy-user] weight of matrix or other object
Message-ID: <359204.72242.qm@web27408.mail.ukl.yahoo.com>

Hello,

I have to stop calculation when my data variable reaches a certain size (weight). How can I know the weight (in Kb or Mb) of an object - either a list of matrices or a matrix itself?

Best Regards

Robert

From lxander.m at gmail.com Wed Jun 6 08:21:12 2007
From: lxander.m at gmail.com (Alexander Michael)
Date: Wed, 6 Jun 2007 08:21:12 -0400
Subject: [SciPy-user] weight of matrix or other object
In-Reply-To: <359204.72242.qm@web27408.mail.ukl.yahoo.com>
References: <359204.72242.qm@web27408.mail.ukl.yahoo.com>
Message-ID: <525f23e80706060521s3cd41f36ub716ae965dddae0d@mail.gmail.com>

On 6/6/07, Robert VERGNES wrote:
> Hello,
> I have to stop calculation when my data variable reaches a certain size (weight).
> How can I know the weight (in Kb or Mb) of an object - either a list of
> matrices or a matrix itself?
>
> Best Regards
>
> Robert

Are you asking for the number of bytes of memory allocated for a numpy array?

In [1]: import numpy
In [2]: a = numpy.ones((2,2))
In [3]: a.nbytes
Out[3]: 32
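If the container mixes arrays with other things (as a list of matrices would), you could total it up along these lines -- a rough sketch that only counts the array buffers, not the Python object overhead:

    import numpy

    def arrays_nbytes(seq):
        # sum the buffer sizes of the numpy arrays in a container,
        # skipping strings and other non-array items (rough sketch)
        return sum(x.nbytes for x in seq if isinstance(x, numpy.ndarray))

    stack = [numpy.ones((2, 2)), "a label", numpy.zeros(10)]
    print(arrays_nbytes(stack))  # 32 + 80 = 112 bytes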
From eugen.wintersberger at jku.at Wed Jun 6 08:27:38 2007
From: eugen.wintersberger at jku.at (Eugen Wintersberger)
Date: Wed, 06 Jun 2007 14:27:38 +0200
Subject: [SciPy-user] scipy.integrate and threading
Message-ID: <1181132858.17446.4.camel@wheeler.hlphys.uni-linz.ac.at>

Hi there

I have just started using threads in python and want to use them together with the integration routines in scipy. However, there seems to be a serious problem with thread safety. I want to run two threads which perform integration simultaneously. However, when I do so the program exits with a Segmentation Fault. If I use only one thread the program runs without problems. Is there any simple possibility to use scipy in threads?

Thanks
Eugen

From robert.vergnes at yahoo.fr Wed Jun 6 09:03:28 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Wed, 6 Jun 2007 15:03:28 +0200 (CEST)
Subject: [SciPy-user] RE : Re: weight of matrix or other object
In-Reply-To: <525f23e80706060521s3cd41f36ub716ae965dddae0d@mail.gmail.com>
Message-ID: <66880.93209.qm@web27405.mail.ukl.yahoo.com>

Yes, it helps. The problem is that I have a list with some numarray inside, but also some strings and some other things. (This list is an incoming stack which I have to clean when it reaches a certain size.) I thought that there would be an 'nbytes' function attached to any python data object?

From emanuelez at gmail.com Wed Jun 6 10:13:06 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Wed, 6 Jun 2007 16:13:06 +0200
Subject: [SciPy-user] indices, lists and arrays
Message-ID: 

i have something like this:

    sizes = ndimage.sum(bl_img, labels=bl_l, index=range(1,bl_n+1))
    sizes = array(sizes)
    bl_obj_indices = where(sizes<21)
    bl_l[bl_objects[bl_obj_indices]] = 0

sizes was a list, but i converted it to an array in order to use the function where on it. where returns a tuple of arrays, something like (array([14, 17]),). bl_objects is the output of ndimage.find_objects and is a list. this means that the assignment on the last row of the proposed code does not work. is there an elegant solution to solve the problem? list(bl_obj_indices) returns [array([14, 17])], so it does not do the trick. i would need something like [14, 17]. any hint?

Emanuele

From faltet at carabos.com Wed Jun 6 11:34:14 2007
From: faltet at carabos.com (Francesc Altet)
Date: Wed, 06 Jun 2007 17:34:14 +0200
Subject: [SciPy-user] indices, lists and arrays
In-Reply-To: 
References: 
Message-ID: <1181144055.3560.25.camel@localhost>

On Wed, 2007-06-06 at 16:13 +0200, Emanuele Zattin wrote:
> i have something like this:
>
> sizes = ndimage.sum(bl_img, labels=bl_l, index=range(1,bl_n+1))
> sizes = array(sizes)
> bl_obj_indices = where(sizes<21)
> bl_l[bl_objects[bl_obj_indices]] = 0
>
> sizes was a list, but i converted it to an array in order to use the
> function where on it
> where returns a tuple of arrays, something like (array([14, 17]),)
> bl_objects is the output of ndimage.find_objects and is a list. this
> means that the assignment on the last row of the proposed code does
> not work.
> is there an elegant solution to solve the problem?
In order to use fancy indexing, you always need an array as the base, so, the next should do the trick: bl_l[array(bl_objects)[bl_obj_indices]] = 0 or, for short: bl_l[array(bl_objects)[sizes<21]] = 0 HTH, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From emanuelez at gmail.com Wed Jun 6 11:46:04 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Wed, 6 Jun 2007 17:46:04 +0200 Subject: [SciPy-user] indices, lists and arrays In-Reply-To: <1181144055.3560.25.camel@localhost> References: <1181144055.3560.25.camel@localhost> Message-ID: On 6/6/07, Francesc Altet wrote: > > El dc 06 de 06 del 2007 a les 16:13 +0200, en/na Emanuele Zattin va > escriure: > > i have something like this: > > > > sizes = ndimage.sum(bl_img, labels=bl_l, index=range(1,bl_n+1)) > > sizes = array(sizes) > > bl_obj_indices = where(sizes<21) > > bl_l[bl_objects[bl_obj_indices]] = 0 > > > > sizes was a list, but i converted it to an array in order to use the > > function where on it > > where returns an array of arrays, something like (array([14, 17]),) > > bl_objects is the output of ndimage.find_objects and is a list. this > > means that the assignment on the last row of the proposed code does > > not work. > > is there an elegant solution to solve the problem? > > In order to use fancy indexing, you always need an array as the base, > so, the next should do the trick: > > bl_l[array(bl_objects)[bl_obj_indices]] = 0 > > or, for short: > > bl_l[array(bl_objects)[sizes<21]] = 0 Stupid me, i forgot to mention that i actually converted bl_objects to array before that... but still that does not work. it compains that arrays used as indices should be integers... but i can see that where returns a tuple... mmm -------------- next part -------------- An HTML attachment was scrubbed... URL: From faltet at carabos.com Wed Jun 6 11:56:26 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 06 Jun 2007 17:56:26 +0200 Subject: [SciPy-user] indices, lists and arrays In-Reply-To: References: <1181144055.3560.25.camel@localhost> Message-ID: <1181145386.3560.29.camel@localhost> El dc 06 de 06 del 2007 a les 17:46 +0200, en/na Emanuele Zattin va escriure: > > > On 6/6/07, Francesc Altet wrote: > El dc 06 de 06 del 2007 a les 16:13 +0200, en/na Emanuele > Zattin va > escriure: > > i have something like this: > > > > sizes = ndimage.sum(bl_img, labels=bl_l, index=range(1,bl_n > +1)) > > sizes = array(sizes) > > bl_obj_indices = where(sizes<21) > > bl_l[bl_objects[bl_obj_indices]] = 0 > > > > sizes was a list, but i converted it to an array in order to > use the > > function where on it > > where returns an array of arrays, something like (array([14, > 17]),) > > bl_objects is the output of ndimage.find_objects and is a > list. this > > means that the assignment on the last row of the proposed > code does > > not work. > > is there an elegant solution to solve the problem? > > In order to use fancy indexing, you always need an array as > the base, > so, the next should do the trick: > > bl_l[array(bl_objects)[bl_obj_indices]] = 0 > > or, for short: > > bl_l[array(bl_objects)[sizes<21]] = 0 > > Stupid me, i forgot to mention that i actually converted bl_objects to > array before that... but still that does not work. it compains that > arrays used as indices should be integers... but i can see that where > returns a tuple... mmm That's strange. 
Fancy indexing seems to happily accept tuples as well: In [1]:import numpy In [2]:a=numpy.arange(10) In [3]:a[a>3] Out[3]:array([4, 5, 6, 7, 8, 9]) In [4]:a[numpy.where(a>3)] Out[4]:array([4, 5, 6, 7, 8, 9]) In [5]:numpy.where(a>3) Out[5]:(array([4, 5, 6, 7, 8, 9]),) Perhaps your bl_objects is multidimensional? But even in this case, this should work fine: In [19]:a=numpy.arange(10).reshape(2,5) In [20]:a Out[20]: array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) In [21]:a[a>3] Out[21]:array([4, 5, 6, 7, 8, 9]) In [22]:a[numpy.where(a>3)] Out[22]:array([4, 5, 6, 7, 8, 9]) Which version of NumPy are you using? -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From s.mientki at ru.nl Thu Jun 7 17:19:57 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 07 Jun 2007 23:19:57 +0200 Subject: [SciPy-user] How to solve name mangling ? Message-ID: <4668767D.40208@ru.nl> hello, the beauty of Python is that you can "rename" everything . In the languages I've been using up to now, an integer is an integer and stays an integer forever (has it's beauty too). The modules I write for myself, I always start with the "dangerous": form scipy import * Now if I take modules form others, or take code snippets (with forgetting to use the scipy import), I get different type of arrays, with all kind of weird behavior. The module at it's own works perfect, but if I call the module from another program, it doesn't work (as expected) anymore. Any clever solution to solve this without thinking about those tiny details ? Is using form scipy import * as the last global import in each module, a solution ? thanks, Stef Mientki From stefan at sun.ac.za Thu Jun 7 17:56:58 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 7 Jun 2007 23:56:58 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <4668767D.40208@ru.nl> References: <4668767D.40208@ru.nl> Message-ID: <20070607215657.GQ7609@mentat.za.net> Hi Stef On Thu, Jun 07, 2007 at 11:19:57PM +0200, Stef Mientki wrote: > the beauty of Python is that you can "rename" everything . > In the languages I've been using up to now, > an integer is an integer and stays an integer forever (has it's beauty too). > > The modules I write for myself, > I always start with the "dangerous": > form scipy import * "Namespaces are one honking great idea -- let's do more of those!" As you found out the hard way, not using namespaces effectively leads to problems. The "dangerous" shouldn't be in quotation marks! Why not use import scipy as S Thereby you protect yourself from other modules overwriting your method and variables, without having to do much extra typing. Cheers St?fan From gnurser at googlemail.com Thu Jun 7 18:07:11 2007 From: gnurser at googlemail.com (George Nurser) Date: Thu, 7 Jun 2007 23:07:11 +0100 Subject: [SciPy-user] new problem with f2py --fcompiler=intelem no longer works. Message-ID: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com> After updating to the latest version of numpy in SVN, rev 3841, f2py --fcompiler=intelem seems to have stopped working for me [however it stil works for the deafulat compiler,i.e. leaving out --fcompiler=intelem ] e.g. 
f2py --fcompiler=intelem -c -m J Jackett_et_al.f90 Traceback (most recent call last): File "/noc/users/agn/bin/f2py", line 26, in main() File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 552, in main run_compile() File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 440, in run_compile allowed_keys = fcompiler.fcompiler_class.keys() AttributeError: 'NoneType' object has no attribute 'keys' The first problem seems to be that numpy.distutils.fcompiler.load_all_fcompiler_classes() hasn't been called yet, so fcompiler.fcompiler_class is empty. If I run it in ipython, first calling numpy.distutils.fcompiler.load_all_fcompiler_classes() then doing %pdb on %run -i ~/bin/f2py --fcompiler=intelem -c -m J Jackett_et_al.f90 the program dies in ccompiler.py: .... 180 ctype = fcompiler.compiler_type --> 181 if fcompiler and fcompiler.get_version(): 182 fcompiler.customize(self.distribution) 183 fcompiler.customize_cmd(self) /noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/distutils/ccompiler.py in CCompiler_get_version(self, force, ok_status) 263 if not version_cmd or not version_cmd[0]: 264 return None --> 265 cmd = ' '.join(version_cmd) 266 try: 267 matcher = self.version_match : sequence item 1: expected string, NoneType found > /noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/distutils/ccompiler.py(265)CCompiler_get_version() 264 return None --> 265 cmd = ' '.join(version_cmd) ipdb> print version_cmd ['/data/ncs/packages4/linux/intel_compilers/v9.1/em64t/fc/9.1.036/bin/ifort', None] --George Nurser. From elcorto at gmx.net Thu Jun 7 18:40:03 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 08 Jun 2007 00:40:03 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <20070607215657.GQ7609@mentat.za.net> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> Message-ID: <46688943.6060506@gmx.net> Stefan van der Walt wrote: > Hi Stef > > On Thu, Jun 07, 2007 at 11:19:57PM +0200, Stef Mientki wrote: >> the beauty of Python is that you can "rename" everything . >> In the languages I've been using up to now, >> an integer is an integer and stays an integer forever (has it's beauty too). Agreed. Sometimes this can be useful (read: makes things more explict, but also more static). You have to pay attention to stuff like that, but with a #comment here and there, it's all OK. The power and flexibilty of Python/numpy/scipy weighs much more than this. >> >> The modules I write for myself, >> I always start with the "dangerous": >> form scipy import * > > "Namespaces are one honking great idea -- let's do more of those!" As > you found out the hard way, not using namespaces effectively leads to > problems. The "dangerous" shouldn't be in quotation marks! > > Why not use > > import scipy as S > > Thereby you protect yourself from other modules overwriting your > method and variables, without having to do much extra typing. > Beeing as explict as possible (e.g. scipy.integrate.odeint(...)) is a good thing. If you can be explicit, be it :) -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From pearu at cens.ioc.ee Fri Jun 8 03:35:04 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 08 Jun 2007 09:35:04 +0200 Subject: [SciPy-user] new problem with f2py --fcompiler=intelem no longer works. 
In-Reply-To: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com>
References: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com>
Message-ID: <466906A8.9000509@cens.ioc.ee>

Hi,

Try again svn update. The problem was that the version command flags were not updated properly after merging David's branch.

Pearu

From gnurser at googlemail.com Fri Jun 8 04:29:04 2007
From: gnurser at googlemail.com (George Nurser)
Date: Fri, 8 Jun 2007 09:29:04 +0100
Subject: [SciPy-user] new problem with f2py --fcompiler=intelem no longer works.
In-Reply-To: <466906A8.9000509@cens.ioc.ee>
References: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com> <466906A8.9000509@cens.ioc.ee>
Message-ID: <1d1e6ea70706080129v3425fc14p780f862ab172b0c0@mail.gmail.com>

On 08/06/07, Pearu Peterson wrote:
> Hi,
>
> Try again svn update. The problem was that the version
> command flags were not updated properly after merging David's branch.
>
> Pearu

Hi,

Thanks for looking at it so quickly. But it still fails with the same error, I'm afraid.

George.
Traceback (most recent call last): File "/noc/users/agn/bin/f2py", line 26, in main() File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 552, in main run_compile() File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 440, in run_compile allowed_keys = fcompiler.fcompiler_class.keys() AttributeError: 'NoneType' object has no attribute 'keys' From S.Mientki at ru.nl Fri Jun 8 05:20:50 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 08 Jun 2007 11:20:50 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46688943.6060506@gmx.net> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> Message-ID: <46691F72.5050203@ru.nl> Steve Schmerler wrote: > Stefan van der Walt wrote: > >> Hi Stef >> >> On Thu, Jun 07, 2007 at 11:19:57PM +0200, Stef Mientki wrote: >> >>> the beauty of Python is that you can "rename" everything . >>> In the languages I've been using up to now, >>> an integer is an integer and stays an integer forever (has it's beauty too). >>> > > Agreed. Sometimes this can be useful (read: makes things more explict, but also more > static). You have to pay attention to stuff like that, but with a #comment here and > there, it's all OK. The power and flexibilty of Python/numpy/scipy weighs much more > than this. > > >>> The modules I write for myself, >>> I always start with the "dangerous": >>> form scipy import * >>> >> "Namespaces are one honking great idea -- let's do more of those!" As >> you found out the hard way, not using namespaces effectively leads to >> problems. The "dangerous" shouldn't be in quotation marks! >> >> Why not use >> >> import scipy as S >> >> Thereby you protect yourself from other modules overwriting your >> method and variables, without having to do much extra typing. >> >> > > Beeing as explict as possible (e.g. scipy.integrate.odeint(...)) is a good thing. > If you can be explicit, be it :) > Steve and Stefan (and my name is Stef and was Stephan ;-), From a pure programmers point of view you might be fully right. btw I don't want to start a flame war, but from a non-programmers point of view (which is a growing group, me falling somewhere in between), reading the philosophy behind Python, I come to a totally different conclusion From what I remember some philosophical highlights are: Python should be intuitively, simple, universal and allow many solutions for the same problem. To drive my car, - I don't need to know the type of the spark-plug - I don't need to know how many spark-plugs my car has - I don't need to know if my car has any spark-plugs - I might even have never heard of a spark-plug To use Python, - I need to know what an numpy array is - I need to know what a numeric array is - I need to know what an array array is - I need to know what a scipy array is - and maybe a few others ... But I just want to use an array, and I just want that the array always to behave the same, even if I give the array to someone else. Ok, a car has a much longer history than programming languages. And "batteries included" is both the strong point and the weak point of a language like Python. 
From the viewpoint of the non-programmer, who just wants to drive their car (without wanting to have a look inside the car), there is still no good answer :-(

cheers, Stef Mientki

Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629

From david at ar.media.kyoto-u.ac.jp Fri Jun 8 05:47:28 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 08 Jun 2007 18:47:28 +0900 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46691F72.5050203@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> Message-ID: <466925B0.8090300@ar.media.kyoto-u.ac.jp>

Stef Mientki wrote:
>
> Steve and Stefan (and my name is Stef and was Stephan ;-),
>
> From a pure programmer's point of view you might be fully right.
> Btw, I don't want to start a flame war, but from a non-programmer's
> point of view (which is a growing group, me falling somewhere in
> between), reading the philosophy behind Python, I come to a totally
> different conclusion.
>
> From what I remember, some philosophical highlights are: Python should
> be intuitive, simple, universal and allow many solutions for the same
> problem.

Python philosophy generally highlights one good solution for a given problem...

> To drive my car,
> - I don't need to know the type of the spark-plug
> - I don't need to know how many spark-plugs my car has
> - I don't need to know if my car has any spark-plugs
> - I might even have never heard of a spark-plug
>
> To use Python,
> - I need to know what a numpy array is
> - I need to know what a numeric array is
> - I need to know what an array array is
> - I need to know what a scipy array is
> - and maybe a few others ...

I've never used numeric or "array array", and there is no such thing as a scipy array... I find the analogy with a car totally bogus because, in most countries anyway, you need to get a permit to drive, and this takes more time than typing N.foo many times instead of foo (at least it did for me). And anyway, driving a car without paying attention to what you are doing is the cause of many deaths every year :)

Namespaces are one of the top reasons why I started using python instead of matlab (a bit behind the not-broken C API reason). There are so many weird things happening in matlab because of the lack of namespaces (which only becomes worse because foo(1) can be the first element of foo or the function foo called with 1, and because of the stupid limit of one public function per m-file). This may sound minor to you, N.sum vs sum, but it has *major* consequences for the whole codebase.

Also, being explicit is easier than being implicit, not the contrary. For example, what do you think is easier? Typing numpy.sum(a, 1), or tracking down a bug because sum(a, 1) gives you a totally bogus result (python's sum and numpy's sum are different)? And it happens (one recent bug in scipy was caused by exactly that).
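To make that pitfall concrete, a small sketch (any 2x2 array will do):

import numpy

a = numpy.array([[1, 2],
                 [3, 4]])

# The builtin sum(a, 1) iterates over the rows of a and adds them
# elementwise, starting from the value 1; almost certainly not what
# was intended.
print sum(a, 1)        # -> [5 7]  (column sums, plus one)

# numpy.sum(a, 1) sums along axis 1, i.e. gives the row sums.
print numpy.sum(a, 1)  # -> [3 7]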
> But I just want to use an array, and I just want the array to always
> behave the same, even if I give the array to someone else.
>
> Ok, a car has a much longer history than programming languages. And
> "batteries included" is both the strong point and the weak point of a
> language like Python.
>
> From the viewpoint of the non-programmer, who just wants to drive their
> car (without wanting to have a look inside the car), there is still no
> good answer :-(

Yes there is: call N.sum instead of sum in your module. Look at it that way: how many N. do you need to write, and how many characters have you written instead in this email?

David

From S.Mientki at ru.nl Fri Jun 8 06:10:41 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 08 Jun 2007 12:10:41 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <466925B0.8090300@ar.media.kyoto-u.ac.jp> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> Message-ID: <46692B21.5010506@ru.nl>

> Yes there is: call N.sum instead of sum in your module. Look at it that
> way: how many N. do you need to write, and how many characters have you
> written instead in this email?

and now I forget just 1 N, and the program seems to work correctly ...

cheers, Stef

Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629

From matthew.brett at gmail.com Fri Jun 8 06:19:16 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 8 Jun 2007 11:19:16 +0100 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46692B21.5010506@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> Message-ID: <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com>

Hi,

> and now I forget just 1 N, and the program seems to work correctly ...

I think what we're all saying is:

If you start in matlab - for example - it seems like a good idea to do:

from numpy import *

That's how I started.

Gradually, writing code, you begin to realize that it is just much, much better - for clarity, and for namespace safety - to do

import numpy as N

Of course, if you do that, if you forget an N., you get an error (a NameError) straight away. And that is the right way to solve your 'what is an array' problem: it's an N.array.

Best, Matthew

From david at ar.media.kyoto-u.ac.jp Fri Jun 8 06:17:22 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 08 Jun 2007 19:17:22 +0900 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46692B21.5010506@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> Message-ID: <46692CB2.2090409@ar.media.kyoto-u.ac.jp>

Stef Mientki wrote:
>> Yes there is: call N.sum instead of sum in your module. Look at it that
>> way: how many N. do you need to write, and how many characters have you
>> written instead in this email?
>
> and now I forget just 1 N, and the program seems to work correctly ...

Well, you have not given any code example yet, so it is kind of difficult to help you... Following your car analogy, saying that it seems to work is like saying that driving without looking at the road works because you did it once in your backyard. It is easier, and it may work sometimes; still, would you do it?

cheers,

David

From elcorto at gmx.net Fri Jun 8 07:45:15 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 08 Jun 2007 13:45:15 +0200 Subject: [SciPy-user] How to solve name mangling ?
In-Reply-To: <46691F72.5050203@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> Message-ID: <4669414B.8000901@gmx.net>

Stef Mientki wrote:
> From the viewpoint of the non-programmer, who just wants to drive their
> car (without wanting to have a look inside the car), there is still no
> good answer :-(

If you use a programming language to solve problems, you are programming, so you are a programmer in my view. If you don't want to do that (consider yourself a non-programmer), you have to use "tools" like LabView. I agree that there is a whole lot of documentation out there for scipy&friends, which can be somewhat overwhelming at the beginning. That's why there are lists like this one, where you can get the right directions.

If you want to drive the car, you have to learn driving first. If you want to be a good driver, you have to practise a bit more and also know the internals of your car in some detail. If you are "just a driver", you will never have the driving skills to accomplish something outstanding (like winning a race). If you use a language (Python, C, whatever) and libraries for that language to accomplish something more complex (e.g. a project with more than one source file), you have to read documentation, experiment, get used to it, make mistakes, ask people for help. That's how things work. Unfortunately, there is no such thing as a solve-my-problem button :)

OK, enough of that. Just start using the namespace thing properly and you'll see that it will pay off for you in the long run. Happy coding :)

-- cheers, steve

Random number generation is the art of producing pure gibberish as quickly as possible.

From S.Mientki at ru.nl Fri Jun 8 08:06:12 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 08 Jun 2007 14:06:12 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> Message-ID: <46694634.80201@ru.nl>

Matthew Brett wrote:
> Hi,
>
>> and now I forget just 1 N, and the program seems to work correctly ...
>
> I think what we're all saying is:
>
> If you start in matlab - for example - it seems like a good idea to do:
>
> from numpy import *
>
> That's how I started.
>
> Gradually, writing code, you begin to realize that it is just much,
> much better - for clarity, and for namespace safety - to do
>
> import numpy as N
>
> Of course, if you do that, if you forget an N., you get an error (a
> NameError) straight away.

well I don't ;-) But that might have to do with my lousy organization.

Here's what I have right now: from a friend I got a library, with the message, add this line at the top of your code, and you can use everything in the library:

from my_friends_module import *

In my friend's module there is the following code:

from array import array

Now I get my N-less "array". So I probably shouldn't have followed my friend's advice, and I should just have written

import my_friends_module

Another solution could be, but I don't know if it's safe enough:

from my_friends_module import *
# and always as the last import do my own
from scipy import array

Is this a good solution ?
thanks, Stef Mientki

Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629

From matthieu.brucher at gmail.com Fri Jun 8 08:23:26 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 8 Jun 2007 14:23:26 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46694634.80201@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> <46694634.80201@ru.nl> Message-ID:

> Now I get my N-less "array". So I probably shouldn't have followed my
> friend's advice, and I should just have written
>
> import my_friends_module

This is the correct way of doing things.

> Another solution could be, but I don't know if it's safe enough:
>
> from my_friends_module import *
> # and always as the last import do my own
> from scipy import array
>
> Is this a good solution ?

No, because when you want to use array, which array is it really? What is more, there can be side effects in your friend's module; there shouldn't be, but how can you be sure?

Matthieu

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From S.Mientki at ru.nl Fri Jun 8 08:57:06 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 08 Jun 2007 14:57:06 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> <46694634.80201@ru.nl> Message-ID: <46695222.5060606@ru.nl>

> > Another solution could be, but I don't know if it's safe enough:
> >
> > from my_friends_module import *
> > # and always as the last import do my own
> > from scipy import array
> >
> > Is this a good solution ?
>
> No, because when you want to use array, which array is it really?

Sorry, I must be missing something here: doesn't "from scipy import array" (as the LAST line) override any previous definition of "array" ??

thanks, Stef

Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629

From rmay at ou.edu Fri Jun 8 09:08:07 2007 From: rmay at ou.edu (Ryan May) Date: Fri, 08 Jun 2007 08:08:07 -0500 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46694634.80201@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> <46694634.80201@ru.nl> Message-ID: <466954B7.7050003@ou.edu>

Stef Mientki wrote:
>
> well I don't ;-) But that might have to do with my lousy organization.
>
> Here's what I have right now: from a friend I got a library, with the
> message, add this line at the top of your code, and you can use
> everything in the library:
>
> from my_friends_module import *
>
> In my friend's module there is the following code:
>
> from array import array
>
> Now I get my N-less "array".
> So I probably shouldn't have followed my friend's advice, and I should
> just have written
>
> import my_friends_module
>
> Another solution could be, but I don't know if it's safe enough:
>
> from my_friends_module import *
> # and always as the last import do my own
> from scipy import array

Why not this:

from my_friends_module import foo, bar
from scipy import array

which (if the APIs are compatible) would allow a drop-in replacement later of:

from my_friends_module2 import foo, bar
from numpy import array

with no other code changes. I'm assuming we're talking about long-term maintained code, and not just a quick script. For that case I do the:

import numpy as N
import pylab as P

Ryan

-- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma

From matthew.brett at gmail.com Fri Jun 8 09:27:58 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 8 Jun 2007 14:27:58 +0100 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46694634.80201@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> <46694634.80201@ru.nl> Message-ID: <1e2af89e0706080627v64b76604p404cd656cb90489b@mail.gmail.com>

Hi,

> > Of course, if you do that, if you forget an N., you get an error (a
> > NameError) straight away.
>
> well I don't ;-) But that might have to do with my lousy organization.

No, it's not lousy organization, honestly. As you can see, your friend, and you, and me when I started, all did this - because it's less typing, and because it is what we are used to. Learning not to do this is something, like many good coding practices, that is not at all obvious at first, and seems inconvenient, but as you get used to doing it, you realize that it's The Right Way To Do It (TM). Explaining why is hard - it's really something you've got to learn for yourself - as a result of the kind of problems you've had here, and just by trying to do it that way, and finding that it works well.

> from my_friends_module import *
> from array import array

If you use a lot of classes and so on from this module, you might do:

import numpy as N
import my_friends_module as MF
from my_friends_module import array as MFA

Then you'd be doing:

a = N.array([1,2])
b = MFA([1,2])

and it will probably be more obvious what's going on, and therefore less error prone...

Best, Matthew

From pearu at cens.ioc.ee Fri Jun 8 10:15:48 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 08 Jun 2007 16:15:48 +0200 Subject: [SciPy-user] new problem with f2py --fcompiler=intelem no longer works. In-Reply-To: <1d1e6ea70706080129v3425fc14p780f862ab172b0c0@mail.gmail.com> References: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com> <466906A8.9000509@cens.ioc.ee> <1d1e6ea70706080129v3425fc14p780f862ab172b0c0@mail.gmail.com> Message-ID: <46696494.50000@cens.ioc.ee>

George Nurser wrote:
> Hi,
> Thanks for looking at it so quickly.
> But it still fails with the same error, I'm afraid.
>
> George.
> Traceback (most recent call last):
>   File "/noc/users/agn/bin/f2py", line 26, in <module>
>     main()
>   File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 552, in main
>     run_compile()
>   File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 440, in run_compile
>     allowed_keys = fcompiler.fcompiler_class.keys()
> AttributeError: 'NoneType' object has no attribute 'keys'

Ok, try again from numpy svn.

pearu

From giorgio.luciano at chimica.unige.it Fri Jun 8 08:06:12 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Fri, 08 Jun 2007 14:06:12 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <4669414B.8000901@gmx.net> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <4669414B.8000901@gmx.net> Message-ID: <46694634.2010706@chimica.unige.it>

I'm enjoying reading all those comments, also because sometimes metaphors are risky to use ;) (it seems like an episode of House M.D.), because I can reply: if you want to drive a car you need to have a license... ok, but if you want to drive a car you don't need to be a mechanic or a mechanical engineer and know how the engine works... otherwise I guess there would be far fewer car drivers .. ;)

Giorgio

From gnurser at googlemail.com Fri Jun 8 18:15:22 2007 From: gnurser at googlemail.com (George Nurser) Date: Fri, 8 Jun 2007 23:15:22 +0100 Subject: [SciPy-user] new problem with f2py --fcompiler=intelem no longer works. In-Reply-To: <46696494.50000@cens.ioc.ee> References: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com> <466906A8.9000509@cens.ioc.ee> <1d1e6ea70706080129v3425fc14p780f862ab172b0c0@mail.gmail.com> <46696494.50000@cens.ioc.ee> Message-ID: <1d1e6ea70706081515o114d673dh3a653e3259cb33f0@mail.gmail.com>

The first problem is now solved, thanks, but compilation still fails with the second error, with None in version_cmd.

--George.

ipython

In [1]: %pdb on
Automatic pdb calling has been turned ON

In [2]: run -i ~/bin/f2py --fcompiler=intelem -c -m J Wright.F
........
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
Found executable /data/ncs/packages4/linux/intel_compilers/v9.1/em64t/fc/9.1.036/bin/ifort
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

/noc/users/agn/bin/f2py in <module>()
     24     print >> sys.stderr, "Unknown mode:",`mode`
     25     sys.exit(1)
---> 26 main()
     27
     28

/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py in main()
    551         return
    552     if '-c' in sys.argv[1:]:
--> 553         run_compile()
    554     else:
    555         run_main(sys.argv[1:])

/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/f2py/f2py2e.py in run_compile()
    538     sys.argv.extend(['build_ext']+flib_flags)
    539
--> 540     setup(ext_modules = [ext])
    541
    542     if remove_build_dir and os.path.exists(build_dir):

/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/distutils/core.py in setup(**attr)
    174         new_attr['headers'] = []
    175
--> 176     return old_setup(**new_attr)
    177
    178 def _check_append_library(libraries, item):

/noc/users/agn/lib/python2.5/distutils/core.py in setup(**attrs)
    149     if ok:
    150         try:
--> 151             dist.run_commands()
    152         except KeyboardInterrupt:
    153             raise SystemExit, "interrupted"

/noc/users/agn/lib/python2.5/distutils/dist.py in run_commands(self)
    972         """
    973         for cmd in self.commands:
--> 974             self.run_command(cmd)
    975
    976

/noc/users/agn/lib/python2.5/distutils/dist.py in run_command(self, command)
    992         cmd_obj = self.get_command_obj(command)
    993         cmd_obj.ensure_finalized()
--> 994         cmd_obj.run()
    995         self.have_run[command] = 1
    996

/noc/users/agn/lib/python2.5/distutils/command/build.py in run(self)
    110         # - build_scripts - (Python) scripts
    111         for cmd_name in self.get_sub_commands():
--> 112             self.run_command(cmd_name)
    113
    114

/noc/users/agn/lib/python2.5/distutils/cmd.py in run_command(self, command)
    331         necessary and then invokes its 'run()' method.
    332         """
--> 333         self.distribution.run_command(command)
    334
    335

/noc/users/agn/lib/python2.5/distutils/dist.py in run_command(self, command)
    992         cmd_obj = self.get_command_obj(command)
    993         cmd_obj.ensure_finalized()
--> 994         cmd_obj.run()
    995         self.have_run[command] = 1
    996

/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py in run(self)
    179         if fcompiler:
    180             ctype = fcompiler.compiler_type
--> 181         if fcompiler and fcompiler.get_version():
    182             fcompiler.customize(self.distribution)
    183             fcompiler.customize_cmd(self)

/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/distutils/ccompiler.py in CCompiler_get_version(self, force, ok_status)
    263     if not version_cmd or not version_cmd[0]:
    264         return None
--> 265     cmd = ' '.join(version_cmd)
    266     try:
    267         matcher = self.version_match

TypeError: sequence item 1: expected string, NoneType found
> /noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/distutils/ccompiler.py(265)CCompiler_get_version()
    264         return None
--> 265     cmd = ' '.join(version_cmd)
    266     try:

ipdb> print version_cmd
['/data/ncs/packages4/linux/intel_compilers/v9.1/em64t/fc/9.1.036/bin/ifort',
 None]

From peridot.faceted at gmail.com Fri Jun 8 21:46:53 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 8 Jun 2007 21:46:53 -0400 Subject: [SciPy-user] scipy.integrate and threading In-Reply-To: <1181132858.17446.4.camel@wheeler.hlphys.uni-linz.ac.at> References: <1181132858.17446.4.camel@wheeler.hlphys.uni-linz.ac.at> Message-ID:

On 06/06/07, Eugen Wintersberger wrote:
> Hi there
> I have just started with using threads in python and want to use them
> together with the integration routines in scipy. However, there seems
> to be a serious problem with thread safety. I want to run two threads
> which perform integration simultaneously. However, when I do so the
> program exits with a Segmentation Fault. If I use only one thread the
> program runs without problems. Is there any simple possibility to use
> scipy in threads?

Much of scipy is based on publicly-available FORTRAN routines. This has the advantage that they tend to be numerically robust and efficient, but it has the disadvantage that they often have interfaces that are, ah, old-fashioned. Many of them have python wrappers that make them somewhat more comfortable to use from python, but thread-safety (let alone parallelism) has not always been a goal. So sometimes, yes, some scipy routines just aren't thread-safe. It's not documented, either, as far as I can tell. You should also be aware of python's Global Interpreter Lock, which doesn't help with this and which means that you often don't get the parallelism you were hoping for.

How can you avoid these problems? Well, for the specific problem of integrating known functions, scipy includes (for example) scipy.integrate.quadrature, which is written in pure Python with a perfectly reasonable interface and should be perfectly thread-safe. (And as a bonus, for difficult integration problems it should do some significantly large vector operations, which release the GIL and allow other threads to run concurrently.) It's not a very smart integrator.

A more general solution to the problem would be to wrap your favourite non-thread-safe scipy routines in a simple routine that makes sure only one thread is in each at a time. You could even write a little decorator.
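For example, a minimal sketch (odeint is just an illustrative target; note this serializes calls rather than making the routine truly thread-safe):

import threading
from scipy.integrate import odeint

def serialized(func):
    # Wrap func so that only one thread at a time can be inside it.
    lock = threading.Lock()
    def wrapper(*args, **kwargs):
        lock.acquire()
        try:
            return func(*args, **kwargs)
        finally:
            lock.release()
    return wrapper

# Each wrapped routine gets its own lock:
safe_odeint = serialized(odeint)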
The ideal solution, of course, is to file bugs on the scipy trac when you find one that isn't thread-safe, so somebody can at least put a mutex on it inside scipy, and maybe, if possible, make it properly thread-safe. Or, hmm. Someone could write a little file full of unit tests that check various scipy routines for thread safety. That would accelerate the above process.

Anne

From david at ar.media.kyoto-u.ac.jp Fri Jun 8 23:40:07 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 09 Jun 2007 12:40:07 +0900 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46695222.5060606@ru.nl> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> <46694634.80201@ru.nl> <46695222.5060606@ru.nl> Message-ID: <466A2117.5000607@ar.media.kyoto-u.ac.jp>

Stef Mientki wrote:
>>
>> Another solution could be, but I don't know if it's safe enough:
>>
>> from my_friends_module import *
>> # and always as the last import do my own
>> from scipy import array
>>
>> Is this a good solution ?
>>
>> No, because when you want to use array, which array is it really?
> Sorry, I must be missing something here: doesn't
> "from scipy import array"
> (as the LAST line) override any previous definition of "array" ??

Yes it does, after from scipy import array, but you have only solved the problem for array; what about other name clashes that arise? I still don't understand what the problem is with doing import my_friends_module as MF and using the functions as MF.foo. Using from module import *, that is, importing everything from a module, totally defeats the purpose of namespaces (and this is not specific to python; C++ namespaces are exactly the same: doing from module import * everywhere is exactly the same as doing using namespace std, etc... everywhere in your header files, which is a horrible thing to do).

David

From peridot.faceted at gmail.com Sat Jun 9 03:01:29 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 9 Jun 2007 03:01:29 -0400 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <466954B7.7050003@ou.edu> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <466925B0.8090300@ar.media.kyoto-u.ac.jp> <46692B21.5010506@ru.nl> <1e2af89e0706080319h4b657cbt64c4e3fd694b9876@mail.gmail.com> <46694634.80201@ru.nl> <466954B7.7050003@ou.edu> Message-ID:

On 08/06/07, Ryan May wrote:
> with no other code changes. I'm assuming we're talking about long-term
> maintained code, and not just a quick script. For that case I do the:
>
> import numpy as N
> import pylab as P

I would take this a step further, and point out that for *interactive* code - i.e. typed at the ipython prompt - it's probably easier to do "from numpy import *".

I'd also like to reiterate that scipy does not provide a function "array". If you import scipy, for some strange reason, it sets scipy.array to numpy.array - that is, for some reason scipy provides copies of a large number of numpy functions. Not reimplementations, not accelerated versions, just exactly the same functions under different names. Why is this?

Anne M. Archibald

From fredmfp at gmail.com Sun Jun 10 18:32:31 2007 From: fredmfp at gmail.com (fred) Date: Mon, 11 Jun 2007 00:32:31 +0200 Subject: [SciPy-user] fread (from numpyio) and stderr redirection...
Message-ID: <466C7BFF.3070109@gmail.com>

Hi,

I want to test that data read from a file with fread has exactly the right dimensions.

- if fewer bytes are read than the required number of bytes, how can this be detected?
- if one tries to read more bytes than are available, fread prints a warning message that I don't want displayed. How can I avoid this?

I have attached a sample file that shows the three cases:

- data read from file has the right dimension: ok
- data read from file has 1 byte instead of 2: no error detected
- try to read 3 bytes from file instead of 2: warning message displayed

Any suggestion? Thanks in advance.

-- http://scipy.org/FredericPetit

-------------- next part -------------- A non-text attachment was scrubbed... Name: essai.py Type: text/x-python Size: 993 bytes Desc: not available URL:
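One way to get both checks, sidestepping numpyio's fread entirely: read the raw bytes yourself, verify the length, and only then convert. A minimal sketch (the filename, dtype and count below are made up for illustration):

import numpy

def read_exact(filename, dtype, count):
    # Read exactly `count` items of `dtype` from `filename`,
    # raising an error on short reads instead of printing a warning.
    nbytes = count * numpy.dtype(dtype).itemsize
    f = open(filename, 'rb')
    try:
        data = f.read(nbytes)
    finally:
        f.close()
    if len(data) != nbytes:
        raise IOError("expected %d bytes, got %d" % (nbytes, len(data)))
    return numpy.frombuffer(data, dtype=dtype)

# e.g. two int16 values from a hypothetical data file:
# values = read_exact('essai.dat', numpy.int16, 2)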
From rex at nosyntax.net Sun Jun 10 23:29:10 2007 From: rex at nosyntax.net (rex) Date: Sun, 10 Jun 2007 20:29:10 -0700 Subject: [SciPy-user] Building SciPy under SUSE 10.2 using Intel icc 10 and ifort 10 Message-ID: <20070611032910.GA10886@x2.nosyntax.com>

I've managed to build NumPy 1.0.3 from source using the Intel icc compiler and the Intel MKL, and now I'm trying for SciPy. Reading the archives looking for help I found this:

====================================================================
Daniel Nogradi nogradi at gmail.... Sat May 19 12:03:49 CDT 2007

> > I have the following installed on a Fedora 3 box: [...]
> > and am trying to do a fresh install of scipy-0.5.2 but "python
> > setup.py build_ext" is complaining about a missing f95 executable and
> > ld about missing dfftpack .....

> Pearu Peterson wrote:
> You can ignore messages about missing f95 executable, they are just
> parts of compiler detection tools.
>
> Libraries dfftpack and others are built with the build_clib command,
> so build_ext alone is not enough. Use
>
> python build
>
> to build scipy.

Thanks very much, indeed, build_clib, build_ext, build works (in this order).
======================================================================

LAPACK and BLAS built successfully with the Intel compilers (details on SciPy-dev) using modifications of Steve Baum's instructions at: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97

But every combination of options I can think of for the command-line build of SciPy (recent svn) fails with one of two error messages:

python setup.py build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel build >inst12
[...]
error: file 'dfftpack/*.f' does not exist

python setup.py build_clib --compiler=intel --fcompiler=intel
[...]
error: file 'dfftpack/*.f' does not exist

python setup.py build --compiler=intel --fcompiler=intel
[...]
error: '_fftpackmodule.c' missing

Any help appreciated, thanks,

-rex

From david at ar.media.kyoto-u.ac.jp Sun Jun 10 23:32:02 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 11 Jun 2007 12:32:02 +0900 Subject: [SciPy-user] How to generate random positive definite matrix Message-ID: <466CC232.30806@ar.media.kyoto-u.ac.jp>

Hi there,

I need to generate random positive definite matrices, mainly for testing purposes. Before, I was generating them using a random matrix A given by randn and computing A'A. Unfortunately, if A is singular, so is A'A. Is there a better way to do this than testing whether A is singular? They do not need to follow a specific distribution (but I would like to avoid having them follow a really special "pattern").

cheers,

David

From wbaxter at gmail.com Mon Jun 11 00:20:05 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 11 Jun 2007 13:20:05 +0900 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: <466CC232.30806@ar.media.kyoto-u.ac.jp> References: <466CC232.30806@ar.media.kyoto-u.ac.jp> Message-ID:

Maybe you can make creative use of the Gershgorin circle theorem: http://en.wikipedia.org/wiki/Gershgorin_circle_theorem. I know it can be used to make a sufficient-condition test for positive definiteness.

--bb

On 6/11/07, David Cournapeau wrote:
> Hi there,
>
> I need to generate random positive definite matrices, mainly for
> testing purposes. Before, I was generating them using a random matrix A
> given by randn and computing A'A. Unfortunately, if A is singular, so
> is A'A. Is there a better way to do this than testing whether A is
> singular? They do not need to follow a specific distribution (but I
> would like to avoid having them follow a really special "pattern").
>
> cheers,

From robert.kern at gmail.com Mon Jun 11 01:48:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Jun 2007 00:48:25 -0500 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: <466CC232.30806@ar.media.kyoto-u.ac.jp> References: <466CC232.30806@ar.media.kyoto-u.ac.jp> Message-ID: <466CE229.3050802@gmail.com>

David Cournapeau wrote:
> Hi there,
>
> I need to generate random positive definite matrices, mainly for
> testing purposes. Before, I was generating them using a random matrix A
> given by randn and computing A'A. Unfortunately, if A is singular, so
> is A'A. Is there a better way to do this than testing whether A is
> singular? They do not need to follow a specific distribution (but I
> would like to avoid having them follow a really special "pattern").

I would generate N random direction vectors (draw from a multivariate normal distribution with eye(N) as the covariance matrix and normalize the samples to be unit vectors). Resample any vector which happens to be nearly parallel to another (i.e. the dot product is within some eps of 1). Now, form a correlation matrix using the dot products of each of the unit vectors. Draw N random positive values from some positive distribution like log-normal or gamma. Multiply this vector on either side of the correlation matrix:

v * corr * v[:,newaxis]

You now have a random positive definite matrix which is even somewhat interpretable.
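A rough numpy sketch of that recipe (the resampling of nearly-parallel vectors is omitted for brevity, and the gamma parameters for the positive scales are an arbitrary choice):

import numpy
from numpy.random import randn, gamma

def random_posdef(n):
    # N random unit vectors, one per row.
    u = randn(n, n)
    u /= numpy.sqrt((u**2).sum(axis=1))[:, numpy.newaxis]
    # Gram matrix of the unit vectors: ones on the diagonal, dot
    # products off the diagonal, i.e. a correlation-like matrix.
    corr = numpy.dot(u, u.T)
    # Random positive scales multiplied on either side.
    v = gamma(2.0, 1.0, size=n)
    return v * corr * v[:, numpy.newaxis]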
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From peridot.faceted at gmail.com Mon Jun 11 03:01:20 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 11 Jun 2007 03:01:20 -0400 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: <466CC232.30806@ar.media.kyoto-u.ac.jp> References: <466CC232.30806@ar.media.kyoto-u.ac.jp> Message-ID:

On 10/06/07, David Cournapeau wrote:
> Hi there,
>
> I need to generate random positive definite matrices, mainly for
> testing purposes. Before, I was generating them using a random matrix A
> given by randn and computing A'A. Unfortunately, if A is singular, so
> is A'A. Is there a better way to do this than testing whether A is
> singular? They do not need to follow a specific distribution (but I
> would like to avoid having them follow a really special "pattern").

Every symmetric positive definite matrix (you didn't say symmetric, but I assume you want it...) can be diagonalized using an orthogonal matrix. That is, you can always factor it as O'DO, where D is diagonal and has positive elements on the diagonal. So to get a random one, do it backwards: pick a set of eigenvalues from some distribution, then for each pair of axes, rotate the matrix by a random angle in the plane they form (which you can do with a cos/sin/-sin/cos matrix), with a randomly-decided reflection. This gives a distribution that's invariant under orthonormal change of basis; your only nonuniformities come in the choice of eigenvalues, and those you will probably want to control in testing. Eigenvalues of wildly differing sizes tend to lead to big numerical headaches, so you can experiment with the size of headache you want to give your code. If you want them all roughly the same size you could take normal variates and square them, but for better control I'd go with exponentiating a normal distribution.

There's an alternative, probably faster, approach based on the uniqueness of the Cholesky square root of a matrix: just choose your eigenvalues, again, put their square roots on the diagonal, and randomly generate the upper triangle. Then compute A'A and you should be able to get an arbitrary (symmetric) positive definite matrix. Unless you are cleverer than I am about choosing the distributions of the entries, it won't be nicely direction-independent.

Now, a sort-of interesting question is: is there a natural distribution on the cone of positive definite matrices one could hope to draw from? Apart from direction invariance, the only other criterion I can think of to include is some sort of convexity condition: all the matrices on the line segment connecting two positive definite matrices are positive definite, so perhaps one could want the probability densities for all those matrices to be at least as high as for the endpoints? It's not clear to me that this is a sufficient (or appropriate) constraint on the PDF.

Anne
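A sketch of the first (rotation-based) construction above; reflections are omitted, and the exponentiated-normal eigenvalues and one random Givens rotation per pair of axes are just the choices described:

import numpy
from numpy.random import randn, uniform

def random_posdef_rotated(n):
    # Eigenvalues: exponentiated normals keep them positive and
    # roughly comparable in size.
    m = numpy.diag(numpy.exp(randn(n)))
    # Rotate by a random angle in the plane of each pair of axes.
    for i in range(n):
        for j in range(i + 1, n):
            theta = uniform(0, 2 * numpy.pi)
            c, s = numpy.cos(theta), numpy.sin(theta)
            r = numpy.eye(n)
            r[i, i] = c; r[i, j] = s
            r[j, i] = -s; r[j, j] = c
            # R'MR stays symmetric positive definite.
            m = numpy.dot(r.T, numpy.dot(m, r))
    return m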
From David.L.Goldsmith at noaa.gov Mon Jun 11 04:29:00 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Mon, 11 Jun 2007 01:29:00 -0700 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: <466CE229.3050802@gmail.com> References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466CE229.3050802@gmail.com> Message-ID: <466D07CC.2090107@noaa.gov>

Robert Kern wrote:
> David Cournapeau wrote:
>
>> Hi there,
>>
>> I need to generate random positive definite matrices, mainly for
>> testing purposes. Before, I was generating them using a random matrix A
>> given by randn and computing A'A. Unfortunately, if A is singular, so
>> is A'A. Is there a better way to do this than testing whether A is
>> singular? They do not need to follow a specific distribution (but I
>> would like to avoid having them follow a really special "pattern").
>
> I would generate N random direction vectors (draw from a multivariate
> normal distribution with eye(N) as the covariance matrix and normalize
> the samples to be unit vectors).

Hi, David. You say they don't need to follow a specific dist., but you also say you were using randn, which is perhaps why Robert suggests normally distributed random directions; but if you truly don't care, might I suggest simply N uniformly distributed reals, t, between 0 and 2*pi, the direction vectors then being simply (cos(t), sin(t)). Otherwise, "what he said." :-)

DG

> Resample any vector which happens to be nearly parallel to another
> (i.e. the dot product is within some eps of 1). Now, form a correlation
> matrix using the dot products of each of the unit vectors. Draw N random
> positive values from some positive distribution like log-normal or gamma.
> Multiply this vector on either side of the correlation matrix:
>
> v * corr * v[:,newaxis]
>
> You now have a random positive definite matrix which is even somewhat
> interpretable.

-- ERD/ORR/NOS/NOAA

From david at ar.media.kyoto-u.ac.jp Mon Jun 11 05:47:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 11 Jun 2007 18:47:14 +0900 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: References: <466CC232.30806@ar.media.kyoto-u.ac.jp> Message-ID: <466D1A22.40001@ar.media.kyoto-u.ac.jp>

Anne Archibald wrote:
> On 10/06/07, David Cournapeau wrote:
>> Hi there,
>>
>> I need to generate random positive definite matrices, mainly for
>> testing purposes. Before, I was generating them using a random matrix A
>> given by randn and computing A'A. Unfortunately, if A is singular, so
>> is A'A. Is there a better way to do this than testing whether A is
>> singular? They do not need to follow a specific distribution (but I
>> would like to avoid having them follow a really special "pattern").
>
> Every symmetric positive definite matrix (you didn't say symmetric, but
> I assume you want it...)

Yes, I need them as parameters of multivariate Gaussians, so they need to be real, not Hermitian.

> can be diagonalized using an orthogonal matrix. That is, you can always
> factor it as O'DO, where D is diagonal and has positive elements on the
> diagonal. So to get a random one, do it backwards: pick a set of
> eigenvalues from some distribution, then for each pair of axes, rotate
> the matrix by a random angle in the plane they form (which you can do
> with a cos/sin/-sin/cos matrix), with a randomly-decided reflection.
> This gives a distribution that's invariant under orthonormal change of
> basis; your only nonuniformities come in the choice of eigenvalues, and
> those you will probably want to control in testing. Eigenvalues of
> wildly differing sizes tend to lead to big numerical headaches, so you
> can experiment with the size of headache you want to give your code. If
> you want them all roughly the same size you could take normal variates
> and square them, but for better control I'd go with exponentiating a
> normal distribution.

That's a pretty good idea, actually. I totally forgot about the fact that the diagonalization vectors of a Hermitian matrix can be interpreted as a rotation (I really have to check my knowledge of those special matrices; I have got all rusty, and I studied linear algebra less than 5 years ago :) ).

> There's an alternative, probably faster, approach based on the
> uniqueness of the Cholesky square root of a matrix: just choose your
> eigenvalues, again, put their square roots on the diagonal, and
> randomly generate the upper triangle. Then compute A'A and you should
> be able to get an arbitrary (symmetric) positive definite matrix.
> Unless you are cleverer than I am about choosing the distributions of
> the entries, it won't be nicely direction-independent.

That's what I tried, but I found they are really "too special".

> Now, a sort-of interesting question is: is there a natural
> distribution on the cone of positive definite matrices one could hope
> to draw from?

Well, we could draw samples from a Wishart, but I was too lazy to implement it.
It is definitely "natural", especially for multivariate normals, but I need to improve my knowledge of multidimensional calculus before being able to understand them (and implement them, hopefully).

cheers,

David

From massimo.sandal at unibo.it Mon Jun 11 09:59:43 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 11 Jun 2007 15:59:43 +0200 Subject: [SciPy-user] How to solve name mangling ? In-Reply-To: <46694634.2010706@chimica.unige.it> References: <4668767D.40208@ru.nl> <20070607215657.GQ7609@mentat.za.net> <46688943.6060506@gmx.net> <46691F72.5050203@ru.nl> <4669414B.8000901@gmx.net> <46694634.2010706@chimica.unige.it> Message-ID: <466D554F.1070801@unibo.it>

Giorgio Luciano ha scritto:
> I'm enjoying reading all those comments, also because sometimes
> metaphors are risky to use ;) (it seems like an episode of House M.D.),
> because I can reply: if you want to drive a car you need to have a
> license... ok, but if you want to drive a car you don't need to be a
> mechanic or a mechanical engineer and know how the engine works...
> otherwise I guess there would be far fewer car drivers .. ;)

Problem is, programming is a science in itself. I do not work in informatics, nor have I ever studied the theory of programming, and the only programming language I have a firm grasp of is Python. Still, I don't feel that "programming should be easy for everyone". Or better, it should be as easy as possible (I'm currently trying to teach myself C++, and it's quite hellish...) but not easier. Otherwise we would all be coding with Logo. So, programming may be hard, but we can't expect it to be easy for everyone. It must be just as hard as it should be.

(By the way, this is the same mentality as "computers should be easy for everyone". Computers are meta-tools; they are extremely complex objects, and using them is correspondingly complex. The fact that every layman can sit in front of a computer and write an email is a miracle of current technology, not something ordinary. We can't expect computers to become much easier than they are today without being dumbed down: we must instead expect people to be educated about them.)

IMHO, Python is just that: it is a really easy, ready-to-go language, but with all the guts to do everything. In this context, namespaces are a damn great Python feature, and using them correctly makes code:

- more readable (because you know that module.function() belongs to module, whereas with a bare, *-imported function() you lose that information)
- more robust (because there is no clash between different modules)
- actually easier (because I do not have to worry that the function() I write myself will clash with some other function() I have imported; I have had extremely quirky bugs due to that)

And it is an extremely simple concept to grasp, IMHO. I also come from the early days of "from foo import *", and somewhere in my apps there are remains of that bad habit. But every time I can, I rewrite them correctly, because I quickly learned that importing * is the wrong way.

m.

-- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387

-------------- next part -------------- A non-text attachment was scrubbed...
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL:

From dominique.orban at gmail.com Mon Jun 11 11:09:01 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Mon, 11 Jun 2007 11:09:01 -0400 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: <466CC232.30806@ar.media.kyoto-u.ac.jp> References: <466CC232.30806@ar.media.kyoto-u.ac.jp> Message-ID: <466D658D.8040706@gmail.com>

David Cournapeau wrote:
> Hi there,
>
> I need to generate random positive definite matrices, mainly for
> testing purposes. Before, I was generating them using a random matrix A
> given by randn and computing A'A. Unfortunately, if A is singular, so
> is A'A. Is there a better way to do this than testing whether A is
> singular? They do not need to follow a specific distribution (but I
> would like to avoid having them follow a really special "pattern").

Strictly diagonally dominant symmetric matrices with a positive diagonal are positive definite. So you could generate a random A, compute AA = A'A, and then increase the elements on the diagonal to make sure that AA[i,i] > sum( abs(AA[i,j]), j != i ), e.g., compute the sum on the right-hand side, add 1 to it, and assign the result to AA[i,i].

More simply, you could compute A'A + alpha*I for some alpha > 0 of your choice, where I is the identity matrix. All eigenvalues of this matrix are >= alpha, which makes it "safely" positive definite.

Dominique
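The second suggestion is a one-liner in numpy; a small sketch (alpha = 0.1 is an arbitrary choice):

import numpy
from numpy.random import randn

def random_posdef_shifted(n, alpha=0.1):
    # A'A is positive semidefinite; adding alpha*I pushes every
    # eigenvalue up to at least alpha, so the result is safely
    # positive definite even when A is singular.
    a = randn(n, n)
    return numpy.dot(a.T, a) + alpha * numpy.eye(n)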
From rcsqtc at iiqab.csic.es Mon Jun 11 11:56:14 2007 From: rcsqtc at iiqab.csic.es (Ramon Crehuet) Date: Mon, 11 Jun 2007 17:56:14 +0200 Subject: [SciPy-user] non linear optimisation Message-ID: <466D709E.4010304@iiqab.csic.es>

Dear all,

I have a problem of non-linear optimisation (see below for an experimental description). I have many experimental curves (x,y) and want to fit the points to a non-linear function y=f(x; a,b,c...), where a,b,c... are the parameters. I know how to do that. But the tricky thing is that some parameters should be the same for all the curves, and some are different for each curve, and I don't know if there is any module that can help me in doing this non-linear fit of *all* the curves at the same time.

For the experimental scientists, I will say that these curves correspond to absorption spectra of the same compound in different solvents. The model curve f(x; a,b,c...) has some parameters that depend on the solvent and others that depend only on the compound and thus should be the same for all the spectra.

Before starting to program blindly, I'd like to know if someone has faced this problem before. I could not find anything in the documentation, probably because I don't know if there is a specific name for this kind of problem.

Thanks in advance,
Ramon

From gael.varoquaux at normalesup.org Mon Jun 11 12:04:31 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 11 Jun 2007 18:04:31 +0200 Subject: [SciPy-user] non linear optimisation In-Reply-To: <466D709E.4010304@iiqab.csic.es> References: <466D709E.4010304@iiqab.csic.es> Message-ID: <20070611160426.GO16599@clipper.ens.fr>

On Mon, Jun 11, 2007 at 05:56:14PM +0200, Ramon Crehuet wrote:
> Before starting to program blindly, I'd like to know if someone has
> faced this problem before. I could not find anything in the
> documentation, probably because I don't know if there is a specific
> name for this kind of problem.

I can't give you a specific answer to your question, but you can find a hackish solution at http://www.scipy.org/Cookbook/FittingData, section 1.3. This will not scale terribly well if you have a large number of different datasets, and you will probably need to write an abstraction layer to generate the cost functions.

Good luck,

Gaël

From kte608 at mail.usask.ca Mon Jun 11 13:56:37 2007 From: kte608 at mail.usask.ca (Karl Edler) Date: Mon, 11 Jun 2007 12:56:37 -0500 Subject: [SciPy-user] Problems installing scipy in SuSE 10.2 Message-ID: <466D8CD5.60102@mail.usask.ca>

Hello,

I am having trouble installing the full scipy in SuSE 10.2. I was unable to type "from scipy import *" because some functions were missing, as explained in the Wiki under "Broken BLAS". I un-installed all my scipy, blas, lapack, and matplotlib rpms and installed all the rpms in http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/ but then I could not install python-scipy, because Ashigabou has no python-scipy package and the python-scipy package in http://repos.opensuse.org/science/ does not like the refblas found in the Ashigabou repository.

Apart from compiling my own blas, lapack, and scipy I am at a loss as to what to do. Specifically, the python-scipy package in "science" cannot find any blas library installed from the Ashigabou packages. Does anyone know what to do about this?

Thanks in advance, Karl Edler

From t_crane at mrl.uiuc.edu Mon Jun 11 14:09:19 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Mon, 11 Jun 2007 13:09:19 -0500 Subject: [SciPy-user] help importing integrate Message-ID: <9EADC1E53F9C70479BF6559370369114142EF5@mrlnt6.mrl.uiuc.edu>

Hi all,

I'm confused. In my code I start with this:

import scipy as S

later on, I call odeint like so:

Y = S.integrate.odeint(args)

Yet, this doesn't seem to work (raising an AttributeError), and I can't figure out why. It will work if I add:

from scipy import integrate

and call odeint like so:

Y = integrate.odeint(args)

but without that extra import line, I get an attribute error saying 'module' object has no attribute 'integrate'. It would seem I am misunderstanding some aspect of scipy's structure or something even more basic. Any help or clarification is appreciated.

trevis

________________________________________________ Trevis Crane Postdoctoral Research Assoc. Department of Physics University of Illinois 1110 W. Green St. Urbana, IL 61801 p: 217-244-8652 f: 217-244-2278 e: tcrane at uiuc.edu ________________________________________________

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From peridot.faceted at gmail.com Mon Jun 11 15:05:41 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 11 Jun 2007 15:05:41 -0400 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: <466D1A22.40001@ar.media.kyoto-u.ac.jp> References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> Message-ID:

On 11/06/07, David Cournapeau wrote:
> Yes, I need them as parameters of multivariate Gaussians, so they need
> to be real, not Hermitian.

Oh, I forgot to say: I wrote a multivariate Gaussian random number generator (takes a covariance matrix as input) and some tools to compare covariance matrices. They're not perfect (in particular the testing is a bit limited) but if they come in handy I'd be happy to put them under a BSD license or contribute them to scipy.

> > Now, a sort-of interesting question is: is there a natural
> > distribution on the cone of positive definite matrices one could hope
> > to draw from?
> Well, we could draw samples from a Wishart, but I was too lazy to
> implement it.
> It is definitely "natural", especially for multivariate normals, but I
> need to improve my knowledge of multidimensional calculus before being
> able to understand them (and implement them, hopefully).

The Wishart distribution has too many parameters to really be natural, to my taste. (I suppose there is a technical definition of natural in terms of category theory, and I bet in the right setting it could be shown to be natural.)

Anne

-------------- next part -------------- A non-text attachment was scrubbed... Name: covariance.py Type: text/x-python Size: 1835 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_covariance.py Type: text/x-python Size: 2453 bytes Desc: not available URL:

From cookedm at physics.mcmaster.ca Mon Jun 11 16:17:27 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 11 Jun 2007 16:17:27 -0400 Subject: [SciPy-user] help importing integrate In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EF5@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EF5@mrlnt6.mrl.uiuc.edu> Message-ID: <20070611201727.GA25843@arbutus.physics.mcmaster.ca>

On Mon, Jun 11, 2007 at 01:09:19PM -0500, Trevis Crane wrote:
> Hi all,
>
> I'm confused. In my code I start with this:
>
> import scipy as S
>
> later on, I call odeint like so:
>
> Y = S.integrate.odeint(args)
>
> Yet, this doesn't seem to work (raising an AttributeError), and I can't
> figure out why. It will work if I add:
>
> from scipy import integrate
>
> and call odeint like so:
>
> Y = integrate.odeint(args)
>
> but without that extra import line, I get an attribute error saying
> 'module' object has no attribute 'integrate'. It would seem I am
> misunderstanding some aspect of scipy's structure or something even
> more basic. Any help or clarification is appreciated.

The toplevel scipy package doesn't import subpackages, so you need to do

import scipy.integrate

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca

From t_crane at mrl.uiuc.edu Mon Jun 11 16:29:31 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Mon, 11 Jun 2007 15:29:31 -0500 Subject: [SciPy-user] help importing integrate Message-ID: <9EADC1E53F9C70479BF6559370369114142EF7@mrlnt6.mrl.uiuc.edu>

> The toplevel scipy package doesn't import subpackages, so you need to do
>
> import scipy.integrate

[Trevis Crane] I was confused because when you go to this link: http://www.scipy.org/doc/api_docs/ and click on the [+] button next to scipy under the Modules heading, I find an 'integrate' heading. Interestingly, if you click on the same button next to the scipy heading that is under the Packages heading, you again see 'integrate' listed. I assumed that these two lists contained the subpackages of scipy (under the Packages heading) and the modules of scipy (under the Modules heading). Anyway, thanks for the clarification,

trevis

>
> --
> |>|\/|<
> /--------------------------------------------------------------------------\
> |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
> |cookedm at physics.mcmaster.ca
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From elcorto at gmx.net Mon Jun 11 20:03:15 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 12 Jun 2007 02:03:15 +0200 Subject: [SciPy-user] help importing integrate In-Reply-To: <20070611201727.GA25843@arbutus.physics.mcmaster.ca> References: <9EADC1E53F9C70479BF6559370369114142EF5@mrlnt6.mrl.uiuc.edu> <20070611201727.GA25843@arbutus.physics.mcmaster.ca> Message-ID: <466DE2C3.7010604@gmx.net>

David M. Cooke wrote:
> On Mon, Jun 11, 2007 at 01:09:19PM -0500, Trevis Crane wrote:
>> Hi all,
>>
>> I'm confused. In my code I start with this:
>>
>> import scipy as S
>>
>> later on, I call odeint like so:
>>
>> Y = S.integrate.odeint(args)
>>
>> Yet, this doesn't seem to work (raising an AttributeError), and I
>> can't figure out why. It will work if I add:
>>
>> from scipy import integrate
>>
>> and call odeint like so:
>>
>> Y = integrate.odeint(args)
>>
>> but without that extra import line, I get an attribute error saying
>> 'module' object has no attribute 'integrate'. It would seem I am
>> misunderstanding some aspect of scipy's structure or something even
>> more basic. Any help or clarification is appreciated.
>
> The toplevel scipy package doesn't import subpackages, so you need to do
>
> import scipy.integrate

This is to reduce importing time. For interactive work, you can do

import scipy as S
S.pkgload()

or

ipython -p scipy

-- cheers, steve

I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams

From wbaxter at gmail.com Mon Jun 11 20:26:07 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 12 Jun 2007 09:26:07 +0900 Subject: [SciPy-user] How to generate random positive definite matrix In-Reply-To: References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> Message-ID:

On 6/12/07, Anne Archibald wrote:
> On 11/06/07, David Cournapeau wrote:
>
>> Yes, I need them as parameters of multivariate Gaussians, so they need
>> to be real, not Hermitian.
>
> Oh, I forgot to say: I wrote a multivariate Gaussian random number
> generator (takes a covariance matrix as input) and some tools to
> compare covariance matrices. They're not perfect (in particular the
> testing is a bit limited) but if they come in handy I'd be happy to
> put them under a BSD license or contribute them to scipy.

That would be a great addition to the cookbook if it's not wanted for scipy itself. http://www.scipy.org/Cookbook

--bb

From ckkart at hoc.net Mon Jun 11 22:07:59 2007 From: ckkart at hoc.net (Christian K) Date: Tue, 12 Jun 2007 11:07:59 +0900 Subject: [SciPy-user] non linear optimisation In-Reply-To: <466D709E.4010304@iiqab.csic.es> References: <466D709E.4010304@iiqab.csic.es> Message-ID:

Ramon Crehuet wrote:
> Dear all,
> I have a problem of non-linear optimisation (see below for an
> experimental description). I have many experimental curves (x,y) and
> want to fit the points to a non-linear function y=f(x; a,b,c...), where
> a,b,c... are the parameters. I know how to do that. But the tricky thing
> is that some parameters should be the same for all the curves, and some
> are different for each curve, and I don't know if there is any module
> that can help me in doing this non-linear fit of *all* the curves at the
> same time.
> For the experimental scientists, I will say that these curves
> correspond to absorption spectra for the same compound in different
> solvents. The model curve f(x; a,b,c...) has some parameters that depend
> on the solvent and others that depend only on the compound, and thus,
> should be the same for all the spectra.

Isn't it sufficient to determine the parameters corresponding to the absorption peak that is apparent in all spectra once, and then hold those parameters fixed when fitting the other ones? Generally I would recommend peak-o-mat (http://lorentz.sourceforge.net/) for peak fitting. It allows defining parameters as fixed, though it cannot fit many spectra at once like you want to do.

Christian

From william.ratcliff at gmail.com Mon Jun 11 23:00:06 2007
From: william.ratcliff at gmail.com (william ratcliff)
Date: Mon, 11 Jun 2007 23:00:06 -0400
Subject: [SciPy-user] non linear optimisation
In-Reply-To: References: <466D709E.4010304@iiqab.csic.es>
Message-ID: <827183970706112000j1399bc69ge78e3cf2ef19b3b2@mail.gmail.com>

lorentz.sourceforge.net/

:>

I'll have to try this :>

On 6/11/07, Christian K wrote:
> [... full quote snipped ...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp Tue Jun 12 01:48:04 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 12 Jun 2007 14:48:04 +0900
Subject: [SciPy-user] How to generate random positive definite matrix
In-Reply-To: References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp>
Message-ID: <466E3394.8080204@ar.media.kyoto-u.ac.jp>

Anne Archibald wrote:
> Oh, I forgot to say: I wrote a multivariate Gaussian random number
> generator (takes a covariance matrix as input) and some tools to
> compare covariance matrices. They're not perfect (in particular the
> testing is a bit limited) but if they come in handy I'd be happy to
> put them under a BSD license or contribute them to scipy.

I have also had mine for quite some time now, in my toolbox for EM. Mine is a bit more complicated, because I handle the diagonal case separately. I didn't know about this test (I intend to use something like Dvoretzky-Kiefer-Wolfowitz tests and apply them to the whole scipy.stats package, but that will take time).

>
> The Wishart distribution has too many parameters to really be natural,
> to my taste. (I suppose there is a technical definition of natural in
> terms of category theory, and I bet in the right setting it could be
> shown to be natural.)

Well, that's going beyond my current mathematics knowledge :)

David

From peridot.faceted at gmail.com Tue Jun 12 04:45:32 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 12 Jun 2007 04:45:32 -0400
Subject: [SciPy-user] How to generate random positive definite matrix
In-Reply-To: <466E3394.8080204@ar.media.kyoto-u.ac.jp>
References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp>
Message-ID:

On 12/06/07, David Cournapeau wrote:
> I have also had mine for quite some time now, in my toolbox for EM. Mine is
> a bit more complicated, because I handle the diagonal case separately. I
> didn't know about this test (I intend to use something like
> Dvoretzky-Kiefer-Wolfowitz tests and apply them to the whole scipy.stats
> package, but that will take time).

This test is not chosen in any kind of careful way, I was just looking for a way to check the covariance matrices output by my code against the random values they were supposed to be describing. Of course, I needed to be able to test the statistical test itself, so I wrote a multivariate normal generator - in too much of a hurry to make it match the usual random number generator API (and hey! generators are cool). This, of course, needed to be tested... you get the picture.

Do you mean testing the scipy.stats distributions to make sure that they generate the same distribution as the cdf describes? In fact most of them are already tested this way using scipy's kstest module. (Unless I'm mistaken the kstest module doesn't work right for discrete-valued distributions, so they are not correctly tested. The kstest module itself doesn't appear to be tested either...)

Anne

From david at ar.media.kyoto-u.ac.jp Tue Jun 12 04:57:45 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 12 Jun 2007 17:57:45 +0900
Subject: [SciPy-user] How to generate random positive definite matrix
In-Reply-To: References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp>
Message-ID: <466E6009.3060303@ar.media.kyoto-u.ac.jp>

Anne Archibald wrote:
>
> Do you mean testing the scipy.stats distributions to make sure that
> they generate the same distribution as the cdf describes? In fact most
> of them are already tested this way using scipy's kstest module.
> (Unless I'm mistaken the kstest module doesn't work right for
> discrete-valued distributions, so they are not correctly tested. The
> kstest module itself doesn't appear to be tested either...)

Mmmh, that's strange; the last time I checked, I didn't see any test like this. But you're right, they are definitely there, and have been for a really long time if I believe svn blame...
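For concreteness, a minimal sketch of the kind of check under discussion (a sketch only, assuming scipy.stats.kstest accepts an array of samples plus a reference distribution name, which is how I understand its interface):

import numpy as np
from scipy import stats

# Draw samples from the generator under test and compare them against
# the reference cdf with a Kolmogorov-Smirnov test.
samples = np.random.standard_normal(1000)
D, pvalue = stats.kstest(samples, 'norm')
print D, pvalue    # a tiny p-value would flag a broken generator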
David

From emilia12 at mail.bg Tue Jun 12 06:42:23 2007
From: emilia12 at mail.bg (emilia12 at mail.bg)
Date: Tue, 12 Jun 2007 13:42:23 +0300
Subject: [SciPy-user] numpy unittests
In-Reply-To: <466E6009.3060303@ar.media.kyoto-u.ac.jp>
References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp> <466E6009.3060303@ar.media.kyoto-u.ac.jp>
Message-ID: <1181644943.8e63401393d9a@mail.bg>

hi list,

is it normal that each time I import numpy, the numpy unittests run?

Regards,
e.

-----------------------------
SCENA - http://www.bgscena.com/

From david at ar.media.kyoto-u.ac.jp Tue Jun 12 06:35:51 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 12 Jun 2007 19:35:51 +0900
Subject: [SciPy-user] numpy unittests
In-Reply-To: <1181644943.8e63401393d9a@mail.bg>
References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp> <466E6009.3060303@ar.media.kyoto-u.ac.jp> <1181644943.8e63401393d9a@mail.bg>
Message-ID: <466E7707.1030706@ar.media.kyoto-u.ac.jp>

emilia12 at mail.bg wrote:
> hi list,
>
> is it normal that each time I import numpy, the numpy unittests run?

No, it's not. Which version are you using, on which platform ?

cheers,

David

From david.huard at gmail.com Tue Jun 12 09:19:39 2007
From: david.huard at gmail.com (David Huard)
Date: Tue, 12 Jun 2007 09:19:39 -0400
Subject: [SciPy-user] non linear optimisation
In-Reply-To: <827183970706112000j1399bc69ge78e3cf2ef19b3b2@mail.gmail.com>
References: <466D709E.4010304@iiqab.csic.es> <827183970706112000j1399bc69ge78e3cf2ef19b3b2@mail.gmail.com>
Message-ID: <91cf711d0706120619o2b296066tf2511512d9aee034@mail.gmail.com>

Ramon,

If you want to calibrate all curves and parameters at once, my suggestion would be to create a unique function with all the parameters, shared and not. Let's say you have

y1 = f1(x1,a,b,c,r1,s1)
y2 = f2(x2,a,b,c,r2,s2)
...
yn = fn(xn,a,b,c,rn,sn)

where a,b,c are the shared parameters, ri,si the individual parameters, and xi,yi the experimental data sets. Write a function

def residuals(a,b,c,r1,...,rn,s1,...,sn):
    z1 = y1 - f1(x1,a,b,c,r1,s1)
    z2 = y2 - f2(x2,a,b,c,r2,s2)
    ...
    zn = yn - fn(xn,a,b,c,rn,sn)
    return the sum of zi**2 over all i

and minimize the residuals with respect to the parameters using optimize.fmin or the like.
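For illustration, a minimal runnable sketch of this scheme with two made-up curves, using scipy.optimize.leastsq so the residual vector can be passed directly (the model f and the data below are invented, not from any real experiment):

import numpy as np
from scipy.optimize import leastsq

def f(x, a, b, r):                        # hypothetical model curve
    return a * np.exp(-b * x) + r

def residuals(params, x1, y1, x2, y2):
    a, b, r1, r2 = params                 # a, b shared; r1, r2 per-curve
    return np.concatenate((y1 - f(x1, a, b, r1),
                           y2 - f(x2, a, b, r2)))

x1 = x2 = np.linspace(0.0, 1.0, 50)
y1 = f(x1, 2.0, 3.0, 0.5)                 # fake "experimental" data
y2 = f(x2, 2.0, 3.0, 1.5)
p0 = np.array([1.0, 1.0, 0.0, 0.0])       # initial guess
popt, ier = leastsq(residuals, p0, args=(x1, y1, x2, y2))
print popt    # should recover a=2, b=3, r1=0.5, r2=1.5

With optimize.fmin instead, you would return the scalar sum of squares from residuals; leastsq just lets you skip that step.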
Cheers,

David

2007/6/11, william ratcliff :
> [... earlier messages quoted in full, snipped ...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gnurser at googlemail.com Tue Jun 12 11:04:21 2007
From: gnurser at googlemail.com (George Nurser)
Date: Tue, 12 Jun 2007 16:04:21 +0100
Subject: [SciPy-user] new problem with f2py --fcompiler=intelem no longer works.
In-Reply-To: <1d1e6ea70706081515o114d673dh3a653e3259cb33f0@mail.gmail.com>
References: <1d1e6ea70706071507k3a5e0958n2089edcc9d551cf5@mail.gmail.com> <466906A8.9000509@cens.ioc.ee> <1d1e6ea70706080129v3425fc14p780f862ab172b0c0@mail.gmail.com> <46696494.50000@cens.ioc.ee> <1d1e6ea70706081515o114d673dh3a653e3259cb33f0@mail.gmail.com>
Message-ID: <1d1e6ea70706120804u446caf9h81a3c0a249a4f373@mail.gmail.com>

On 08/06/07, George Nurser wrote:
> First problem is now solved, thanks, but compilation still fails with
> the second error, with None in version_cmd

If I replace None by '-V' in line 133 of /numpy/distutils/fcompiler/intel.py, viz, in class IntelEM64TFCompiler:

executables = { 'version_cmd' : ['', None],
--->
executables = { 'version_cmd' : ['', '-V'],

it seems to run fine for me now. On our machines (running ifort 9.1), -V is also the option for getting the version on Intel32bit and Itanium machines, so I guess the same thing needs to be done for class IntelItaniumFCompiler & class IntelFCompiler. No idea about the VisualFCompilers.

--George.

From rex at nosyntax.net Tue Jun 12 16:58:37 2007
From: rex at nosyntax.net (rex)
Date: Tue, 12 Jun 2007 13:58:37 -0700
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
Message-ID: <20070612205837.GA29384@x2.nosyntax.com>

Karl Edler [2007-06-11 10:58]:
> ...and installed all the rpms in
> http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/
> but then I could not install python-scipy because Ashigabou has no
> python-scipy package and the python-scipy package in
> http://repos.opensuse.org/science/ does not like the refblas found in
> the Ashigabou repository. Apart from compiling my own blas, lapack, and
> scipy I am at a loss as to what to do.

Hello Karl,

Since none of the experts have responded, I will. I suggest compiling blas, lapack, numpy and scipy. Have a look at:

http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97

There are a couple of minor errors. Feel free to email me if you have problems. I recently went through the process of building on SUSE 10.2 using Intel's compilers and MKL instead of gcc, blas, and lapack.
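As an aside, a quick way to confirm which BLAS/LAPACK a finished numpy build actually linked against - a small sketch, assuming a numpy recent enough to have show_config(), as the versions discussed here do:

# Prints the blas/lapack/atlas sections numpy was built against;
# useful for confirming that MKL (or ATLAS) was really picked up.
import numpy
numpy.show_config()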
-rex

From david at ar.media.kyoto-u.ac.jp Tue Jun 12 21:52:06 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 13 Jun 2007 10:52:06 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: <466D8CD5.60102@mail.usask.ca>
References: <466D8CD5.60102@mail.usask.ca>
Message-ID: <466F4DC6.7010504@ar.media.kyoto-u.ac.jp>

Karl Edler wrote:
> Hello,
>
> I am having trouble installing the full scipy in SuSE 10.2. I was unable
> to type "from scipy import *" because some functions were missing as
> explained in the Wiki under "Broken BLAS". I un-installed all my scipy,
> blas, lapack, matplotlib rpms and installed all the rpms in
> http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/
> but then I could not install python-scipy because Ashigabou has no
> python-scipy package and the python-scipy package in
> http://repos.opensuse.org/science/ does not like the refblas found in
> the Ashigabou repository.

Yes, that's definitely possible (mixing different repositories). For scipy, do you use a 64-bit arch? If not, it should work (there is python-scipy on the repository). Again, I must say that all the packages on ashigabou are experimental for now (but I don't think scipy on the science repo uses blas at all; I had some discussion with the people working on this repository, and they are not heavy users of numpy/scipy, and couldn't manage to compile the thing properly). I put the project on standby the last few weeks because of PhD work, but I've finished the important things now, and hope to polish things a bit more.

>
> Specifically the python-scipy package in "science" cannot find any blas
> library installed from the Ashigabou packages.
>

That's unfortunately expected for now. Ideally, once the package is OK, it should be included officially in opensuse. But I am not a user of opensuse, and find it extremely painful to use myself, so I cannot be an official packager for it. Someone else will have to do it.

David

From david at ar.media.kyoto-u.ac.jp Tue Jun 12 22:00:34 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 13 Jun 2007 11:00:34 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: <20070612205837.GA29384@x2.nosyntax.com>
References: <20070612205837.GA29384@x2.nosyntax.com>
Message-ID: <466F4FC2.9080303@ar.media.kyoto-u.ac.jp>

rex wrote:
> Karl Edler [2007-06-11 10:58]:
> [... quote snipped ...]
>
> Since none of the experts have responded, I will. I suggest compiling
> blas, lapack, numpy and scipy. Have a look at:
>
> http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97

Please note that the instructions are not up-to-date anymore, especially concerning atlas. One of the problems is that now, many distributions (debian and ubuntu being exceptions here) use gfortran as the default fortran compiler instead of g77, and without extra care, you cannot mix libraries compiled by one compiler with the other. Also, the g77 (compat-g77 something on opensuse, if I remember correctly) has been broken for a long time.
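If you are unsure which Fortran compiler numpy's build machinery will pick up on a given box, there is a quick check - a sketch, under the assumption that show_fcompilers is available in your numpy.distutils, as it is in the versions I have looked at:

# List the Fortran compilers numpy.distutils can detect on this system;
# whichever it reports as available/default is what a scipy build will use.
from numpy.distutils.fcompiler import show_fcompilers
show_fcompilers()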
On opensuse (at least 10.2), I would say use gfortran everywhere instead of g77, because the latter is being deprecated on this platform.

David

From nwagner at iam.uni-stuttgart.de Wed Jun 13 03:02:48 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 13 Jun 2007 09:02:48 +0200
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: <466F4FC2.9080303@ar.media.kyoto-u.ac.jp>
References: <20070612205837.GA29384@x2.nosyntax.com> <466F4FC2.9080303@ar.media.kyoto-u.ac.jp>
Message-ID: <466F9698.3050907@iam.uni-stuttgart.de>

David Cournapeau wrote:
> [... quote snipped ...]
> On opensuse (at least 10.2), I would say use gfortran everywhere instead
> of g77, because the latter is being deprecated on this platform.

Hi David,

I use g77 on opensuse 10.2 without any problem. I have uninstalled gcc-fortran before the installation of LAPACK/ATLAS, numpy/scipy.

compat-g77 works fine for me. The problem was fixed some months ago. IIRC Robert K. didn't recommend the usage of gfortran. Is that still valid?

Nils

From david at ar.media.kyoto-u.ac.jp Wed Jun 13 03:07:38 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 13 Jun 2007 16:07:38 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: <466F9698.3050907@iam.uni-stuttgart.de>
References: <20070612205837.GA29384@x2.nosyntax.com> <466F4FC2.9080303@ar.media.kyoto-u.ac.jp> <466F9698.3050907@iam.uni-stuttgart.de>
Message-ID: <466F97BA.20405@ar.media.kyoto-u.ac.jp>

Nils Wagner wrote:
> Hi David,
>
> I use g77 on opensuse 10.2 without any problem.
> I have uninstalled gcc-fortran before the installation of LAPACK/ATLAS,
> numpy/scipy.

You definitely can do it, but as this is not the default compiler, you may have problems with other packages. As long as you compile everything by yourself, this is fine, but if you need something compiled by the default fortran compiler, this is tricky.

If you remove gfortran, you are already sure that everything is compiled by g77 and not gfortran, which is helpful, indeed.

> compat-g77 works fine for me. The problem was fixed some months ago.
> IIRC Robert K. didn't recommend the usage of gfortran. Is that still valid?

I don't know the context where R. Kern said that, and he knows better than me for sure, but I think this depends on the distribution you're using; I compiled everything with gfortran for suse and fedora on ashigabou, because that's the default compiler. On debian based distributions (including Ubuntu), definitely, g77 is the best choice, especially since everything is packaged correctly, so compiling dependencies is not needed.

David

From fdu.xiaojf at gmail.com Wed Jun 13 03:37:04 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Wed, 13 Jun 2007 15:37:04 +0800
Subject: [SciPy-user] questions about solving equations in scipy
Message-ID: <466F9EA0.3040101@gmail.com>

Hi all,

I have posted this to python-list, and some people kindly pointed out that here is a better place for scipy related questions.

I have two questions about scipy.

1) When I was trying to solve a single-variable equation using scipy, I found two methods: scipy.optimize.fsolve, and scipy.optimize.newton.

I have tried both, and it seemed that both worked well, and fsolve ran faster.

My question is, which is the right choice?

And I also found that there are modules and functions in both scipy and numpy, such as scipy.linalg.solve() and numpy.linalg.solve(), and both can solve a linear equation. Are they the same under the hood?

2) I have to solve a linear equation, with the constraint that all variables should be positive. Currently I can solve this problem by manually adjusting the solution in each iteration after getting the solution by using scipy.linalg.solve().

I found a web page about the optimization solver in OpenOffice (http://wiki.services.openoffice.org/wiki/Optimization_Solver#Non-Linear_Programming). OpenOffice has an option of "Allow only positive values", so I think that may be a well-defined problem. Sorry for my ignorance if I was wrong.

Is there a smart way in python?

Thanks in advance.
Xiao Jianfeng

From emilia12 at mail.bg Wed Jun 13 06:14:41 2007
From: emilia12 at mail.bg (emilia12 at mail.bg)
Date: Wed, 13 Jun 2007 13:14:41 +0300
Subject: [SciPy-user] numpy unittests
In-Reply-To: <466E7707.1030706@ar.media.kyoto-u.ac.jp>
References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp> <466E6009.3060303@ar.media.kyoto-u.ac.jp> <1181644943.8e63401393d9a@mail.bg> <466E7707.1030706@ar.media.kyoto-u.ac.jp>
Message-ID: <1181729681.9a9581dca4bbb@mail.bg>

hello David, list

I am using win_xp, python-2.5.1, numpy-1.0.3.win32-py2.5, scipy-0.5.2.win32-py2.5, and the reason for this mail is these warnings:

C:\Python25\lib\site-packages\scipy\misc\__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\linalg\__init__.py:32: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\ndimage\__init__.py:40: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\sparse\__init__.py:9: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\io\__init__.py:20: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\lib\__init__.py:5: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\linsolve\umfpack\__init__.py:7: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\linsolve\__init__.py:13: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\interpolate\__init__.py:15: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\optimize\__init__.py:17: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\special\__init__.py:22: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\stats\__init__.py:15: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\fftpack\__init__.py:21: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\integrate\__init__.py:16: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\signal\__init__.py:17: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
C:\Python25\lib\site-packages\scipy\maxentropy\__init__.py:12: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test

Quoting David Cournapeau :

> emilia12 at mail.bg wrote:
> > hi list,
> >
> > is it normal that each time I import numpy, the numpy unittests run?
>
> No, it's not. Which version are you using, on which platform ?
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-----------------------------
SCENA - http://www.bgscena.com/

From matthieu.brucher at gmail.com Wed Jun 13 06:18:36 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 13 Jun 2007 12:18:36 +0200
Subject: [SciPy-user] numpy unittests
In-Reply-To: <1181729681.9a9581dca4bbb@mail.bg>
References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp> <466E6009.3060303@ar.media.kyoto-u.ac.jp> <1181644943.8e63401393d9a@mail.bg> <466E7707.1030706@ar.media.kyoto-u.ac.jp> <1181729681.9a9581dca4bbb@mail.bg>
Message-ID:

Hi,

Don't bother, they will disappear in the future (the test classes were renamed from ScipyTest to NumpyTest, hence the deprecation warnings).

Matthieu

2007/6/13, emilia12 at mail.bg :
> [... full quote of the warning list snipped ...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp Wed Jun 13 06:13:31 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 13 Jun 2007 19:13:31 +0900
Subject: [SciPy-user] numpy unittests
In-Reply-To: <1181729681.9a9581dca4bbb@mail.bg>
References: <466CC232.30806@ar.media.kyoto-u.ac.jp> <466D1A22.40001@ar.media.kyoto-u.ac.jp> <466E3394.8080204@ar.media.kyoto-u.ac.jp> <466E6009.3060303@ar.media.kyoto-u.ac.jp> <1181644943.8e63401393d9a@mail.bg> <466E7707.1030706@ar.media.kyoto-u.ac.jp> <1181729681.9a9581dca4bbb@mail.bg>
Message-ID: <466FC34B.4030005@ar.media.kyoto-u.ac.jp>

emilia12 at mail.bg wrote:
> hello David, list
>
> I am using win_xp, python-2.5.1, numpy-1.0.3.win32-py2.5,
> scipy-0.5.2.win32-py2.5, and the reason for this mail is these warnings:
>
> [... warning list snipped ...]
Oh, ok, this is harmless. Basically, there was a change in the unit testing framework in numpy, and the corresponding changes were made in the scipy source code, but this was done (relatively) recently and hence, I guess, is not yet present in the binaries provided for Windows. The tests are not actually run; the warnings happen when importing the different packages.

David

From lorrmann at physik.uni-wuerzburg.de Wed Jun 13 09:56:37 2007
From: lorrmann at physik.uni-wuerzburg.de (Volker Lorrmann)
Date: Wed, 13 Jun 2007 15:56:37 +0200
Subject: [SciPy-user] another (little) installation problem
Message-ID: <466FF795.6040701@physik.uni-wuerzburg.de>

Hello everybody,

I'm trying to get scipy built. I've installed python-numpy and atlas with lapack support without any problems. "python setup.py build" fails with

building extension "scipy.fftpack._fftpack" sources
target build/src.linux-i686-2.5/_fftpackmodule.c does not exist:
   Assuming _fftpackmodule.c was generated with "build_src --inplace" command.
error: '_fftpackmodule.c' missing

Is there an easy workaround? Do you need more information? I've attached the whole output.

Thanks so far

Volker

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: scipy.log
URL:

From robert.kern at gmail.com Wed Jun 13 12:44:51 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 13 Jun 2007 11:44:51 -0500
Subject: [SciPy-user] questions about solving equations in scipy
In-Reply-To: <466F9EA0.3040101@gmail.com>
References: <466F9EA0.3040101@gmail.com>
Message-ID: <46701F03.2080005@gmail.com>

fdu.xiaojf at gmail.com wrote:
> Hi all,
>
> I have posted this to python-list, and some people kindly pointed out that here is
> a better place for scipy related questions.
>
> I have two questions about scipy.
>
> 1) When I was trying to solve a single-variable equation using scipy, I
> found two methods: scipy.optimize.fsolve, and scipy.optimize.newton.
>
> I have tried both, and it seemed that both worked well, and fsolve ran
> faster.
>
> My question is, which is the right choice?

Like I said on the python-list, whichever one works best for your particular problem.

> And I also found that there are modules and functions in both scipy and numpy,
> such as scipy.linalg.solve() and numpy.linalg.solve(), and both can solve a
> linear equation. Are they the same under the hood?

Yes.

> 2) I have to solve a linear equation, with the constraint that all
> variables should be positive. Currently I can solve this problem by
> manually adjusting the solution in each iteration after getting the solution
> by using scipy.linalg.solve().
>
> I found a web page about the optimization solver in
> OpenOffice (http://wiki.services.openoffice.org/wiki/Optimization_Solver#Non-Linear_Programming).
> OpenOffice has an option of "Allow only positive values", so I think that
> may be a well-defined problem. Sorry for my ignorance if I was wrong.

What this page is talking about is optimization (minimizing the error), not solving a linear equation (error = 0). Take a look at the bound-constrained optimizers in scipy.optimize: fmin_l_bfgs_b and fmin_tnc.
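For instance, a minimal sketch (the matrix and the numbers are invented for illustration) of minimizing the squared residual of a linear system while keeping all variables non-negative:

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

A = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])

def error(x):
    # squared residual ||A*x - b||**2 of the linear system
    r = np.dot(A, x) - b
    return np.dot(r, r)

bounds = [(0., None)] * 2    # constrain each variable to x_i >= 0
x0 = np.ones(2)
x, f, info = fmin_l_bfgs_b(error, x0, approx_grad=True, bounds=bounds)
print x, f    # f near 0 means the constrained system is solvable

approx_grad=True avoids writing the gradient by hand; for a problem this smooth, supplying the analytic gradient would be faster.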
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From rex at nosyntax.net Wed Jun 13 13:09:58 2007
From: rex at nosyntax.net (rex)
Date: Wed, 13 Jun 2007 10:09:58 -0700
Subject: [SciPy-user] another (little) installation problem
In-Reply-To: <466FF795.6040701@physik.uni-wuerzburg.de>
References: <466FF795.6040701@physik.uni-wuerzburg.de>
Message-ID: <20070613170958.GY4818@x2.nosyntax.com>

Volker Lorrmann [2007-06-13 06:56]:
>
> I'm trying to get scipy built. I've installed python-numpy and atlas
> with lapack support without any problems.
> "python setup.py build" fails with
>
> building extension "scipy.fftpack._fftpack" sources
> target build/src.linux-i686-2.5/_fftpackmodule.c does not exist:
>    Assuming _fftpackmodule.c was generated with "build_src --inplace" command.
> error: '_fftpackmodule.c' missing
>
> Is there an easy workaround? Do you need more information? I've
> attached the whole output.

Hello Volker,

These errors look very similar to the errors I got using numpy-1.0.3. On the SciPy-dev list, Pearu pointed out that my errors were due to a bug in the 1.0.3 tarball, and suggested building numpy from the svn version. I did, and it fixed that problem.

What OS and version are you building on?

For anyone wondering about how to get the svn versions, the commands below will download all the files into directories (they will be automatically created) numpy and scipy below the current directory.

svn co http://svn.scipy.org/svn/numpy/trunk numpy
svn co http://svn.scipy.org/svn/scipy/trunk scipy

-rex
--
"It is inhumane, in my opinion, to force people who have a genuine medical need for coffee to wait in line behind people who apparently view it as some kind of recreational activity." --Dave Barry

From ckkart at hoc.net Wed Jun 13 20:41:15 2007
From: ckkart at hoc.net (Christian K)
Date: Thu, 14 Jun 2007 09:41:15 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: <466F97BA.20405@ar.media.kyoto-u.ac.jp>
References: <20070612205837.GA29384@x2.nosyntax.com> <466F4FC2.9080303@ar.media.kyoto-u.ac.jp> <466F9698.3050907@iam.uni-stuttgart.de> <466F97BA.20405@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau wrote:
> Nils Wagner wrote:
>> Hi David,
>>
>> I use g77 on opensuse 10.2 without any problem.
>> I have uninstalled gcc-fortran before the installation of LAPACK/ATLAS,
>> numpy/scipy.
> You definitely can do it, but as this is not the default compiler, you
> may have problems with other packages. As long as you compile everything
> by yourself, this is fine, but if you need something compiled by the
> default fortran compiler, this is tricky.
>
> If you remove gfortran, you are already sure that everything is compiled by
> g77 and not gfortran, which is helpful, indeed.
>> compat-g77 works fine for me. The problem was fixed some months ago.
>> IIRC Robert K. didn't recommend the usage of gfortran. Is that still valid?
> I don't know the context where R. Kern said that, and he knows better
> than me for sure, but I think this depends on the distribution you're

gfortran 4.0.xxx had some real problems, see this thread:
http://thread.gmane.org/gmane.comp.python.scientific.user/6889
Starting from 4.1 I did not encounter any problems. However, I remember having read somewhere that it is not recommended to build atlas with gfortran.

> using; I compiled everything with gfortran for suse and fedora on
> ashigabou, because that's the default compiler. On debian based
> (including Ubuntu), definitely, g77 is the best choice, especially since

Isn't gfortran the default on edgy and feisty? At least that is the one that is installed on my system.

Christian

From david at ar.media.kyoto-u.ac.jp Wed Jun 13 22:20:55 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 14 Jun 2007 11:20:55 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: References: <20070612205837.GA29384@x2.nosyntax.com> <466F4FC2.9080303@ar.media.kyoto-u.ac.jp> <466F9698.3050907@iam.uni-stuttgart.de> <466F97BA.20405@ar.media.kyoto-u.ac.jp>
Message-ID: <4670A607.4020001@ar.media.kyoto-u.ac.jp>

Christian K wrote:
>
> gfortran 4.0.xxx had some real problems, see this thread:
> http://thread.gmane.org/gmane.comp.python.scientific.user/6889
> Starting from 4.1 I did not encounter any problems. However, I remember having
> read somewhere that it is not recommended to build atlas with gfortran.

Disclaimer: I am not a fortran developer, I know nothing about fortran, only about packaging software compiled with a fortran compiler and the related problems.

For atlas, it is hardly a problem I think, because atlas is a C library. There is an option to compile a fortran wrapper, but that's it. The problem is more with blas and lapack, whose tests fail with gfortran but not with g77. (Note that for Lapack, a number of tests fail with both compilers.)

>
> Isn't gfortran the default on edgy and feisty? At least that is the one
> that is installed on my system.

By default, I meant from an ABI point of view: see for example the following link: http://gcc.gnu.org/ml/fortran/2007-01/msg00611.html. The calling convention defaults are different in g77 and gfortran, and as such, if you don't use the default ABI, you become incompatible with everything else.

David

From kte608 at mail.usask.ca Wed Jun 13 23:18:44 2007
From: kte608 at mail.usask.ca (Karl Edler)
Date: Wed, 13 Jun 2007 22:18:44 -0500
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2 (rex)
In-Reply-To: References: Message-ID: <4670B394.7030507@mail.usask.ca>

Thanks for the advice. I will try to build everything from source. I was able to build from source in previous versions, after much difficulty, so I should be able to do it again. For anyone who is interested I am using a 64-bit architecture. If I get it to compile maybe I can help put these repositories into opensuse so that other people can use them as well.
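Before starting such a build, a quick sanity check can save time - a sketch, assuming numpy.distutils.system_info.get_info behaves as in the numpy versions I have seen - to ask whether numpy's build machinery finds the optimized libraries at all:

# Empty dicts here would mean ATLAS/LAPACK were not found, so a scipy
# build would fail or fall back to slow reference routines.
from numpy.distutils.system_info import get_info
print get_info('atlas')
print get_info('lapack')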
From ckkart at hoc.net Wed Jun 13 23:19:20 2007
From: ckkart at hoc.net (Christian K)
Date: Thu, 14 Jun 2007 12:19:20 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: <4670A607.4020001@ar.media.kyoto-u.ac.jp>
References: <20070612205837.GA29384@x2.nosyntax.com> <466F4FC2.9080303@ar.media.kyoto-u.ac.jp> <466F9698.3050907@iam.uni-stuttgart.de> <466F97BA.20405@ar.media.kyoto-u.ac.jp> <4670A607.4020001@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau wrote:
> Christian K wrote:
>> gfortran 4.0.xxx had some real problems, see this thread:
>> http://thread.gmane.org/gmane.comp.python.scientific.user/6889
>> Starting from 4.1 I did not encounter any problems. However, I remember having
>> read somewhere that it is not recommended to build atlas with gfortran.
> Disclaimer: I am not a fortran developer, I know nothing about fortran,
> only about packaging software compiled with a fortran compiler and the
> related problems.
>
> For atlas, it is hardly a problem I think, because atlas is a C library.

True - then I think it was that it was recommended to use gcc3 instead of 4 :) I have never written a single line of fortran either, but sometimes I wrap some fortran code with f2py.

I used SuSE for many years, but once I tried ubuntu I realised that all the libs and packages I was fighting with to get them compiled (like many other SuSE users, apparently) are part of the distribution and just work. Keep that in mind the next time you plan to upgrade your system...

Christian

From david at ar.media.kyoto-u.ac.jp Wed Jun 13 23:53:44 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 14 Jun 2007 12:53:44 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2
In-Reply-To: References: <20070612205837.GA29384@x2.nosyntax.com> <466F4FC2.9080303@ar.media.kyoto-u.ac.jp> <466F9698.3050907@iam.uni-stuttgart.de> <466F97BA.20405@ar.media.kyoto-u.ac.jp> <4670A607.4020001@ar.media.kyoto-u.ac.jp>
Message-ID: <4670BBC8.4070306@ar.media.kyoto-u.ac.jp>

Christian K wrote:
> David Cournapeau wrote:
>> [... quote snipped ...]
>> For atlas, it is hardly a problem I think, because atlas is a C library.
>
> True - then I think it was that it was recommended to use gcc3 instead of 4 :)

Oh yes, but this is a totally different issue: this is because gcc 3 is (was ?) better than gcc 4 for fpu code on x86 archs (not true anymore for core 2 duo). Also note that it is recommended to *link* the objects together with the default compiler of the platform, that is: compile the kernel object files with gcc 3, and then build the whole library using gcc 4. This is precisely to avoid problems arising from ABI issues between different compilers.

> I have never written a single line of fortran either, but sometimes I wrap some
> fortran code with f2py.
> I used SuSE for many years, but once I tried ubuntu I realised that all the libs and
> packages I was fighting with to get them compiled (like many other SuSE users,
> apparently) are part of the distribution and just work. Keep that in mind the
> next time you plan to upgrade your system...

Well, as far as scipy and its dependencies are concerned, from my limited experience (I do not use openSUSE or fedora outside testing), ubuntu and debian are *much* better than FC or openSUSE. You can build numpy and scipy without compiling anything else.

David

From david at ar.media.kyoto-u.ac.jp Wed Jun 13 23:55:34 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 14 Jun 2007 12:55:34 +0900
Subject: [SciPy-user] Problems installing scipy in SuSE 10.2 (rex)
In-Reply-To: <4670B394.7030507@mail.usask.ca>
References: <4670B394.7030507@mail.usask.ca>
Message-ID: <4670BC36.7040009@ar.media.kyoto-u.ac.jp>

Karl Edler wrote:
> Thanks for the advice. I will try to build everything from source. I was
> able to build from source in previous versions, after much difficulty,
> so I should be able to do it again. For anyone who is interested I am
> using a 64-bit architecture.

Ok, my (experimental, I cannot stress this enough) packages do not build on 64-bit archs for openSUSE, and I don't know why. Now that I have a user, if you are willing to be the guinea pig, I may manage to solve the problem.

David

From nwagner at iam.uni-stuttgart.de Thu Jun 14 07:13:07 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 14 Jun 2007 13:13:07 +0200
Subject: [SciPy-user] Usage of integrate.quadrature
Message-ID: <467122C3.8090703@iam.uni-stuttgart.de>

Hi,

How can I use integrate.quadrature to compute

\int\limits_0^1 h(xi) d xi

or

\int\limits_0^1 h(xi) h^T(xi) d xi,

where h(xi) is a v e c t o r-valued function, e.g.

def h(xi,le):
    """ Shape functions """
    tmp = zeros(4,float)
    tmp[0] = 1-3*xi**2+2*xi**3
    tmp[1] = (xi-2*xi**2+xi**3)*le
    tmp[2] = 3*xi**2-2*xi**3
    tmp[3] = (-xi**2+xi**3)*le
    return tmp

integrate.quadrature(?,0,1,args=(le))

Any pointer would be appreciated.

Nils

From charles.yanaitis at rochester.edu Thu Jun 14 08:12:26 2007
From: charles.yanaitis at rochester.edu (Charlie Yanaitis)
Date: Thu, 14 Jun 2007 12:12:26 +0000 (UTC)
Subject: [SciPy-user] scipy fblas.so functions not found
Message-ID:

I'm hoping somebody could offer some advice. I've been stymied for a while trying to build/install a good working version of scipy-0.5.2 on a Saturn Cluster that's running Red Hat RHEL4-U4. I'm using gcc version 3.4.6 and python-2.5. I've tried to build the Atlas variation, but was unsuccessful, so I back-tracked and reverted to building and installing numpy and then scipy. Going this route, scipy at least built and installed OK, but now the fblas.so library is missing functions, and a user reported to me that they got the following error:

ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_

If anybody can offer some advice on a fix or workaround for this, I'd appreciate it!

Thanks in advance!

Charlie Yanaitis

From lev at columbia.edu Thu Jun 14 09:41:22 2007
From: lev at columbia.edu (Lev Givon)
Date: Thu, 14 Jun 2007 09:41:22 -0400
Subject: [SciPy-user] scipy fblas.so functions not found
In-Reply-To: References: Message-ID: <20070614134122.GB21936@localhost.ee.columbia.edu>

Received from Charlie Yanaitis on Thu, Jun 14, 2007 at 08:12:26AM EDT:
> I'm hoping somebody could offer some advice. I've been stymied for a
> while trying to build/install a good working version of scipy-0.5.2
> on a Saturn Cluster that's running Red Hat RHEL4-U4. I'm using gcc
> version 3.4.6 and python-2.5. I've tried to build the Atlas
> variation, but was unsuccessful, so I back-tracked and reverted to
> building and installing numpy and then scipy.
Going this route, > scipy at least built and installed OK, but now, the fblas.so library > is missing functions and a user reported to me that they got the > following error: > > ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so: > undefined symbol: srotmg_ > > If anybody can offer some advice on a fix or work around for this, I'd > appreciate it! > > Thanks in advance! > > Charlie Yanaitis The lapack libraries you are using probably were not compiled against the full blas source (the lapack source package from netlib includes an incomplete subset of the blas source files). You might want to try rebuilding the lapack rpm from Fedora 6 or 7 on your system; it appears to include a patch providing the missing blas routines. L.G. From lev at columbia.edu Thu Jun 14 10:04:24 2007 From: lev at columbia.edu (Lev Givon) Date: Thu, 14 Jun 2007 10:04:24 -0400 Subject: [SciPy-user] scipy fblas.so functions not found In-Reply-To: <20070614134122.GB21936@localhost.ee.columbia.edu> References: <20070614134122.GB21936@localhost.ee.columbia.edu> Message-ID: <20070614140423.GC21936@localhost.ee.columbia.edu> Received from Lev Givon on Thu, Jun 14, 2007 at 09:41:22AM EDT: > Received from Charlie Yanaitis on Thu, Jun 14, 2007 at 08:12:26AM EDT: > > I'm hoping somebody could offer some advice. I've been stymied for a > > while trying to build/install a good working version of scipy-0.5.2 > > on a Saturn Cluster that's running Red Hat RHEL4-U4. I'm using gcc > > version 3.4.6 and python-2.5. I've tried to build the Atlas > > variation, but was unsuccessful, so I back-tracked and reverted to > > building and installing numpy and then scipy. Going this route, > > scipy at least built and installed OK, but now, the fblas.so library > > is missing functions and a user reported to me that they got the > > following error: > > > > ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so: > > undefined symbol: srotmg_ > > > > If anybody can offer some advice on a fix or work around for this, I'd > > appreciate it! > > > > Thanks in advance! > > > > Charlie Yanaitis > > The lapack libraries you are using probably were not compiled against > the full blas source (the lapack source package from netlib includes > an incomplete subset of the blas source files). You might want to try > rebuilding the lapack rpm from Fedora 6 or 7 on your system; it > appears to include a patch providing the missing blas routines. > > L.G. I neglected to mention that Fedora also includes an atlas rpm that you might want to try rebuilding on RHEL instead of the lapack package. L.G. From peridot.faceted at gmail.com Thu Jun 14 11:54:16 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 14 Jun 2007 11:54:16 -0400 Subject: [SciPy-user] Usage of integrate.quadrature In-Reply-To: <467122C3.8090703@iam.uni-stuttgart.de> References: <467122C3.8090703@iam.uni-stuttgart.de> Message-ID: On 14/06/07, Nils Wagner wrote: > How can I use integrate.quadrature to compute > > \int\limits_0^1 h(xi) d xi > > or > > \int\limits_0^1 h(xi) h^T(xi) d xi, > > where h(x) is a v e c t o r-valued function, e.g. The short answer is, you can't. Also, you should be using integrate.quad if you can, but it won't do this either. > def h(xi,le): > """ Shape functions """ > tmp = zeros(4,float) > tmp[0] = 1-3*xi**2+2*xi**3 > tmp[1] = (xi-2*xi**2+xi**3)*le > tmp[2] = 3*xi**2-2*xi**3 > tmp[3] = (-xi**2+xi**3)*le > return tmp > > integrate.quadrature(?,0,1,args=(le)) > > Any pointer would be appreciated. 
Well, vector-valued integrals are by definition computed componentwise, so if your function really does look like this you could just integrate each piece separately (using scipy.integrate.quadrature, or scipy.integrate.quad, which is faster and safer, though these are such simple functions...). In fact, if this is your function, you can find the integrals analytically, which will be vastly faster and more robust (if your calculus is rusty, use MAPLE or the online integrator or whatever).

If you have a function that unavoidably computes a vector every time - perhaps it's based on a Fourier transform (though remember the FT is linear!) or some other process - I don't think scipy has anything that will do the integral for you. But it will not be hard to be as smart as scipy.integrate.quadrature: all it does is Gaussian integration to higher and higher order until it converges or gives up. To do Gaussian quadrature, pick the orthogonal polynomials that correspond to your weight function (probably the Legendre polynomials), evaluate your vector function at the roots given by the orthogonal polynomial object, and add the results up, weighted by the corresponding weights. To make it adaptive, just keep increasing the order until you think it's converged as much as it's going to.
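Something along these lines might serve as a starting point (an untested sketch, assuming scipy.special.orthogonal.p_roots returns the Gauss-Legendre nodes and weights on [-1, 1]):

import numpy as np
from scipy.special.orthogonal import p_roots

def gauss_vec(f, order=10):
    """Fixed-order Gauss-Legendre quadrature of a vector-valued f on [0, 1]."""
    x, w = p_roots(order)              # nodes and weights on [-1, 1]
    t = 0.5 * (x + 1.0)                # map the nodes to [0, 1]
    # accumulate w_i * f(t_i); f returns a vector, so the sum is a vector
    return 0.5 * sum(w_i * f(t_i) for w_i, t_i in zip(w, t))

# e.g. with the shape functions h(xi, le) from the original post:
# print gauss_vec(lambda xi: h(xi, 1.0), order=12)

To make it adaptive, call it with increasing order until two successive results agree to your tolerance.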
Anne

P.S. Be careful with high-order orthogonal polynomials - the recurrence relations scipy uses to compute with them begin to have terrible roundoff error when you get to high orders (above maybe fifty). I had to implement the Chebyshev polynomials myself as cos(n arccos x) to get enough accuracy for my problem. -A

From charles.yanaitis at rochester.edu Thu Jun 14 12:39:53 2007 From: charles.yanaitis at rochester.edu (Charlie Yanaitis) Date: Thu, 14 Jun 2007 16:39:53 +0000 (UTC) Subject: [SciPy-user] scipy fblas.so functions not found References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> Message-ID: Lev, Lev Givon columbia.edu> writes: > > Received from Lev Givon on Thu, Jun 14, 2007 at 09:41:22AM EDT: > > Received from Charlie Yanaitis on Thu, Jun 14, 2007 at 08:12:26AM EDT: > > > I'm hoping somebody could offer some advice. I've been stymied for a > > > while trying to build/install a good working version of scipy-0.5.2 > > > on a Saturn Cluster that's running Red Hat RHEL4-U4. I'm using gcc > > > version 3.4.6 and python-2.5. I've tried to build the Atlas > > > variation, but was unsuccessful, so I back-tracked and reverted to > > > building and installing numpy and then scipy. Going this route, > > > scipy at least built and installed OK, but now, the fblas.so library > > > is missing functions and a user reported to me that they got the > > > following error: > > > > > > ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so: > > > undefined symbol: srotmg_ > > > > > > If anybody can offer some advice on a fix or work around for this, I'd > > > appreciate it! > > > > > > Thanks in advance! > > > > > > Charlie Yanaitis > > > > The lapack libraries you are using probably were not compiled against > > the full blas source (the lapack source package from netlib includes > > an incomplete subset of the blas source files). You might want to try > > rebuilding the lapack rpm from Fedora 6 or 7 on your system; it > > appears to include a patch providing the missing blas routines. > > > > L.G. > > I neglected to mention that Fedora also includes an atlas rpm that you > might want to try rebuilding on RHEL instead of the lapack package. > > L.G. > Thanks for the advice. I went ahead and got the atlas libraries for Linux_HAMMER64SSE2, put the libraries in place and got the following error when building scipy-0.5.2:

/usr/bin/ld: /usr/local/lib/libf77blas.a(dscal.o): relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used when making a shared object; recompile with -fPIC

The above error is what has stymied me with the Atlas version all along. I even got the source code for Atlas and tried to build it myself with gcc, with -fPIC. Here are the lines in the Makefile:

SHELL = /bin/sh
CC = gcc
NM = -o
OJ = -c
F77 = /usr/bin/g77
F77FLAGS = -fomit-frame-pointer -O -m64 -fPIC
FLINKER = $(F77)
FLINKFLAGS = $(F77FLAGS)
FCLINKFLAGS = $(FLINKFLAGS)

It builds fine, but when I try to build scipy-0.5.2, I get the above error telling me to "recompile with -fPIC", when I *did* build Atlas with -fPIC. It's been pretty frustrating. Thanks! Charlie From lorrmann at physik.uni-wuerzburg.de Thu Jun 14 13:46:35 2007 From: lorrmann at physik.uni-wuerzburg.de (Volker Lorrmann) Date: Thu, 14 Jun 2007 19:46:35 +0200 Subject: [SciPy-user] another (little) installation problem Message-ID: <46717EFB.7060702@physik.uni-wuerzburg.de> Volker Lorrmann [2007-06-13 06:56]: > > > i'm trying to get scipy build. I've installed python-numpy and atlas > > with lapack support without any problems. > > "python setup.py build" fails with > > > building extension "scipy.fftpack._fftpack" sources > > target build/src.linux-i686-2.5/_fftpackmodule.c does not exist: > > Assuming _fftpackmodule.c was generated with "build_src --inplace" > > command. > > error: '_fftpackmodule.c' missing > > > Is there an easy workaround? Do you need some more informations? I've > > added the whole output. > > Hello Volker, > > These errors look very similar to the errors I got using numpy-1.0.3. On > the SciPy-dev list, Pearu pointed out that my errors were due to a bug > in the 1.0.3 tarball, and suggested building numpy from the svn > version. I did, and it fixed that problem. > > What OS and version are you building on? > > For anyone wondering about how to get the svn versions, the commands > below will download all the files into directories (they will be > automatically created) numpy and scipy below the current directory. > > svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn co http://svn.scipy.org/svn/scipy/trunk scipy > > -rex Hi rex, thanks for your advice. Building numpy and scipy from svn fixed my problem. > What OS and version are you building on? I'm using Arch Linux... volker From fdu.xiaojf at gmail.com Thu Jun 14 13:46:36 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Fri, 15 Jun 2007 01:46:36 +0800 Subject: [SciPy-user] lagrange multipliers in python Message-ID: <46717EFC.4040904@gmail.com> Hi all, Sorry for the cross-posting. I'm trying to find the minimum of a multivariate function F(x1, x2, ..., xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) = 0, ..., Gm(...) = 0. The conventional way is to construct a dummy function Q,

$$Q(X, \Lambda) = F(X) + \lambda_1 G_1(X) + \lambda_2 G_2(X) + \dots + \lambda_m G_m(X)$$

and then calculate the values of X and \Lambda at which the gradient of Q equals 0. I think this is routine work, so I want to know if there are available functions in python (mainly scipy) to do this? Or maybe there is already a better way in python?
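(To make it concrete, here is a toy version of what I mean, with F(x1, x2) = x1^2 + x2^2 and a single constraint G1(X) = x1 + x2 - 1 = 0. Solving grad Q = 0 with scipy.optimize.fsolve works for this small case, but grad_Q below is written out by hand, and for my real F those derivatives are exactly what I cannot write down:)

from scipy.optimize import fsolve

def grad_Q(z):
    # unknowns are (x1, x2, lambda_1); the last equation is G1(X) = 0
    x1, x2, lam = z
    return [2.0*x1 + lam,       # dQ/dx1
            2.0*x2 + lam,       # dQ/dx2
            x1 + x2 - 1.0]      # dQ/dlambda_1

print fsolve(grad_Q, [0.0, 0.0, 0.0])   # -> [ 0.5  0.5 -1. ]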
I have googled but haven't found helpful pages. Thanks a lot. Xiao Jianfeng From openopt at ukr.net Thu Jun 14 14:30:54 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 14 Jun 2007 21:30:54 +0300 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <46717EFC.4040904@gmail.com> References: <46717EFC.4040904@gmail.com> Message-ID: <4671895E.1000400@ukr.net> afaik scipy has no NLP solvers with equality constraints, and neither does CVXOPT. I have seen somewhere a Python package (seemingly a binding to C code) where rSQP is implemented; it allows nonlinear equality constraints. Try a web search for "python rsqp optimization solver" or "python sqp optimization solver"; for example, visit http://trilinos.sandia.gov/packages/moocho/ and the Python binding to the latter, http://trilinos.sandia.gov/packages/pytrilinos/ However, I haven't used these myself. Another approach is to use penalty coefficients (instead of Lagrange multipliers) with Naum Z. Shor's r-algorithm, implemented in the scikits.openopt ralg solver (it contains no C or Fortran code; BSD license). It can use a gradient/subgradient provided by the user and plot graphical output for the unconstrained NLP ralg solver. Currently it is unconstrained, but it handles very large penalties rather well.
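To illustrate the penalty idea with plain scipy (a rough sketch only; F, G and penalty_solve are placeholder names, and openopt's ralg would simply take the place of fmin here):

from scipy.optimize import fmin

def F(x): return (x[0] - 1.0)**2 + (x[1] - 2.0)**2   # placeholder objective
def G(x): return [x[0] + x[1] - 1.0]                 # placeholder G_i(x) = 0

def penalty_solve(x0, mu=1.0, mumax=1e8):
    x = x0
    while mu < mumax:
        # quadratic-penalty version of the dummy function Q
        Q = lambda xx: F(xx) + mu*sum(g*g for g in G(xx))
        x = fmin(Q, x, disp=0)
        mu = mu*10.0        # tighten the penalty, restart from last x
    return x

print penalty_solve([0.0, 0.0])   # -> approximately [ 0.  1.]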
svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt
sudo python setup.py install

from scikits.openopt import NLP
help(NLP)

However, it doesn't produce pyc-files in the site-packages directory during installation, so for now you'd better generate them by hand. This is a very preliminary version; only a few months have been spent on it. WBR, D. fdu.xiaojf at gmail.com wrote: > Hi all, > > Sorry for the cross-posting. > > I'm trying to find the minimum of a multivariate function F(x1, x2, ..., > xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) = > 0, ..., Gm(...) = 0. > > The conventional way is to construct a dummy function Q, > > $$Q(X, \Lambda) = F(X) + \lambda_1 G1(X) + \lambda_2 G2(X) + ... + \lambda_m > Gm(X)$$ > > and then calculate the value of X and \Lambda when the gradient of function Q > equals 0. > > I think this is a routine work, so I want to know if there are available > functions in python(mainly scipy) to do this? Or maybe there is already > a better way in python? > > I have googled but haven't found helpful pages. > > Thanks a lot. > > Xiao Jianfeng > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From lev at columbia.edu Thu Jun 14 15:32:11 2007 From: lev at columbia.edu (Lev Givon) Date: Thu, 14 Jun 2007 15:32:11 -0400 Subject: [SciPy-user] scipy fblas.so functions not found In-Reply-To: References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> Message-ID: <20070614193211.GG14029@avicenna.cc.columbia.edu> Received from Charlie Yanaitis on Thu, Jun 14, 2007 at 12:39:53PM EDT: > Lev, (snip) > > I neglected to mention that Fedora also includes an atlas rpm that you > > might want to try rebuilding on RHEL instead of the lapack package. > > > > L.G. > > > > Thanks for the advice. I went ahead and got the atlas libraries for > Linux_HAMMER64SSE2, put the libraries in place and got the following > error when building scipy-0.5.2: > > /usr/bin/ld: /usr/local/lib/libf77blas.a(dscal.o): relocation R_X86_64_PC32 > against `atl_f77wrap_dscal__' can not be used when making a shared object; > recompile with -fPIC > > The above error is what has stymied me with the Atlas version all > along. I even got the source code for Atlas and tried to build it > myself with gcc, with -fPIC. Here are the lines in the Makefile: > > SHELL = /bin/sh > CC = gcc > NM = -o > OJ = -c > F77 = /usr/bin/g77 > F77FLAGS = -fomit-frame-pointer -O -m64 -fPIC > FLINKER = $(F77) > FLINKFLAGS = $(F77FLAGS) > FCLINKFLAGS = $(FLINKFLAGS) > > It builds fine, but when I try to build scipy-0.5.2, I get the above error telling me to > "recompile with -fPIC", when I *did* build Atlas with -fPIC. It's been pretty > frustrating. > > Thanks! > > Charlie Since the binary atlas rpm in Fedora is built with gfortran rather than g77, you should try using gfortran when you build scipy. L.G. From domi at vision.ee.ethz.ch Thu Jun 14 16:47:41 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Thu, 14 Jun 2007 22:47:41 +0200 Subject: [SciPy-user] shape problem after flipud Message-ID: <4671A96D.70709@vision.ee.ethz.ch> Hi, The following trivial codelet does not work as expected:

-------------------------------
from scipy import *
import copy

shape = (256,256)
data = zeros(256*256)
data.shape = shape
print 'old shape', data.shape
print data

data=flipud(data)
data.shape=(256*256,)
print 'new shape', data.shape
-------------------------------

exiting with an incomprehensible error:

data.shape=(256*256,)
AttributeError: incompatible shape for a non-contiguous array

If 'flipud' is omitted, it works as expected. I tried via a deepcopy, but the problem persists. Why should flipud invalidate 'reshapeability'? What am I doing wrong? Thanks a lot for any hints, - Dominik -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From robert.kern at gmail.com Thu Jun 14 17:04:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 16:04:26 -0500 Subject: [SciPy-user] shape problem after flipud In-Reply-To: <4671A96D.70709@vision.ee.ethz.ch> References: <4671A96D.70709@vision.ee.ethz.ch> Message-ID: <4671AD5A.3040309@gmail.com> Dominik Szczerba wrote: > Hi, > > The following trivial codelet does not work as expected: > > ------------------------------- > from scipy import * > import copy > > shape = (256,256) > data = zeros(256*256) > data.shape = shape > print 'old shape', data.shape > print data > > data=flipud(data) > data.shape=(256*256,) > print 'new shape', data.shape > ------------------------------- > > exiting with an incomprehensible error: > data.shape=(256*256,) > AttributeError: incompatible shape for a non-contiguous array > > If 'flipud' is omitted, it works as expected. I tried via a deepcopy, > the problem persists. Why should flipud invalidate 'reshapeability'? Assigning to .shape only adjusts the strides. It does not change any of the memory. It will only let you do that when the memory layout is consistent with the desired shape. flipud() just gets a view on the original memory by using different strides; the result is non-contiguous. The memory layout is no longer consistent with the flattened view that you are requesting.
Here is an example:

In [25]: data = arange(4)

This is the layout in memory for 'data' and (later) 'd2':

In [26]: data
Out[26]: array([0, 1, 2, 3])

In [29]: data.shape = (2, 2)

In [30]: data
Out[30]:
array([[0, 1],
       [2, 3]])

In [31]: d2 = flipud(data)

In [32]: d2
Out[32]:
array([[2, 3],
       [0, 1]])

Calling .ravel() will copy the array if it is non-contiguous and will show you the memory layout that 'd2' is mimicking with its strides.

In [33]: d2.ravel()
Out[33]: array([2, 3, 0, 1])

Assigning to .shape will only let you do that if the memory layout is consistent with the view that the array is trying to present.

In [52]: import copy

In [53]: d3 = copy.deepcopy(d2)

In [54]: d3
Out[54]:
array([[2, 3],
       [0, 1]])

In [55]: d3.shape = (4,)

In [56]: d3
Out[56]: array([2, 3, 0, 1])

copy.deepcopy() should have worked. I don't know why it didn't for you. However:

> What am I doing wrong?

You will want to use numpy.reshape() if you want the most foolproof and idiomatic way to get a reshaped array. It will copy the array if necessary.

In [57]: reshape(d2, (4,))
Out[57]: array([2, 3, 0, 1])

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From domi at vision.ee.ethz.ch Thu Jun 14 17:41:50 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Thu, 14 Jun 2007 23:41:50 +0200 Subject: [SciPy-user] shape problem after flipud In-Reply-To: <4671AD5A.3040309@gmail.com> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> Message-ID: <4671B61E.1040304@vision.ee.ethz.ch> Thank you for a very helpful explanation. Please see below: Robert Kern wrote: > Dominik Szczerba wrote: >> Hi, >> >> The following trivial codelet does not work as expected: >> >> ------------------------------- >> from scipy import * >> import copy >> >> shape = (256,256) >> data = zeros(256*256) >> data.shape = shape >> print 'old shape', data.shape >> print data >> >> data=flipud(data) >> data.shape=(256*256,) >> print 'new shape', data.shape >> ------------------------------- >> >> exiting with an incomprehensible error: >> data.shape=(256*256,) >> AttributeError: incompatible shape for a non-contiguous array >> >> If 'flipud' is omitted, it works as expected. I tried via a deepcopy, >> the problem persists. Why should flipud invalidate 'reshapeability'? > > Assigning to .shape only adjusts the strides. It does not change any of the > memory. It will only let you do that when the memory layout is consistent with > the desired shape. flipud() just gets a view on the original memory by using > different strides; the result is non-contiguous. The memory layout is no longer > consistent with the flattened view that you are requesting. Here is an example: > > In [25]: data = arange(4) > > This is the layout in memory for 'data' and (later) 'd2': > > In [26]: data > Out[26]: array([0, 1, 2, 3]) > > In [29]: data.shape = (2, 2) > > In [30]: data > Out[30]: > array([[0, 1], > [2, 3]]) > > In [31]: d2 = flipud(data) > > In [32]: d2 > Out[32]: > array([[2, 3], > [0, 1]]) > > Calling .ravel() will copy the array if it is non-contiguous and will show you > the memory layout that 'd2' is mimicking with its strides. Quite a bit of a gotcha for a post-Matlab user. The deepcopy thing was already not pleasant to swallow.
> > In [33]: d2.ravel() > Out[33]: array([2, 3, 0, 1]) > > Assigning to .shape will only let you do that if the memory layout is consistent > with the view that the array is trying to present. > > In [52]: import copy > > In [53]: d3 = copy.deepcopy(d2) > > In [54]: d3 > Out[54]: > array([[2, 3], > [0, 1]]) > > In [55]: d3.shape = (4,) > > In [56]: d3 > Out[56]: array([2, 3, 0, 1]) > > copy.deepcopy() should have worked. I don't know why it didn't for you. However: I was doing it in another way, namely flipping a deepcopy. You say to deepcopy the result, and it works:

data = flipud(data)
data3 = copy.deepcopy(data)
data3.shape = (256*256,)
print 'new shape', data3.shape

> >> What am I doing wrong? > > You will want to use numpy.reshape() if you want the most foolproof and > idiomatic way to get a reshaped array. It will copy the array if necessary. > > In [57]: reshape(d2, (4,)) > Out[57]: array([2, 3, 0, 1]) > This actually did not work:

shape = (256,256)
data = zeros(256*256)
data.shape = shape
print 'old shape', data.shape
print data
data2 = flipud(data)
data2.ravel()
reshape(data2,(256*256,))
print 'new shape', data2.shape

The shape is preserved! Even though I am fine with the previously given solution, I am still curious what is wrong here. BTW> Why (size,) and not (size,1)? Thanks a lot for your help, - Dominik -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From robert.kern at gmail.com Thu Jun 14 17:52:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 16:52:50 -0500 Subject: [SciPy-user] shape problem after flipud In-Reply-To: <4671B61E.1040304@vision.ee.ethz.ch> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> Message-ID: <4671B8B2.2040601@gmail.com> Dominik Szczerba wrote: > Thank you for a very helpful explanation. Please see below: > > Robert Kern wrote: >> Dominik Szczerba wrote: >>> Hi, >>> >>> The following trivial codelet does not work as expected: >>> >>> ------------------------------- >>> from scipy import * >>> import copy >>> >>> shape = (256,256) >>> data = zeros(256*256) >>> data.shape = shape >>> print 'old shape', data.shape >>> print data >>> >>> data=flipud(data) >>> data.shape=(256*256,) >>> print 'new shape', data.shape >>> ------------------------------- >>> >>> exiting with an incomprehensible error: >>> data.shape=(256*256,) >>> AttributeError: incompatible shape for a non-contiguous array >>> >>> If 'flipud' is omitted, it works as expected. I tried via a deepcopy, >>> the problem persists. Why should flipud invalidate 'reshapeability'? >> Assigning to .shape only adjusts the strides. It does not change any of the >> memory. It will only let you do that when the memory layout is consistent with >> the desired shape. flipud() just gets a view on the original memory by using >> different strides; the result is non-contiguous. The memory layout is no longer >> consistent with the flattened view that you are requesting. Here is an example: >> >> In [25]: data = arange(4) >> >> This is the layout in memory for 'data' and (later) 'd2': >> >> In [26]: data >> Out[26]: array([0, 1, 2, 3]) >> >> In [29]: data.shape = (2, 2) >> >> In [30]: data >> Out[30]: >> array([[0, 1], >> [2, 3]]) >> >> In [31]: d2 = flipud(data) >> >> In [32]: d2 >> Out[32]: >> array([[2, 3], >> [0, 1]]) >> >> Calling .ravel() will copy the array if it is non-contiguous and will show you >> the memory layout that 'd2' is mimicking with its strides.
> > Quite a bit of a gotcha for a post-Matlab user. The deepcopy thing was > already not pleasant to swallow. I don't recommend using deepcopy. Use reshape(). >>> What am I doing wrong? >> You will want to use numpy.reshape() if you want the most foolproof and >> idiomatic way to get a reshaped array. It will copy the array if necessary. >> >> In [57]: reshape(d2, (4,)) >> Out[57]: array([2, 3, 0, 1]) >> > This actually did not work: > > shape = (256,256) > data = zeros(256*256) > data.shape = shape > print 'old shape', data.shape > print data > data2 = flipud(data) > data2.ravel() > reshape(data2,(256*256,)) > print 'new shape', data2.shape > > The shape is preserved! Even though I am fine with the previously given > solution, I am still curious what is wrong here. It returns a (possibly new) object with the requested shape. It does not affect the shape of the original array since it is not always possible to do that safely. Assigning to .shape is the appropriate way to change the shape of an existing array in-place if it is safe to do so. Two different ways to do two different things. > BTW> Why (size,) and not (size,1)? Because they are different things. Not everything is a 2D array. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zpincus at stanford.edu Thu Jun 14 20:23:34 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 14 Jun 2007 17:23:34 -0700 Subject: [SciPy-user] can't build on OS X from SVN Message-ID: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> Hi all, I tried to build scipy from today's svn (r3105) on OS X (PPC), using GCC 4.0.1 and gfortran, (as 'python setup.py config_fc --fcompiler=gnu95 build') and I got the following error:

/usr/bin/ld: flag: -undefined dynamic_lookup can't be used with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1

Which is strange since MACOSX_DEPLOYMENT_TARGET is set to 10.4. I nuked the build tree and the current install, made sure that the MACOSX_DEPLOYMENT_TARGET value was correct, tried again, and again got the same error. Any thoughts? (The context within which this error was raised is below). The only other irregularity during the build was this error text that was printed earlier:

library 'mach' defined more than once, overwriting build_info {'sources': ['Lib/integrate/mach/d1mach.f', 'Lib/integrate/mach/i1mach.f', 'Lib/integrate/mach/r1mach.f', 'Lib/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/integrate/setup.pyc', 1)}, 'source_languages': ['f77']} with {'sources': ['Lib/special/mach/d1mach.f', 'Lib/special/mach/i1mach.f', 'Lib/special/mach/r1mach.f', 'Lib/special/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/special/setup.pyc', 1)}, 'source_languages': ['f77']}.
Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 collect2: ld returned 1 exit status /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 collect2: ld returned 1 exit status error: Command "/usr/local/bin/gfortran -Wall -undefined dynamic_lookup -bundle build/temp.darwin-8.9.0-Power_Macintosh-2.4/ build/src.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/ _fftpackmodule.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/ fftpack/src/zfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/ fftpack/src/drfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/ fftpack/src/zrfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/ fftpack/src/zfftnd.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/ build/src.darwin-8.9.0-Power_Macintosh-2.4/fortranobject.o -L/usr/ local/lib/gcc/powerpc-apple-darwin8.9.0/4.3.0 -Lbuild/ temp.darwin-8.9.0-Power_Macintosh-2.4 -ldfftpack -lgfortran -o build/ lib.darwin-8.9.0-Power_Macintosh-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 From robert.kern at gmail.com Thu Jun 14 20:34:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 19:34:44 -0500 Subject: [SciPy-user] can't build on OS X from SVN In-Reply-To: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> Message-ID: <4671DEA4.3050102@gmail.com> Zachary Pincus wrote: > Hi all, > > I tried to build scipy from today's svn (r3105) on OS X (PPC), using > GCC 4.0.1 and gfortran, (as 'python setup.py config_fc -- > fcompiler=gnu95 build') and I got the following error: > > /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with > MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 > > Which is strange since MACOSX_DEPLOYMENT_TARGET is set to 10.4. I > nuked the build tree and the current install, made sure that the > MACOSX_DEPLOYMENT_TARGET value was correct, tried again, and again > got the same error. > > Any thoughts? (The context within which this error was raised is > below). Gah. Looks like more fallout from the merge. The get_flags_linker_so() methods which have all of this information don't seem to be called any more. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From zpincus at stanford.edu Thu Jun 14 20:42:44 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 14 Jun 2007 17:42:44 -0700 Subject: [SciPy-user] can't build on OS X from SVN In-Reply-To: <4671DEA4.3050102@gmail.com> References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> <4671DEA4.3050102@gmail.com> Message-ID: <4ECE2216-9B9D-423A-8468-D864C15CDCF9@stanford.edu> For what it's worth, a different linker error seems to happen in the same place with g77 / gcc3.3: % python setup.py config_fc --fcompiler=gnu build /usr/bin/ld: can't locate file for: -lgcc_s collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -lgcc_s collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/ src.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/src.darwin-8.9.0- Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib/gcc/powerpc- apple-darwin7.9.0/3.4.4 -Lbuild/temp.darwin-8.9.0-Power_Macintosh-2.4 -ldfftpack -lg2c -lcc_dynamic -o build/lib.darwin-8.9.0- Power_Macintosh-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 Also, the "library 'mach' defined more than once" error (or "error"?) is still present with g77. Zach On Jun 14, 2007, at 5:34 PM, Robert Kern wrote: > Zachary Pincus wrote: >> Hi all, >> >> I tried to build scipy from today's svn (r3105) on OS X (PPC), using >> GCC 4.0.1 and gfortran, (as 'python setup.py config_fc -- >> fcompiler=gnu95 build') and I got the following error: >> >> /usr/bin/ld: flag: -undefined dynamic_lookup can't be used with >> MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1 >> >> Which is strange since MACOSX_DEPLOYMENT_TARGET is set to 10.4. I >> nuked the build tree and the current install, made sure that the >> MACOSX_DEPLOYMENT_TARGET value was correct, tried again, and again >> got the same error. >> >> Any thoughts? (The context within which this error was raised is >> below). > > Gah. Looks like more fallout from the merge. The get_flags_linker_so > () methods > which have all of this information don't seem to be called any more. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Thu Jun 14 20:48:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 19:48:18 -0500 Subject: [SciPy-user] can't build on OS X from SVN In-Reply-To: <4ECE2216-9B9D-423A-8468-D864C15CDCF9@stanford.edu> References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> <4671DEA4.3050102@gmail.com> <4ECE2216-9B9D-423A-8468-D864C15CDCF9@stanford.edu> Message-ID: <4671E1D2.2080400@gmail.com> Zachary Pincus wrote: > For what it's worth, a different linker error seems to happen in the > same place with g77 / gcc3.3: > > % python setup.py config_fc --fcompiler=gnu build > > /usr/bin/ld: can't locate file for: -lgcc_s > collect2: ld returned 1 exit status > /usr/bin/ld: can't locate file for: -lgcc_s > collect2: ld returned 1 exit status You can't use g77 with a Universal Python (and thus gcc 4). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zpincus at stanford.edu Thu Jun 14 20:57:10 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 14 Jun 2007 17:57:10 -0700 Subject: [SciPy-user] can't build on OS X from SVN In-Reply-To: <4671E1D2.2080400@gmail.com> References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> <4671DEA4.3050102@gmail.com> <4ECE2216-9B9D-423A-8468-D864C15CDCF9@stanford.edu> <4671E1D2.2080400@gmail.com> Message-ID: <581D5E6A-1B5E-46D0-A089-BFDB084CC571@stanford.edu> >> For what it's worth, a different linker error seems to happen in the >> same place with g77 / gcc3.3: >> >> % python setup.py config_fc --fcompiler=gnu build >> >> /usr/bin/ld: can't locate file for: -lgcc_s >> collect2: ld returned 1 exit status >> /usr/bin/ld: can't locate file for: -lgcc_s >> collect2: ld returned 1 exit status > > You can't use g77 with a Universal Python (and thus gcc 4). True indeed. My python (2.4.3) was compiled from source for PPC, and before building scipy with g77, I switched to gcc3.3. (This is how I used to build scipy before it got better compatibility with gfortran on PPC macs.) Anyhow, I was just providing another data point about the failure I'd seen -- not sure if it was useful or not. Zach > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Thu Jun 14 21:02:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 20:02:41 -0500 Subject: [SciPy-user] can't build on OS X from SVN In-Reply-To: <4671DEA4.3050102@gmail.com> References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> <4671DEA4.3050102@gmail.com> Message-ID: <3d375d730706141802r4c80f565g28dd4c80fb5a48a1@mail.gmail.com> On 6/14/07, Robert Kern wrote: > Gah. Looks like more fallout from the merge. The get_flags_linker_so() methods > which have all of this information don't seem to be called any more. Never mind. It does get called in a roundabout way. Please send the full output from the build. Use "python setup.py -v config_fc ... etc." 
to turn on verbose mode. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From novak at ucolick.org Thu Jun 14 21:23:18 2007 From: novak at ucolick.org (Greg Novak) Date: Thu, 14 Jun 2007 18:23:18 -0700 Subject: [SciPy-user] Debugging memory exhaustion in Python? Message-ID: I've written Python code to calculate a bunch of things for a bunch of simulations. The code goes through about 5GB in 10-100MB chunks. The problem is that Python eventually runs out of memory, consuming (according to top) 3GB. I don't see why it should be doing this--as far as I know I'm not hanging on to any references to anything. I've fooled around with the garbage collector, turning debugging information on and trying to see if it will give me useful info about who or what is still hanging around in memory. I've tried to delete all the user variables but even after this the garbage collector can't free any more memory. What I need is a "du" for Python memory, just to get a sense of how/why this is happening. Anyone have suggestions about how to get traction on this? Thanks, Greg From fdu.xiaojf at gmail.com Thu Jun 14 21:29:44 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Fri, 15 Jun 2007 09:29:44 +0800 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <4671895E.1000400@ukr.net> References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net> Message-ID: <4671EB88.3000606@gmail.com> Hi dmitrey: dmitrey wrote: > afaik scipy hasn't NLP solvers with equality constraints, as well as CVXOPT. > I had seen somewhere a Python package (seems like binding to c-code) > where rSQP had been implemented, it allows to have nonlin equality > constraints. Try web search "python rsqp optimization solver" or "python > sqp optimization solver" The equality constraints in my problem are linear equations. Does this make things easier? > > for example visit > http://trilinos.sandia.gov/packages/moocho/ > and python binding to the latter > http://trilinos.sandia.gov/packages/pytrilinos/ > > However, I didn't use the ones. > Another one approach is use penalty coefficients (instead of Lagrange > multipliers) with Naum Z. Shor r-alg implemented in scikits.openopt ralg > solver (it doesn't contain c- or f-code, BSD lic.). It can handle > gradient/subgradient provided by user and plot graphics output for NLP > UC ralg solver. > Currently it's unconstrained, but it allows to handle very huge > penalties rather well. > > svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt > sudo python setup.py install > > from scikits.openopt import NLP > help(NLP) > > however, it doesn't produce pyc-files in the site-packages directory while > installation, you'd better to do it by hands now. > this is very preliminary version, only some months has been spent. > > > WBR, D. Thanks a lot!
Regards, Xiao Jianfeng From zpincus at stanford.edu Thu Jun 14 21:36:35 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 14 Jun 2007 18:36:35 -0700 Subject: [SciPy-user] can't build on OS X from SVN In-Reply-To: <3d375d730706141802r4c80f565g28dd4c80fb5a48a1@mail.gmail.com> References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> <4671DEA4.3050102@gmail.com> <3d375d730706141802r4c80f565g28dd4c80fb5a48a1@mail.gmail.com> Message-ID: <7A2BF5DC-CC44-4BC8-83B9-A08D2F170B7A@stanford.edu> Attached is the log of a build made thusly:

cd scipy
rm -rf build
svn up
python setup.py config_fc --fcompiler=gnu95 build    [which fails]
python setup.py -v config_fc --fcompiler=gnu95 build >& build.log

That is, this isn't the (huge) log of a build-from-scratch, but just the log of the failing part. If you want, I can generate and send the build-from-scratch log too. Zach -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log.gz Type: application/x-gzip Size: 4584 bytes Desc: not available URL: -------------- next part -------------- On Jun 14, 2007, at 6:02 PM, Robert Kern wrote: > On 6/14/07, Robert Kern wrote: > >> Gah. Looks like more fallout from the merge. The >> get_flags_linker_so() methods >> which have all of this information don't seem to be called any more. > > Never mind. It does get called in a roundabout way. > > Please send the full output from the build. Use "python setup.py -v > config_fc ... etc." to turn on verbose mode. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From openopt at ukr.net Fri Jun 15 01:53:35 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 15 Jun 2007 08:53:35 +0300 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <4671EB88.3000606@gmail.com> References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net> <4671EB88.3000606@gmail.com> Message-ID: <4672295F.7060206@ukr.net> fdu.xiaojf at gmail.com wrote: > Hi dmitrey: > > dmitrey wrote: > > afaik scipy hasn't NLP solvers with equality constraints, as well as CVXOPT. > > I had seen somewhere a Python package (seems like binding to c-code) > > where rSQP had been implemented, it allows to have nonlin equality > > constraints. Try web search "python rsqp optimization solver" or "python > > sqp optimization solver" > > The equality constraints in my problem are linear equations. Does > this make things easier? > afaik no, at least for those free Python solvers that I know of. Maybe in some weeks a free (BSD) linearization-based NLP Python solver will be available in the openopt module, capable of handling both equality & inequality nonlinear constraints. HTH, D. > > > > for example visit > > http://trilinos.sandia.gov/packages/moocho/ > > and python binding to the latter > > http://trilinos.sandia.gov/packages/pytrilinos/ > > > > However, I didn't use the ones. > > Another one approach is use penalty coefficients (instead of Lagrange > > multipliers) with Naum Z. Shor r-alg implemented in scikits.openopt ralg > > solver (it doesn't contain c- or f-code, BSD lic.). It can handle > > gradient/subgradient provided by user and plot graphics output for NLP > > UC ralg solver.
> > Currently it's unconstrained, but it allows to handle very huge > > penalties rather well. > > > > svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt > > sudo python setup.py install > > > > from scikits.openopt import NLP > > help(NLP) > > > > however, it doesn't produce pyc-files in the site-packages directory while > installation, you'd better to do it by hands now. > > this is very preliminary version, only some months has been spent. > > > > > > WBR, D. > > Thanks a lot! > > Regards, > > Xiao Jianfeng > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From matthieu.brucher at gmail.com Fri Jun 15 02:06:38 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 15 Jun 2007 08:06:38 +0200 Subject: [SciPy-user] Debugging memory exhaustion in Python? In-Reply-To: References: Message-ID: Are you using a specific IDE ? Matthieu 2007/6/15, Greg Novak : > > I've written Python code to calculate a bunch of things for a bunch of > simulations. The code goes through about 5GB in 10-100MB chunks. The > problem is that Python eventually runs out of memory, consuming > (according to top) 3GB. I don't see why it should be doing this--as > far as I know I'm not hanging on to any references of anything. > > I've fooled around with the garbage collector, turning debugging > information on and trying to see if it will give me useful info about > who or what is still hanging around in memory. > > I've tried to delete all the user variables but even after this the > garbage collector can't free any more memory. > > What I need is du for python memory, just to get a sense of how/why > this is happening. Anyone have suggestions about how to get traction > on this? > > Thanks, > Greg > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dahl.joachim at gmail.com Fri Jun 15 02:31:07 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Fri, 15 Jun 2007 08:31:07 +0200 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <4671EB88.3000606@gmail.com> References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net> <4671EB88.3000606@gmail.com> Message-ID: <47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com> What kind of function are you minimizing? CVXOPT handles convex functions with convex inequality constraints and linear equality constraints. If your function is non-convex, couldn't you eliminate your linear equality constraints and try Newton's method for the unconstrained problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From domi at vision.ee.ethz.ch Fri Jun 15 02:39:10 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Fri, 15 Jun 2007 08:39:10 +0200 Subject: [SciPy-user] shape problem after flipud In-Reply-To: <4671B8B2.2040601@gmail.com> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> Message-ID: <4672340E.2060302@vision.ee.ethz.ch> Robert Kern wrote: > Dominik Szczerba wrote: >> Thank you for a very helpful explanation. 
Please see below: >> >> Robert Kern wrote: >>> Dominik Szczerba wrote: >>>> Hi, >>>> >>>> The following trivial codelet does not work as expected: >>>> >>>> ------------------------------- >>>> from scipy import * >>>> import copy >>>> >>>> shape = (256,256) >>>> data = zeros(256*256) >>>> data.shape = shape >>>> print 'old shape', data.shape >>>> print data >>>> >>>> data=flipud(data) >>>> data.shape=(256*256,) >>>> print 'new shape', data.shape >>>> ------------------------------- >>>> >>>> exiting with an uncomprehensive error: >>>> data.shape=(256*256,) >>>> AttributeError: incompatible shape for a non-contiguous array >>>> >>>> If 'flipud' is ommited, it works as expected. I tried via a deepcopy, >>>> the problem persists. Why should flipud invalidate 'reshapeability'? >>> Assigning to .shape only adjusts the strides. It does not change any of the >>> memory. It will only let you do that when the memory layout is consistent with >>> the desired shape. flipud() just gets a view on the original memory by using >>> different strides; the result is non-contiguous. The memory layout is no longer >>> consistent with the flattened view that you are requesting. Here is an example: >>> >>> In [25]: data = arange(4) >>> >>> This is the layout in memory for 'data' and (later) 'd2': >>> >>> In [26]: data >>> Out[26]: array([0, 1, 2, 3]) >>> >>> In [29]: data.shape = (2, 2) >>> >>> In [30]: data >>> Out[30]: >>> array([[0, 1], >>> [2, 3]]) >>> >>> In [31]: d2 = flipud(data) >>> >>> In [32]: d2 >>> Out[32]: >>> array([[2, 3], >>> [0, 1]]) >>> >>> Calling .ravel() will copy the array if it is non-contiguous and will show you >>> the memory layout that 'd2' is mimicking with its strides. >> quite a bit of a gotcha for a post-matlab user. deepcopy thing was >> already not pleasant to swallow. > > I don't recommend using deepcopy. Use reshape(). > >>>> What am I doing wrong? >>> You will want to use numpy.reshape() if you want the most foolproof and >>> idiomatic way to get a reshaped array. It will copy the array if necessary. >>> >>> In [57]: reshape(d2, (4,)) >>> Out[57]: array([2, 3, 0, 1]) >>> >> This actually did not work: >> >> shape = (256,256) >> data = zeros(256*256) >> data.shape = shape >> print 'old shape', data.shape >> print data >> data2 = flipud(data) >> data2.ravel() >> reshape(data2,(256*256,)) >> print 'new shape', data2.shape >> >> The shape is preserved! Even though I am fine with the previously given >> solution, I am still curious what is wrong here. > > It returns a (possibly new) object with the requested shape. It does not affect > the shape of the original array since it is not always possible to do that > safely. Assigning to .shape is the appropriate way to change the shape of an > existing array in-place if it is safe to do so. Two different ways to do two > different things. > OK, so data3 = reshape(data2,(256*256,)) fixes it at a least expense. Thanks a lot for clearing up the confusion. - Dominik >> BTW> Why (size,) and not (size,1)? > > Because they are different things. Not everything is a 2D array. > -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From ellisonbg.net at gmail.com Fri Jun 15 03:08:16 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 15 Jun 2007 01:08:16 -0600 Subject: [SciPy-user] Debugging memory exhaustion in Python? In-Reply-To: References: Message-ID: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> Some questions: 1) What version of python are you using? 
Python 2.4 and below has some issues with memory not being released back to the OS. 2) What data structures are you using to represent the data?
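In the meantime, the garbage collector can at least give you a crude "du" by type. A rough sketch (object_census is just a name I made up; note that gc only sees the container objects it tracks, so memory held in, e.g., numpy array buffers or big strings won't show up in these counts):

import gc

def object_census(top=20):
    # count the live objects gc knows about, grouped by type name
    counts = {}
    for obj in gc.get_objects():
        name = type(obj).__name__
        counts[name] = counts.get(name, 0) + 1
    pairs = [(n, name) for name, n in counts.items()]
    pairs.sort()
    pairs.reverse()
    for n, name in pairs[:top]:
        print n, name

# call it before and after each simulation chunk to see what accumulates
object_census()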
Brian On 6/14/07, Greg Novak wrote: > I've written Python code to calculate a bunch of things for a bunch of > simulations. The code goes through about 5GB in 10-100MB chunks. The > problem is that Python eventually runs out of memory, consuming > (according to top) 3GB. I don't see why it should be doing this--as > far as I know I'm not hanging on to any references to anything. > > I've fooled around with the garbage collector, turning debugging > information on and trying to see if it will give me useful info about > who or what is still hanging around in memory. > > I've tried to delete all the user variables but even after this the > garbage collector can't free any more memory. > > What I need is du for python memory, just to get a sense of how/why > this is happening. Anyone have suggestions about how to get traction > on this? > > Thanks, > Greg > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From S.Mientki at ru.nl Fri Jun 15 05:44:19 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 15 Jun 2007 11:44:19 +0200 Subject: [SciPy-user] Print number of significant digits ? Message-ID: <46725F73.3000501@ru.nl> hello, is there a simple way (some flag or something like that) by which you can limit the number of significant digits when using the plain print statement? so that you get an output like this:

>>> print pi
3.14

thanks, Stef Mientki From dahl.joachim at gmail.com Fri Jun 15 05:52:18 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Fri, 15 Jun 2007 11:52:18 +0200 Subject: [SciPy-user] Print number of significant digits ? In-Reply-To: <46725F73.3000501@ru.nl> References: <46725F73.3000501@ru.nl> Message-ID: <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com>

print "%3.2f" %pi

On 6/15/07, Stef Mientki wrote: > > hello, > > is there a simple way (some flag or something like that), > in which you can limit the number of significant digits, > when using the plain print statement ? > > so that you get an output like this: > > >>> print pi > 3.14 > > > thanks, > Stef Mientki > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Fri Jun 15 06:02:02 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 15 Jun 2007 12:02:02 +0200 Subject: [SciPy-user] Print number of significant digits ? In-Reply-To: <46725F73.3000501@ru.nl> References: <46725F73.3000501@ru.nl> Message-ID: <4672639A.5030006@gmx.net> Stef Mientki wrote: > hello, > > is there a simple way (some flag or something like that), > in which you can limit the number of significant digits, > when using the plain print statement ? > > so that you get an output like this: > > >>> print pi > 3.14 > > http://docs.python.org/tut/node9.html#SECTION009100000000000000000 http://docs.python.org/lib/typesseq-strings.html -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From S.Mientki at ru.nl Fri Jun 15 06:02:15 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 15 Jun 2007 12:02:15 +0200 Subject: [SciPy-user] Print number of significant digits ? In-Reply-To: <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com> References: <46725F73.3000501@ru.nl> <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com> Message-ID: <467263A7.60301@ru.nl> Joachim Dahl wrote: > print "%3.2f" %pi thanks Joachim, but I didn't phrase my question accurately enough: I not only want to print pi, I want to print anything. E.g. I now get

Model 1.0000000149 1.0 [ 1.00000001  1.00000001  3.00000001  4.00000001  5.00000001]

but I want

Model 1.00 1.0 [ 1.00 1.00 3.00 4.00 5.00]

and as I use "print" for quick and dirty intermediate results of everything, I don't want to spell out each format statement. cheers, Stef Mientki > > On 6/15/07, *Stef Mientki* > > wrote: > > hello, > > is there a simple way (some flag or something like that), > in which you can limit the number of significant digits, > when using the plain print statement ? > > so that you get an output like this: > > >>> print pi > 3.14 > > > thanks, > Stef Mientki > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From elcorto at gmx.net Fri Jun 15 06:17:05 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 15 Jun 2007 12:17:05 +0200 Subject: [SciPy-user] Print number of significant digits ? In-Reply-To: <467263A7.60301@ru.nl> References: <46725F73.3000501@ru.nl> <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com> <467263A7.60301@ru.nl> Message-ID: <46726721.6080108@gmx.net> Stef Mientki wrote: > > Joachim Dahl wrote: >> print "%3.2f" %pi > thanks Joachim, > > but I didn't phrase my question accurately enough, > I not only want to print pi, but I want to print anything > > e.g. I now get: > Model 1.0000000149 1.0 [ 1.00000001 1.00000001 3.00000001 > 4.00000001 5.00000001] > > but I want > Model 1.00 1.0 [ 1.00 1.00 3.00 4.00 5.00] > > and as I use "print" as a quick and dirty intermediate result for > everything, > I don't want to spell out each format statement. > Hmm I'm not aware of a built-in for doing this. A quick and dirty solution would be

a = array([1,2,3,pi])
fmt = "%3.2f "*len(a)
fmt = fmt.strip()

Then,

In [28]: a
Out[28]: array([ 1.        ,  2.        ,  3.        ,  3.14159265])

In [29]: fmt %tuple([aa for aa in a])
Out[29]: '1.00 2.00 3.00 3.14'

-- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From nwagner at iam.uni-stuttgart.de Fri Jun 15 06:39:04 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 15 Jun 2007 12:39:04 +0200 Subject: [SciPy-user] Print number of significant digits ?
In-Reply-To: <46726721.6080108@gmx.net> References: <46725F73.3000501@ru.nl> <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com> <467263A7.60301@ru.nl> <46726721.6080108@gmx.net> Message-ID: <46726C48.8090002@iam.uni-stuttgart.de> Steve Schmerler wrote: > Stef Mientki wrote: > >> Joachim Dahl wrote: > >>> print "%3.2f" %pi > >> thanks Joachim, > >> > >> but I didn't phrase my question accurately enough, > >> I not only want to print pi, but I want to print anything > >> > >> e.g. I now get: > >> Model 1.0000000149 1.0 [ 1.00000001 1.00000001 3.00000001 > >> 4.00000001 5.00000001] > >> > >> but I want > >> Model 1.00 1.0 [ 1.00 1.00 3.00 4.00 5.00] > >> > >> and as I use "print" as a quick and dirty intermediate result for > >> everything, > >> I don't want to spell out each format statement. > >> > > Hmm I'm not aware of a built-in for doing this. A quick and dirty solution would be > > a = array([1,2,3,pi]) > > fmt = "%3.2f "*len(a) > > fmt = fmt.strip() > > You can use set_printoptions

set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, suppress=None)
    Set options associated with printing.

    :Parameters:
        precision : int
            Number of digits of precision for floating point output (default 8).
        threshold : int
            Total number of array elements which trigger summarization rather than full repr (default 1000).
        edgeitems : int
            Number of array items in summary at beginning and end of each dimension (default 3).
        linewidth : int
            The number of characters per line for the purpose of inserting line breaks (default 75).
        suppress : bool
            Whether or not to suppress printing of small floating point values using scientific notation (default False).

>>> from scipy import *
>>> set_printoptions(precision=2)
>>> print pi
3.14159265359
>>> print array([pi])
[ 3.14]

Nils From elcorto at gmx.net Fri Jun 15 06:48:18 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 15 Jun 2007 12:48:18 +0200 Subject: [SciPy-user] Print number of significant digits ? In-Reply-To: <46726C48.8090002@iam.uni-stuttgart.de> References: <46725F73.3000501@ru.nl> <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com> <467263A7.60301@ru.nl> <46726721.6080108@gmx.net> <46726C48.8090002@iam.uni-stuttgart.de> Message-ID: <46726E72.50708@gmx.net> Nils Wagner wrote: >>> >>> but I want >>> Model 1.00 1.0 [ 1.00 1.00 3.00 4.00 5.00] >>> >>> >> Hmm I'm not aware of a built-in for doing this. A quick and dirty solution would be >> > You can use set_printoptions Ahh, ok. Good to know. -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From S.Mientki at ru.nl Fri Jun 15 07:35:50 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 15 Jun 2007 13:35:50 +0200 Subject: [SciPy-user] Print number of significant digits ? In-Reply-To: <46726C48.8090002@iam.uni-stuttgart.de> References: <46725F73.3000501@ru.nl> <47347f490706150252p28f1a444j6b4aece7d20b30af@mail.gmail.com> <467263A7.60301@ru.nl> <46726721.6080108@gmx.net> <46726C48.8090002@iam.uni-stuttgart.de> Message-ID: <46727996.1090504@ru.nl> > You can use set_printoptions > set_printoptions(precision=None, threshold=None, edgeitems=None, > linewidth=None, suppress=None) > > thanks Nils, that's almost Perfect !!
The only thing I have left to do is grouping individual elements as arrays, but that's just a small action compared to format strings. So instead of (where A and B are simple floats)

print A, B, m[:5]

I have to write

print array([A, B]), m[:5]

cheers, Stef Mientki From fdu.xiaojf at gmail.com Fri Jun 15 09:28:20 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Fri, 15 Jun 2007 21:28:20 +0800 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com> References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net> <4671EB88.3000606@gmail.com> <47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com> Message-ID: <467293F4.5080807@gmail.com> Joachim Dahl wrote: > What kind of function are you minimizing? > > CVXOPT handles convex functions with convex inequality constraints and > linear equality constraints. > > If your function is non-convex, couldn't you eliminate your linear > equality constraints and try Newton's method for the unconstrained problem? Can you give me some hints on how to eliminate the linear equality constraints? How can I judge whether a function is non-convex? The expression of my function is too complex to calculate the derivative. Thanks. Xiao Jianfeng From dahl.joachim at gmail.com Fri Jun 15 09:50:12 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Fri, 15 Jun 2007 15:50:12 +0200 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <467293F4.5080807@gmail.com> References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net> <4671EB88.3000606@gmail.com> <47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com> <467293F4.5080807@gmail.com> Message-ID: <47347f490706150650m57dd081h7c795709d22e801b@mail.gmail.com> If your function is too complicated to evaluate derivatives, chances are that it's not convex. But you're still going to need the first and second order derivatives for Newton's method... If you want to solve

min. f(x)  s.t.  A*x = b

you could first find a feasible point x0 satisfying A*x0 = b (e.g., the least-norm solution to A*x = b) and parametrize all feasible points as z = x0 + B*y, where B spans the nullspace of A, i.e., A*B = 0. Now you have an unconstrained problem min. f(x0 + B*y) over the new variable y.
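A rough sketch of that elimination (f, A and b are placeholders, minimize_with_eq is just an illustrative name, and it assumes A has full row rank with fewer constraints than variables):

from numpy import dot, zeros
from numpy.linalg import svd, lstsq
from scipy.optimize import fmin

def minimize_with_eq(f, A, b):
    x0 = lstsq(A, b)[0]          # least-norm solution of A*x = b
    u, s, vt = svd(A)
    B = vt[len(s):].T            # remaining rows of vt span null(A)
    g = lambda y: f(x0 + dot(B, y))
    y = fmin(g, zeros(B.shape[1]), disp=0)
    return x0 + dot(B, y)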
On 6/15/07, fdu.xiaojf at gmail.com wrote: > > Joachim Dahl wrote: > > What kind of function are you minimizing? > > > > CVXOPT handles convex functions with convex inequality constraints and > > linear equality constraints. > > > > If your function is non-convex, couldn't you eliminate your linear > > equality constraints and try Newton's method for the unconstrained > problem? > > Can you give me some hints on how to eliminate the linear equality > constraints ? > > How to judge if a function is non-convex? The expression of my function > is > too complex to calculate the derivative. > > Thanks. > > Xiao Jianfeng > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominique.orban at gmail.com Fri Jun 15 11:54:54 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 15 Jun 2007 11:54:54 -0400 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <46717EFC.4040904@gmail.com> References: <46717EFC.4040904@gmail.com> Message-ID: <4672B64E.1060007@gmail.com> Hello Xiao, fdu.xiaojf at gmail.com wrote: > Hi all, > > Sorry for the cross-posting. > > I'm trying to find the minimum of a multivariate function F(x1, x2, ..., > xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) = > 0, ..., Gm(...) = 0. > > The conventional way is to construct a dummy function Q, > > $$Q(X, \Lambda) = F(X) + \lambda_1 G1(X) + \lambda_2 G2(X) + ... + \lambda_m > Gm(X)$$ > > and then calculate the value of X and \Lambda when the gradient of function Q > equals 0. > > I think this is a routine work, so I want to know if there are available > functions in python(mainly scipy) to do this? Or maybe there is already > a better way in python? > > I have googled but haven't found helpful pages. > > Thanks a lot. > > Xiao Jianfeng I am working on a Python package for nonlinear optimization called NLPy: http://nlpy.sf.net NLPy doesn't feature an SQP method just yet. There is however a full-fledged method for problems with nonlinear constraints (equalities or inequalities) in the works. At this point, since I understand from another post that your equality constraints are in fact linear, you should be able to solve your problem in NLPy with minimal programming, but one way or another, the environment will need the first and second derivatives of your objective function. There are basically two ways to achieve that: 1) the hard way: write Python functions to implement those derivatives, 2) the easy way: model your problem using a modeling language such as AMPL (www.ampl.org), which will compute the derivatives for you using automatic differentiation. NLPy has hooks to AMPL to make things work seamlessly and transparently. However, AMPL is commercial software. There exists a size-limited "student version" that comes free of charge, though, and that will serve your purposes well if your problem isn't too large. If you can compute first, but not second derivatives, there is the possibility of approximating those using a limited-memory BFGS matrix. NLPy features an L-BFGS implementation in pure Python, save for the linesearch, which is in Fortran. Eliminating the linear constraints, as somebody suggested, is referred to as a "nullspace method" in optimization lingo. It entails computing a basis for the nullspace of your constraints, which can sometimes be just as time consuming as performing the minimization on the constrained problem. What is the size of your problem (how many constraints and variables)? I can help you offline with setting up your problem for use with NLPy. I hope this helps, Dominique From kte608 at mail.usask.ca Fri Jun 15 11:51:31 2007 From: kte608 at mail.usask.ca (Karl Edler) Date: Fri, 15 Jun 2007 11:51:31 -0400 Subject: [SciPy-user] Problems compiling scipy on SuSE 10.2 Message-ID: <4672B583.20707@mail.usask.ca> Hello, Recently I tried installing scipy from rpm packages on SuSE 10.2 (64bit) and failed. Now I have tried compiling scipy and have run into some problems. I compiled atlas successfully and copied its *.a files to /usr/lib64/atlas I ran "python setup.py build" in the scipy directory and the build failed since it couldn't find the -lgcc_s library.
I made a symbolic link from /lib64/libgcc_s.so.1 -> /lib64/libgcc_s.so and the build was able to proceed. Now the build fails with something about -fPIC which I don't understand. Here is the last bit of the output from "python setup.py build" (notice : relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used when making a shared object; recompile with -fPIC): customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building 'scipy.integrate._odepack' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC compile options: '-DATLAS_INFO="\"3.6.0\"" -I/usr/local/lib64/atlas -I/usr/lib64/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' /usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64-2.5/Lib/integrate/_odepackmodule.o -L/usr/local/lib64/atlas -L/usr/lib/python2.5/config -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lpython2.5 -lg2c -o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib64/atlas/libptf77blas.a(dscal.o): relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used when making a shared object; recompile with -fPIC /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: final link failed: Bad value /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib64/atlas/libptf77blas.a(dscal.o): relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used when making a shared object; recompile with -fPIC /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: final link failed: Bad value error: Command "/usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64-2.5/Lib/integrate/_odepackmodule.o -L/usr/local/lib64/atlas -L/usr/lib/python2.5/config -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lpython2.5 -lg2c -o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so" failed with exit status 1 ---------------------------------------------------------------------------------------------------------------- Does anyone know what this means or how to fix it? Thanks, Karl Edler From charles.yanaitis at rochester.edu Fri Jun 15 12:58:40 2007 From: charles.yanaitis at rochester.edu (Charlie Yanaitis) Date: Fri, 15 Jun 2007 16:58:40 +0000 (UTC) Subject: [SciPy-user] scipy fblas.so functions not found References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> <20070614193211.GG14029@avicenna.cc.columbia.edu> Message-ID: Lev Givon columbia.edu> writes: > Being that the binary atlas rpm in Fedora is built with gfortran > rather than g77, you should try using the former when you build scipy. Thanks again for your help! I tried gfortran and still got the "recompile with -fPIC" error. I'm going to set this aside and then revisit it. Maybe when I come back to try again, I'll notice something I may have missed. Thanks again and have a great weekend! 
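When scipy does not seem to pick up a rebuilt ATLAS, one quick sanity check
is to ask numpy's build machinery directly what it detects before building
scipy again. This is only an editorial sketch; what gets reported depends on
your site.cfg and library paths:

from numpy.distutils import system_info

# Print the ATLAS/LAPACK info dicts that numpy's distutils detects;
# an empty dict means the library was not found in the searched paths.
print system_info.get_info('atlas')
print system_info.get_info('lapack_opt')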
From charles.yanaitis at rochester.edu Fri Jun 15 13:03:12 2007
From: charles.yanaitis at rochester.edu (Charlie Yanaitis)
Date: Fri, 15 Jun 2007 17:03:12 +0000 (UTC)
Subject: [SciPy-user] Problems compiling scipy on SuSE 10.2
References: <4672B583.20707@mail.usask.ca>
Message-ID:

Karl Edler <kte608 at mail.usask.ca> writes:

> Now the build fails with something about -fPIC which I don't understand.
> Here is the last bit of the output from "python setup.py build" (notice:
> relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
> when making a shared object; recompile with -fPIC):

Karl, I posted yesterday and ran into the same problem with Redhat, the
"recompile with -fPIC". I did recompile the Atlas libraries with -fPIC but
scipy doesn't seem to pick up on it. I'm eager for a solution to this
problem and will be watching this thread. If I find out anything more, I'll
let you know.

Have a good weekend!

From charles.yanaitis at rochester.edu Fri Jun 15 13:07:11 2007
From: charles.yanaitis at rochester.edu (Charlie Yanaitis)
Date: Fri, 15 Jun 2007 17:07:11 +0000 (UTC)
Subject: [SciPy-user] scipy fblas.so functions not found
References: <20070614134122.GB21936@localhost.ee.columbia.edu>
	<20070614140423.GC21936@localhost.ee.columbia.edu>
	<20070614193211.GG14029@avicenna.cc.columbia.edu>
Message-ID:

> Lev Givon columbia.edu> writes:
>
> > Being that the binary atlas rpm in Fedora is built with gfortran
> > rather than g77, you should try using the former when you build scipy.

There's another guy, Karl Edler, who's having the same problem as me except
it's with SuSE 10.2. Check out the thread "Problems compiling scipy on SuSE
10.2".

Regards!

From nwagner at iam.uni-stuttgart.de Fri Jun 15 13:17:25 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 15 Jun 2007 19:17:25 +0200
Subject: [SciPy-user] Problems compiling scipy on SuSE 10.2
In-Reply-To:
References: <4672B583.20707@mail.usask.ca>
Message-ID:

On Fri, 15 Jun 2007 17:03:12 +0000 (UTC) Charlie Yanaitis wrote:
> Karl Edler <kte608 at mail.usask.ca> writes:
>
>> Now the build fails with something about -fPIC which I don't understand.
>> Here is the last bit of the output from "python setup.py build" (notice:
>> relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
>> when making a shared object; recompile with -fPIC):
>
> Karl, I posted yesterday and ran into the same problem with Redhat, the
> "recompile with -fPIC". I did recompile the Atlas libraries with -fPIC
> but scipy doesn't seem to pick up on it. I'm eager for a solution to this
> problem and will be watching this thread. If I find out anything more,
> I'll let you know.
>
> Have a good weekend!
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

This is what you can find in INSTALL.txt of atlas3.7.33.tar.bz2:

ATLAS natively builds to a static library (i.e. libs that usually end in
".a" under unix and ".lib" under windows). ATLAS always builds such a
library, but it can also optionally be requested to build a dynamic/shared
library (typically ending in .so for unix or .dll windows). In order to do
so, you must tell ATLAS up front to compile with the proper flags (the same
is true when building netlib's LAPACK, see the LAPACK note below).
Assuming you are using the gnu C and Fortran compilers, you can add the
following commands to your configure command:

  -Fa alg -fPIC

to force ATLAS to be built using position independent code (required for a
dynamic lib). If you use non-gnu compilers, you'll need to use -Fa to pass
the correct flag(s) to append to force position independent code for each
compiler (don't forget the gcc compiler used in the index files).

NOTE: Since gcc uses one less int register when compiling with this flag,
this could potentially impact performance of the architectural defaults,
but we have not seen it so far.

After your build is complete, you can cd to the OBJdir/lib directory, and
ask ATLAS to build the .so you want. If you want all libraries, including
the Fortran77 routines, the target choices are:

  shared   : Create shared versions of ATLAS's sequential libs
  ptshared : Create shared versions of ATLAS's threaded libs

If you want only the C routines (e.g. you don't have a fortran compiler):

  cshared   : Create shared versions of ATLAS's sequential libs
  cptshared : Create shared versions of ATLAS's threaded libs

Nils

From wbaxter at gmail.com Fri Jun 15 15:24:06 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Sat, 16 Jun 2007 04:24:06 +0900
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <4672B64E.1060007@gmail.com>
References: <46717EFC.4040904@gmail.com> <4672B64E.1060007@gmail.com>
Message-ID:

On 6/16/07, Dominique Orban wrote:
>
> Hello Xiao,
>
> If you can compute first, but not second derivatives, there is possibility
> of approximating those using a limited-memory BFGS matrix. NLPy features a
> L-BFGS implementation in pure Python, save for the linesearch, which is in
> Fortran.

scipy.optimize also has an L-BFGS implementation -- in pure C I believe. Is
there something yours offers that scipy's doesn't? (like constraints?)

--bb
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From c.j.lee at tnw.utwente.nl Fri Jun 15 15:26:25 2007
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Fri, 15 Jun 2007 21:26:25 +0200
Subject: [SciPy-user] 3D density calculation
Message-ID: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl>

Hi everyone,

I was hoping this list could point me in the direction of a more efficient
solution to a problem I have.

I have 4 vectors: x, y, z, and t that are about 1 million in length that
describe the positions of photons. As my simulation progresses it updates
the positions, so x, y, z, and t change by an unknown (and unknowable)
amount every update.

This worked very well for its original purpose, but now I need to
calculate the photon density change over time. Currently, after each
update, I iterate over time slices, x slices, and y slices and then make a
histogram of z which I then stitch together to create a density. However,
this becomes very slow as the photons spread out in space and time.

Does anyone know how to take such a large vector set and return a density
efficiently?

From novak at ucolick.org Fri Jun 15 15:48:17 2007
From: novak at ucolick.org (Greg Novak)
Date: Fri, 15 Jun 2007 12:48:17 -0700
Subject: [SciPy-user] Debugging memory exhaustion in Python?
In-Reply-To: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com>
References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com>
Message-ID:

Matthieu Brucher wrote:
> Are you using a specific IDE ?

Plain old IPython, but it happens when I run it in a bare python
interpreter as well.
On 6/15/07, Brian Granger wrote: > 1) What version of python are you using? Python 2.4 and below has > some issues with memory not being released back to the OS. 2.5 > 2) What data structures are you using to represent the data? Lots of arrays... It's mostly particle data, although I do flagrantly generate lots of temporaries. I'm not careful at all about that. I thought this could have speed implications, but I didn't realize it could have memory exhaustion implications, too. Since I'm only handling 10's of MB at a time, I also thought that memory fragmentation wouldn't be a severe problem. If I had GB arrays and started generating lots of temporary copies, I could see that that would lead to trouble... I found a program called heapy that's supposed to help with this. Anyone have any experience with it? Thanks for your thoughts, Greg From dominique.orban at gmail.com Fri Jun 15 15:51:22 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 15 Jun 2007 15:51:22 -0400 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: References: <46717EFC.4040904@gmail.com> <4672B64E.1060007@gmail.com> Message-ID: <4672EDBA.2000206@gmail.com> Bill Baxter wrote: > On 6/16/07, Dominique Orban wrote: >> >> Hello Xiao, >> >> If you can compute first, but not second derivatives, there is possibility >> of >> approximating those using a limited-memory BFGS matrix. NLPy features a >> L-BFGS >> implementation in pure Python, save for the linesearch, which is in >> Fortran. > > > scipy.optimize also has an L-BFGS implementation -- in pure C I believe. Is > there something yours offers that scipy's doesn't? (like constraints?) L-BFGS has been generalized to bound constraints only (resulting in the code L-BFGS-B), but the implementation is very different from that of L-BFGS. NLPy only contains L-BFGS for now and the implementation is standard. Since it is all in Python it is easy to read and modify (e.g, implement different iteration-dependent scalings.) I am not sure which one is interfaced in SciPy. Dominique From wbaxter at gmail.com Fri Jun 15 16:02:52 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 16 Jun 2007 05:02:52 +0900 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <4672EDBA.2000206@gmail.com> References: <46717EFC.4040904@gmail.com> <4672B64E.1060007@gmail.com> <4672EDBA.2000206@gmail.com> Message-ID: On 6/16/07, Dominique Orban wrote: > > Bill Baxter wrote: > > On 6/16/07, Dominique Orban wrote: > >> > > scipy.optimize also has an L-BFGS implementation -- in pure C I > believe. Is > > there something yours offers that scipy's doesn't? (like constraints?) > > L-BFGS has been generalized to bound constraints only (resulting in the > code > L-BFGS-B), but the implementation is very different from that of L-BFGS. > NLPy > only contains L-BFGS for now and the implementation is standard. Since it > is all > in Python it is easy to read and modify (e.g, implement different > iteration-dependent scalings.) I am not sure which one is interfaced in > SciPy. Yes, it's L-BFGS-B in Scipy, actually. I was referring to constraints other than simple bounds. --bb -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From dominique.orban at gmail.com Fri Jun 15 16:03:40 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Fri, 15 Jun 2007 16:03:40 -0400
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To:
References: <46717EFC.4040904@gmail.com> <4672B64E.1060007@gmail.com>
	<4672EDBA.2000206@gmail.com>
Message-ID: <4672F09C.5000708@gmail.com>

Bill Baxter wrote:
> On 6/16/07, Dominique Orban wrote:
>>
>> Bill Baxter wrote:
>> > On 6/16/07, Dominique Orban wrote:
>> >>
>> > scipy.optimize also has an L-BFGS implementation -- in pure C I
>> believe. Is
>> > there something yours offers that scipy's doesn't? (like constraints?)
>>
>> L-BFGS has been generalized to bound constraints only (resulting in the
>> code L-BFGS-B), but the implementation is very different from that of
>> L-BFGS. NLPy only contains L-BFGS for now and the implementation is
>> standard. Since it is all in Python it is easy to read and modify (e.g,
>> implement different iteration-dependent scalings.) I am not sure which
>> one is interfaced in SciPy.
>
> Yes, it's L-BFGS-B in Scipy, actually. I was referring to constraints
> other than simple bounds.

Thanks, I will look into Scipy's interface. One of the ideas behind NLPy is
that only the fundamental building blocks for optimization are interfaced
to state-of-the-art code in C or Fortran (e.g., some factorization
routines, complex linesearches, ...) while I try to keep the algorithms
themselves in Python. I haven't gotten around to implementing L-BFGS-B yet.
There are also other ways to solve bound-constrained problems and still use
an L-BFGS approximation to the second derivatives.

Dominique

From cookedm at physics.mcmaster.ca Fri Jun 15 16:27:09 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Fri, 15 Jun 2007 16:27:09 -0400
Subject: [SciPy-user] Debugging memory exhaustion in Python?
In-Reply-To:
References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com>
Message-ID:

On Jun 15, 2007, at 15:48 , Greg Novak wrote:

> Matthieu Brucher wrote:
>> Are you using a specific IDE ?
>
> Plain old IPython, but it happens when I run it in a bare python
> interpreter as well.
>
> On 6/15/07, Brian Granger wrote:
>> 1) What version of python are you using? Python 2.4 and below has
>> some issues with memory not being released back to the OS.
>
> 2.5
>
>> 2) What data structures are you using to represent the data?
>
> Lots of arrays... It's mostly particle data, although I do flagrantly
> generate lots of temporaries. I'm not careful at all about that. I
> thought this could have speed implications, but I didn't realize it
> could have memory exhaustion implications, too. Since I'm only
> handling 10's of MB at a time, I also thought that memory
> fragmentation wouldn't be a severe problem. If I had GB arrays and
> started generating lots of temporary copies, I could see that that
> would lead to trouble...

When this happens to me it's either because I screwed up handling the
reference counts in a C extension, or I'm keeping old copies of arrays in
a cache or a log object.

-- 
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From matthieu.brucher at gmail.com Fri Jun 15 16:52:42 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 15 Jun 2007 22:52:42 +0200
Subject: [SciPy-user] Debugging memory exhaustion in Python?
In-Reply-To: References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> Message-ID: 2007/6/15, Greg Novak : > > Matthieu Brucher wrote: > > Are you using a specific IDE ? > > Plain old IPython, but it happens when I run it in a bare python > interpreter as well. I asked this because IPython can keep some extra references, so memory is not freed, but if that happens with the simple interpreter :| Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Jun 15 17:43:02 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 15 Jun 2007 23:43:02 +0200 Subject: [SciPy-user] numpy r3871 broken Message-ID: gnu: no Fortran 90 compiler found Traceback (most recent call last): File "setup.py", line 90, in setup_package() File "setup.py", line 83, in setup_package configuration=configuration ) File "/home/nwagner/svn/numpy/numpy/distutils/core.py", line 176, in setup return old_setup(**new_attr) File "/usr/lib64/python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib64/python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/usr/lib64/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/home/nwagner/svn/numpy/numpy/distutils/command/install.py", line 16, in run r = old_install.run(self) File "/usr/lib64/python2.5/distutils/command/install.py", line 511, in run self.run_command('build') File "/usr/lib64/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/lib64/python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib64/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/home/nwagner/svn/numpy/numpy/distutils/command/build_src.py", line 130, in run self.build_sources() File "/home/nwagner/svn/numpy/numpy/distutils/command/build_src.py", line 147, in build_sources self.build_extension_sources(ext) File "/home/nwagner/svn/numpy/numpy/distutils/command/build_src.py", line 250, in build_extension_sources sources = self.generate_sources(sources, ext) File "/home/nwagner/svn/numpy/numpy/distutils/command/build_src.py", line 307, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 51, in generate_config_h library_dirs = default_lib_dirs) File "/usr/lib64/python2.5/distutils/command/config.py", line 278, in try_run self._check_compiler() File "/home/nwagner/svn/numpy/numpy/distutils/command/config.py", line 30, in _check_compiler dry_run=self.dry_run, force=1) File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 787, in new_fcompiler compiler = get_default_fcompiler(plat, requiref90=requiref90) File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 771, in get_default_fcompiler requiref90=requiref90) File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 723, in _find_existing_fcompiler c.customize(dist) File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 475, in customize get_flags('opt', oflags) File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 466, in get_flags flags.extend(getattr(self.flag_vars, tag)) File "/home/nwagner/svn/numpy/numpy/distutils/environment.py", line 37, in 
__getattr__ return self._get_var(name, conf_desc) File "/home/nwagner/svn/numpy/numpy/distutils/environment.py", line 51, in _get_var var = self._hook_handler(name, hook) File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 668, in _environment_hook return hook() File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/gnu.py", line 166, in get_flags_opt if self.get_version()<='3.3.3': File "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 408, in get_version return CCompiler.get_version(force=force, ok_status=ok_status) TypeError: unbound method CCompiler_get_version() must be called with CCompiler instance as first argument (got nothing instead) From fdu.xiaojf at gmail.com Fri Jun 15 22:35:45 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Sat, 16 Jun 2007 10:35:45 +0800 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <46717EFC.4040904@gmail.com> References: <46717EFC.4040904@gmail.com> Message-ID: <46734C81.2050400@gmail.com> Hi all, So much thanks for so many kind people. fdu.xiaojf at gmail.com wrote: > Hi all, > > Sorry for the cross-posting. > > I'm trying to find the minimum of a multivariate function F(x1, x2, ..., > xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) = > 0, ..., Gm(...) = 0. I'm sorry that I haven't fully stated my question correctly. There are still inequality constraints for my problem. All the variables(x1, x2, ..., xn) should be equal or bigger than 0. > > The conventional way is to construct a dummy function Q, > > $$Q(X, \Lambda) = F(X) + \lambda_1 G1(X) + \lambda_2 G2(X) + ... + \lambda_m > Gm(X)$$ > > and then calculate the value of X and \Lambda when the gradient of function Q > equals 0. > > I think this is a routine work, so I want to know if there are available > functions in python(mainly scipy) to do this? Or maybe there is already > a better way in python? > > I have googled but haven't found helpful pages. > > Thanks a lot. > > Xiao Jianfeng Regards, Xiao Jianfeng From david at ar.media.kyoto-u.ac.jp Sat Jun 16 05:09:00 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 16 Jun 2007 18:09:00 +0900 Subject: [SciPy-user] Problems compiling scipy on SuSE 10.2 In-Reply-To: <4672B583.20707@mail.usask.ca> References: <4672B583.20707@mail.usask.ca> Message-ID: <4673A8AC.1070705@ar.media.kyoto-u.ac.jp> Karl Edler wrote: > Hello, > > Recently I tried installing scipy from rpm packages on SuSE 10.2 (64bit) > and failed. Now I have tried compiling scipy and have run into some > problems. I solved yesterday the problem preventing the build of scipy on 64 bits arch on opensuse. Now, the rpm are available in ashigabou; not having an install of opensuse 64 bits available, I cannot say if they work correctly. > > I compiled atlas successfully and copied its *.a files to /usr/lib64/atlas > > I ran "python setup.py build" in the scipy directory and the build > failed since it couldn't find the -lgcc_s library. I made a symbolic > link from /lib64/libgcc_s.so.1 -> /lib64/libgcc_s.so and the build was > able to proceed. > > Now the build fails with something about -fPIC which I don't understand. 
> Here is the last bit of the output from "python setup.py build" (notice:
> relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
> when making a shared object; recompile with -fPIC):

If you are not familiar with compiling big software packages by yourself,
atlas + blas/lapack is certainly one of the most painful experiences you
can get; those are really difficult beasts to compile right, and one wrong
step somewhere can make the process fail much later. Now, since you are
willing to do it, here is the explanation.

Assuming you know the difference between a static (.a) and a shared (.so)
library, you need to know that they need to be compiled differently: this
is the -fPIC thing. That is, if you build an object file without the -fPIC
option, say foo.o, the object file will *not* be usable to build any shared
library (you cannot use it to build a .so; for a .a, it does not matter;
this is an oversimplification, but I hope you will bear with it for the
current issue). The only way is to recompile it. If you don't understand
the above, here is a good explanation of how things generally work on unix:

http://users.actcom.co.il/~choo/lupg/tutorials/libraries/unix-c-libraries.html

To go back to our problem: you built a static library (that's what atlas
builds by default), and you want to use it to build a python module (which
is a shared library on linux). That does not work. Actually, it sometimes
works because, to complicate the matter, -fPIC is not really necessary on
x86, but it is necessary on x86_64... This makes me realize that I don't
understand how a static atlas can work on this platform (the only way I see
is to statically link atlas into the python extension, something which
distutils does not seem to know how to do, if I believe the log you pasted
here).

Concretely: you need to rebuild atlas, which unfortunately means rebuilding
lapack too, since atlas does not implement the full lapack. Here are the
steps:

Build lapack:
 - in make.inc, add -fPIC to OPTS and NOOPTS
 - in make.inc, set LAPACKLIB to liblapack_pic.a
 - in make.inc, set FORTRAN and LOADER to g77.
Then rebuild the whole lapack (really be sure to rebuild everything).

Build Atlas:
 - extract atlas (a recent version)
 - create a directory like MyObj inside, and go inside
 - use the following options for configure:
     ../configure --with-netlib-lapack=LAPACK -C if g77 -Fa al -fPIC
   LAPACK should be replaced by the full path of the liblapack_pic.a
   compiled above.
 - build atlas

Afterwards, proceed as before. Note that you are still not using atlas as a
shared library, but this should make things work for scipy anyway.
According to > rex, using my script garnumpy made life easier for openSUSE: > > http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy-0.2.1.tbz2 > > This will download, configure and build a self contained scipy and all > its dependencies (netlib blas/lapack by default, but you can use atlas > instead by uncommenting the necessary lines in the gar.conf.mk file, as > mentionned in the README). The url is: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.2.1.tbz2 As a test of the default settings, I created a newuser, downloaded the above file into newuser/, and: tar -xjvf garnumpy-0.2.1.tbz2 cd garnumpy-0.2.1 cd platform/scipy make install It completed the build & lengthy test process w/o errors. I was very pleasantly surprised that the default settings resulted in a successful build. What a change from the painful manual process! David, thank you very much for this contribution to the community. It promises to be a very useful tool, especially for new users. One small change needs to be made if you run Python2.5. In startgarnumpy.sh, the line PYTHONPATH=$GARNUMPYDIR/lib/python2.4/site-packages:$PYTHONPATH needs to be changed to PYTHONPATH=$GARNUMPYDIR/lib/python2.5/site-packages:$PYTHONPATH Then source startgarnumpy.sh The system is a Core 2 Duo running 32-bit openSUSE 10.2 & Python2.5. Next, I plan to experiment changing the defaults to match the hardware better. -rex -- Time flies like wind. Fruit flies like pears. From william.ratcliff at gmail.com Sat Jun 16 13:49:51 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Sat, 16 Jun 2007 13:49:51 -0400 Subject: [SciPy-user] svn checkout question Message-ID: <827183970706161049j199c8b5dre6a134ca420109e1@mail.gmail.com> I work under windows xp and recently was using tortoise svn to try to check out scipy with the hopes of building it from source. I was prompted for a user name and password. Is this supposed to happen? Also, do I need to do anything special to get files from the sandbox? Thanks, William -------------- next part -------------- An HTML attachment was scrubbed... URL: From novak at ucolick.org Sat Jun 16 14:52:28 2007 From: novak at ucolick.org (Greg Novak) Date: Sat, 16 Jun 2007 11:52:28 -0700 Subject: [SciPy-user] Debugging memory exhaustion in Python? In-Reply-To: References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> Message-ID: Is there a simple way to get IPython to release its references? I'm interested in that, too, independently. Is it as simple as clearing out the In[] and Out[] lists? Greg On 6/15/07, Matthieu Brucher wrote: > > > 2007/6/15, Greg Novak : > > Matthieu Brucher wrote: > > > Are you using a specific IDE ? > > > > Plain old IPython, but it happens when I run it in a bare python > > interpreter as well. > > I asked this because IPython can keep some extra references, so memory is > not freed, but if that happens with the simple interpreter :| > > Matthieu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Sat Jun 16 16:31:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 16 Jun 2007 15:31:07 -0500 Subject: [SciPy-user] Debugging memory exhaustion in Python? In-Reply-To: References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> Message-ID: <4674488B.3060903@gmail.com> Greg Novak wrote: > Is there a simple way to get IPython to release its references? 
I'm interested in that, too, independently. Is it as simple as clearing out
the In[] and Out[] lists?

Greg

On 6/15/07, Matthieu Brucher wrote:
>
> 2007/6/15, Greg Novak :
> > Matthieu Brucher wrote:
> > > Are you using a specific IDE ?
> >
> > Plain old IPython, but it happens when I run it in a bare python
> > interpreter as well.
>
> I asked this because IPython can keep some extra references, so memory is
> not freed, but if that happens with the simple interpreter :|
>
> Matthieu
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Sat Jun 16 16:31:07 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 16 Jun 2007 15:31:07 -0500
Subject: [SciPy-user] Debugging memory exhaustion in Python?
In-Reply-To:
References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com>
Message-ID: <4674488B.3060903@gmail.com>

Greg Novak wrote:
> Is there a simple way to get IPython to release its references? I'm
> interested in that, too, independently. Is it as simple as clearing
> out the In[] and Out[] lists?

There are also variables _NN which correspond to Out[NN] that need to be
deleted. There are also _, __, and ___, but those will get rotated shortly.
Also, don't worry about In; it's just the strings you typed, nothing too
memory consuming.

Here's a function that you can use:

import bisect

def clearout(__IP, upto=None):
    """ Clear the IPython Out cache, possibly only up to a given entry.
    """
    ns = __IP.ns_table['user']
    Out = ns.get('Out', None)
    if Out is not None:
        keys = sorted(Out)
        if upto is not None:
            keys = keys[:bisect.bisect_right(keys, upto)]
        for key in keys:
            del Out[key]
    else:
        # No cache.
        # Still might have the _NN variables sitting around.
        keys = []
        for var in ns:
            if var.startswith('_'):
                try:
                    nn = int(var[1:])
                except ValueError:
                    continue
                # Include nn == upto itself, matching bisect_right above;
                # with upto=None, clear all of them.
                if upto is None or nn <= upto:
                    keys.append(nn)
        for key in keys:
            _key = '_%s' % key
            del ns[_key]
    print 'Remove Out entries: %s' % keys

In [1]: from clearout import clearout

In [2]: 2
Out[2]: 2

In [3]: 3
Out[3]: 3

In [4]: 4
Out[4]: 4

In [5]: 5
Out[5]: 5

In [6]: 6
Out[6]: 6

In [7]: 7
Out[7]: 7

In [8]: 8
Out[8]: 8

In [9]: 9
Out[9]: 9

In [10]: 10
Out[10]: 10

In [11]: print Out
{2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10}

In [12]: clearout(__IP, upto=6)
Remove Out entries: [2, 3, 4, 5, 6]

In [13]: print Out
{7: 7, 8: 8, 9: 9, 10: 10}

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From cookedm at physics.mcmaster.ca Sat Jun 16 19:47:58 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Sat, 16 Jun 2007 19:47:58 -0400
Subject: [SciPy-user] numpy r3871 broken
In-Reply-To:
References:
Message-ID: <20070616234758.GA5893@arbutus.physics.mcmaster.ca>

On Fri, Jun 15, 2007 at 11:43:02PM +0200, Nils Wagner wrote:
> gnu: no Fortran 90 compiler found
> Traceback (most recent call last):
>   File "setup.py", line 90, in
>   ...
> "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/gnu.py",
> line 166, in get_flags_opt
>   if self.get_version()<='3.3.3':
>   File
> "/home/nwagner/svn/numpy/numpy/distutils/fcompiler/__init__.py",
> line 408, in get_version
>   return CCompiler.get_version(force=force,
> ok_status=ok_status)
> TypeError: unbound method CCompiler_get_version() must be
> called with CCompiler instance as first argument (got
> nothing instead)

Sorry, my fault. Pearu's fixed it. I knew I should've done a bit more
checking before going to the pub...

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From david at ar.media.kyoto-u.ac.jp Sun Jun 17 00:09:40 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 17 Jun 2007 13:09:40 +0900
Subject: [SciPy-user] svn checkout question
In-Reply-To: <827183970706161049j199c8b5dre6a134ca420109e1@mail.gmail.com>
References: <827183970706161049j199c8b5dre6a134ca420109e1@mail.gmail.com>
Message-ID: <4674B404.2060403@ar.media.kyoto-u.ac.jp>

william ratcliff wrote:
> I work under windows xp and recently was using tortoise svn to try to
> check out scipy with the hopes of building it from source. I was
> prompted for a user name and password. Is this supposed to happen?

No. This may happen because you are behind a proxy (you should tell that to
tortoiseSVN, I guess).
> Also, do I need to do anything special to get files from the sandbox?

No, they are in the same repository. The only difference is that the
packages in the sandbox are not built by default.

David

From david at ar.media.kyoto-u.ac.jp Sun Jun 17 00:21:50 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 17 Jun 2007 13:21:50 +0900
Subject: [SciPy-user] Problems compiling scipy on SuSE 10.2
In-Reply-To: <20070616161804.GB4657@x2.nosyntax.com>
References: <4672B583.20707@mail.usask.ca>
	<4673A8AC.1070705@ar.media.kyoto-u.ac.jp>
	<20070616161804.GB4657@x2.nosyntax.com>
Message-ID: <4674B6DE.3000504@ar.media.kyoto-u.ac.jp>

rex wrote:
> David Cournapeau [2007-06-16 07:15]:
>> After, do as before. Note that you still does not use atlas as shared
>> library, but this should make things work anyway for scipy. According to
>> rex, using my script garnumpy made life easier for openSUSE:
>>
>> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy-0.2.1.tbz2
>>
>> This will download, configure and build a self contained scipy and all
>> its dependencies (netlib blas/lapack by default, but you can use atlas
>> instead by uncommenting the necessary lines in the gar.conf.mk file, as
>> mentionned in the README).
>
> The url is:
>
> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.2.1.tbz2
>
> As a test of the default settings, I created a newuser, downloaded the
> above file into newuser/, and:
>
> tar -xjvf garnumpy-0.2.1.tbz2
> cd garnumpy-0.2.1
> cd platform/scipy
> make install
>
> It completed the build & lengthy test process w/o errors. I was very
> pleasantly surprised that the default settings resulted in a successful
> build. What a change from the painful manual process! David, thank you
> very much for this contribution to the community. It promises to be a
> very useful tool, especially for new users.
>
> One small change needs to be made if you run Python2.5.
>
> In startgarnumpy.sh, the line
>
> PYTHONPATH=$GARNUMPYDIR/lib/python2.4/site-packages:$PYTHONPATH
>
> needs to be changed to
>
> PYTHONPATH=$GARNUMPYDIR/lib/python2.5/site-packages:$PYTHONPATH
>
> Then
>
> source startgarnumpy.sh
>
> The system is a Core 2 Duo running 32-bit openSUSE 10.2 &
> Python2.5. Next, I plan to experiment changing the defaults to match the
> hardware better.

The above version does not work on 64-bit archs, unfortunately. The changes
to make it work on 64 bits (tested on ubuntu 64) are available here:

http://www.ar.media.kyoto-u.ac.jp/members/david/archives/garnumpy/garnumpy-0.3.tbz2

David

From emilia12 at mail.bg Sun Jun 17 05:37:08 2007
From: emilia12 at mail.bg (emilia12 at mail.bg)
Date: Sun, 17 Jun 2007 12:37:08 +0300
Subject: [SciPy-user] Problems with importing scipy
In-Reply-To: <4674B6DE.3000504@ar.media.kyoto-u.ac.jp>
References: <4672B583.20707@mail.usask.ca>
	<4673A8AC.1070705@ar.media.kyoto-u.ac.jp>
	<20070616161804.GB4657@x2.nosyntax.com>
	<4674B6DE.3000504@ar.media.kyoto-u.ac.jp>
Message-ID: <1182073028.5b4b4811f51d9@mail.bg>

hi list,

i have a problem - when i import scipy, python crashes:

>>> import scipy
Segmentation fault: 11

python is "Python 2.4.4 (#2, Mar 28 2007, 22:22:52) [GCC 3.4.6 [FreeBSD]
20060305] on freebsd6" and scipy is "py24-scipy-0.3.2_2 Scientific tools
for Python"

cheers
e.
From fdu.xiaojf at gmail.com Sun Jun 17 10:41:38 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Sun, 17 Jun 2007 22:41:38 +0800
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <46734C81.2050400@gmail.com>
References: <46717EFC.4040904@gmail.com> <46734C81.2050400@gmail.com>
Message-ID: <46754822.5090902@gmail.com>

Hi all,

First, I have to say thanks to so many kind people.

fdu.xiaojf at gmail.com wrote:
> Hi all,
>
> So much thanks for so many kind people.
>
> fdu.xiaojf at gmail.com wrote:
>> Hi all,
>>
>> Sorry for the cross-posting.
>>
>> I'm trying to find the minimum of a multivariate function F(x1, x2, ...,
>> xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) =
>> 0, ..., Gm(...) = 0.
>
> I'm sorry that I haven't fully stated my question correctly. There are
> still inequality constraints for my problem. All the variables(x1, x2, ...,
> xn) should be equal or bigger than 0.
>
>> The conventional way is to construct a dummy function Q,
>>
>> $$Q(X, \Lambda) = F(X) + \lambda_1 G1(X) + \lambda_2 G2(X) + ... + \lambda_m
>> Gm(X)$$
>>
>> and then calculate the value of X and \Lambda when the gradient of function Q
>> equals 0.
>>
>> I think this is a routine work, so I want to know if there are available
>> functions in python(mainly scipy) to do this? Or maybe there is already
>> a better way in python?
>>
>> I have googled but haven't found helpful pages.
>>
>> Thanks a lot.

My last email was composed in a hurry, so let me describe my problem in
detail to make it clear.

I have to minimize a multivariate function F(x1, x2, ..., xn) subject to
multiple inequality constraints and equality constraints.

The number of variables (x1, x2, ..., xn) is 10 ~ 20, and the number of
inequality constraints is the same as the number of variables (all
variables should be no less than 0). The number of equality constraints is
less than 8 (mainly 4 or 5), and the equality constraints are linear.

My function is too complicated to get an expression for the derivatives
easily, so according to Joachim Dahl (dahl.joachim at gmail.com)'s post, it
is probably non-convex. But I think it's possible to calculate the first
derivatives numerically.

I have tried scipy.optimize.fmin_l_bfgs_b(), which can handle bound
constraints but apparently cannot handle equality constraints.

Mr. Markus Amann has kindly sent me a script written by him, which can
handle equality constraints and is easy to use. The method used by Markus
involves the calculation of a Jacobian, which I don't understand. (Sorry
for my ignorance in this field. My major is chemistry, and I'm trying to
learn some numerical optimization.) However, it seems that the script
cannot handle inequality constraints (sorry if I am wrong).

I hope my bad English has described my problem clearly.

Any help will be greatly appreciated.

Best regards,

Xiao Jianfeng
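Neither of the approaches named above is spelled out with code in this
thread, so as a rough illustration only: one can keep fmin_l_bfgs_b for the
x_i >= 0 bounds and fold the linear equalities into a quadratic penalty
term. The objective F, the constraint data Aeq and beq, and the penalty
weight mu below are made-up placeholders, and a penalty enforces Aeq*x = beq
only approximately unless mu is driven up:

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

n = 10
Aeq = np.vstack([np.ones(n), np.linspace(0., 1., n)])   # made-up equalities
beq = np.array([1., 0.4])

def F(x):
    # placeholder objective; substitute the real function here
    return np.sum(x * np.log(x + 1e-10))

mu = 1e4                                 # penalty weight; increase to tighten

def penalized(x):
    r = np.dot(Aeq, x) - beq
    return F(x) + mu * np.dot(r, r)      # F plus quadratic penalty

x0 = np.ones(n) / n
bounds = [(0., None)] * n                # the x_i >= 0 constraints
x, fval, info = fmin_l_bfgs_b(penalized, x0, approx_grad=True, bounds=bounds)
print x
print np.dot(Aeq, x) - beq               # equality residual; shrinks as mu grows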
From fdu.xiaojf at gmail.com Sun Jun 17 11:07:42 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Sun, 17 Jun 2007 23:07:42 +0800
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <47347f490706150650m57dd081h7c795709d22e801b@mail.gmail.com>
References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net>
	<4671EB88.3000606@gmail.com>
	<47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com>
	<467293F4.5080807@gmail.com>
	<47347f490706150650m57dd081h7c795709d22e801b@mail.gmail.com>
Message-ID: <46754E3E.3000503@gmail.com>

Hi Joachim,

Joachim Dahl wrote:
> If your function is too complicated to evaluate derivatives, chances
> are that
> it's not convex. But you're still going to need the first and second
> order derivatives
> for Newton's method...
>
> If you want to solve
>
> min. f(x)
> s.t. A*x = b
>
> you could first find a feasible point x0 satisfying A*x0 = b (e.g., the
> least-norm solution to A*x = b) and parametrize all feasible points as
>
> z = x0+ B*y
>
> where B spans the nullspace of A, i.e., A*B = 0. Now you have an
> unconstrained
> problem
>
> min. f( x0 + B*y )
>
> over the new variable y.

I still don't quite understand how to eliminate linear equality
constraints. Could you please point me to some web resources that describe
this method in detail? Or what keywords should I use if I want to google
for it on the web?

Thanks.

Xiao Jianfeng

From bernhard.voigt at gmail.com Sun Jun 17 11:35:06 2007
From: bernhard.voigt at gmail.com (Bernhard Voigt)
Date: Sun, 17 Jun 2007 17:35:06 +0200
Subject: [SciPy-user] 3D density calculation
In-Reply-To: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl>
References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl>
Message-ID: <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com>

Hi Chris!

You could try a grid of unit cells that cover your phase space (x,y,z,t).
Count the number of photons per unit cell of your initial configuration and
track photons leaving and entering a particular cell. A dictionary with a
tuple of x,y,z,t coordinates obtained from integer division of the x,y,z,t
coordinates could serve as keys.

Example for 2-D:

from numpy import *

# phase space in x,y
x = arange(-100,100.1,.1)
y = arange(-100,100.1,.1)

# cell dimension in both dimensions the same
GRID_WIDTH = 7.5

# computes the grid key from x,y coordinates
def gridKey(x, y):
    '''return a tuple of x,y integer-divided by GRID_WIDTH'''
    return (int(x // GRID_WIDTH), int(y // GRID_WIDTH))

# set up your grid dictionary
gridLowX, gridHighX = gridKey(min(x), max(x))
gridLowY, gridHighY = gridKey(min(y), max(y))
keys = [(i, j) for i in xrange(gridLowX, gridHighX + 1) \
               for j in xrange(gridLowY, gridHighY + 1)]
grid = dict().fromkeys(keys, 0)

# random photons
photons = random.uniform(-100., 100., (100000, 2))

# count photons in each grid cell
for p in photons:
    grid[gridKey(*p)] += 1

#########################################
# in your simulation you have to keep track of where your photons
# are going to...
# (the code below won't run, it's just an example)
#########################################
oldKey = gridKey(photon)
propagate(photon)  # changes x,y coordinates of photon
newKey = gridKey(photon)
if oldKey != newKey:
    grid[oldKey] -= 1
    grid[newKey] += 1

I hope this helps! Bernhard

On 6/15/07, Chris Lee <c.j.lee at tnw.utwente.nl> wrote:
>
> Hi everyone,
>
> I was hoping this list could point me in the direction of a more
> efficient solution to a problem I have.
> > I have 4 vectors: x, y, z, and t that are about 1 million in length > that describe the positions of photons. As my simulation progresses > it updates the positions so x, y, z, and t change by an unknown (and > unknowable) amount every update. > > This worked very well for its original purpose but now I need to > calculate the photon density change over time. Currently after each > update, I iterate over time slices, x slices, and y slices and then > make an histogram of z which I then stitch together to create a > density. However, this becomes very slow as the photons spread out > in space and time. > > Does anyone know how to take such a large vector set and return a > density efficiently? > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dahl.joachim at gmail.com Sun Jun 17 12:19:22 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Sun, 17 Jun 2007 18:19:22 +0200 Subject: [SciPy-user] lagrange multipliers in python In-Reply-To: <46754E3E.3000503@gmail.com> References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net> <4671EB88.3000606@gmail.com> <47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com> <467293F4.5080807@gmail.com> <47347f490706150650m57dd081h7c795709d22e801b@mail.gmail.com> <46754E3E.3000503@gmail.com> Message-ID: <47347f490706170919o43245eeao152f42541055a506@mail.gmail.com> Since your problem includes inequality constraints, the simple method I suggested doesn't apply; it only works for problems involving only linear equality constraints. To use the method, you need to identify the nullspace of your constraint matrix, e.g., using a singular value decomposition. On 6/17/07, fdu.xiaojf at gmail.com wrote: > > Hi Joachim, > > Joachim Dahl wrote: > > If your function is too complicated to evaluate derivatives, chances > > are that > > it's not convex. But you're still going to need the first and second > > order derivatives > > for Newton's method... > > > > If you want to solve > > > > min. f(x) > > s.t. A*x = b > > > > you could first find a feasible point x0 satisfying A*x0 = b (e.g., the > > least-norm solution to A*x = b) and parametrize all feasible points as > > > > z = x0+ B*y > > > > where B spans the nullspace of A, i.e., A*B = 0. Now you have an > > unconstrained > > problem > > > > min. f( x0 + B*y ) > > > > over the new variable y. > > > > I still don't quite understand how to liminate linear equality > constraints. Could you please point me to some web resources that > describe this method in detail? Or what key words I should use if I want > to google on the web? > > Thanks. > > Xiao Jianfeng > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From aisaac at american.edu Sun Jun 17 13:41:23 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Sun, 17 Jun 2007 13:41:23 -0400
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <46754E3E.3000503@gmail.com>
References: <46717EFC.4040904@gmail.com> <4671895E.1000400@ukr.net>
	<4671EB88.3000606@gmail.com>
	<47347f490706142331l3d79af54ydccc7f7a30af2ae8@mail.gmail.com>
	<467293F4.5080807@gmail.com>
	<47347f490706150650m57dd081h7c795709d22e801b@mail.gmail.com>
	<46754E3E.3000503@gmail.com>
Message-ID:

On Sun, 17 Jun 2007, "fdu.xiaojf at gmail.com" apparently wrote:
> I still don't quite understand how to liminate linear
> equality constraints. Could you please point me to some
> web resources that describe this method in detail? Or what
> key words I should use if I want to google on the web?

Perhaps an example would be useful.

Example: solve the bivariate constrained minimization problem

  min  x1**2 + x2**2
  subject to:  2 x1 + 3 x2 = 5

Reparametrize the constraint:

  particular soln:  (1,1)
  general soln:     x = (1,1) + (1,-2/3)y

So solve the unconstrained univariate problem

  min  (1+y)**2 + (1-2y/3)**2

  ->  y = -3/13
  ->  x = (1,1) + (1,-2/3)(-3/13) = (10/13,15/13)

hth,
Alan Isaac

From fdu.xiaojf at gmail.com Mon Jun 18 04:35:36 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Mon, 18 Jun 2007 16:35:36 +0800
Subject: [SciPy-user] How to determine if a function is convex or not ?
Message-ID: <467643D8.8050103@gmail.com>

Hi all,

CVXOPT (http://abel.ee.ucla.edu/cvxopt) can handle both equality and
inequality constraints, but it can only deal with convex functions.

So I want to know how to determine if a function is convex or not. Are
there some rules for this? Or do I have to calculate the derivatives?

Thanks.

Xiao Jianfeng

From dahl.joachim at gmail.com Mon Jun 18 04:54:52 2007
From: dahl.joachim at gmail.com (Joachim Dahl)
Date: Mon, 18 Jun 2007 10:54:52 +0200
Subject: [SciPy-user] How to determine if a function is convex or not ?
In-Reply-To: <467643D8.8050103@gmail.com>
References: <467643D8.8050103@gmail.com>
Message-ID: <47347f490706180154x59b49e2eud0d9e20f62d7b7b2@mail.gmail.com>

There are many ways to determine if a function is convex, but not a single
best check for all circumstances. Perhaps the simplest approach is to
verify that the Hessian matrix is positive semidefinite everywhere by
construction, or to use composition rules:

http://www.stanford.edu/~boyd/cvxbook/

If you want to use CVXOPT for a general convex problem you first have to
make sure the problem is actually convex (of course), and then you have to
provide functions for evaluating first and second order derivatives, which
sounded problematic in your case.

What about the general nonlinear programming package someone else
suggested? That might work for you (provided you figure out how to evaluate
derivatives, which can be cumbersome but should be possible nevertheless).

On 6/18/07, fdu.xiaojf at gmail.com wrote:
>
> Hi all,
>
> CVXOPT(http://abel.ee.ucla.edu/cvxopt) can handle both equality and
> inequality constraints, but it can only deal with convex functions.
>
> So I want to know how to determine if a function is convex or not. Are
> there some rules for this? Or I have to calculate the derivatives ?
>
> Thanks.
>
> Xiao Jianfeng
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david.huard at gmail.com Mon Jun 18 09:06:49 2007
From: david.huard at gmail.com (David Huard)
Date: Mon, 18 Jun 2007 09:06:49 -0400
Subject: [SciPy-user] 3D density calculation
In-Reply-To: <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com>
References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl>
	<21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com>
Message-ID: <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com>

Hi Chris,

Have you tried numpy.histogramdd? If it's still too slow, I have a fortran
implementation on the back burner. I could try to finish it quickly and
send you a preliminary version.

Another thought: the kernel density estimator, scipy.stats.gaussian_kde.

David

2007/6/17, Bernhard Voigt <bernhard.voigt at gmail.com>:
>
> Hi Chris!
>
> you could try a grid of unit cells that cover your phase space (x,y,z,t).
> Count the number of photons per unit cell of your initial configuration
> and track photons leaving and entering a particular cell. A dictionary
> with a tuple of x,y,z,t coordinates obtained from integer division of the
> x,y,z,t coordinates could serve as keys.
>
> Example for 2-D:
>
> from numpy import *
> # phase space in x,y
> x = arange(-100,100.1,.1)
> y = arange(-100,100.1,.1)
> # cell dimension in both dimensions the same
> GRID_WIDTH=7.5
>
> # computes the grid key from x,y coordinates
> def gridKey(x,y):
>     '''return the a tuple of x,y integer divided by GRID_WIDHT'''
>     return (int(x // GRID_WIDTH), int(y // GRID_WIDTH))
>
> # setup your grid dictionary
> gridLowX, gridHighX = gridKey(min(x), max(x))
> gridLowY, gridHighY = gridKey(min(y), max(y))
> keys = [(i,j) for i in xrange(gridLowX, gridHighX + 1) \
>         for j in xrange(gridLowY, gridHighY + 1)]
> grid = dict().fromkeys(keys, 0)
>
> # random photons
> photons = random.uniform(-100.,100., (100000,2))
>
> # count photons in each grid cell
> for p in photons:
>     grid[gridKey(*p)] += 1
>
> #########################################
> # in your simulation you have to keep track of where your photons
> # are going to...
> # (the code below won't run, it's just an example)
> #########################################
> oldKey = gridKey(photon)
> propagate(photon)  # changes x,y coordinates of photon
> newKey = gridKey(photon)
> if oldKey != newKey:
>     grid[oldKey] -= 1
>     grid[newKey] += 1
>
> I hope this helps! Bernhard
>
> On 6/15/07, Chris Lee <c.j.lee at tnw.utwente.nl> wrote:
> >
> > Hi everyone,
> >
> > I was hoping this list could point me in the direction of a more
> > efficient solution to a problem I have.
> >
> > I have 4 vectors: x, y, z, and t that are about 1 million in length
> > that describe the positions of photons. As my simulation progresses
> > it updates the positions so x, y, z, and t change by an unknown (and
> > unknowable) amount every update.
> >
> > This worked very well for its original purpose but now I need to
> > calculate the photon density change over time. Currently after each
> > update, I iterate over time slices, x slices, and y slices and then
> > make an histogram of z which I then stitch together to create a
> > density. However, this becomes very slow as the photons spread out
> > in space and time.
> >
> > Does anyone know how to take such a large vector set and return a
> > density efficiently?
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
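To make the numpy.histogramdd suggestion concrete, here is a minimal sketch
for the four photon coordinate vectors. The data and bin counts below are
made-up placeholders; binning all four axes in one call replaces the loop
over t, x and y slices, and slicing the last axis of the result,
counts[..., k], recovers the per-time-slice view the original loop was
building:

import numpy as np

# stand-in photon data: one row per photon, columns x, y, z, t
n = 1000000
points = np.random.uniform(0., 1., (n, 4))

# 4-D occupancy histogram in a single call
counts, edges = np.histogramdd(points, bins=(20, 20, 20, 10))

# divide by the cell volume to turn counts into a density
widths = [e[1] - e[0] for e in edges]
density = counts / np.prod(widths)
print density.shape    # (20, 20, 20, 10)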
From fdu.xiaojf at gmail.com Mon Jun 18 09:18:16 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Mon, 18 Jun 2007 21:18:16 +0800
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <46754822.5090902@gmail.com>
References: <46717EFC.4040904@gmail.com> <46734C81.2050400@gmail.com>
	<46754822.5090902@gmail.com>
Message-ID: <46768618.8030808@gmail.com>

Hi all,

fdu.xiaojf at gmail.com wrote:
>
> My last email was composed in a hurry, so let me describe my problem in
> detail to make it clear.
>
> I have to minimize a multivariate function F(x1, x2, ..., xn) subject to
> multiple inequality constraints and equality constraints.
>
> The number of variables(x1, x2, ..., xn) is 10 ~ 20, the number of
> inequality constraints is the same with the number of variables( all
> variables should be no less than 0). The number of equality constraints
> is less than 8(mainly 4 or 5), and the equality constraints are linear.
>
> My function is too complicated to get the expression of derivate easily, so
> according to Joachim Dahl(dahl.joachim at gmail.com)'s post, it is probably
> non-convex. But I think it's possible to calculate the first derivate
> numerically.
>
> I have tried scipy.optimize.fmin_l_bfgs_b(), which can handle bound constraints
> but seems cannot handle equality constraints.
>
> Mr. Markus Amann has kindly sent me a script written by him, which can
> handle equality constraints and is easy to use. The method used by
> Markus involves the calculation of Jacobian, which I don't
> understand.(Sorry for my ignorance in this filed. My major is chemistry,
> and I'm trying to learn some knowledge about numerical optimization.)
> However, it seems that the script cannot handle inequality constraints.
> (sorry if I was wrong).
>
> I hope my bad English have described my problem clearly.
>
> Any help will be greatly appreciated.
>
> Best regards,
>
> Xiao Jianfeng

I have found COBYLA (http://www.jeannot.org/~js/code/index.en.html#COBYLA),
and it has a Python interface, which makes it very easy to use. It seems
that COBYLA is capable of handling both equality and inequality constraints
together.

Here is an example of COBYLA (it is actually the example.py shipped with
COBYLA):

##--------------------begin of example.py------------------
#!/usr/bin/env python

# Python COBYLA example
# @(#) $Jeannot: example.py,v 1.2 2004/04/13 16:35:11 js Exp $

import cobyla

# A function to minimize
# Must return a tuple with the function value and the value of the constraints
# or None to abort the minimization
def function(x):
    f = x[0]**2+abs(x[1])**3
    # Two constraints to represent the equality constraint x**2+y**2 == 25
    con = [0]*2
    con[0] = x[0]**2 + x[1]**2 - 25  # x**2+y**2 >= 25
    con[1] = - con[0]                # x**2+y**2 <= 25
    return f, con

# Optimizer call
rc, nf, x = cobyla.minimize(function, [-7, 3], low = [-10, 1], up = [3, 10])

print "After", nf, "function evaluations, COBYLA returned:", cobyla.RCSTRINGS[rc]
print "x =", x
print "f =", function(x)[0]
print "con = ", function(x)[1]
print "exact value = [-4.898979456, 1]"
##--------------------end of example.py------------------

Would somebody who is familiar with COBYLA tell me whether COBYLA is
suitable for my problem?

Thanks.

Xiao Jianfeng
From ckkart at hoc.net Mon Jun 18 09:28:29 2007
From: ckkart at hoc.net (Christian K)
Date: Mon, 18 Jun 2007 22:28:29 +0900
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <46768618.8030808@gmail.com>
References: <46717EFC.4040904@gmail.com> <46734C81.2050400@gmail.com>
	<46754822.5090902@gmail.com> <46768618.8030808@gmail.com>
Message-ID:

fdu.xiaojf at gmail.com wrote:
> Hi all,
>
> fdu.xiaojf at gmail.com wrote:
> >
> > My last email was composed in a hurry, so let me describe my problem in
> > detail to make it clear.
> >
> > I have to minimize a multivariate function F(x1, x2, ..., xn) subject to
> > multiple inequality constraints and equality constraints.
> >
> > The number of variables(x1, x2, ..., xn) is 10 ~ 20, the number of
> > inequality constraints is the same with the number of variables( all
> > variables should be no less than 0). The number of equality constraints
> > is less than 8(mainly 4 or 5), and the equality constraints are linear.
> >
> > My function is too complicated to get the expression of derivate easily, so
> > according to Joachim Dahl(dahl.joachim at gmail.com)'s post, it is probably
> > non-convex. But I think it's possible to calculate the first derivate
> > numerically.
> >
> > I have tried scipy.optimize.fmin_l_bfgs_b(), which can handle bound constraints
> > but seems cannot handle equality constraints.
> >
> > Mr. Markus Amann has kindly sent me a script written by him, which can
> > handle equality constraints and is easy to use. The method used by
> > Markus involves the calculation of Jacobian, which I don't
> > understand.(Sorry for my ignorance in this filed. My major is chemistry,
> > and I'm trying to learn some knowledge about numerical optimization.)
> > However, it seems that the script cannot handle inequality constraints.
> > (sorry if I was wrong).
> >
> > I hope my bad English have described my problem clearly.
> >
> > Any help will be greatly appreciated.
> >
> > Best regards,
> >
> > Xiao Jianfeng
>
> I have found COBYLA(http://www.jeannot.org/~js/code/index.en.html#COBYLA),
> and it has a Python interface, which make it very easy to use.

You seem not to be aware that cobyla is part of scipy:
scipy.optimize.fmin_cobyla

Christian
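For readers hitting the same confusion about the "cons" argument that comes
up later in this thread: scipy's wrapper takes a sequence of constraint
functions, each of which must return a value >= 0 at a feasible point,
rather than one function returning (f, con) as in the standalone package.
Here is a rough sketch of the example above rewritten for
scipy.optimize.fmin_cobyla; the bounds from low/up have to be spelled out as
inequalities too, and keyword defaults may differ between scipy versions:

from scipy.optimize import fmin_cobyla

def f(x):
    return x[0]**2 + abs(x[1])**3

# the equality x**2 + y**2 == 25 written as two inequalities g(x) >= 0,
# plus the box bounds -10 <= x[0] <= 3 and 1 <= x[1] <= 10
cons = [lambda x: x[0]**2 + x[1]**2 - 25.0,   # x**2 + y**2 >= 25
        lambda x: 25.0 - x[0]**2 - x[1]**2,   # x**2 + y**2 <= 25
        lambda x: x[0] + 10.0,                # x[0] >= -10
        lambda x: 3.0 - x[0],                 # x[0] <= 3
        lambda x: x[1] - 1.0,                 # x[1] >= 1
        lambda x: 10.0 - x[1]]                # x[1] <= 10

x = fmin_cobyla(f, [-7.0, 3.0], cons, rhoend=1e-7)
print x, f(x)    # expect roughly [-4.898979456, 1]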
You seem not to be aware that cobyla is part of scipy:
scipy.optimize.fmin_cobyla

Christian

From fdu.xiaojf at gmail.com Mon Jun 18 09:56:10 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Mon, 18 Jun 2007 21:56:10 +0800
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: 
References: <46717EFC.4040904@gmail.com> <46734C81.2050400@gmail.com> <46754822.5090902@gmail.com> <46768618.8030808@gmail.com>
Message-ID: <46768EFA.2040709@gmail.com>

Christian K wrote:
>>
>> I have found COBYLA (http://www.jeannot.org/~js/code/index.en.html#COBYLA),
>> and it has a Python interface, which makes it very easy to use.
>
> You seem not to be aware that cobyla is part of scipy:
> scipy.optimize.fmin_cobyla
>
> Christian
>

Oh, my God!

I have studied fmin_l_bfgs_b, fmin_tnc, and fmin_cobyla before, but I
didn't quite understand the "cons" parameter for fmin_cobyla until I
found COBYLA and saw the example shipped with it. It seems that the
interface of fmin_cobyla is a little different from the one provided
with COBYLA.

Anyway, it doesn't matter and I think fmin_cobyla is what I want.

Thanks :-)

Xiao Jianfeng

From dominique.orban at gmail.com Mon Jun 18 12:17:44 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Mon, 18 Jun 2007 12:17:44 -0400
Subject: [SciPy-user] How to determine if a function is convex or not ?
In-Reply-To: <467643D8.8050103@gmail.com>
References: <467643D8.8050103@gmail.com>
Message-ID: <4676B028.2040405@gmail.com>

fdu.xiaojf at gmail.com wrote:

> CVXOPT (http://abel.ee.ucla.edu/cvxopt) can handle both equality and
> inequality constraints, but it can only deal with convex functions.
>
> So I want to know how to determine if a function is convex or not. Are
> there some rules for this? Or do I have to calculate the derivatives?

If you know that your function is twice differentiable, a necessary and
sufficient condition for it to be convex *at one given x* is for its
Hessian matrix (the matrix of second derivatives) to be positive
semi-definite *at that x*. This means all the eigenvalues must be >= 0.

Unfortunately, this process is undecidable in practice. Algorithms that
compute the eigenvalues are numerical in nature and therefore suffer
from finite-precision errors. If you were told that your smallest
eigenvalue were -1.0e-15, you wouldn't be able to tell whether your
matrix is indeed positive semi-definite and roundoff errors caused the
smallest eigenvalue to be evaluated to a very small negative number, or
whether your matrix is indefinite and really has a negative eigenvalue.

Assessing convexity is difficult. I wrote a piece of software for
functions modeled as part of an optimization problem in the AMPL
(www.ampl.com) modeling language (again, but it is a standard in this
field). The code is called DrAMPL and there is a website:
www.gerad.ca/~orban/drampl. Let me know and I can send you the software.
It isn't (yet) interfaced to Python, though, but it would let you assess
the convexity of your problem.
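To make the eigenvalue test above concrete, here is a minimal numpy
sketch (the test function, step size and tolerance are invented for the
example, and the finite-difference Hessian is itself only an
approximation, which is exactly where the roundoff ambiguity bites):

import numpy as np

def num_hessian(f, x, h=1e-5):
    # Central finite-difference approximation of the Hessian of f at x
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def f(x):
    # invented test function; its true Hessian is [[2, 1], [1, 4]]
    return x[0]**2 + x[0]*x[1] + 2.0*x[1]**2

x0 = np.array([1.0, -2.0])
w = np.linalg.eigvalsh(num_hessian(f, x0))
# all eigenvalues >= 0 (up to a tolerance) <=> positive semi-definite here;
# the tolerance is exactly the numerical ambiguity described above
print "smallest eigenvalue:", w.min()
print "locally convex (tol 1e-8):", w.min() > -1e-8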
Dominique

From bryan.cole at teraview.com Mon Jun 18 12:03:21 2007
From: bryan.cole at teraview.com (Bryan Cole)
Date: Mon, 18 Jun 2007 16:03:21 +0000 (UTC)
Subject: [SciPy-user] sparse least squares problem
Message-ID: 

Hi,

I've got a sparse (complex) least-squares problem to solve. Can anyone
suggest the most appropriate scipy routines to apply to this (i.e. do
the sparse solvers do least squares, for example?).

scipy.linalg.lstsq works OK for small datasets but it doesn't seem to
scale up well. Maybe an iterative approach would be better.

Any pointers would be much appreciated. I'm no linalg expert.

cheers,
Bryan

From ckkart at hoc.net Mon Jun 18 21:32:24 2007
From: ckkart at hoc.net (Christian K)
Date: Tue, 19 Jun 2007 10:32:24 +0900
Subject: [SciPy-user] lagrange multipliers in python
In-Reply-To: <46768EFA.2040709@gmail.com>
References: <46717EFC.4040904@gmail.com> <46734C81.2050400@gmail.com> <46754822.5090902@gmail.com> <46768618.8030808@gmail.com> <46768EFA.2040709@gmail.com>
Message-ID: 

fdu.xiaojf at gmail.com wrote:
> Christian K wrote:
> >>
> >> I have found COBYLA (http://www.jeannot.org/~js/code/index.en.html#COBYLA),
> >> and it has a Python interface, which makes it very easy to use.
> >
> > You seem not to be aware that cobyla is part of scipy:
> > scipy.optimize.fmin_cobyla
> >
> > Christian
> >
> Oh, my God!
>
> I have studied fmin_l_bfgs_b, fmin_tnc, and fmin_cobyla before, but I
> didn't quite understand the "cons" parameter for fmin_cobyla until I
> found COBYLA and saw the example shipped with it.

I used fmin_cobyla once to find the envelope to some noisy signal. Thus
I had as many inequality constraints as data points. See the attached
file. This is not what you want of course, but it gives you an idea of
how to use cobyla.

Christian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: baseline_smaller.py
Type: text/x-python
Size: 1262 bytes
Desc: not available
URL: 

From openopt at ukr.net Tue Jun 19 04:56:36 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 19 Jun 2007 11:56:36 +0300
Subject: [SciPy-user] question splines
Message-ID: <46779A44.3080603@ukr.net>

hi all,
I need a func from scipy or other Python free software that quickly
provides 1-variable interpolation of some funcs (I mean in vectorized
form) by 2nd-order splines.
for example, I have
(x from R)
func1(x) = sin(x)
func2(x) = cos(x)
func3(x) = x^2 + x + 2*atan(x)
(the number of the funcs may be very great, up to ~ 1000)

So I have several points x = 0.1, 0.2, 0.3, 0.4 and I want to obtain
func1(x), func2(x), func3(x) values at any point within the [0.1; 0.4]
region.
Also, it would be very nice if
1) the interpolation function would provide a value outside the region,
extrapolated in a linear or quadratic or any other way;
2) I had the possibility to do binary insertion of new points.
I.e., initially I have values of func1, func2, func3 at x=0.1 and x=0.3.
Then I get values at x=0.4. Then at x=0.2, etc., etc. I know that
sorted x-arrays allow one to obtain interpolated values more quickly, like
interp1() vs interp() in MATLAB. Can I somehow take advantage of this in
my case?

Thank you in advance,
Dmitrey.
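One way to get going with what is already in scipy.interpolate (a sketch
only -- the sample points and functions are the toy ones from the
question; k=2 requests the quadratic splines asked about, and the
underlying FITPACK routines simply extrapolate the end segments outside
the data range; as far as I know the FITPACK representation has to be
rebuilt after new points are inserted):

import numpy as np
from scipy import interpolate

x = np.array([0.1, 0.2, 0.3, 0.4])
funcs = [np.sin, np.cos, lambda t: t**2 + t + 2.0*np.arctan(t)]

# one spline representation per function; k=2 gives 2nd-order splines,
# s=0 forces interpolation through the given points
tcks = [interpolate.splrep(x, f(x), k=2, s=0) for f in funcs]

xnew = np.linspace(0.1, 0.4, 7)
for tck in tcks:
    print interpolate.splev(xnew, tck)  # vectorized evaluation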
From lorenzo.isella at gmail.com Tue Jun 19 04:57:19 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 19 Jun 2007 10:57:19 +0200
Subject: [SciPy-user] PDE Solver in SciPy
Message-ID: 

Dear All,

I have been happily using the ODE solver in SciPy for quite a while (I
refer to integrate.odeint). I wonder now if there is any PDE solver
available for Python (better if incorporated into SciPy).
After a bit of online searching, the best I could find under the SciPy
umbrella was fipy:

http://www.scipy.org/FiPy?highlight=%28fipy%29
and
http://www.ctcms.nist.gov/fipy/index.html

Is that all? I am mainly interested in population balance equations for
aerosol science applications.
Any suggestion here is really welcome.

Kind Regards

Lorenzo

From openopt at ukr.net Tue Jun 19 05:03:11 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 19 Jun 2007 12:03:11 +0300
Subject: [SciPy-user] question splines
In-Reply-To: <46779A44.3080603@ukr.net>
References: <46779A44.3080603@ukr.net>
Message-ID: <46779BCF.2000206@ukr.net>

I meant that every time I've got the spline estimations of func[i](x),
for example at x=0.2, I calculate the exact values func[i](x) at the
point x=0.2, and then move on to do spline estimations at the next point.

dmitrey wrote:
> hi all,
> I need a func from scipy or other Python free software that quickly
> provides 1-variable interpolation of some funcs (I mean in vectorized
> form) by 2nd-order splines.
> for example, I have
> (x from R)
> func1(x) = sin(x)
> func2(x) = cos(x)
> func3(x) = x^2 + x + 2*atan(x)
> (the number of the funcs may be very great, up to ~ 1000)
>
> So I have several points x = 0.1, 0.2, 0.3, 0.4 and I want to obtain
> func1(x), func2(x), func3(x) values at any point within the [0.1; 0.4]
> region.
> Also, it would be very nice if
> 1) the interpolation function would provide a value outside the region,
> extrapolated in a linear or quadratic or any other way;
> 2) I had the possibility to do binary insertion of new points.
> I.e., initially I have values of func1, func2, func3 at x=0.1 and x=0.3.
> Then I get values at x=0.4. Then at x=0.2, etc., etc. I know that
> sorted x-arrays allow one to obtain interpolated values more quickly, like
> interp1() vs interp() in MATLAB. Can I somehow take advantage of this in
> my case?
>
> Thank you in advance,
> Dmitrey.

From guillemborrell at gmail.com Tue Jun 19 06:49:39 2007
From: guillemborrell at gmail.com (Guillem Borrell Nogueras)
Date: Tue, 19 Jun 2007 12:49:39 +0200
Subject: [SciPy-user] Problems building scipy from svn
Message-ID: <835da2a60706190349s175e6733j6ee5072771d46dbd@mail.gmail.com>

Hello

I am trying to build scipy from source and I get this error message:

building extension "scipy.fftpack._fftpack" sources
> target build/src.linux-x86_64-2.5/_fftpackmodule.c does not exist:
> Assuming _fftpackmodule.c was generated with "build_src --inplace"
> command.
> error: '_fftpackmodule.c' missing

This happens for every module; if I disable the fftpack subpackage in
Lib/setup.py I get this:

building extension "scipy.integrate.vode" sources
> target build/src.linux-x86_64-2.5/vodemodule.c does not exist:
> Assuming vodemodule.c was generated with "build_src --inplace" command.
> error: 'vodemodule.c' missing

I am able to build every single subpackage.

[guillem at sisyphus scipy]$ cd Lib/fftpack/
> [guillem at sisyphus fftpack]$ python setup.py build
> (...)
> ok I got the same message on two different computers and I can't find any misconfiguration. Any ideas? Thanks, guillem -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Tue Jun 19 06:43:26 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 19 Jun 2007 19:43:26 +0900 Subject: [SciPy-user] Problems building scipy from svn In-Reply-To: <835da2a60706190349s175e6733j6ee5072771d46dbd@mail.gmail.com> References: <835da2a60706190349s175e6733j6ee5072771d46dbd@mail.gmail.com> Message-ID: <4677B34E.6040605@ar.media.kyoto-u.ac.jp> Guillem Borrell Nogueras wrote: > Hello > > I am trying to build scipy from source and I get this error message > > building extension " scipy.fftpack._fftpack" sources > target build/src.linux-x86_64-2.5/_fftpackmodule.c does not exist: > Assuming _fftpackmodule.c was generated with "build_src > --inplace" command. > error: '_fftpackmodule.c' missing > > > This happens for every module, if I disable the fftpack subpackage in > Lib/setup.py I get this > > building extension "scipy.integrate.vode" sources > target build/src.linux-x86_64-2.5/vodemodule.c does not exist: > Assuming vodemodule.c was generated with "build_src --inplace" > command. > error: ' vodemodule.c' missing > > > I am able to build every single subpackage. > > [guillem at sisyphus scipy]$ cd Lib/fftpack/ > [guillem at sisyphus fftpack]$ python setup.py build > (...) > ok > > > I got the same message on two different computers and I can't find any > misconfiguration. > > Any ideas? > I got the same problem on 64 bits arch with last released numpy. You should update to numpy svn, and the problem is solved. David From openopt at ukr.net Tue Jun 19 12:37:43 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 19 Jun 2007 19:37:43 +0300 Subject: [SciPy-user] problems with _fitpack module Message-ID: <46780657.50306@ukr.net> hi all, I try to use scipy.sandbox.spline module, but it yields >>> from scipy.sandbox import spline Traceback (innermost last): File "", line 1, in File "/usr/lib/python2.5/site-packages/scipy/sandbox/spline/__init__.py", line 7, in from fitpack import * File "/usr/lib/python2.5/site-packages/scipy/sandbox/spline/fitpack.py", line 35, in import _fitpack ImportError: No module named _fitpack scipy 0.5.2 had been installed from ubuntu packages website (for Ubuntu Feisty Linux). Any suggestions? Thank you in advance, Dmitrey. From nwagner at iam.uni-stuttgart.de Tue Jun 19 12:39:23 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 19 Jun 2007 18:39:23 +0200 Subject: [SciPy-user] problems with _fitpack module In-Reply-To: <46780657.50306@ukr.net> References: <46780657.50306@ukr.net> Message-ID: On Tue, 19 Jun 2007 19:37:43 +0300 dmitrey wrote: > hi all, > I try to use scipy.sandbox.spline module, but it yields > > >>> from scipy.sandbox import spline > Traceback (innermost last): > File "", line 1, in > File > "/usr/lib/python2.5/site-packages/scipy/sandbox/spline/__init__.py", > line 7, in > from fitpack import * > File > "/usr/lib/python2.5/site-packages/scipy/sandbox/spline/fitpack.py", >line > 35, in > import _fitpack > ImportError: No module named _fitpack > > scipy 0.5.2 had been installed from ubuntu packages >website (for Ubuntu >Feisty Linux). > Any suggestions? > Thank you in advance, Dmitrey. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Works for me. 
Python 2.5 (r25:51908, May 25 2007, 16:11:33) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.sandbox import spline >>> import scipy >>> scipy.__version__ '0.5.3.dev3108' Nils From guillemborrell at gmail.com Tue Jun 19 19:17:09 2007 From: guillemborrell at gmail.com (Guillem Borrell Nogueras) Date: Wed, 20 Jun 2007 01:17:09 +0200 Subject: [SciPy-user] Problems building scipy from svn In-Reply-To: <4677B34E.6040605@ar.media.kyoto-u.ac.jp> References: <835da2a60706190349s175e6733j6ee5072771d46dbd@mail.gmail.com> <4677B34E.6040605@ar.media.kyoto-u.ac.jp> Message-ID: <835da2a60706191617r417a444dp3a65f5052b395a7a@mail.gmail.com> Thanks! It fixes the error in my 64 bit computer but my 32 bit one is still reluctant to build scipy. I'll do a bit more research. On 6/19/07, David Cournapeau wrote: > > Guillem Borrell Nogueras wrote: > > Hello > > > > I am trying to build scipy from source and I get this error message > > > > building extension " scipy.fftpack._fftpack" sources > > target build/src.linux-x86_64-2.5/_fftpackmodule.c does not exist: > > Assuming _fftpackmodule.c was generated with "build_src > > --inplace" command. > > error: '_fftpackmodule.c' missing > > > > > > This happens for every module, if I disable the fftpack subpackage in > > Lib/setup.py I get this > > > > building extension "scipy.integrate.vode" sources > > target build/src.linux-x86_64-2.5/vodemodule.c does not exist: > > Assuming vodemodule.c was generated with "build_src --inplace" > > command. > > error: ' vodemodule.c' missing > > > > > > I am able to build every single subpackage. > > > > [guillem at sisyphus scipy]$ cd Lib/fftpack/ > > [guillem at sisyphus fftpack]$ python setup.py build > > (...) > > ok > > > > > > I got the same message on two different computers and I can't find any > > misconfiguration. > > > > Any ideas? > > > I got the same problem on 64 bits arch with last released numpy. You > should update to numpy svn, and the problem is solved. > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From novak at ucolick.org Tue Jun 19 22:57:30 2007 From: novak at ucolick.org (Greg Novak) Date: Tue, 19 Jun 2007 19:57:30 -0700 Subject: [SciPy-user] Debugging memory exhaustion in Python? In-Reply-To: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> Message-ID: On 6/15/07, Brian Granger wrote: > 1) What version of python are you using? Python 2.4 and below has > some issues with memory not being released back to the OS. It seems like this is what's happening, even though I'm using Python 2.5. I have a function that, when called, pretty reliably makes the memory usage and resident size go up by ~40 megabytes every time it's called. I'm looking at the VmSize and VmRSS lines in /proc/pid/status on an Ubuntu machine to determine memory usage. I expected to find zillions of objects added to the list returned by gc.get_objects. However, there were only 27 objects added, and they all seemed small -- strings, small dicts, one Frame object, and that's about it. I mentioned a python module called Heapy: http://guppy-pe.sourceforge.net/ It lets you set a reference point and then look at the sizes of all objects allocated after that time. 
This confirms what I found above manually-- only a few objects created, and they're small. So it does seem as though the Python garbage collector has freed the objects, but it hasn't returned the memory to the operating system. This continues until I have several GB allocated and the program crashes. I'm not using any of my own C extensions for this (where I could screw up the reference counting) and it doesn't look like the problem is leaking objects anyway. So... does anyone have any thoughts about what could cause this? Thanks, Greg From domi at vision.ee.ethz.ch Wed Jun 20 03:22:38 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Wed, 20 Jun 2007 09:22:38 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <4672340E.2060302@vision.ee.ethz.ch> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> Message-ID: <4678D5BE.5000107@vision.ee.ethz.ch> Hi, Is it possible to directly read/write bz2 compressed binary files with scipy? Thanks, Dominik -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From massimo.sandal at unibo.it Wed Jun 20 09:09:41 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 20 Jun 2007 15:09:41 +0200 Subject: [SciPy-user] nonlinear fit with non uniform error? Message-ID: <46792715.6030707@unibo.it> Hi, We have a set of data that we fit to a nonlinear function using scipy.optimize.leastsq that, AFAIK, uses the Levenberg-Marquardt method. Talking with a collegue of another lab, he pointed me that the dataset we fit usually has intrinsically more noise in the first part of the data than the latter. So he fitted by taking into account the non uniform error -that is, instead of using plain chi-square, giving more weight to the distance from points with less intrinsic error. He told me that on Origin there is a function that does it. Is there something similar on scipy? m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From ckkart at hoc.net Wed Jun 20 09:33:45 2007 From: ckkart at hoc.net (Christian K) Date: Wed, 20 Jun 2007 22:33:45 +0900 Subject: [SciPy-user] nonlinear fit with non uniform error? In-Reply-To: <46792715.6030707@unibo.it> References: <46792715.6030707@unibo.it> Message-ID: massimo sandal wrote: > Hi, > > We have a set of data that we fit to a nonlinear function using > scipy.optimize.leastsq that, AFAIK, uses the Levenberg-Marquardt method. > > Talking with a collegue of another lab, he pointed me that the dataset > we fit usually has intrinsically more noise in the first part of the > data than the latter. So he fitted by taking into account the non > uniform error -that is, instead of using plain chi-square, giving more > weight to the distance from points with less intrinsic error. He told me > that on Origin there is a function that does it. Is there something > similar on scipy? Have a look at scipy.odr. This module does orthogonal distance regression (or just normal least squares if you prefer). Interesting for you is the fact that you can pass an array containing the weights of the data. 
Even better, odr gives you an error estimation of the fit. Note that in
scipy versions <= 0.5.2 odr resides in scipy.sandbox.odr; however, there
is a bug which prevents importing it, which is fixed in svn.

You might want to have a look at peak-o-mat (http://lorentz.sf.net),
too. It's a general data fitting application which makes use of
scipy.odr.

Christian

From wjdandreta at att.net Wed Jun 20 10:23:34 2007
From: wjdandreta at att.net (Bill Dandreta)
Date: Wed, 20 Jun 2007 10:23:34 -0400
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <4678D5BE.5000107@vision.ee.ethz.ch>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch>
Message-ID: <46793866.8050807@att.net>

Dominik Szczerba wrote:
> Hi,
> Is it possible to directly read/write bz2 compressed binary files with
> scipy?
> Thanks, Dominik
>
Check out the Python bz2 module.

-- 
Bill
wjdandreta at att.net
Gentoo Linux 2.6.20-gentoo-r8
Reclaim Your Inbox with http://www.mozilla.org/products/thunderbird/
All things cometh to he who waiteth as long as he who waiteth worketh
like hell while he waiteth.

From domi at vision.ee.ethz.ch Wed Jun 20 11:48:20 2007
From: domi at vision.ee.ethz.ch (Dominik Szczerba)
Date: Wed, 20 Jun 2007 17:48:20 +0200
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <46793866.8050807@att.net>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net>
Message-ID: <46794C44.5050505@vision.ee.ethz.ch>

Yes, I know it, but it does not return a scipy array, does it?
Can I achieve it without copying memory? (I have huge arrays to process)

- Dominik

Bill Dandreta wrote:
> Dominik Szczerba wrote:
>> Hi,
>> Is it possible to directly read/write bz2 compressed binary files with
>> scipy?
>> Thanks, Dominik
>>
> Check out the Python bz2 module.
>

-- 
Dominik Szczerba, Ph.D.
Computer Vision Lab CH-8092 Zurich
http://www.vision.ee.ethz.ch/~domi

From faltet at carabos.com Wed Jun 20 13:27:07 2007
From: faltet at carabos.com (Francesc Altet)
Date: Wed, 20 Jun 2007 19:27:07 +0200
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <46794C44.5050505@vision.ee.ethz.ch>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch>
Message-ID: <1182360427.2709.22.camel@carabos.com>

El dc 20 de 06 del 2007 a les 17:48 +0200, en/na Dominik Szczerba va
escriure:
> Yes, I know it, but it does not return a scipy array, does it?
> Can I achieve it without copying memory? (I have huge arrays to process)

Do you need bzip2 for something special? In general, zlib or lzo are
enough for achieving decent compression ratios on numerical data, while
allowing much better compression, and especially decompression, speed.

In any case, PyTables does have support for the (zlib, lzo, bzip2)
threesome right out of the box. In addition, it is meant to deal with
huge arrays (it saves data in small chunks that are compressed and
decompressed individually, so you don't have to worry about wasting too
much memory for (de-)compression buffers).
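To illustrate (a minimal sketch from memory of the PyTables 2.x-era
API -- call signatures may differ in other versions, and the file and
node names are made up):

import tables
import numpy as np

f = tables.openFile('huge.h5', mode='w')
# zlib-compressed chunks; each chunk is compressed/decompressed on its own
filters = tables.Filters(complevel=5, complib='zlib')
arr = f.createCArray(f.root, 'data', tables.Float64Atom(),
                     (10000, 10000), filters=filters)
arr[0, :] = np.arange(10000.0)  # only the chunks actually touched hit memory
f.close()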
Regards,

-- 
Francesc Altet | Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth

From peridot.faceted at gmail.com Wed Jun 20 14:02:05 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 20 Jun 2007 14:02:05 -0400
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <46792715.6030707@unibo.it>
References: <46792715.6030707@unibo.it>
Message-ID: 

On 20/06/07, massimo sandal wrote:
> Hi,
>
> We have a set of data that we fit to a nonlinear function using
> scipy.optimize.leastsq that, AFAIK, uses the Levenberg-Marquardt method.
>
> Talking with a collegue of another lab, he pointed me that the dataset
> we fit usually has intrinsically more noise in the first part of the
> data than the latter. So he fitted by taking into account the non
> uniform error -that is, instead of using plain chi-square, giving more
> weight to the distance from points with less intrinsic error. He told me
> that on Origin there is a function that does it. Is there something
> similar on scipy?

The easiest solution is to rescale your y values by the uncertainties
before doing the fit.

Now, if your errors are not Gaussian, least-squares is no longer the
correct approach and your life becomes more difficult...

Anne

From peridot.faceted at gmail.com Wed Jun 20 14:14:55 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 20 Jun 2007 14:14:55 -0400
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <46794C44.5050505@vision.ee.ethz.ch>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch>
Message-ID: 

On 20/06/07, Dominik Szczerba wrote:
> Yes, I know it, but it does not return a scipy array, does it?
> Can I achieve it without copying memory? (I have huge arrays to process)

If the bz2 module will provide a file-like object, scipy.read_array
can read from that.

Anne

From peridot.faceted at gmail.com Wed Jun 20 14:21:56 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 20 Jun 2007 14:21:56 -0400
Subject: [SciPy-user] Debugging memory exhaustion in Python?
In-Reply-To: References: <6ce0ac130706150008p23101bebjec4702a64ffd8cc5@mail.gmail.com> Message-ID: On 19/06/07, Greg Novak wrote: > It seems like this is what's happening, even though I'm using Python > 2.5. I have a function that, when called, pretty reliably makes the > memory usage and resident size go up by ~40 megabytes every time it's > called. I'm looking at the VmSize and VmRSS lines in > /proc/pid/status on an Ubuntu machine to determine memory usage. I > expected to find zillions of objects added to the list returned by > gc.get_objects. However, there were only 27 objects added, and they > all seemed small -- strings, small dicts, one Frame object, and that's > about it. What does the Frame object contain? Doesn't it have the complete set of function local variables? I suppose you're listing everything it points to as well. Keep in mind that numpy objects sometimes keep alive big hunks of memory. For example, if you allocate a huge array and then pick out a small piece using a view, the original huge chunk of memory is kept (and it is not allocated using python's malloc so it may not be accounted for in your tools). There's also the problem that a view holds a reference to the array object it's a view of, so taking views of views of views of ... can lead to arbitrarily long chains of objects. Anne From domi at vision.ee.ethz.ch Wed Jun 20 15:01:11 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Wed, 20 Jun 2007 21:01:11 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <1182360427.2709.22.camel@carabos.com> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> Message-ID: <46797977.9090601@vision.ee.ethz.ch> PyTables is great (and big) while I just need to read in a sequence of values. Thanks a lot anyway, Dominik Francesc Altet wrote: > El dc 20 de 06 del 2007 a les 17:48 +0200, en/na Dominik Szczerba va > escriure: >> Yes, I know it, but it does not return a scipy array, does it? >> Can I achieve it without copying memory? (I have huge arrays to process) > > Do you need bzip2 for something in special? In general, zlib or lzo are > enough for achieving decent compress ratios in numerical data, while > allowing much better compression, and specially decompression, speed. > > In any case, PyTables does have support for the (zlib, lzo, bzip2) > threesome right out of the box. In addition, it is meant to deal with > huge arrays (it saves data in small chunks that are compressed and > decompressed individually, so you don't have to worry about wasting too > much memory for (de-)compression buffers). > > Regards, > -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From skraelings001 at gmail.com Wed Jun 20 15:21:05 2007 From: skraelings001 at gmail.com (Reynaldo) Date: Wed, 20 Jun 2007 14:21:05 -0500 Subject: [SciPy-user] windowed filter design Message-ID: <38032030706201221k6aeed059o5fd4a2b2d65de985@mail.gmail.com> Hi fellows, this is my first message to the list. Sorry for my english. I need to make a program that let me implement a windowed filter; with lowpass, bandpass, highpass and stopband configurations. 
At the university we use Matlab, and I can implement a windowed filter
with fir1, which lets me pass an argument specifying the configuration.
Since I want to implement it using Python and scipy, how can I do the
same using signal.firwin? Am I missing something?

Sincerely,

Reynaldo

-- 
|_|0|_|
|_|_|0|
|0|0|0|
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From domi at vision.ee.ethz.ch Wed Jun 20 15:47:02 2007
From: domi at vision.ee.ethz.ch (Dominik Szczerba)
Date: Wed, 20 Jun 2007 21:47:02 +0200
Subject: [SciPy-user] read/write compressed files
In-Reply-To: 
References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch>
Message-ID: <46798436.9040603@vision.ee.ethz.ch>

That works very well for ascii files, but I failed to figure out how to
do it for binary data...
Thanks for any hints,
- Dominik

Anne Archibald wrote:
> On 20/06/07, Dominik Szczerba wrote:
>> Yes, I know it, but it does not return a scipy array, does it?
>> Can I achieve it without copying memory? (I have huge arrays to process)
>
> If the bz2 module will provide a file-like object, scipy.read_array
> can read from that.
>
> Anne

-- 
Dominik Szczerba, Ph.D.
Computer Vision Lab CH-8092 Zurich
http://www.vision.ee.ethz.ch/~domi

From domi at vision.ee.ethz.ch Wed Jun 20 16:18:26 2007
From: domi at vision.ee.ethz.ch (Dominik Szczerba)
Date: Wed, 20 Jun 2007 22:18:26 +0200
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <46798436.9040603@vision.ee.ethz.ch>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <46798436.9040603@vision.ee.ethz.ch>
Message-ID: <46798B92.4@vision.ee.ethz.ch>

I got it (partially) working, but am not sure about optimality. In
particular, will fromstring copy memory into the array or decompress in
place? I think the former (how else would it know the size, and tell()
will be slow), but please correct me if I am wrong.

import gzip
from numpy import fromstring
from scipy import io

fh = gzip.GzipFile("test.dat.gz", 'rb')
#ps = zeros(256*256) - will it help?
ps = fromstring(fh.read(), 'd')   # read the whole decompressed stream, build an array from it
ps.shape = (256,256)
fh.close()

fp = open('test.dat', 'wb')
io.numpyio.fwrite(fp, ps.size, ps)
fp.close()

- Dominik

Dominik Szczerba wrote:
> That works very well for ascii files, but I failed to figure out how to
> do it for binary data...
> Thanks for any hints,
> - Dominik

-- 
Dominik Szczerba, Ph.D.
Computer Vision Lab CH-8092 Zurich
http://www.vision.ee.ethz.ch/~domi

From skraelings001 at gmail.com Wed Jun 20 16:25:19 2007
From: skraelings001 at gmail.com (Reynaldo)
Date: Wed, 20 Jun 2007 15:25:19 -0500
Subject: [SciPy-user] bug in scipy.signal.firwin?
Message-ID: <38032030706201325h16808b7j66eaa36fe6bc58aa@mail.gmail.com>

Hi,

In the following line, the sum function is the standard Python sum
function, which doesn't have an axis keyword:

return h / sum(h,axis=0)

I'm using

In [56]: scipy.__version__
Out[56]: '0.5.2'

In [57]: scipy.__numpy_version__
Out[57]: '1.0.1'

on a Gentoo Linux, 2.6.18-gentoo-r4

-- 
|_|0|_|
|_|_|0|
|0|0|0|
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za Wed Jun 20 17:21:58 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Wed, 20 Jun 2007 23:21:58 +0200
Subject: [SciPy-user] bug in scipy.signal.firwin?
In-Reply-To: <38032030706201325h16808b7j66eaa36fe6bc58aa@mail.gmail.com>
References: <38032030706201325h16808b7j66eaa36fe6bc58aa@mail.gmail.com>
Message-ID: <20070620212158.GL20362@mentat.za.net>

On Wed, Jun 20, 2007 at 03:25:19PM -0500, Reynaldo wrote:
> In the following line, the sum function is the standard Python sum
> function, which doesn't have an axis keyword:
>
> return h / sum(h,axis=0)
>
> I'm using
>
> In [56]: scipy.__version__
> Out[56]: '0.5.2'
>
> In [57]: scipy.__numpy_version__
> Out[57]: '1.0.1'
>
> on a Gentoo Linux, 2.6.18-gentoo-r4

I think this has been fixed in SVN. Please try the latest version
and let us know.

Cheers
Stéfan

From skraelings001 at gmail.com Wed Jun 20 17:50:13 2007
From: skraelings001 at gmail.com (Reynaldo)
Date: Wed, 20 Jun 2007 16:50:13 -0500
Subject: [SciPy-user] bug in scipy.signal.firwin?
In-Reply-To: <20070620212158.GL20362@mentat.za.net>
References: <38032030706201325h16808b7j66eaa36fe6bc58aa@mail.gmail.com> <20070620212158.GL20362@mentat.za.net>
Message-ID: <38032030706201450h545288d4g82918076c2bf7351@mail.gmail.com>

I've downloaded the latest svn version and, as you correctly said, it's
been fixed. But I prefer not to mess with svn right now; I added
'numpy.sum' and it works fine for me.

Thanks

Reynaldo

2007/6/20, Stefan van der Walt :
>
> I think this has been fixed in SVN. Please try the latest version
> and let us know.
>
> Cheers
> Stéfan

-- 
|_|0|_|
|_|_|0|
|0|0|0|
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From david.warde.farley at utoronto.ca Wed Jun 20 19:17:47 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Wed, 20 Jun 2007 19:17:47 -0400 Subject: [SciPy-user] read/write compressed files In-Reply-To: <46798B92.4@vision.ee.ethz.ch> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <46798436.9040603@vision.ee.ethz.ch> <46798B92.4@vision.ee.ethz.ch> Message-ID: <2E1480E5-1E8D-446C-8EE2-E3CFDC20AC7C@utoronto.ca> On 20-Jun-07, at 4:18 PM, Dominik Szczerba wrote: > I got it (partially) working, but am not sure about optimality. In > particular, will fromstring copy memory into the array or > decompress in > place? I think the former (how else would it know the size, and tell() > will be slow), but please correct me if I am wrong. I would almost certainly bet it would do a copy. Did you try using Anne's suggestion of scipy.read_array with your 'fh' object? Also, somebody correct me if I'm wrong, but I don't think modifying the 'shape' property directly is the recommended way to do it, I think you should be using ps.resize(). David > import gzip > fh = gzip.GzipFile("test.dat.gz", 'rb'); > #ps = zeros(256*256) - will it help? > ps = fromstring(fh.read(), 'd') > ps.shape = (256,256) > fh.close() > fp = open('test.dat', 'wb') > io.numpyio.fwrite(fp, ps.size, ps) > fp.close() > > - Dominik > > Dominik Szczerba wrote: >> That works very well for ascii files, but I failed to figure out >> about >> binary data... >> Thanks for any hints, >> - Dominik >> >> Anne Archibald wrote: >>> On 20/06/07, Dominik Szczerba wrote: >>>> Yes, I know it, but it does not return a scipy array, does it? >>>> Can I achieve it without copying memory? (I have huge arrays to >>>> process) >>> If the bz2 module will provide a file-like object, scipy.read_array >>> can read from that. >>> >>> Anne >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > -- > Dominik Szczerba, Ph.D. > Computer Vision Lab CH-8092 Zurich > http://www.vision.ee.ethz.ch/~domi > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From gonzalezmancera+scipy at gmail.com Wed Jun 20 20:48:44 2007 From: gonzalezmancera+scipy at gmail.com (Andres Gonzalez-Mancera) Date: Wed, 20 Jun 2007 20:48:44 -0400 Subject: [SciPy-user] Problems building scipy from svn Message-ID: I just had the exact same problem trying to install in a new Mac mini Core Duo (32 bits) with numpy 1.0.3. Using numpy from svn fixed the problem! Andres > Message: 1 > Date: Wed, 20 Jun 2007 01:17:09 +0200 > From: "Guillem Borrell Nogueras" > Subject: Re: [SciPy-user] Problems building scipy from svn > To: "SciPy Users List" > Message-ID: > <835da2a60706191617r417a444dp3a65f5052b395a7a at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Thanks! > > It fixes the error in my 64 bit computer but my 32 bit one is still > reluctant to build scipy. I'll do a bit more research. 
> > On 6/19/07, David Cournapeau wrote: > > > > Guillem Borrell Nogueras wrote: > > > Hello > > > > > > I am trying to build scipy from source and I get this error message > > > > > > building extension " scipy.fftpack._fftpack" sources > > > target build/src.linux-x86_64-2.5/_fftpackmodule.c does not exist: > > > Assuming _fftpackmodule.c was generated with "build_src > > > --inplace" command. > > > error: '_fftpackmodule.c' missing > > > > > > > > > This happens for every module, if I disable the fftpack subpackage in > > > Lib/setup.py I get this > > > > > > building extension "scipy.integrate.vode" sources > > > target build/src.linux-x86_64-2.5/vodemodule.c does not exist: > > > Assuming vodemodule.c was generated with "build_src --inplace" > > > command. > > > error: ' vodemodule.c' missing > > > > > > > > > I am able to build every single subpackage. > > > > > > [guillem at sisyphus scipy]$ cd Lib/fftpack/ > > > [guillem at sisyphus fftpack]$ python setup.py build > > > (...) > > > ok > > > > > > > > > I got the same message on two different computers and I can't find any > > > misconfiguration. > > > > > > Any ideas? > > > > > I got the same problem on 64 bits arch with last released numpy. You > > should update to numpy svn, and the problem is solved. > > > > David > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Andr?s Gonz?lez Mancera, Ph.D. Biofluid Mechanics Lab Department of Mechanical Engineering University of Maryland, Baltimore County andres.gonzalez at umbc.edu 410-455-3347 From domi at vision.ee.ethz.ch Thu Jun 21 03:20:20 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Thu, 21 Jun 2007 09:20:20 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <2E1480E5-1E8D-446C-8EE2-E3CFDC20AC7C@utoronto.ca> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <46798436.9040603@vision.ee.ethz.ch> <46798B92.4@vision.ee.ethz.ch> <2E1480E5-1E8D-446C-8EE2-E3CFDC20AC7C@utoronto.ca> Message-ID: <467A26B4.204@vision.ee.ethz.ch> David Warde-Farley wrote: > On 20-Jun-07, at 4:18 PM, Dominik Szczerba wrote: > >> I got it (partially) working, but am not sure about optimality. In >> particular, will fromstring copy memory into the array or >> decompress in >> place? I think the former (how else would it know the size, and tell() >> will be slow), but please correct me if I am wrong. > > I would almost certainly bet it would do a copy. Did you try using Is there a way to avoid it if I know the size of the unpacked sequence a priori? > Anne's suggestion of scipy.read_array > with your 'fh' object? Yes I did and reported it back to the list (it works only for ascii data) > > Also, somebody correct me if I'm wrong, but I don't think modifying > the 'shape' property directly is the > recommended way to do it, I think you should be using ps.resize(). Thanks for a warning, but actually, I was able to do things with so formed array (matplotlib plots, usual stuff like sqrt and powers etc.) Thanks a lot, Dominik > > David > > >> import gzip >> fh = gzip.GzipFile("test.dat.gz", 'rb'); >> #ps = zeros(256*256) - will it help? 
>> ps = fromstring(fh.read(), 'd')
>> ps.shape = (256,256)
>> fh.close()
>> fp = open('test.dat', 'wb')
>> io.numpyio.fwrite(fp, ps.size, ps)
>> fp.close()
>>
>> - Dominik
>>
>> Dominik Szczerba wrote:
>>> That works very well for ascii files, but I failed to figure out
>>> about binary data...
>>> Thanks for any hints,
>>> - Dominik
>>>
>>> Anne Archibald wrote:
>>>> On 20/06/07, Dominik Szczerba wrote:
>>>>> Yes, I know it, but it does not return a scipy array, does it?
>>>>> Can I achieve it without copying memory? (I have huge arrays to
>>>>> process)
>>>> If the bz2 module will provide a file-like object, scipy.read_array
>>>> can read from that.
>>>>
>>>> Anne
>>
>> --
>> Dominik Szczerba, Ph.D.
>> Computer Vision Lab CH-8092 Zurich
>> http://www.vision.ee.ethz.ch/~domi

-- 
Dominik Szczerba, Ph.D.
Computer Vision Lab CH-8092 Zurich
http://www.vision.ee.ethz.ch/~domi

From massimo.sandal at unibo.it Thu Jun 21 05:24:58 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Thu, 21 Jun 2007 11:24:58 +0200
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: 
References: <46792715.6030707@unibo.it>
Message-ID: <467A43EA.4020402@unibo.it>

Sorry, but I am quite a noob in serious data analysis (degree in
molecular biology, sigh)...

> The easiest solution is to rescale your y values by the uncertainties
> before doing the fit.

What do you mean by that?

> Now, if your errors are not Gaussian, least-squares is no longer the
> correct approach and your life becomes more difficult...

In which sense not Gaussian? In the sense that for each point, the
uncertainty is not Gaussian distributed? It should be, at least to a
good approximation. If it is in another sense, please explain...

thanks,
m.

-- 
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387
-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL: 

From matthieu.brucher at gmail.com Thu Jun 21 05:29:05 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 21 Jun 2007 11:29:05 +0200
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <467A43EA.4020402@unibo.it>
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it>
Message-ID: 

> > Now, if your errors are not Gaussian, least-squares is no longer the
> > correct approach and your life becomes more difficult...
>
> In which sense not Gaussian? In the sense that for each point, the
> uncertainty is not Gaussian distributed? It should be, at least to a
> good approximation. If it is in another sense, please explain...

If the error is not Gaussian (normally distributed, ...), least squares
is not the "most likely" optimization (maximizing the likelihood on
Gaussian data is the same as least squares), so you should use more
robust cost functions.
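When the errors are Gaussian but merely have different widths, though,
the rescaling suggested earlier in the thread is a one-liner with
leastsq: divide each residual by its own sigma, which gives every
rescaled residual unit variance. A sketch (the model and the numbers are
invented):

import numpy as np
from scipy.optimize import leastsq

def model(p, x):
    return p[0] * np.exp(p[1] * x)          # invented model

def residuals(p, x, y, sigma):
    # dividing by sigma makes all rescaled residuals ~ N(0, 1),
    # so plain least squares is again the maximum-likelihood fit
    return (y - model(p, x)) / sigma

x = np.linspace(0.0, 1.0, 50)
sigma = np.linspace(11.0, 5.0, 50)          # noisier at the start, as in the thread
y = model([2.0, 1.5], x) + sigma * np.random.randn(50)

p_fit, ier = leastsq(residuals, [1.0, 1.0], args=(x, y, sigma))
print p_fit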
Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sandal at unibo.it Thu Jun 21 06:31:55 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 21 Jun 2007 12:31:55 +0200 Subject: [SciPy-user] nonlinear fit with non uniform error? In-Reply-To: References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> Message-ID: <467A539B.8070505@unibo.it> Matthieu Brucher ha scritto: > In which sense not Gaussian? In the sense that for each point, the > uncertainity is not Gaussian distributed? It should at least with good > approximation be. If it is in another sense, please explain... > > > If the error is not Gaussian (normally distributed, ...), least squares > is not the "most likely" optimization (maximizing likelyhood on gaussian > data is the same as least squares), you should use more robust cost > functions. I guess you refer to the distribution of the error for *each single point*, not the distribution of the average error in the dataset for different points. In this case yes, it is Gaussian, so there should be no problem. My question was different, anyway: each point can have a different error size (i.e. sigma of gaussian distribution) respect to its neighbours. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From faltet at carabos.com Thu Jun 21 06:30:02 2007 From: faltet at carabos.com (Francesc Altet) Date: Thu, 21 Jun 2007 12:30:02 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <46797977.9090601@vision.ee.ethz.ch> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> <46797977.9090601@vision.ee.ethz.ch> Message-ID: <1182421803.2676.11.camel@carabos.com> El dc 20 de 06 del 2007 a les 21:01 +0200, en/na Dominik Szczerba va escriure: > PyTables is great (and big) while I just need to read in a sequence of > values. Ok, that's fine. In any case, I'm interested in knowing the reasons on why you are using bzip2 instead zlib. Have you detected some data pattern where you get significantly more compression than by using zlib for example?. I'm asking this because, in my experience with numerical data, I was unable to detect important compression level differences between bzip2 and zlib. See: http://www.pytables.org/docs/manual/ch05.html#compressionIssues for some experiments in that regard. I'd appreciate any input on this subject (bzip2 vs zlib). -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From matthieu.brucher at gmail.com Thu Jun 21 06:51:31 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 21 Jun 2007 12:51:31 +0200 Subject: [SciPy-user] nonlinear fit with non uniform error? 
In-Reply-To: <467A539B.8070505@unibo.it> References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it> Message-ID: > > I guess you refer to the distribution of the error for *each single > point*, not the distribution of the average error in the dataset for > different points. In this case yes, it is Gaussian, so there should be > no problem. It is the distribution for all errors. If it is the same distribution for all points, OK with least squares. If it is not, you have to scale the points so that the errors follow the same gaussian law. My question was different, anyway: each point can have a different error > size (i.e. sigma of gaussian distribution) respect to its neighbours. > > m. > > -- > Massimo Sandal > University of Bologna > Department of Biochemistry "G.Moruzzi" > > snail mail: > Via Irnerio 48, 40126 Bologna, Italy > > email: > massimo.sandal at unibo.it > > tel: +39-051-2094388 > fax: +39-051-2094387 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From domi at vision.ee.ethz.ch Thu Jun 21 06:57:02 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Thu, 21 Jun 2007 12:57:02 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <1182421803.2676.11.camel@carabos.com> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> <46797977.9090601@vision.ee.ethz.ch> <1182421803.2676.11.camel@carabos.com> Message-ID: <467A597E.6010105@vision.ee.ethz.ch> Hi, I meant bz2 over zlib due to higher compression, if slower performance. This common belief was usually parallel to my experience. However, a simple test below made with fresh morning data clearly undermines this thinking: > du -hsc test9*.dat 428M total > time gzip test9*.dat real 0m31.663s user 0m28.946s sys 0m1.612s > du -hsc test9*.dat.gz 215M total > time gunzip test9*.dat.gz real 0m7.447s user 0m6.036s sys 0m1.264s > time bzip2 test9*.dat real 2m1.696s user 1m54.527s sys 0m4.008s > du -hsc test9*.dat.bz2 219M total > time bunzip2 test9*.dat.bz2 real 0m43.252s user 0m39.926s sys 0m2.792s I am surprised, as I well remember cases where I could gain 20%. But indeed, given the much slower performance, you have me convinced to use zlib over bz2. thanks for forcing me to do this test, - Dominik Francesc Altet wrote: > El dc 20 de 06 del 2007 a les 21:01 +0200, en/na Dominik Szczerba va > escriure: >> PyTables is great (and big) while I just need to read in a sequence of >> values. > > Ok, that's fine. In any case, I'm interested in knowing the reasons on > why you are using bzip2 instead zlib. Have you detected some data > pattern where you get significantly more compression than by using zlib > for example?. > > I'm asking this because, in my experience with numerical data, I was > unable to detect important compression level differences between bzip2 > and zlib. See: > > http://www.pytables.org/docs/manual/ch05.html#compressionIssues > > for some experiments in that regard. > > I'd appreciate any input on this subject (bzip2 vs zlib). > -- Dominik Szczerba, Ph.D. 
Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From faltet at carabos.com Thu Jun 21 07:30:41 2007 From: faltet at carabos.com (Francesc Altet) Date: Thu, 21 Jun 2007 13:30:41 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <467A597E.6010105@vision.ee.ethz.ch> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> <46797977.9090601@vision.ee.ethz.ch> <1182421803.2676.11.camel@carabos.com> <467A597E.6010105@vision.ee.ethz.ch> Message-ID: <1182425441.2676.20.camel@carabos.com> El dj 21 de 06 del 2007 a les 12:57 +0200, en/na Dominik Szczerba va escriure: > Hi, > > I meant bz2 over zlib due to higher compression, if slower performance. > This common belief was usually parallel to my experience. However, a > simple test below made with fresh morning data clearly undermines this > thinking: > > > > > du -hsc test9*.dat > > 428M total > > > time gzip test9*.dat > > real 0m31.663s > user 0m28.946s > sys 0m1.612s > > > du -hsc test9*.dat.gz > > 215M total > > > time gunzip test9*.dat.gz > > real 0m7.447s > user 0m6.036s > sys 0m1.264s > > > time bzip2 test9*.dat > > real 2m1.696s > user 1m54.527s > sys 0m4.008s > > > du -hsc test9*.dat.bz2 > > 219M total > > > time bunzip2 test9*.dat.bz2 > > real 0m43.252s > user 0m39.926s > sys 0m2.792s > > > I am surprised, as I well remember cases where I could gain 20%. Yeah, there should be cases where bzip2 is clearly better than zlib and one of these could be images. My teammate Ivan has come with this example: -rw------- 1 ivan ivan 733373 2007-06-21 13:02 lena1.tif.gz -rw------- 1 ivan ivan 584478 2007-06-21 13:02 lena2.tif.bz2 (you should already know where the source is: www.lenna.org ) But when it comes to general binary data for scientific uses, the compression advantages of bzip2 over zlib are less clear. > But > indeed, given the much slower performance, you have me convinced to use > zlib over bz2. > > thanks for forcing me to do this test, You are welcome ;) -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From domi at vision.ee.ethz.ch Thu Jun 21 07:45:13 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Thu, 21 Jun 2007 13:45:13 +0200 Subject: [SciPy-user] read/write compressed files In-Reply-To: <1182425441.2676.20.camel@carabos.com> References: <4671A96D.70709@vision.ee.ethz.ch> <4671AD5A.3040309@gmail.com> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> <46797977.9090601@vision.ee.ethz.ch> <1182421803.2676.11.camel@carabos.com> <467A597E.6010105@vision.ee.ethz.ch> <1182425441.2676.20.camel@carabos.com> Message-ID: <467A64C9.2010304@vision.ee.ethz.ch> There is also another thing, namely bz2 uses --best per default while gzip uses -6. The whole thing is of course strongly data-dependent. - Dominik Francesc Altet wrote: > El dj 21 de 06 del 2007 a les 12:57 +0200, en/na Dominik Szczerba va > escriure: >> Hi, >> >> I meant bz2 over zlib due to higher compression, if slower performance. 
>> This common belief was usually parallel to my experience. However, a simple test below, made with fresh morning data, clearly undermines this thinking:
>>
>>> du -hsc test9*.dat
>> 428M total
>>
>>> time gzip test9*.dat
>> real 0m31.663s
>> user 0m28.946s
>> sys 0m1.612s
>>
>>> du -hsc test9*.dat.gz
>> 215M total
>>
>>> time gunzip test9*.dat.gz
>> real 0m7.447s
>> user 0m6.036s
>> sys 0m1.264s
>>
>>> time bzip2 test9*.dat
>> real 2m1.696s
>> user 1m54.527s
>> sys 0m4.008s
>>
>>> du -hsc test9*.dat.bz2
>> 219M total
>>
>>> time bunzip2 test9*.dat.bz2
>> real 0m43.252s
>> user 0m39.926s
>> sys 0m2.792s
>>
>> I am surprised, as I well remember cases where I could gain 20%.
>
> Yeah, there should be cases where bzip2 is clearly better than zlib and one of these could be images. My teammate Ivan has come up with this example:
>
> -rw------- 1 ivan ivan 733373 2007-06-21 13:02 lena1.tif.gz
> -rw------- 1 ivan ivan 584478 2007-06-21 13:02 lena2.tif.bz2
>
> (you should already know where the source is: www.lenna.org )
>
> But when it comes to general binary data for scientific uses, the compression advantages of bzip2 over zlib are less clear.
>
>> But indeed, given the much slower performance, you have me convinced to use zlib over bz2.
>>
>> thanks for forcing me to do this test,
>
> You are welcome ;)

-- Dominik Szczerba, Ph.D.
Computer Vision Lab
CH-8092 Zurich
http://www.vision.ee.ethz.ch/~domi

From massimo.sandal at unibo.it Thu Jun 21 08:31:24 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Thu, 21 Jun 2007 14:31:24 +0200
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To:
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it>
Message-ID: <467A6F9C.30209@unibo.it>

Matthieu Brucher ha scritto:
> I guess you refer to the distribution of the error for *each single point*, not the distribution of the average error in the dataset for different points. In this case yes, it is Gaussian, so there should be no problem.
>
> It is the distribution for all errors. If it is the same distribution for all points, OK with least squares. If it is not, you have to scale the points so that the errors follow the same gaussian law.

Sigh, there is some misunderstanding between us (surely due to my utter ignorance).

Imagine I have three data points, A B C.

Usually, if the error is uniform (on Y) it can be:
Ay +/- 5
By +/- 5
Cy +/- 5

In my case it is:
Ay +/- 5
By +/- 7
Cy +/- 11

Now, the error in all those three cases behaves gaussianly, but with different widths.

1)Does this mean that least squares is NOT ok?
2)What does "rescaling" mean in this context?

m.

-- Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"
snail mail: Via Irnerio 48, 40126 Bologna, Italy
email: massimo.sandal at unibo.it
tel: +39-051-2094388
fax: +39-051-2094387

-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL:
From matthieu.brucher at gmail.com Thu Jun 21 08:50:37 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 21 Jun 2007 14:50:37 +0200
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <467A6F9C.30209@unibo.it>
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it> <467A6F9C.30209@unibo.it>
Message-ID:

> 1)Does this mean that least squares is NOT ok?

Yes, LS is _NOT_ OK, because it assumes that the distribution (with its parameters) is the same for all errors. I don't remember exactly, but this may be due to ergodicity.

> 2)What does "rescaling" mean in this context?

You must change B and C so that:
Ay +/- 5
B'y +/- 5
C'y +/- 5

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david.huard at gmail.com Thu Jun 21 09:08:36 2007
From: david.huard at gmail.com (David Huard)
Date: Thu, 21 Jun 2007 09:08:36 -0400
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To:
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it> <467A6F9C.30209@unibo.it>
Message-ID: <91cf711d0706210608p701996e0tef6b9bcb383c3b6@mail.gmail.com>

Hi,

What you have is a heteroscedastic normal distribution (varying variance) describing the residuals.

2007/6/21, Matthieu Brucher :
> 1)Does this mean that least squares is NOT ok?
>
> Yes, LS is _NOT_ OK, because it assumes that the distribution (with its parameters) is the same for all errors. I don't remember exactly, but this may be due to ergodicity.

Well, let's put things in perspective. You can still use ordinary least-squares. Theoretically, this means you're making the assumption that the error mean and variance are fixed and constant. In your case, this is not true, and you can consider the LS solution an approximation. What will happen under this approximation is that large errors on Cy will tend to dominate the residuals, and values in Ay will probably not be fitted optimally. I advise you to try it anyway and visually check whether you care about that or not.

> 2)What does "rescaling" mean in this context?
>
> You must change B and C so that:
> Ay +/- 5
> B'y +/- 5
> C'y +/- 5

Or maximize the likelihood of a multivariate normal distribution whose covariance matrix describes your assumption about the heteroscedasticity of the residuals:

\Sigma =
| \sigma_A^2     0          0      |
|     0      \sigma_B^2     0      |
|     0          0      \sigma_C^2 |

Heteroscedastic log-likelihood:
\ln L = -n/2 \ln(2\pi) - 1/2 \sum_i \ln(\sigma_i^2) - 1/2 \sum_i \sigma_i^{-2} (y_{obs,i} - y_{sim,i})^2

You might also consider the possibility that your errors are multiplicative rather than additive. In this case, describing the residuals by a lognormal distribution could make more sense.

Maximize lognormal likelihood: L = lognormal(y_sim | ln(y_obs), \sigma)

Cheers,

David

Matthieu
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From massimo.sandal at unibo.it Thu Jun 21 09:14:52 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Thu, 21 Jun 2007 15:14:52 +0200
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To:
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it> <467A6F9C.30209@unibo.it>
Message-ID: <467A79CC.5030804@unibo.it>

Matthieu Brucher ha scritto:
> 1)Does this mean that least squares is NOT ok?
>
> Yes, LS is _NOT_ OK, because it assumes that the distribution (with its parameters) is the same for all errors. I don't remember exactly, but this may be due to ergodicity.

OK. I just wanted to be sure I understood.

> 2)What does "rescaling" mean in this context?
>
> You must change B and C so that:
> Ay +/- 5
> B'y +/- 5
> C'y +/- 5

Huh? How can this be possible/make sense whatsoever?

m.

-- Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"
snail mail: Via Irnerio 48, 40126 Bologna, Italy
email: massimo.sandal at unibo.it
tel: +39-051-2094388
fax: +39-051-2094387

-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL:
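To make David's log-likelihood concrete, a minimal sketch -- not from the thread itself; the model, data and sigmas below are all made up, and with fixed sigmas minimizing this quantity is equivalent to weighted least squares:

import numpy as np
from scipy.optimize import fmin

# made-up data: three points with the different sigmas from the A/B/C example
x = np.array([0.0, 1.0, 2.0])
y_obs = np.array([4.9, 7.2, 10.8])
sigma = np.array([5.0, 7.0, 11.0])

def y_sim(b):
    # invented toy model: a straight line
    return b[0] * x + b[1]

def neg_log_like(b):
    # negative of the heteroscedastic log-likelihood above,
    # with the constant -n/2 ln(2*pi) term dropped
    r = y_obs - y_sim(b)
    return 0.5 * np.sum(np.log(sigma ** 2)) + 0.5 * np.sum(r ** 2 / sigma ** 2)

b_hat = fmin(neg_log_like, [1.0, 1.0])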
From t_crane at mrl.uiuc.edu Thu Jun 21 09:40:45 2007
From: t_crane at mrl.uiuc.edu (Trevis Crane)
Date: Thu, 21 Jun 2007 08:40:45 -0500
Subject: [SciPy-user] nonlinear fit with non uniform error?
Message-ID: <9EADC1E53F9C70479BF6559370369114142F0B@mrlnt6.mrl.uiuc.edu>

As an aside, will those of you who are *more* in the know on this topic than the rest of us suggest a good text that has a worthwhile treatment of this subject (as well as other related data analysis/statistical issues)? I'd love to learn more about it, but just jumping on Amazon and picking a book at almost random seems like a good way to waste a lot of money I don't have on books that I don't need, so if you have a favorite reference or text, I'm interested in knowing about it.

thanks,
trevis

-----Original Message-----
From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of David Huard
Sent: Thursday, June 21, 2007 8:09 AM
To: SciPy Users List
Subject: Re: [SciPy-user] nonlinear fit with non uniform error?

Hi,

What you have is a heteroscedastic normal distribution (varying variance) describing the residuals.

2007/6/21, Matthieu Brucher :

1)Does this mean that least squares is NOT ok?

Yes, LS is _NOT_ OK, because it assumes that the distribution (with its parameters) is the same for all errors. I don't remember exactly, but this may be due to ergodicity.

Well, let's put things in perspective. You can still use ordinary least-squares. Theoretically, this means you're making the assumption that the error mean and variance are fixed and constant. In your case, this is not true, and you can consider the LS solution an approximation. What will happen under this approximation is that large errors on Cy will tend to dominate the residuals, and values in Ay will probably not be fitted optimally. I advise you to try it anyway and visually check whether you care about that or not.

2)What does "rescaling" mean in this context?

You must change B and C so that:
Ay +/- 5
B'y +/- 5
C'y +/- 5

Or maximize the likelihood of a multivariate normal distribution whose covariance matrix describes your assumption about the heteroscedasticity of the residuals:

\Sigma =
| \sigma_A^2     0          0      |
|     0      \sigma_B^2     0      |
|     0          0      \sigma_C^2 |

Heteroscedastic log-likelihood:
\ln L = -n/2 \ln(2\pi) - 1/2 \sum_i \ln(\sigma_i^2) - 1/2 \sum_i \sigma_i^{-2} (y_{obs,i} - y_{sim,i})^2

You might also consider the possibility that your errors are multiplicative rather than additive. In this case, describing the residuals by a lognormal distribution could make more sense.

Maximize lognormal likelihood: L = lognormal(y_sim | ln(y_obs), \sigma)

Cheers,

David

Matthieu

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From massimo.sandal at unibo.it Thu Jun 21 09:46:49 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Thu, 21 Jun 2007 15:46:49 +0200
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <91cf711d0706210608p701996e0tef6b9bcb383c3b6@mail.gmail.com>
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it> <467A6F9C.30209@unibo.it> <91cf711d0706210608p701996e0tef6b9bcb383c3b6@mail.gmail.com>
Message-ID: <467A8149.9080804@unibo.it>

David Huard ha scritto:
> Hi,
>
> What you have is a heteroscedastic normal distribution (varying variance) describing the residuals.
>
> 2007/6/21, Matthieu Brucher :
> > 1)Does this mean that least squares is NOT ok?
> >
> > Yes, LS is _NOT_ OK, because it assumes that the distribution (with its parameters) is the same for all errors. I don't remember exactly, but this may be due to ergodicity.
>
> Well, let's put things in perspective. You can still use ordinary least-squares. Theoretically, this means you're making the assumption that the error mean and variance are fixed and constant. In your case, this is not true, and you can consider the LS solution an approximation. What will happen under this approximation is that large errors on Cy will tend to dominate the residuals, and values in Ay will probably not be fitted optimally. I advise you to try it anyway and visually check whether you care about that or not.

Yes, it's what I already do, and it works fairly well. I'd like to see how much *better* it becomes. It can be useful in some contexts, so I wanted to know how to implement it.

> Or maximize the likelihood of a multivariate normal distribution whose covariance matrix describes your assumption about the heteroscedasticity of the residuals:
>
> \Sigma =
> | \sigma_A^2     0          0      |
> |     0      \sigma_B^2     0      |
> |     0          0      \sigma_C^2 |
>
> Heteroscedastic log-likelihood:
> \ln L = -n/2 \ln(2\pi) - 1/2 \sum_i \ln(\sigma_i^2) - 1/2 \sum_i \sigma_i^{-2} (y_{obs,i} - y_{sim,i})^2
>
> You might also consider the possibility that your errors are multiplicative rather than additive. In this case, describing the residuals by a lognormal distribution could make more sense.
>
> Maximize lognormal likelihood: L = lognormal(y_sim | ln(y_obs), \sigma)

I'll try to make sense of it...

m.

-- Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"
snail mail: Via Irnerio 48, 40126 Bologna, Italy
email: massimo.sandal at unibo.it
tel: +39-051-2094388
fax: +39-051-2094387

-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL:

From david.huard at gmail.com Thu Jun 21 10:19:29 2007
From: david.huard at gmail.com (David Huard)
Date: Thu, 21 Jun 2007 10:19:29 -0400
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <9EADC1E53F9C70479BF6559370369114142F0B@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114142F0B@mrlnt6.mrl.uiuc.edu>
Message-ID: <91cf711d0706210719w65526575s25a73f3375e72506@mail.gmail.com>

Trevis,

2007/6/21, Trevis Crane :
> As an aside, will those of you who are *more* in the know on this topic than the rest of us suggest a good text that has a worthwhile treatment of this subject (as well as other related data analysis/statistical issues)?

My bible is Probability Theory: The Logic of Science by E. T. Jaynes.

http://omega.albany.edu:8008/JaynesBook.html

It's not so much a book about optimization and fitting as about the general principles of probability. It was worth the reading time, though.
There is a paper in the hydrological literature (Sorooshian and Dracup, water resources research, vol.16, no.2, 1980) that discusses the calibration of hydrologic models in correlated and heteroscedastic error cases. I guess every discipline has a paper similar to this one but this is the one I know. There is also a Book by A. Zellner, An Introduction to Bayesian Inference in Econometrics, 1971 that I found helpful. As you can see, I'm not aware of a comprehensive treatise on the subject. I just picked up bits from different articles. HTH, David I'd love to learn more about it, but just jumping on Amazon and picking a > book at almost random seems like a good way to waste a lot of money I don't > have on books that I don't need, so if you have a favorite reference or > text, I'm interested in knowing about it. > > > > thanks, > > trevis > > > > -----Original Message----- > *From:* scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] > *On Behalf Of *David Huard > *Sent:* Thursday, June 21, 2007 8:09 AM > *To:* SciPy Users List > *Subject:* Re: [SciPy-user] nonlinear fit with non uniform error? > > > > Hi, > > What you have is an heteroscedastic normal distribution (varying variance) > describing the residuals. > > 2007/6/21, Matthieu Brucher : > > 1)Does this mean that least squares is NOT ok? > > Yes, LS is _NOT_ OK because it assumes that the distribution (with its > parameters) is the same for all errors. I don't remember exactly, but this > may be due to ergodicity > > > Well, let's put things in perspective. You can still use ordinary > least-squares. Theoretically, this means you're making the assumption that > the error mean and variance are fixed and constant. In your case, this is > not true and you can consider the LS solution like an approximation. What > will happen under this approximation is that large errors on Cy will tend to > dominate the residuals, and values in Ay will probably not be fitted > optimally. I advise you try it anyway and visually check whether you care > about that or not. > > 2)What does "rescaling" mean in this context? > > > > You must change B and C so that : > Ay +/- 5 > B'y +/- 5 > C'y +/- 5 > > > Or maximize the likelihood of a multivariate normal distribution, whose > covariance matrix describes your assumption about the heteroscedasticity of > the residuals. > > \Sigma = > | \sigma_A^2 0 0 | > | 0 \sigma_B^2 0 | > | 0 0 \sigma_C^2 | > > Heteroscedastic likelihood = -n/2 \ln(2\pi) - 1/2 \sum \ln(\sigma_i^2) > -1/2 \sum \sigma_i^{-2} (y_{obs} - y_{sim})^2 > > > You might also consider the possibility that your errors are > multiplicative rather than additive. In this case, describing the residuals > by a lognormal distribution could make more sense. > > Maximize lognormal likelihood: L=lognormal(y_sim | ln(y_obs), \sigma) > > Cheers, > > David > > Matthieu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Thu Jun 21 11:43:45 2007 From: hasslerjc at comcast.net (John Hassler) Date: Thu, 21 Jun 2007 11:43:45 -0400 Subject: [SciPy-user] nonlinear fit with non uniform error? 
In-Reply-To: <91cf711d0706210719w65526575s25a73f3375e72506@mail.gmail.com> References: <9EADC1E53F9C70479BF6559370369114142F0B@mrlnt6.mrl.uiuc.edu> <91cf711d0706210719w65526575s25a73f3375e72506@mail.gmail.com> Message-ID: <467A9CB1.80902@comcast.net> An HTML attachment was scrubbed... URL: From daniel.wheeler at nist.gov Thu Jun 21 12:03:18 2007 From: daniel.wheeler at nist.gov (Daniel Wheeler) Date: Thu, 21 Jun 2007 16:03:18 +0000 Subject: [SciPy-user] PDE Solver in SciPy In-Reply-To: References: Message-ID: <2046623B-2B91-4732-930F-CDB34A13A378@nist.gov> On Jun 19, 2007, at 8:57 AM, Lorenzo Isella wrote: > Deal All, > I have been using happily for a quite a while the ODE solver in SciPy > (I refer to integrate.odeint). > I wonder now if there is any PDE solver available for Python (better > if it was incorporated into SciPy). Just a note to say that fipy is actively supported and developed. Certainly fipy is too large and has to many nonstandard data types to be included under the scipy umbrella. Possibly a more light weight PDE solver could be included for structured grids and standard terms. There is also a package called OOF that uses python. See > After a bit of online search, the best I could find under the SciPy > umbrella was fipy: > > http://www.scipy.org/FiPy?highlight=%28fipy%29 > and > > http://www.ctcms.nist.gov/fipy/index.html > > Is that all? I am mainly interested in population balance equations > for aerosol science applications. > Any suggestion here is really welcome. Please sign up to the fipy mailing list if you need help. Cheers -- Daniel Wheeler From peridot.faceted at gmail.com Thu Jun 21 13:03:36 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 21 Jun 2007 13:03:36 -0400 Subject: [SciPy-user] nonlinear fit with non uniform error? In-Reply-To: <467A43EA.4020402@unibo.it> References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> Message-ID: On 21/06/07, massimo sandal wrote: > Sorry, but I am quite a noob in serious data analysis (degree in > molecular biology, sigh)... > > > The easiest solution is to rescale your y values by the uncertainties > > before doing the fit. > > What do you mean by that? If you're doing a linear least-squares fit, you're producing a matrix M and searching for a set of parameters P so that M*P is as close to your measured values Y as possible, in a least-squares sense. (If you're fitting a straight line, for example, P is the two-element vector [m,b], and M is the matrix whose first column is the x values and whose second column is all ones.) In order to adjust the relative importance of the different data points, you can divide a row of M and the corresponding row of Y by the same constant. This will have the effect of changing the relative importance of this row compared to the others. In numpy terminology, if U is the vector of uncertainties on the Y values, you want to replace P = lstsq(A,Y) with P = lstsq((1/U)[:,newaxis]*A,(1/U)*Y) Effectively, this rescales all your measurements so that they have the same uncertainty. Think of it as writing all your measurements in units of one sigma. If you're doing nonlinear least squares, a similar trick is still possible, though of course it is no longer matrix multiplication. > > Now, if your errors are not Gaussian, least-squares is no longer the > > correct approach and your life becomes more difficult... > > In which sense not Gaussian? In the sense that for each point, the > uncertainity is not Gaussian distributed? 
> It should at least with good approximation be. If it is in another sense, please explain...

No, that's what I meant. There's no need to get into all this in your case, it seems. But in astronomical data, it is very common for the distribution of values to not be Gaussian at all - Poisson errors are probably the most common, though chi-squared and other more exotic distributions crop up too. In these cases, least squares fitting simply gives the wrong answer (though sometimes it's not too wrong, to astronomical accuracy).

Anne

From robert.kern at gmail.com Thu Jun 21 13:09:12 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Jun 2007 12:09:12 -0500
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <467A79CC.5030804@unibo.it>
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467A539B.8070505@unibo.it> <467A6F9C.30209@unibo.it> <467A79CC.5030804@unibo.it>
Message-ID: <467AB0B8.6020900@gmail.com>

massimo sandal wrote:
> Matthieu Brucher ha scritto:
>> 1)Does this mean that least squares is NOT ok?
>>
>> Yes, LS is _NOT_ OK because it assumes that the distribution (with its parameters) is the same for all errors. I don't remember exactly, but this may be due to ergodicity
>
> OK. I just wanted to be sure I understood.

However, weighted least squares works just fine.

>> 2)What does "rescaling" mean in this context?
>>
>> You must change B and C so that :
>> Ay +/- 5
>> B'y +/- 5
>> C'y +/- 5
>
> Huh? How can this be possible/make sense whatsoever?

I think the notation was misunderstood. Let's start from scratch, at least notationally. You have a function

y = f(b, x)

where `b` is the parameter vector, `x` is a vector of input points, and `y` is the vector of outputs corresponding to those inputs. Now, you have data consisting of vectors x0 and y0. According to the model, we have random variables Y0[i] which are normally distributed about f(b, x0[i]) each with their own variance v[i]. Equivalently, we can say that the residuals

R[i] ~ N(0, v[i])

Now, to solve this problem with leastsq() we need to rescale the *residuals* such that their corresponding random variables all have the same variance.

def residuals(b, x0=x0, y0=y0, v=v):
    return (y0 - f(b, x0)) / sqrt(v)

Does this make sense?

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
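Anne's linear recipe and Robert's residual scaling, distilled into a runnable sketch -- the lstsq weighting comes from the thread, but the numbers are made up (U holds the per-point sigmas):

import numpy as np

# made-up data for a straight line y = m*x + b
x = np.array([0.0, 1.0, 2.0, 3.0])
Y = np.array([1.1, 2.9, 5.2, 6.8])
U = np.array([5.0, 5.0, 7.0, 11.0])          # one sigma per point

# design matrix: first column the x values, second column all ones
M = np.column_stack((x, np.ones_like(x)))

# ordinary least squares...
P = np.linalg.lstsq(M, Y)[0]

# ...and the weighted fit: divide each row of M and of Y by that point's sigma
Pw = np.linalg.lstsq((1.0 / U)[:, np.newaxis] * M, (1.0 / U) * Y)[0]

With fixed sigmas this weighted fit is also what maximizing David's heteroscedastic log-likelihood gives; the nonlinear analogue is exactly Robert's residuals-over-sigma function handed to leastsq().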
From ckkart at hoc.net Thu Jun 21 19:20:18 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 22 Jun 2007 08:20:18 +0900
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To:
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it>
Message-ID:

Anne Archibald wrote:
> On 21/06/07, massimo sandal wrote:
>> Sorry, but I am quite a noob in serious data analysis (degree in molecular biology, sigh)...
>>
>>> The easiest solution is to rescale your y values by the uncertainties before doing the fit.
>> What do you mean by that?
>
> If you're doing a linear least-squares fit, you're producing a matrix M and searching for a set of parameters P so that M*P is as close to your measured values Y as possible, in a least-squares sense. (If you're fitting a straight line, for example, P is the two-element vector [m,b], and M is the matrix whose first column is the x values and whose second column is all ones.)
>
> In order to adjust the relative importance of the different data points, you can divide a row of M and the corresponding row of Y by the same constant. This will have the effect of changing the relative importance of this row compared to the others. In numpy terminology, if U is the vector of uncertainties on the Y values, you want to replace
>
> P = lstsq(A,Y)
>
> with
>
> P = lstsq((1/U)[:,newaxis]*A,(1/U)*Y)
>
> Effectively, this rescales all your measurements so that they have the same uncertainty. Think of it as writing all your measurements in units of one sigma.
>
> If you're doing nonlinear least squares, a similar trick is still possible, though of course it is no longer matrix multiplication.

Is that the way scipy.odr handles the weights?

Christian

From robert.kern at gmail.com Thu Jun 21 19:39:54 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Jun 2007 18:39:54 -0500
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To:
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it>
Message-ID: <467B0C4A.7080102@gmail.com>

Christian K wrote:
> Is that the way scipy.odr handles the weights?

The weighted sum of the residuals is the value that is minimized. I like formulating the problem that way rather than going through the details of the linear case. I think it's more direct this way.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ryanlists at gmail.com Thu Jun 21 20:37:05 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Thu, 21 Jun 2007 19:37:05 -0500
Subject: [SciPy-user] Ubuntu Feisty upgrade problems
Message-ID:

Sorry, I feel like I whine about installation problems all the time lately. I just upgraded my home desktop to Ubuntu Feisty and have had some issues getting Scipy working. I think part of the problem is that this computer was running python 2.4 and there were some problems with side-by-side installation. I removed all references to the 2.4 directory from my path and PYTHONPATH, but still couldn't make the ubuntu python-scipy packages work. Anyways, I installed from source and things seemed to go o.k., but scipy.test() seg faulted. I ran this script that Robert Kern helped me come up with when I was testing my AMD executable for processors without SSE2:

from numpy import NumpyTest

packages = """
scipy.cluster
scipy.fftpack
scipy.integrate
scipy.interpolate
scipy.io
scipy.lib
scipy.linalg
scipy.linsolve
scipy.maxentropy
scipy.misc
scipy.odr
scipy.optimize
scipy.signal
scipy.sparse
scipy.special
scipy.stats
scipy.stsci
scipy.weave
""".strip().split()

# uncomment these two lines to skip the first (scipy.cluster)
# and last (scipy.weave) entries:
#packages.pop(0)
#packages.pop()

for subpkg in packages:
    print subpkg
    t = NumpyTest(subpkg)
    t.test(1, 2)

And I get this:

In [3]: run scipy_test.py
scipy.cluster
Warning: No test file found in /usr/lib/python2.5/site-packages/scipy/cluster/tests for module
Warning: No test file found in /usr/lib/python2.5/site-packages/scipy/cluster/tests for module
Found 9 tests for scipy.cluster.vq
Found 0 tests for __main__
Testing that kmeans2 init methods work.Segmentation fault (core dumped)

meaning the seg fault is from scipy.cluster. If I uncomment the two pop lines (not testing scipy.cluster or scipy.weave) I get no errors.

I don't use cluster that I can think of, but I would prefer a completely working scipy installation.

Can someone please help me with this?
Thanks, Ryan From robert.kern at gmail.com Thu Jun 21 21:46:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 Jun 2007 20:46:24 -0500 Subject: [SciPy-user] Ubuntu Feisty upgrade problems In-Reply-To: References: Message-ID: <467B29F0.3030204@gmail.com> Ryan Krauss wrote: > meaning the seg fault is from scipy.cluster. If I uncomment out the > to pop lines (not testing scipy.cluster or scipy.weave) I get no > errors. > > I don't use cluster that I can think of, but I would prefer a > completely working scipy installation. > > Can someone please help me with this? David Cournapeau has been making changes to scipy.cluster. Someone else has been seeing similar problems, but David hasn't been able to reproduce them, IIRC. Please coordinate with him and provide some more details about your platform (CPU, g++ version, SWIG version, etc.). Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Thu Jun 21 22:00:51 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 21 Jun 2007 21:00:51 -0500 Subject: [SciPy-user] Ubuntu Feisty upgrade problems In-Reply-To: <467B29F0.3030204@gmail.com> References: <467B29F0.3030204@gmail.com> Message-ID: Glad to help pin this down. ryan at am2:~$ g++ -v Using built-in specs. Target: i486-linux-gnu Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --program-suffix=-4.1 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --enable-mpfr --enable-checking=release i486-linux-gnu Thread model: posix gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4) =========================== ryan at am2:~$ cat /proc/cpuinfo processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 75 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ stepping : 2 cpu MHz : 1000.000 cache size : 512 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc bogomips : 2011.57 clflush size : 64 processor : 1 vendor_id : AuthenticAMD cpu family : 15 model : 75 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ stepping : 2 cpu MHz : 1000.000 cache size : 512 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc bogomips : 2011.57 clflush size : 64 ============================== ryan at am2:~$ swig -version SWIG Version 1.3.31 Compiled with g++ [i686-pc-linux-gnu] Please see http://www.swig.org for reporting bugs and further information Let me know if I can provide or try anything else. 
Ryan

On 6/21/07, Robert Kern wrote:
> Ryan Krauss wrote:
>> meaning the seg fault is from scipy.cluster. If I uncomment the two pop lines (not testing scipy.cluster or scipy.weave) I get no errors.
>>
>> I don't use cluster that I can think of, but I would prefer a completely working scipy installation.
>>
>> Can someone please help me with this?
>
> David Cournapeau has been making changes to scipy.cluster. Someone else has been seeing similar problems, but David hasn't been able to reproduce them, IIRC. Please coordinate with him and provide some more details about your platform (CPU, g++ version, SWIG version, etc.). Thanks.
>
> -- Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From ckkart at hoc.net Fri Jun 22 00:11:29 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 22 Jun 2007 13:11:29 +0900
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To: <467B0C4A.7080102@gmail.com>
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467B0C4A.7080102@gmail.com>
Message-ID:

Robert Kern wrote:
> Christian K wrote:
>> Is that the way scipy.odr handles the weights?
>
> The weighted sum of the residuals is the value that is minimized. I like formulating the problem that way rather than going through the details of the linear case. I think it's more direct this way.

I see. So then the same limitation applies: the error has to be gaussian, right?

Christian

From robert.kern at gmail.com Fri Jun 22 00:28:47 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Jun 2007 23:28:47 -0500
Subject: [SciPy-user] nonlinear fit with non uniform error?
In-Reply-To:
References: <46792715.6030707@unibo.it> <467A43EA.4020402@unibo.it> <467B0C4A.7080102@gmail.com>
Message-ID: <467B4FFF.80509@gmail.com>

Christian K wrote:
> Robert Kern wrote:
>> Christian K wrote:
>>> Is that the way scipy.odr handles the weights?
>> The weighted sum of the residuals is the value that is minimized. I like formulating the problem that way rather than going through the details of the linear case. I think it's more direct this way.
>
> I see. So then the same limitation applies: the error has to be gaussian, right?

Yes, of course.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Fri Jun 22 01:11:43 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 22 Jun 2007 14:11:43 +0900
Subject: [SciPy-user] Ubuntu Feisty upgrade problems
In-Reply-To: <467B29F0.3030204@gmail.com>
References: <467B29F0.3030204@gmail.com>
Message-ID: <467B5A0F.6040807@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
> Ryan Krauss wrote:
>> meaning the seg fault is from scipy.cluster. If I uncomment the two pop lines (not testing scipy.cluster or scipy.weave) I get no errors.
>> I don't use cluster that I can think of, but I would prefer a completely working scipy installation.
>>
>> Can someone please help me with this?
>
> David Cournapeau has been making changes to scipy.cluster. Someone else has been seeing similar problems, but David hasn't been able to reproduce them, IIRC. Please coordinate with him and provide some more details about your platform (CPU, g++ version, SWIG version, etc.). Thanks.

Looks like I screwed up on this one. To complete R. Kern's description: the problem mentioned by Nils was reproducible (it is just that I don't have access to a 64 bits cpu at work), and was solved in svn r3112. Is this the version you are using, Ryan? The warnings look strange too: are you using numpy 1.0.3 by any chance? This release seems buggy wrt 64 bits arch, and you should use numpy svn too.

David

From asefu at fooie.net Fri Jun 22 02:14:12 2007
From: asefu at fooie.net (Fahd Sultan)
Date: Fri, 22 Jun 2007 02:14:12 -0400
Subject: [SciPy-user] scipy fblas.so functions not found
In-Reply-To:
References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> <20070614193211.GG14029@avicenna.cc.columbia.edu>
Message-ID: <467B68B4.1060903@fooie.net>

I was trying the build on the same arch. I have managed to build scipy/numpy. Please read below for how I did it.

First I installed gcc4, gcc4-fortran, and libgfortran from the redhat cd. When I tried to build lapack-3.1.1-1.fc7.src.rpm I got the same relocation errors. I figured that something wasn't right, so I built gcc4-4.1.1-53.EL4.src.rpm from src. I checked the Makefile beforehand and it seemed to be configured to build the libraries shared. From the resulting rpms I installed

gcc4-4.1.1-53.EL4.x86_64.rpm
gcc4-c++-4.1.1-53.EL4.x86_64.rpm
gcc4-gfortran-4.1.1-53.EL4.x86_64.rpm
libgcj4-4.1.1-53.EL4.x86_64.rpm
libgcj4-devel-4.1.1-53.EL4.x86_64.rpm
libgfortran-4.1.1-53.EL4.x86_64.rpm
libgomp-4.1.1-53.EL4.x86_64.rpm
libmudflap-4.1.1-53.EL4.x86_64.rpm

This time I was able to build lapack. I then installed both lapack and blas and their devel rpms:

lapack-3.1.1-1.x86_64.rpm
blas-3.1.1-1.x86_64.rpm

I downloaded, built and installed numpy-1.0.3-1.src.rpm. I found scipy-0.5.1-1.src.rpm on rpmfind, and fftw-3.1.2-3.fc6.src.rpm, a prereq for scipy:

fftw-3.1.2-3
fftw-devel-3.1.2-3

I had to alter the fc7 src rpm's spec files to change the gcc-gfortran prereq to gcc4-gfortran, and before building scipy I had to export BLAS=/usr/lib64 and LAPACK=/usr/lib64 even though I had the dir in my lib path.

After installing scipy I tested numpy and scipy with

>>> import numpy, scipy
>>> numpy.test()
>>> scipy.test()

The numpy tests passed, however some (?) of the scipy tests failed. I tried the examples on http://www.scipy.org/scipy_Example_List and they seemed to work.

I've put the rpms I built (along with src rpms) up at http://www.sharova.com/scipy_rhas44x86_64/

good luck,
Fahd

build server: vmware virtual server, redhat AS 4.4
Linux rhas44-x86-64.vm.fooie.net 2.6.9-42.EL #1 Wed Jul 12 23:15:20 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux

Charlie Yanaitis wrote:
> Lev Givon columbia.edu> writes:
>> Being that the binary atlas rpm in Fedora is built with gfortran rather than g77, you should try using the former when you build scipy.
>
> Thanks again for your help! I tried gfortran and still got the "recompile with -fPIC" error. I'm going to set this aside and then revisit it. Maybe when I come back to try again, I'll notice something I may have missed.
> > Thanks again and have a great weekend! > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From david at ar.media.kyoto-u.ac.jp Fri Jun 22 02:15:36 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 22 Jun 2007 15:15:36 +0900 Subject: [SciPy-user] scipy fblas.so functions not found In-Reply-To: <467B68B4.1060903@fooie.net> References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> <20070614193211.GG14029@avicenna.cc.columbia.edu> <467B68B4.1060903@fooie.net> Message-ID: <467B6908.70901@ar.media.kyoto-u.ac.jp> Fahd Sultan wrote: > I was trying the build on the same arch. I have managed to build > scipy/numpy. Please read below of how I did it. > The fpic error has nothing to do with the compiler you are using (assuming they are not buggy of course :) ), but all to do with the packaging of the blas/lapack you got. Please note that working rpm for numpy and scipy, as well as working blas/lapack are available there: http://software.opensuse.org/download/home:/ashigabou/ I added support for 64 bits arch and Fedora Core 7 a few days ago. If this does not work for you, I would like to hear it (I am the maintainer of this repository, but do not use fedora, so even if I try to produce good quality packages, there may be some errors). David From david at ar.media.kyoto-u.ac.jp Fri Jun 22 04:52:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 22 Jun 2007 17:52:14 +0900 Subject: [SciPy-user] Ubuntu Feisty upgrade problems In-Reply-To: <467B5A0F.6040807@ar.media.kyoto-u.ac.jp> References: <467B29F0.3030204@gmail.com> <467B5A0F.6040807@ar.media.kyoto-u.ac.jp> Message-ID: <467B8DBE.4060800@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Looks like I screwed up on this one. To complete R. Kern description, > the problem mentionned by Nils was reproductible (it is just that I > don't have access to 64 bits cpu at work), and was solved in svn r3112. > It looks like Is this the version you are using, Ryan ? The warning look > strange too: are you using numpy 1.0.3 by any chance ? This release > seems buggy wrt 64 bits arch, and you should use numpy svn too. > Actually, my fix does not work at all, so the problem is still here. I am taking a look at it now. David From ryanlists at gmail.com Fri Jun 22 09:26:12 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 22 Jun 2007 08:26:12 -0500 Subject: [SciPy-user] Ubuntu Feisty upgrade problems In-Reply-To: <467B8DBE.4060800@ar.media.kyoto-u.ac.jp> References: <467B29F0.3030204@gmail.com> <467B5A0F.6040807@ar.media.kyoto-u.ac.jp> <467B8DBE.4060800@ar.media.kyoto-u.ac.jp> Message-ID: Hey David, Thanks for looking into this. I am running SVN numpy from sometime yesterday: In [2]: numpy.__version__ Out[2]: '1.0.4.dev3875' Ryan On 6/22/07, David Cournapeau wrote: > David Cournapeau wrote: > > Looks like I screwed up on this one. To complete R. Kern description, > > the problem mentionned by Nils was reproductible (it is just that I > > don't have access to 64 bits cpu at work), and was solved in svn r3112. > > It looks like Is this the version you are using, Ryan ? The warning look > > strange too: are you using numpy 1.0.3 by any chance ? This release > > seems buggy wrt 64 bits arch, and you should use numpy svn too. > > > Actually, my fix does not work at all, so the problem is still here. I > am taking a look at it now. 
> > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From asefu at fooie.net Fri Jun 22 18:37:30 2007 From: asefu at fooie.net (Fahd Sultan) Date: Fri, 22 Jun 2007 18:37:30 -0400 Subject: [SciPy-user] scipy fblas.so functions not found In-Reply-To: <467B6908.70901@ar.media.kyoto-u.ac.jp> References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> <20070614193211.GG14029@avicenna.cc.columbia.edu> <467B68B4.1060903@fooie.net> <467B6908.70901@ar.media.kyoto-u.ac.jp> Message-ID: <467C4F2A.2010206@fooie.net> I had to recompile the rpms from your src rpms since the rpms dont install on Redhat AS 4.4. during the build of python-numpy I get this error: (any ideas?) I couldn't find an aborted install anywhere, not sure whats going on with this RPM. Charlie have you tried these src rpms yet? Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.45867 + umask 022 + cd /usr/src/redhat/BUILD + cd numpy-1.0.3 + LANG=C + export LANG + unset DISPLAY + /usr/lib/rpm/find-debuginfo.sh /usr/src/redhat/BUILD/numpy-1.0.3 0 blocks find: /var/tmp/python-numpy-1.0.3-build/usr/lib/debug: No such file or directory + /usr/lib/rpm/redhat/brp-compress + /usr/lib/rpm/redhat/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump Processing files: python-numpy-1.0.3-13.4 error: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/bin/* error: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/numpy error: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/numpy*.egg-info error: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/COMPATIBILITY error: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/scipy_compatibility error: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/site.cfg.example Processing files: python-numpy-debuginfo-1.0.3-13.4 RPM build errors: File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/bin/* File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/numpy File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/numpy*.egg-info File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/COMPATIBILITY File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/scipy_compatibility File not found by glob: /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/site.cfg.example David Cournapeau wrote: > Fahd Sultan wrote: > >> I was trying the build on the same arch. I have managed to build >> scipy/numpy. Please read below of how I did it. >> >> > The fpic error has nothing to do with the compiler you are using > (assuming they are not buggy of course :) ), but all to do with the > packaging of the blas/lapack you got. > > Please note that working rpm for numpy and scipy, as well as working > blas/lapack are available there: > > http://software.opensuse.org/download/home:/ashigabou/ > > I added support for 64 bits arch and Fedora Core 7 a few days ago. 
If
> this does not work for you, I would like to hear it (I am the maintainer of this repository, but do not use fedora, so even if I try to produce good quality packages, there may be some errors).
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From shao at msg.ucsf.edu Fri Jun 22 20:26:12 2007
From: shao at msg.ucsf.edu (Lin Shao)
Date: Fri, 22 Jun 2007 17:26:12 -0700
Subject: [SciPy-user] leastsq w/ Jacobian seg fault
Message-ID:

Hi,

I think there's a bug in leastsq when the Jacobian is involved -- the full_output option has to be 1 if Dfun is assigned to a user-defined function, otherwise there's a segmentation fault. gdb shows the seg fault happens here:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210287456 (LWP 17643)]
0xb7e2c01a in malloc_usable_size () from /lib/tls/libc.so.6

If I force it not to use the thread-local libc, I still get a seg fault:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 16384 (LWP 17610)]
0xb7e7f1aa in __libc_malloc_pthread_startup () from /lib/libc.so.6

This problem doesn't happen if the Jacobian (Dfun) is None. It also doesn't happen when running the Debian x86_64 version. I'm using scipy 0.5.3, Debian i686 2.6.18 kernel.

Thanks!

--lin

From shao at msg.ucsf.edu Fri Jun 22 22:35:05 2007
From: shao at msg.ucsf.edu (Lin Shao)
Date: Fri, 22 Jun 2007 19:35:05 -0700
Subject: [SciPy-user] Another leastsq Jacobian bug
Message-ID:

Hi there,

It seems like when calling leastsq(), if the Jacobian matrix is organized as rows of partial derivatives (w.r.t. each variable), then no optimization is done at all -- the return value is the same as the initial guess. It only works when the matrix is columns of derivatives and the parameter col_deriv is set to 1.

--lin

From stefan at sun.ac.za Sat Jun 23 04:48:30 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 23 Jun 2007 10:48:30 +0200
Subject: [SciPy-user] Another leastsq Jacobian bug
In-Reply-To:
References:
Message-ID: <20070623084829.GO20362@mentat.za.net>

Hi Lin

On Fri, Jun 22, 2007 at 07:35:05PM -0700, Lin Shao wrote:
> It seems like when calling leastsq(), if the Jacobian matrix is organized as rows of partial derivatives (w.r.t. each variable), then no optimization is done at all -- the return value is the same as the initial guess. It only works when the matrix is columns of derivatives and the parameter col_deriv is set to 1.

It would be helpful if you could provide two short snippets of code to illustrate the problems you mention.

Regards
Stéfan
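Along the lines of the snippets Stéfan asks for, a minimal sketch against a toy model with made-up data -- the Jacobian layout follows Lin's description, and full_output=1 sidesteps the segfault from the first report:

import numpy as np
from scipy.optimize import leastsq

# toy model y = a*exp(-b*x); invented data generated from a=2.0, b=0.5
x = np.linspace(0, 4, 20)
y = 2.0 * np.exp(-0.5 * x)

def resid(p):
    a, b = p
    return a * np.exp(-b * x) - y

# Jacobian of the residuals, one *row per parameter*
# (shape 2 x len(x)), which is the col_deriv=1 layout
def jac(p):
    a, b = p
    e = np.exp(-b * x)
    return np.array([e, -a * x * e])

# col_deriv=1 tells leastsq the derivatives run down the columns;
# full_output=1 works around the reported Dfun segfault
p, cov, info, msg, ier = leastsq(resid, [1.0, 1.0], Dfun=jac,
                                 col_deriv=1, full_output=1)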
From tritemio at gmail.com Sat Jun 23 05:02:59 2007
From: tritemio at gmail.com (Antonino Ingargiola)
Date: Sat, 23 Jun 2007 11:02:59 +0200
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <1182421803.2676.11.camel@carabos.com>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> <46797977.9090601@vision.ee.ethz.ch> <1182421803.2676.11.camel@carabos.com>
Message-ID: <5486cca80706230202o34e88d12i189b205abbe89f9f@mail.gmail.com>

Hi,

2007/6/21, Francesc Altet :
> Ok, that's fine. In any case, I'm interested in knowing the reasons why you are using bzip2 instead of zlib. Have you detected some data pattern where you get significantly more compression than by using zlib, for example?
>
> I'm asking this because, in my experience with numerical data, I was unable to detect important compression level differences between bzip2 and zlib. See:
>
> http://www.pytables.org/docs/manual/ch05.html#compressionIssues
>
> for some experiments in that regard.
>
> I'd appreciate any input on this subject (bzip2 vs zlib).

Probably not very meaningful, but with ascii data (float as ascii) bzip2 seems to have a certain degree of advantages (both in speed and compress ratio):

$ du -h lena.txt
3,1M lena.txt

$ time gzip -9 lena.txt
real 0m4.937s <=
user 0m4.758s
sys 0m0.018s

$ du -h lena.txt.gz
316K lena.txt.gz

$ time gunzip lena.txt.gz
real 0m0.092s
user 0m0.038s
sys 0m0.020s

$ time bzip2 lena.txt
real 0m2.524s <=
user 0m2.396s
sys 0m0.027s

$ du -h lena.txt.bz2
188K lena.txt.bz2

$ time bunzip2 lena.txt.bz2
real 0m0.868s
user 0m0.775s
sys 0m0.040s

Even if it's usually a bad idea to put numerical data in ascii format, sometimes may be handy.

Regards,

~ Antonio

From domi at vision.ee.ethz.ch Sat Jun 23 05:35:06 2007
From: domi at vision.ee.ethz.ch (Dominik Szczerba)
Date: Sat, 23 Jun 2007 11:35:06 +0200
Subject: [SciPy-user] read/write compressed files
In-Reply-To: <5486cca80706230202o34e88d12i189b205abbe89f9f@mail.gmail.com>
References: <4671A96D.70709@vision.ee.ethz.ch> <4671B61E.1040304@vision.ee.ethz.ch> <4671B8B2.2040601@gmail.com> <4672340E.2060302@vision.ee.ethz.ch> <4678D5BE.5000107@vision.ee.ethz.ch> <46793866.8050807@att.net> <46794C44.5050505@vision.ee.ethz.ch> <1182360427.2709.22.camel@carabos.com> <46797977.9090601@vision.ee.ethz.ch> <1182421803.2676.11.camel@carabos.com> <5486cca80706230202o34e88d12i189b205abbe89f9f@mail.gmail.com>
Message-ID: <467CE94A.4060409@vision.ee.ethz.ch>

I remember even for my binary data that bzip was about 20% better, but significantly slower. Best would be of course to have both (and more) compressors and choose whichever suits the case best. But in the real world probably zlib is a more general choice, if only one compressor is intended.

PS. Yes, it's a very bad idea to keep real numbers as ascii.

- Dominik

Antonino Ingargiola wrote:
> Hi,
>
> 2007/6/21, Francesc Altet :
>> Ok, that's fine. In any case, I'm interested in knowing the reasons why you are using bzip2 instead of zlib. Have you detected some data pattern where you get significantly more compression than by using zlib, for example?
>>
>> I'm asking this because, in my experience with numerical data, I was unable to detect important compression level differences between bzip2 and zlib. See:
>>
>> http://www.pytables.org/docs/manual/ch05.html#compressionIssues
>>
>> for some experiments in that regard.
>>
>> I'd appreciate any input on this subject (bzip2 vs zlib).
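One quick way to generate such input from Python itself, on raw array bytes -- a sketch with made-up data (real datasets will behave differently):

import time, zlib, bz2
import numpy as np

# made-up numerical data: a smooth signal, serialized to raw bytes
raw = np.sin(np.linspace(0, 100, 1000000)).tostring()

for name, mod in (('zlib', zlib), ('bz2', bz2)):
    start = time.time()
    packed = mod.compress(raw, 9)      # level 9 for both codecs
    print name, len(packed), '%.2fs' % (time.time() - start)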
> > Probably not very meaningful, but with ascii data (float as ascii) > bzip2 seems to have a certain degree of advantages (both in speed and > compress ratio): > > $ du -h lena.txt > 3,1M lena.txt > > $ time gzip -9 lena.txt > > real 0m4.937s <= > user 0m4.758s > sys 0m0.018s > > $ du -h lena.txt.gz > 316K lena.txt.gz > > $ time gunzip lena.txt.gz > > real 0m0.092s > user 0m0.038s > sys 0m0.020s > > $ time bzip2 lena.txt > > real 0m2.524s <= > user 0m2.396s > sys 0m0.027s > > $ du -h lena.txt.bz2 > 188K lena.txt.bz2 > > $ time bunzip2 lena.txt.bz2 > > real 0m0.868s > user 0m0.775s > sys 0m0.040s > > > Even if it's usually a bad idea to put numerical data in ascii format, > sometimes may be handy. > > Regards, > > ~ Antonio > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From david at ar.media.kyoto-u.ac.jp Sat Jun 23 05:45:49 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 23 Jun 2007 18:45:49 +0900 Subject: [SciPy-user] Ubuntu Feisty upgrade problems In-Reply-To: References: <467B29F0.3030204@gmail.com> <467B5A0F.6040807@ar.media.kyoto-u.ac.jp> <467B8DBE.4060800@ar.media.kyoto-u.ac.jp> Message-ID: <467CEBCD.2050000@ar.media.kyoto-u.ac.jp> Ryan Krauss wrote: > Hey David, > > Thanks for looking into this. > > I am running SVN numpy from sometime yesterday: > > In [2]: numpy.__version__ > Out[2]: '1.0.4.dev3875' Fixed in scipy svn r3116 David From david at ar.media.kyoto-u.ac.jp Sat Jun 23 06:26:18 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 23 Jun 2007 19:26:18 +0900 Subject: [SciPy-user] scipy fblas.so functions not found In-Reply-To: <467C4F2A.2010206@fooie.net> References: <20070614134122.GB21936@localhost.ee.columbia.edu> <20070614140423.GC21936@localhost.ee.columbia.edu> <20070614193211.GG14029@avicenna.cc.columbia.edu> <467B68B4.1060903@fooie.net> <467B6908.70901@ar.media.kyoto-u.ac.jp> <467C4F2A.2010206@fooie.net> Message-ID: <467CF54A.1080406@ar.media.kyoto-u.ac.jp> Fahd Sultan wrote: > I had to recompile the rpms from your src rpms since the rpms dont > install on Redhat AS 4.4. Mmmh, this distribution is not supported by the build system, so if package convention differ, it may cause problems I see unnoticed, since every rpm distribution decides it is a good idea to change conventions. I assumed wrongly that you used fedora core, not the official RH thing. The distribution I am supporting for now are FC (5, 6, and 7) on x86 and x86_64 and opensuse (both x86 and x86_64). That does not mean I am not interested in improving the rpm for AS support, of course; it will just be trickier because I do not have access to such a distribution, and packaging is highly distribution dependent. > > during the build of python-numpy I get this error: > (any ideas?) > I couldn't find an aborted install anywhere, not sure whats going on > with this RPM. > Do you use 32 or 64 bits arch ? The problem seems to be related to the library location. What would be useful for me is a complete log of the rpm building (rpmbuild -ba python-numpy.spec &> build.log). Below are a more detailed explanation on the problem: If you do not know anything about rpm packaging, here is the problem: build rpm from sources involve a series of steps, the last one being install, where you list the files to be installed. 
For example, all python2.4 files are installed in /usr/lib/python2.4/site-packages/ (/usr/lib64/python2.4/site-packages for 64 bits arch). You can see in the python-numpy.spec file: %files %defattr(-,root,root,-) %{_bindir}/* # the following does not work on 64 bits arch. #%{py_sitedir}/* # I shamelessly copied the install from official fedora core numeric. %{_libdir}/python*/site-packages/numpy/ # the egg is not produced on fedora 7 (something to do with different # configuration wrt setuptools on the build farm ?) %if 0%{?fedora_version} %define NUMPY_EGG 0 %else %define NUMPY_EGG 1 %endif %if %{NUMPY_EGG} %{_libdir}/python*/site-packages/numpy*.egg-info %endif # Why the following are installed ??? %{_libdir}/python*/site-packages/COMPATIBILITY %{_libdir}/python*/site-packages/scipy_compatibility %{_libdir}/python*/site-packages/site.cfg.example Now, in your case, it seems like there is nothing in /var/tmp/python-numpy-1.0.3-build/usr/lib64/python*/site-packages/numpy which is a bit strange... As I don't have access to redhat AS, it would be useful for me to know which files are where, to see if the fs layout is different. David From ryanlists at gmail.com Sat Jun 23 10:14:20 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 23 Jun 2007 09:14:20 -0500 Subject: [SciPy-user] Ubuntu Feisty upgrade problems In-Reply-To: <467CEBCD.2050000@ar.media.kyoto-u.ac.jp> References: <467B29F0.3030204@gmail.com> <467B5A0F.6040807@ar.media.kyoto-u.ac.jp> <467B8DBE.4060800@ar.media.kyoto-u.ac.jp> <467CEBCD.2050000@ar.media.kyoto-u.ac.jp> Message-ID: Thanks David, I have confirmed that this is fixed. In [2]: run scipy_test.py scipy.cluster Warning: No test file found in /usr/lib/python2.5/site-packages/scipy/cluster/tests for module Warning: No test file found in /usr/lib/python2.5/site-packages/scipy/cluster/tests for module Found 9 tests for scipy.cluster.vq Found 0 tests for __main__ Testing that kmeans2 init methods work. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 and its results. ... ok This will cause kmean to have a cluster with no points. ... ok check_kmeans_simple (scipy.cluster.tests.test_vq.test_kmean) ... ok check_py_vq (scipy.cluster.tests.test_vq.test_vq) ... ok check_py_vq2 (scipy.cluster.tests.test_vq.test_vq) ... ok check_vq (scipy.cluster.tests.test_vq.test_vq) ... ok Test special rank 1 vq algo, python implementation. ... ok ---------------------------------------------------------------------- Ran 9 tests in 0.034s OK On 6/23/07, David Cournapeau wrote: > Ryan Krauss wrote: > > Hey David, > > > > Thanks for looking into this. > > > > I am running SVN numpy from sometime yesterday: > > > > In [2]: numpy.__version__ > > Out[2]: '1.0.4.dev3875' > Fixed in scipy svn r3116 > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Sat Jun 23 10:18:51 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 23 Jun 2007 09:18:51 -0500 Subject: [SciPy-user] weave test failures Message-ID: Thanks to David, my seg fault problem with cluster is fixed. 
I have two remaining test failures, and they are both with weave:

======================================================================
FAIL: check_1d_3 (scipy.weave.tests.test_size_check.test_dummy_array_indexing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 168, in check_1d_3
    self.generic_1d('a[-11:]')
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 135, in generic_1d
    self.generic_wrap(a,expr)
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 127, in generic_wrap
    self.generic_test(a,expr,desired)
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 123, in generic_test
    assert_array_equal(actual,desired, expr)
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 223, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not equal
a[-11:]
(mismatch 100.0%)
 x: array([1])
 y: array([10])

======================================================================
FAIL: check_1d_6 (scipy.weave.tests.test_size_check.test_dummy_array_indexing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 174, in check_1d_6
    self.generic_1d('a[:-11]')
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 135, in generic_1d
    self.generic_wrap(a,expr)
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 127, in generic_wrap
    self.generic_test(a,expr,desired)
  File "/usr/lib/python2.5/site-packages/scipy/weave/tests/test_size_check.py", line 123, in generic_test
    assert_array_equal(actual,desired, expr)
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 223, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not equal
a[:-11]
(mismatch 100.0%)
 x: array([9])
 y: array([0])

----------------------------------------------------------------------
Ran 1771 tests in 4.081s

FAILED (failures=2)
Out[3]: 

How do I fix these?

Thanks,

Ryan

From david at ar.media.kyoto-u.ac.jp Sun Jun 24 07:05:46 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 24 Jun 2007 20:05:46 +0900
Subject: [SciPy-user] [ANN] New numpy, scipy and atlas rpms for FC 5, 6 and 7 and openSUSE (with 64-bit arch support)
Message-ID: <467E500A.20800@ar.media.kyoto-u.ac.jp>

Hi there,

After quite some pain, I finally managed to build a LAPACK + ATLAS rpm
useful for numpy and scipy. Read the following if you use Fedora Core
or openSUSE and are tired of unsuccessful attempts to install numpy,
scipy, BLAS, LAPACK or ATLAS. Instructions are given here:

http://www.scipy.org/Installing_SciPy/Linux (ashigabou repository)

Basically:
- Fedora Core 5, 6 and 7 and openSUSE 10.2 are supported (x86, and
x86_64 for FC 7 and openSUSE).
- binary rpms for numpy, scipy and blas/lapack dependencies.
- source rpm for atlas, for a really easy, 3-command build of ATLAS
(should work for both x86 and x86_64); see the sketch below.
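For the curious, the "3 commands" presumably boil down to something
like the following (a sketch only, not the literal instructions -- the
file names and paths are placeholders, so follow the wiki page above
for the real ones):

$ wget <url-of-the-atlas-src-rpm-from-the-ashigabou-repository>
$ rpmbuild --rebuild atlas-*.src.rpm
$ rpm -Uvh /usr/src/redhat/RPMS/*/atlas-*.rpm   # RPMS path varies by distribution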
numpy and scipy are the latest releases, including some backported
changes to make them work on 64-bit archs. Atlas is the latest
development version, with a trivial patch to build shared blas and
lapack which can be used as drop-in replacements for netlib blas and
lapack.

I would like to hear people's complaints. If people want other
distributions supported by the opensuse build system (such as
mandriva), I would like to hear that too.

cheers,

David

From doug-scipy at sadahome.ca Sun Jun 24 12:37:05 2007
From: doug-scipy at sadahome.ca (Doug Latornell)
Date: Sun, 24 Jun 2007 09:37:05 -0700
Subject: [SciPy-user] can't build on OS X from SVN
In-Reply-To: <7A2BF5DC-CC44-4BC8-83B9-A08D2F170B7A@stanford.edu>
References: <20797684-49A1-4889-9FC1-87BA06F5AC10@stanford.edu> <4671DEA4.3050102@gmail.com> <3d375d730706141802r4c80f565g28dd4c80fb5a48a1@mail.gmail.com> <7A2BF5DC-CC44-4BC8-83B9-A08D2F170B7A@stanford.edu>
Message-ID: <6279c0a40706240937s2225555va1711ab8882ef6f0@mail.gmail.com>

I ran into the same problem as Zach and decided to see if I could work
around it using the tarballs instead of SVN. No luck. The problem
occurs with the numpy-1.0.2 and scipy-0.5.2 tarballs.

My setup:

$ uname -a
Darwin clara.local 8.9.0 Darwin Kernel Version 8.9.0: Thu Feb 22 20:54:07 PST 2007; root:xnu-792.17.14~1/RELEASE_PPC Power Macintosh powerpc

$ python
Python 2.4.4 (#1, Oct 18 2006, 10:34:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin

$ gcc --version
powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5367)

$ gfortran --version
GNU Fortran (GCC) 4.3.0 20070511 (experimental)

Output of

$ sudo python setup.py -v config_fc --fcompiler=gnu95 build >& ~/build.log

is attached.

Any suggestions (or dope-slaps :-) are welcome...

Doug

On 6/14/07, Zachary Pincus wrote:
>
> Attached is the log of a build made thusly:
>
> cd scipy
> rm -rf build
> svn up
> python setup.py config_fc --fcompiler=gnu95 build [which fails]
> python setup.py -v config_fc --fcompiler=gnu95 build > & build.log
>
> That is, this isn't the (huge) log of a build-from-scratch, but just
> the log of the failing part. If you want, I can generate and send the
> build-from-scratch log too.
>
> Zach
>
>
> On Jun 14, 2007, at 6:02 PM, Robert Kern wrote:
>
> > On 6/14/07, Robert Kern wrote:
> >
> >> Gah. Looks like more fallout from the merge. The
> >> get_flags_linker_so() methods
> >> which have all of this information don't seem to be called any more.
> >
> > Never mind. It does get called in a roundabout way.
> >
> > Please send the full output from the build. Use "python setup.py -v
> > config_fc ... etc." to turn on verbose mode.
> >
> > --
> > Robert Kern
> >
> > "I have come to believe that the whole world is an enigma, a harmless
> > enigma that is made terrible by our own mad attempt to interpret it as
> > though it had an underlying truth."
> > -- Umberto Eco
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: build.log.gz
Type: application/x-gzip
Size: 54285 bytes
Desc: not available
URL: 

From shao at msg.ucsf.edu Mon Jun 25 00:43:02 2007
From: shao at msg.ucsf.edu (Lin Shao)
Date: Sun, 24 Jun 2007 21:43:02 -0700
Subject: [SciPy-user] Another leastsq Jacobian bug
In-Reply-To: <20070623084829.GO20362@mentat.za.net>
References: <20070623084829.GO20362@mentat.za.net>
Message-ID: 

Try this:

import numpy as N
import scipy.optimize as O

## Create my data to fit:
x=N.arange(-10,10,dtype=N.float32)
y=(x-1.234)**2+3.456
## I want to fit y=p0*(x-p1)^2+p2

## Now I define my objective
def obj_func(params, xx, yy, mode='col'):
    return params[0] * (xx-params[1])**2 + params[2] - yy
## 'mode' is a dummy here, because I want to use it for my Jacobian

## Now define my Jacobian
def Jacobian(params,xx,yy,mode='col'):
    J = N.empty((len(params),xx.size))
    J[0] = (xx-params[1])**2
    J[1] = -2*params[0]*(xx-params[1])
    J[2] = 1
    if mode=='col':
        return J
    elif mode=='row':
        return J.transpose()
    else:
        raise ValueError, "Unknown mode %s in Jacobian()" % mode
## Hopefully I did my calculus correctly

## First, try without Jacobian given
b=O.leastsq(obj_func, (1.,1.,2.), args=(x, y, 'col'))
print b[0]
## the result is [ 1. 1. 2.]
## obviously a failure, but no warning is returned

## Second, try Jacobian with col_deriv=1
b=O.leastsq(obj_func, (1.,1.,2.), args=(x, y, 'col'), Dfun=Jacobian,
            full_output=1, col_deriv=1)
## full_output has to be 1 because of a seg fault bug I reported earlier
print b[0]
## the result is [ 1.00000004 1.23400008 3.45599962]
## Good Job!

## Last, try Jacobian with col_deriv=0
b=O.leastsq(obj_func, (1.,1.,2.), args=(x, y, 'row'), Dfun=Jacobian,
            full_output=1, col_deriv=0)
print b[0]
## the result is [ 1.02507777 1.222815 2.2651552 ]
## completely different result and wrong

Sorry, what I said in my first report wasn't accurate: the leastsq()
result when col_deriv is not 1 is not exactly the same as the initial
guess. But one does have to agree there's a bug somewhere.

--lin

On 6/23/07, Stefan van der Walt wrote:
> Hi Lin
>
> On Fri, Jun 22, 2007 at 07:35:05PM -0700, Lin Shao wrote:
> > It seems like when calling leastsq(), if the Jacobian matrix is
> > organized as rows of partial derivatives (w.r.t. each variable), then
> > no optimization is done at all -- the return value is the same as the
> > initial guess. It only works when the matrix is columns of derivatives
> > and the parameter col_deriv is set to 1.
>
> It would be helpful if you could provide two short snippets of code to
> illustrate the problems you mention.
>
> Regards
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From matthieu.brucher at gmail.com Mon Jun 25 05:23:11 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 25 Jun 2007 11:23:11 +0200
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
Message-ID: 

Hi,

I did not improve much of the code, but here is a link to the scikit:

http://download.gna.org/pypeline/

I will add other step and perhaps line search modules, as well as
helper functions for fitting purposes (least squares or more robust
fits), but I can't give you a deadline. The code works very well, I use
it for my PhD research, and it really saves me a lot of time/trouble.

I'll also add a neighbourhood search (Kd-tree in biopython) in another
scikit once I've found a solution for the matrix lib I use...

Matthieu
P.S.: pypeline is an "old" project similar in many aspects to what
David wanted to do for his SoC, but much more general, for those who
might be intrigued. It's being developed slowly but steadily.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ckkart at hoc.net Mon Jun 25 06:32:46 2007
From: ckkart at hoc.net (Christian K)
Date: Mon, 25 Jun 2007 19:32:46 +0900
Subject: [SciPy-user] Another leastsq Jacobian bug
In-Reply-To: 
References: <20070623084829.GO20362@mentat.za.net>
Message-ID: 

Lin Shao wrote:
> Try this:
>
> import numpy as N
> import scipy.optimize as O
>
> ## Create my data to fit:
> x=N.arange(-10,10,dtype=N.float32)
> y=(x-1.234)**2+3.456
> ## I want to fit y=p0*(x-p1)^2+p2
>
> ## Now I define my objective
> def obj_func(params, xx, yy, mode='col'):
>     return params[0] * (xx-params[1])**2 + params[2] - yy
> ## 'mode' is a dummy here, because I want to use it for my Jacobian
>
> ## Now define my Jacobian
> def Jacobian(params,xx,yy,mode='col'):
>     J = N.empty((len(params),xx.size))
>     J[0] = (xx-params[1])**2
>     J[1] = -2*params[0]*(xx-params[1])

shouldn't that be -2*params[0]*(xx-params[1])*params[1] ?

>     J[2] = 1
>     if mode=='col':
>         return J
>     elif mode=='row':
>         return J.transpose()
>     else:
>         raise ValueError, "Unknown mode %s in Jacobian()" % mode
> ## Hopefully I did my calculus correctly
>
> ## First, try without Jacobian given
> b=O.leastsq(obj_func, (1.,1.,2.), args=(x, y, 'col'))
> print b[0]
> ## the result is [ 1. 1. 2.]
> ## obviously a failure, but no warning is returned

Not really a failure - this is numerics. The step length for finding
the numeric derivative is too small (epsfcn keyword arg). Try with
epsfcn = 1e-10. Have a look at the return message of leastsq, which is
b[3] in case full_output=1.

>
> ## Second, try Jacobian with col_deriv=1
> b=O.leastsq(obj_func, (1.,1.,2.), args=(x, y, 'col'), Dfun=Jacobian,
>             full_output=1, col_deriv=1)
> ## full_output has to be 1 because of a seg fault bug I reported earlier

does not segfault here. unix, python2.5,
scipy 0.5.3.dev3062, numpy 1.0.2.dev3484

> print b[0]
> ## the result is [ 1.00000004 1.23400008 3.45599962]
> ## Good Job!
>
> ## Last, try Jacobian with col_deriv=0
> b=O.leastsq(obj_func, (1.,1.,2.), args=(x, y, 'row'), Dfun=Jacobian,
>             full_output=1, col_deriv=0)
> print b[0]
> ## the result is [ 1.02507777 1.222815 2.2651552 ]
> ## completely different result and wrong

I've no idea why this fails, though.

Christian

From fredmfp at gmail.com Mon Jun 25 07:27:48 2007
From: fredmfp at gmail.com (fred)
Date: Mon, 25 Jun 2007 13:27:48 +0200
Subject: [SciPy-user] [cookbook] matplotlib & plotting tuto...
Message-ID: <467FA6B4.7080107@gmail.com>

Hi,

Shouldn't the Matplotlib cookbook & Plotting Tutorial be merged?

Cheers,

--
http://scipy.org/FredericPetit

From jdh2358 at gmail.com Mon Jun 25 07:50:55 2007
From: jdh2358 at gmail.com (John Hunter)
Date: Mon, 25 Jun 2007 06:50:55 -0500
Subject: [SciPy-user] [cookbook] matplotlib & plotting tuto...
In-Reply-To: <467FA6B4.7080107@gmail.com>
References: <467FA6B4.7080107@gmail.com>
Message-ID: <88e473830706250450y15aef3f7o2c0826147df5fbe6@mail.gmail.com>

On 6/25/07, fred wrote:
> Hi,
>
> Shouldn't the Matplotlib cookbook & Plotting Tutorial be merged?
They have different purposes -- a tutorial is designed to teach people
the basics, and a cookbook is a collection of HOWTOs.

JDH

From fredmfp at gmail.com Mon Jun 25 08:28:58 2007
From: fredmfp at gmail.com (fred)
Date: Mon, 25 Jun 2007 14:28:58 +0200
Subject: [SciPy-user] [cookbook] matplotlib & plotting tuto...
In-Reply-To: <88e473830706250450y15aef3f7o2c0826147df5fbe6@mail.gmail.com>
References: <467FA6B4.7080107@gmail.com> <88e473830706250450y15aef3f7o2c0826147df5fbe6@mail.gmail.com>
Message-ID: <467FB50A.3040107@gmail.com>

John Hunter a écrit :
> On 6/25/07, fred wrote:
>
>> Hi,
>>
>> Shouldn't the Matplotlib cookbook & Plotting Tutorial be merged?
>>
>
> They have different purposes -- a tutorial is designed to teach people
> the basics, and a cookbook is a collection of HOWTOs.
>
The Plotting Tutorial is rather short, and its title does not mention
that it uses matplotlib.

My 2 cts.

--
http://scipy.org/FredericPetit

From c.j.lee at tnw.utwente.nl Mon Jun 25 10:12:09 2007
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Mon, 25 Jun 2007 16:12:09 +0200
Subject: [SciPy-user] 3D density calculation
In-Reply-To: <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com>
References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl> <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com> <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com>
Message-ID: <467FCD39.8070403@tnw.utwente.nl>

Thank you both for your reply. Sorry for my lack of response; I was at
a conference and the webmail interface only gave me read-only access :(

histogramdd looks like it might do it. I had thought about consecutive
1D histograms but realised that it would not work. I had thought that I
couldn't use a fixed grid because I require a large total volume and
also need to be able to resolve small distances. However, David's idea
looks pretty good, so I will try that as well.

Thank you both for your help.

Cheers
Chris

David Huard wrote:
> Hi Chris,
>
> Have you tried numpy.histogramdd ? If it's still too slow, I have a
> fortran implementation on the back burner. I could try to finish it
> quickly and send you a preliminary version.
>
> Other thought: the kernel density estimator scipy.stats.gaussian_kde
>
> David
>
> 2007/6/17, Bernhard Voigt:
>
> Hi Chris!
>
> you could try a grid of unit cells that cover your phase space
> (x,y,z,t). Count the number of photons per unit cell of your
> initial configuration and track photons leaving and entering a
> particular cell. A dictionary with a tuple of x,y,z,t coordinates
> obtained from integer division of the x,y,z,t coordinates could
> serve as keys.
>
> Example for 2-D:
>
> from numpy import *
> # phase space in x,y
> x = arange(-100,100.1,.1)
> y = arange(-100,100.1,.1)
> # cell dimension in both dimensions the same
> GRID_WIDTH=7.5
>
> # computes the grid key from x,y coordinates
> def gridKey(x,y):
>     '''return a tuple of the x,y coordinates integer-divided by GRID_WIDTH'''
>     return (int(x // GRID_WIDTH), int(y // GRID_WIDTH))
>
> # set up your grid dictionary
> gridLowX, gridHighX = gridKey(min(x), max(x))
> gridLowY, gridHighY = gridKey(min(y), max(y))
> keys = [(i,j) for i in xrange(gridLowX, gridHighX + 1) \
>         for j in xrange(gridLowY, gridHighY + 1)]
> grid = dict().fromkeys(keys, 0)
>
> # random photons
> photons = random.uniform(-100.,100., (100000,2))
>
> # count photons in each grid cell
> for p in photons:
>     grid[gridKey(*p)] += 1
>
> #########################################
> # in your simulation you have to keep track of where your photons
> # are going to...
> # (the code below won't run, it's just an example)
> #########################################
> oldKey = gridKey(*photon)
> propagate(photon) # changes x,y coordinates of photon
> newKey = gridKey(*photon)
> if oldKey != newKey:
>     grid[oldKey] -= 1
>     grid[newKey] += 1
>
> I hope this helps! Bernhard
>
> On 6/15/07, Chris Lee <c.j.lee at tnw.utwente.nl> wrote:
>
> Hi everyone,
>
> I was hoping this list could point me in the direction of a more
> efficient solution to a problem I have.
>
> I have 4 vectors: x, y, z, and t that are about 1 million in
> length that describe the positions of photons. As my simulation
> progresses it updates the positions so x, y, z, and t change by an
> unknown (and unknowable) amount every update.
>
> This worked very well for its original purpose but now I need to
> calculate the photon density change over time. Currently after each
> update, I iterate over time slices, x slices, and y slices and then
> make a histogram of z which I then stitch together to create a
> density. However, this becomes very slow as the photons spread out
> in space and time.
>
> Does anyone know how to take such a large vector set and return a
> density efficiently?
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: c.j.lee.vcf
Type: text/x-vcard
Size: 174 bytes
Desc: not available
URL: 

From fredmfp at gmail.com Mon Jun 25 11:22:48 2007
From: fredmfp at gmail.com (fred)
Date: Mon, 25 Jun 2007 17:22:48 +0200
Subject: [SciPy-user] removing NaN...
Message-ID: <467FDDC8.4040101@gmail.com>

Hi,

I work on data arrays with many NaNs.

How can I remove them and find the min & max values?

TIA.

Cheers,

--
http://scipy.org/FredericPetit

From pgmdevlist at gmail.com Mon Jun 25 11:42:20 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 25 Jun 2007 11:42:20 -0400
Subject: [SciPy-user] removing NaN...
In-Reply-To: <467FDDC8.4040101@gmail.com>
References: <467FDDC8.4040101@gmail.com>
Message-ID: <200706251142.21125.pgmdevlist@gmail.com>

On Monday 25 June 2007 11:22:48 fred wrote:
> Hi,
>
> I work on data arrays with many NaNs.
>
> How can I remove them and find the min & max values?

First possibility: mask the NaNs, then use .min and .max

data = masked_array(data, mask=isnan(data))

Second possibility: use nanmax & nanmin

http://www.scipy.org/Numpy_Example_List_With_Doc

From openopt at ukr.net Mon Jun 25 11:47:28 2007
From: openopt at ukr.net (dmitrey)
Date: Mon, 25 Jun 2007 18:47:28 +0300
Subject: [SciPy-user] matrix mult operator
Message-ID: <467FE390.5090408@ukr.net>

hi all,

Don't you think that the numpy way of handling matrix multiplication is
too complicated? Suppose I have

F = A . (B . C)^T . D + C . ((D . B)^T * A)

(the last is elementwise multiplication; the '.' are matrix
multiplications). Then I must write

def evalF(A,B,C,D):
    from numpy import mdot, dot
    # Also, I'm not fond of writing such lines each time
    return mdot(A, dot(B, C).T, D) + dot(C, dot(D, B).T * A)

It's very long to read, especially if I have much longer lines.

What about an operator, something like !* or *! ?

def evalF(A,B,C,D):
    return A !* (B !* C).T !* D + C !* (D !* B).T * A

It's much more readable. Of course, you can wait till Python supports
unicode, but as for me, I would prefer not to wait so long.

I know there have already been some discussions about matrix
multiplication, but I decided to raise the question once again, because
mdot() turned out not to solve all problems. Also, I think the names
'dot'/'mdot' are inappropriate, because they sound like 'dotwise'
multiplication.

D.

From matthieu.brucher at gmail.com Mon Jun 25 11:53:57 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 25 Jun 2007 17:53:57 +0200
Subject: [SciPy-user] matrix mult operator
In-Reply-To: <467FE390.5090408@ukr.net>
References: <467FE390.5090408@ukr.net>
Message-ID: 

> I know there have already been some discussions about matrix
> multiplication, but I decided to raise the question once again, because
> mdot() turned out not to solve all problems. Also, I think the names
> 'dot'/'mdot' are inappropriate, because they sound like 'dotwise'
> multiplication.

Use numpy.matrix instead, it's what I do for this kind of computation.
IMHO dot() is appropriate, as each element in the result matrix really
is a dot product.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fredmfp at gmail.com Mon Jun 25 11:56:35 2007
From: fredmfp at gmail.com (fred)
Date: Mon, 25 Jun 2007 17:56:35 +0200
Subject: [SciPy-user] removing NaN...
In-Reply-To: <200706251142.21125.pgmdevlist@gmail.com>
References: <467FDDC8.4040101@gmail.com> <200706251142.21125.pgmdevlist@gmail.com>
Message-ID: <467FE5B3.5040702@gmail.com>

Pierre GM a écrit :
> On Monday 25 June 2007 11:22:48 fred wrote:
>
>> Hi,
>>
>> I work on data arrays with many NaNs.
>>
>> How can I remove them and find the min & max values?
>>
>
> First possibility: mask the NaNs, then use .min and .max
> data = masked_array(data, mask=isnan(data))
>
> Second possibility:
> use nanmax & nanmin
>
> http://www.scipy.org/Numpy_Example_List_With_Doc
>
Sorry, I did not know about this link. Bookmarked now.

That's simply great.

Thanks a lot.
Cheers,

--
http://scipy.org/FredericPetit

From robert.kern at gmail.com Mon Jun 25 12:49:26 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Jun 2007 11:49:26 -0500
Subject: [SciPy-user] matrix mult operator
In-Reply-To: <467FE390.5090408@ukr.net>
References: <467FE390.5090408@ukr.net>
Message-ID: <467FF216.7030107@gmail.com>

dmitrey wrote:

> What about an operator, something like !* or *! ?
>
> def evalF(A,B,C,D):
>     return A !* (B !* C).T !* D + C !* (D !* B).T * A
>
> It's much more readable. Of course, you can wait till Python supports
> unicode, but as for me, I would prefer not to wait so long.

We do not control the Python language. We cannot add operators to the
language. FWIW, the support of Unicode identifiers that is coming in
3.0 still won't allow you to define new operators.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From openopt at ukr.net Mon Jun 25 14:17:28 2007
From: openopt at ukr.net (dmitrey)
Date: Mon, 25 Jun 2007 21:17:28 +0300
Subject: [SciPy-user] matrix mult operator
In-Reply-To: <467FF216.7030107@gmail.com>
References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com>
Message-ID: <468006B8.8010808@ukr.net>

Robert Kern wrote:
> We do not control the Python language. We cannot add operators to the
> language. FWIW, the support of Unicode identifiers that is coming in
> 3.0 still won't allow you to define new operators.
>
Hmm... so nothing like the following can be defined?

numpy.array operator !* (numpy.array):
    return (matrix multiplication)

But so many languages allow the trick! Even ones as ancient as (from my
point of view) C/C++. It makes me very disappointed in Python, along
with the absence of a 'switch' equivalent (I wonder why it's not
implemented yet?..).

From robert.kern at gmail.com Mon Jun 25 14:25:26 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Jun 2007 13:25:26 -0500
Subject: [SciPy-user] matrix mult operator
In-Reply-To: <468006B8.8010808@ukr.net>
References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com> <468006B8.8010808@ukr.net>
Message-ID: <46800896.3090205@gmail.com>

dmitrey wrote:
> Hmm... so nothing like the following can be defined?
>
> numpy.array operator !* (numpy.array):
>     return (matrix multiplication)
>
> But so many languages allow the trick! Even ones as ancient as (from my
> point of view) C/C++.

No, neither C nor C++ lets you define new operators. In C++, you can
overload the meaning of the existing set of operators for your new
classes (same with Python), but you cannot add new ones.
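To make the overloading route concrete, here is a toy sketch
(illustrative only, not numpy's actual implementation, although
numpy.matrix does essentially this for '*'):

import numpy

class MulMatrix:
    '''A wrapper whose "*" means matrix multiplication (toy example).'''
    def __init__(self, data):
        self.data = numpy.asarray(data)
    def __mul__(self, other):
        # overloads the *existing* '*' operator; no new token is created
        return MulMatrix(numpy.dot(self.data, other.data))

a = MulMatrix([[1, 2], [3, 4]])
b = MulMatrix([[5, 6], [7, 8]])
c = a * b    # matrix product via the overloaded '*'

Defining a brand-new token like !* would require changing the Python
grammar itself, which only the language developers can do.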
> It makes me very disappointed in Python, along with the absence of a
> 'switch' equivalent (I wonder why it's not implemented yet?..).

Because there isn't enough support for such a construct when we already
have if/elif and other ways of dispatching code. Guido asked the
community at PyCon 2007, and there really wasn't much of a favorable
response.

http://www.python.org/dev/peps/pep-3103/

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From david.warde.farley at utoronto.ca Mon Jun 25 15:04:43 2007
From: david.warde.farley at utoronto.ca (David Warde-Farley)
Date: Mon, 25 Jun 2007 15:04:43 -0400
Subject: [SciPy-user] matrix mult operator
In-Reply-To: <46800896.3090205@gmail.com>
References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com> <468006B8.8010808@ukr.net> <46800896.3090205@gmail.com>
Message-ID: <5090F46D-CF0E-43FE-870E-8A918A5C5162@utoronto.ca>

On 25-Jun-07, at 2:25 PM, Robert Kern wrote:

> Because there isn't enough support for such a construct when we
> already have if/elif and other ways of dispatching code. Guido asked
> the community at PyCon 2007, and there really wasn't much of a
> favorable response.
>
> http://www.python.org/dev/peps/pep-3103/

From what I've read it was a quick, informal poll during a talk, which
is a painfully poor basis for making design decisions. But as usual we
are at the mercy of Guido and his fanboys and their arbitrary notions
of what's "Pythonic" or not.

There are good reasons not to allow the definition of arbitrary infix
operators, but the absence of a proper switch statement in a modern
language is fairly silly, IMHO. This post has some interesting and
marginally useful alternatives (if you don't need fall-through
behaviour):

http://simonwillison.net/2004/May/7/switch/
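The core trick there is dict-based dispatch, roughly like this (a
sketch; the handler names are made up):

def on_create():
    return 'created'

def on_delete():
    return 'deleted'

def on_unknown():
    return 'no such action'

handlers = {'create': on_create,
            'delete': on_delete}

action = 'create'
# look the handler up by key and call it -- no switch statement needed
result = handlers.get(action, on_unknown)()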
There are good reasons not to allow the definition of arbitrary infix operators, but the absence of a proper switch statement in a modern language is fairly silly, IMHO. This post has some interesting and marginally useful alternatives (if you don't need fall-through behaviour): http://simonwillison.net/2004/May/7/switch/ David From robert.kern at gmail.com Mon Jun 25 15:18:58 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 25 Jun 2007 14:18:58 -0500 Subject: [SciPy-user] matrix mult operator In-Reply-To: <5090F46D-CF0E-43FE-870E-8A918A5C5162@utoronto.ca> References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com> <468006B8.8010808@ukr.net> <46800896.3090205@gmail.com> <5090F46D-CF0E-43FE-870E-8A918A5C5162@utoronto.ca> Message-ID: <46801522.8030606@gmail.com> David Warde-Farley wrote: > On 25-Jun-07, at 2:25 PM, Robert Kern wrote: > >> Because there isn't enough support for such a construct when we >> already have >> if/elif and other ways of dispatching code. Guido asked the >> community and PyCon >> 2007, and there really wasn't much of a favorable response. >> >> http://www.python.org/dev/peps/pep-3103/ > > From what I've read it was a quick, informal poll during a talk, > which is a painfully poor basis for making design decisions. Of course, all of this happened after a long, drawn-out series of discussions and designs, which is *also* a painfully poor basis for making design decisions if it's the only basis. Why? Because every participant is self-selected. After investing a lot of time sitting in the weeds, working out the details, it can be difficult to take a step back and see if the feature you are working on is really important enough. That's why Guido asked. We already have good ways to dispatch code. Is adding another going to generally improve people's use of Python? The answer was fairly clear. The community in general really doesn't feel a need for a switch statement. For adding such a large feature as new control flow syntax, you really should have broad support for it. > But as > usual we are at the mercy of Guido and his fanboys and their > arbitrary notions of what's "Pythonic" or not. You are entirely mistaken if you think PyCon is solely attended by Guido's fanboys. Really, getting several hundred Python users across a broad spectrum in the same room and asking them questions is actually probably one of the better ways to keep perspective and determine how much of an impact your design decisions are going to have. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david.warde.farley at utoronto.ca Mon Jun 25 15:39:29 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Mon, 25 Jun 2007 15:39:29 -0400 Subject: [SciPy-user] matrix mult operator In-Reply-To: <46801522.8030606@gmail.com> References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com> <468006B8.8010808@ukr.net> <46800896.3090205@gmail.com> <5090F46D-CF0E-43FE-870E-8A918A5C5162@utoronto.ca> <46801522.8030606@gmail.com> Message-ID: On 25-Jun-07, at 3:18 PM, Robert Kern wrote: > Of course, all of this happened after a long, drawn-out series of > discussions > and designs, which is *also* a painfully poor basis for making > design decisions > if it's the only basis. Why? Because every participant is self- > selected. 
My point was that the members of an audience at a Guido talk at a Python conference are even *more* self-selected, since not everyone with a vested interest in using the language can afford the time off work/travel time/etc. to attend such gatherings, nor do they necessarily see the point when there are mailing lists set up for just this sort of discussion. >> But as >> usual we are at the mercy of Guido and his fanboys and their >> arbitrary notions of what's "Pythonic" or not. > > You are entirely mistaken if you think PyCon is solely attended by > Guido's > fanboys. Really, getting several hundred Python users across a > broad spectrum in > the same room and asking them questions is actually probably one of > the better > ways to keep perspective and determine how much of an impact your > design > decisions are going to have. It was more of a general statement about the frustrating non- arguments made by some in the name of "Pythonicity". But I'm glad to hear that the attendance at PyCon is diverse. Don't get me wrong, I like Python, I like SciPy. I just find myself occasionally at odds with the decision making processes up the chain. David From robert.kern at gmail.com Mon Jun 25 15:56:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 25 Jun 2007 14:56:11 -0500 Subject: [SciPy-user] matrix mult operator In-Reply-To: References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com> <468006B8.8010808@ukr.net> <46800896.3090205@gmail.com> <5090F46D-CF0E-43FE-870E-8A918A5C5162@utoronto.ca> <46801522.8030606@gmail.com> Message-ID: <46801DDB.10103@gmail.com> David Warde-Farley wrote: > On 25-Jun-07, at 3:18 PM, Robert Kern wrote: > >> Of course, all of this happened after a long, drawn-out series of >> discussions >> and designs, which is *also* a painfully poor basis for making >> design decisions >> if it's the only basis. Why? Because every participant is self- >> selected. > > My point was that the members of an audience at a Guido talk at a > Python conference are even *more* self-selected, since not everyone > with a vested interest in using the language can afford the time off > work/travel time/etc. to attend such gatherings, nor do they > necessarily see the point when there are mailing lists set up for > just this sort of discussion. PyCon is *significantly* less self-selected than python-dev and possibly even c.l.py. It's probably the best venue available in this regard. Yes, it takes time and money to attend, but people participating in online design discussions are, well, *special*. I'm not entirely sure why this is the case, and there's probably something terribly important about the social dynamics of asynchronous, multicasted communication (i.e. mailing lists), but polls on mailing lists just don't work. I think it has something to do with the optional nature of mailing lists. You don't have to reply or participate, so if you don't get excited about a particular thread, you don't reply. When you're in a room, and someone takes a poll, you're replying regardless of whether or not you care. And the poll-taker can count you. For questions of this sort, I think physical presence is a huge boon. >>> But as >>> usual we are at the mercy of Guido and his fanboys and their >>> arbitrary notions of what's "Pythonic" or not. >> You are entirely mistaken if you think PyCon is solely attended by >> Guido's >> fanboys. 
Really, getting several hundred Python users across a >> broad spectrum in >> the same room and asking them questions is actually probably one of >> the better >> ways to keep perspective and determine how much of an impact your >> design >> decisions are going to have. > > It was more of a general statement about the frustrating non- > arguments made by some in the name of "Pythonicity". But that wasn't the argument that sealed the deal. The question was, "Will you use it?" and the answer was overwhelmingly negative. *That's* what killed switch. The abstract arguments about Pythonicity were ignored, or at least were considered much, much less important. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Mon Jun 25 16:59:09 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 25 Jun 2007 14:59:09 -0600 Subject: [SciPy-user] Is this a bug in stats.geom.pmf? Message-ID: Hi all, just curious. The stats.geom docstring says: Geometric distribution geom.pmf(k,p) = (1-p)**(k-1)*p for k >= 1 But I see this: In [10]: k,p = 2.0,0.5 In [11]: (1-p)**(k-1)*p Out[11]: 0.25 In [12]: stats.geom.pmf(k,p) Out[12]: array(0.125) However: In [13]: stats.geom.pmf(k-1,p) Out[13]: array(0.25) Is this an off-by-one bug, or am I misreading something here? Cheers, f From stefan at sun.ac.za Mon Jun 25 18:01:02 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 26 Jun 2007 00:01:02 +0200 Subject: [SciPy-user] Is this a bug in stats.geom.pmf? In-Reply-To: References: Message-ID: <20070625220101.GF7619@mentat.za.net> On Mon, Jun 25, 2007 at 02:59:09PM -0600, Fernando Perez wrote: > just curious. The stats.geom docstring says: > > Geometric distribution > > geom.pmf(k,p) = (1-p)**(k-1)*p > for k >= 1 > > But I see this: > > In [10]: k,p = 2.0,0.5 > > In [11]: (1-p)**(k-1)*p > Out[11]: 0.25 > > In [12]: stats.geom.pmf(k,p) > Out[12]: array(0.125) > > However: > > In [13]: stats.geom.pmf(k-1,p) > Out[13]: array(0.25) > > > Is this an off-by-one bug, or am I misreading something here? Yes, a mistake in geom._pmf that would never have shown up, since it does not have a unit test. I'll fix it, unless you beat me to it. Cheers St?fan From oliphant at ee.byu.edu Mon Jun 25 18:04:33 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 25 Jun 2007 16:04:33 -0600 Subject: [SciPy-user] matrix mult operator In-Reply-To: References: <467FE390.5090408@ukr.net> <467FF216.7030107@gmail.com> <468006B8.8010808@ukr.net> <46800896.3090205@gmail.com> <5090F46D-CF0E-43FE-870E-8A918A5C5162@utoronto.ca> <46801522.8030606@gmail.com> Message-ID: <46803BF1.9000509@ee.byu.edu> David Warde-Farley wrote: >On 25-Jun-07, at 3:18 PM, Robert Kern wrote: > > >It was more of a general statement about the frustrating non- >arguments made by some in the name of "Pythonicity". But I'm glad to >hear that the attendance at PyCon is diverse. > >Don't get me wrong, I like Python, I like SciPy. I just find myself >occasionally at odds with the decision making processes up the chain. > > I think you will find a lot of people who also have thought in similar ways here on this mailing list. I for one would like to see an operator that allows us to distinguish between element-wise and object-wise operations. 
In the end, though, it hasn't been enough of a "need" for me to put in
the considerable time and energy required to convince others who use
Python that the additional complexity is worth the added benefit.

What it really takes to get something changed in Python is:

1) a champion willing to write a PEP, shepherd it through many
revisions and critiques, and eventually write the implementation.
Speaking as somebody who has both published in peer-reviewed journals
and written two PEPs, publishing a paper is easier.

2) the ears/eyes of Guido or one of his trusted developers. Note that
these resources are not too difficult to obtain once #1 is
accomplished...

So, your criticisms and concerns are valid and useful. Perhaps someday
we can make a case sufficient to change the operator situation.

-Travis

From fperez.net at gmail.com Mon Jun 25 18:16:05 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 25 Jun 2007 16:16:05 -0600
Subject: [SciPy-user] Is this a bug in stats.geom.pmf?
In-Reply-To: <20070625220101.GF7619@mentat.za.net>
References: <20070625220101.GF7619@mentat.za.net>
Message-ID: 

On 6/25/07, Stefan van der Walt wrote:
> Yes, a mistake in geom._pmf that would never have shown up, since it
> does not have a unit test. I'll fix it, unless you beat me to it.

Go for it, I'm not touching that code right now.

Thanks!

f

From stefan at sun.ac.za Mon Jun 25 18:19:03 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Tue, 26 Jun 2007 00:19:03 +0200
Subject: [SciPy-user] Is this a bug in stats.geom.pmf?
In-Reply-To: 
References: 
Message-ID: <20070625221903.GG7619@mentat.za.net>

On Mon, Jun 25, 2007 at 02:59:09PM -0600, Fernando Perez wrote:
> Hi all,
>
> just curious. The stats.geom docstring says:
>
>     Geometric distribution
>
>     geom.pmf(k,p) = (1-p)**(k-1)*p
>     for k >= 1
>
> But I see this:
>
> In [10]: k,p = 2.0,0.5
>
> In [11]: (1-p)**(k-1)*p
> Out[11]: 0.25
>
> In [12]: stats.geom.pmf(k,p)
> Out[12]: array(0.125)
>
> However:
>
> In [13]: stats.geom.pmf(k-1,p)
> Out[13]: array(0.25)
>
> Is this an off-by-one bug, or am I misreading something here?

I now read on Wikipedia that:

"""
In probability theory and statistics, the geometric distribution is
either of two discrete probability distributions:

* the probability distribution of the number X of Bernoulli trials
needed to get one success, supported on the set { 1, 2, 3, ... }, or

* the probability distribution of the number Y = X - 1 of failures
before the first success, supported on the set { 0, 1, 2, 3, ... }.

Which of these one calls "the" geometric distribution is a matter of
convention and convenience.
"""

So, do we simply pick one and stick with it?

Cheers
Stéfan

From fperez.net at gmail.com Mon Jun 25 18:34:00 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 25 Jun 2007 16:34:00 -0600
Subject: [SciPy-user] Is this a bug in stats.geom.pmf?
In-Reply-To: <20070625221903.GG7619@mentat.za.net>
References: <20070625221903.GG7619@mentat.za.net>
Message-ID: 

On 6/25/07, Stefan van der Walt wrote:

> Which of these one calls "the" geometric distribution is a matter of
> convention and convenience.
> """
>
> So, do we simply pick one and stick with it?

I don't really care, but at least the code and the docstring should be
consistent (and the docstring says that it uses the k>=1 formula).
Also, the Wikipedia page does list the (1-p)**(k-1)*p formula for the
pmf, FWIW.

Those who actually use this stuff might want to chime in, but at least
right now there is an inconsistency between code and docstring.
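Whichever convention wins, a minimal unit test would pin it down -- a
sketch assuming the k >= 1 form from the docstring:

from scipy import stats
from numpy.testing import assert_almost_equal

def test_geom_pmf():
    # docstring convention: pmf(k, p) = (1-p)**(k-1) * p for k >= 1
    p = 0.5
    for k in (1.0, 2.0, 3.0, 7.0):
        assert_almost_equal(stats.geom.pmf(k, p), (1 - p)**(k - 1) * p)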
Cheers,

f

From stefan at sun.ac.za Mon Jun 25 18:39:38 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Tue, 26 Jun 2007 00:39:38 +0200
Subject: [SciPy-user] Is this a bug in stats.geom.pmf?
In-Reply-To: 
References: <20070625221903.GG7619@mentat.za.net>
Message-ID: <20070625223938.GH7619@mentat.za.net>

On Mon, Jun 25, 2007 at 04:34:00PM -0600, Fernando Perez wrote:
> On 6/25/07, Stefan van der Walt wrote:
>
> > Which of these one calls "the" geometric distribution is a matter of
> > convention and convenience.
> > """
> >
> > So, do we simply pick one and stick with it?
>
> I don't really care, but at least the code and the docstring should be
> consistent (and the docstring says that it uses the k>=1 formula).
> Also, the Wikipedia page does list the (1-p)**(k-1)*p formula for the
> pmf, FWIW.
>
> Those who actually use this stuff might want to chime in, but at least
> right now there is an inconsistency between code and docstring.

r3117 should fix this. I used the (k-1)-version, since that is what
the original version was based on.

Cheers
Stéfan

From robert.kern at gmail.com Mon Jun 25 18:43:27 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Jun 2007 17:43:27 -0500
Subject: [SciPy-user] Is this a bug in stats.geom.pmf?
In-Reply-To: 
References: <20070625221903.GG7619@mentat.za.net>
Message-ID: <4680450F.8060706@gmail.com>

Fernando Perez wrote:
> On 6/25/07, Stefan van der Walt wrote:
>
>> Which of these one calls "the" geometric distribution is a matter of
>> convention and convenience.
>> """
>>
>> So, do we simply pick one and stick with it?
>
> I don't really care, but at least the code and the docstring should be
> consistent (and the docstring says that it uses the k>=1 formula).
> Also, the Wikipedia page does list the (1-p)**(k-1)*p formula for the
> pmf, FWIW.
>
> Those who actually use this stuff might want to chime in, but at least
> right now there is an inconsistency between code and docstring.

Flip a coin (p = 0.5; k = 1) and pick either the code or the docstring
to fix. Either is satisfactory.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fperez.net at gmail.com Mon Jun 25 18:44:47 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 25 Jun 2007 16:44:47 -0600
Subject: [SciPy-user] Is this a bug in stats.geom.pmf?
In-Reply-To: <20070625223938.GH7619@mentat.za.net>
References: <20070625221903.GG7619@mentat.za.net> <20070625223938.GH7619@mentat.za.net>
Message-ID: 

On 6/25/07, Stefan van der Walt wrote:
> On Mon, Jun 25, 2007 at 04:34:00PM -0600, Fernando Perez wrote:
> > On 6/25/07, Stefan van der Walt wrote:
> >
> > > Which of these one calls "the" geometric distribution is a matter of
> > > convention and convenience.
> > > """
> > >
> > > So, do we simply pick one and stick with it?
> >
> > I don't really care, but at least the code and the docstring should be
> > consistent (and the docstring says that it uses the k>=1 formula).
> > Also, the Wikipedia page does list the (1-p)**(k-1)*p formula for the
> > pmf, FWIW.
> >
> > Those who actually use this stuff might want to chime in, but at least
> > right now there is an inconsistency between code and docstring.
>
> r3117 should fix this. I used the (k-1)-version, since that is what
> the original version was based on.

Great, thanks.
Cheers,

f

From ryanlists at gmail.com Mon Jun 25 19:52:02 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 25 Jun 2007 18:52:02 -0500
Subject: [SciPy-user] floating point pedagogical problem
Message-ID: 

I have a floating point issue that is forcing me to teach my students
more python/computer science than I want to:

In [36]: temp
Out[36]: NumPy array, format: long
[-1.         -0.83045267 -0.64773663 -0.45185185 -0.24279835 -0.02057613
  0.21481481  0.46337449  0.72510288  1.        ]

In [37]: arccos(temp)
Out[37]: NumPy array, format: long
[        nan  2.55071609  2.27540616  2.03963643  1.81604581  1.59137391
  1.35429411  1.08899693  0.75961255         nan]

We are doing a robotics problem involving inverse kinematics where
they need to take the arccos and arcsin of some vectors. The problem
is that

In [38]: eps=1e-15

In [39]: -1-eps < temp[0] < -1+eps
Out[39]: True

So, my current solution is to check for theta +/- 1 +/- eps problems
like this:

tempout = []
for item in temp:
    if -1-eps < item < -1+eps:
        tempout.append(-1.0)
    elif 1-eps < item < 1+eps:
        tempout.append(1.0)
    else:
        tempout.append(item)
tempout = array(tempout)

Is there a better way? Is making arccos and arcsin check for +/-1 +/-
eps reasonable? Or should I give a lecture on floating point evils?

From robert.kern at gmail.com Mon Jun 25 2007
From: robert.kern at gmail.com (Robert Kern)
Subject: [SciPy-user] floating point pedagogical problem
In-Reply-To: 
References: 
Message-ID: <46805797.1040307@gmail.com>

Ryan Krauss wrote:
> I have a floating point issue that is forcing me to teach my students
> more python/computer science than I want to:
>
> In [36]: temp
> Out[36]: NumPy array, format: long
> [-1.         -0.83045267 -0.64773663 -0.45185185 -0.24279835 -0.02057613
>   0.21481481  0.46337449  0.72510288  1.        ]
>
> In [37]: arccos(temp)
> Out[37]: NumPy array, format: long
> [        nan  2.55071609  2.27540616  2.03963643  1.81604581  1.59137391
>   1.35429411  1.08899693  0.75961255         nan]
>
> We are doing a robotics problem involving inverse kinematics where
> they need to take the arccos and arcsin of some vectors. The problem
> is that
>
> In [38]: eps=1e-15
>
> In [39]: -1-eps < temp[0] < -1+eps
> Out[39]: True
>
> So, my current solution is to check for theta +/- 1 +/- eps problems
> like this:
>
> tempout = []
> for item in temp:
>     if -1-eps < item < -1+eps:
>         tempout.append(-1.0)
>     elif 1-eps < item < 1+eps:
>         tempout.append(1.0)
>     else:
>         tempout.append(item)
> tempout = array(tempout)
>
> Is there a better way?

clip(temp, -1.0, 1.0)

That will silently allow obviously bogus values like 1.5, but you may
not care overmuch for the problem.

> Is making arccos and arcsin check for +/-1 +/-
> eps reasonable?

I don't think so. eps may not be the appropriate fuzz-factor for the
problem.

> Or should I give a lecture on floating point evils?

Are they using floating point? If so, then yes, they need to know about
the little buggers.

For domain issues like this, though, it's simple enough to say that
floating point introduces inaccuracies and sometimes functions that
have restricted domains require some extra care to clip inputs to the
domain.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From ryanlists at gmail.com Mon Jun 25 20:08:57 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 25 Jun 2007 19:08:57 -0500
Subject: [SciPy-user] floating point pedagogical problem
In-Reply-To: <46805797.1040307@gmail.com>
References: <46805797.1040307@gmail.com>
Message-ID: 

Thanks Robert. The clip solution is much less frightening than my 10
mysterious lines of code.

On 6/25/07, Robert Kern wrote:
> Ryan Krauss wrote:
> > I have a floating point issue that is forcing me to teach my students
> > more python/computer science than I want to:
> >
> > In [36]: temp
> > Out[36]: NumPy array, format: long
> > [-1.         -0.83045267 -0.64773663 -0.45185185 -0.24279835 -0.02057613
> >   0.21481481  0.46337449  0.72510288  1.        ]
> >
> > In [37]: arccos(temp)
> > Out[37]: NumPy array, format: long
> > [        nan  2.55071609  2.27540616  2.03963643  1.81604581  1.59137391
> >   1.35429411  1.08899693  0.75961255         nan]
> >
> > We are doing a robotics problem involving inverse kinematics where
> > they need to take the arccos and arcsin of some vectors. The problem
> > is that
> >
> > In [38]: eps=1e-15
> >
> > In [39]: -1-eps < temp[0] < -1+eps
> > Out[39]: True
> >
> > So, my current solution is to check for theta +/- 1 +/- eps problems
> > like this:
> >
> > tempout = []
> > for item in temp:
> >     if -1-eps < item < -1+eps:
> >         tempout.append(-1.0)
> >     elif 1-eps < item < 1+eps:
> >         tempout.append(1.0)
> >     else:
> >         tempout.append(item)
> > tempout = array(tempout)
> >
> > Is there a better way?
>
> clip(temp, -1.0, 1.0)
>
> That will silently allow obviously bogus values like 1.5, but you may
> not care overmuch for the problem.
>
> > Is making arccos and arcsin check for +/-1 +/-
> > eps reasonable?
>
> I don't think so. eps may not be the appropriate fuzz-factor for the
> problem.
>
> > Or should I give a lecture on floating point evils?
>
> Are they using floating point? If so, then yes, they need to know about
> the little buggers.
>
> For domain issues like this, though, it's simple enough to say that
> floating point introduces inaccuracies and sometimes functions that
> have restricted domains require some extra care to clip inputs to the
> domain.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From ryanlists at gmail.com Mon Jun 25 23:20:15 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 25 Jun 2007 22:20:15 -0500
Subject: [SciPy-user] signal.butter questions
Message-ID: 

I have two questions about signal.butter. First, I don't see it in the
docstring, but I am assuming that Wn must be normalized by dividing by
the Nyquist frequency. Is that true? It seems like that is what buttord
is saying, and I assume they use the same conventions.

Second, what is meant by the analog keyword?

Thanks,

Ryan

From ckkart at hoc.net Mon Jun 25 23:33:35 2007
From: ckkart at hoc.net (Christian K)
Date: Tue, 26 Jun 2007 12:33:35 +0900
Subject: [SciPy-user] Another leastsq Jacobian bug
In-Reply-To: 
References: <20070623084829.GO20362@mentat.za.net>
Message-ID: 

Lin Shao wrote:
>>> ## Now define my Jacobian
>>> def Jacobian(params,xx,yy,mode='col'):
>>>     J = N.empty((len(params),xx.size))
>>>     J[0] = (xx-params[1])**2
>>>     J[1] = -2*params[0]*(xx-params[1])
>> shouldn't that be -2*params[0]*(xx-params[1])*params[1] ?
>
> I think I was right. Think about what's the derivative of -x^2 -- it's -2x

you forgot about the chain rule.

Christian

From shao at msg.ucsf.edu Tue Jun 26 12:47:43 2007
From: shao at msg.ucsf.edu (Lin Shao)
Date: Tue, 26 Jun 2007 09:47:43 -0700
Subject: [SciPy-user] Another leastsq Jacobian bug
In-Reply-To: 
References: <20070623084829.GO20362@mentat.za.net>
Message-ID: 

No I didn't. The simplest refutation of your answer is that your J[1]
is -2*params[0]*(xx*params[1]-params[1]^2). How could the derivative
contain a term quadratic in params[1] when the original function is
itself only quadratic?
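A quick central-difference check supports that form (a sketch; the step
size is ad hoc):

import numpy as N

p0, p1, h = 1.3, 0.7, 1e-6
xx = N.arange(-10, 10, dtype=N.float64)

def f(b):
    # the model as a function of params[1] alone
    return p0 * (xx - b)**2

numeric = (f(p1 + h) - f(p1 - h)) / (2 * h)   # approximates dF/dparams[1]
analytic = -2 * p0 * (xx - p1)
print N.allclose(numeric, analytic)   # -> True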
On 6/25/07, Christian K wrote:
> Lin Shao wrote:
> >>> ## Now define my Jacobian
> >>> def Jacobian(params,xx,yy,mode='col'):
> >>>     J = N.empty((len(params),xx.size))
> >>>     J[0] = (xx-params[1])**2
> >>>     J[1] = -2*params[0]*(xx-params[1])
> >> shouldn't that be -2*params[0]*(xx-params[1])*params[1] ?
> >
> > I think I was right. Think about what's the derivative of -x^2 -- it's -2x
>
> you forgot about the chain rule.
>
> Christian
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From tom.denniston at alum.dartmouth.org Tue Jun 26 15:32:13 2007
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Tue, 26 Jun 2007 14:32:13 -0500
Subject: [SciPy-user] bug in lexsort with two different dtypes?
Message-ID: 

In [1]: intArr1 = numpy.array([ 0, 1, 2,-2,-1, 5,-5,-5])

In [2]: intArr2 = numpy.array([1,1,1,2,2,2,3,4])

In [3]: charArr = numpy.array(['a','a','a','b','b','b','c','d'])

Here I sort two int arrays. As expected, intArr2 dominates intArr1,
but the items with the same intArr2 values are sorted forwards
according to intArr1:

In [6]: numpy.lexsort((intArr1, intArr2))
Out[6]: array([0, 1, 2, 3, 4, 5, 6, 7])

This, however, looks like a bug to me. Here I sort an int array and a
str array. As expected, charArr dominates intArr1, but the items with
the same charArr values are sorted *backwards* according to intArr1:

In [5]: numpy.lexsort((intArr1, charArr))
Out[5]: array([2, 1, 0, 5, 4, 3, 6, 7])

Is this a bug or am I missing something?

--Tom

From lorenzo.isella at gmail.com Tue Jun 26 16:01:53 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 26 Jun 2007 22:01:53 +0200
Subject: [SciPy-user] How to Enable Delaunay Package
Message-ID: <468170B1.6000303@gmail.com>

Dear All,
I am posting this after a discussion that originated on the matplotlib
mailing list.
Fundamentally, I need to plot data on irregular (i.e. non equi-spaced
rectangular) grids.
I finally was recommended to look at the Delaunay package (see
approach 2 at the link:

http://scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data
).

The problem is that the approach:

from scipy.sandbox.delaunay import *

does not work (the system does not find the requested module).

Now, I am running Debian testing on my box and I have Python 2.3, 2.4
and 2.5 installed alongside SciPy as taken from the standard
repositories.
Under /usr/lib/python2.4/site-packages/scipy/sandbox I have the file
setup.py, which I copy and paste at the end of the email.
I tried uncommenting the line dealing with Delaunay, but that did not
help me out (probably it is useful only if I am rebuilding SciPy, which
I would like to avoid).
Has anyone experienced the same problem, or does anyone have any
suggestions?
I am really in need to get this working in order to be able to perform
some non-trivial data plotting with matplotlib.
Many thanks

Lorenzo

import os

def configuration(parent_package='',top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('sandbox',parent_package,top_path)

    sandbox_packages = []
    try:
        sandbox_file = open(os.path.join(config.package_path,
                                         'enabled_packages.txt'), 'rU')
    except IOError:
        pass
    else:
        for line in sandbox_file:
            p = line.strip()
            if line.startswith('#'):
                continue
            sandbox_packages.append(p)
        sandbox_file.close()

    for p in sandbox_packages:
        config.add_subpackage(p)

    # All subpackages should be commented out in the version
This prevents build problems # for people who are not actively working with these # potentially unstable packages. # You can put a list of modules you want to always enable in the # file 'enabled_packages.txt' in this directory (you'll have to create it). # Since this isn't under version control, it's less likely you'll # check it in and screw other people up :-) # An example package: #config.add_subpackage('exmplpackage') # Monte Carlo package #config.add_subpackage('montecarlo') # PySparse fork with NumPy compatibility #config.add_subpackage('pysparse') # Robert Kern's corner: #config.add_subpackage('rkern') # ODRPACK #config.add_subpackage('odr') # Delaunay triangulation and Natural Neighbor interpolation config.add_subpackage('delaunay') # Gist-based plotting library for X11 #config.add_subpackage('xplt') # elementwise numerical expressions #config.add_subpackage('numexpr') # Statistical models #config.add_subpackage('models') # Adaptation of Scientific.IO (2.4.9) to use NumPy #config.add_subpackage('netcdf') # Finite Difference Formulae package #config.add_subpackage('fdfpack') # Package with useful constants and unit-conversions defined #config.add_subpackage('constants') # Interpolating between sparse samples #config.add_subpackage('buildgrid') # Package for Support Vector Machine #config.add_subpackage('svm') # Package for Gaussian Mixture Models #config.add_subpackage('pyem') # David Cournapeau's corner: autocorrelation, lpc, lpc residual #config.add_subpackage('cdavid') # New spline package (based on scipy.interpolate) #config.add_subpackage('spline') return config if __name__ == '__main__': from numpy.distutils.core import setup setup(**configuration(top_path='').todict()) From domi at vision.ee.ethz.ch Tue Jun 26 16:05:23 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Tue, 26 Jun 2007 22:05:23 +0200 Subject: [SciPy-user] How to Enable Delaunay Package In-Reply-To: <468170B1.6000303@gmail.com> References: <468170B1.6000303@gmail.com> Message-ID: <46817183.8090505@vision.ee.ethz.ch> I use VTK for this purpose (there are python bindings). - Dominik Lorenzo Isella wrote: > Dear All, > I am posting this after a discussion originated on the matplotlib > mailing list. > Fundamentally, I need to plot data on irregular (i.e. non equi-spaced > rectangular) grids. > I finally was recommended to look at the Delaunay package (see approach2 > at the link: > > http://scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > ). > > The problem is that the approach: > > from scipy.sandbox.delaunay import * > > does not work (the system does not find the requested module). > > Now, I am running Debian testing on my box and I have Python2.3,2.4,2.5 > installed beside SciPy as taken from the standard repositories. > Under /usr/lib/python2.4/site-packages/scipy/sandbox I have the file > setup.py which I copy and paste at the end of the email. > I try uncommenting the line dealing with Delaunay, but that did not help > me out (probably it is useful only if I am rebuilding SciPy, which I > would like to avoid). > Anyone has experienced the same problem or has any suggestions? > I am really in need to get this working in order to be able to perform > some non-trivial data plotting with matplotlib. 
> Many thanks > > Lorenzo > > > > > import os > > def configuration(parent_package='',top_path=None): > from numpy.distutils.misc_util import Configuration > config = Configuration('sandbox',parent_package,top_path) > > sandbox_packages = [] > try: > sandbox_file = open(os.path.join(config.package_path, > 'enabled_packages.txt'), 'rU') > except IOError: > pass > else: > for line in sandbox_file: > p = line.strip() > if line.startswith('#'): > continue > sandbox_packages.append(p) > sandbox_file.close() > > for p in sandbox_packages: > config.add_subpackage(p) > > # All subpackages should be commented out in the version > # committed to the repository. This prevents build problems > # for people who are not actively working with these > # potentially unstable packages. > > # You can put a list of modules you want to always enable in the > # file 'enabled_packages.txt' in this directory (you'll have to > create it). > # Since this isn't under version control, it's less likely you'll > # check it in and screw other people up :-) > > # An example package: > #config.add_subpackage('exmplpackage') > > # Monte Carlo package > #config.add_subpackage('montecarlo') > > # PySparse fork with NumPy compatibility > #config.add_subpackage('pysparse') > > # Robert Kern's corner: > #config.add_subpackage('rkern') > > # ODRPACK > #config.add_subpackage('odr') > > # Delaunay triangulation and Natural Neighbor interpolation > config.add_subpackage('delaunay') > > # Gist-based plotting library for X11 > #config.add_subpackage('xplt') > > # elementwise numerical expressions > #config.add_subpackage('numexpr') > > # Statistical models > #config.add_subpackage('models') > > # Adaptation of Scientific.IO (2.4.9) to use NumPy > #config.add_subpackage('netcdf') > > # Finite Difference Formulae package > #config.add_subpackage('fdfpack') > > # Package with useful constants and unit-conversions defined > #config.add_subpackage('constants') > > # Interpolating between sparse samples > #config.add_subpackage('buildgrid') > > # Package for Support Vector Machine > #config.add_subpackage('svm') > > # Package for Gaussian Mixture Models > #config.add_subpackage('pyem') > > # David Cournapeau's corner: autocorrelation, lpc, lpc residual > #config.add_subpackage('cdavid') > > # New spline package (based on scipy.interpolate) > #config.add_subpackage('spline') > > return config > > if __name__ == '__main__': > from numpy.distutils.core import setup > setup(**configuration(top_path='').todict()) > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Dominik Szczerba, Ph.D. Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From robert.kern at gmail.com Tue Jun 26 16:13:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Jun 2007 15:13:50 -0500 Subject: [SciPy-user] How to Enable Delaunay Package In-Reply-To: <468170B1.6000303@gmail.com> References: <468170B1.6000303@gmail.com> Message-ID: <4681737E.3080209@gmail.com> Lorenzo Isella wrote: > Dear All, > I am posting this after a discussion originated on the matplotlib > mailing list. > Fundamentally, I need to plot data on irregular (i.e. non equi-spaced > rectangular) grids. > I finally was recommended to look at the Delaunay package (see approach2 > at the link: > > http://scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > ). 
> > The problem is that the approach: > > from scipy.sandbox.delaunay import * > > does not work (the system does not find the requested module). > > Now, I am running Debian testing on my box and I have Python2.3,2.4,2.5 > installed beside SciPy as taken from the standard repositories. > Under /usr/lib/python2.4/site-packages/scipy/sandbox I have the file > setup.py which I copy and paste at the end of the email. > I try uncommenting the line dealing with Delaunay, but that did not help > me out (probably it is useful only if I am rebuilding SciPy, which I > would like to avoid). > Anyone has experienced the same problem or has any suggestions? You will have to rebuild. Follow the instructions here: > # You can put a list of modules you want to always enable in the > # file 'enabled_packages.txt' in this directory (you'll have to > create it). > # Since this isn't under version control, it's less likely you'll > # check it in and screw other people up :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ggellner at uoguelph.ca Tue Jun 26 16:38:15 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Tue, 26 Jun 2007 16:38:15 -0400 Subject: [SciPy-user] How to Enable Delaunay Package In-Reply-To: <46817183.8090505@vision.ee.ethz.ch> References: <468170B1.6000303@gmail.com> <46817183.8090505@vision.ee.ethz.ch> Message-ID: <20070626203815.GA22872@giton> Could you give a quick example? I would love to have a second way of doing this. . . Gabriel On Tue, Jun 26, 2007 at 10:05:23PM +0200, Dominik Szczerba wrote: > I use VTK for this purpose (there are python bindings). > - Dominik > > Lorenzo Isella wrote: > > Dear All, > > I am posting this after a discussion originated on the matplotlib > > mailing list. > > Fundamentally, I need to plot data on irregular (i.e. non equi-spaced > > rectangular) grids. > > I finally was recommended to look at the Delaunay package (see approach2 > > at the link: > > > > http://scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > > ). > > > > The problem is that the approach: > > > > from scipy.sandbox.delaunay import * > > > > does not work (the system does not find the requested module). > > > > Now, I am running Debian testing on my box and I have Python2.3,2.4,2.5 > > installed beside SciPy as taken from the standard repositories. > > Under /usr/lib/python2.4/site-packages/scipy/sandbox I have the file > > setup.py which I copy and paste at the end of the email. > > I try uncommenting the line dealing with Delaunay, but that did not help > > me out (probably it is useful only if I am rebuilding SciPy, which I > > would like to avoid). > > Anyone has experienced the same problem or has any suggestions? > > I am really in need to get this working in order to be able to perform > > some non-trivial data plotting with matplotlib. 
> > Many thanks > > > > Lorenzo > > > > > > > > > > import os > > > > def configuration(parent_package='',top_path=None): > > from numpy.distutils.misc_util import Configuration > > config = Configuration('sandbox',parent_package,top_path) > > > > sandbox_packages = [] > > try: > > sandbox_file = open(os.path.join(config.package_path, > > 'enabled_packages.txt'), 'rU') > > except IOError: > > pass > > else: > > for line in sandbox_file: > > p = line.strip() > > if line.startswith('#'): > > continue > > sandbox_packages.append(p) > > sandbox_file.close() > > > > for p in sandbox_packages: > > config.add_subpackage(p) > > > > # All subpackages should be commented out in the version > > # committed to the repository. This prevents build problems > > # for people who are not actively working with these > > # potentially unstable packages. > > > > # You can put a list of modules you want to always enable in the > > # file 'enabled_packages.txt' in this directory (you'll have to > > create it). > > # Since this isn't under version control, it's less likely you'll > > # check it in and screw other people up :-) > > > > # An example package: > > #config.add_subpackage('exmplpackage') > > > > # Monte Carlo package > > #config.add_subpackage('montecarlo') > > > > # PySparse fork with NumPy compatibility > > #config.add_subpackage('pysparse') > > > > # Robert Kern's corner: > > #config.add_subpackage('rkern') > > > > # ODRPACK > > #config.add_subpackage('odr') > > > > # Delaunay triangulation and Natural Neighbor interpolation > > config.add_subpackage('delaunay') > > > > # Gist-based plotting library for X11 > > #config.add_subpackage('xplt') > > > > # elementwise numerical expressions > > #config.add_subpackage('numexpr') > > > > # Statistical models > > #config.add_subpackage('models') > > > > # Adaptation of Scientific.IO (2.4.9) to use NumPy > > #config.add_subpackage('netcdf') > > > > # Finite Difference Formulae package > > #config.add_subpackage('fdfpack') > > > > # Package with useful constants and unit-conversions defined > > #config.add_subpackage('constants') > > > > # Interpolating between sparse samples > > #config.add_subpackage('buildgrid') > > > > # Package for Support Vector Machine > > #config.add_subpackage('svm') > > > > # Package for Gaussian Mixture Models > > #config.add_subpackage('pyem') > > > > # David Cournapeau's corner: autocorrelation, lpc, lpc residual > > #config.add_subpackage('cdavid') > > > > # New spline package (based on scipy.interpolate) > > #config.add_subpackage('spline') > > > > return config > > > > if __name__ == '__main__': > > from numpy.distutils.core import setup > > setup(**configuration(top_path='').todict()) > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- > Dominik Szczerba, Ph.D. 
> Computer Vision Lab CH-8092 Zurich > http://www.vision.ee.ethz.ch/~domi > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Tue Jun 26 17:29:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Jun 2007 16:29:07 -0500 Subject: [SciPy-user] How to Enable Delaunay Package In-Reply-To: <20070626203815.GA22872@giton> References: <468170B1.6000303@gmail.com> <46817183.8090505@vision.ee.ethz.ch> <20070626203815.GA22872@giton> Message-ID: <46818523.1080004@gmail.com> Gabriel Gellner wrote: > Could you give a quick example? > I would love to have a second way of doing this. . . > > Gabriel > > On Tue, Jun 26, 2007 at 10:05:23PM +0200, Dominik Szczerba wrote: >> I use VTK for this purpose (there are python bindings). VTK can do a Delaunay triangulation, but it will not do the natural neighbor interpolation, which is what the OP is ultimately trying to do. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From domi at vision.ee.ethz.ch Tue Jun 26 18:35:20 2007 From: domi at vision.ee.ethz.ch (Dominik Szczerba) Date: Wed, 27 Jun 2007 00:35:20 +0200 Subject: [SciPy-user] How to Enable Delaunay Package In-Reply-To: <20070626203815.GA22872@giton> References: <468170B1.6000303@gmail.com> <46817183.8090505@vision.ee.ethz.ch> <20070626203815.GA22872@giton> Message-ID: <468194A8.1000509@vision.ee.ethz.ch> Go to the VTK online documentation; there are plenty of Python examples. - Dominik Gabriel Gellner wrote: > Could you give a quick example? > I would love to have a second way of doing this. . . > > Gabriel > > On Tue, Jun 26, 2007 at 10:05:23PM +0200, Dominik Szczerba wrote: >> I use VTK for this purpose (there are python bindings). >> - Dominik >> >> Lorenzo Isella wrote: >>> Dear All, >>> I am posting this after a discussion originated on the matplotlib >>> mailing list. >>> Fundamentally, I need to plot data on irregular (i.e. non equi-spaced >>> rectangular) grids. >>> I finally was recommended to look at the Delaunay package (see approach2 >>> at the link: >>> >>> http://scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data >>> ). >>> >>> The problem is that the approach: >>> >>> from scipy.sandbox.delaunay import * >>> >>> does not work (the system does not find the requested module). >>> >>> Now, I am running Debian testing on my box and I have Python2.3,2.4,2.5 >>> installed beside SciPy as taken from the standard repositories. >>> Under /usr/lib/python2.4/site-packages/scipy/sandbox I have the file >>> setup.py which I copy and paste at the end of the email. >>> I try uncommenting the line dealing with Delaunay, but that did not help >>> me out (probably it is useful only if I am rebuilding SciPy, which I >>> would like to avoid). >>> Anyone has experienced the same problem or has any suggestions? >>> I am really in need to get this working in order to be able to perform >>> some non-trivial data plotting with matplotlib.
>>> Many thanks >>> >>> Lorenzo >>> >>> >>> >>> >>> import os >>> >>> def configuration(parent_package='',top_path=None): >>> from numpy.distutils.misc_util import Configuration >>> config = Configuration('sandbox',parent_package,top_path) >>> >>> sandbox_packages = [] >>> try: >>> sandbox_file = open(os.path.join(config.package_path, >>> 'enabled_packages.txt'), 'rU') >>> except IOError: >>> pass >>> else: >>> for line in sandbox_file: >>> p = line.strip() >>> if line.startswith('#'): >>> continue >>> sandbox_packages.append(p) >>> sandbox_file.close() >>> >>> for p in sandbox_packages: >>> config.add_subpackage(p) >>> >>> # All subpackages should be commented out in the version >>> # committed to the repository. This prevents build problems >>> # for people who are not actively working with these >>> # potentially unstable packages. >>> >>> # You can put a list of modules you want to always enable in the >>> # file 'enabled_packages.txt' in this directory (you'll have to >>> create it). >>> # Since this isn't under version control, it's less likely you'll >>> # check it in and screw other people up :-) >>> >>> # An example package: >>> #config.add_subpackage('exmplpackage') >>> >>> # Monte Carlo package >>> #config.add_subpackage('montecarlo') >>> >>> # PySparse fork with NumPy compatibility >>> #config.add_subpackage('pysparse') >>> >>> # Robert Kern's corner: >>> #config.add_subpackage('rkern') >>> >>> # ODRPACK >>> #config.add_subpackage('odr') >>> >>> # Delaunay triangulation and Natural Neighbor interpolation >>> config.add_subpackage('delaunay') >>> >>> # Gist-based plotting library for X11 >>> #config.add_subpackage('xplt') >>> >>> # elementwise numerical expressions >>> #config.add_subpackage('numexpr') >>> >>> # Statistical models >>> #config.add_subpackage('models') >>> >>> # Adaptation of Scientific.IO (2.4.9) to use NumPy >>> #config.add_subpackage('netcdf') >>> >>> # Finite Difference Formulae package >>> #config.add_subpackage('fdfpack') >>> >>> # Package with useful constants and unit-conversions defined >>> #config.add_subpackage('constants') >>> >>> # Interpolating between sparse samples >>> #config.add_subpackage('buildgrid') >>> >>> # Package for Support Vector Machine >>> #config.add_subpackage('svm') >>> >>> # Package for Gaussian Mixture Models >>> #config.add_subpackage('pyem') >>> >>> # David Cournapeau's corner: autocorrelation, lpc, lpc residual >>> #config.add_subpackage('cdavid') >>> >>> # New spline package (based on scipy.interpolate) >>> #config.add_subpackage('spline') >>> >>> return config >>> >>> if __name__ == '__main__': >>> from numpy.distutils.core import setup >>> setup(**configuration(top_path='').todict()) >>> >>> >>> >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >> -- >> Dominik Szczerba, Ph.D. >> Computer Vision Lab CH-8092 Zurich >> http://www.vision.ee.ethz.ch/~domi >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Dominik Szczerba, Ph.D. 
Computer Vision Lab CH-8092 Zurich http://www.vision.ee.ethz.ch/~domi From jelle.feringa at ezct.net Tue Jun 26 18:41:50 2007 From: jelle.feringa at ezct.net (jelle) Date: Tue, 26 Jun 2007 22:41:50 +0000 (UTC) Subject: [SciPy-user] How to Enable Delaunay Package References: <468170B1.6000303@gmail.com> <46817183.8090505@vision.ee.ethz.ch> <20070626203815.GA22872@giton> <468194A8.1000509@vision.ee.ethz.ch> Message-ID: Since there is now an unstable branch of the eggs repository, would it be entirely unreasonable to suggest providing eggs for the sandboxed modules? Actually, it might make sense, since it's so easy to update these... -jelle From robert.kern at gmail.com Tue Jun 26 18:44:56 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Jun 2007 17:44:56 -0500 Subject: [SciPy-user] How to Enable Delaunay Package In-Reply-To: References: <468170B1.6000303@gmail.com> <46817183.8090505@vision.ee.ethz.ch> <20070626203815.GA22872@giton> <468194A8.1000509@vision.ee.ethz.ch> Message-ID: <468196E8.1050101@gmail.com> jelle wrote: > Since there is now an unstable branch of the eggs repository, would it be > entirely unreasonable to suggest to provide eggs for the sandboxed modules? Whose egg repository? > Actually, it might make sense, since its so easy to update these... Not as long as scipy is a monolithic package, no. Better to move the independent sandboxed packages out to scikits so they can each be installed by themselves. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ckkart at hoc.net Tue Jun 26 19:47:41 2007 From: ckkart at hoc.net (Christian K) Date: Wed, 27 Jun 2007 08:47:41 +0900 Subject: [SciPy-user] Another leastsq Jacobian bug In-Reply-To: References: <20070623084829.GO20362@mentat.za.net> Message-ID: Lin Shao wrote: > No I didn't. The simplest refute to your answer is that your J[1] is > -2*params[0]*(xx*params[1]-params[1]^2). How could there be a term > with params[1]'s quadratic in the derivative if the original function > is a quadratic function? > > On 6/25/07, Christian K wrote: >> Lin Shao wrote: >>>>> ## Now define my Jacobian >>>>> def Jacobian(params,xx,yy,mode='col'): >>>>> J = N.empty((len(params),xx.size)) >>>>> J[0] = (xx-params[1])**2 >>>>> J[1] = -2*params[0]*(xx-params[1]) >>>> shouldn't that be -2*params[0]*(xx-params[1])*params[1] ? >>> I think I was right. Think about what's the derivative of -x^2 -- it's -2x >> you forgot about the chain rule . You're right. Sorry. Christian From c-b at asu.edu Tue Jun 26 21:05:41 2007 From: c-b at asu.edu (Christopher Brown) Date: Tue, 26 Jun 2007 18:05:41 -0700 Subject: [SciPy-user] signal.butter questions In-Reply-To: References: Message-ID: <4681B7E5.5020903@asu.edu> Hi Ryan, RK> I have two questions about signal.butter. First, I don't see it in RK> the docstring, but I am assuming the Wn must be normalized by RK> dividing by the Nyquist frequency. Is that true? It seems like RK> that is what buttord is saying and I assume they use the same RK> conventions. I am very, very new to scipy (and python), but this is almost certainly the case. RK> Second, what is meant by the analog keyword? This is (must be) to distinguish between digital and analog filters.
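Coming back to the first question, here is a minimal sketch of the usage I would expect (untested, and assuming Wn really is the cutoff divided by the Nyquist frequency, as you guessed):

from scipy import signal

fs = 8000.0             # sampling rate in Hz (made-up value for the example)
cutoff = 1000.0         # desired cutoff in Hz
wn = cutoff/(fs/2.0)    # normalize by the Nyquist frequency, fs/2

# 4th-order lowpass Butterworth filter; analog=0 (the default,
# as far as I can tell) selects a digital design
b, a = signal.butter(4, wn, btype='low', analog=0)

# the coefficients then go to lfilter, e.g. y = signal.lfilter(b, a, x)

On the digital versus analog distinction, this chapter is a good overview: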
http://www.dspguide.com/ch21/1.htm -- Chris From ryanlists at gmail.com Tue Jun 26 21:10:46 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 26 Jun 2007 20:10:46 -0500 Subject: [SciPy-user] signal.butter questions In-Reply-To: <4681B7E5.5020903@asu.edu> References: <4681B7E5.5020903@asu.edu> Message-ID: Thanks for your thoughts, Chris. The idea of an analog filter implemented in software just doesn't make any sense to me. Ryan On 6/26/07, Christopher Brown wrote: > Hi Ryan, > > RK> I have two questions about signal.butter. First, I don't see it in > RK> the docstring, but I am assuming the Wn must be normalized by > RK> dividing by the Nyquist frequency. Is that true? It seems like > RK> that is what buttord is saying and I assume they use the same > RK> conventions. > > I am very, very new to scipy (and python), but this is almost certainly > the case. > > RK> Second, what is meant by the analog keyword? > > This is (must be) to distinguish between digital and analog filters. > http://www.dspguide.com/ch21/1.htm > > -- > Chris > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Thu Jun 28 07:31:57 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 28 Jun 2007 13:31:57 +0200 Subject: [SciPy-user] verify whether a matrix is positive definite or not Message-ID: <46839C2D.8000109@iam.uni-stuttgart.de> Hi all, I have a parameter-dependent matrix B(x) = B_0 + x B_1, 0 \le x \le 1 where B_0 and B_1 are symmetric. How can I determine critical values x* (if any) such that B(x*) is not positive definite? from scipy import * def B(x): return array(([[11.,8.],[8.,7.]])) - x*array(([[20.,1.],[1.,26]])) X = linspace(0,1,100) for x in X: print x L=linalg.cholesky(B(x),lower=1) I mean it would be nice if cholesky could return info=1 if the matrix is not spd. The current behaviour is Traceback (most recent call last): File "test_spd.py", line 11, in ? L=linalg.cholesky(B(x),lower=1) File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line 552, in cholesky if info>0: raise LinAlgError, "matrix not positive definite" numpy.linalg.linalg.LinAlgError: matrix not positive definite Helpful suggestions would be appreciated. Nils From emanuelez at gmail.com Thu Jun 28 10:23:15 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 28 Jun 2007 16:23:15 +0200 Subject: [SciPy-user] lambda forms? Message-ID: Hello, I'm trying to define a function to calculate the sum of some Gaussians given their parameters. Something like: def gaussian(height, center_x, center_y, width): """Returns a gaussian function with the given parameters""" width = float(width) return lambda x,y: sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2)) where all the parameters are arrays of the same length (the length is variable). This version of course does not work, but I wonder if it is still possible using lambda forms or if I have to write the function in an explicit way. Emanuele From openopt at ukr.net Thu Jun 28 10:27:17 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 28 Jun 2007 17:27:17 +0300 Subject: [SciPy-user] lambda forms?
In-Reply-To: References: Message-ID: <4683C545.10901@ukr.net> works for me (however, I used 1-line: return lambda x,y:sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2)) ) HTH, D Emanuele Zattin wrote: > Hello, > i'm trying to define a function to calculate the sum of some gaussian > given their paramers. Something like: > > def gaussian(height, center_x, center_y, width): > """Returns a gaussian function with the given parameters""" > width = float(width) > return lambda x,y: > sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2)) > > where all the parameters are arrays of the same length (the length is variable). > This version of course does not work, but i wonder if it is still > possible using lambda forms or if i have to write the function in an > explicit way. > > Emanuele > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From aisaac at american.edu Thu Jun 28 10:40:50 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 28 Jun 2007 10:40:50 -0400 Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link) In-Reply-To: References: Message-ID: On Mon, 25 Jun 2007, Matthieu Brucher apparently wrote: > I did not improve much of the code, but here is a link to the scikit : > http://download.gna.org/pypeline/ You are calling this a SciKit, but I believe the code is not yet placed in the SciKits repository http://projects.scipy.org/scipy/scikits/browser/trunk Am I right? I think that would be a better way to "expose" it. Cheers, Alan Isaac From emanuelez at gmail.com Thu Jun 28 10:38:19 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 28 Jun 2007 16:38:19 +0200 Subject: [SciPy-user] lambda forms? In-Reply-To: <4683C545.10901@ukr.net> References: <4683C545.10901@ukr.net> Message-ID: hmmm.... then it must be something else in the optimization part... In [23]: run cutoff --------------------------------------------------------------------------- Traceback (most recent call last) /home/emanuelez/Tesi/Code/cutoff.py in () 175 # FIND OBJECTS PROPERTIES 176 # ----------------------- --> 177 get_objects_info(blurred, 2, obj_x, obj_y, obj_v) 178 179 /home/emanuelez/Tesi/Code/cutoff.py in get_objects_info(image, size, obj_x, obj_y, obj_v) 144 #for indices in max_list: 145 ml = array(max_list) --> 146 params = fitgaussian(neigh, obj_x[ml], obj_y[ml], obj_v[ml]) 147 print len(max_list), params 148 /home/emanuelez/Tesi/Code/cutoff.py in fitgaussian(data, obj_x, obj_y, obj_v) 125 errorfunction = lambda p: ravel(gaussian(*p)(*indices(data.shape)) - 126 data) --> 127 p, success = leastsq(errorfunction, params) 128 return p 129 /usr/lib/python2.5/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag) 264 if (maxfev == 0): 265 maxfev = 200*(n+1) --> 266 retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) 267 else: 268 if col_deriv: : object too deep for desired array WARNING: Failure executing file: On 6/28/07, dmitrey wrote: > works for me > (however, I used 1-line: > > return lambda x,y:sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2)) > > ) > HTH, D > > Emanuele Zattin wrote: > > Hello, > > i'm trying to define a function to calculate the sum of some gaussian > > given their paramers.
Something like: > > > > def gaussian(height, center_x, center_y, width): > > """Returns a gaussian function with the given parameters""" > > width = float(width) > > return lambda x,y: > > sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2)) > > > > where all the parameters are arrays of the same length (the length is variable). > > This version of course does not work, but i wonder if it is still > > possible using lambda forms or if i have to write the function in an > > explicit way. > > > > Emanuele > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From c.j.lee at tnw.utwente.nl Thu Jun 28 10:49:26 2007 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 28 Jun 2007 16:49:26 +0200 Subject: [SciPy-user] 3D density calculation In-Reply-To: <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com> References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl> <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com> <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com> Message-ID: <4683CA76.5000109@tnw.utwente.nl> Hi David, histogramdd does exactly what I want and seems to be fast enough. However, I think it may have an intermittent bug: if I have some data with a shape (x,4) where x varies from call to call and I specify 4 bin numbers bins=(binx, biny, binz, bint) and then call hist, edges = histogramdd(data, bins=(binx, biny, binz, bint)) then most of the time hist will have the shape (binx, biny, binz, bint), but sometimes it will return a different shape, e.g., (bint, binz, biny, binx), and the edges array order will be different again, e.g., (x, y, t, z). Here is the chunk of code that generates this (with appropriate non-null data, of course): densityHistogram, edges = numpy.histogramdd(data, bins=(xBins,yBins,zBins,tBins)) print densityHistogram.shape SHGPhotons = n.asarray(densityHistogram*0.01, n.int32) totalPhotons = SHGPhotons.sum() print SHGPhotons.shape if totalPhotons>0: idxSHG = n.nonzero(SHGPhotons) #print idxSHG xEdges = edges[0] yEdges = edges[1] zEdges = edges[2] tEdges = edges[3] #print zEdges print xEdges.shape, yEdges.shape, zEdges.shape, tEdges.shape Here is a typical (non-error-generating) output: 20 20 1 1 <- number of bins in order (20, 20, 1, 1) <- shape of histogram (20, 20, 1, 1) <- shape of an array generated from the histogram (21,) (21,) (2,) (2,) <- shape of the individual edge arrays in order Here is the version that puts out an error: 21 20 2 3 <- bins in order (3, 2, 21, 20) <- histogram is backwards (3, 2, 21, 20) <- as is the array generated from it (22,) (21,) (3,) (4,) <- edge arrays are in order though So, is this an error, or am I assuming too much about how histogramdd operates? I will start checking the order in the program, but this will never be perfect since the number of bins could be the same for multiple axes. Thanks for any advice Cheers Chris David Huard wrote: > Hi Chris, > > Have you tried numpy.histogramdd ? If its still too slow, I have a > fortran implementation on the back burner. I could try to finish it > quickly and send you a preliminary version.
> > you could try a grid of unit cells that cover your phase space > (x,y,z,t). Count the number of photons per unit cell of your > initial configuration and track photons leaving and entering a > particular cell. A dictionary with a tuple of x,y,z,t coordinates > obtained from integer division of the x,y,z,t coordinates could > serve as keys. > > Example for 2-D: > > from numpy import * > # phase space in x,y > x = arange(-100,100.1,.1) > y = arange(-100,100.1,.1) > # cell dimension in both dimensions the same > GRID_WIDTH=7.5 > > # computes the grid key from x,y coordinates > def gridKey(x,y): > '''return the a tuple of x,y integer divided by GRID_WIDHT''' > return (int(x // GRID_WIDTH), int(y // GRID_WIDTH)) > > # setup your grid dictionary > gridLowX, gridHighX = gridKey(min(x), max(x)) > gridLowY, gridHighY = gridKey(min(y), max(y)) > keys = [(i,j) for i in xrange(gridLowX, gridHighX + 1) \ > for j in xrange(gridLowY, gridHighY + 1)] > grid = dict().fromkeys(keys, 0) > > # random photons > photons = random.uniform(-100.,100., (100000,2)) > > # count photons in each grid cell > for p in photons: > grid[gridKey(*p)] += 1 > > ######################################### > # in your simulation you have to keep track of where your photons > # are going to... > # (the code below won't run, it's just an example) > ######################################### > oldKey = gridKey(photon) > propagate(photon) # changes x,y coordinates of photon > newKey = gridKey(photon) > if oldKey != newKey: > grid[oldKey] -= 1 > grid[newKey] += 1 > > I hope this helps! Bernhard > > > On 6/15/07, * Chris Lee* < c.j.lee at tnw.utwente.nl > > wrote: > > Hi everyone, > > I was hoping this list could point me in the direction of a more > efficient solution to a problem I have. > > I have 4 vectors: x, y, z, and t that are about 1 million in > length > that describe the positions of photons. As my simulation > progresses > it updates the positions so x, y, z, and t change by an > unknown (and > unknowable) amount every update. > > This worked very well for its original purpose but now I need to > calculate the photon density change over time. Currently > after each > update, I iterate over time slices, x slices, and y slices and > then > make an histogram of z which I then stitch together to create a > density. However, this becomes very slow as the photons > spread out > in space and time. > > Does anyone know how to take such a large vector set and return a > density efficiently? > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: c.j.lee.vcf Type: text/x-vcard Size: 174 bytes Desc: not available URL: From peridot.faceted at gmail.com Thu Jun 28 10:54:50 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 28 Jun 2007 10:54:50 -0400 Subject: [SciPy-user] verify whether a matrix is positive definite or not In-Reply-To: <46839C2D.8000109@iam.uni-stuttgart.de> References: <46839C2D.8000109@iam.uni-stuttgart.de> Message-ID: On 28/06/07, Nils Wagner wrote: > Hi all, > > I have a parameter-dependent matrix > > B(x) = B_0 + x B_1, 0 \le x \le 1 > > where B_0 and B_1 are symmetric. How can I determine critical values x* > (if any) such that B(x*) is not positive definite ? > > > from scipy import * > > def B(x): > > return array(([[11.,8.],[8.,7.]])) - x*array(([[20.,1.],[1.,26]])) > > X = linspace(0,1,100) > > for x in X: > print x > L=linalg.cholesky(B(x),lower=1) > > I mean it would be nice if cholesky could return info=1 if the matrix is > not spd. > The current behaviour is > > Traceback (most recent call last): > File "test_spd.py", line 11, in ? > L=linalg.cholesky(B(x),lower=1) > File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line > 552, in cholesky > if info>0: raise LinAlgError, "matrix not positive definite" > numpy.linalg.linalg.LinAlgError: matrix not positive definite > > Helpful suggestions would be appreciated. Well, you can always use try/except to catch the LinAlgError. It's remotely possible that cholesky might fail to converge and throw a different LinAlgError which you would want to re-raise. You can also look at the eigenvalues - the matrix is positive definite if and only if they're all positive. So making a function that takes a parameter and returns the least eigenvalue should give you a relatively smooth function to do root-finding on. With symmetric matrices, eigenvalue finding ought to be fairly reliable. For this particular case, note that the positive-definite matrices form a cone, that is, the sum of two positive-definite matrices, or a positive multiple of one, is also positive definite. In particular this means it's convex. So if you're tracing the line between two endpoints, as you are here (the endpoints are B_0 and B_0+B_1), you can check the endpoints and know that the matrix is positive definite between them if they're both positive definite. If one is positive definite and the other isn't, then clearly there's some point in between where they stop being positive definite. If neither is positive definite, then it's possible that some positive definite matrix lies between them (good luck finding it; you could try numerically maximizing the least eigenvalue). Anne M. Archibald From david.huard at gmail.com Thu Jun 28 11:31:01 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 28 Jun 2007 11:31:01 -0400 Subject: [SciPy-user] 3D density calculation In-Reply-To: <4683CA76.5000109@tnw.utwente.nl> References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl> <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com> <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com> Message-ID: <91cf711d0706280831t4f2a16ddh16aa3f4c5cf65b32@mail.gmail.com> Hi Chris, It's a bug alright, but I believe it has been fixed. What version of numpy are you using? If you're not using the latest release, please try it out and see if the bug is still there. The bug had to do with the reordering of the flattened array into an N-D array, which is bin length dependent.
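For illustration, here is the flavour of the problem (a toy sketch of the reordering, not the actual histogramdd internals): the flattened bin counts were reshaped with the bin lengths taken in the wrong order, something like

import numpy

counts = numpy.arange(6)        # flattened counts for a 2x3 histogram
print counts.reshape((2, 3))    # bin lengths in the intended order
print counts.reshape((3, 2))    # same flat data, bin lengths permuted

and the two reshapes place the counts in different bins.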
Cheers, David 2007/6/28, Chris Lee : > > Hi David, > > histrogramdd does exactly what I want and seems to be fast enough. > However, I think it may have an irregular bug > > if I have some data with a shape (x,4) where x varies from call to call > and I specify 4 bin numbers bins=(binx, biny, binz, bint) and then call > > hist, edges = histrogramdd(data, bins=(binx, biny, binz, bint))) > > then most of the time hist will have the shape (binx, biny, binz, bint) > but sometimes it will return a different shape e.g., (bint, binz, biny, > binx) and the edges array order will be different again e.g., (x, y, t, > z). > > Here is the chunk of code that generates this (with appropriate an > appropriate non null data of course) > > densityHistogram, edges = numpy.histogramdd(data, > bins=(xBins,yBins,zBins,tBins)) > print densityHistogram.shape > SHGPhotons = n.asarray(densityHistogram*0.01, n.int32) > totalPhotons = SHGPhotons.sum() > print SHGPhotons.shape > if totalPhotons>0: > idxSHG = n.nonzero(SHGPhotons) > #print idxSHG > xEdges = edges[0] > yEdges = edges[1] > zEdges = edges[2] > tEdges = edges[3] > #print zEdges > print xEdges.shape, yEdges.shape, zEdges.shape, tEdges.shape > > Here is a typical (non error generating output) > 20 20 1 1 <- number of bins in order > (20, 20, 1, 1) <- shape of histogram > (20, 20, 1, 1) <- shape of an array generated from the histrogram > (21,) (21,) (2,) (2,) <- shape of the inidividual edge arrays in order > > here is the version that puts out an error > 21 20 2 3 <- bins in order > (3, 2, 21, 20) <- histogram is backwards > (3, 2, 21, 20) <- as is the array generated from it > (22,) (21,) (3,) (4,) <- edge arrays are in order though > > So, is this an error, or am I assuming too much about how histogramdd > operates? > > I will start checking the order in the program, but this will never be > perfect since the number of bins could be the same for multiple axis. > > Thanks for any advice > Cheers > Chris > > > > David Huard wrote: > > Hi Chris, > > > > Have you tried numpy.histogramdd ? If its still too slow, I have a > > fortran implementation on the back burner. I could try to finish it > > quickly and send you a preliminary version. > > > > Other thought: the kernel density estimator scipy.stats.gaussian_kde > > > > David > > > > 2007/6/17, Bernhard Voigt > >: > > > > Hi Chris! > > > > you could try a grid of unit cells that cover your phase space > > (x,y,z,t). Count the number of photons per unit cell of your > > initial configuration and track photons leaving and entering a > > particular cell. A dictionary with a tuple of x,y,z,t coordinates > > obtained from integer division of the x,y,z,t coordinates could > > serve as keys. 
> > > > Example for 2-D: > > > > from numpy import * > > # phase space in x,y > > x = arange(-100,100.1,.1) > > y = arange(-100,100.1,.1) > > # cell dimension in both dimensions the same > > GRID_WIDTH=7.5 > > > > # computes the grid key from x,y coordinates > > def gridKey(x,y): > > '''return the a tuple of x,y integer divided by GRID_WIDHT''' > > return (int(x // GRID_WIDTH), int(y // GRID_WIDTH)) > > > > # setup your grid dictionary > > gridLowX, gridHighX = gridKey(min(x), max(x)) > > gridLowY, gridHighY = gridKey(min(y), max(y)) > > keys = [(i,j) for i in xrange(gridLowX, gridHighX + 1) \ > > for j in xrange(gridLowY, gridHighY + 1)] > > grid = dict().fromkeys(keys, 0) > > > > # random photons > > photons = random.uniform(-100.,100., (100000,2)) > > > > # count photons in each grid cell > > for p in photons: > > grid[gridKey(*p)] += 1 > > > > ######################################### > > # in your simulation you have to keep track of where your photons > > # are going to... > > # (the code below won't run, it's just an example) > > ######################################### > > oldKey = gridKey(photon) > > propagate(photon) # changes x,y coordinates of photon > > newKey = gridKey(photon) > > if oldKey != newKey: > > grid[oldKey] -= 1 > > grid[newKey] += 1 > > > > I hope this helps! Bernhard > > > > > > On 6/15/07, * Chris Lee* < c.j.lee at tnw.utwente.nl > > > wrote: > > > > Hi everyone, > > > > I was hoping this list could point me in the direction of a more > > efficient solution to a problem I have. > > > > I have 4 vectors: x, y, z, and t that are about 1 million in > > length > > that describe the positions of photons. As my simulation > > progresses > > it updates the positions so x, y, z, and t change by an > > unknown (and > > unknowable) amount every update. > > > > This worked very well for its original purpose but now I need to > > calculate the photon density change over time. Currently > > after each > > update, I iterate over time slices, x slices, and y slices and > > then > > make an histogram of z which I then stitch together to create a > > density. However, this becomes very slow as the photons > > spread out > > in space and time. > > > > Does anyone know how to take such a large vector set and return > a > > density efficiently? > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From c.j.lee at tnw.utwente.nl Thu Jun 28 13:14:02 2007 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 28 Jun 2007 19:14:02 +0200 Subject: [SciPy-user] 3D density calculation In-Reply-To: <91cf711d0706280831t4f2a16ddh16aa3f4c5cf65b32@mail.gmail.com> References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl> <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com> <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com> <4683CA76.5000109@tnw.utwente.nl> <91cf711d0706280831t4f2a16ddh16aa3f4c5cf65b32@mail.gmail.com> Message-ID: <83092781-DE86-48E0-B13B-A4F541688A43@tnw.utwente.nl> That would be the problem. I just tested it on a machine with an up to date version of numpy and the problem went away. Out of curiosity (and perhaps practically since I have no power to upgrade the other machine I was using) is the buggy output always a transpose of the correct output? Cheers Chris On Jun 28, 2007, at 5:31 PM, David Huard wrote: > Hi Chris, > > It's a bug alright, but I believe it has been fixed. What version > of numpy are you using? If you're not using the latest release, > please try it out and see if it still bugs. The bug had to do with > the reordering of the flattened array into a N-D array, which is > bin length dependent. > > Cheers, > David > > 2007/6/28, Chris Lee : > Hi David, > > histrogramdd does exactly what I want and seems to be fast enough. > However, I think it may have an irregular bug > > if I have some data with a shape (x,4) where x varies from call to > call > and I specify 4 bin numbers bins=(binx, biny, binz, bint) and then > call > > hist, edges = histrogramdd(data, bins=(binx, biny, binz, bint))) > > then most of the time hist will have the shape (binx, biny, binz, > bint) > but sometimes it will return a different shape e.g., (bint, binz, > biny, > binx) and the edges array order will be different again e.g., (x, > y, t, > z). > > Here is the chunk of code that generates this (with appropriate an > appropriate non null data of course) > > densityHistogram, edges = numpy.histogramdd(data, > bins=(xBins,yBins,zBins,tBins)) > print densityHistogram.shape > SHGPhotons = n.asarray(densityHistogram*0.01, n.int32) > totalPhotons = SHGPhotons.sum() > print SHGPhotons.shape > if totalPhotons>0: > idxSHG = n.nonzero(SHGPhotons) > #print idxSHG > xEdges = edges[0] > yEdges = edges[1] > zEdges = edges[2] > tEdges = edges[3] > #print zEdges > print xEdges.shape, yEdges.shape, zEdges.shape, tEdges.shape > > Here is a typical (non error generating output) > 20 20 1 1 <- number of bins in order > (20, 20, 1, 1) <- shape of histogram > (20, 20, 1, 1) <- shape of an array generated from the histrogram > (21,) (21,) (2,) (2,) <- shape of the inidividual edge arrays in order > > here is the version that puts out an error > 21 20 2 3 <- bins in order > (3, 2, 21, 20) <- histogram is backwards > (3, 2, 21, 20) <- as is the array generated from it > (22,) (21,) (3,) (4,) <- edge arrays are in order though > > So, is this an error, or am I assuming too much about how histogramdd > operates? > > I will start checking the order in the program, but this will never be > perfect since the number of bins could be the same for multiple axis. > > Thanks for any advice > Cheers > Chris > > > > David Huard wrote: > > Hi Chris, > > > > Have you tried numpy.histogramdd ? If its still too slow, I have a > > fortran implementation on the back burner. I could try to finish it > > quickly and send you a preliminary version. 
> > > > Other thought: the kernel density estimator scipy.stats.gaussian_kde > > > > David > > > > 2007/6/17, Bernhard Voigt > >: > > > > Hi Chris! > > > > you could try a grid of unit cells that cover your phase space > > (x,y,z,t). Count the number of photons per unit cell of your > > initial configuration and track photons leaving and entering a > > particular cell. A dictionary with a tuple of x,y,z,t > coordinates > > obtained from integer division of the x,y,z,t coordinates could > > serve as keys. > > > > Example for 2-D: > > > > from numpy import * > > # phase space in x,y > > x = arange(-100,100.1,.1) > > y = arange(-100,100.1,.1) > > # cell dimension in both dimensions the same > > GRID_WIDTH=7.5 > > > > # computes the grid key from x,y coordinates > > def gridKey(x,y): > > '''return the a tuple of x,y integer divided by > GRID_WIDHT''' > > return (int(x // GRID_WIDTH), int(y // GRID_WIDTH)) > > > > # setup your grid dictionary > > gridLowX, gridHighX = gridKey(min(x), max(x)) > > gridLowY, gridHighY = gridKey(min(y), max(y)) > > keys = [(i,j) for i in xrange(gridLowX, gridHighX + 1) \ > > for j in xrange(gridLowY, gridHighY + 1)] > > grid = dict().fromkeys(keys, 0) > > > > # random photons > > photons = random.uniform(-100.,100., (100000,2)) > > > > # count photons in each grid cell > > for p in photons: > > grid[gridKey(*p)] += 1 > > > > ######################################### > > # in your simulation you have to keep track of where your > photons > > # are going to... > > # (the code below won't run, it's just an example) > > ######################################### > > oldKey = gridKey(photon) > > propagate(photon) # changes x,y coordinates of photon > > newKey = gridKey(photon) > > if oldKey != newKey: > > grid[oldKey] -= 1 > > grid[newKey] += 1 > > > > I hope this helps! Bernhard > > > > > > On 6/15/07, * Chris Lee* < c.j.lee at tnw.utwente.nl > > > wrote: > > > > Hi everyone, > > > > I was hoping this list could point me in the direction of > a more > > efficient solution to a problem I have. > > > > I have 4 vectors: x, y, z, and t that are about 1 million in > > length > > that describe the positions of photons. As my simulation > > progresses > > it updates the positions so x, y, z, and t change by an > > unknown (and > > unknowable) amount every update. > > > > This worked very well for its original purpose but now I > need to > > calculate the photon density change over time. Currently > > after each > > update, I iterate over time slices, x slices, and y > slices and > > then > > make an histogram of z which I then stitch together to > create a > > density. However, this becomes very slow as the photons > > spread out > > in space and time. > > > > Does anyone know how to take such a large vector set and > return a > > density efficiently? 
> > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > ---------------------------------------------------------------------- > -- > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Thu Jun 28 13:28:36 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 28 Jun 2007 13:28:36 -0400 Subject: [SciPy-user] 3D density calculation In-Reply-To: <83092781-DE86-48E0-B13B-A4F541688A43@tnw.utwente.nl> References: <66C8EB7F-091B-4604-8582-2FCD7EA5D0A2@tnw.utwente.nl> <21a270aa0706170835r128f3c7dja9b42d4b5e76dcdf@mail.gmail.com> <91cf711d0706180606k249a3af3u7e0f5ef0df2ad1ed@mail.gmail.com> <4683CA76.5000109@tnw.utwente.nl> <91cf711d0706280831t4f2a16ddh16aa3f4c5cf65b32@mail.gmail.com> <83092781-DE86-48E0-B13B-A4F541688A43@tnw.utwente.nl> Message-ID: <91cf711d0706281028s38bf00av18a581e726d13e28@mail.gmail.com> 2007/6/28, Chris Lee : > > That would be the problem. I just tested it on a machine with an up to > date version of numpy and the problem went away. > Good. > Out of curiosity (and perhaps practically since I have no power to upgrade > the other machine I was using) is the buggy output always a transpose of the > correct output? > No, it's much worse than that. It's like ordering [1,2,3,4,5,6] to a (2x3) matrix or a (3x2) matrix [[1,2,3] [4,5,6]] [[1,2], [3,4], [5,6]] One is not the transpose of the other. Sorry about that. You could cut and paste the fixed code and import histogramdd from a local module instead of from numpy. David Cheers > Chris > On Jun 28, 2007, at 5:31 PM, David Huard wrote: > > Hi Chris, > > It's a bug alright, but I believe it has been fixed. What version of numpy > are you using? If you're not using the latest release, please try it out and > see if it still bugs. The bug had to do with the reordering of the flattened > array into a N-D array, which is bin length dependent. > > Cheers, > David > > 2007/6/28, Chris Lee : > > > > Hi David, > > > > histrogramdd does exactly what I want and seems to be fast enough. > > However, I think it may have an irregular bug > > > > if I have some data with a shape (x,4) where x varies from call to call > > and I specify 4 bin numbers bins=(binx, biny, binz, bint) and then call > > > > hist, edges = histrogramdd(data, bins=(binx, biny, binz, bint))) > > > > then most of the time hist will have the shape (binx, biny, binz, bint) > > but sometimes it will return a different shape e.g., (bint, binz, biny, > > binx) and the edges array order will be different again e.g., (x, y, t, > > z). 
> > > > Here is the chunk of code that generates this (with appropriate an > > appropriate non null data of course) > > > > densityHistogram, edges = numpy.histogramdd(data, > > bins=(xBins,yBins,zBins,tBins)) > > print densityHistogram.shape > > SHGPhotons = n.asarray(densityHistogram*0.01, n.int32) > > totalPhotons = SHGPhotons.sum() > > print SHGPhotons.shape > > if totalPhotons>0: > > idxSHG = n.nonzero(SHGPhotons) > > #print idxSHG > > xEdges = edges[0] > > yEdges = edges[1] > > zEdges = edges[2] > > tEdges = edges[3] > > #print zEdges > > print xEdges.shape, yEdges.shape, zEdges.shape, tEdges.shape > > > > Here is a typical (non error generating output) > > 20 20 1 1 <- number of bins in order > > (20, 20, 1, 1) <- shape of histogram > > (20, 20, 1, 1) <- shape of an array generated from the histrogram > > (21,) (21,) (2,) (2,) <- shape of the inidividual edge arrays in order > > > > here is the version that puts out an error > > 21 20 2 3 <- bins in order > > (3, 2, 21, 20) <- histogram is backwards > > (3, 2, 21, 20) <- as is the array generated from it > > (22,) (21,) (3,) (4,) <- edge arrays are in order though > > > > So, is this an error, or am I assuming too much about how histogramdd > > operates? > > > > I will start checking the order in the program, but this will never be > > perfect since the number of bins could be the same for multiple axis. > > > > Thanks for any advice > > Cheers > > Chris > > > > > > > > David Huard wrote: > > > Hi Chris, > > > > > > Have you tried numpy.histogramdd ? If its still too slow, I have a > > > fortran implementation on the back burner. I could try to finish it > > > quickly and send you a preliminary version. > > > > > > Other thought: the kernel density estimator scipy.stats.gaussian_kde > > > > > > David > > > > > > 2007/6/17, Bernhard Voigt > > >: > > > > > > Hi Chris! > > > > > > you could try a grid of unit cells that cover your phase space > > > (x,y,z,t). Count the number of photons per unit cell of your > > > initial configuration and track photons leaving and entering a > > > particular cell. A dictionary with a tuple of x,y,z,t coordinates > > > obtained from integer division of the x,y,z,t coordinates could > > > serve as keys. > > > > > > Example for 2-D: > > > > > > from numpy import * > > > # phase space in x,y > > > x = arange(-100,100.1,.1) > > > y = arange(-100,100.1,.1) > > > # cell dimension in both dimensions the same > > > GRID_WIDTH=7.5 > > > > > > # computes the grid key from x,y coordinates > > > def gridKey(x,y): > > > '''return the a tuple of x,y integer divided by GRID_WIDHT''' > > > return (int(x // GRID_WIDTH), int(y // GRID_WIDTH)) > > > > > > # setup your grid dictionary > > > gridLowX, gridHighX = gridKey(min(x), max(x)) > > > gridLowY, gridHighY = gridKey(min(y), max(y)) > > > keys = [(i,j) for i in xrange(gridLowX, gridHighX + 1) \ > > > for j in xrange(gridLowY, gridHighY + 1)] > > > grid = dict().fromkeys(keys, 0) > > > > > > # random photons > > > photons = random.uniform(-100.,100., (100000,2)) > > > > > > # count photons in each grid cell > > > for p in photons: > > > grid[gridKey(*p)] += 1 > > > > > > ######################################### > > > # in your simulation you have to keep track of where your photons > > > # are going to... 
> > >     # (the code below won't run, it's just an example)
> > >     #########################################
> > >     oldKey = gridKey(photon)
> > >     propagate(photon)   # changes x,y coordinates of photon
> > >     newKey = gridKey(photon)
> > >     if oldKey != newKey:
> > >         grid[oldKey] -= 1
> > >         grid[newKey] += 1
> > >
> > >     I hope this helps! Bernhard
> > >
> > >     On 6/15/07, Chris Lee <c.j.lee at tnw.utwente.nl> wrote:
> > >
> > >         Hi everyone,
> > >
> > >         I was hoping this list could point me in the direction of a
> > >         more efficient solution to a problem I have.
> > >
> > >         I have 4 vectors: x, y, z, and t that are about 1 million in
> > >         length that describe the positions of photons. As my
> > >         simulation progresses it updates the positions, so x, y, z,
> > >         and t change by an unknown (and unknowable) amount every
> > >         update.
> > >
> > >         This worked very well for its original purpose, but now I
> > >         need to calculate the photon density change over time.
> > >         Currently, after each update, I iterate over time slices, x
> > >         slices, and y slices and then make a histogram of z, which I
> > >         then stitch together to create a density. However, this
> > >         becomes very slow as the photons spread out in space and
> > >         time.
> > >
> > >         Does anyone know how to take such a large vector set and
> > >         return a density efficiently?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dahl.joachim at gmail.com  Thu Jun 28 15:09:04 2007
From: dahl.joachim at gmail.com (Joachim Dahl)
Date: Thu, 28 Jun 2007 21:09:04 +0200
Subject: [SciPy-user] verify whether a matrix is positive definite or not
In-Reply-To: <46839C2D.8000109@iam.uni-stuttgart.de>
References: <46839C2D.8000109@iam.uni-stuttgart.de>
Message-ID: <47347f490706281209m2e1d957axa9d75bae9429e40a@mail.gmail.com>

Hi Nils,

is this not similar to the eigenvalue problems we discussed off-list?

You can numerically find the critical x (if that's what you want) by
solving

maximize x  s.t.  B0 - x*B1 >= 0

(with sdp this is phrased as minimizing -x, hence c = -1 below).
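Equivalently -- this is my reading of the problem, take it with a grain of
salt -- the critical x* is the smallest generalized eigenvalue of the
pencil (B0, B1), since B0 - x*B1 becomes singular exactly there. With
scipy and the same matrices as below:

from numpy import array
from scipy.linalg import eig

B0 = array([[11., 8.], [8., 7.]])
B1 = array([[20., 1.], [1., 26.]])
w, v = eig(B0, B1)    # generalized eigenvalues: B0 v = w B1 v
print(min(w.real))    # the smallest one is the critical x*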
In CVXOPT you can solve it as follows:

from cvxopt.base import matrix
from cvxopt.solvers import sdp
from cvxopt.lapack import syev

c = matrix(-1.0)
B0 = matrix([ [11.,8.], [8.,7.] ])
B1 = matrix([ [20.,1.], [1.,26.] ])
Gs = [ B1[:] ]
hs = [ B0 ]

sol = sdp(c, Gs=Gs, hs=hs)
x = sol['x']

v = matrix([0., 0.])
syev(B0 - x*B1, v)
print v

- Joachim

On 6/28/07, Nils Wagner wrote:
>
> Hi all,
>
> I have a parameter-dependent matrix
>
> B(x) = B_0 - x B_1,  0 \le x \le 1
>
> where B_0 and B_1 are symmetric. How can I determine critical values x*
> (if any) such that B(x*) is not positive definite?
>
> from scipy import *
>
> def B(x):
>     return array(([[11.,8.],[8.,7.]])) - x*array(([[20.,1.],[1.,26]]))
>
> X = linspace(0,1,100)
>
> for x in X:
>     print x
>     L = linalg.cholesky(B(x),lower=1)
>
> I mean, it would be nice if cholesky could return info=1 if the matrix
> is not spd. The current behaviour is
>
> Traceback (most recent call last):
>   File "test_spd.py", line 11, in ?
>     L=linalg.cholesky(B(x),lower=1)
>   File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py",
> line 552, in cholesky
>     if info>0: raise LinAlgError, "matrix not positive definite"
> numpy.linalg.linalg.LinAlgError: matrix not positive definite
>
> Helpful suggestions would be appreciated.
>
> Nils
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aisaac at american.edu  Thu Jun 28 16:40:03 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 28 Jun 2007 16:40:03 -0400
Subject: [SciPy-user] [SciPy-dev] question about scipy.optimize.line_search
In-Reply-To: <46836BE8.6080501@ukr.net>
References: <46836BE8.6080501@ukr.net>
Message-ID: 

On Thu, 28 Jun 2007, Dmitrey apparently wrote:
> help(line_search) yields
> --------------------------------------------------------------------
> line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(),
> c1=0.0001, c2=0.90000000000000002, amax=50)
>     Find alpha that satisfies strong Wolfe conditions.
>     Uses the line search algorithm to enforce strong Wolfe conditions
>     Wright and Nocedal, 'Numerical Optimization', 1999, pg. 59-60
>     For the zoom phase it uses an algorithm by
>     Outputs: (alpha0, gc, fc)
> --------------------------------------------------------------------
> So I need to know what the other args are, especially gfk (is it the
> gradient at the point xk?), old_fval, old_old_fval (I guess I know what
> c1 & c2 mean)

This is certainly lacking documentation!  A little is here:
http://docs.neuroinf.de/api/scipy/scipy.optimize.optimize-pysrc.html#line_search
Can anyone help Dmitrey more?

Thank you,
Alan Isaac

From dominique.orban at gmail.com  Thu Jun 28 17:00:30 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Thu, 28 Jun 2007 17:00:30 -0400
Subject: [SciPy-user] [SciPy-dev] question about scipy.optimize.line_search
In-Reply-To: 
References: <46836BE8.6080501@ukr.net>
Message-ID: <4684216E.8010803@gmail.com>

Alan G Isaac wrote:
> On Thu, 28 Jun 2007, Dmitrey apparently wrote:
>> So I need to know what the other args are, especially gfk (is it the
>> gradient at the point xk?), old_fval, old_old_fval [...]
> This is certainly lacking documentation!
> Can anyone help Dmitrey more?

Each iteration of a linesearch procedure to satisfy the strong Wolfe
conditions requires an evaluation of f and of its gradient. I have no
idea who coded this and I don't have the book handy at the moment, but I
would guess gfk is the gradient of the objective at the current trial
point. No clue about old_fval and old_old_fval (it doesn't look like my
dream programming style).
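To make that guess concrete, here is an untested sketch of a call, with
the meanings of old_fval and old_old_fval assumed to be objective values
at the current and at a previous iterate:

from numpy import array, dot
from scipy.optimize import line_search

f = lambda x: dot(x, x)         # toy quadratic objective
fprime = lambda x: 2.0*x        # its gradient
xk = array([1.0, 1.0])          # current iterate
gfk = fprime(xk)                # gradient at xk (the guessed meaning of gfk)
pk = -gfk                       # a descent direction
old_fval = f(xk)                # objective at the current iterate (assumed)
old_old_fval = old_fval + 1.0   # objective at a fictitious previous iterate (assumed)

result = line_search(f, fprime, xk, pk, gfk, old_fval, old_old_fval)
print(result)   # the docstring says this holds (alpha0, gc, fc)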
Enforcing the strong Wolfe conditions is not an easy task, it is a
sensitive process, and the algorithm presented in the book is certainly
simplified as much as possible for clarity of exposition. For more robust
software, you would be better off using the implementation of Moré and
Thuente:

Moré, J. J. and Thuente, D. J. 1994. Line search algorithms with
guaranteed sufficient decrease. ACM Trans. Math. Softw. 20, 3 (Sep.
1994), 286-307. DOI= http://doi.acm.org/10.1145/192115.192132

This is Fortran software which you could interface. I did the job in
NLPy (http://nlpy.sf.net). You should be able to reuse my interface.

Dominique

From matthieu.brucher at gmail.com  Fri Jun 29 01:49:33 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 07:49:33 +0200
Subject: [SciPy-user] [SciPy-dev] question about scipy.optimize.line_search
In-Reply-To: <4684216E.8010803@gmail.com>
References: <46836BE8.6080501@ukr.net> <4684216E.8010803@gmail.com>
Message-ID: 

I already told dmitrey, but I'll say it on-list: my optimizers have
several choices for line searches, including the strong Wolfe-Powell
rules.

Matthieu

2007/6/28, Dominique Orban :
> For more robust software, you would be better off using the
> implementation of Moré and Thuente [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthieu.brucher at gmail.com  Fri Jun 29 01:59:04 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 07:59:04 +0200
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: 
References: 
Message-ID: 

> You are calling this a SciKit, but I believe the code is not
> yet placed in the SciKits repository
> http://projects.scipy.org/scipy/scikits/browser/trunk
> Am I right? I think that would be a better way to "expose" it.

You're right, it's not an official scikit. I don't think that anyone in
charge said anything when I asked whether it should be a scikit or not;
only Michael McNeil Forbes did.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openopt at ukr.net  Fri Jun 29 02:04:54 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 09:04:54 +0300
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: 
References: 
Message-ID: <4684A106.7010804@ukr.net>

Can you inform me how this egg should be installed?

(scikits_optimizers-0.5.dev_r700-py2.5.egg)

It has neither INSTALL.txt nor setup.py, which are required per the
scikits page. When I run

python scikits_optimizers-0.5.dev_r700-py2.5.egg

it yields

  File "scikits_optimizers-0.5.dev_r700-py2.5.egg", line 1
SyntaxError: Non-ASCII character '\x89' in file
scikits_optimizers-0.5.dev_r700-py2.5.egg on line 2, but no encoding
declared; see http://www.python.org/peps/pep-0263.html for details

D.

Matthieu Brucher wrote:
> You're right, it's not an official scikit. [...]

From matthieu.brucher at gmail.com  Fri Jun 29 02:11:57 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 08:11:57 +0200
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: <4684A106.7010804@ukr.net>
References: <4684A106.7010804@ukr.net>
Message-ID: 

Eggs should be installed with easy_install, IIRC.
The tarball has a setup.py file, but the egg generation process did not
include it. I'll add additional installation info.

Matthieu

2007/6/29, dmitrey :
> Can you inform me how this egg should be installed?
> (scikits_optimizers-0.5.dev_r700-py2.5.egg) [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openopt at ukr.net  Fri Jun 29 02:21:26 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 09:21:26 +0300
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: 
References: <4684A106.7010804@ukr.net>
Message-ID: <4684A4E6.2070307@ukr.net>

Could you upload your tarball into your file area,
http://download.gna.org/pypeline/ ? I still can't install the egg with
either python2.4 or python2.5; I receive the same error message.

D.

Matthieu Brucher wrote:
> Eggs should be installed with easy_install, IIRC. The tarball has a
> setup.py file, but the egg generation process did not include it.
> I'll add additional installation info. [...]
From matthieu.brucher at gmail.com  Fri Jun 29 02:55:53 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 08:55:53 +0200
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: <4684A4E6.2070307@ukr.net>
References: <4684A106.7010804@ukr.net> <4684A4E6.2070307@ukr.net>
Message-ID: 

Egg files are archives, so you can't execute them. If setuptools is
installed, you should have easy_install.

I'm trying to build a tarball, but it seems there is a catch with sdist
and bdist... The former includes everything in my repository (not only
the optimizers, but everything else...) and the latter saves the absolute
path instead of the relative one...

Matthieu

2007/6/29, dmitrey :
> Could you upload your tarball into your file area,
> http://download.gna.org/pypeline/ ? I still can't install the egg with
> either python2.4 or python2.5; I receive the same error message.
> D.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openopt at ukr.net  Fri Jun 29 02:59:22 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 09:59:22 +0300
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: 
References: <4684A106.7010804@ukr.net> <4684A4E6.2070307@ukr.net>
Message-ID: <4684ADCA.1050909@ukr.net>

Then, maybe, it would be easier to upload the package into the scikits
svn server than to construct a tarball?

D

Matthieu Brucher wrote:
> Egg files are archives, so you can't execute them. If setuptools is
> installed, you should have easy_install. [...]

From matthieu.brucher at gmail.com  Fri Jun 29 03:04:06 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 09:04:06 +0200
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: <4684ADCA.1050909@ukr.net>
References: <4684A106.7010804@ukr.net> <4684A4E6.2070307@ukr.net>
	<4684ADCA.1050909@ukr.net>
Message-ID: 

Yes, it should, although I'm puzzled by setuptools' behaviour. But David
Cournapeau had problems too with sdist and bdist.
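As an aside, since eggs are just zip archives, you can at least inspect
one from Python -- a small sketch, using the filename from dmitrey's
mail:

import zipfile

egg = zipfile.ZipFile('scikits_optimizers-0.5.dev_r700-py2.5.egg')
print(egg.namelist())   # the packaged modules; not something "python file.egg" can run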
2007/6/29, dmitrey :
> Then, maybe, it would be easier to upload the package into the scikits
> svn server than to construct a tarball?
> D
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de  Fri Jun 29 03:06:20 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 29 Jun 2007 09:06:20 +0200
Subject: [SciPy-user] verify whether a matrix is positive definite or not
In-Reply-To: <47347f490706281209m2e1d957axa9d75bae9429e40a@mail.gmail.com>
References: <46839C2D.8000109@iam.uni-stuttgart.de>
	<47347f490706281209m2e1d957axa9d75bae9429e40a@mail.gmail.com>
Message-ID: <4684AF6C.9090504@iam.uni-stuttgart.de>

Joachim Dahl wrote:
> Hi Nils,
>
> is this not similar to the eigenvalue problems we discussed off-list?
> [...]
> - Joachim

Hi Joachim,

Yes indeed. It is related to my previous problem. Thank you very much for
your solution!

Lieven sent me a randomly generated matrix pair (A(x), B(x)) off-list.
However, the matrix B(x) corresponds to the mass matrix in structural
dynamics (my background), which is almost always positive definite.
Hence I was confused by the solution of his example for maximizing the
smallest eigenvalue of (A(x), B(x)) subject to some constraints.

Anyway, I found another way to detect the "border" by bisection. Thanks
to Anne! I haven't tested the code on other examples, so there could be
mistakes.

Nils

S.M. Rump. Verification of Positive Definiteness. BIT Numerical
Mathematics, 46:433-452, 2006.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_spd.py
Type: text/x-python
Size: 256 bytes
Desc: not available
URL: 

From strawman at astraw.com  Fri Jun 29 05:14:15 2007
From: strawman at astraw.com (Andrew Straw)
Date: Fri, 29 Jun 2007 02:14:15 -0700
Subject: [SciPy-user] [scikits] Updated generic optimizer (and egg download link)
In-Reply-To: 
References: <4684A106.7010804@ukr.net> <4684A4E6.2070307@ukr.net>
Message-ID: <4684CD67.3000400@astraw.com>

Matthieu Brucher wrote:
> I'm trying to build a tarball, but it seems there is a catch with sdist
> and bdist... The former includes everything in my repository

This is one of the 'features' I don't like about setuptools - anything
under revision control is included when you do sdist. IIRC, even if you
specifically exclude it in MANIFEST.in.

From nwagner at iam.uni-stuttgart.de  Fri Jun 29 05:15:45 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 29 Jun 2007 11:15:45 +0200
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
Message-ID: <4684CDC1.8080808@iam.uni-stuttgart.de>

Dmitrey,

You asked me to move my inquiry to scipy-user.
How can I improve my script wrt. the results?
How can I provide the gradients?
Is it possible to extend the code to rectangular matrices instead of
vectors?
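For the gradients, what I have in mind are the commented-out lines in the
attached script -- for symmetric A and B these are just the derivatives
of the quadratic forms:

# gradient of f(x) = dot(x, dot(A, x)) is 2*A*x for symmetric A
p.df = lambda x: 2*dot(A, x)
# gradient of h(x) = dot(x, dot(B, x)) - 1.0 is 2*B*x for symmetric B
p.dh = lambda x: 2*dot(B, x)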
python -i test_qpqc.py
starting solver lincher (BSD license) with problem unnamed
itn 0: Fk= 0.285714285714 maxResidual= 0
itn 10 : Fk= 0.0820810434319 maxResidual= 0.00260656014331
N= 1.0 alpha= 0.978713763748
itn 20 : Fk= 0.0810194693773 maxResidual= 1.71023808193e-05
N= 1.0 alpha= 0.132923656348
itn 22 : Fk= 0.0810154618808 maxResidual= 3.51005361554e-06
N= 1.0 alpha= 0.354395906532
solver lincher finished solving the problem unnamed
istop: 4 (|| F[k] - F[k-1] || < funtol)
Solver: Time elapsed = 0.24 CPU Time Elapsed = 0.24
NO FEASIBLE SOLUTION is obtained (max residual = 3.51005361554e-06)

x_opt: [ 0.11973864 0.22980377 0.32143896 0.38732479 0.42178464 0.4223829
0.38829845 0.32320301 0.2311393 0.12059048]
f_opt: [ 0.08101546]

Smallest eigenvalue by symeig 0.081014052771

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_qpqc.py
Type: text/x-python
Size: 869 bytes
Desc: not available
URL: 
-------------- next part --------------
An embedded message was scrubbed...
From: dmitrey 
Subject: Re: Computing eigenvalues by trace minimization
Date: Thu, 28 Jun 2007 17:48:21 +0300
Size: 3426
URL: 

From openopt at ukr.net  Fri Jun 29 05:40:12 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 12:40:12 +0300
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684CDC1.8080808@iam.uni-stuttgart.de>
References: <4684CDC1.8080808@iam.uni-stuttgart.de>
Message-ID: <4684D37C.1010507@ukr.net>

1) You don't need to use .T in dot for 1-D arrays; it works the same
either way:
f = lambda x: dot(x.T,dot(A,x))     # => dot(x,dot(A,x))
h = lambda x: dot(x.T,dot(B,x))-1.0 # => dot(x,dot(B,x))-1.0

2) I have no symeig module; I will try to install it now.

3) I haven't decided yet what to do if the stop criteria funtol or xtol
report true but the constraints are not yet satisfied (the algorithm may
be able to keep decreasing them, but I can't know whether it will do the
trick or how many iterations that would take). So for now you should
decide what contol is OK for your problem, and then set appropriate
contol, funtol and xtol. Note that funtol and xtol are stop criteria,
not the desired tolerance of the solution. So, if you need contol=1e-6,
as is set by default, you need to reduce funtol and xtol (and maybe
gradtol, if it stops your problem before the desired contol is achieved)
from their default values (1e-6) to something like 1e-8.

4) Please wait some time before I implement some changes to df, dc and
dh. I will inform you about an hour later (as well as about my symeig
installation results); then update svn. Use only function values for
now.

5) Some weeks later (I hope) there will be an excellent native OO QPQC
solver (for positive-definite matrices only).

HTH, D.

Nils Wagner wrote:
> Dmitrey,
>
> You asked me to move my inquiry to scipy-user.
> How can I improve my script wrt. the results?
> How can I provide the gradients?
> Is it possible to extend the code to rectangular matrices instead of
> vectors?
> [...]
> ------------------------------------------------------------------------
>
> from scikits.openopt import NLP
> from scipy import diag, ones, identity, rand, trace, dot, shape, zeros
> from scipy.linalg import solve, norm
> from symeig import symeig
>
> #f = lambda x: trace(dot(x.T,dot(A,x)))   # Version for m > 1
> #h = lambda x: dot(x.T,dot(B,x))-identity(m)
> f = lambda x: dot(x.T,dot(A,x))
> h = lambda x: dot(x.T,dot(B,x))-1.0
>
> n = 10   # order of A and B
> m = 1    # subspace dimension  # m > 1 doesn't work
>
> A = diag(2*ones(n))-diag(ones(n-1),-1)-diag(ones(n-1),1)
> B = identity(n)
>
> #x0 = rand(n,m)   # Initial subspace
> b = zeros(n)
> b[-1] = 1.
> x0 = solve(A,b)
> x0 = x0/norm(x0)
> p = NLP(f, x0, h=h)
>
> # Providing gradients is appreciated.
> #p.df = lambda x: 2*dot(A,x)
> #p.dh = lambda x: 2*dot(B,x)
>
> r = p.solve('lincher')
> print
> print 'x_opt:',r.xf
> print 'f_opt:',r.ff
> print
>
> w,v = symeig(A,B)
> print
> print 'Smallest eigenvalue by symeig',w[0]
>
> ------------------------------------------------------------------------
>
> Subject: Re: Computing eigenvalues by trace minimization
> From: dmitrey 
> Date: Thu, 28 Jun 2007 17:48:21 +0300
> To: Nils Wagner 
>
> Hi Nils,
> your problem seems to be QPQC - a quadratic problem with quadratic
> constraints. Currently you can use only lincher to solve the problem.
>
> f = lambda x: trace(X^T A X)
> h = lambda x: X^T B X - I_p
>
> p = NLP(f, x0, h=h)
> r = p.solve('lincher')
>
> Providing gradients is appreciated:
> p.df = ...
> p.dh = ...
>
> However, lincher can fail to solve the problem if something like an
> ill-conditioned matrix is encountered.
> Our department's chief, Petro I. Stetsyuk, told me he can provide an
> excellent QPQC algorithm for my Python QPQC solver (based on ralg),
> but currently he is busy with other problems. Maybe it will be ready
> some weeks or months later.
> HTH, D.
>
> Nils Wagner wrote:
>> Hi Dmitrey,
>>
>> This inquiry is not related to my previous problem.
>> Can I use openopt to solve the quadratic minimization problem
>>
>> minimize trace(X^T A X)
>>
>> subject to the constraints
>>
>> X^T B X = I_p,
>>
>> where I_p denotes the identity matrix of order p. A is symmetric
>> positive semidefinite and B is symmetric positive definite.
>> A, B \in \mathds{R}^{n \times n}. A small example would be
>> appreciated.
>>
>> Nils
>>
>> Reference: A. Sameh and J. Wisniewski,
>> A trace minimization algorithm for the generalized eigenvalue problem,
>> SIAM J. Numer. Anal. Vol. 19 (1982) pp. 1243-1259
From openopt at ukr.net  Fri Jun 29 06:12:20 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 13:12:20 +0300
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684CDC1.8080808@iam.uni-stuttgart.de>
References: <4684CDC1.8080808@iam.uni-stuttgart.de>
Message-ID: <4684DB04.3030609@ukr.net>

So, please update svn.
As for your code, I didn't make any changes. You just need to specify the
desired contol and then make funtol, xtol and gradtol small enough. Maybe
in the future I'll implement something more appropriate, so that the
exitflag comes out positive.

BTW, for small-scale problems, using df and dh didn't yield any benefits;
only for nVars = 100 did I get ~6 sec with df, dh provided and 11 sec
without them. For your nVars = 10 the time elapsed is almost the same.

HTH, D.

From nwagner at iam.uni-stuttgart.de  Fri Jun 29 06:42:10 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 29 Jun 2007 12:42:10 +0200
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684DB04.3030609@ukr.net>
References: <4684CDC1.8080808@iam.uni-stuttgart.de>
	<4684DB04.3030609@ukr.net>
Message-ID: <4684E202.1010707@iam.uni-stuttgart.de>

dmitrey wrote:
> So, please update svn. [...]

Hi Dmitrey,

Thank you for your help. BTW, have you managed the installation of
symeig?
http://mdp-toolkit.sourceforge.net/symeig.html

> 1) You don't need to use .T in dot for 1-D arrays:
> f = lambda x: dot(x.T,dot(A,x))     # => dot(x,dot(A,x))
> h = lambda x: dot(x.T,dot(B,x))-1.0 # => dot(x,dot(B,x))-1.0

Let us assume that we are interested in the smallest m eigenvalues
instead of the smallest eigenvalue. Then you will need .T in the case of
rectangular matrices x. Therefore I have asked for an extension.

#f = lambda x: trace(dot(x.T,dot(A,x)))   # Version for m > 1
#h = lambda x: dot(x.T,dot(B,x))-identity(m)

from scipy import dot, rand

n = 10
m = 3
A = rand(n,n)
x = rand(n,m)
res = dot(x.T,dot(A,x))

Nils

From Alexander.Dietz at astro.cf.ac.uk  Fri Jun 29 06:54:53 2007
From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz)
Date: Fri, 29 Jun 2007 11:54:53 +0100
Subject: [SciPy-user] General question on scipy
Message-ID: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>

Hi,

I am relatively new to python, but I would like to use python for
post-processing of data etc. I am using a LINUX machine with python
installed on it, but to use it for scientific analysis and creating
pictures etc. I need to install matplotlib, numpy and scipy.
It was quite difficult to install matplotlib and compile it against numpy
(or something like that), but now I really would like to have a package
that can handle matrix calculations (e.g. matrix multiplication, singular
value decomposition) and similar. The package I then need to install is
scipy.

So I tried to install scipy, but, as I expected, it does not work
(multiple libraries are not found). The whole error message is attached
at the end of the email.

My question: Why do I need to install additional packages? Isn't it
possible to code them up in scipy and use pure python for this? Or is it
for computational reasons (to put computationally intensive calculations
in C or whatever)? Or do most of the functions not yet exist in scipy
(because it's at version 0.XX)? What is the reason for not having an
entirely-pure-python extension to python/numpy?

My second question: How to install scipy? What packages are missing? I am
sure I have numpy installed correctly, but I failed installing LAPACK,
whose error message is here:

g77 aladhd.o alaerh.o alaesm.o alahd.o alareq.o alasum.o alasvm.o
chkxer.o icopy.o ilaenv.o xlaenv.o xerbla.o slaord.o schkaa.o schkeq.o
schkgb.o schkge.o schkgt.o schklq.o schkpb.o schkpo.o schkpp.o schkpt.o
schkq3.o schkql.o schkqp.o schkqr.o schkrq.o schksp.o schksy.o schktb.o
schktp.o schktr.o schktz.o sdrvgb.o sdrvge.o sdrvgt.o sdrvls.o sdrvpb.o
sdrvpo.o sdrvpp.o sdrvpt.o sdrvsp.o sdrvsy.o serrge.o serrgt.o serrlq.o
serrls.o serrpo.o serrql.o serrqp.o serrqr.o serrrq.o serrsy.o serrtr.o
serrtz.o serrvx.o sgbt01.o sgbt02.o sgbt05.o sgelqs.o sgeqls.o sgeqrs.o
sgerqs.o sget01.o sget02.o sget03.o sget04.o sget06.o sget07.o sgtt01.o
sgtt02.o sgtt05.o slaptm.o slarhs.o slatb4.o slattb.o slattp.o slattr.o
slavsp.o slavsy.o slqt01.o slqt02.o slqt03.o spbt01.o spbt02.o spbt05.o
spot01.o spot02.o spot03.o spot05.o sppt01.o sppt02.o sppt03.o sppt05.o
sptt01.o sptt02.o sptt05.o sqlt01.o sqlt02.o sqlt03.o sqpt01.o sqrt01.o
sqrt02.o sqrt03.o sqrt11.o sqrt12.o sqrt13.o sqrt14.o sqrt15.o sqrt16.o
sqrt17.o srqt01.o srqt02.o srqt03.o srzt01.o srzt02.o sspt01.o ssyt01.o
stbt02.o stbt03.o stbt05.o stbt06.o stpt01.o stpt02.o stpt03.o stpt05.o
stpt06.o strt01.o strt02.o strt03.o strt05.o strt06.o stzt01.o stzt02.o \
../../tmglib_LINUX.a ../../lapack_LINUX.a ../../blas_LINUX.a -o ../xlintsts
g77: ../../blas_LINUX.a: No such file or directory

(because there is no blas_LINUX.a file, only lapack_LINUX.a and
tmglib_LINUX.a...)
Thanks
Alex

===== ERROR message ===

mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

fftw3_info:
  libraries fftw3 not found in /usr/local/lib
  libraries fftw3 not found in /usr/lib
  fftw3 not found
  NOT AVAILABLE

fftw2_info:
  libraries rfftw,fftw not found in /usr/local/lib
  libraries rfftw,fftw not found in /usr/lib
  fftw2 not found
  NOT AVAILABLE

dfftw_info:
  libraries drfftw,dfftw not found in /usr/local/lib
  libraries drfftw,dfftw not found in /usr/lib
  dfftw not found
  NOT AVAILABLE

djbfft_info:
  NOT AVAILABLE

blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries lapack,blas not found in /usr/local/lib
  libraries lapack,blas not found in /usr/lib/sse2
  libraries lapack,blas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries lapack,blas not found in /usr/local/lib
  libraries lapack,blas not found in /usr/lib/sse2
  libraries lapack,blas not found in /usr/lib
  NOT AVAILABLE

/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1301:
UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not
found. Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS
environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)

blas_info:
  libraries blas not found in /usr/local/lib
  libraries blas not found in /usr/lib
  NOT AVAILABLE

/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1310:
UserWarning: Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS
environment variable.
  warnings.warn(BlasNotFoundError.__doc__)

blas_src_info:
  NOT AVAILABLE

/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1313:
UserWarning: Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting the
BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)

Traceback (most recent call last):
  File "setup.py", line 55, in ?
    setup_package()
  File "setup.py", line 47, in setup_package
    configuration=configuration )
  File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line
144, in setup
    config = configuration()
  File "setup.py", line 19, in configuration
    config.add_subpackage('Lib')
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py",
line 765, in add_subpackage
    caller_level = 2)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py",
line 748, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py",
line 695, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "./Lib/setup.py", line 7, in configuration
    config.add_subpackage('integrate')
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py",
line 765, in add_subpackage
    caller_level = 2)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py",
line 748, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py",
line 695, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "Lib/integrate/setup.py", line 11, in configuration
    blas_opt = get_info('blas_opt',notfound_action=2)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py",
line 256, in get_info
    return cl().get_info(notfound_action)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py",
line 403, in get_info
    raise self.notfounderror,self.notfounderror.__doc__
numpy.distutils.system_info.NotFoundError: Some third-party program or
library is not found.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ckkart at hoc.net  Fri Jun 29 07:34:15 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 29 Jun 2007 20:34:15 +0900
Subject: [SciPy-user] General question on scipy
In-Reply-To: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
Message-ID: 

Alexander Dietz wrote:
> Hi,
>
> I am relatively new to python, but I would like to use python for
> post-processing of data etc. [...] I really would like to have a
> package that can handle matrix calculations (e.g. matrix
> multiplication, singular value decomposition) and similar. The package
> I then need to install is scipy.

What distribution are you using? For ubuntu, e.g., all third-party
libraries are available, and building numpy/scipy is as simple as
cooking tea.

> My question: Why do I need to install additional packages? Isn't it
> possible to code them up in scipy and use pure python for this? [...]

Speed. And the fact that most of the algorithms have been coded in
fortran many years ago and are known to be stable and good, I think.
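A toy comparison of the gap, in the spirit of the timeit measurements
earlier on this list -- absolute numbers are machine-dependent, the point
is the ratio:

import timeit

setup = "import numpy; x = numpy.random.rand(100000)"
t_py = timeit.Timer("sum(v*v for v in x)", setup=setup).timeit(10)  # interpreted loop
t_np = timeit.Timer("numpy.dot(x, x)", setup=setup).timeit(10)      # compiled BLAS call
print(t_py)
print(t_np)   # expect this to be orders of magnitude below t_py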
I would like to point you to http://www.scipy.org/Installing_SciPy but
unfortunately that link is currently dead. Can somebody please fix it?
Once, on SuSE, I built lapack/atlas based on those instructions and
everything went fine.

> My second question: How to install scipy? What packages are missing?
> I am sure I have numpy installed correctly, but I failed installing
> LAPACK [...]

For numpy no third-party libs are needed, as a 'light' version of blas
is included. For scipy you need either blas/lapack or atlas, and the
fftw libs if you want to use the fft package. So maybe wait until the
webpage mentioned above is back and try again.

Christian

From ckkart at hoc.net  Fri Jun 29 07:35:11 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 29 Jun 2007 20:35:11 +0900
Subject: [SciPy-user] http://www.scipy.org/Installing_SciPy is dead
Message-ID: 

Hi,

I can't access http://www.scipy.org/Installing_SciPy. Could somebody
please check and fix it?

Christian

From david at ar.media.kyoto-u.ac.jp  Fri Jun 29 07:36:35 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 29 Jun 2007 20:36:35 +0900
Subject: [SciPy-user] General question on scipy
In-Reply-To: 
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
Message-ID: <4684EEC3.4040000@ar.media.kyoto-u.ac.jp>

Christian K wrote:
> For numpy no third-party libs are needed, as a 'light' version of blas
> is included. For scipy you need either blas/lapack or atlas, and the
> fftw libs if you want to use the fft package.

fftw is not required: if you have it, it will be used, but you can
install scipy without it.

Alexander, the most needed information from you to help is your
distribution and architecture (eg x86, x86_64, other, etc...).

David

From Alexander.Dietz at astro.cf.ac.uk  Fri Jun 29 08:02:19 2007
From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz)
Date: Fri, 29 Jun 2007 13:02:19 +0100
Subject: [SciPy-user] General question on scipy
In-Reply-To: <4684EEC3.4040000@ar.media.kyoto-u.ac.jp>
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
	<4684EEC3.4040000@ar.media.kyoto-u.ac.jp>
Message-ID: <9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>

Hi,

here is what I am using exactly: FC5 and Linux 2.6.20-1.2316.fc5smp.
Hope this helps. Can I install scipy (and all the dependent libraries)
using 'yum' or something similar? 'yum' itself does not work...

Cheers
Alex

On 6/29/07, David Cournapeau wrote:
> Alexander, the most needed information from you to help is your
> distribution and architecture (eg x86, x86_64, other, etc...).
> David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david at ar.media.kyoto-u.ac.jp  Fri Jun 29 07:59:30 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 29 Jun 2007 20:59:30 +0900
Subject: [SciPy-user] General question on scipy
In-Reply-To: <9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
	<4684EEC3.4040000@ar.media.kyoto-u.ac.jp>
	<9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>
Message-ID: <4684F422.6050905@ar.media.kyoto-u.ac.jp>

Alexander Dietz wrote:
> Hi,
>
> here is what I am using exactly: FC5 and Linux 2.6.20-1.2316.fc5smp.
> Hope this helps. Can I install scipy (and all the dependent libraries)
> using 'yum' or something similar? 'yum' itself does not work...

What do you mean by yum does not work? I have packaged numpy and scipy
for FC 5 over there:

http://software.opensuse.org/download/home:/ashigabou/Fedora_Extras_5/

(more detailed instructions are on the scipy webpage, which
unfortunately is down right now)

David

From openopt at ukr.net  Fri Jun 29 08:11:16 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 15:11:16 +0300
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684E202.1010707@iam.uni-stuttgart.de>
References: <4684CDC1.8080808@iam.uni-stuttgart.de>
	<4684DB04.3030609@ukr.net> <4684E202.1010707@iam.uni-stuttgart.de>
Message-ID: <4684F6E4.3040000@ukr.net>

Nils Wagner wrote:
> Hi Dmitrey,
>
> Thank you for your help. BTW, have you managed the installation of
> symeig?
> http://mdp-toolkit.sourceforge.net/symeig.html

I have installed symeig OK.
As for pysparse: first of all I went to sandbox/pysparse and ran

sudo python setup.py install

then

from pysparse import *

and all works OK.

HTH, D
From matthieu.brucher at gmail.com  Fri Jun 29 08:12:43 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 14:12:43 +0200
Subject: [SciPy-user] General question on scipy
In-Reply-To: <9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
	<4684EEC3.4040000@ar.media.kyoto-u.ac.jp>
	<9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>
Message-ID: 

2007/6/29, Alexander Dietz :
> Can I install scipy (and all the dependent libraries) using 'yum' or
> something similar? [...]

yum install numpy
yum install scipy

But FC5's scipy is old IIRC; can you update to FC 6 at least?

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de  Fri Jun 29 08:16:17 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 29 Jun 2007 14:16:17 +0200
Subject: [SciPy-user] Converting matrices from pysparse
Message-ID: <4684F811.2010908@iam.uni-stuttgart.de>

Hi all,

Just now I have installed pysparse via
http://people.web.psi.ch/geus/pyfemax/download.html. It works fine for
me! Is it possible to convert matrices from the pysparse-specific format
into a format such that I can visualize them with pylab.spy?

>>> type(A)

>>> from pylab import spy, show
>>> spy(A)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line
2281, in spy
    ret = gca().spy(*args, **kwargs)
  File "/usr/lib64/python2.4/site-packages/matplotlib/axes.py", line
5089, in spy
    nr, nc = Z.shape
ValueError: need more than 0 values to unpack

I have used the following example from the website:

from pysparse import itsolvers, poisson, precon, jdsym
A = poisson.poisson2d_sym(200).to_sss()
K = precon.ssor(A)
k_conv, lmbd, Q, it, it_inner = \
    jdsym.jdsym(A, None, K, 5, 0.0, 1e-10, 150, itsolvers.qmrs, clvl=1)

Nils

From nwagner at iam.uni-stuttgart.de  Fri Jun 29 08:21:44 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 29 Jun 2007 14:21:44 +0200
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684F6E4.3040000@ukr.net>
References: <4684CDC1.8080808@iam.uni-stuttgart.de>
	<4684DB04.3030609@ukr.net> <4684E202.1010707@iam.uni-stuttgart.de>
	<4684F6E4.3040000@ukr.net>
Message-ID: <4684F958.3020504@iam.uni-stuttgart.de>

dmitrey wrote:
>> As for pysparse: first of all I went to sandbox/pysparse and ran
>> sudo python setup.py install,
>> then
>> from pysparse import *
>> and all works OK.
So is there a difference between the installation procedures a) cd /path/to/your/sandbox/pysparse ; sudo python setup.py install b) cd /path/to/your/sandbox; edit enabled_packages.txt; add a line with pysparse; cd /path/to/your/scipy; sudo python setup.py install Nils From Alexander.Dietz at astro.cf.ac.uk Fri Jun 29 08:24:48 2007 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Fri, 29 Jun 2007 13:24:48 +0100 Subject: [SciPy-user] General question on scipy In-Reply-To: References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com> <4684EEC3.4040000@ar.media.kyoto-u.ac.jp> <9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com> Message-ID: <9cf809a00706290524p5fbc2e40m754ef6dc2c1994cf@mail.gmail.com> Hi, here is my try: # yum install scipy Loading "installonlyn" plugin Setting up Install Process .... ..... No Match for argument: scipy Nothing to do So there is no scipy found or so. Cheers Alex On 6/29/07, Matthieu Brucher wrote: > > > > 2007/6/29, Alexander Dietz : > > > > Hi, > > > > here is what I am using exactly: FC5 and Linux 2.6.20-1.2316.fc5smp. > > Hope this helps. > > Can I install scipy (and all the dependend libraries) using 'yum' or > > something similar? 'yum' itself does not work... > > > > Cheers > > Alex > > > yum install numpy > yum install scipy > > But FC5's scipy is old IIRC, can you update to FC 6 at least ? > > Matthieu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Jun 29 08:27:00 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 29 Jun 2007 14:27:00 +0200 Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization] In-Reply-To: <4684F958.3020504@iam.uni-stuttgart.de> References: <4684CDC1.8080808@iam.uni-stuttgart.de> <4684DB04.3030609@ukr.net> <4684E202.1010707@iam.uni-stuttgart.de> <4684F6E4.3040000@ukr.net> <4684F958.3020504@iam.uni-stuttgart.de> Message-ID: <4684FA94.3050407@iam.uni-stuttgart.de> Nils Wagner wrote: >>> As for pysparse, 1st of all I went to sandbox/pysparse, run >>> sudo python setup.py install, >>> then >>> from pysparse import * >>> all works ok >>> >>> > Interesting. So is there a difference between the installation procedures > > a) cd /path/to/your/sandbox/pysparse ; sudo python setup.py install > > b) cd /path/to/your/sandbox; edit enabled_packages.txt; add a line with > pysparse; cd /path/to/your/scipy; sudo python setup.py install > > > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I have used version a) to install pysparse from the sandbox. The problem persists. Can someone reproduce this behavior ? >>> from pysparse import * Traceback (most recent call last): File "", line 1, in File "/usr/local/lib64/python2.5/site-packages/pysparse/__init__.py", line 4, in from spmatrix import * ImportError: No module named spmatrix Any pointer how to fix this problem would be appreciated. 
Nils

From matthieu.brucher at gmail.com  Fri Jun 29 08:27:10 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 14:27:10 +0200
Subject: [SciPy-user] General question on scipy
In-Reply-To: <9cf809a00706290524p5fbc2e40m754ef6dc2c1994cf@mail.gmail.com>
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
	<4684EEC3.4040000@ar.media.kyoto-u.ac.jp>
	<9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>
	<9cf809a00706290524p5fbc2e40m754ef6dc2c1994cf@mail.gmail.com>
Message-ID: 

Try python-scipy (or, even better, yum search scipy to find the real
name of the package).

Matthieu

2007/6/29, Alexander Dietz :
> # yum install scipy
> [...]
> No Match for argument: scipy
> Nothing to do
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emanuelez at gmail.com  Fri Jun 29 08:40:22 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Fri, 29 Jun 2007 14:40:22 +0200
Subject: [SciPy-user] object too deep??
Message-ID: 

I have this optimization problem:

this function returns the sum of some gaussians given their parameters
in arrays:

def gaussian(height, center_x, center_y, width):
    """Returns a gaussian function with the given parameters"""
    width = float(width)
    return lambda x,y: sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2))

this function tries to fit given a starting image:

def fitgaussian(data, obj_x, obj_y, obj_v):
    """Returns (height, x, y, width), the gaussian parameters of a 2D
    distribution found by a fit"""
    #params = moments(data)
    params = obj_v, obj_x-obj_x[0]+2, obj_y-obj_y[0]+2, ones(len(obj_x))
    errorfunction = lambda p: ravel(gaussian(*p)(*indices(data.shape)) - data)
    p, success = leastsq(errorfunction, params)
    return p

and I use them with:

# how many maxima here?
max_list = [i]
for j in range(len(obj_x)):
    if obj_x[j] >= x1 and obj_x[j] < x2 and obj_y[j] >= y1 and obj_y[j] < y2 and j != i:
        max_list.append(j)
#for indices in max_list:
ml = array(max_list)
params = fitgaussian(neigh, obj_x[ml], obj_y[ml], obj_v[ml])
print len(max_list), params

but I get an error like:

In [9]: run cutoff
---------------------------------------------------------------------------
Traceback (most recent call last)

/home/emanuelez/Tesi/Code/cutoff.py in <module>()
    174 # FIND OBJECTS PROPERTIES
    175 # -----------------------
--> 176 get_objects_info(blurred, 2, obj_x, obj_y, obj_v)
    177
    178

/home/emanuelez/Tesi/Code/cutoff.py in get_objects_info(image, size, obj_x, obj_y, obj_v)
    143     #for indices in max_list:
    144     ml = array(max_list)
--> 145     params = fitgaussian(neigh, obj_x[ml], obj_y[ml], obj_v[ml])
    146     print len(max_list), params
    147

/home/emanuelez/Tesi/Code/cutoff.py in fitgaussian(data, obj_x, obj_y, obj_v)
    124     params = obj_v, obj_x-obj_x[0]+2, obj_y-obj_y[0]+2, ones(len(obj_x))
    125     errorfunction = lambda p: ravel(gaussian(*p)(*indices(data.shape)) - data)
--> 126     p, success = leastsq(errorfunction, params)
    127     return p
    128

/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
    264     if (maxfev == 0):
    265         maxfev = 200*(n+1)
--> 266     retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
    267     else:
    268     if col_deriv:

<type 'exceptions.ValueError'>: object too deep for desired array
WARNING: Failure executing file: <cutoff.py>

What does "object too deep for desired array" mean? I'm really puzzled
about this.

Thanks for any help or suggestion!

Emanuele

From openopt at ukr.net Fri Jun 29 08:41:35 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 29 Jun 2007 15:41:35 +0300
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684FA94.3050407@iam.uni-stuttgart.de>
References: <4684CDC1.8080808@iam.uni-stuttgart.de> <4684DB04.3030609@ukr.net>
	<4684E202.1010707@iam.uni-stuttgart.de> <4684F6E4.3040000@ukr.net>
	<4684F958.3020504@iam.uni-stuttgart.de> <4684FA94.3050407@iam.uni-stuttgart.de>
Message-ID: <4684FDFF.9040208@ukr.net>

Nils Wagner wrote:
> I have used version a) to install pysparse from the sandbox. The problem
> persists. Can someone reproduce this behavior?
>
>>>> from pysparse import *
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/local/lib64/python2.5/site-packages/pysparse/__init__.py",
> line 4, in <module>
>     from spmatrix import *
> ImportError: No module named spmatrix
>
> Any pointer on how to fix this problem would be appreciated.
>
> Nils

First of all, I suspect version a) is the less appropriate one, because I
guess everybody uses scipy.pysparse, not a standalone pysparse. Second, did
you check that everything compiled OK? I have no other idea about the bug;
it works fine for me.

d.

From Alexander.Dietz at astro.cf.ac.uk Fri Jun 29 08:48:53 2007
From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz)
Date: Fri, 29 Jun 2007 13:48:53 +0100
Subject: [SciPy-user] General question on scipy
In-Reply-To:
References: <9cf809a00706290354l7ff898bexcb4f8adab3041a5d@mail.gmail.com>
	<4684EEC3.4040000@ar.media.kyoto-u.ac.jp>
	<9cf809a00706290502m4207e366xb5d7401f6568bbdb@mail.gmail.com>
	<9cf809a00706290524p5fbc2e40m754ef6dc2c1994cf@mail.gmail.com>
Message-ID: <9cf809a00706290548u42ce009w32e6b279085553c2@mail.gmail.com>

Hi,

I got it to work, finally, thanks to the Installing SciPy page being back
online.
What I did was:

yum install lapack-devel blas-devel
cd scipy-0.5.2
python setup.py install >& install.log

This finally worked. Thanks a lot for helping me install scipy!!!

Alex

On 6/29/07, Matthieu Brucher <matthieu.brucher at gmail.com> wrote:
>
> try python-scipy (or, even better, yum search scipy to find the real
> name of the package)
>
> Matthieu
>
> [...]

From ckkart at hoc.net Fri Jun 29 08:54:42 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 29 Jun 2007 21:54:42 +0900
Subject: [SciPy-user] object too deep??
In-Reply-To:
References:
Message-ID:

Emanuele Zattin wrote:
> I have this optimization problem:
>
> this function returns the sum of some gaussians, given their parameters
> in arrays:
>
> def gaussian(height, center_x, center_y, width):
>     """Returns a gaussian function with the given parameters"""
>     width = float(width)
>     return lambda x,y:
>         sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2))
>
> this function tries to fit, given a starting image:
>
> def fitgaussian(data, obj_x, obj_y, obj_v):
>     """Returns (height, x, y, width)
>     the gaussian parameters of a 2D distribution found by a fit"""
>     #params = moments(data)
>     params = obj_v, obj_x-obj_x[0]+2, obj_y-obj_y[0]+2, ones(len(obj_x))

params is a tuple of some objects which you don't tell us what they are,
and at least one ndarray (ones(...)). This is probably the error: params
has to be a list or array containing only scalars.

>     errorfunction = lambda p: ravel(gaussian(*p)(*indices(data.shape)) - data)
>     p, success = leastsq(errorfunction, params)
>     return p
>
> and I use them with:
>
> # how many maxima here?
> max_list = [i]
> for j in range(len(obj_x)):
>     if obj_x[j] >= x1 and obj_x[j] < x2 and obj_y[j] >= y1 and obj_y[j] < y2 and j != i:
>         max_list.append(j)
> #for indices in max_list:
> ml = array(max_list)
> params = fitgaussian(neigh, obj_x[ml], obj_y[ml], obj_v[ml])
> print len(max_list), params
>
> but I get an error like:
>
> [...]
>
> <type 'exceptions.ValueError'>: object too deep for desired array
> WARNING: Failure executing file: <cutoff.py>
>
> What does "object too deep for desired array" mean? I'm really puzzled
> about this.
>
> Thanks for any help or suggestion!
>
> Emanuele

All those lambdas look pretty complicated to me. But I'll believe you if
you say that this is clever programming :)

Christian

From emanuelez at gmail.com Fri Jun 29 09:06:11 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Fri, 29 Jun 2007 15:06:11 +0200
Subject: [SciPy-user] object too deep??
In-Reply-To:
References:
Message-ID:

Hmm... I see... that must be it. The fact is that the number of gaussians
to fit is not constant, so how can I build a function that will fit them
if params only accepts scalars?

On 6/29/07, Christian K <ckkart at hoc.net> wrote:
> Emanuele Zattin wrote:
> > I have this optimization problem:
> >
> > [...]
>
> params is a tuple of some objects which you don't tell us what they are,
> and at least one ndarray (ones(...)). This is probably the error: params
> has to be a list or array containing only scalars.
>
> All those lambdas look pretty complicated to me. But I'll believe you if
> you say that this is clever programming :)
>
> Christian

From robert.kern at gmail.com Fri Jun 29 10:12:34 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Jun 2007 09:12:34 -0500
Subject: [SciPy-user] http://www.scipy.org/Installing_SciPy is dead
In-Reply-To:
References:
Message-ID: <46851352.30908@gmail.com>

Christian K wrote:
> Hi,
> I can't access http://www.scipy.org/Installing_SciPy. Could somebody
> please check that and fix it?

Done, thanks to Jeff Strunk.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From ckkart at hoc.net Fri Jun 29 10:36:44 2007
From: ckkart at hoc.net (Christian K)
Date: Fri, 29 Jun 2007 23:36:44 +0900
Subject: [SciPy-user] object too deep??
In-Reply-To:
References:
Message-ID:

Emanuele Zattin wrote:
> Hmm... I see... that must be it. The fact is that the number of gaussians
> to fit is not constant, so how can I build a function that will fit them
> if params only accepts scalars?

Well, per set of parameters (height, center_x, center_y, width) you have
to create one gaussian-lambda and sum them up in the end.
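One way to put that suggestion into practice is sketched below. The helper
names multi_gaussian and fit_multi are not from the thread, and the flat
per-peak parameter layout is an assumption; the key point is that leastsq
wants a flat 1-D array of scalars, which is exactly what the "object too
deep" error is complaining about:

from numpy import exp, ravel, indices, asarray
from scipy.optimize import leastsq

def multi_gaussian(p, shape):
    """Sum of n gaussians; p is flat: (height, cx, cy, width) per peak."""
    x, y = indices(shape)
    total = 0.0
    for k in range(len(p) // 4):
        h, cx, cy, w = p[4*k:4*k+4]
        total = total + h*exp(-(((cx-x)/w)**2 + ((cy-y)/w)**2)/2)
    return total

def fit_multi(data, p0):
    """Fit n gaussians at once; p0 is the flat initial parameter array."""
    p0 = ravel(asarray(p0, dtype=float))
    residuals = lambda p: ravel(multi_gaussian(p, data.shape) - data)
    p, ier = leastsq(residuals, p0)
    return p

With the starting values from the thread, p0 could be built by interleaving
them peak by peak, for instance with something like
p0 = ravel(column_stack((obj_v, obj_x-obj_x[0]+2, obj_y-obj_y[0]+2,
ones(len(obj_x))))) -- the column_stack construction is my assumption, not
something the thread itself uses.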
> On 6/29/07, Christian K wrote:
>> Emanuele Zattin wrote:
>>> I have this optimization problem:
>>>
>>> this function returns the sum of some gaussians, given their parameters
>>> in arrays:
>>>
>>> def gaussian(height, center_x, center_y, width):
>>>     """Returns a gaussian function with the given parameters"""
>>>     width = float(width)
>>>     return lambda x,y:
>>>         sum(height*exp(-(((center_x-x)/width)**2+((center_y-y)/width)**2)/2))

gaussian() returns only a single peak 'object', so there is no reason to
sum here.

>>>
>>> this function tries to fit, given a starting image:
>>>
>>> def fitgaussian(data, obj_x, obj_y, obj_v):
>>>     """Returns (height, x, y, width)
>>>     the gaussian parameters of a 2D distribution found by a fit"""
>>>     #params = moments(data)
>>>     params = obj_v, obj_x-obj_x[0]+2, obj_y-obj_y[0]+2, ones(len(obj_x))
>>
>> params is a tuple of some objects which you don't tell us what they are,
>> and at least one ndarray (ones(...)). This is probably the error: params
>> has to be a list or array containing only scalars.
>>
>>> errorfunction = lambda p: ravel(gaussian(*p)(*indices(data.shape)) - data)

Here you could do the summing of the gaussian peaks. By hand it would look
like this:

gaussian(*p[:4])(*indices(data.shape)) + gaussian(*p[4:8])(*indices(data.shape)) + ...

It should be possible to find an automatic way, though.

Christian

From dominique.orban at gmail.com Fri Jun 29 11:10:39 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Fri, 29 Jun 2007 11:10:39 -0400
Subject: [SciPy-user] [Fwd: Re: Computing eigenvalues by trace minimization]
In-Reply-To: <4684FA94.3050407@iam.uni-stuttgart.de>
References: <4684CDC1.8080808@iam.uni-stuttgart.de> <4684DB04.3030609@ukr.net>
	<4684E202.1010707@iam.uni-stuttgart.de> <4684F6E4.3040000@ukr.net>
	<4684F958.3020504@iam.uni-stuttgart.de> <4684FA94.3050407@iam.uni-stuttgart.de>
Message-ID: <468520EF.10804@gmail.com>

Nils Wagner wrote:
>>>> from pysparse import *
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/local/lib64/python2.5/site-packages/pysparse/__init__.py",
> line 4, in <module>
>     from spmatrix import *
> ImportError: No module named spmatrix
>
> Any pointer on how to fix this problem would be appreciated.

Could it be related to the recent namespace change in Pysparse? Now you
have to say

    from pysparse import spmatrix

instead of

    import spmatrix

By the way, I didn't realize that Pysparse was available in Scipy. Is it
now an 'official' component?

Dominique

From dominique.orban at gmail.com Fri Jun 29 11:21:46 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Fri, 29 Jun 2007 11:21:46 -0400
Subject: [SciPy-user] Converting matrices from pysparse
In-Reply-To: <4684F811.2010908@iam.uni-stuttgart.de>
References: <4684F811.2010908@iam.uni-stuttgart.de>
Message-ID: <4685238A.4020105@gmail.com>

Nils Wagner wrote:
> Hi all,
>
> Just now I have installed pysparse via
> http://people.web.psi.ch/geus/pyfemax/download.html.
> It works fine for me!
>
> Is it possible to convert matrices from the pysparse-specific format
> into a format such that I can visualize them with pylab.spy?
>
> >>> type(A)
>

Hi Nils,

In Pysparse, it is much easier to deal with the linked-list format
(ll_mat) for matrix updates/visualization. The compressed sparse row
(csr) and sparse skyline (sss) formats are useful when it comes to
computation (e.g., csr is faster for matrix-vector products). In NLPy
(nlpy.sf.net), which is based on Pysparse, I wrote a sparsity pattern
visualizer for ll_mat matrices and Matplotlib.
This is how it goes:

import sparsetools
import pylab

M = sparsetools.rdm( 200, nnz = 250 )   # Create a random ll_mat
(a,h) = sparsetools.spymatll( M, color = True )
pylab.show()

I have been using Pysparse, Pylab and Python to combine efficient
optimization building blocks into an environment in which it is easy to
write optimization algorithms. If there is an interest, I'd be ready to
join efforts.

Dominique

From edschofield at gmail.com Fri Jun 29 12:11:27 2007
From: edschofield at gmail.com (Ed Schofield)
Date: Fri, 29 Jun 2007 17:11:27 +0100
Subject: [SciPy-user] Deleting sandbox.pysparse
Message-ID: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>

Hi everyone,

The pysparse snapshot in the sandbox is old and unmaintained, and I'd
like to remove it from the SVN tree. Is anyone using it?

-- Ed

From dominique.orban at gmail.com Fri Jun 29 12:24:03 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Fri, 29 Jun 2007 12:24:03 -0400
Subject: [SciPy-user] Deleting sandbox.pysparse
In-Reply-To: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
References: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
Message-ID: <46853223.1040607@gmail.com>

Ed Schofield wrote:
> Hi everyone,
>
> The pysparse snapshot in the sandbox is old and unmaintained, and I'd
> like to remove it from the SVN tree. Is anyone using it?

There were some messages on this list today from people using Pysparse.
Perhaps the maintainer of Pysparse (Daniel Wheeler I believe) should be
contacted to see if he would agree to maintain Pysparse as part of SciPy.
I think it would be a great feature.

Dominique

From edschofield at gmail.com Fri Jun 29 13:01:44 2007
From: edschofield at gmail.com (Ed Schofield)
Date: Fri, 29 Jun 2007 18:01:44 +0100
Subject: [SciPy-user] Deleting sandbox.pysparse
In-Reply-To: <46853223.1040607@gmail.com>
References: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
	<46853223.1040607@gmail.com>
Message-ID: <1b5a37350706291001s294f5f6eif8e96b103434599@mail.gmail.com>

On 6/29/07, Dominique Orban wrote:
> Ed Schofield wrote:
> > Hi everyone,
> >
> > The pysparse snapshot in the sandbox is old and unmaintained, and I'd
> > like to remove it from the SVN tree. Is anyone using it?
>
> There were some messages on this list today from people using Pysparse.
> Perhaps the maintainer of Pysparse (Daniel Wheeler I believe) should be
> contacted to see if he would agree to maintain Pysparse as part of SciPy.
> I think it would be a great feature.

I don't know whether we'd want to merge Pysparse into the SciPy tree. I
think the code would be difficult to integrate with the existing
scipy.sparse package (partly because they use different languages). And if
they lived separate lives in the scipy namespace we wouldn't really have
gained anything. In any case we'd want to start by removing the obsolete
cruft in the sandbox. ;)

-- Ed

From aisaac at american.edu Fri Jun 29 13:11:56 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 29 Jun 2007 13:11:56 -0400
Subject: [SciPy-user] Deleting sandbox.pysparse
In-Reply-To: <46853223.1040607@gmail.com>
References: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
	<46853223.1040607@gmail.com>
Message-ID:

On Fri, 29 Jun 2007, Dominique Orban apparently wrote:
> Pysparse. Perhaps the maintainer of Pysparse (Daniel
> Wheeler I believe) should be contacted to see if he would
> agree to maintain Pysparse as part of SciPy. I think it
> would be a great feature.

I am not currently using Pysparse.
But looking forward, that would be great!

Cheers,
Alan Isaac

From barrywark at gmail.com Fri Jun 29 13:33:24 2007
From: barrywark at gmail.com (Barry Wark)
Date: Fri, 29 Jun 2007 10:33:24 -0700
Subject: [SciPy-user] Compile fails for r3123 on OS X 10.4 (Intel)
Message-ID:

I've been unable to compile scipy on OS X 10.4 (Intel) from the recent
trunk. Scipy built on this machine as of r2708. The output from python
setup.py build is attached (scipy_build_output.txt). I have fftw3
installed via MacPorts (in /opt/local), and it appears that the build
finds it properly, but the build fails with an error:

building extension "scipy.fftpack._fftpack" sources
target build/src.macosx-10.3-fat-2.5/_fftpackmodule.c does not exist:
   Assuming _fftpackmodule.c was generated with "build_src --inplace" command.
error: '_fftpackmodule.c' missing

I would appreciate any advice or suggestions!

Thanks,
Barry

From millman at berkeley.edu Fri Jun 29 14:18:35 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Fri, 29 Jun 2007 11:18:35 -0700
Subject: [SciPy-user] Deleting sandbox.pysparse
In-Reply-To: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
References: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
Message-ID:

On 6/29/07, Ed Schofield <edschofield at gmail.com> wrote:
> The pysparse snapshot in the sandbox is old and unmaintained, and I'd
> like to remove it from the SVN tree. Is anyone using it?

I think it is a good idea to remove the old cruft, so you have my vote to
remove pysparse. I haven't been using it, though, so I don't know how much
my vote should count.

Should people currently using the pysparse code from the sandbox just
start using the code from the SourceForge page?

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From edschofield at gmail.com Fri Jun 29 19:45:07 2007
From: edschofield at gmail.com (Ed Schofield)
Date: Sat, 30 Jun 2007 00:45:07 +0100
Subject: [SciPy-user] Deleting sandbox.pysparse
In-Reply-To:
References: <1b5a37350706290911y1a1932b2had40fded7379c9a6@mail.gmail.com>
Message-ID: <1b5a37350706291645l70ff24f4vd3dc105e0b168e41@mail.gmail.com>

On 6/29/07, Jarrod Millman <millman at berkeley.edu> wrote:
> Should people currently using the pysparse code from the sandbox just
> start using the code from the SourceForge page?

Yes -- if such creatures exist...

-- Ed

From angel.yanguas at gmail.com Sat Jun 30 20:29:12 2007
From: angel.yanguas at gmail.com (Angel Yanguas-Gil)
Date: Sat, 30 Jun 2007 19:29:12 -0500
Subject: [SciPy-user] problems with scipy in gentoo
Message-ID: <11f4deca0706301729g123b6200xc707b1ba240b40d2@mail.gmail.com>

Hi,

the scipy package in Gentoo is currently masked and cannot be installed.
The problem is not in scipy but in one of its dependencies, which, at
least in my case, is blas. Does anybody know if it is possible to install
scipy with an alternative library?

Thanks,
anglyan
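For BLAS-related questions like this one, a useful first step is to ask an
existing numpy/scipy build which BLAS/LAPACK it was actually linked
against. This is a sketch, not from the thread; show_config existed in the
numpy and scipy of that era, but the exact sections printed vary by version
and build:

# Sketch: inspect the BLAS/LAPACK configuration of an installed build.
import numpy
numpy.show_config()    # prints blas/lapack info sections

import scipy
scipy.show_config()

When building from source, numpy's distutils could be pointed at an
alternative library through a site.cfg file or the BLAS/LAPACK environment
variables; whether Gentoo's masked ebuild honors those mechanisms is not
something the thread establishes.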