From scipy at zunzun.com Sun May 2 17:44:00 2004 From: scipy at zunzun.com (James R. Phillips) Date: Sun, 2 May 2004 17:44:00 -400 Subject: [SciPy-user] Unable to use weave in Enthought 0.3 Win binary install Message-ID: <40956ba02e07f7.15773117@mercury.sabren.com> Has anyone had success using weave in the Enthought 0.3 Winows binary install? I get linker errors whether using the Enthought-supplied Mingw32 or my MSVC++ 6.0 compiler. My test I have been through several install-reboot-test cycles on two different XP boxen. Arrgh. If I knew the magic chants to get weave working with Enthought's Mingw32 it would be sufficient. James Phillips http://zunzun.com From danny_shevitz at yahoo.com Mon May 3 19:07:41 2004 From: danny_shevitz at yahoo.com (danny shevitz) Date: Mon, 3 May 2004 16:07:41 -0700 (PDT) Subject: [SciPy-user] problems with stats In-Reply-To: <408EDC12.9080005@ucsd.edu> Message-ID: <20040503230741.45620.qmail@web41013.mail.yahoo.com> Robert, thanks for the response. I haven't forgotten you wrote, I just haven't had a chance to really dig into what you wrote. Now I have. So I have been using the frequentist approach to constructing the error bars and I understand that. It involves taking some linear fractional transformations of F statistics. After reading your letter, I was able to compute the same error bounds directly. The key was the bdtri function as you mentioned. pLower = bdtri(n-1, N, .95) pUpper = bdtri(n, N, .05) for example gives the .90 confidence interval. This is the pair of functions I needed. The n-1 in pLower is because the value n is considered in the tail for the purposes of computing the confidences. I didn't have a prayer in scipy.stats because bdtri is not wrapped. Its totally different than ppf, which still requires the unknown probability in order to construct the distribution at all. So I can get the frequentist result directly, now I'm curious about your Bayesian analysis. One of the uniform or improper prior gives the same answer. BTW, I'm also curious if you left off the combinatoric factor (n choose r) in your description. Can you give me a little of the rational for choosing the improper prior? thanks, Danny --- Robert Kern wrote: > danny shevitz wrote: > > > The short version of the problem that I was trying to solve is: > Given N > > trials and n events, what is the specified confidence interval for > the > > binomial probability p. > > > > The hypothetical argument call should therefore take 2 integers and > a > > probability, e.g. f(10, 10000000, .95) The third number is a > > confidence, not the probability in the binomial distribution. Of > course > > there would have to be a pair of such functions ( one for the lower > > bound, and one for the upper). > > What you are looking for is the beta distribution. Here's the > Bayesian > analysis: > > We have n Bernoulli trials and r successes with an unknown chance > (theta) of success on any given trial. The likelihood is > > P(n,r|theta) = theta**r * (1-theta)**(n-r) > > The posterior probability is > > P(theta|n,r) = A * P(n,r|theta) * P(theta) > > A is a normalizing constant and P(theta) is your prior probability > for > theta. For now, let's use a minimally informative prior in order to > make > the fewest assumptions. > > P(theta) o< 1/(theta*(1-theta)) > > The o< symbol is the proportionality symbol. This prior is improper > and > cannot be normalized as it stands. 
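To make the bdtri recipe above concrete, here is a minimal sketch of the frequentist interval Danny describes (a sketch only -- it assumes scipy.special.bdtri takes (k, N, y) and returns the p for which the binomial CDF through k equals y, as in his example; check the docstring of the installed SciPy):

    from scipy import special

    def binom_interval(n, N, conf=0.90):
        # n observed events in N trials; two-sided interval with coverage conf
        alpha = (1.0 - conf) / 2.0
        # n-1 in the lower bound because the observed n is counted in the tail
        p_lower = special.bdtri(n - 1, N, 1.0 - alpha)
        p_upper = special.bdtri(n, N, alpha)
        return p_lower, p_upper

    print binom_interval(10, 10000000)   # the example figures used in this thread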
When we put it together with the > likelihood, as long as r does not equal either 0 or n, the posterior > will come out normalized. There are good reasons which I won't go > into > now to use this instead of P(theta)=1, but you can use that instead > and > redo the math that leads to the following (easy!). > > So, P(theta|n,r) = A * theta**(r-1) * (1-theta)**(n-r-1) > > We can recognize this as the Beta distribution, Beta(r,n-r) (or > Beta(r+1,n-r+1) if you're queasy about improper priors). > > special.bdtri(10, 10000000, 0.95) will give you the one-sided 95% > credible interval (like a confidence interval, only Bayesian). Since > this distribution isn't symmetric, I'm not entirely sure how to get > the > highest posterior density (HPD) credible interval around the peak. > And > I'm not entirely sure how accurate the function is with those extreme > > inputs. > > This analysis is lifted straight from Chapter 6 of the excellent book > > _Probability Theory: The Logic of Science_ by E. T. Jaynes (Cambridge > > Press, 2003). > > > Like I said, what I was doing, wasn't particularly well thought > out. I > > was just playing around, and using the function wrong anyway... > > > > BTW I guess I have no idea what binom.ppf is supposed to do then. > Why > > do you need two probilities? Is one a confidence and the other the > > probability. > > The random variable in the binomial distribution is the number of > successes given the probability p of individual success and the > number > of trials. The point mass function would invert for n, the number of > successes that, by summing the probability masses for each number of > successes between 0 and n, gives you 0.95 (for example). So yes, one > input is the confidence level, one is the probability of individual > success, and you also need the number of trials. > > > Also in case it wasn't obvious, I'm not a statistician by training > :-) > > > > D > > -- > Robert Kern > rkern at ucsd.edu > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user __________________________________ Do you Yahoo!? Win a $20,000 Career Makeover at Yahoo! HotJobs http://hotjobs.sweepstakes.yahoo.com/careermakeover From danny_shevitz at yahoo.com Mon May 3 19:10:54 2004 From: danny_shevitz at yahoo.com (danny shevitz) Date: Mon, 3 May 2004 16:10:54 -0700 (PDT) Subject: [SciPy-user] problems with stats In-Reply-To: <20040503230741.45620.qmail@web41013.mail.yahoo.com> Message-ID: <20040503231054.92598.qmail@web41012.mail.yahoo.com> sorry, for the post, I meant to send it to Robert not the group. Danny __________________________________ Do you Yahoo!? Win a $20,000 Career Makeover at Yahoo! 
HotJobs http://hotjobs.sweepstakes.yahoo.com/careermakeover From nwagner at mecha.uni-stuttgart.de Tue May 4 02:26:33 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 04 May 2004 08:26:33 +0200 Subject: [SciPy-user] scipy.test() failed with latest cvs Message-ID: <40973799.8050207@mecha.uni-stuttgart.de> ====================================================================== ERROR: check_setelement (scipy.sparse.Sparse.test_Sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/sparse/tests/test_Sparse.py", line 67, in check_setelement a = self.datsp - self.datsp File "/usr/lib/python2.3/site-packages/scipy/sparse/Sparse.py", line 379, in __sub__ return csc_matrix(c,(rowc,ptrc),M=M,N=N) File "/usr/lib/python2.3/site-packages/scipy/sparse/Sparse.py", line 294, in __init__ self._check() File "/usr/lib/python2.3/site-packages/scipy/sparse/Sparse.py", line 310, in _check if (nzmax>0) and (max(self.rowind[:nnz]) >= M): ValueError: min() or max() arg is an empty sequence ---------------------------------------------------------------------- Ran 934 tests in 2.879s FAILED (errors=1) From karthik at james.hut.fi Tue May 4 04:10:53 2004 From: karthik at james.hut.fi (Karthikesh Raju) Date: Tue, 4 May 2004 11:10:53 +0300 Subject: [SciPy-user] concatenation of arrays woes !! Message-ID: Hi All, is it possible to concatenate arrays along the column, A[:,1:5] = concatenate(a,b,c) a is 2x1, b 2x1, c 1x1 i have been trying to convert the following matlab code and this is my stumbling block for several months :( K = 3 CL = 5 L = 2 d -- (K,L) in the range (0,CL) codes -- (K,CL) window = 1 s = codes' output S -(CL*window,K*L*(window+1)) matrix for k = 1:K for l = 1:L S(:,1+(window+1)*(l-1)+(window+1)*L*(k-1)) = [s(CL-d(k,l)+1:CL,k); zeros(CL*(window-1),1);zeros(CL-d(k,l),1)] S(:,(window+1)*1+(window+1)*L*(k-1)) = [zeros(d(k,l),1);zeros(CL*(window-1),1);s(1:CL-d(k,l),k)] for n=2:window S(:,n+(window+1)*(l-1)+(numbits+1)*L*(k-1)) = [zeros(d(k,l),1); zeros(CL*(n-2),1); s(:,k);zeros((window-n)*CL+CL-d(k,l),1)] end end end -------------------- This module has been my stumbling block for the last 2 months, had no problems with matlab, here i have all kinds of concatination woes. Please give me pointers on how to achieve such concatenations, thanks a lot regards karthik ----------------------------------------------------------------------- Karthikesh Raju, email: karthik at james.hut.fi Researcher, http://www.cis.hut.fi/karthik Helsinki University of Technology, Tel: +358-9-451 5389 Laboratory of Comp. & Info. Sc., Fax: +358-9-451 3277 Department of Computer Sc., P.O Box 5400, FIN 02015 HUT, Espoo, FINLAND ----------------------------------------------------------------------- From nadavh at visionsense.com Tue May 4 06:05:30 2004 From: nadavh at visionsense.com (Nadav Horesh) Date: Tue, 4 May 2004 12:05:30 +0200 Subject: [SciPy-user] Re: xplt documentation Message-ID: <07C6A61102C94148B8104D42DE95F7E86DECEA@exchange2k.envision.co.il> I've made some testing (not exhaustive) of matplotlib under IDLE and pycrust on win32 and linux platforms. matplotlib seems to be more tolerant then xplt, especially when show() is called instead of using the interactive mode. The linux environment seems to work a bit better then the win32. 
There is a project (not very active nowadays) called glplot (glplot.sf.net) which fills a shortcoming of gnuplot of generating false colour map of large matrices (gnuplot is too slow here). Could the solution be a library/utility that is spawned as a separate process and use some (simple) interprocess communication (ala gnuplot/glplot)? Is there a portable alternative way to run a multiple message-loops under the same process? Nadav. From Bob.Cowdery at CGI-Europe.com Tue May 4 05:20:37 2004 From: Bob.Cowdery at CGI-Europe.com (Bob.Cowdery at CGI-Europe.com) Date: Tue, 4 May 2004 10:20:37 +0100 Subject: [SciPy-user] Help with array functions Message-ID: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> Hi all, Can anyone help me convert this loop to an array function. I can't have loops in my signal processing as they take far too much processing power. This is a fragment from a speech processor using Ferkins formula. Essentially the gain array (m_procGain) which is 2048 long (m_sampl_sz) needs its nth element processing into it's (n-1)th element. This is eventually used to multiply the samples in m_complexout, the first 2048 elements of this are used. Attack is a scaler currently 0.4. peak = abs(self.m_complexout[argmax(abs(self.m_complexout[:self.m_sampl_sz]))]) i = arange(1,self.m_sampl_sz) for x in i: if abs(self.m_complexout[x-1]) > 0: self.m_procGain[x] = ( (self.m_procGain[x-1] * (1 - attack)) + ((attack * peak) / abs(self.m_complexout[x-1])) ) if self.m_procGain[x] > level: self.m_procGain[x] = level else: self.m_procGain[x] = 1 Thanks for any assistance. Bob Bob Cowdery CGI Senior Technical Architect +44(0)1438 791517 Mobile: +44(0)7771 532138 bob.cowdery at cgi-europe.com *** Confidentiality Notice *** Proprietary/Confidential Information belonging to CGI Group Inc. and its affiliates may be contained in this message. If you are not a recipient indicated or intended in this message (or responsible for delivery of this message to such person), or you think for any reason that this message may have been addressed to you in error, you may not use or copy or deliver this message to anyone else. In such case, you should destroy this message and are asked to notify the sender by reply email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Tue May 4 08:49:33 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 4 May 2004 14:49:33 +0200 (CEST) Subject: [SciPy-user] Re: xplt documentation In-Reply-To: <07C6A61102C94148B8104D42DE95F7E86DECEA@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E86DECEA@exchange2k.envision.co.il> Message-ID: Hi, On Tue, 4 May 2004, Nadav Horesh wrote: > I've made some testing (not exhaustive) of matplotlib under IDLE and > pycrust on win32 and linux platforms. matplotlib seems to be more > tolerant then xplt, Can you be a bit more explicit, i.e. give examples ? > especially when show() is called instead of > using the interactive mode. Personally I think that this is one of the big strengths of scipy.xplt. For example, when debugging I use a lot from IPython.Shell import IPythonShellEmbed ipshell = IPythonShellEmbed() ipshell() The ipshell() can come at any point of your code and all variables are available. So a quick plg(some_array) or pli(some_2d_array) often allows to find bugs much more quickly than anything else. > The linux environment seems to work a > bit better then the win32. 
> > There is a project (not very active nowadays) called glplot > (glplot.sf.net) which fills a shortcoming of gnuplot of > generating false colour map of large matrices (gnuplot is too slow here). There is a patch on sourceforge for gnuplot which allows for bitmap images (ie. matrices). I once tried it out and it is very nice. It might take a little time until this gets integrated as the gnuplot team seems to be a bit busy with after release issues. If you really want to go for large matrices have a look at MayaVi, http://mayavi.sourceforge.net/ In particular http://mayavi.sourceforge.net/docs/guide/x967.html#TOOLS Best, Arnd From pajer at iname.com Tue May 4 09:26:25 2004 From: pajer at iname.com (Gary Pajer) Date: Tue, 4 May 2004 09:26:25 -0400 Subject: [SciPy-user] concatenation of arrays woes !! References: Message-ID: <005b01c431db$6a244fe0$01fd5644@playroom> ----- Original Message ----- From: "Karthikesh Raju" To: Sent: Tuesday, May 04, 2004 4:10 AM Subject: [SciPy-user] concatenation of arrays woes !! > > Hi All, > > is it possible to concatenate arrays along the column, > > A[:,1:5] = concatenate(a,b,c) > a is 2x1, b 2x1, c 1x1 > > i have been trying to convert the following matlab code and this is my > stumbling block for several months :( > > K = 3 > CL = 5 > L = 2 > d -- (K,L) in the range (0,CL) > codes -- (K,CL) > window = 1 > s = codes' > > output S -(CL*window,K*L*(window+1)) matrix > > for k = 1:K > for l = 1:L > S(:,1+(window+1)*(l-1)+(window+1)*L*(k-1)) = > [s(CL-d(k,l)+1:CL,k); zeros(CL*(window-1),1);zeros(CL-d(k,l),1)] > S(:,(window+1)*1+(window+1)*L*(k-1)) = > [zeros(d(k,l),1);zeros(CL*(window-1),1);s(1:CL-d(k,l),k)] > for n=2:window > S(:,n+(window+1)*(l-1)+(numbits+1)*L*(k-1)) = > [zeros(d(k,l),1); zeros(CL*(n-2),1); > s(:,k);zeros((window-n)*CL+CL-d(k,l),1)] > end > end > end > > -------------------- > > This module has been my stumbling block for the last 2 months, had no > problems with matlab, here i have all kinds of concatination woes. Please > give me pointers on how to achieve such concatenations, > > thanks a lot > > regards > karthik I'm still not sure what you are trying to achieve. If you post a minimal contrived example of what you are trying to do instead of a clip from the middle of your code it would be easier to figure out. But have you explored r_[ ] and c_[ ] ? ('c' for column, 'r' for row) These constructions are designed to aid concatenation: >>> from scipy import * >>> x=array([1,2]) >>> y=array([3,4]) >>> c_[x,y] array([1, 2, 3, 4]) >>> c_[x[:,NewAxis],y[:,NewAxis]] array([[1, 3], [2, 4]]) >>> r_[x[:,NewAxis],y[:,NewAxis]] array([[1], [2], [3], [4]]) hth, gary From jdhunter at ace.bsd.uchicago.edu Tue May 4 09:31:55 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 04 May 2004 08:31:55 -0500 Subject: [SciPy-user] Re: xplt documentation In-Reply-To: (Arnd Baecker's message of "Tue, 4 May 2004 14:49:33 +0200 (CEST)") References: <07C6A61102C94148B8104D42DE95F7E86DECEA@exchange2k.envision.co.il> Message-ID: >>>>> "Arnd" == Arnd Baecker writes: Arnd> There is a patch on sourceforge for gnuplot which allows for Arnd> bitmap images (ie. matrices). I once tried it out and it is Arnd> very nice. It might take a little time until this gets Arnd> integrated as the gnuplot team seems to be a bit busy with Arnd> after release issues. 
Arnd> If you really want to go for large matrices have a look at Arnd> MayaVi, http://mayavi.sourceforge.net/ In particular Arnd> http://mayavi.sourceforge.net/docs/guide/x967.html#TOOLS Also the fairly new imshow in matplotlib plots grayscale and pseudocolor numeric/numarray arrays. This replaces the painfully slow pcolor which was unusable except for small arrays. The guts of imshow are implemented entirely in numeric and extension code and should handle large arrays efficiently. If you encounter performance issues, I'd like to know about it! Right now the only false colormap is ColormapJet (see examples/pcolor_demo2.py) but you can define your own by subclassing matplotlib.colors.Colormap. JDH From eric at enthought.com Tue May 4 11:03:18 2004 From: eric at enthought.com (eric jones) Date: Tue, 04 May 2004 10:03:18 -0500 Subject: [SciPy-user] Help with array functions In-Reply-To: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> References: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> Message-ID: <4097B0B6.50006@enthought.com> Bob.Cowdery at CGI-Europe.com wrote: > Hi all, > > Can anyone help me convert this loop to an array function. I can't > have loops in my signal processing as they take far too much > processing power. This is a fragment from a speech processor using > Ferkins formula. Essentially the gain array (m_procGain) which is 2048 > long (m_sampl_sz) needs its nth element processing into it's (n-1)th > element. This is eventually used to multiply the samples in > m_complexout, the first 2048 elements of this are used. Attack is a > scaler currently 0.4. > > peak = > abs(self.m_complexout[argmax(abs(self.m_complexout[:self.m_sampl_sz]))]) > i = arange(1,self.m_sampl_sz) > for x in i: > if abs(self.m_complexout[x-1]) > 0: > self.m_procGain[x] = ( (self.m_procGain[x-1] * > (1 - attack)) + ((attack * peak) / abs(self.m_complexout[x-1])) ) > if self.m_procGain[x] > level: > self.m_procGain[x] = level > else: > self.m_procGain[x] = 1 > The general ideas of uisng Numeric in Python is to avoid loops, even at the expense of sometimes doing extra computations. In this case, the signal processing calculations can be done for every entry in the array (except the first). The conditions checked for in the "if" statements can then be filtered out in subsequent array processing steps. Here is some (totally untested) code that should give you some ideas: gain = self.m_procGain abs_complexout = abs(self.m_complexout) # Do singal processing calculation for all elements in array gain[1:] = ( (gain[:-1] * (1 - attack)) + ((attack * peak) / abs_complexout[:-1]) ) # clip the upper values to some level. gain = choose(gain > level, (gain, level)) # if complexout is less than 0, set gain to 1. gain = choose(abs_complexout < 0, (gain, 1)) self.m_procGain = gain With numarray I think it would look something like this (some numarray user correct me if I'm wrong): gain = self.m_procGain abs_complexout = abs(self.m_complexout) # Do singal processing calculation for all elements in array gain[1:] = ( (gain[:-1] * (1 - attack)) + ((attack * peak) / abs_complexout[:-1]) ) # clip the upper values to some level. gain[gain > level] = level # if complexout is less than 0, set gain to 1. gain[abs_complexout < 0] = 1 # unnecessary because it is a reference to the same array and all operations are done using # indexing on the LHS. self.m_procGain = gain The numarray code is easier to read. It also should be more efficient (if not now, then eventually). 
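As a tiny, self-contained illustration of the two clipping idioms being compared here (a sketch only; the choose() form is the Numeric idiom from the snippet above, and the masked assignment is the numarray-style equivalent, which plain Numeric does not support):

    from Numeric import array, choose

    gain = array([0.5, 1.2, 0.9, 3.0])
    level = 1.0

    # Numeric idiom: pick 'level' wherever the condition is true
    clipped = choose(gain > level, (gain, level))   # -> [0.5, 1.0, 0.9, 1.0]

    # numarray-style idiom (in-place, using a boolean mask as the index):
    # gain[gain > level] = level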
I really like the arrays as indices to arrays... eric > Thanks for any assistance. > > Bob > > > Bob Cowdery > CGI Senior Technical Architect > +44(0)1438 791517 > Mobile: +44(0)7771 532138 > bob.cowdery at cgi-europe.com > > > > > **** Confidentiality Notice **** Proprietary/Confidential > Information belonging to CGI Group Inc. and its affiliates > may be contained in this message. If you are not a recipient > indicated or intended in this message (or responsible for > delivery of this message to such person), or you think for > any reason that this message may have been addressed to you > in error, you may not use or copy or deliver this message > to anyone else. In such case, you should destroy this > message and are asked to notify the sender by reply email. > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From perry at stsci.edu Tue May 4 11:16:34 2004 From: perry at stsci.edu (Perry Greenfield) Date: Tue, 4 May 2004 11:16:34 -0400 Subject: [SciPy-user] Help with array functions In-Reply-To: <4097B0B6.50006@enthought.com> Message-ID: Eric Jones wrote: > Bob.Cowdery at CGI-Europe.com wrote: > > The general ideas of uisng Numeric in Python is to avoid loops, even at > the expense of sometimes doing extra computations. In this case, the > signal processing calculations can be done for every entry in the array > (except the first). The conditions checked for in the "if" statements > can then be filtered out in subsequent array processing steps. > > > Here is some (totally untested) code that should give you some ideas: > > gain = self.m_procGain > abs_complexout = abs(self.m_complexout) > > # Do singal processing calculation for all elements in array > gain[1:] = ( (gain[:-1] * (1 - attack)) + > ((attack * peak) / abs_complexout[:-1]) ) > Does this do what is needed? I think the point he was making was that gain[i] depends on gain[i-1] which in turn depends on gain[i-2] and so on. The expression above doesn't capture that since from his code he doesn't show any initial value for gain (implicitly the gain[0] must exist). So using an array expression which uses a shifted gain array in its input won't work. This is the sort of problem that doesn't fit the array model very well unless there are some primitive array operations that happen to do the iterative thing needed (sometimes the accumulate or convolve functions can be used for this thing, but I can't see any obvious existing function. Seems like a candidate for a C extension but maybe I'm missing something. Perry > From Bob.Cowdery at CGI-Europe.com Tue May 4 11:36:16 2004 From: Bob.Cowdery at CGI-Europe.com (Bob.Cowdery at CGI-Europe.com) Date: Tue, 4 May 2004 16:36:16 +0100 Subject: [SciPy-user] Help with array functions Message-ID: <9B40DB5FBF7D2048AD79C2A91F862A51773001@STEVAPPS05.Stevenage.CGI-Europe.com> Eric Jones wrote: > Bob.Cowdery at CGI-Europe.com wrote: > > The general ideas of uisng Numeric in Python is to avoid loops, even > at > the expense of sometimes doing extra computations. In this case, the > signal processing calculations can be done for every entry in the array > (except the first). The conditions checked for in the "if" statements > can then be filtered out in subsequent array processing steps. 
> > > Here is some (totally untested) code that should give you some ideas: > > gain = self.m_procGain > abs_complexout = abs(self.m_complexout) > > # Do singal processing calculation for all elements in array > gain[1:] = ( (gain[:-1] * (1 - attack)) + > ((attack * peak) / abs_complexout[:-1]) ) > Perry Greenfield wrote: > Does this do what is needed? I think the point he was making was that gain[i] depends on gain[i-1] which in turn depends on gain[i-2] > and so on. The expression above doesn't capture that since from his code he doesn't show any initial value for gain (implicitly the > gain[0] must exist). So using an array expression which uses a shifted gain array in its input won't work. > This is the sort of problem that doesn't fit the array model very well unless there are some primitive array operations that happen to > do the iterative thing needed (sometimes the accumulate or convolve functions can be used for this thing, but I can't see any obvious > existing function. Seems like a candidate for a C extension but maybe I'm missing > something. That is correct, my basic problem was how to use the same array in input and output which I guess I should have mentioned. The gain array is initialised to zeros. Bob _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user *** Confidentiality Notice *** Proprietary/Confidential Information belonging to CGI Group Inc. and its affiliates may be contained in this message. If you are not a recipient indicated or intended in this message (or responsible for delivery of this message to such person), or you think for any reason that this message may have been addressed to you in error, you may not use or copy or deliver this message to anyone else. In such case, you should destroy this message and are asked to notify the sender by reply email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at enthought.com Tue May 4 11:39:45 2004 From: eric at enthought.com (eric jones) Date: Tue, 04 May 2004 10:39:45 -0500 Subject: [SciPy-user] Help with array functions In-Reply-To: References: Message-ID: <4097B941.1060809@enthought.com> Yikes. Your right! eric Perry Greenfield wrote: >Eric Jones wrote: > > >>Bob.Cowdery at CGI-Europe.com wrote: >> >>The general ideas of uisng Numeric in Python is to avoid loops, even at >>the expense of sometimes doing extra computations. In this case, the >>signal processing calculations can be done for every entry in the array >>(except the first). The conditions checked for in the "if" statements >>can then be filtered out in subsequent array processing steps. >> >> >>Here is some (totally untested) code that should give you some ideas: >> >> gain = self.m_procGain >> abs_complexout = abs(self.m_complexout) >> >> # Do singal processing calculation for all elements in array >> gain[1:] = ( (gain[:-1] * (1 - attack)) + >> ((attack * peak) / abs_complexout[:-1]) ) >> >> >> >Does this do what is needed? I think the point he was making >was that gain[i] depends on gain[i-1] which in turn depends >on gain[i-2] and so on. The expression above doesn't capture that >since from his code he doesn't show any initial value for gain >(implicitly the gain[0] must exist). So using an array expression >which uses a shifted gain array in its input won't work. 
> >This is the sort of problem that doesn't fit the array model >very well unless there are some primitive array operations >that happen to do the iterative thing needed (sometimes >the accumulate or convolve functions can be used for this >thing, but I can't see any obvious existing function. Seems >like a candidate for a C extension but maybe I'm missing >something. > >Perry > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From karthik at james.hut.fi Tue May 4 12:19:55 2004 From: karthik at james.hut.fi (Karthikesh Raju) Date: Tue, 4 May 2004 19:19:55 +0300 Subject: [SciPy-user] concatenation of arrays woes !! In-Reply-To: <005b01c431db$6a244fe0$01fd5644@playroom> References: <005b01c431db$6a244fe0$01fd5644@playroom> Message-ID: Hi Gary and others, thankx for your reply. i had managed to break the procedure into bits and pieces and get the work done. i have attached the file. Is there some way by which we can efficiently do something like s2 = s[0:CL-d[k,l],k] s3 = concatenate(z1,z2,s2[:,NewAxis]) stemp[:,NewAxis] = s3 is there any means by which i can avoid having s2 as a variable, and by using the NewAxis during striding process itself. i have just built this matrix now, row by row :), any way to improve this! With warm regards karthik ----------------------------------------------------------------------- Karthikesh Raju, email: karthik at james.hut.fi Researcher, http://www.cis.hut.fi/karthik Helsinki University of Technology, Tel: +358-9-451 5389 Laboratory of Comp. & Info. Sc., Fax: +358-9-451 3277 Department of Computer Sc., P.O Box 5400, FIN 02015 HUT, Espoo, FINLAND ----------------------------------------------------------------------- On Tue, 4 May 2004, Gary Pajer wrote: > ----- Original Message ----- > From: "Karthikesh Raju" > To: > Sent: Tuesday, May 04, 2004 4:10 AM > Subject: [SciPy-user] concatenation of arrays woes !! > > > > > > Hi All, > > > > is it possible to concatenate arrays along the column, > > > > A[:,1:5] = concatenate(a,b,c) > > a is 2x1, b 2x1, c 1x1 > > > > i have been trying to convert the following matlab code and this is my > > stumbling block for several months :( > > > > K = 3 > > CL = 5 > > L = 2 > > d -- (K,L) in the range (0,CL) > > codes -- (K,CL) > > window = 1 > > s = codes' > > > > output S -(CL*window,K*L*(window+1)) matrix > > > > for k = 1:K > > for l = 1:L > > S(:,1+(window+1)*(l-1)+(window+1)*L*(k-1)) = > > [s(CL-d(k,l)+1:CL,k); > zeros(CL*(window-1),1);zeros(CL-d(k,l),1)] > > S(:,(window+1)*1+(window+1)*L*(k-1)) = > > [zeros(d(k,l),1);zeros(CL*(window-1),1);s(1:CL-d(k,l),k)] > > for n=2:window > > S(:,n+(window+1)*(l-1)+(numbits+1)*L*(k-1)) = > > [zeros(d(k,l),1); zeros(CL*(n-2),1); > > s(:,k);zeros((window-n)*CL+CL-d(k,l),1)] > > end > > end > > end > > > > -------------------- > > > > This module has been my stumbling block for the last 2 months, had no > > problems with matlab, here i have all kinds of concatination woes. Please > > give me pointers on how to achieve such concatenations, > > > > thanks a lot > > > > regards > > karthik > > > > I'm still not sure what you are trying to achieve. If you post a minimal > contrived example of what you are trying to do instead of a clip from the > middle of your code it would be easier to figure out. > > But have you explored r_[ ] and c_[ ] ? 
('c' for column, 'r' for row) > These constructions are designed to aid concatenation: > > >>> from scipy import * > >>> x=array([1,2]) > >>> y=array([3,4]) > > >>> c_[x,y] > array([1, 2, 3, 4]) > > >>> c_[x[:,NewAxis],y[:,NewAxis]] > array([[1, 3], > [2, 4]]) > > >>> r_[x[:,NewAxis],y[:,NewAxis]] > array([[1], > [2], > [3], > [4]]) > > > hth, > gary > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > -------------- next part -------------- from numarray import * K = 3 L = 2 CL = 5 codes = arange(1,16,shape=(K,CL)) codes.transpose() s = array(codes, copy=1) print s d = ones((K,L)) window = 1 S = zeros([CL*window,K*L*(window+1)]) for k in range(K): for l in range(L): print "--"*30 + "\n" # first row #-------------------------------------------------- stemp = S[:,(window+1)*(l)+(window+1)*L*(k)] s2 = s[CL-d[k,l]:CL,k] z1 = zeros([CL*(window-1),1]) z2 = zeros([CL-d[k,l],1]) if CL*(window-1) == 0: s3 = concatenate((s2[:,NewAxis],z2)) else: s3 = concatenate((s2[:,NewAxis],z1,z2)) stemp[:,NewAxis] = s3 #-------------------------------------------------- # # second row #-------------------------------------------------- stemp = S[:,(window+1)*l+window+(window+1)*L*k] z1 = zeros([d[k,l],1]); z2 = zeros([CL*(window-1),1]) s2 = s[0:CL-d[k,l],k] if CL*(window-1) == 0: s3 = concatenate((z1,s2[:,NewAxis])) else: s3 = concatenate((z1,z2,s2[:,NewAxis])) stemp[:,NewAxis] = s3 #-------------------------------------------------- # final push for w in range(0,window-1): stemp = S[:,w+1+(window+1)*l+(window+1)*L*k] z1 = zeros([d[k,l],1]) z2 = zeros([CL*w,1]) s2 = s[:,k] z3 = zeros([(window-(w+2))*CL+CL-d[k,l],1]) if CL*w == 0: s3 = concatenate((z1,s2[:,NewAxis],z3)) else: s3 = concatenate((z1,z2,s2[:,NewAxis],z3)) print s3 stemp[:,NewAxis] = s3 print S print S.shape From eric at enthought.com Tue May 4 12:31:52 2004 From: eric at enthought.com (eric jones) Date: Tue, 04 May 2004 11:31:52 -0500 Subject: [SciPy-user] Help with array functions In-Reply-To: <4097B941.1060809@enthought.com> References: <4097B941.1060809@enthought.com> Message-ID: <4097C578.1090207@enthought.com> So, try two... I don't know of a pure Python solution either. My next option is usually playing around in weave. Here is a weave snippet that implements the algorithm in C. It runs, but just like the last one is untested. Given the track record here, a decent test is advisable... :-) Here is the output from my cursory check to see if the code would work: C:\temp>wv.py weave results: [ 0. 0.8 0.96 0.992 0.9984 0.99968 0.999936 0.9999872 0.99999744 1.] python results: [ 0. 0.8 0.96 0.992 0.9984 0.99968 0.999936 0.9999872 0.99999744 1.] C time: 0.00707567096418 Python time: 0.316835262617 weave vs. Python speedup: 44.7781226997 This is using gcc as the compiler in weave and 2048 samples in a signal. If you drop to 256, the improvement is cut in half. 
Now the code: # test.py from scipy import zeros, ones, Float64, arange from scipy import stats import weave def python_algorithm(gain, complexout, attack, peak, level, samples): # gain is changed *inplace* i = arange(1,samples) for x in i: if abs(complexout[x-1]) > 0: gain[x] = ( (gain[x-1] * (1.0 - attack)) + ((attack * peak) / abs(complexout[x-1])) ) if gain[x] > level: gain[x] = level else: gain[x] = 1 def weave_algorithm(gain, complexout, attack, peak, level, samples): # gain is changed *inplace* code = """ for (int x=1; x < samples; x++) { if (abs(complexout[x-1]) > 0.0) { gain[x] = ( (gain[x-1] * (1.0 - attack)) + ((attack * peak) / abs(complexout[x-1])) ); if (gain[x] > level) gain[x] = level; } else gain[x] = 1; } """ weave.inline(code, ['gain', 'complexout', 'samples', 'attack', 'peak', 'level'], compiler='gcc') # some default parameters... samples = 2048 gain = zeros(samples,typecode=Float64) complexout = stats.choice([-1.0,0.0,1.0],size=samples) attack = .8 peak = 1.0 level = 1.0 # equivalence tests weave_algorithm(gain, complexout, attack, peak, level, samples) print 'weave results:', gain[:10] gain = zeros(samples,typecode=Float64) python_algorithm(gain, complexout, attack, peak, level, samples) print 'python results:', gain[:10] # speed tests import time N = 100 t1 = time.clock() for it in range(N): weave_algorithm(gain, complexout, attack, peak, level, samples) t2 = time.clock() ct = t2 - t1 print 'C time:', ct t1 = time.clock() for it in range(N): python_algorithm(gain, complexout, attack, peak, level, samples) t2 = time.clock() pt = t2-t1 print 'Python time:', pt ratio = pt/ct print 'weave vs. Python speedup:', ratio eric jones wrote: > Yikes. Your right! > > eric > > Perry Greenfield wrote: > >> Eric Jones wrote: >> >> >>> Bob.Cowdery at CGI-Europe.com wrote: >>> >>> The general ideas of uisng Numeric in Python is to avoid loops, even >>> at the expense of sometimes doing extra computations. In this case, >>> the signal processing calculations can be done for every entry in >>> the array (except the first). The conditions checked for in the >>> "if" statements can then be filtered out in subsequent array >>> processing steps. >>> >>> >>> Here is some (totally untested) code that should give you some ideas: >>> >>> gain = self.m_procGain >>> abs_complexout = abs(self.m_complexout) >>> >>> # Do singal processing calculation for all elements in array >>> gain[1:] = ( (gain[:-1] * (1 - attack)) + >>> ((attack * peak) / abs_complexout[:-1]) ) >>> >>> >> >> Does this do what is needed? I think the point he was making >> was that gain[i] depends on gain[i-1] which in turn depends >> on gain[i-2] and so on. The expression above doesn't capture that >> since from his code he doesn't show any initial value for gain >> (implicitly the gain[0] must exist). So using an array expression >> which uses a shifted gain array in its input won't work. >> This is the sort of problem that doesn't fit the array model >> very well unless there are some primitive array operations >> that happen to do the iterative thing needed (sometimes >> the accumulate or convolve functions can be used for this >> thing, but I can't see any obvious existing function. Seems >> like a candidate for a C extension but maybe I'm missing something. 
>> >> Perry >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> >> > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From oliphant at ee.byu.edu Tue May 4 14:47:43 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 04 May 2004 12:47:43 -0600 Subject: ***[Possible UCE]*** [SciPy-user] Help with array functions In-Reply-To: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> References: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> Message-ID: <4097E54F.9020007@ee.byu.edu> Bob.Cowdery at CGI-Europe.com wrote: > Hi all, > > Can anyone help me convert this loop to an array function. I can't > have loops in my signal processing as they take far too much > processing power. This is a fragment from a speech processor using > Ferkins formula. Essentially the gain array (m_procGain) which is 2048 > long (m_sampl_sz) needs its nth element processing into it's (n-1)th > element. This is eventually used to multiply the samples in > m_complexout, the first 2048 elements of this are used. Attack is a > scaler currently 0.4. > > peak = > abs(self.m_complexout[argmax(abs(self.m_complexout[:self.m_sampl_sz]))]) > i = arange(1,self.m_sampl_sz) > for x in i: > if abs(self.m_complexout[x-1]) > 0: > self.m_procGain[x] = ( (self.m_procGain[x-1] * > (1 - attack)) + ((attack * peak) / abs(self.m_complexout[x-1])) ) > if self.m_procGain[x] > level: > self.m_procGain[x] = level > else: > self.m_procGain[x] = 1 > Bob, You have here something similar to an IIR filter. The command signal.lfilter is usually used in situations like this. If I read this code correctly then define y[n] = self.m_procGain[n] with a = (1-attack) w[n] = (attack*peak) / abs(self.m_complexout[n]) and you want to basically compute y[n] = a y[n-1] + w[n-1] as long as w[n-1] is not infinite because of zero-valued abs(self.m_complexout[n-1]) The problem with using signal.lfilter is you have a non-linear filter due to the if statements. You could try to modify the algorithm a bit to maintain a linear filter, or you will have to use weave (or f2py). Just to expose you to the many ways to accomplish the problem. Here is another test using f2py ******** Results ********************* weave results: [ 0. 0.8 0.96 0.992 0.9984 1. 1. 1. 1. 1. ] python results: [ 0. 0.8 0.96 0.992 0.9984 1. 1. 1. 1. 1. ] f2py results: [ 0. 0.8 0.96 0.992 0.9984 1. 1. 1. 1. 1. ] C time: 0.08 Python time: 6.13 f2py time: 0.05 weave vs. Python speedup: 76.625 f2py vs. 
Python speedup: 122.6 ********* Code below *********** # test.py fort_code = r""" subroutine gain_compute(gain, abs_complexout, attack, peak, $ level, samples) integer samples double precision attack, peak, level double precision gain(samples), abs_complexout(samples) double precision val integer i do i = 2, samples if (abs_complexout(i-1).gt.0.0) then val = gain(i-1)*(1.0-attack) + $ (attack*peak) / abs_complexout(i-1) if (val.gt.level) then val = level end if else val = 1.0 end if gain(i) = val end do return end subroutine gain_compute """ from scipy import zeros, ones, Float64, arange from scipy import stats import scipy_distutils import weave import f2py2e try: import forttest except ImportError: f2py2e.compile(fort_code, modulename='forttest') import forttest def python_algorithm(gain, abs_complexout, attack, peak, level, samples): # gain is changed *inplace* i = arange(1,samples) for x in i: if abs_complexout[x-1] > 0: gain[x] = ( (gain[x-1] * (1.0 - attack)) + ((attack * peak) / abs_complexout[x-1]) ) if gain[x] > level: gain[x] = level else: gain[x] = 1 def weave_algorithm(gain, abs_complexout, attack, peak, level, samples): # gain is changed *inplace* code = """ for (int x=1; x < samples; x++) { if (abs_complexout[x-1] > 0.0) { gain[x] = ( (gain[x-1] * (1.0 - attack)) + ((attack * peak) / abs_complexout[x-1]) ); if (gain[x] > level) gain[x] = level; } else gain[x] = 1; } """ weave.inline(code, ['gain', 'abs_complexout', 'samples', 'attack', 'peak', 'level'], compiler='gcc') # some default parameters... samples = 2048 gain = zeros(samples,typecode=Float64) complexout = stats.choice([-1.0,0.0,1.0],size=samples) attack = .8 peak = 1.0 level = 1.0 # equivalence tests ac = abs(complexout) weave_algorithm(gain, ac, attack, peak, level, samples) print 'weave results:', gain[:10] gain = zeros(samples,typecode=Float64) python_algorithm(gain, ac, attack, peak, level, samples) print 'python results:', gain[:10] gain = zeros(samples,typecode=Float64) forttest.gain_compute(gain, ac, attack, peak, level, samples) print 'f2py results:', gain[:10] # speed tests import time N = 1000 t1 = time.clock() for it in range(N): weave_algorithm(gain, complexout, attack, peak, level, samples) t2 = time.clock() ct = t2 - t1 print 'C time:', ct t1 = time.clock() for it in range(N): python_algorithm(gain, complexout, attack, peak, level, samples) t2 = time.clock() pt = t2-t1 print 'Python time:', pt f2py_algorithm = forttest.gain_compute t1 = time.clock() for it in range(N): f2py_algorithm(gain, complexout, attack, peak, level, samples) t2 = time.clock() ft = t2-t1 print 'f2py time:', ft ratio = pt/ct ratio2 = pt/ft print 'weave vs. Python speedup:', ratio print 'f2py vs. Python speedup:', ratio2 > Thanks for any assistance. > > Bob > > > Bob Cowdery > CGI Senior Technical Architect > +44(0)1438 791517 > Mobile: +44(0)7771 532138 > bob.cowdery at cgi-europe.com > > > > > **** Confidentiality Notice **** Proprietary/Confidential > Information belonging to CGI Group Inc. and its affiliates > may be contained in this message. If you are not a recipient > indicated or intended in this message (or responsible for > delivery of this message to such person), or you think for > any reason that this message may have been addressed to you > in error, you may not use or copy or deliver this message > to anyone else. In such case, you should destroy this > message and are asked to notify the sender by reply email. 
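To make the signal.lfilter remark above concrete, here is a minimal sketch of the purely linear part of the recurrence, y[n] = (1-attack)*y[n-1] + w[n-1] (a sketch only -- it ignores both the clipping at level and the zero-valued samples, i.e. exactly the non-linearities noted above, so it matches the loop only while neither case is hit; attack, peak and abs_complexout are assumed to be defined as in the test scripts):

    from scipy import signal

    # w[n] = attack*peak / |complexout[n]|, assuming abs_complexout has no zeros
    w = (attack * peak) / abs_complexout
    # b = [0, 1] delays w by one sample; a = [1, -(1-attack)] feeds back y[n-1]
    gain_linear = signal.lfilter([0.0, 1.0], [1.0, -(1.0 - attack)], w)
    # gain_linear[0] == 0, matching a gain array initialised to zeros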
> >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From eric at enthought.com Tue May 4 16:25:34 2004 From: eric at enthought.com (eric jones) Date: Tue, 04 May 2004 15:25:34 -0500 Subject: ***[Possible UCE]*** [SciPy-user] Help with array functions In-Reply-To: <4097E54F.9020007@ee.byu.edu> References: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> <4097E54F.9020007@ee.byu.edu> Message-ID: <4097FC3E.40602@enthought.com> Hey Travis, The Fortran version didn't work for me. It looks like g77 is detected on my machine, and I get compile errors in the code. I also have an f90 compiler on my machine that wasn't picked up. Is there a way to switch which compiler is used through command line? eric Travis Oliphant wrote: > Bob.Cowdery at CGI-Europe.com wrote: > >> Hi all, >> >> Can anyone help me convert this loop to an array function. I can't >> have loops in my signal processing as they take far too much >> processing power. This is a fragment from a speech processor using >> Ferkins formula. Essentially the gain array (m_procGain) which is >> 2048 long (m_sampl_sz) needs its nth element processing into it's >> (n-1)th element. This is eventually used to multiply the samples in >> m_complexout, the first 2048 elements of this are used. Attack is a >> scaler currently 0.4. >> >> peak = >> abs(self.m_complexout[argmax(abs(self.m_complexout[:self.m_sampl_sz]))]) >> i = arange(1,self.m_sampl_sz) >> for x in i: >> if abs(self.m_complexout[x-1]) > 0: >> self.m_procGain[x] = ( (self.m_procGain[x-1] * >> (1 - attack)) + ((attack * peak) / abs(self.m_complexout[x-1])) >> ) if self.m_procGain[x] > level: >> self.m_procGain[x] = level >> else: >> self.m_procGain[x] = 1 >> > > Bob, > > You have here something similar to an IIR filter. The command > signal.lfilter is usually used in situations like this. > > If I read this code correctly then define y[n] = self.m_procGain[n] > with a = (1-attack) > > w[n] = (attack*peak) / abs(self.m_complexout[n]) > > and you want to basically compute > > y[n] = a y[n-1] + w[n-1] > as long as w[n-1] is not infinite because of zero-valued > abs(self.m_complexout[n-1]) > > The problem with using signal.lfilter is you have a non-linear filter > due to the if statements. You could try to modify the algorithm a bit > to maintain a linear filter, or you will have to use weave (or f2py). > Just to expose you to the many ways to accomplish the problem. Here > is another test using f2py > > > > ******** Results ********************* > weave results: [ 0. 0.8 0.96 0.992 0.9984 1. > 1. 1. 1. 1. ] > python results: [ 0. 0.8 0.96 0.992 0.9984 1. > 1. 1. 1. 1. ] > f2py results: [ 0. 0.8 0.96 0.992 0.9984 1. > 1. 1. 1. 1. ] > C time: 0.08 > Python time: 6.13 > f2py time: 0.05 > weave vs. Python speedup: 76.625 > f2py vs. 
Python speedup: 122.6 > > > ********* Code below *********** > > > > # test.py > > fort_code = r""" > > subroutine gain_compute(gain, abs_complexout, attack, peak, > $ level, samples) > > integer samples > double precision attack, peak, level > double precision gain(samples), abs_complexout(samples) > > double precision val > integer i > > do i = 2, samples > if (abs_complexout(i-1).gt.0.0) then > val = gain(i-1)*(1.0-attack) + > $ (attack*peak) / abs_complexout(i-1) > if (val.gt.level) then > val = level > end if > else > val = 1.0 > end if > gain(i) = val > end do > > return > end subroutine gain_compute > """ > > from scipy import zeros, ones, Float64, arange > from scipy import stats > import scipy_distutils > import weave > import f2py2e > > try: > import forttest > except ImportError: > f2py2e.compile(fort_code, modulename='forttest') > import forttest > > def python_algorithm(gain, abs_complexout, attack, peak, level, samples): > # gain is changed *inplace* > i = arange(1,samples) > for x in i: > if abs_complexout[x-1] > 0: > gain[x] = ( (gain[x-1] * (1.0 - attack)) + ((attack * peak) / > abs_complexout[x-1]) ) > if gain[x] > level: > gain[x] = level > else: > gain[x] = 1 > > def weave_algorithm(gain, abs_complexout, attack, peak, level, samples): > # gain is changed *inplace* > code = """ > for (int x=1; x < samples; x++) > { > if (abs_complexout[x-1] > 0.0) > { > gain[x] = ( (gain[x-1] * (1.0 - attack)) + > ((attack * peak) / abs_complexout[x-1]) > ); if (gain[x] > level) > gain[x] = level; > } > else > gain[x] = 1; > } > """ > weave.inline(code, > ['gain', 'abs_complexout', 'samples', 'attack', 'peak', > 'level'], > compiler='gcc') > > # some default parameters... > samples = 2048 > gain = zeros(samples,typecode=Float64) > complexout = stats.choice([-1.0,0.0,1.0],size=samples) > > attack = .8 > peak = 1.0 > level = 1.0 > > > # equivalence tests > ac = abs(complexout) > weave_algorithm(gain, ac, attack, peak, level, samples) > print 'weave results:', gain[:10] > > gain = zeros(samples,typecode=Float64) > python_algorithm(gain, ac, attack, peak, level, samples) > print 'python results:', gain[:10] > > gain = zeros(samples,typecode=Float64) > forttest.gain_compute(gain, ac, attack, peak, level, samples) > print 'f2py results:', gain[:10] > > > # speed tests > import time > > N = 1000 > > t1 = time.clock() > for it in range(N): > weave_algorithm(gain, complexout, attack, peak, level, samples) > t2 = time.clock() > ct = t2 - t1 > print 'C time:', ct > > t1 = time.clock() > for it in range(N): > python_algorithm(gain, complexout, attack, peak, level, samples) > t2 = time.clock() > pt = t2-t1 > print 'Python time:', pt > > f2py_algorithm = forttest.gain_compute > t1 = time.clock() > for it in range(N): > f2py_algorithm(gain, complexout, attack, peak, level, samples) > t2 = time.clock() > ft = t2-t1 > print 'f2py time:', ft > > > ratio = pt/ct > ratio2 = pt/ft > print 'weave vs. Python speedup:', ratio > print 'f2py vs. Python speedup:', ratio2 > > > > > > > > > > > > >> Thanks for any assistance. >> >> Bob >> >> >> Bob Cowdery >> CGI Senior Technical Architect >> +44(0)1438 791517 >> Mobile: +44(0)7771 532138 >> bob.cowdery at cgi-europe.com >> >> >> >> >> **** Confidentiality Notice **** Proprietary/Confidential >> Information belonging to CGI Group Inc. and its affiliates >> may be contained in this message. 
If you are not a recipient >> indicated or intended in this message (or responsible for >> delivery of this message to such person), or you think for >> any reason that this message may have been addressed to you >> in error, you may not use or copy or deliver this message >> to anyone else. In such case, you should destroy this >> message and are asked to notify the sender by reply email. >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From pearu at scipy.org Tue May 4 16:47:37 2004 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 4 May 2004 15:47:37 -0500 (CDT) Subject: [SciPy-user] Help with array functions In-Reply-To: <4097FC3E.40602@enthought.com> References: <9B40DB5FBF7D2048AD79C2A91F862A51772FFE@STEVAPPS05.Stevenage.CGI-Europe.com> <4097E54F.9020007@ee.byu.edu> <4097FC3E.40602@enthought.com> Message-ID: On Tue, 4 May 2004, eric jones wrote: > The Fortran version didn't work for me. It looks like g77 is detected > on my machine, and I get compile errors in the code. I also have an f90 > compiler on my machine that wasn't picked up. Is there a way to switch > which compiler is used through command line? > > try: > > import forttest > > except ImportError: > > f2py2e.compile(fort_code, modulename='forttest') Try: f2py2e.compile(fort_code, modulename='forttest', extra_args = '--fcompiler=gnu --compiler=mingw32') to specify Fortran and C compilers. Regards, Pearu From Bob.Cowdery at CGI-Europe.com Wed May 5 03:54:57 2004 From: Bob.Cowdery at CGI-Europe.com (Bob.Cowdery at CGI-Europe.com) Date: Wed, 5 May 2004 08:54:57 +0100 Subject: [SciPy-user] Help with array functions Message-ID: <9B40DB5FBF7D2048AD79C2A91F862A51773003@STEVAPPS05.Stevenage.CGI-Europe.com> Pearu, Eric, Travis .. Thanks for all your help. It's going to take me a little while to understand your replies and try them out. Real work unfortunately keeps getting in the way! I have a similar problem with the AGC (Automatic Gain Control) on the receive side which at the moment steps rather than ramps as I have miscoded it along the lines of the first reply. I will probably try to come up with a function I can use for both situations. If you have any more inspiration in the mean time please let me know. Thanks Bob _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user *** Confidentiality Notice *** Proprietary/Confidential Information belonging to CGI Group Inc. and its affiliates may be contained in this message. If you are not a recipient indicated or intended in this message (or responsible for delivery of this message to such person), or you think for any reason that this message may have been addressed to you in error, you may not use or copy or deliver this message to anyone else. In such case, you should destroy this message and are asked to notify the sender by reply email. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tikva at israsrv.net.il Wed May 5 04:37:20 2004 From: Tikva at israsrv.net.il (Tikva Bonneh) Date: Wed, 5 May 2004 11:37:20 +0300 Subject: [SciPy-user] gplt plot with errorbars Message-ID: <001001c4327c$323341d0$0101c80a@bonneh> I want to plot using gplt with errorbars. In gnuplot the interface would we plot x,y,e with errorbars, but the interface of gplt.plot is plot(x,y,'format string'), so I don't know how to pass the third parameter (the array of the error values). -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Wed May 5 05:31:17 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 5 May 2004 11:31:17 +0200 (CEST) Subject: [SciPy-user] scipy on quantian Message-ID: Dear SciPy users, due to joint efforts a working SciPy is available on quantian 0.4.9.6, "The Quantian Scientific Computing Environment", http://dirk.eddelbuettel.com/quantian.html which is a live CD based on knoppix/debian. So anyone who wants to do see what SciPy has to offer for him can test this without installing anything. (It might be also useful for Windows users to see how things look like on linux). Thanks go to Jose Fonseca for building the debian packages, Dirk Edelbuettel for quantian and his extremely fast and helpful response and Ulf Lorenz for a lot of testing. Best, Arnd P.S.: Actually, maybe this is worth a link on the new scipy webpage? From chris at fonnesbeck.org Wed May 5 10:47:23 2004 From: chris at fonnesbeck.org (Christopher Fonnesbeck) Date: Wed, 5 May 2004 10:47:23 -0400 Subject: [SciPy-user] gplt problems: saving output to file Message-ID: <1E7E28BC-9EA3-11D8-BA1A-000A956FDAC0@fonnesbeck.org> I continue to have a problem saving the output from certain gplt calls to disk. I have two plotting routines, both which call gplt.plot, one which plots a histogram, and the other which plots a time series. Though both produce output properly to screen as expected, the latter produces a zero-length file. Here are the two methods: def histogram(self,data,name,nbins=None,xlab='Value',ylab='Frequency',suffix =''): 'Internal histogram specification for handling nested arrays' 'If there is only one data array, go ahead and plot it ...
' if len(shape(data))==1: print 'Generating histogram of',name try: 'Generate reasonable bins if none specified' if not nbins: nbins = 50 * (len(data)>1000) or 10 'Generate binned values' counts,lowval,width,extra = histogram(data,numbins=nbins) 'Generate x-axis values' values = [lowval] for i in range(nbins-1): values.append(values[-1]+width) 'Pass to plotting function' plot(values,counts) 'Plot options' grid('off') xtitle(name) ytitle("Frequency") 'Save to file' output("%s%s.png" % (name,suffix),'png') close() except OverflowError: print '... cannot generate histogram' else: '... otherwise, plot recursively' tdata = swapaxes(data,0,1) for i in range(len(tdata)): self.histogram(tdata[i],name+'_'+str(i),nbins=nbins,xlab=xlab,suffix=suf fix) def time_series(self,data,name,xlab='Time',ylab='State',suffix=''): 'Plot (mutiple) time series' plot(data) print 'Plotting',name grid('off') xtitle(xlab) ytitle(ylab) output("%s%s.png" % (name,suffix),'png') close() I dont see any obvious reason why output() would produce different results for these two methods. If anyone has a plausible explanation for what may be happening, I'd love to hear from them. Thanks, Chris PS -- this occurs on both Linux and OSX -- Christopher J. Fonnesbeck ( c h r i s @ f o n n e s b e c k . o r g ) Georgia Cooperative Fish & Wildlife Research Unit, University of Georgia "I don't remember any kind of heaviness ruining my time at Yale." George W. Bush, on the civil rights movement -- Putting http://wecanstopspam.org in your email helps it pass through overzealous spam filters. From swisher at enthought.com Wed May 5 10:49:32 2004 From: swisher at enthought.com (Janet Swisher) Date: Wed, 5 May 2004 09:49:32 -0500 Subject: [SciPy-user] scipy on quantian In-Reply-To: Message-ID: <008501c432b0$30036be0$ab01a8c0@SWISHER> > P.S.: Actually, maybe this is worth a link on the new scipy webpage? Done. I added a link on the Downloads page. ------------------------- Janet Swisher Senior Technical Writer Enthought, Inc. 1-512-536-1057 From eric at enthought.com Wed May 5 10:58:29 2004 From: eric at enthought.com (eric jones) Date: Wed, 05 May 2004 09:58:29 -0500 Subject: [SciPy-user] scipy on quantian In-Reply-To: References: Message-ID: <40990115.9080209@enthought.com> Hey Arnd, Very cool. > P.S.: Actually, maybe this is worth a link on the new scipy webpage? Perhaps we should have a link page or "related projects" page that would be in the "navigation" panel of the site. I'll ask Travis V. about this. Also, you can always post a "news" articles from your personal scipy page that will show up on the front page. eric Arnd Baecker wrote: >Dear SciPy users, > >due to joint efforts a working SciPy is available on quantian 0.4.9.6, >"The Quantian Scientific Computing Environment", > http://dirk.eddelbuettel.com/quantian.html >which is a live CD based on knoppix/debian. > >So anyone who wants to do see what SciPy has to offer for him >can test this without installing anything. >(It might be also useful for Windows users to see >how things look like on linux). > >Thanks go to Jose Fonseca for building the debian packages, >Dirk Edelbuettel for quantian and his extremely fast and helpful response >and Ulf Lorenz for a lot of testing. > >Best, > >Arnd > >P.S.: Actually, maybe this is worth a link on the new scipy webpage? 
> > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From danny_shevitz at yahoo.com Wed May 5 11:08:42 2004 From: danny_shevitz at yahoo.com (danny shevitz) Date: Wed, 5 May 2004 08:08:42 -0700 (PDT) Subject: [SciPy-user] gplt plot with errorbars In-Reply-To: <001001c4327c$323341d0$0101c80a@bonneh> Message-ID: <20040505150842.65035.qmail@web41008.mail.yahoo.com> I posted essentially the same email about 3 weeks ago. The new_plot module allegedly wraps the errorbar plot from gplot. I was never able to get it to work. I asked for a working code sample, but none was ever posted. I have since moved to matplotlib. It was a lot easier to get up and running and is working well for me. There was a sample errorbar plot in the matplotlib source file. Get that and you'll be off and running. Danny --- Tikva Bonneh wrote: > I want to plot using gplt with errorbars. In gnuplot the interface > would we plot x,y,e with errorbars, > but the interface of gplt.plot is plot(x,y,'format string'), so I > don't know how to pass the third parameter (the array of the error > values). > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > __________________________________ Do you Yahoo!? Win a $20,000 Career Makeover at Yahoo! HotJobs http://hotjobs.sweepstakes.yahoo.com/careermakeover From arnd.baecker at web.de Wed May 5 11:13:09 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 5 May 2004 17:13:09 +0200 (CEST) Subject: [SciPy-user] scipy on quantian In-Reply-To: <008501c432b0$30036be0$ab01a8c0@SWISHER> References: <008501c432b0$30036be0$ab01a8c0@SWISHER> Message-ID: On Wed, 5 May 2004, Janet Swisher wrote: > > P.S.: Actually, maybe this is worth a link on the new scipy webpage? > > Done. I added a link on the Downloads page. Excellent! Maybe it is useful to point out on that page that there are also the debian packages by Jose Fonseca, http://jrfonseca.dyndns.org/debian/ Many thanks, Arnd From arnd.baecker at web.de Wed May 5 11:16:01 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 5 May 2004 17:16:01 +0200 (CEST) Subject: [SciPy-user] scipy on quantian In-Reply-To: <40990115.9080209@enthought.com> References: <40990115.9080209@enthought.com> Message-ID: On Wed, 5 May 2004, eric jones wrote: > Hey Arnd, > > Very cool. > > > P.S.: Actually, maybe this is worth a link on the new scipy webpage? > > Perhaps we should have a link page or "related projects" page that would > be in the "navigation" panel of the site. I'll ask Travis V. about this. I think Janet's solution to put it on http://www.scipy.org/download/ is quite good. > Also, you can always post a "news" articles from your personal scipy > page that will show up on the front page. Ok, I wasn't aware of this, should I still post a news item? (If so I will look into that tomorrow). Best, Arnd From arnd.baecker at web.de Wed May 5 11:48:01 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 5 May 2004 17:48:01 +0200 (CEST) Subject: [SciPy-user] help and lazy importer (again or still ?) Message-ID: Hi, with yesterday's CVS version I came across the following problem >>> help ("scipy.integrate.odeint") [[ this works fine ]] >>> from scipy.integrate import odeint >>> help ("scipy.integrate.odeint") Traceback (most recent call last): File "<stdin>", line 1, in ?
File "/home/python/PYTHON_New//lib/python2.3/site.py", line 309, in __call__ return pydoc.help(*args, **kwds) File "/home/python/PYTHON_New/lib/python2.3/site-packages/scipy_base/ppimport.py", line 307, in _scipy_pydoc_help_call a = a._ppimport_module File "/home/python/PYTHON_New/lib/python2.3/site-packages/scipy_base/ppimport.py", line 270, in __getattr__ module = self._ppimport_importer() File "/home/python/PYTHON_New/lib/python2.3/site-packages/scipy_base/ppimport.py", line 243, in _ppimport_importer module = __import__(name,None,None,['*']) ImportError: No module named odeint >>> After importing (via `from scipy.integrate import odeint`) >>>help(odeint) works (as expected). Our students stumbled across this and complained "There is no help for scipy.integrate.odeint" ;-). I hope that there is an easy solution to this (and that my question does not open a can of worms ... ;-) And there is another (related?) one: Python 2.3.3 (#1, May 3 2004, 16:38:25) [GCC 3.3.3 (Debian)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> help("scipy.linalg") Gives Help on instance of _ModuleLoader in scipy: scipy.linalg = Only on the second call to help("scipy.linalg") one gets the expected documentation. Best, Arnd From eric at enthought.com Wed May 5 12:11:32 2004 From: eric at enthought.com (eric jones) Date: Wed, 05 May 2004 11:11:32 -0500 Subject: [SciPy-user] scipy on quantian In-Reply-To: References: <40990115.9080209@enthought.com> Message-ID: <40991234.6020204@enthought.com> Arnd Baecker wrote: >On Wed, 5 May 2004, eric jones wrote: > > > >>Hey Arnd, >> >>Very cool. >> >> >> >>>P.S.: Actually, maybe this is worth a link on the new scipy webpage? >>> >>> >>Perhaps we should have a link page or "related projects" page that would >>be in the "navigation" panel of the site. I'll ask Travis V. about this. >> >> > >I think Janet's solution to put it on http://www.scipy.org/download/ >is quite good. > > > Fine with me also. >>Also, you can always post a "news" articles from your personal scipy >>page that will show up on the front page. >> >> > >Ok, I wasn't aware of this, should I still post a news? >(If so I will look into that tomorrow). > > > I think it would be good. eric >Best, > >Arnd > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From pearu at scipy.org Wed May 5 15:44:54 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 5 May 2004 14:44:54 -0500 (CDT) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Wed, 5 May 2004, Arnd Baecker wrote: > with yesterdays CVS version I came across the > following problem > > >>> help ("scipy.integrate.odeint") > [[ this works fine ]] > >>> from scipy.integrate import odeint > >>> help ("scipy.integrate.odeint") > ImportError: No module named odeint > > Our students stumbled across this and complained > "There is no help for scipy.integrate.odeint" ;-). > > I hope that there is an easy solution to this > (and that my question does not open a can of worms ... ;-) Luckily;-), it wasn't a bug but an overlooked case (I believe it was the last one). The corresponding support code is commited to CVS now. > And there is another (related?) one: > > Python 2.3.3 (#1, May 3 2004, 16:38:25) > [GCC 3.3.3 (Debian)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>> help("scipy.linalg") > Gives > Help on instance of _ModuleLoader in scipy: > scipy.linalg = e-packages/scipy/linalg/__init__.pyc' [imported]> > > Only on the second call to help("scipy.linalg") > one gets the expected documentation. Unfortunately, the only way to avoid the above issue is to import scipy_base or scipy (that imports ppimport module which adds a hook to pydoc.help to handle postponed modules) before trying to get help of scipy objects. You can import scipy_base in $PYTHONSTARTUP script, for instance. Regards, Pearu From Tikva at israsrv.net.il Wed May 5 15:41:50 2004 From: Tikva at israsrv.net.il (Tikva Bonneh) Date: Wed, 5 May 2004 22:41:50 +0300 Subject: [SciPy-user] matplotlib versus gplt multiple windows - (errorbars subjet continued) Message-ID: <004701c432d9$06261e70$0101c80a@bonneh> I have tried matplotlib and migrated to gplt because I couldn't open multiple windows with matplotlib . Is there a way to do it? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhunter at ace.bsd.uchicago.edu Wed May 5 15:21:55 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed, 05 May 2004 14:21:55 -0500 Subject: [SciPy-user] matplotlib versus gplt multiple windows - (errorbars subjet continued) In-Reply-To: <004701c432d9$06261e70$0101c80a@bonneh> ("Tikva Bonneh"'s message of "Wed, 5 May 2004 22:41:50 +0300") References: <004701c432d9$06261e70$0101c80a@bonneh> Message-ID: >>>>> "Tikva" == Tikva Bonneh writes: Tikva> I have tried matplotlib and migrated to gplt because I Tikva> couldn't open multiple windows with matplotlib . Is there a Tikva> way to do Sure from matplotlib.matlab import * figure(1) plot([1,2,3]) figure(2) plot([4,5,6]) show() See http://matplotlib.sourceforge.net/tutorial.html and the many examples in the examples directory of the source distribution, especially examples/multiple_figs_demo.py. Have fun and good luck! John Hunter From oliphant at ee.byu.edu Wed May 5 15:51:41 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Wed, 05 May 2004 13:51:41 -0600 Subject: [SciPy-user] Marginal distributions Message-ID: <409945CD.9050600@ee.byu.edu> Scott, If you are trying to integrate a multidimensional function you can easily have an upper limit that is a variable. You will probably want to read about scipy.integrate.quad for performing the actual integration. and scipy.vectorize (for vectorizing the resulting function). If you could be a bit more specific about the problem you are referring to I might be able to help further. My understanding of marginals density functions is that you compute integrals over the entire domain (no variable in the limit) so perhaps I'm not understanding your problem well enough. Post more to see if we can help. -Travis O. From arnd.baecker at web.de Wed May 5 15:49:25 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 5 May 2004 21:49:25 +0200 (CEST) Subject: [SciPy-user] gplt problems: saving output to file In-Reply-To: <1E7E28BC-9EA3-11D8-BA1A-000A956FDAC0@fonnesbeck.org> References: <1E7E28BC-9EA3-11D8-BA1A-000A956FDAC0@fonnesbeck.org> Message-ID: Hi Chris, which gnuplot version are you using? I think there was a time when I had a similar problem If it's not gnuplot 4.0 you could try upgrading. Or you could try to use the IPython+Gnuplot.py interface to gnuplot - I am using that a lot and without any problems. In the histogram case: do you supply one data array or several? (I am asking, because in the latter case output is called more than once). 
Best, Arnd On Wed, 5 May 2004, Christopher Fonnesbeck wrote: > I continue to have a problem saving the output from certain gplt calls > to disk. I have two plotting routines, both which call gplt.plot, one > which plots a histogram, and the other which plots a time series. > Though both produce output properly to screen as expected, the latter > produces a zero-length file. Here are the two methods: > > def > histogram(self,data,name,nbins=None,xlab='Value',ylab='Frequency',suffix > =''): > 'Internal histogram specification for handling nested arrays' > > 'If there is only one data array, go ahead and plot it ... ' > if len(shape(data))==1: > print 'Generating histogram of',name > try: > 'Generate reasonable bins if none specified' > if not nbins: nbins = 50 * (len(data)>1000) or 10 > 'Generate binned values' > counts,lowval,width,extra = > histogram(data,numbins=nbins) > 'Generate x-axis values' > values = [lowval] > for i in range(nbins-1): > values.append(values[-1]+width) > 'Pass to plotting function' > plot(values,counts) > 'Plot options' > grid('off') > xtitle(name) > ytitle("Frequency") > 'Save to file' > output("%s%s.png" % (name,suffix),'png') > close() > except OverflowError: > print '... cannot generate histogram' > else: > '... otherwise, plot recursively' > tdata = swapaxes(data,0,1) > for i in range(len(tdata)): > > self.histogram(tdata[i],name+'_'+str(i),nbins=nbins,xlab=xlab,suffix=suf > fix) > > def time_series(self,data,name,xlab='Time',ylab='State',suffix=''): > 'Plot (mutiple) time series' > > plot(data) > print 'Plotting',name > grid('off') > xtitle(xlab) > ytitle(ylab) > output("%s%s.png" % (name,suffix),'png') > close() > > > I dont see any obvious reason why output() would produce different > results for these two methods. If anyone has a plausible explanation > for what may be happening, I'd love to hear from them. > > Thanks, > Chris > > PS -- this occurs on both Linux and OSX > > -- > Christopher J. Fonnesbeck ( c h r i s @ f o n n e s b e c k . o r g ) > Georgia Cooperative Fish & Wildlife Research Unit, University of Georgia > > "I don't remember any kind of heaviness ruining my time at Yale." > George W. Bush, on the civil rights movement > -- > Putting http://wecanstopspam.org in your email helps it pass through > overzealous spam filters. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From Tikva at israsrv.net.il Wed May 5 15:56:21 2004 From: Tikva at israsrv.net.il (Tikva Bonneh) Date: Wed, 5 May 2004 22:56:21 +0300 Subject: [SciPy-user] matplotlib multiple windows Message-ID: <005b01c432db$0c9ba4d0$0101c80a@bonneh> My last pot was too short! I meant to say - I can open multiple figures in the same time, but what I wanted to do is show one figure and then open a new window while the old window still shows. I did not find how to do that with matplotlib. I had to close the first window in order to see the next window. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Wed May 5 17:04:53 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 5 May 2004 23:04:53 +0200 (CEST) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Wed, 5 May 2004, Pearu Peterson wrote: [...] > Luckily;-), it wasn't a bug but an overlooked case (I believe it was > the last one). The corresponding support code is commited to CVS now. Great - many thanks! 
> > And there is another (related?) one: > > > > Python 2.3.3 (#1, May 3 2004, 16:38:25) > > [GCC 3.3.3 (Debian)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> help("scipy.linalg") > > Gives > > Help on instance of _ModuleLoader in scipy: > > scipy.linalg = > e-packages/scipy/linalg/__init__.pyc' [imported]> > > > > Only on the second call to help("scipy.linalg") > > one gets the expected documentation. > > Unfortunately, the only way to avoid the above issue is to import > scipy_base or scipy (that imports ppimport module which adds a hook to > pydoc.help to handle postponed modules) before trying to get help of scipy > objects. You can import scipy_base in $PYTHONSTARTUP script, for instance. Well, there is always a price to pay at some point ;-) Still I am wondering why help("scipy.integrate.odeint") works at a bare python prompt, whereas help("scipy.integrate") doesn't. The latter seems simpler. I am not sure if I should be able to understand this behavior (somehow I think that in the first case it has to look at scipy.integrate first and then at what is in there. The first "look" at scipy.integrate basically adds the hook to pydoc.help so that the odeint documentation can be displayed - but this is just guessing ;-). Just one last question on this, and I will really shut up: are there any implications/problems for doc-string extracting tools (pydoc, epydoc and such)? I'd guess not has they have to look at scipy first and then the hook should be installed. Best, Arnd From jdhunter at ace.bsd.uchicago.edu Wed May 5 16:48:51 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed, 05 May 2004 15:48:51 -0500 Subject: [SciPy-user] matplotlib multiple windows In-Reply-To: <005b01c432db$0c9ba4d0$0101c80a@bonneh> ("Tikva Bonneh"'s message of "Wed, 5 May 2004 22:56:21 +0300") References: <005b01c432db$0c9ba4d0$0101c80a@bonneh> Message-ID: >>>>> "Tikva" == Tikva Bonneh writes: Tikva> My last pot was too short! I meant to say - I can open Tikva> multiple figures in the same time, but what I wanted to do Tikva> is show one figure and then open a new window while the old Tikva> window still shows. I did not find how to do that with Tikva> matplotlib. I had to close the first window in order to see Tikva> the next window. I'm sorry, I don't really understand what it is you can and cannot do. Could you please describe it more clearly? Are you working interactively from a python shell or running a script? Are you working in a python IDE such as idle or pycrust? What matplotlib backend are you using? See also http://matplotlib.sourceforge.net/interactive.html and http://matplotlib.sourceforge.net/faq.html#SHOW Cheers, John Hunter From gazzar at email.com Wed May 5 22:13:19 2004 From: gazzar at email.com (Gary Ruben) Date: Thu, 06 May 2004 12:13:19 +1000 Subject: [SciPy-user] gplt plot with errorbars Message-ID: <20040506021319.053441CE303@ws3-6.us4.outblaze.com> If you want to use gnuplot, I could provide sample code for doing xyerrorbar plots via gnuplot.py if that's helpful. I can't help with gplt though. Matplotlib is easier to get successful plots with though. I use Matplotlib and gnuplot.py; Matplotlib for quickly plotting stuff and gnuplot.py for publication type stuff because I'm Windows based and I like the convenience of copying and pasting plots into my word processor via the clipboard. 
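For readers who want to try the gnuplot.py route right away, a minimal xyerrorbar sketch along the lines Gary is offering might look like the following. This is an illustration only, not the sample from the original post; it assumes Michael Haggerty's Gnuplot.py package, where the plot-style keyword was spelled 'with' in the releases current at the time (later versions spell it 'with_'):

    import Gnuplot

    x  = [1.0, 2.0, 3.0, 4.0]
    y  = [1.1, 1.9, 3.2, 3.9]
    dx = [0.1, 0.1, 0.1, 0.1]    # error estimates in x
    dy = [0.2, 0.1, 0.3, 0.2]    # error estimates in y

    g = Gnuplot.Gnuplot()
    # columns are passed in gnuplot's x, y, xdelta, ydelta order
    d = Gnuplot.Data(x, y, dx, dy, with='xyerrorbars', title='measured')
    g.plot(d)
    # PostScript hardcopy for pasting into a paper
    g.hardcopy('errorbars.ps', color=1)

The same four-column layout works directly at the gnuplot prompt as plot 'file.dat' with xyerrorbars.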
Gary Ruben ----- Original Message ----- From: danny shevitz Date: Wed, 5 May 2004 08:08:42 -0700 (PDT) To: SciPy Users List Subject: Re: [SciPy-user] gplt plot with errorbars > I posted this essentially same email about 3 weeks ago. The new_plot > module allegedly wraps the errorbar plot from gplot. I was never able > to get it to work. I asked for a working code sample, but none was ever > posted. I have since moved to matplotlib. It was a lot easier to get up > and running and is working well for me. There was a sample errorbar > plot in the matplotlib source file. Get that and you'll be off and > running > > Danny > > > --- Tikva Bonneh wrote: > > I want to plot using gplt with errorbars. In gnuplot the interface > > would we plot x,y,e with errorbars, > > but the interface of gplt.plot is plot(x,y,'format string'), so I > > don't know how to pass the third parameter (the array of the error > > values). -- ___________________________________________________________ Sign-up for Ads Free at Mail.com http://promo.mail.com/adsfreejump.htm From pearu at scipy.org Thu May 6 02:56:37 2004 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 6 May 2004 01:56:37 -0500 (CDT) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Wed, 5 May 2004, Arnd Baecker wrote: > > Unfortunately, the only way to avoid the above issue is to import > > scipy_base or scipy (that imports ppimport module which adds a hook to > > pydoc.help to handle postponed modules) before trying to get help of scipy > > objects. You can import scipy_base in $PYTHONSTARTUP script, for instance. > > Well, there is always a price to pay at some point ;-) It turns out I was wrong, the latest patch in CVS makes also help("scipy.integrate") to work. > Just one last question on this, and I will really shut up: > are there any implications/problems for doc-string extracting > tools (pydoc, epydoc and such)? > I'd guess not has they have to look at scipy first > and then the hook should be installed. The answer depends on how these tools work internally. If you find any problems, please, let me know. Usually the fixes are simple. Thanks, Pearu From arnd.baecker at web.de Thu May 6 03:03:33 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 6 May 2004 09:03:33 +0200 (CEST) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Thu, 6 May 2004, Pearu Peterson wrote: > > > On Wed, 5 May 2004, Arnd Baecker wrote: > > > > Unfortunately, the only way to avoid the above issue is to import > > > scipy_base or scipy (that imports ppimport module which adds a hook to > > > pydoc.help to handle postponed modules) before trying to get help of scipy > > > objects. You can import scipy_base in $PYTHONSTARTUP script, for instance. > > > > Well, there is always a price to pay at some point ;-) > > It turns out I was wrong, the latest patch in CVS makes also > > help("scipy.integrate") > > to work. Glad that you were wrong ;-). I think the on-line help is an extremely helpful aspect of python/scipy, so that's why I am very happy to see this working! (though I think it might be nice to have some well-defined and _simple_ procedure to add documention to scipy - something like this is just being discussed for python on c.l.p. 
- but this is off-topic here and might be good for a new thread at some point) _Many_ thanks, Arnd From nadavh at visionsense.com Thu May 6 06:34:09 2004 From: nadavh at visionsense.com (Nadav Horesh) Date: Thu, 6 May 2004 12:34:09 +0200 Subject: [SciPy-user] Re: xplt documentation Message-ID: <07C6A61102C94148B8104D42DE95F7E86DECEF@exchange2k.envision.co.il> -----Original Message----- From: Arnd Baecker [mailto:arnd.baecker at web.de] Sent: Tue 04-May-04 14:49 To: Nadav Horesh Cc: scipy-user at scipy.net Subject: Re: [SciPy-user] Re: xplt documentation Hi, On Tue, 4 May 2004, Nadav Horesh wrote: >> I've made some testing (not exhaustive) of matplotlib under IDLE and >> pycrust on win32 and linux platforms. matplotlib seems to be more >> tolerant then xplt, > Can you be a bit more explicit, i.e. give examples ? I just succeeded to interact with the plot window (zoom for instance), while in xplt I had to call pyg_pending() in order to render the mouse commands. >> especially when show() is called instead of >> using the interactive mode. >Personally I think that this is one of the big strengths >of scipy.xplt. For example, when debugging >I use a lot > from IPython.Shell import IPythonShellEmbed > ipshell = IPythonShellEmbed() > > ipshell() > >The ipshell() can come at any point of your code >and all variables are available. >So a quick > plg(some_array) >or > pli(some_2d_array) >often allows to find bugs much more quickly >than anything else. I will try that. >> The linux environment seems to work a >> bit better then the win32. >> >> There is a project (not very active nowadays) called glplot >> (glplot.sf.net) which fills a shortcoming of gnuplot of >> generating false colour map of large matrices (gnuplot is too slow here). >There is a patch on sourceforge for gnuplot which allows >for bitmap images (ie. matrices). I once tried it out and it is very nice. >It might take a little time until this gets integrated >as the gnuplot team seems to be a bit busy with >after release issues. Can you explain more about this patch --- how to access, compile and use it? I am familiar only with the "set pm3d map" option, wich is very nice, but sill too slow for large matrices. >If you really want to go for large matrices have a look >at MayaVi, > http://mayavi.sourceforge.net/ >In particular > http://mayavi.sourceforge.net/docs/guide/x967.html#TOOLS I am using MayaVI for some time, it is reall an excellent tool. The direct call from python looked cumbersome. I'll try to write a script to automate some steps, what could make it more handy. Another excellent tool for viewing 2-4D array is flounder: http://www.enel.ucalgary.ca/~vigmond/flounder/ >Best, > >Arnd Thank you, Nadav. From nadavh at visionsense.com Thu May 6 06:42:35 2004 From: nadavh at visionsense.com (Nadav Horesh) Date: Thu, 6 May 2004 12:42:35 +0200 Subject: [SciPy-user] Re: xplt documentation Message-ID: <07C6A61102C94148B8104D42DE95F7E86DECF0@exchange2k.envision.co.il> -----Original Message----- From: John Hunter [mailto:jdhunter at ace.bsd.uchicago.edu] Sent: Tue 04-May-04 15:31 To: SciPy Users List Cc: Nadav Horesh Subject: Re: [SciPy-user] Re: xplt documentation >>>>> "Arnd" == Arnd Baecker writes: Arnd> There is a patch on sourceforge for gnuplot which allows for Arnd> bitmap images (ie. matrices). I once tried it out and it is Arnd> very nice. It might take a little time until this gets Arnd> integrated as the gnuplot team seems to be a bit busy with Arnd> after release issues. 
Arnd> If you really want to go for large matrices have a look at Arnd> MayaVi, http://mayavi.sourceforge.net/ In particular Arnd> http://mayavi.sourceforge.net/docs/guide/x967.html#TOOLS >Also the fairly new imshow in matplotlib plots grayscale and >pseudocolor numeric/numarray arrays. This replaces the painfully slow >pcolor which was unusable except for small arrays. The guts of imshow >are implemented entirely in numeric and extension could >handle large arrays efficiently. If you encounter performance issues, >I'd like to know about it! Indeed this is the best opton I have at the moment to fast-create a false-colour maps for large matrices. >Right now the only false colormap is ColormapJet (see >examples/pcolor_demo2.py) but you can define your own by subclassing >matplotlib.colors.Colormap. I played for some time with the spectrum of colour avaiable in gnuplot's "set palette" command, and finally I picked one that is very close to matplotlib's, so there is nothing really to complain about. >JDH Thanks, Nadav. From travis at enthought.com Thu May 6 08:27:08 2004 From: travis at enthought.com (Travis N. Vaught) Date: Thu, 06 May 2004 07:27:08 -0500 Subject: [SciPy-user] ANN: SciPy 2004 Conference - Python for Scientific Computing Message-ID: <409A2F1C.7050801@enthought.com> Greetings, The 1st annual *SciPy Conference* will be held this year at Caltech, September 2-3, 2004. As some of you may know, we've experienced great participation in two SciPy "Workshops" (with ~70 attendees in both 2002 and 2003) and this year we're graduating to a "conference." With the prestige of a conference comes the responsibility of a keynote address. This year, Jim Hugunin has answered the call and will be speaking to kickoff the meeting on Thursday September 2nd. Jim is the creator of Numeric Python, Jython, and co-designer of AspectJ. Jim is currently working on IronPython--a fast implementation of Python for .NET and Mono. Registration is now open. More information can be found here: http://www.scipy.org/wikis/scipy04 You may register early online for $100.00. Registration includes breakfast and lunch Thursday & Friday and a very nice dinner Thursday night. After July 16, registration will cost $150.00. Call for Presenters: If you are interested in presenting at the conference, you may submit an abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org -- the deadline for abstract submission is July 1, 2004. Papers and/or presentation slides are acceptable and are due by August 20, 2004. We're also planning three days of informal "Coding Sprints" prior to the conference -- August 30 to September 1, 2004. Conference registration is not required to participate in the sprints. Please email the list, however, if you plan to attend. Topics for these sprints will be determined via the mailing lists as well, so please submit any suggestions for topics to the scipy-user list: list signup: http://www.scipy.org/mailinglists/ list address: scipy-user at scipy.org Please forward this announcement to anyone/list that might be interested. I look forward to seeing you at the conference. Best Regards, Travis N. Vaught From arnd.baecker at web.de Thu May 6 18:18:34 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 7 May 2004 00:18:34 +0200 (CEST) Subject: [SciPy-user] help and lazy importer (again or still ?) 
In-Reply-To: References: Message-ID: Hi Pearu, On Thu, 6 May 2004, Pearu Peterson wrote: > It turns out I was wrong, the latest patch in CVS makes also > > help("scipy.integrate") > > to work. I just tried it out (>>> scipy.__version__: '0.3.1_280.4183') >>> help("scipy.integrate") No module named Image >>> help("scipy.integrate") No module named ImageFilter >>> help("scipy.integrate") No module named wxPython >>> help("scipy.integrate") [[[ now the expected help shows up ]]] I.e, only the 4th try leads to the doc-string? Another one: >>> help("scipy.integrate.odeint") No module named Image >>> help("scipy.integrate.odeint") No module named odeint Do you also observe this or did I screw up my installation ? Best, Arnd From oliphant at ee.byu.edu Thu May 6 21:13:42 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 06 May 2004 19:13:42 -0600 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <1083889365.6408.37.camel@newton.sandia.gov> References: <1083889365.6408.37.camel@newton.sandia.gov> Message-ID: <409AE2C6.1010808@ee.byu.edu> Todd Pitts from Sandia asked me the following question. >I have one final question about python on windows. It seems >that the non-interactive scripting works well enough. However, I have >not found a single interactive interpreter that I could recommend to >members of my group without serious reservations. > >I have tried IPython, >PyCrust (various), IDLE, Using it from within emacs (not cygwin emacs, >just emacs under windows), PythonWin, etc. They all have serious >problems when it comes to usability. Most don't have tab completion at >all. Most, (emacs included) don't work with any plotting package. I >have tried gist from scipy and matplotlib (doesn't work with anything >except straight scripting). > >Is python really this unusable for >interactive data exploration and modeling under Windows? > I'm forwarding it to these lists so that individuals with more experience on Windows than I have can respond to his request. What do people use on Windows for interactive work???? -Travis Oliphant From haase at msg.ucsf.edu Thu May 6 21:20:28 2004 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 6 May 2004 18:20:28 -0700 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <409AE2C6.1010808@ee.byu.edu> References: <1083889365.6408.37.camel@newton.sandia.gov> <409AE2C6.1010808@ee.byu.edu> Message-ID: <200405061820.29022.haase@msg.ucsf.edu> We (I) like PyCrust quite a lot. What is your complain with this in particular ? - Sebastian Haase On Thursday 06 May 2004 06:13 pm, Travis Oliphant wrote: > Todd Pitts from Sandia asked me the following question. > > >I have one final question about python on windows. It seems > >that the non-interactive scripting works well enough. However, I have > >not found a single interactive interpreter that I could recommend to > >members of my group without serious reservations. > > > >I have tried IPython, > >PyCrust (various), IDLE, Using it from within emacs (not cygwin emacs, > >just emacs under windows), PythonWin, etc. They all have serious > >problems when it comes to usability. Most don't have tab completion at > >all. Most, (emacs included) don't work with any plotting package. I > >have tried gist from scipy and matplotlib (doesn't work with anything > >except straight scripting). > > > >Is python really this unusable for > >interactive data exploration and modeling under Windows? 
> > I'm forwarding it to these lists so that individuals with more > experience on Windows than I have can respond to his request. > > What do people use on Windows for interactive work???? > > -Travis Oliphant > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From jdhunter at ace.bsd.uchicago.edu Thu May 6 21:56:25 2004 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Thu, 06 May 2004 20:56:25 -0500 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <409AE2C6.1010808@ee.byu.edu> (Travis Oliphant's message of "Thu, 06 May 2004 19:13:42 -0600") References: <1083889365.6408.37.camel@newton.sandia.gov> <409AE2C6.1010808@ee.byu.edu> Message-ID: >>>>> "Travis" == Travis Oliphant writes: >> (emacs included) don't work with any plotting package. I have >> tried gist from scipy and matplotlib (doesn't work with >> anything except straight scripting). Is python really this >> unusable for interactive data exploration and modeling under >> Windows? Travis> I'm forwarding it to these lists so that individuals with Travis> more experience on Windows than I have can respond to his Travis> request. What do people use on Windows for interactive Travis> work???? Have you tried matplotlib with the TkAgg backend using the standard python shell, ipython or idle launched with -n? Most people report good luck on windows with one of these shells for interactive use, particularly the first two. The TkAgg backend is a fairly recent addition, and a couple of settings in the matplotlib rc file will make your experience a little more pleasant backend : TkAgg interactive : True tk.window_focus : True # Maintain shell focus for TkAgg Now when you fire up python or ipython and then import matplotlib, you'll be in interactive mode using the Tkinter backend. The window focus setting is designed to keep your figure from taking the focus when you issue plotting commands. Admittedly scripting is the primary way most people use matplotlib, but we've been working to make the interactive experience better. So if it's been a while since you tried it interactively on win32, it may be worth a second look using a recent release. It is important to consult the backends section of the web page to make sure your IDE is compatible with the backend you are using, however. Finally, Todd Miller, who developed the Tk backend, has been very responsive in fixing known problems, so if you'll let us know what limitations you find we'll do what we can to fix them up. John Hunter From pearu at scipy.org Thu May 6 23:53:18 2004 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 6 May 2004 22:53:18 -0500 (CDT) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Fri, 7 May 2004, Arnd Baecker wrote: > I just tried it out (>>> scipy.__version__: '0.3.1_280.4183') > > >>> help("scipy.integrate") > No module named Image > >>> help("scipy.integrate") > No module named ImageFilter > >>> help("scipy.integrate") > No module named wxPython > >>> help("scipy.integrate") > [[[ now the expected help shows up ]]] > > I.e, only the 4th try leads to the doc-string? Yes, I have noticed that too when wxPython is not installed (Python 2.2). > Another one: > >>> help("scipy.integrate.odeint") > No module named Image > > >>> help("scipy.integrate.odeint") > No module named odeint I'll look into it. 
Pearu From fperez at colorado.edu Fri May 7 00:09:13 2004 From: fperez at colorado.edu (Fernando Perez) Date: Thu, 06 May 2004 22:09:13 -0600 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <409AE2C6.1010808@ee.byu.edu> References: <1083889365.6408.37.camel@newton.sandia.gov> <409AE2C6.1010808@ee.byu.edu> Message-ID: <409B0BE9.7090305@colorado.edu> Travis Oliphant wrote: > Todd Pitts from Sandia asked me the following question. > > >>I have one final question about python on windows. It seems >>that the non-interactive scripting works well enough. However, I have >>not found a single interactive interpreter that I could recommend to >>members of my group without serious reservations. >> >>I have tried IPython, >>PyCrust (various), IDLE, Using it from within emacs (not cygwin emacs, >>just emacs under windows), PythonWin, etc. They all have serious >>problems when it comes to usability. Most don't have tab completion at >>all. Most, (emacs included) don't work with any plotting package. I >>have tried gist from scipy and matplotlib (doesn't work with anything >>except straight scripting). >> >>Is python really this unusable for >>interactive data exploration and modeling under Windows? As the ipython (http://ipython.scipy.org) author I'm obviously biased, but Windows users seem fairly happy with it. Using Gary Bishop's extensions, it is possible (it should basically work out of the box, though I don't know because I don't use Windows) to get readline and coloring support under a normal (non-cygwin) command shell. Gary's tools are at: http://sourceforge.net/projects/uncpythontools I also imagine that using ipython within emacs as your python shell (which requires a special python-mode.el and ipython.el, available at http://ipython.scipy.org/dist/ipython-emacs-0.3.tgz) must be an option under Windows. I've only used them under Linux, but since this is just regular Emacs lisp, I imagine it should be platform-independent. I hope this helps. Regards, Fernando. From fperez at colorado.edu Fri May 7 00:09:13 2004 From: fperez at colorado.edu (Fernando Perez) Date: Thu, 06 May 2004 22:09:13 -0600 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <409AE2C6.1010808@ee.byu.edu> References: <1083889365.6408.37.camel@newton.sandia.gov> <409AE2C6.1010808@ee.byu.edu> Message-ID: <409B0BE9.7090305@colorado.edu> Travis Oliphant wrote: > Todd Pitts from Sandia asked me the following question. > > >>I have one final question about python on windows. It seems >>that the non-interactive scripting works well enough. However, I have >>not found a single interactive interpreter that I could recommend to >>members of my group without serious reservations. >> >>I have tried IPython, >>PyCrust (various), IDLE, Using it from within emacs (not cygwin emacs, >>just emacs under windows), PythonWin, etc. They all have serious >>problems when it comes to usability. Most don't have tab completion at >>all. Most, (emacs included) don't work with any plotting package. I >>have tried gist from scipy and matplotlib (doesn't work with anything >>except straight scripting). >> >>Is python really this unusable for >>interactive data exploration and modeling under Windows? As the ipython (http://ipython.scipy.org) author I'm obviously biased, but Windows users seem fairly happy with it. Using Gary Bishop's extensions, it is possible (it should basically work out of the box, though I don't know because I don't use Windows) to get readline and coloring support under a normal (non-cygwin) command shell. 
Gary's tools are at: http://sourceforge.net/projects/uncpythontools I also imagine that using ipython within emacs as your python shell (which requires a special python-mode.el and ipython.el, available at http://ipython.scipy.org/dist/ipython-emacs-0.3.tgz) must be an option under Windows. I've only used them under Linux, but since this is just regular Emacs lisp, I imagine it should be platform-independent. I hope this helps. Regards, Fernando. From fperez at colorado.edu Fri May 7 00:19:57 2004 From: fperez at colorado.edu (Fernando Perez) Date: Thu, 06 May 2004 22:19:57 -0600 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <409AE2C6.1010808@ee.byu.edu> References: <1083889365.6408.37.camel@newton.sandia.gov> <409AE2C6.1010808@ee.byu.edu> Message-ID: <409B0E6D.5060306@colorado.edu> Travis Oliphant wrote: > Todd Pitts from Sandia asked me the following question. >>all. Most, (emacs included) don't work with any plotting package. I >>have tried gist from scipy and matplotlib (doesn't work with anything >>except straight scripting). Sorry for the 2nd post. I forgot to mention plotting: ipython includes enhanced support for Gnuplot, with modifications to the interactive plotting syntax to make it as quick and easy to use as possible. I use python 100% of the time I'm working on scientific code and data exploration, and my environment is: Xemacs for heavy editing, a terminal with ipython for interactive work, Gnuplot (with ipython's extensions) for 2d plotting and Mayavi (http://mayavi.sourceforge.net) for sophisticated data visualization. With Gnuplot 4.0's mouse support, it is an extremely convenient tool for fast data exploration, capable of publication-quality PostScript output. Finally, for diagram generation and other problems of a graphical but not purely 'plotting' nature, I have been very happy with PyX. IPython was designed _specifically_ to make interactive scientific computing work as fluid as possible. It has direct access to the underlying system shell, it remembers previous values (like Mathematica's %N variables), and has many other features which you may find useful in this kind of context. I haven't looked at matplotlib yet (I've been using gnuplot since the days of Windows 3.0), but I will very soon, and I have heard excellent things about it. For those already familiar with matlab's syntax, this may be a better option than gnuplot. If there are any problems with ipython's interaction with matplotlib, I'll gladly fix them if possible. Regards, Fernando. From fperez at colorado.edu Fri May 7 00:19:57 2004 From: fperez at colorado.edu (Fernando Perez) Date: Thu, 06 May 2004 22:19:57 -0600 Subject: [SciPy-user] Re: Windose and Python In-Reply-To: <409AE2C6.1010808@ee.byu.edu> References: <1083889365.6408.37.camel@newton.sandia.gov> <409AE2C6.1010808@ee.byu.edu> Message-ID: <409B0E6D.5060306@colorado.edu> Travis Oliphant wrote: > Todd Pitts from Sandia asked me the following question. >>all. Most, (emacs included) don't work with any plotting package. I >>have tried gist from scipy and matplotlib (doesn't work with anything >>except straight scripting). Sorry for the 2nd post. I forgot to mention plotting: ipython includes enhanced support for Gnuplot, with modifications to the interactive plotting syntax to make it as quick and easy to use as possible. 
I use python 100% of the time I'm working on scientific code and data exploration, and my environment is: Xemacs for heavy editing, a terminal with ipython for interactive work, Gnuplot (with ipython's extensions) for 2d plotting and Mayavi (http://mayavi.sourceforge.net) for sophisticated data visualization. With Gnuplot 4.0's mouse support, it is an extremely convenient tool for fast data exploration, capable of publication-quality PostScript output. Finally, for diagram generation and other problems of a graphical but not purely 'plotting' nature, I have been very happy with PyX. IPython was designed _specifically_ to make interactive scientific computing work as fluid as possible. It has direct access to the underlying system shell, it remembers previous values (like Mathematica's %N variables), and has many other features which you may find useful in this kind of context. I haven't looked at matplotlib yet (I've been using gnuplot since the days of Windows 3.0), but I will very soon, and I have heard excellent things about it. For those already familiar with matlab's syntax, this may be a better option than gnuplot. If there are any problems with ipython's interaction with matplotlib, I'll gladly fix them if possible. Regards, Fernando. From arnd.baecker at web.de Fri May 7 03:59:03 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 7 May 2004 09:59:03 +0200 (CEST) Subject: [SciPy-user] scipy on quantian In-Reply-To: <40991234.6020204@enthought.com> References: <40991234.6020204@enthought.com> Message-ID: Hi, On Wed, 5 May 2004, eric jones wrote: >>Also, you can always post a "news" articles from your personal scipy >>page that will show up on the front page. I just tried that and have a text in my http://www.scipy.org/Members/baecker/folder_contents Now, how do I make that appear in the News section on the front page? (or does this happen automatically after approval ?) Best, Arnd From karthik at james.hut.fi Fri May 7 04:21:31 2004 From: karthik at james.hut.fi (Karthikesh Raju) Date: Fri, 7 May 2004 11:21:31 +0300 Subject: [SciPy-user] covariance and other linear algebra functions Message-ID: Hi, Is am searching for a covariance function and other linear algebra functions like eigen value decomposition. i am interested in both scipy (havent managed to get scipy installed on the main computation systems) and also numarray solution. With warm regards karthik ----------------------------------------------------------------------- Karthikesh Raju, email: karthik at james.hut.fi Researcher, http://www.cis.hut.fi/karthik Helsinki University of Technology, Tel: +358-9-451 5389 Laboratory of Comp. & Info. Sc., Fax: +358-9-451 3277 Department of Computer Sc., P.O Box 5400, FIN 02015 HUT, Espoo, FINLAND ----------------------------------------------------------------------- From eric at enthought.com Fri May 7 04:37:22 2004 From: eric at enthought.com (eric jones) Date: Fri, 07 May 2004 03:37:22 -0500 Subject: [SciPy-user] scipy on quantian In-Reply-To: References: <40991234.6020204@enthought.com> Message-ID: <409B4AC2.2080404@enthought.com> I just "published" it. I hope this step won't be necessary at some point in the future. eric Arnd Baecker wrote: >Hi, > >On Wed, 5 May 2004, eric jones wrote: > > > >>>Also, you can always post a "news" articles from your personal scipy >>>page that will show up on the front page. 
>>> >>> > >I just tried that and have a text in my > http://www.scipy.org/Members/baecker/folder_contents >Now, how do I make that appear in the News section on the front page? >(or does this happen automatically after approval ?) > >Best, > >Arnd > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From funkmeister at lynxseismicdata.com Fri May 7 18:47:11 2004 From: funkmeister at lynxseismicdata.com (Funkmeister) Date: Fri, 7 May 2004 16:47:11 -0600 Subject: [SciPy-user] Logistic Regression Modules? Message-ID: <7A48F014-A078-11D8-9A2A-000A95B074E0@lynxseismicdata.com> Hello SciPy Users, I am an experienced python user, but I am rather new to SciPy and a little rusty with Stats applications so I apologize now for asking a possibly obvious question. I am working with the GRASS GIS system and need to do some logistic regression computations. Are there any modules in SciPy to do this, or do I need to write them? In the documentation i found genlogistic and logistic under the statistical functions. Would I be able to use these functions to do the regression analysis? Thanks In Advance, Craig From rossini at blindglobe.net Fri May 7 20:54:24 2004 From: rossini at blindglobe.net (A.J. Rossini) Date: Fri, 07 May 2004 17:54:24 -0700 Subject: [SciPy-user] Logistic Regression Modules? In-Reply-To: <7A48F014-A078-11D8-9A2A-000A95B074E0@lynxseismicdata.com> (funkmeister@lynxseismicdata.com's message of "Fri, 7 May 2004 16:47:11 -0600") References: <7A48F014-A078-11D8-9A2A-000A95B074E0@lynxseismicdata.com> Message-ID: <853c6beo1r.fsf@servant.blindglobe.net> Funkmeister writes: > Hello SciPy Users, I am an experienced python user, but I am rather > new to SciPy and a little rusty with Stats applications so I apologize > now for asking a possibly obvious question. > > I am working with the GRASS GIS system and need to do some logistic > regression computations. Are there any modules in SciPy to do this, or > do I need to write them? In the documentation i found genlogistic and > logistic under the statistical functions. Would I be able to use these > functions to do the regression analysis? I'm guessing not. The simplest approach would be to wrap a call to R's glm function via Rpy. The second simplest would be to plug in the likelihood function and optimize (which is usually how one used to do this -- I wrote one for my thesis, via newton-raphson, eons ago (well, mid 90's). You could also simulate the optimization via EM, but that's a pretty slow approach. best, -tony -- rossini at u.washington.edu http://www.analytics.washington.edu/ Biomedical and Health Informatics University of Washington Biostatistics, SCHARP/HVTN Fred Hutchinson Cancer Research Center UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable FHCRC (M/W): 206-667-7025 FAX=206-667-4812 | use Email CONFIDENTIALITY NOTICE: This e-mail message and any attachments may be confidential and privileged. If you received this message in error, please destroy it and notify the sender. Thank you. From scipy at zunzun.com Fri May 7 22:27:27 2004 From: scipy at zunzun.com (James R. Phillips) Date: Fri, 7 May 2004 22:27:27 -400 Subject: [SciPy-user] Enthought 0.3 mingw32 weave in WinXP now working Message-ID: <409c458f6addc0.17626783@mercury.sabren.com> Whew. Here are the magic chants from python.org: http://docs.python.org/inst/tweak-flags.html#SECTION000622000000000000000 That one was painful to figure out. 
James Phillips http://zunzun.com From eric at enthought.com Fri May 7 22:46:47 2004 From: eric at enthought.com (eric jones) Date: Fri, 07 May 2004 21:46:47 -0500 Subject: [SciPy-user] Enthought 0.3 mingw32 weave in WinXP now working In-Reply-To: <409c458f6addc0.17626783@mercury.sabren.com> References: <409c458f6addc0.17626783@mercury.sabren.com> Message-ID: <409C4A17.20704@enthought.com> This should happen automatically when using weave. It has the creation of the libpython.a step built in if it doesn't find it. Here is what I see (after deleting my existing libpython23.a file in c:\python23\libs): c:\wrk\proava_new\src>python Enthought Edition build 1057 Python 2.3.3 (#51, Feb 16 2004, 04:07:52) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import weave >>> weave.inline(" ",compiler='gcc') file changed mingw32 Building import library: "C:\Python23\libs\libpython23.a" Not sure why it doesn't fly on your machine. eric James R. Phillips wrote: >Whew. Here are the magic chants from python.org: > >http://docs.python.org/inst/tweak-flags.html#SECTION000622000000000000000 > >That one was painful to figure out. > > James Phillips > http://zunzun.com > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From eric at enthought.com Sat May 8 11:41:31 2004 From: eric at enthought.com (eric jones) Date: Sat, 08 May 2004 10:41:31 -0500 Subject: [Fwd: Re: [SciPy-user] Enthought 0.3 mingw32 weave in WinXP now working] Message-ID: <409CFFAB.1050907@enthought.com> Oops. responded only to James... -------------- next part -------------- An embedded message was scrubbed... From: eric jones Subject: Re: [SciPy-user] Enthought 0.3 mingw32 weave in WinXP now working Date: Sat, 08 May 2004 10:27:51 -0500 Size: 2032 URL: From pearu at scipy.org Sun May 9 03:23:17 2004 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 9 May 2004 02:23:17 -0500 (CDT) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Thu, 6 May 2004, Pearu Peterson wrote: > On Fri, 7 May 2004, Arnd Baecker wrote: > > > I just tried it out (>>> scipy.__version__: '0.3.1_280.4183') > > > > >>> help("scipy.integrate") > > No module named Image > > >>> help("scipy.integrate") > > No module named ImageFilter > > >>> help("scipy.integrate") > > No module named wxPython > > >>> help("scipy.integrate") > > [[[ now the expected help shows up ]]] > > > > I.e, only the 4th try leads to the doc-string? > > Yes, I have noticed that too when wxPython is not installed (Python 2.2). > > > Another one: > > >>> help("scipy.integrate.odeint") > > No module named Image > > > > >>> help("scipy.integrate.odeint") > > No module named odeint The above issues are now fixed in CVS. 
When PIL or wxPython are not installed then expect the following messages when importing scipy: /usr/local/lib/python2.2/site-packages/scipy/pilutil.pyc No module named PIL /usr/local/lib/python2.2/site-packages/scipy/plt/interface.pyc No module named wxPython /usr/local/lib/python2.2/site-packages/scipy/plt/__init__.pyc No module named wxPython Pearu From gazzar at email.com Mon May 10 00:12:58 2004 From: gazzar at email.com (Gary Ruben) Date: Mon, 10 May 2004 14:12:58 +1000 Subject: [SciPy-user] Announce: ErrorVal.py 1.0 for error-bounds calculations Message-ID: <20040510041258.DA8523CE181@ws3-4.us4.outblaze.com> I finally got around to putting my errorbar calculation module on my website. This should be of interest to Numeric Python users who deal with calculation of error-bounds on experimental data. Synopsis: A module providing a Python number type class and helper functions to ease the task of computing error bounds on experimental data values. This module defines an abstract data type, Err, which carries a central/prime value and upper and lower error bounds. Helper functions are provided to allow Python Numeric rank-1 arrays of Err objects to be constructed with ease and to apply Numeric Ufuncs. Written under Python 2.3.3 and Numeric 23.0 Example of usage: upperPressure = ArrayOfErr([909., 802., 677., 585., 560., 548.], 1.0) lowerPressure = ArrayOfErr([144., 246., 378., 469., 493., 505.], 1.0) pressureDiff = upperPressure - lowerPressure V_RStandard = ArrayOfErr([2.016, 2.016, 2.020, 2.017, 2.021, 2.019], 0.001) R = 100.2 # standard resistor value [Ohm] I = V_RStandard / R # current [A] V_RAB = ArrayOfErr([6.167, 6.168, 6.170, 6.160, (6.153, 0.02), (5.894, 0.01)], 0.002) R_AB = V_RAB / I logR_AB = ApplyUfuncToErr(R_AB, log10) # This means log10(R_AB) print PrimeVals(logR_AB) print MinVals(logR_AB) print MaxVals(logR_AB) Enjoy, Gary Ruben -- ___________________________________________________________ Sign-up for Ads Free at Mail.com http://promo.mail.com/adsfreejump.htm From elcortostrash at gmx.net Mon May 10 17:37:11 2004 From: elcortostrash at gmx.net (el corto) Date: Mon, 10 May 2004 23:37:11 +0200 (MEST) Subject: [SciPy-user] Re: Windose and Python References: <409AE2C6.1010808@ee.byu.edu> Message-ID: <26281.1084225031@www48.gmx.net> > Todd Pitts from Sandia asked me the following question. > > >I have one final question about python on windows. It seems > >that the non-interactive scripting works well enough. However, I have > >not found a single interactive interpreter that I could recommend to > >members of my group without serious reservations. > > > >I have tried IPython, > >PyCrust (various), IDLE, Using it from within emacs (not cygwin emacs, > >just emacs under windows), PythonWin, etc. They all have serious > >problems when it comes to usability. > >Most don't have tab completion at > >all. This is at least not true for PythonWin. E.g. when you type from scipy. a popup shows up when you hit the "." offering several completion options. I've tested PyCrust and IDLE and found PythonWin the best one for me. > >Most, (emacs included) don't work with any plotting package. I > >have tried gist from scipy and matplotlib (doesn't work with anything > >except straight scripting). I use scipy.xplt with absolutely no probs. > > > >Is python really this unusable for > >interactive data exploration and modeling under Windows? No :) > > > > I'm forwarding it to these lists so that individuals with more > experience on Windows than I have can respond to his request. 
> > What do people use on Windows for interactive work???? > > -Travis Oliphant > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- "In science one tries to tell people, in such a way as to be understood by everyone, something that no one ever knew before. But in poetry, it's the exact opposite." -Paul Dirac (1902-1984) "Sie haben neue Mails!" - Die GMX Toolbar informiert Sie beim Surfen! Jetzt aktivieren unter http://www.gmx.net/info From arnd.baecker at web.de Tue May 11 03:00:12 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 11 May 2004 09:00:12 +0200 (CEST) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: Dear Pearu, I checked out the CVS version and there seem to be some further issues (each starting with a fresh python session): Case 1 (this works now - thanks!!!) ----------------------------------- help("scipy.integrate.odeint") from scipy.integrate import odeint help("scipy.integrate.odeint") Case 2 ------ help("scipy.integrateMISPELLED.odeint") help("scipy.integrate.odeint") gives Python 2.3.3 (#1, May 3 2004, 16:38:25) [GCC 3.3.3 (Debian)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> help("scipy.integrateMISPELLED.odeint") /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/pilutil.pyc No module named PIL no Python documentation found for 'scipy.integrateMISPELLED.odeint' >>> help("scipy.integrate.odeint") ppimport('scipy.integrate.odeint') caller locals: __name__=__main__ No module named odeint and no further repetition of the call helps here ;-). Doing then help("scipy.xplt") works, and after this help("scipy.integrate.odeint") works as well. Case 3 ------ from scipy.integrate import odeint def f(x): """doc string for f(x)""" return x*x >>> help(f) [[gives the help on f and then, after returning from the pager: ]] /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/interface.pyc No module named wxPython /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/__init__.pyc No module named wxPython >From a user perspective I would not have thought to see this here ;-) Sorry for being a pain here, but maybe you (again) have a solution for the above points. Many thanks, Arnd From karthik at james.hut.fi Tue May 11 05:26:21 2004 From: karthik at james.hut.fi (Karthikesh Raju) Date: Tue, 11 May 2004 12:26:21 +0300 Subject: [SciPy-user] simple question about digonal matrices Message-ID: Hi All, How does one do something as mundane as assigning value to a diagonal vector. D = array([1,2,3]) D_with_zeros = array([[0,0,0], [0,0,0], [0,0,0]]) now i want D in the diagonal of D_with_zeros, i was trying something like: numarray.diagonal(D_with_zeros) = D numarray is not happy with my assignment? Warm regards karthik ----------------------------------------------------------------------- Karthikesh Raju, email: karthik at james.hut.fi Researcher, http://www.cis.hut.fi/karthik Helsinki University of Technology, Tel: +358-9-451 5389 Laboratory of Comp. & Info. 
Sc., Fax: +358-9-451 3277 Department of Computer Sc., P.O Box 5400, FIN 02015 HUT, Espoo, FINLAND ----------------------------------------------------------------------- From karthik at james.hut.fi Tue May 11 05:27:41 2004 From: karthik at james.hut.fi (Karthikesh Raju) Date: Tue, 11 May 2004 12:27:41 +0300 Subject: [SciPy-user] simple question about digonal matrices Message-ID: Hi All, How does one do something as mundane as assigning value to a diagonal vector. D = array([1,2,3]) D_with_zeros = array([[0,0,0], [0,0,0], [0,0,0]]) now i want D in the diagonal of D_with_zeros, i was trying something like: numarray.diagonal(D_with_zeros) = D numarray is not happy with my assignment? Warm regards karthik ----------------------------------------------------------------------- Karthikesh Raju, email: karthik at james.hut.fi Researcher, http://www.cis.hut.fi/karthik Helsinki University of Technology, Tel: +358-9-451 5389 Laboratory of Comp. & Info. Sc., Fax: +358-9-451 3277 Department of Computer Sc., P.O Box 5400, FIN 02015 HUT, Espoo, FINLAND ----------------------------------------------------------------------- From pearu at scipy.org Tue May 11 09:05:03 2004 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 11 May 2004 08:05:03 -0500 (CDT) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: Hi Arnd, On Tue, 11 May 2004, Arnd Baecker wrote: > I checked out the CVS version and there seem to be > some further issues (each starting with a fresh python > session): > > Case 2 > ------ > > help("scipy.integrateMISPELLED.odeint") > help("scipy.integrate.odeint") > > gives > > Python 2.3.3 (#1, May 3 2004, 16:38:25) > [GCC 3.3.3 (Debian)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> help("scipy.integrateMISPELLED.odeint") > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/pilutil.pyc No > module named PIL > no Python documentation found for 'scipy.integrateMISPELLED.odeint' > > >>> help("scipy.integrate.odeint") > ppimport('scipy.integrate.odeint') caller locals: > __name__=__main__ > No module named odeint > > and no further repetition of the call helps here ;-). Fixed in CVS. > Case 3 > ------ > > from scipy.integrate import odeint > > def f(x): > """doc string for f(x)""" > return x*x > > >>> help(f) > [[gives the help on f and then, after returning from the pager: ]] > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/interface.pyc > No module named wxPython > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/__init__.pyc > No module named wxPython > > >From a user perspective I would not have thought to see this here ;-) I wouldn't disable these messages yet as they are informative for users who try to use scipy.plt module (that will not work without wxPython) as well as they appear only once. Maybe these messages should appear as warnings, though, then one can disable them using python warnings system.. > Sorry for being a pain here, but maybe you (again) have a solution for > the above points. No problem at all. Your feedback is very much appreciated. Keep sending bug reports also in future;-) There is also Scipy bugtracker, http://www.scipy.org/bugtracker, that might be the place to save bugreports. 
Unfortunately, from the developers' point of view, it is a pain to use the current Roundup: almost all issues that I have fixed there and set 'resolved', have appeared as 'chatting' again after a few days, either because of some bug in Roundup itself or because of some 'anonymous' user who cannot stand 'resolved' issues:( Hopefully 'anonymous' users will be disabled in Roundup someday.. Thanks, Pearu From gazzar at email.com Tue May 11 09:25:19 2004 From: gazzar at email.com (Gary Ruben) Date: Tue, 11 May 2004 23:25:19 +1000 Subject: [SciPy-user] simple question about digonal matrices Message-ID: <20040511132519.3F1EC1CE303@ws3-6.us4.outblaze.com> Hi Karthik, FYI, there's a numpy discussion list here which is probably a more appropriate place to post a numpy-specific question. As for your problem, here's one way to do it: >>> from numarray import * >>> D = array([1,2,3]) >>> D_with_zeros = array([[0,0,0], [0,0,0], [0,0,0]]) >>> where(identity(3),D,D_with_zeros) array([[1, 0, 0], [0, 2, 0], [0, 0, 3]]) HTH, Gary Ruben ----- Original Message ----- From: Karthikesh Raju Date: Tue, 11 May 2004 12:26:21 +0300 To: scipy-user at scipy.net Subject: [SciPy-user] simple question about digonal matrices > Hi All, > > How does one do something as mundane as assigning value to a diagonal > vector. > > D = array([1,2,3]) > > D_with_zeros = array([[0,0,0], > [0,0,0], > [0,0,0]]) > > now i want D in the diagonal of D_with_zeros, i was trying something like: > numarray.diagonal(D_with_zeros) = D > > numarray is not happy with my assignment? > > Warm regards > karthik -- ___________________________________________________________ Sign-up for Ads Free at Mail.com http://promo.mail.com/adsfreejump.htm From arnd.baecker at web.de Tue May 11 09:27:42 2004 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 11 May 2004 15:27:42 +0200 (CEST) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: Hi Pearu, On Tue, 11 May 2004, Pearu Peterson wrote: [...] > > Case 3 > > ------ > > > > from scipy.integrate import odeint > > > > def f(x): > > """doc string for f(x)""" > > return x*x > > > > >>> help(f) > > [[gives the help on f and then, after returning from the pager: ]] > > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/interface.pyc > > No module named wxPython > > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/__init__.pyc > > No module named wxPython > > > > >From a user perspective I would not have thought to see this here ;-) > > I wouldn't disable these messages yet as they are informative for users > who try to use scipy.plt module (that will not work without wxPython) as > well as they appear only once. Maybe these messages should appear as > warnings, though, then one can disable them using python warnings system.. I am not fully convinced ;-): If a user asks for help on a function which does not contain anything related to scipy or wxPython he will be confused about the message and will have a hard time finding out where the complaint comes from. Somehow the invocation of "help" on f triggers the import of the scipy stuff and that's where the warning comes from (Hope this is the right explanation). In addition, there is (from the user's perspective) no place where scipy.plt is called. Wouldn't it be better to defer any complaints about a missing wxPython (and PIL in other situations) until scipy.plt is actually used?
(Ensuring the availability of wxPython/PIL is more a thing of the installation than that it should pop on every import of scipy, I think.) ((Honestly, normally I would not see these messages anyway as I have PIL and wxPython installed, but not for these tests;-)) > > Sorry for being a pain here, but maybe you (again) have a solution for > > the above points. > > No problem at all. Your feedback is very much appreciated. Keep sending > bug reports also in future;-) I will ;-) > There is also Scipy bugtracker, http://www.scipy.org/bugtracker, that > might be the place to save bugreports. Unfortunately, from the developers > point of view, it is pain to use the current Roundup: almost all issues > that I have fixed there and set 'resolved', have appeared as 'chatting' > again after few days, either because of some bug in Roundup itself or > because of some 'anonymous' user who cannot stand 'resolved' issues:( Some people seem to have just too much time. > Hopefully 'anonymous' users will be disabled in Roundup someday.. Whenever this works properly just let me know, and I will post there if you prefer that. Many thanks, Arnd From perry at stsci.edu Tue May 11 11:09:50 2004 From: perry at stsci.edu (Perry Greenfield) Date: Tue, 11 May 2004 11:09:50 -0400 Subject: [SciPy-user] simple question about digonal matrices In-Reply-To: Message-ID: Karthikesh Raju wrote: > Hi All, > > How does one do something as mundane as assigning value to a diagonal > vector. > > D = array([1,2,3]) > > D_with_zeros = array([[0,0,0], > [0,0,0], > [0,0,0]]) > > now i want D in the diagonal of D_with_zeros, i was trying something like: > numarray.diagonal(D_with_zeros) = D > > numarray is not happy with my assignment? > I don't believe that this is a numarray problem but a Python syntax issue. You can't assign to a function call (that is the error message you got, right?). It seems that you are asking for a way of creating an array that is a view of the diagonal of a matrix such that if you assign to elements of that diagonal vector, the matrix will also be updated. With either Numeric or numarray, it is possible to create such an array so you can do that but it involves a couple steps (one could write a function to do that for you in one step). It would have to work something like this: diagonalview(D_with_zeros)[:] = D Note that you must use a slice assignment for this to work. Note that this won't work for the diagonal function since it returns a copy of the diagonal elements and so changing those doesn't change the matrix. You could define diagonalview as: def diagonalview(arr): return arr.flat[::arr.shape[1]+1] (Note that this won't work for noncontiguous arrays and needs some error checking) Another approach usable with numarray is: ind = arange(D_with_zeros.shape[1]) D_with_zeros[ind, ind] = D From pearu at scipy.org Tue May 11 15:56:26 2004 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 11 May 2004 14:56:26 -0500 (CDT) Subject: [SciPy-user] help and lazy importer (again or still ?) In-Reply-To: References: Message-ID: On Tue, 11 May 2004, Arnd Baecker wrote: > On Tue, 11 May 2004, Pearu Peterson wrote: > [...]
> > > Case 3 > > > ------ > > > > > > from scipy.integrate import odeint > > > > > > def f(x): > > > """doc string for f(x)""" > > > return x*x > > > > > > >>> help(f) > > > [[gives the help on f and then, after returning from the pager: ]] > > > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/interface.pyc > > > No module named wxPython > > > /home/python/PYTHON_New/lib/python2.3/site-packages/scipy/plt/__init__.pyc > > > No module named wxPython > > > > > > >From a user perspective I would not have thought to see this here ;-) > > > > I wouldn't disable these messages yet as they are informative for users > > who try to use scipy.plt module (that will not work without wxPython) as > > well as they appear only once. Maybe these messages should appear as > > warnings, though, then one can disable them using python warnings system.. > > I am not fully convinced ;-): > .... Ok, ok ;) The issue is fixed in CVS. Well, you won't see 'No module named PIL' messages anymore until trying to use PIL stuff. Sometimes you might see 'No module named wxPython', (e.g. when asking help('scipy')) but not with help(f) or help('scipy.linalg'). I'll fix this sometimes later (feel free to file a bug report to roundup so that it won't be forgotten). Thanks, Pearu From jl at dmi.dk Wed May 12 10:14:27 2004 From: jl at dmi.dk (Jesper Larsen) Date: Wed, 12 May 2004 16:14:27 +0200 Subject: [SciPy-user] High-pass filtering Message-ID: <200405121614.27707.jl@dmi.dk> Dear scipy mailing list, I would like to perform a high-pass filtering of a simple timeseries of floats. I'm not really sure how to do that using numarray and scipy. Can anyone post a simple example of how to do that? I have no special preferences for window type. Kind regards, Jesper From oliphant at ee.byu.edu Wed May 12 16:56:10 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 12 May 2004 14:56:10 -0600 Subject: [SciPy-user] High-pass filtering In-Reply-To: <200405121614.27707.jl@dmi.dk> References: <200405121614.27707.jl@dmi.dk> Message-ID: <40A28F6A.6020901@ee.byu.edu> Jesper Larsen wrote: >Dear scipy mailing list, > >I would like to perform a high-pass filtering of a simple timeseries of >floats. I'm not really sure how to do that using numarray and scipy. Can >anyone post a simple example of how to do that? I have no special preferences >for window type. > > > SciPy uses Numeric and doesn't require numarray (it can use numarray objects as inputs which will be treated as Python sequences and can be slow). What kind of high-pass filter are you looking for. Simple FIR filters are quite simple to design and give linear phase. Other IIR filters are available. I'll show you how to design a high-pass FIR filter using the remez algorithm (you could also look at signal.firwin for windowed filter design (but remez is usually a better solution). Let T be the sample distance and must be defined F = 1.0/T bands = array([0,0.5,0.6,1])*F/2 # this places band-edges gain = [0,1] # high-pass filter N = 25 # length of FIR filter b = signal.remez(N, bands, gain,Hz=F) w,h = signal.freqz(b,1) # this will show you the filter xplt.plot(w/pi*F/2,abs(h)) To filter a time series you can use out = signal.lfilter(b,[1],in_) Or simply out = signal.convolve(in_,b)[:-(N-1)] You can get different transients using out = signal.convolve(in_,b,'same') (cuts off on both ends) Good luck, -Travis O. 
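A minimal, runnable version of the recipe above, assuming a current NumPy/SciPy install: scipy.signal.remez now takes the sampling frequency through the fs keyword (playing the role of the Hz argument used in the post), and the sample spacing, filter length, band edges and toy signal below are illustrative choices rather than values from the original question.

import numpy as np
from scipy import signal

T = 1.0                                      # sample spacing (assumed)
F = 1.0 / T                                  # sampling frequency
bands = np.array([0, 0.5, 0.6, 1]) * F / 2   # band edges: stop band up to 0.25*F, pass band 0.3*F to Nyquist
gain = [0, 1]                                # desired gain in each band -> high-pass
N = 25                                       # number of FIR taps (odd, so the response can reach 1 at Nyquist)

b = signal.remez(N, bands, gain, fs=F)       # equiripple (remez) high-pass design
w, h = signal.freqz(b, 1)                    # frequency response; plot abs(h) against w/pi*F/2 to inspect it

t = np.arange(500) * T
x = np.sin(2*np.pi*0.02*t) + np.sin(2*np.pi*0.4*t)   # slow + fast component (toy data)
y = signal.lfilter(b, [1], x)                # high-passed series: the slow component is strongly attenuated

As with the convolve variants above, the lfilter output carries the filter's startup transient in its first N-1 samples.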
From scottbray83 at hotmail.com Wed May 12 22:17:24 2004 From: scottbray83 at hotmail.com (Scott Bray) Date: Thu, 13 May 2004 12:17:24 +1000 Subject: [SciPy-user] normalizing data/distributions Message-ID: Hey everyone, i am working on a statistics type project for university study. i have a set of data, have built a discrete probability distribution from this data (using the cauchy distribution) and now want to normalize it. currently, the area of the distribution is not equal to one. i have been trying to find literature about how to normalize, but have been unsuccessful (and what i have found, i am unsure on the validity). some say to normalize the data points by: (data point - sample mean) / sample std others say to multiply by a normalising constant that is "chosen" to make the area equal to one. i tried this by just scaling the area to equal one. i'm sorry if this is unclear, but if you have done this sort of thing before, i would REALLY appreciate some help. just ask me some questions if needs be. Thanks Scott _________________________________________________________________ What's your house worth? Click here to find out: http://www.ninemsn.realestate.com.au From rkern at ucsd.edu Wed May 12 22:57:57 2004 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 12 May 2004 19:57:57 -0700 Subject: [SciPy-user] normalizing data/distributions In-Reply-To: References: Message-ID: <40A2E435.50503@ucsd.edu> Scott Bray wrote: > Hey everyone, > > i am working on a statistics type project for university study. i have a > set of data, have built a discrete probability distribution from this > data (using the cauchy distribution) and now want to normalize it. I'm sorry, but this doesn't make sense. How does one build a discrete probability distribution using the Cauchy distribution and the data? Cauchy is a continuous distribution. Do you mean parameterized? Histogrammed? > currently, the area of the distribution is not equal to one. i have been > trying to find literature about how to normalize, but have been > unsuccessful (and what i have found, i am unsure on the validity). some > say to normalize the data points by: > > (data point - sample mean) / sample std The meaning of the word "normalization" varies with context. If you had reason to believe the data were Gaussian, this transformation reduces the data to a standard ("normal") form. Not what you want, I think. > others say to multiply by a normalising constant that is "chosen" to > make the area equal to one. i tried this by just scaling the area to > equal one. Sounds about right. I'm not sure what you're talking about, though (area of what exactly? how are you calculating this thing?). > i'm sorry if this is unclear, but if you have done this sort of thing > before, i would REALLY appreciate some help. just ask me some questions > if needs be. Sure thing, but let's take the statistics issues off-list into private email until we get to SciPy issues. > Thanks > Scott -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From rossini at blindglobe.net Wed May 12 23:14:07 2004 From: rossini at blindglobe.net (A.J. Rossini) Date: Wed, 12 May 2004 20:14:07 -0700 Subject: [SciPy-user] normalizing data/distributions In-Reply-To: (Scott Bray's message of "Thu, 13 May 2004 12:17:24 +1000") References: Message-ID: <85y8nxkohs.fsf@servant.blindglobe.net> "Scott Bray" writes: > i am working on a statistics type project for university study. 
i have > a set of data, have built a discrete probability distribution from > this data (using the cauchy distribution) and now want to normalize > it. currently, the area of the distribution is not equal to one. i > have been trying to find literature about how to normalize, but have > been unsuccessful (and what i have found, i am unsure on the > validity). some say to normalize the data points by: > > (data point - sample mean) / sample std This is to "normalize" it in the sense of making it match a N(0,1) distribution -- you explicitly said that you were using a cauchy as the basis?! > others say to multiply by a normalising constant that is "chosen" to > make the area equal to one. i tried this by just scaling the area to > equal one. This is probably the most reasonable approach to making a "distribution" of your discretized empirical observations. best, -tony -- rossini at u.washington.edu http://www.analytics.washington.edu/ Biomedical and Health Informatics University of Washington Biostatistics, SCHARP/HVTN Fred Hutchinson Cancer Research Center UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable FHCRC (M/W): 206-667-7025 FAX=206-667-4812 | use Email CONFIDENTIALITY NOTICE: This e-mail message and any attachments may be confidential and privileged. If you received this message in error, please destroy it and notify the sender. Thank you. From stefan_hd2004 at yahoo.de Thu May 13 02:59:52 2004 From: stefan_hd2004 at yahoo.de (=?iso-8859-1?q?Stefan=20Stefan?=) Date: Thu, 13 May 2004 08:59:52 +0200 (CEST) Subject: [SciPy-user] problem with scipy installation on Athlon 64 bit system Message-ID: <20040513065952.43647.qmail@web25207.mail.ukl.yahoo.com> Can anyone help me to find out why my installation of Scipy does not work I had no problems installing it on my notebook, but cannot install it on my desktop computer with an Athlon 64 bit. I tried to follow all instructions but could not compile. I think that I installed atlas and f2py correctly. However, the python setup.py install for python let to errors which I list up at the end of this message. I fully understand it if you cannot provide any support. If this is the case could you tell me where I can obtain additional information on this topic? It seems something went wrong with the libraries. I use the Suse 9.1 Linux distribution. Best regards, Stefan > > > build/temp.linux-x86_64-2.3/config_pygist/config.h > exist. > Skipping pygist configuration (remove build/temp.linux-x86_64-2.3/config_pygist/Make.cfg to force reconfiguration). > ********************************************************************** > x11_info: > FOUND: > libraries = ['X11'] > library_dirs = ['/usr/X11R6/lib'] > include_dirs = ['/usr/X11R6/include'] > > fftw_info: > NOT AVAILABLE > > dfftw_info: > NOT AVAILABLE > > > FFTW (http://www.fftw.org/) libraries not found. > Directories to search for the libraries can be specified in the > scipy_distutils/site.cfg file (section [fftw]) or by setting > the FFTW environment variable. > djbfft_info: > NOT AVAILABLE > > > DJBFFT (http://cr.yp.to/djbfft.html) libraries not found. > Directories to search for the libraries can be specified in the > scipy_distutils/site.cfg file (section [djbfft]) or by setting > the DJBFFT environment variable. 
> lapack_opt_info: > atlas_threads_info: > scipy_distutils.system_info.atlas_threads_info > NOT AVAILABLE > > atlas_info: > scipy_distutils.system_info.atlas_info > FOUND: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > language = f77 > include_dirs = ['/usr/local/lib/atlas'] > > running build_src > building extension "atlas_version" sources > adding 'build/src/atlas_version_0xa900e03cf7166de.c' to sources. > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize GnuFCompiler > customize GnuFCompiler > customize GnuFCompiler using build_ext > building 'atlas_version' extension > compling C sources > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -DHAVE_LARGEFILE_SUPPORT -O2 -fmessage-length=0 -Wall -fPIC' > compile options: '-I/usr/local/lib/atlas -I/usr/include/python2.3 -c' > /usr/bin/g77 -shared build/temp.linux-x86_64-2.3/build/src/atlas_version_0xa900e03cf7166de.o -L/usr/local/lib/atlas -latlas -lg2c -o build/lib.linux-x86_64-2.3/atlas_version.so > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC > /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value > collect2: ld returned 1 exit status > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC > /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value > collect2: ld returned 1 exit status > ##### msg: error: Command "/usr/bin/g77 -shared build/temp.linux-x86_64-2.3/build/src/atlas_version_0xa900e03cf7166de.o -L/usr/local/lib/atlas -latlas -lg2c -o build/lib.linux-x86_64-2.3/atlas_version.so" failed with exit status 1 > error: Command "/usr/bin/g77 -shared build/temp.linux-x86_64-2.3/build/src/atlas_version_0xa900e03cf7166de.o -L/usr/local/lib/atlas -latlas -lg2c -o build/lib.linux-x86_64-2.3/atlas_version.so" failed with exit status 1 > FOUND: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > language = f77 > define_macros = [('NO_ATLAS_INFO', 2)] > include_dirs = ['/usr/local/lib/atlas'] > > blas_opt_info: > atlas_blas_threads_info: > scipy_distutils.system_info.atlas_blas_threads_info > NOT AVAILABLE > > atlas_blas_info: > scipy_distutils.system_info.atlas_blas_info > FOUND: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > language = c > include_dirs = ['/usr/local/lib/atlas'] > > running build_src > building extension "atlas_version" sources > adding 'build/src/atlas_version_0x78c19bb379372753.c' to sources. 
> running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > building 'atlas_version' extension > compling C sources > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -DHAVE_LARGEFILE_SUPPORT -O2 -fmessage-length=0 -Wall -fPIC' > compile options: '-I/usr/local/lib/atlas -I/usr/include/python2.3 -c' > gcc -pthread -shared build/temp.linux-x86_64-2.3/build/src/atlas_version_0x78c19bb379372753.o -L/usr/local/lib/atlas -latlas -o build/lib.linux-x86_64-2.3/atlas_version.so > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC > /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value > collect2: ld returned 1 exit status > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC > /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value > collect2: ld returned 1 exit status > ##### msg: error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.3/build/src/atlas_version_0x78c19bb379372753.o -L/usr/local/lib/atlas -latlas -o build/lib.linux-x86_64-2.3/atlas_version.so" failed with exit status 1 > error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.3/build/src/atlas_version_0x78c19bb379372753.o -L/usr/local/lib/atlas -latlas -o build/lib.linux-x86_64-2.3/atlas_version.so" failed with exit status 1 > FOUND: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > language = c > define_macros = [('NO_ATLAS_INFO', 2)] > include_dirs = ['/usr/local/lib/atlas'] > > numpy_info: > FOUND: > define_macros = [('NUMERIC_VERSION', '"\\"23.1\\""')] > include_dirs = ['/usr/include/python2.3'] > > SciPy Version 0.3 > **************************************************************** > Using fortran_libraries setup option is depreciated > --------------------------------------------------- > Use libraries option instead. Yes, scipy_distutils > now supports Fortran sources in libraries. > **************************************************************** > running install > running build > running config_fc > running build_src > building extension "scipy.io.numpyio" sources > building extension "scipy.xxx.spam" sources > adding 'build/temp.linux-x86_64-2.3/Lib/xxx/spam.pyf' to sources. > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.cluster._vq" sources > building extension "scipy.xplt.gistC" sources > adding 'Lib/xplt/src/play/unix/dir.c' to sources. > adding 'Lib/xplt/src/play/unix/files.c' to sources. > adding 'Lib/xplt/src/play/unix/fpuset.c' to sources. > adding 'Lib/xplt/src/play/unix/pathnm.c' to sources. > adding 'Lib/xplt/src/play/unix/timew.c' to sources. > adding 'Lib/xplt/src/play/unix/uevent.c' to sources. > adding 'Lib/xplt/src/play/unix/ugetc.c' to sources. > adding 'Lib/xplt/src/play/unix/umain.c' to sources. > adding 'Lib/xplt/src/play/unix/usernm.c' to sources. > adding 'Lib/xplt/src/play/unix/slinks.c' to sources. > adding 'Lib/xplt/src/play/x11/colors.c' to sources. > adding 'Lib/xplt/src/play/x11/connect.c' to sources. > adding 'Lib/xplt/src/play/x11/cursors.c' to sources. > adding 'Lib/xplt/src/play/x11/errors.c' to sources. > adding 'Lib/xplt/src/play/x11/events.c' to sources. 
> adding 'Lib/xplt/src/play/x11/fills.c' to sources. > adding 'Lib/xplt/src/play/x11/fonts.c' to sources. > adding 'Lib/xplt/src/play/x11/images.c' to sources. > adding 'Lib/xplt/src/play/x11/lines.c' to sources. > adding 'Lib/xplt/src/play/x11/pals.c' to sources. > adding 'Lib/xplt/src/play/x11/pwin.c' to sources. > adding 'Lib/xplt/src/play/x11/resource.c' to sources. > adding 'Lib/xplt/src/play/x11/rgbread.c' to sources. > adding 'Lib/xplt/src/play/x11/textout.c' to sources. > adding 'Lib/xplt/src/play/x11/rect.c' to sources. > adding 'Lib/xplt/src/play/x11/clips.c' to sources. > adding 'Lib/xplt/src/play/x11/points.c' to sources. > adding 'Lib/xplt/src/play/all/hash.c' to sources. > adding 'Lib/xplt/src/play/all/hash0.c' to sources. > adding 'Lib/xplt/src/play/all/mm.c' to sources. > adding 'Lib/xplt/src/play/all/alarms.c' to sources. > adding 'Lib/xplt/src/play/all/pstrcpy.c' to sources. > adding 'Lib/xplt/src/play/all/pstrncat.c' to sources. > adding 'Lib/xplt/src/play/all/p595.c' to sources. > adding 'Lib/xplt/src/play/all/bitrev.c' to sources. > adding 'Lib/xplt/src/play/all/bitlrot.c' to sources. > adding 'Lib/xplt/src/play/all/bitmrot.c' to sources. > building extension "scipy.stats.rand" sources > building extension "scipy.stats.statlib" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.stats.futil" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.interpolate._fitpack" sources > building extension "scipy.interpolate.dfitpack" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > adding 'build/src/Lib/interpolate/dfitpack-f2pywrappers.f' to sources. > building extension "scipy.fftpack._fftpack" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.fftpack.convolve" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.special.cephes" sources > building extension "scipy.special.specfun" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.linalg.fblas" sources > adding 'build/src/fblas.pyf' to sources. > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy.linalg.cblas" sources > adding 'build/src/cblas.pyf' to sources. > adding 'build/src/fortranobject.c' to sources. > building extension "scipy.linalg.flapack" sources > adding 'build/src/flapack.pyf' to sources. > adding 'build/src/fortranobject.c' to sources. > building extension "scipy.linalg.clapack" sources > adding 'build/src/clapack.pyf' to sources. > adding 'build/src/fortranobject.c' to sources. > building extension "scipy.linalg._flinalg" sources > adding 'build/src/fortranobject.c' to sources. > building extension "scipy.linalg.calc_lwork" sources > adding 'build/src/fortranobject.c' to sources. > building extension "scipy.linalg.atlas_version" sources > building extension "scipy.signal.sigtools" sources > building extension "scipy.signal.spline" sources > building extension "scipy.sparse._zsuperlu" sources > building extension "scipy.sparse._dsuperlu" sources > building extension "scipy.sparse._csuperlu" sources > building extension "scipy.sparse._ssuperlu" sources > building extension "scipy.sparse._sparsekit" sources > adding 'build/src/fortranobject.c' to sources. 
> adding 'build/src/Lib/sparse/_sparsekit-f2pywrappers.f' to sources. > building extension "scipy.optimize._minpack" sources > building extension "scipy.optimize._zeros" sources > building extension "_lbfgsb" sources > adding 'build/src/fortranobject.c' to sources. > building extension "moduleTNC" sources > building extension "scipy.integrate._quadpack" sources > building extension "scipy.integrate._odepack" sources > building extension "scipy.integrate.vode" sources > adding 'build/src/fortranobject.c' to sources. > adding 'build/src' to include_dirs. > building extension "scipy_base.fastumath" sources > adding 'scipy_core/scipy_base/fastumathmodule.c' to sources. > building extension "scipy_base._compiled_base" sources > building library "c_misc" sources > building library "cephes" sources > building library "rootfind" sources > building library "statlib" sources > building library "fitpack" sources > building library "dfftpack" sources > building library "mach" sources > building library "amos" sources > building library "toms" sources > building library "cdf" sources > building library "specfun" sources > building library "minpack" sources > building library "superlu_src" sources > building library "sparsekit_src" sources > building library "quadpack_src" sources > building library "linpack_lite_src" sources > building library "mach_src" sources > building library "odepack_src" sources > running build_py > package init file 'Lib/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/io/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/xxx/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/xxx/yyy/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/cluster/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/gui_thread/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/stats/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/interpolate/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/fftpack/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/fftpack/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/special/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/linalg/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/signal/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/optimize/tests/__init__.py' not found (or not a regular file) > package init file 'scipy_core/weave/tests/__init__.py' not found (or not a regular file) > package init file 'scipy_core/weave/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/scipy_core/scipy_base/tests/__init__.py' 
not found (or not a regular file) > package init file 'Lib/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/io/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/xxx/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/xxx/yyy/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/cluster/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/gui_thread/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/stats/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/interpolate/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/fftpack/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/fftpack/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/special/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/linalg/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/signal/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/Lib/optimize/tests/__init__.py' not found (or not a regular file) > package init file 'scipy_core/weave/tests/__init__.py' not found (or not a regular file) > package init file 'scipy_core/weave/tests/__init__.py' not found (or not a regular file) > package init file '/home/jordan/installdirectory/SciPy_complete-0.3/scipy_core/scipy_base/tests/__init__.py' not found (or not a regular file) > running build_clib > customize UnixCCompiler > customize UnixCCompiler using build_clib > customize GnuFCompiler > customize GnuFCompiler > customize GnuFCompiler using build_clib > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize GnuFCompiler > customize GnuFCompiler > customize GnuFCompiler using build_ext > building 'scipy.xplt.gistC' extension > compling C sources > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -DHAVE_LARGEFILE_SUPPORT -O2 -fmessage-length=0 -Wall -fPIC' > compile options: '-Ibuild/temp.linux-x86_64-2.3/config_pygist -ILib/xplt/src/gist -ILib/xplt/src/play -ILib/xplt/src/play/unix -I/usr/X11R6/include -I/usr/X11R6/include -I/usr/include/python2.3 -c' > extra options: '-DGISTPATH="\"/usr/lib64/python2.3/site-packages/scipy/xplt/gistdata\""' > gcc -pthread -shared build/temp.linux-x86_64-2.3/Lib/xplt/pygist/gistCmodule.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gist.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/tick.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/tick60.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/engine.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gtext.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/draw.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/draw0.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/clip.o 
build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gread.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gcntr.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/hlevel.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/ps.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/cgm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/eps.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/style.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/xfancy.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/xbasic.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/dir.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/files.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/fpuset.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/pathnm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/timew.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/uevent.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/ugetc.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/umain.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/usernm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/slinks.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/colors.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/connect.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/cursors.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/errors.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/events.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/fills.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/fonts.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/images.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/lines.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/pals.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/pwin.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/resource.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/rgbread.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/textout.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/rect.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/clips.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/points.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/hash.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/hash0.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/mm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/alarms.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/pstrcpy.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/pstrncat.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/p595.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/bitrev.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/bitlrot.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/bitmrot.o -LLib/xplt/. 
-LLib/xplt/src -L/usr/X11R6/lib -L/usr/lib -L/usr/X11R6/lib -Lbuild/temp.linux-x86_64-2.3 -lX11 -lm -o build/lib.linux-x86_64-2.3/scipy/xplt/gistC.so > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.so when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.a when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.so when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.a when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: cannot find -lX11 > collect2: ld returned 1 exit status > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.so when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.a when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.so when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/X11R6/lib/libX11.a when searching for -lX11 > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.3/../../../../x86_64-suse-linux/bin/ld: cannot find -lX11 > collect2: ld returned 1 exit status > error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.3/Lib/xplt/pygist/gistCmodule.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gist.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/tick.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/tick60.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/engine.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gtext.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/draw.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/draw0.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/clip.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gread.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/gcntr.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/hlevel.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/ps.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/cgm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/eps.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/style.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/xfancy.o build/temp.linux-x86_64-2.3/Lib/xplt/src/gist/xbasic.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/dir.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/files.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/fpuset.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/pathnm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/timew.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/uevent.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/ugetc.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/umain.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/usernm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/unix/slinks.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/colors.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/connect.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/cursors.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/errors.o 
build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/events.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/fills.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/fonts.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/images.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/lines.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/pals.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/pwin.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/resource.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/rgbread.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/textout.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/rect.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/clips.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/x11/points.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/hash.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/hash0.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/mm.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/alarms.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/pstrcpy.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/pstrncat.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/p595.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/bitrev.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/bitlrot.o build/temp.linux-x86_64-2.3/Lib/xplt/src/play/all/bitmrot.o -LLib/xplt/. -LLib/xplt/src -L/usr/X11R6/lib -L/usr/lib -L/usr/X11R6/lib -Lbuild/temp.linux-x86_64-2.3 -lX11 -lm -o build/lib.linux-x86_64-2.3/scipy/xplt/gistC.so" failed with exit status 1 > > > linux:/home/xxxx/installdirectory/SciPy_complete-0.3 # > Mit sch?nen Gr??en von Yahoo! Mail - http://mail.yahoo.de From jl at dmi.dk Thu May 13 06:51:24 2004 From: jl at dmi.dk (Jesper Larsen) Date: Thu, 13 May 2004 12:51:24 +0200 Subject: [SciPy-user] High-pass filtering In-Reply-To: <40A28F6A.6020901@ee.byu.edu> References: <200405121614.27707.jl@dmi.dk> <40A28F6A.6020901@ee.byu.edu> Message-ID: <200405131251.24858.jl@dmi.dk> Thanks for the help Travis, > SciPy uses Numeric and doesn't require numarray (it can use numarray > objects as inputs which will be treated as Python sequences and can be > slow). Okay, I did'nt know that (I'm quite new to Python and this is my first use of SciPy). I thought that numarray was a replacement for Numeric and have therefore only bothered to learn the former (maybe I should have a look at Numeric as well). > What kind of high-pass filter are you looking for. Simple FIR filters > are quite simple to design and give linear phase. Other IIR filters are > available. I'll show you how to design a high-pass FIR filter using the > remez algorithm (you could also look at signal.firwin for windowed > filter design (but remez is usually a better solution). ... I think the FIR filter you have shown how to use is just fine for my purposes (but I am obviously not an expert on filters). I have therefore described below what I am going to use it for. If you have any ideas of a better filter for this application I would be grateful to hear about it. I'm making an application for quality control (QC) of timeseries of observations from various meteorological stations. The QC first has several simple checks. After these checks a check based on optimal interpolation (OI) is performed. OI is a method to estimate a given observation at a given station using statistical covariances with timeseries from other stations (including these timeseries with lagged observations). The OI method will also give an estimate of the uncertainty of the estimated value. 
If the real value is to far away from the estimated value it will be removed or replaced by the estimated value. To get good results from the OI it is necessary to remove low frequency oscillations (timescales > 2-3 months). For example the annual oscillation in temperature should be removed. If this is not done the covariances used in the OI will be overshadowed by the annual oscillation and it will be impossible to estimate fluctuations with high frequency (timescale of hours-days). The reason for this is that the covariance structure of low and high frequency oscillations are not the same due to the fact that different physical processes are responsible for each type of variability. I hope to release the application as open source when it is complete and has been tested (it still requires quite some work). It might then be adapted/expanded to QC of other types of observational timeseries in completely different areas. Kind regards, Jesper From JADDISON at SYSTEMS.TEXTRON.com Sun May 16 23:29:30 2004 From: JADDISON at SYSTEMS.TEXTRON.com (Addison, Jason) Date: Sun, 16 May 2004 17:29:30 -1000 Subject: [SciPy-user] pilutil toimage Message-ID: <200405161729.30692.jaddison@systems.textron.com> I'm trying to generate an image from a 2D array and a colormap. Here is a simple example: # python code <><><><><><><><><><><><><><><> c0 = array([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]]) icm = array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) c0_im = toimage(c0, pal=icm) c0_im.save('c0.png') # python code <><><><><><><><><><><><><><><> # Fedora Core 1 # python version 2.3.3 # scipy version 0.3 # Numeric 23.1 # PIL 1.1.4 But, I always get a grey scale image. I think that I should get 3 pixel by 4 pixel with the first row red, then green, then blue, but I get black, grey and white. What am I missing? Does anyone have some examples of turning 2D arrays into color graphics files? Thank you, Jason From eric at enthought.com Mon May 17 00:34:59 2004 From: eric at enthought.com (eric jones) Date: Sun, 16 May 2004 23:34:59 -0500 Subject: [SciPy-user] ANN: Job Openings at Enthought Message-ID: <40A840F3.2050608@enthought.com> Hello, This is a bit of a status report I guess for SciPy (www.scipy.org) and Enthought (www.enthought.com), but, really, it is a long winded job posting. Among other things, Enthought develops scientific software for clients. Python is central to to our strategy in this arena. We have long supported SciPy and believe strongly in Open Source Software. The purpose is to give people a feeling about what we do, the technical challenges we face, and the skills needed to meet them. SciPy ----- There has been a lot of work on SciPy lately by the Travis Oliphant, Travis Vaught, Pearu Peterson, Joe Cooper, and on the website by Jon-Eric Steinbomer and Janet Swisher. The site is now upgraded to Plone and we have a SciPy 0.3 package out. If you're interested in seeing the activity level at scipy.org, please visit. http://www.scipy.org/map?rmurl=http%3A//scipy.net/scipyusage/ It looks like we're averaging about a 100 downloads a day if you add up all source and binary distributions. When SciPy 0.1 was released a couple a years ago, I think we averaged about 10. Its extremely difficult to extrapolate a user base from such information, the obvious growth is good news. 
Other Stuff ----------- In addition to SciPy we have multiple projects going on right now that either already plug into or will be plugins to a Python-based scientific application framework called Envisage (sorta like an IDE for science apps) that we're working on. The idea is: (1) To lower the bar as far as work required to get a GUI put on the front end of a scientific algorithm. (2) Develop scientific plugins that can play well together so that, for instance, an electromagnetics simulator built by one person can be used in conjunction with an optimization plugin built by another person and results can be visualized with a 3D plugin. This is a hard, long path that requires much work and though. We have started the process and actually have built an app on a very early version of the Envisage framework. The app works great, but it's usage of the framework is a little mixed at this point. There were a lot of compromises we ended up making on the framework side in order to ship on time. Also, I am not sure whether "easy-to-program" and "flexible architecture for compatible plugins" are not mutually exclusive. I always have in my mind that I want a smart scientist to be able to build the GUI in a short amount of time after the algorithm (which is the "hard part") is finished. I still harbor this wish, but the lessons I've had over the last couple of years suggest that the GUI is actually the most expensive part of development for large apps. This is partially due to the fact that commercial apps are rarely built around "research" algorithms -- meaning that the algorithm code usually already exists in some form. Still, UIs take tons of time. Testing them is hard. Flexibility requires more complexity (factories, adapters, etc.) than seems necessary at first. Further, building *good* UI's is really hard. Scientist rarely have the patience or perspective for it -- and yet it is very often difficult to build a good UI for science apps without a scientist's understanding of the underlying problem. We have a number of tools along with SciPy that we're building that are pieces of this puzzle. 1. Traits -- They at the heart of everything we do. They provide some explicit typing, default GUI's, an observer event model for anything that uses them. They can also require some getting used to. 2. Kiva/Enable -- This is the generic drawing system for Python. 3. Chaco -- Interactive 2D Plotting 4. PyFace -- MVC layer on top of wxPython. Supports trees, menus, and a few other things right now. This will grow. 5. Envisage -- Plugin based GUI framework for scientific applications. Beyond the basic GUI that comes from Envisage, we already have a couple plugins for profiling code using hotshot and also searching for memory leaks. David Morrill wrote these and they are called gotcha and enroll. Martin Chilvers wrote a simple scintilla based plugin for editing scripts, and a PyCrust shell is available by default in the app (not a plugin at the moment). We would love to see a number of other plugins: a. Debugger. b. IPython-like PyCrust going. c. A more full featured script editing plugin. (there is a huge list of features here). d. Mayavi e. etc. These are all used actively in commercial products that we deliver. However, they are also in various stages of development and will continue to evolve and grow. Some are still pretty green. Portions of these are openly available now. All five of the listed tool sets will be open at some point this summer when they are cleaned up a bit. 
Jobs ---- All of this leads to the fact that we are hiring and looking for smart, pleasant, communicative people with integrity that are excited about building this sort of stuff. There is a lot of software architecture to be done that needs talented designers. There is UI design/development that needs scientists that care about UIs or UI developers/designers that are really quick at grasping scientific problems. We also need really strong scientists/scientific developers to do some of the backend coding for commercial products and also SciPy development. If you have a background in geophysics, electromagnetics (especially multi-pole methods) that is a big plus. For this, a PhD is helpful, but not required. Parallel computing skills wouldn't hurt. 3D visualization a la VTK/Python is a major need. So is knowledge about 2D rendering to help finish out Kiva and the Enable toolset. Strong Python skills, Python extension writing knowledge and strong C/C++ skills are a big benefit. Design/development with or of Java NetBeans or Eclipse-like architecture is great -- or any other solid GUI architecture. Dedication to writing clean/clear code meant for other humans to read is a requirement. We have 3 or 4 scientific/python positions to fill over the coming months, and we're looking for candidates that have the best mix of the above skills. You'll join our existing team of scientist/engineers, computer scientists, technical writers, and an HCI specialist. We like this mix and feel it is the best way to build this sort of software. If you are interested in working at Enthought, please send us your resume at jobs at enthought.com. If not, please send this posting to the smartest people you know that fit some part of the above description (Python experience not explicitly required). Salaries are competitive. Candidates must be willing to relocate to Austin, Tx. thanks, eric jones PS: There are additional positions listed at http://www.enthought.com/careers.htm for those interested in business application development (not necessarily with Python). From jeanluc.menut at free.fr Tue May 18 12:28:44 2004 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Tue, 18 May 2004 18:28:44 +0200 Subject: [SciPy-user] scipy on sun/OSF Message-ID: <40AA39BC.3090704@free.fr> Hello, I work on a sun computer under OSF through telnet or ssh. Is it scipy working on this kind of computer/OS ? I need to use some plotting capability (2D, 3D, images) and calculation like FFT, etc... It will replace IDL. I'd like to have a confirmation thet it is possible before starting to install all the software needed by scipy. best regards, Jean-Luc From jeanluc.menut at free.fr Wed May 19 09:54:47 2004 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Wed, 19 May 2004 15:54:47 +0200 Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 Message-ID: <40AB6727.6090000@free.fr> Hello, I hope i'm writing in the good mailing list. I have a problem when I build scipy on my alpha (unix : OSF1): After the compilation of python 2.2.3, the installation of Numeric 22.0 and F2PY-2.39.235, i install the binary of atlas 3.4.1 (for OSF1 of course). 
When i run python setup.py build, there is a lot a compilation but after a moment it's stop : I have this error message : g++ -shared -expect_unresolved * build/temp.osf1-V5.1-alpha-2.2/vq_wrap.o -Lbuild/temp.osf1-V5.1-alpha-2.2 -o build/lib.osf1-V5.1-alpha-2.2/scipy/cluster/_vq.so /usr/bin/ld: Object file format error in: DEVELOPERS.txt: read_cur_obj_info: bad file magic number(0x2e2e) collect2: ld returned 1 exit status /usr/bin/ld: Object file format error in: DEVELOPERS.txt: read_cur_obj_info: bad file magic number(0x2e2e) collect2: ld returned 1 exit status 1) it's look strange because there is a pb with ld and DEVELOPERS.txt wich is a text file. 2)I can't try with the gnu ld because it don't work on alpha/OSF1 3) it's the same kind of error with a own compilated version of atlas (3.6.0) 4)I don't know if there is a relation but I have a problem with FFTW too. I compile and install it without pb but the building script of scipy don't see it. Someone can help me ? best regards, Jean-Luc Menut From pearu at scipy.org Wed May 19 10:17:08 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 19 May 2004 09:17:08 -0500 (CDT) Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: <40AB6727.6090000@free.fr> References: <40AB6727.6090000@free.fr> Message-ID: On Wed, 19 May 2004, Jean-Luc Menut wrote: > Hello, > > I hope i'm writing in the good mailing list. Yes, that's a good one;-) > I have a problem when I build scipy on my alpha (unix : OSF1): > > After the compilation of python 2.2.3, the installation of Numeric 22.0 > and F2PY-2.39.235, i install the binary of atlas 3.4.1 (for OSF1 of > course). When i run python setup.py build, there is a lot a compilation > but after a moment it's stop : > > I have this error message : > > g++ -shared -expect_unresolved * build/temp.osf1-V5.1-alpha-2.2/vq_wrap.o ^^^ This 'star' is causing all the trouble: all files in the current directory (.txt files etc) are passed to g++. Could you find out where this 'star' comes from? Can you build other extension modules than of scipy? Pearu From jeanluc.menut at free.fr Wed May 19 10:17:47 2004 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Wed, 19 May 2004 16:17:47 +0200 Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: References: <40AB6727.6090000@free.fr> Message-ID: <40AB6C8B.2030405@free.fr> >>g++ -shared -expect_unresolved * build/temp.osf1-V5.1-alpha-2.2/vq_wrap.o > > ^^^ > This 'star' is causing all the trouble: all files in the current directory > (.txt files etc) are passed to g++. Could you find out where this > 'star' comes from? Can you build other extension modules than of scipy? Yes, what module by example ? I have built Numeric without problem before. Jean-Luc From pearu at scipy.org Wed May 19 10:42:12 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 19 May 2004 09:42:12 -0500 (CDT) Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: <40AB6C8B.2030405@free.fr> References: <40AB6727.6090000@free.fr> <40AB6C8B.2030405@free.fr> Message-ID: On Wed, 19 May 2004, Jean-Luc Menut wrote: > >>g++ -shared -expect_unresolved * build/temp.osf1-V5.1-alpha-2.2/vq_wrap.o > > > > ^^^ > > This 'star' is causing all the trouble: all files in the current directory > > (.txt files etc) are passed to g++. Could you find out where this > > 'star' comes from? Can you build other extension modules than of scipy? > > Yes, what module by example ? I have built Numeric without problem before. 
First try installing scipy_core packages:

  cd scipy_core
  python setup.py install

Then try building scipy modules separately:

  cd Lib/linalg
  python setup_linalg.py build
  cd Lib/fftpack
  python setup_fftpack.py build
  cd Lib/cluster
  python setup_cluster.py build

etc. Pearu From jeanluc.menut at free.fr Wed May 19 10:46:18 2004 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Wed, 19 May 2004 16:46:18 +0200 Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: References: <40AB6727.6090000@free.fr> <40AB6C8B.2030405@free.fr> Message-ID: <40AB733A.1050008@free.fr> > First try installing scipy_core packages: Ok > cd scipy_core > python setup.py install There is no setup.py in scipy_core. I tried to build setup_scipy_base.py in scipy_base, but it needs scipy_distutils, and if I try to build setup_scipy_distutils.py in scipy_distutils, I have this message:

  scipy_distutils Version 0.3.0_27.455
  Traceback (most recent call last):
    File "setup_scipy_distutils.py", line 27, in ?
      from core import setup
    File "core.py", line 5, in ?
      from scipy_distutils.dist import Distribution
  ImportError: No module named scipy_distutils.dist

thank you, Jean-Luc From pearu at scipy.org Wed May 19 11:05:17 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 19 May 2004 10:05:17 -0500 (CDT) Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: <40AB733A.1050008@free.fr> References: <40AB6727.6090000@free.fr> <40AB733A.1050008@free.fr> Message-ID: On Wed, 19 May 2004, Jean-Luc Menut wrote: > > > First try installing scipy_core packages: > Ok > > > cd scipy_core > > python setup.py install > > There is no setup.py in scipy_core. I tried to build setup_scipy_base.py > in scipy_base but it need scipy_distutils and if i try to build > setup_scipy_distutils.py in scipy_distutils, i have this message : > > scipy_distutils Version 0.3.0_27.455 > Traceback (most recent call last): > File "setup_scipy_distutils.py", line 27, in ? > from core import setup > File "core.py", line 5, in ? > from scipy_distutils.dist import Distribution > ImportError: No module named scipy_distutils.dist So, the scipy tar-balls are not complete. I suggest getting scipy from CVS; see http://www.scipy.org/cvs/ (if there are any bug fixes for your platform, you will need the CVS version of scipy anyway). Or you can add /path/to/scipy_core to the PYTHONPATH environment variable and then try building scipy_base. Pearu From cookedm at physics.mcmaster.ca Wed May 19 14:19:08 2004 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 19 May 2004 14:19:08 -0400 Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: References: <40AB6727.6090000@free.fr> Message-ID: <20040519181908.GA11465@arbutus.physics.mcmaster.ca> On Wed, May 19, 2004 at 09:17:08AM -0500, Pearu Peterson wrote: > > > On Wed, 19 May 2004, Jean-Luc Menut wrote: > > > Hello, > > > > I hope i'm writing in the good mailing list. > > Yes, that's a good one;-) > > > I have a problem when I build scipy on my alpha (unix : OSF1): > > > > After the compilation of python 2.2.3, the installation of Numeric 22.0 > > and F2PY-2.39.235, i install the binary of atlas 3.4.1 (for OSF1 of
When i run python setup.py build, there is a lot a compilation > > but after a moment it's stop : > > > > I have this error message : > > > > g++ -shared -expect_unresolved * build/temp.osf1-V5.1-alpha-2.2/vq_wrap.o > ^^^ > This 'star' is causing all the trouble: all files in the current directory > (.txt files etc) are passed to g++. Could you find out where this > 'star' comes from? Can you build other extension modules than of scipy? I've had similar problems compiling on Tru64. The -expect_unresolved "*" (note the quotes) comes from Python's Makefile -- that option is needed for the linker to work properly. The quotes are getting stripped off somewhere. I worked around this by writing wrapper scripts for cc (I use DEC's/Compaq's/HP's compiler), and ld. By setting the CC and LDSHARED environment variables (after editing usr/lib/python2.3/distutils/sysconfig.py to allow overriding LDSHARED), the wrapper scripts could make the right calls. You'll want something like this:

  #!/usr/bin/env python
  import sys
  import os
  command = 'g++ -shared -expect_unresolved "*" ' + ' '.join(sys.argv[1:])
  print '>>>', command
  w = os.system(command)
  sys.exit(os.WEXITSTATUS(w))

(and similar for ld). Call them cc-wrapper and ldshared-wrapper, put them in your path, and call like this:

  $ CC=cc-wrapper LDSHARED=ldshared-wrapper python ./setup.py build

[I didn't have the time to fix the underlying quote problem, hence the hack.] Hope this helps. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From pearu at scipy.org Wed May 19 16:04:08 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 19 May 2004 15:04:08 -0500 (CDT) Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: <20040519181908.GA11465@arbutus.physics.mcmaster.ca> References: <40AB6727.6090000@free.fr> <20040519181908.GA11465@arbutus.physics.mcmaster.ca> Message-ID: On Wed, 19 May 2004, David M. Cooke wrote: > On Wed, May 19, 2004 at 09:17:08AM -0500, Pearu Peterson wrote: > > > I have a problem when I build scipy on my alpha (unix : OSF1): > > > > > > After the compilation of python 2.2.3, the installation of Numeric 22.0 > > > and F2PY-2.39.235, i install the binary of atlas 3.4.1 (for OSF1 of > > > course). When i run python setup.py build, there is a lot a compilation > > > but after a moment it's stop : > > > > > > I have this error message : > > > > > > g++ -shared -expect_unresolved * build/temp.osf1-V5.1-alpha-2.2/vq_wrap.o > > ^^^ > > This 'star' is causing all the trouble: all files in the current directory > > (.txt files etc) are passed to g++. Could you find out where this > > 'star' comes from? Can you build other extension modules than of scipy? > > I've had similiar problems compiling on Tru64. The -expect_unresolved > "*" (note the quotes) comes from Python's Makefile -- that option is > needed for the linker to work properly. The quotes are getting stripped > off somewhere. Thanks for the hint. The quotes get stripped off in distutils.util.split_quoted function. I have implemented a workaround to this issue in scipy_distutils. Now the quotes are preserved when a quoted word does not contain any white-space. So, update scipy from cvs and try again. Let us know if the patch works.
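(For reference, the quote-stripping discussed here is easy to reproduce with the stock distutils helper Pearu names; the snippet below is only an illustration, in Python 2.3-era syntax, and the file name in the command string is made up.)

  # distutils.util.split_quoted splits a command line into words and drops
  # the quote characters, so '"*"' comes back as a bare '*' that a later
  # shell invocation can expand to every file in the current directory.
  from distutils.util import split_quoted

  cmd = 'g++ -shared -expect_unresolved "*" vq_wrap.o'
  print split_quoted(cmd)
  # ['g++', '-shared', '-expect_unresolved', '*', 'vq_wrap.o']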
HTH, Pearu From nwagner at mecha.uni-stuttgart.de Fri May 21 03:58:38 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 May 2004 09:58:38 +0200 Subject: [SciPy-user] AttributeError: keys Message-ID: <40ADB6AE.4070501@mecha.uni-stuttgart.de>

  Python 2.3.3 (#1, Apr 6 2004, 01:47:39)
  [GCC 3.3.3 (SuSE Linux)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> from scipy import *
  >>> import IPython
  >>> a = zeros((2,2),Float)
  >>> a
  array([[ 0.,  0.],
         [ 0.,  0.]])
  >>> who (a)
  Traceback (most recent call last):
    File "<stdin>", line 1, in ?
    File "/usr/lib/python2.3/site-packages/scipy/common.py", line 253, in who
      for name in vardict.keys():
  AttributeError: keys
  >>>

From pearu at scipy.org Fri May 21 06:17:38 2004 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 21 May 2004 05:17:38 -0500 (CDT) Subject: [SciPy-user] AttributeError: keys In-Reply-To: <40ADB6AE.4070501@mecha.uni-stuttgart.de> References: <40ADB6AE.4070501@mecha.uni-stuttgart.de> Message-ID: On Fri, 21 May 2004, Nils Wagner wrote: > Python 2.3.3 (#1, Apr 6 2004, 01:47:39) > [GCC 3.3.3 (SuSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> from scipy import * > >>> import IPython > >>> a = zeros((2,2),Float) > >>> a > array([[ 0., 0.], > [ 0., 0.]]) > >>> who (a) > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib/python2.3/site-packages/scipy/common.py", line 253, in who > for name in vardict.keys(): > AttributeError: keys Read help(who). So try

  who()
  who({'a': a})

who(a) could never work in Python as it does in Matlab (who('a')). However, who(a=a) could. Regards, Pearu From Jonathan.Peirce at nottingham.ac.uk Fri May 21 09:34:28 2004 From: Jonathan.Peirce at nottingham.ac.uk (Jon Peirce) Date: Fri, 21 May 2004 14:34:28 +0100 Subject: [SciPy-user] distribution.fit() methods In-Reply-To: References: Message-ID: <40AE0564.10706@psychology.nottingham.ac.uk> Hi there, I'm trying to fit a weibull function to some data using Scipy. It looks like I should be able to use exponweibull_gen.fit but I'm a bit puzzled about the arguments. According to the code the arguments (data, *args, **kwds) are needed, where *args seems to contain the shape parameters for the fit and **kwds contains the location and scale params. But if I'm fitting a function, surely I want to give xData and yData and *not* shape arguments (or maybe a first guess at those). Does anyone know how to use these methods? Jon This message has been scanned but we cannot guarantee that it and any attachments are free from viruses or other damaging content: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. From jeanluc.menut at free.fr Fri May 21 10:28:07 2004 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Fri, 21 May 2004 16:28:07 +0200 Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: References: <40AB6727.6090000@free.fr> <20040519181908.GA11465@arbutus.physics.mcmaster.ca> Message-ID: <40AE11F7.9010604@free.fr> hello, > Thanks for the hint. The quotes get stripped off in > distutils.util.split_quoted function. I have implemented a workaround to > this issue in scipy_distutils. Now the quotes are preserved when a quoted > word does not contain any white-space. > > So, update scipy from cvs and try again. Let us know if the patch works. I've installed newer versions of python and numeric (2.3.3 and 23.1).
When I try to build scipy (the latest CVS version) I have another error message:

  error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -Ibuild/temp.osf1-V5.1-alpha-2.3/config_pygist -ILib/xplt/src/gist -ILib/xplt/src/play -ILib/xplt/src/play/unix -I/usr/include -I/usr/include -I/home2/menut/prg/python2.3/include/python2.3 -c Lib/xplt/pygist/gistCmodule.c -o build/temp.osf1-V5.1-alpha-2.3/Lib/xplt/pygist/gistCmodule.o -DGISTPATH="\"/home2/menut/prg/python2.3/lib/python2.3/site-packages/scipy/xplt/gistdata\"" -DNO_EXP10" failed with exit status 1
  1.77u 2.16s 1:00 6% 20+27k 210+332io 0pf+0w

In fact I had this message for the cluster module too, but I've moved its directory to cluster.sav for testing. best regards, jean-luc From pearu at scipy.org Fri May 21 10:59:10 2004 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 21 May 2004 09:59:10 -0500 (CDT) Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: <40AE11F7.9010604@free.fr> References: <40AB6727.6090000@free.fr> <20040519181908.GA11465@arbutus.physics.mcmaster.ca> <40AE11F7.9010604@free.fr> Message-ID: On Fri, 21 May 2004, Jean-Luc Menut wrote: > I' ve installed newer version of python and numeric (2.3.3 and 23.1). > When i try to build scipy (the last cvs version) i have another error > message : > > > error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -Ibuild/temp.osf1-V5.1-alpha-2.3/config_pygist > -ILib/xplt/src/gist -ILib/xplt/src/play -ILib/xplt/src/play/unix > -I/usr/include -I/usr/include > -I/home2/menut/prg/python2.3/include/python2.3 -c > Lib/xplt/pygist/gistCmodule.c -o > build/temp.osf1-V5.1-alpha-2.3/Lib/xplt/pygist/gistCmodule.o > -DGISTPATH="\"/home2/menut/prg/python2.3/lib/python2.3/site-packages/scipy/xplt/gistdata\"" > -DNO_EXP10" failed with exit status 1 > 1.77u 2.16s 1:00 6% 20+27k 210+332io 0pf+0w It is not a real error message. It just says that the above command line failed. The real error message should have been shown earlier. Anyways, it looks like xplt is the only scipy module that fails to build. You can disable it by adding 'xplt' to the ignore_packages list at the end of the setup.py file; at least you could then install and test scipy. Pearu From jeanluc.menut at free.fr Fri May 21 12:01:40 2004 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Fri, 21 May 2004 18:01:40 +0200 Subject: [SciPy-user] Problem when building scipy 0.3 for alpha OSF1 In-Reply-To: References: <40AB6727.6090000@free.fr> <20040519181908.GA11465@arbutus.physics.mcmaster.ca> <40AE11F7.9010604@free.fr> Message-ID: <40AE27E4.70205@free.fr> Pearu Peterson wrote: > Anyways, it's looks like xplt is the only scipy module that fails to > build. You can disable it by adding 'xplt' to ignore_packags list at the > end of setup.py file, at least you could then install and test scipy. I have 6 packages which show the same kind of message: 'cluster', 'xplt', 'stats', 'signal', 'sparse', 'optimize' (1). I can build and install scipy without these packages, but when I write "import scipy" under python I have this message:

  Python 2.3.3 (#8, May 21 2004, 12:13:46)
  [GCC 3.3.3] on osf1V5
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import scipy
  Floating point exception

and I am returned to the shell.
Jean-Luc (1): I copy here the error messages for cluster only : c++ options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' creating build/temp.osf1-V5.1-alpha-2.3/Lib/cluster creating build/temp.osf1-V5.1-alpha-2.3/Lib/cluster/src compile options: '-I/home2/menut/prg/python2.3/include/python2.3 -c' c++: Lib/cluster/src/vq_wrap.cpp In file included from /home2/menut/prg/python2.3/include/python2.3/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /home2/menut/prg/python2.3/include/python2.3/pyconfig.h:840:1: warning: "_OSF_SOURCE" redefined In file included from /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/string.h:61, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/standards.h:206:1: warning: this is the location of the previous definition In file included from /home2/menut/prg/python2.3/include/python2.3/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /home2/menut/prg/python2.3/include/python2.3/pyconfig.h:847:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/string.h:61, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/standards.h:198:1: warning: this is the location of the previous definition In file included from /home2/menut/prg/python2.3/include/python2.3/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /home2/menut/prg/python2.3/include/python2.3/pyconfig.h:859:1: warning: "_XOPEN_SOURCE" redefined In file included from /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/string.h:61, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/standards.h:188:1: warning: this is the location of the previous definition In file included from /home2/menut/prg/python2.3/include/python2.3/pyport.h:99, from /home2/menut/prg/python2.3/include/python2.3/Python.h:48, from Lib/cluster/src/vq_wrap.cpp:176: /usr/include/sys/time.h:71: error: 'suseconds_t' is used as a type, but is not defined as a type. In file included from /home2/menut/prg/python2.3/include/python2.3/pyport.h:157, from /home2/menut/prg/python2.3/include/python2.3/Python.h:48, from Lib/cluster/src/vq_wrap.cpp:176: /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/sys/stat.h:213: error: ' blksize_t' is used as a type, but is not defined as a type. /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/sys/stat.h:213: error: ' blkcnt_t' is used as a type, but is not defined as a type. 
In file included from Lib/cluster/src/vq_wrap.cpp:499: Lib/cluster/src/vq.h:57:7: warning: no newline at end of file In file included from /home2/menut/prg/python2.3/include/python2.3/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /home2/menut/prg/python2.3/include/python2.3/pyconfig.h:840:1: warning: "_OSF_SOURCE" redefined In file included from /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/string.h:61, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/standards.h:206:1: warning: this is the location of the previous definition In file included from /home2/menut/prg/python2.3/include/python2.3/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /home2/menut/prg/python2.3/include/python2.3/pyconfig.h:847:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/string.h:61, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/standards.h:198:1: warning: this is the location of the previous definition In file included from /home2/menut/prg/python2.3/include/python2.3/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /home2/menut/prg/python2.3/include/python2.3/pyconfig.h:859:1: warning: "_XOPEN_SOURCE" redefined In file included from /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/string.h:61, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/standards.h:188:1: warning: this is the location of the previous definition In file included from /home2/menut/prg/python2.3/include/python2.3/pyport.h:99, from /home2/menut/prg/python2.3/include/python2.3/Python.h:48, from Lib/cluster/src/vq_wrap.cpp:176: /usr/include/sys/time.h:71: error: 'suseconds_t' is used as a type, but is not defined as a type. In file included from /home2/menut/prg/python2.3/include/python2.3/pyport.h:157, from /home2/menut/prg/python2.3/include/python2.3/Python.h:48, from Lib/cluster/src/vq_wrap.cpp:176: /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/sys/stat.h:213: error: ' blksize_t' is used as a type, but is not defined as a type. /usr/local/lib/gcc-lib/alphaev67-dec-osf5.1/3.3.3/include/sys/stat.h:213: error: ' blkcnt_t' is used as a type, but is not defined as a type. In file included from Lib/cluster/src/vq_wrap.cpp:499: Lib/cluster/src/vq.h:57:7: warning: no newline at end of file error: Command "c++ -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/home2/menut/prg/python2.3/include/python2.3 -c Lib/cluster/src/vq_wrap.cpp -o build/temp.osf1-V5.1-alpha-2.3/Lib/cluster/src/vq_wrap.o" failed with exit status 1 From oliphant at ee.byu.edu Fri May 21 15:40:13 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Fri, 21 May 2004 13:40:13 -0600 Subject: [SciPy-user] distribution.fit() methods In-Reply-To: <40AE0564.10706@psychology.nottingham.ac.uk> References: <40AE0564.10706@psychology.nottingham.ac.uk> Message-ID: <40AE5B1D.5060501@ee.byu.edu> Jon Peirce wrote: > Hi there, > > I'm trying to fit a weibull function to some data using Scipy. It looks > like I should be able to use exponweibull_gen.fit but I'm a bit puzzled > about the arguments. According to the code the arguments (data, *args, > **kwds) are needed, where *args seems to contain the shape parameters > for the fit and **kwds contains the location and scale params. > > But if I'm fitting a function, surely I want to give xData and yData and > *not* shape arguments (or maybe a first guess at those). Does anyone > know how to use these methods? > These fit methods are a fairly thin wrapper around optimize.fmin. 
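A concrete sketch of the two usages (not from the original message; the shape values, sample size, and exact keyword names are assumptions for this scipy release, and xData/yData below are just placeholders):

  from scipy import stats, optimize

  # Maximum-likelihood fit from raw variates suspected to be Weibull-like.
  # The shape values (1.0, 2.5) and the sample size are invented.
  data = stats.exponweib.rvs(1.0, 2.5, size=1000)   # simulated variates
  params = stats.exponweib.fit(data, 1.0, 1.0)      # initial guesses for the two shapes
  print params                                      # MLE estimates of shapes, loc, scale

  # Fitting (x, y) samples of a curve is a different job: wrap exponweib.pdf
  # in a residual and minimize it yourself (xData/yData are placeholders).
  def resid(p, x, y):
      a, c, loc, scale = p
      return sum((y - stats.exponweib.pdf(x, a, c, loc=loc, scale=scale))**2)

  # popt = optimize.fmin(resid, [1.0, 1.0, 0.0, 1.0], args=(xData, yData))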
The intended use is to take random variates you suspect as being weibull distributed and find the maximum likelihood estimates of the shape, location, and scale parameters. Notice, it does not try to fit (x,y) coordinates to a weibull function. To do that you should just use an optimization method and with a fit function that involves the exponweib.pdf function. Please clarify what it is that you are trying to do. Also, the XXXXX_gen classes are not intended for general use unless you know what you are doing. What you want is exponweib.XXX --- this gives you a realization of the XXXXX_gen class. -Travis O. From jonathan.peirce at nottingham.ac.uk Sat May 22 09:33:02 2004 From: jonathan.peirce at nottingham.ac.uk (Jon Peirce) Date: Sat, 22 May 2004 14:33:02 +0100 Subject: [SciPy-user] Re: distribution.fit() methods In-Reply-To: <40AE5B1D.5060501@ee.byu.edu> References: <40AE0564.10706@psychology.nottingham.ac.uk> <40AE5B1D.5060501@ee.byu.edu> Message-ID: Travis E. Oliphant wrote: > Jon Peirce wrote: > >> Hi there, >> >> I'm trying to fit a weibull function to some data using Scipy. It >> looks like I should be able to use exponweibull_gen.fit but I'm a bit >> puzzled about the arguments. According to the code the arguments >> (data, *args, **kwds) are needed, where *args seems to contain the >> shape parameters for the fit and **kwds contains the location and >> scale params. >> >> But if I'm fitting a function, surely I want to give xData and yData >> and *not* shape arguments (or maybe a first guess at those). Does >> anyone know how to use these methods? >> > > These fit methods are a fairly thin wrapper around optimize.fmin. The > intended use is to take random variates you suspect as being weibull > distributed and find the maximum likelihood estimates of the shape, > location, and scale parameters. > > Notice, it does not try to fit (x,y) coordinates to a weibull function. > To do that you should just use an optimization method and with a fit > function that involves the exponweib.pdf function. > > Please clarify what it is that you are trying to do. Also, the > XXXXX_gen classes are not intended for general use unless you know what > you are doing. What you want is exponweib.XXX --- this gives you a > realization of the XXXXX_gen class. > > -Travis O. Thanks (and sorry my post was kinda garbled - head wasn't screwed on tight enough yesterday!!) I am now using exponweib.pdf and am able to use optimize.fmin to get the appropriate shape parameters - that's what I thought xxx.fit was doing, but I see now that it expects a raw variate rather than the pdf or cdf coordinates. chrs, Jon From nwagner at mecha.uni-stuttgart.de Mon May 24 04:29:20 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 24 May 2004 10:29:20 +0200 Subject: [SciPy-user] dpi size of an exisiting graphics window Message-ID: <40B1B260.9050800@mecha.uni-stuttgart.de> Dear experts, Is it possible to change the dpi size of an existing graphics window ? Nils From ariciputi at pito.com Tue May 25 06:40:04 2004 From: ariciputi at pito.com (Andrea Riciputi) Date: Tue, 25 May 2004 12:40:04 +0200 Subject: [SciPy-user] Data convolution. Message-ID: Hi, I need to perform convolution on a set of two dimensional data resulting from a numerical simulation. Since my numerical simulation is written in Python and C (for CPU intensive task), I'm wondering if scipy is able to fit my needs. I've read the tutorial but it's quite terse about the topic, further I've found a reference to FFT section that has not been written. 
Could anyone help me with some hints? Thanks in advance, Andrea. From nwagner at mecha.uni-stuttgart.de Wed May 26 06:58:11 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 May 2004 12:58:11 +0200 Subject: [SciPy-user] Schur complement Message-ID: <40B47843.2000005@mecha.uni-stuttgart.de> Dear experts, I am trying to solve a transcendental eigenvalue problem via the Schur complement. However, I have some trouble with optimize.fsolve. How can I solve this problem ? Thanks in advance. Nils Traceback (most recent call last): File "fschur.py", line 58, in ? S = optimize.fsolve(func,x0) File "/usr/lib/python2.3/site-packages/scipy/optimize/minpack.py", line 80, in fsolve check_func(func,x0,args,n,(n,)) File "/usr/lib/python2.3/site-packages/scipy/optimize/common_routines.py", line 13, in check_func res = myasarray(apply(thefunc,args)) File "fschur.py", line 56, in func return schur(T(x)) File "fschur.py", line 8, in T tmp[0,0] = -sin(x) ValueError: array too large for destination >>> -------------- next part -------------- A non-text attachment was scrubbed... Name: fschur.py Type: text/x-python Size: 967 bytes Desc: not available URL: From nwagner at mecha.uni-stuttgart.de Wed May 26 08:00:58 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 May 2004 14:00:58 +0200 Subject: [SciPy-user] brentq, brenth in case of complex roots Message-ID: <40B486FA.6060705@mecha.uni-stuttgart.de> Dear experts, How can I apply brentq, brenth etc. in case of complex functions ? Nils From pearu at scipy.org Wed May 26 10:46:07 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 26 May 2004 09:46:07 -0500 (CDT) Subject: [SciPy-user] Schur complement In-Reply-To: <40B47843.2000005@mecha.uni-stuttgart.de> References: <40B47843.2000005@mecha.uni-stuttgart.de> Message-ID: On Wed, 26 May 2004, Nils Wagner wrote: > I am trying to solve a transcendental eigenvalue problem via the Schur > complement. However, I have some trouble with optimize.fsolve. > > How can I solve this problem ? In short, use def func(x): return abs(schur(T(x[0]))) (when using optimize.fsolve then the argument x is rank-1 array and fsolve assumes that func is a real-valued function) Pearu From nwagner at mecha.uni-stuttgart.de Wed May 26 10:52:20 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 May 2004 16:52:20 +0200 Subject: [SciPy-user] Schur complement In-Reply-To: References: <40B47843.2000005@mecha.uni-stuttgart.de> Message-ID: <40B4AF24.7020603@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > On Wed, 26 May 2004, Nils Wagner wrote: > > >>I am trying to solve a transcendental eigenvalue problem via the Schur >>complement. However, I have some trouble with optimize.fsolve. >> >>How can I solve this problem ? > > > In short, use > > def func(x): > return abs(schur(T(x[0]))) > > (when using optimize.fsolve then the argument x is rank-1 array and > fsolve assumes that func is a real-valued function) > > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > I have modified my program. def func2(x): return abs(schur(T(x))) r6 = optimize.fsolve(func2,x0) Now I get Traceback (most recent call last): File "fschur.py", line 77, in ? 
r6 = optimize.fsolve(func2,x0) File "/usr/lib/python2.3/site-packages/scipy/optimize/minpack.py", line 80, in fsolve check_func(func,x0,args,n,(n,)) File "/usr/lib/python2.3/site-packages/scipy/optimize/common_routines.py", line 13, in check_func res = myasarray(apply(thefunc,args)) File "fschur.py", line 63, in func2 return abs(schur(T(x))) File "fschur.py", line 8, in T tmp[0,0] = -sin(x) ValueError: array too large for destination >>> The requirement, i.e. func is real is not mentioned in help (optimize.fsolve). Am I missing something ? Nils fsolve(func, x0, args=(), fprime=None, full_output=0, col_deriv=0, xtol=1.49012e -08, maxfev=0, band=None, epsfcn=0.0, factor=100, diag=None) Find the roots of a function. Description: Return the roots of the (non-linear) equations defined by func(x)=0 given a starting estimate. Inputs: func -- A Python function or method which takes at least one (possibly vector) argument. x0 -- The starting estimate for the roots of func(x)=0. args -- Any extra arguments to func are placed in this tuple. fprime -- A function or method to compute the Jacobian of func with derivatives across the rows. If this is None, the Jacobian will be estimated. full_output -- non-zero to return the optional outputs. col_deriv -- non-zero to specify that the Jacobian function computes derivatives down the columns (faster, because there is no transpose operation). Outputs: (x, {infodict, ier, mesg}) x -- the solution (or the result of the last iteration for an unsuccessful call. infodict -- a dictionary of optional outputs with the keys: 'nfev' : the number of function calls 'njev' : the number of jacobian calls 'fvec' : the function evaluated at the output 'fjac' : the orthogonal matrix, q, produced by the QR facotrization of the final approximate Jacobi matrix, stored column wise. 'r' : upper triangular matrix produced by QR factorization of same matrix. 'qtf' : the vector (transpose(q) * fvec). ier -- an integer flag. If it is equal to 1 the solution was found. If it is not equal to 1, the solution was not found and the following message gives more information. mesg -- a string message giving information about the cause of failure. Extended Inputs: xtol -- The calculation will terminate if the relative error between two consecutive iterates is at most xtol. maxfev -- The maximum number of calls to the function. If zero, then 100*(N+1) is the maximum where N is the number of elements in x0. band -- If set to a two-sequence containing the number of sub- and superdiagonals within the band of the Jacobi matrix, From pearu at scipy.org Wed May 26 11:14:22 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 26 May 2004 10:14:22 -0500 (CDT) Subject: [SciPy-user] Schur complement In-Reply-To: <40B4AF24.7020603@mecha.uni-stuttgart.de> References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> Message-ID: On Wed, 26 May 2004, Nils Wagner wrote: > > In short, use > > > > def func(x): > > return abs(schur(T(x[0]))) ^^^ > > > I have modified my program. > > def func2(x): > return abs(schur(T(x))) > > r6 = optimize.fsolve(func2,x0) > > Now I get > > Traceback (most recent call last): > ValueError: array too large for destination Compare my suggestion with your version of func2 (notice ^^^). > The requirement, i.e. func is real is not mentioned in help > (optimize.fsolve). Am I missing something ? Notice that it also does not mention that func can be complex valued. Anyways, fsolve assumes real-valued functions - `.`. 
But it is also simple to use fsolve for complex-valued functions: make sure that func returns real and imaginary parts of a function value, also an initial value to fsolve should be specified as a pair of real and imaginary parts of the value. Pearu From nwagner at mecha.uni-stuttgart.de Wed May 26 11:30:11 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 May 2004 17:30:11 +0200 Subject: [SciPy-user] Schur complement In-Reply-To: References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> Message-ID: <40B4B803.8090907@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > On Wed, 26 May 2004, Nils Wagner wrote: > > >>>In short, use >>> >>>def func(x): >>> return abs(schur(T(x[0]))) > > ^^^ > > >>I have modified my program. >> >>def func2(x): >> return abs(schur(T(x))) >> >>r6 = optimize.fsolve(func2,x0) >> >>Now I get >> >>Traceback (most recent call last): >>ValueError: array too large for destination > > > Compare my suggestion with your version of func2 (notice ^^^). > > >>The requirement, i.e. func is real is not mentioned in help >>(optimize.fsolve). Am I missing something ? > > > Notice that it also does not mention that func can be complex valued. > Anyways, fsolve assumes real-valued functions - `.`. But it is also simple > to use fsolve for complex-valued functions: make sure that func returns real > and imaginary parts of a function value, also an initial value to > fsolve should be specified as a pair of real and imaginary parts of the > value. > Dear Pearu, Thank you very much for your help. Moreover, it would be very kind of you, if you could send me a short example for complex-valued functions as well. Thanks in advance. Nils > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From pearu at scipy.org Wed May 26 11:49:23 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 26 May 2004 10:49:23 -0500 (CDT) Subject: [SciPy-user] Schur complement In-Reply-To: <40B4B803.8090907@mecha.uni-stuttgart.de> References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> <40B4B803.8090907@mecha.uni-stuttgart.de> Message-ID: On Wed, 26 May 2004, Nils Wagner wrote: > Thank you very much for your help. Moreover, it would be very kind of > you, if you could send me a short example for complex-valued functions > as well. In [1]: from scipy import * In [2]: def cfunc(z): return z+2j ...: In [3]: def rfunc2(x): ...: v = cfunc(x[0]+x[1]*1j) ...: return v.real,v.imag ...: In [4]: optimize.fsolve(rfunc2,(3,1)) Out[4]: array([ -8.07793567e-28, -2.00000000e+00]) Pearu From nwagner at mecha.uni-stuttgart.de Wed May 26 11:53:56 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 May 2004 17:53:56 +0200 Subject: [SciPy-user] Schur complement In-Reply-To: References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> <40B4B803.8090907@mecha.uni-stuttgart.de> Message-ID: <40B4BD94.1090207@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > On Wed, 26 May 2004, Nils Wagner wrote: > > >>Thank you very much for your help. Moreover, it would be very kind of >>you, if you could send me a short example for complex-valued functions >>as well. 
> > > In [1]: from scipy import * > > In [2]: def cfunc(z): return z+2j > ...: > > In [3]: def rfunc2(x): > ...: v = cfunc(x[0]+x[1]*1j) > ...: return v.real,v.imag > ...: > > In [4]: optimize.fsolve(rfunc2,(3,1)) > Out[4]: array([ -8.07793567e-28, -2.00000000e+00]) > > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Hopefully my last question with respect to this topic ... Can I apply brentq, brenth in case of complex-valued functions ? Nils From pearu at scipy.org Wed May 26 13:01:44 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 26 May 2004 12:01:44 -0500 (CDT) Subject: [SciPy-user] Schur complement In-Reply-To: <40B4BD94.1090207@mecha.uni-stuttgart.de> References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> <40B4BD94.1090207@mecha.uni-stuttgart.de> Message-ID: On Wed, 26 May 2004, Nils Wagner wrote: > Hopefully my last question with respect to this topic ... > Can I apply brentq, brenth in case of complex-valued functions ? No. These functions can only deal with real-valued functions with one real argument. Pearu From fonnesbeck at mac.com Wed May 26 12:56:31 2004 From: fonnesbeck at mac.com (Christopher Fonnesbeck) Date: Wed, 26 May 2004 12:56:31 -0400 Subject: [SciPy-user] build errors on OSX using IBM XL FORTRAN Message-ID: Using today's cvs update, I get the following error when building on OSX 10.3; seems to be related to XL FORTRAN: compling Fortran sources xlf(f77) options: '-qextname -O5' xlf90(f90) options: '-qextname -O5' xlf90(fix) options: '-qfixed -qextname -O5' creating build/temp.darwin-7.3.0-Power_Macintosh-2.3/Lib/linalg creating build/temp.darwin-7.3.0-Power_Macintosh-2.3/Lib/linalg/src compile options: '-DNO_ATLAS_INFO=3 -Ibuild/src -I/System/Library/Frameworks/Python.framework/Versions/2.3/include/ python2.3 -c' xlf:f77: Lib/linalg/src/fblaswrap.f xlf: 1501-210 command option NO_ATLAS_INFO=3 contains an incorrect subargument xlf: 1501-210 command option NO_ATLAS_INFO=3 contains an incorrect subargument error: Command "xlf -qextname -O5 -DNO_ATLAS_INFO=3 -Ibuild/src -I/System/Library/Frameworks/Python.framework/Versions/2.3/include/ python2.3 -c -c Lib/linalg/src/fblaswrap.f -o build/temp.darwin-7.3.0-Power_Macintosh-2.3/Lib/linalg/src/fblaswrap.o" failed with exit status 40 -- Christopher J. Fonnesbeck ( c h r i s @ f o n n e s b e c k . o r g ) Georgia Cooperative Fish & Wildlife Research Unit, University of Georgia "I don't remember any kind of heaviness ruining my time at Yale." George W. Bush, on the civil rights movement -- Putting http://wecanstopspam.org in your email helps it pass through overzealous spam filters. 
From pearu at scipy.org Wed May 26 13:32:14 2004 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 26 May 2004 12:32:14 -0500 (CDT) Subject: [SciPy-user] build errors on OSX using IBM XL FORTRAN In-Reply-To: References: Message-ID: On Wed, 26 May 2004, Christopher Fonnesbeck wrote: > Using today's cvs update, I get the following error when building on > OSX 10.3; seems to be related to XL FORTRAN: > > compling Fortran sources > xlf(f77) options: '-qextname -O5' > xlf90(f90) options: '-qextname -O5' > xlf90(fix) options: '-qfixed -qextname -O5' > creating build/temp.darwin-7.3.0-Power_Macintosh-2.3/Lib/linalg > creating build/temp.darwin-7.3.0-Power_Macintosh-2.3/Lib/linalg/src > compile options: '-DNO_ATLAS_INFO=3 -Ibuild/src > -I/System/Library/Frameworks/Python.framework/Versions/2.3/include/ > python2.3 -c' > xlf:f77: Lib/linalg/src/fblaswrap.f > xlf: 1501-210 command option NO_ATLAS_INFO=3 contains an incorrect > subargument > xlf: 1501-210 command option NO_ATLAS_INFO=3 contains an incorrect > subargument > error: Command "xlf -qextname -O5 -DNO_ATLAS_INFO=3 -Ibuild/src > -I/System/Library/Frameworks/Python.framework/Versions/2.3/include/ > python2.3 -c -c Lib/linalg/src/fblaswrap.f -o > build/temp.darwin-7.3.0-Power_Macintosh-2.3/Lib/linalg/src/fblaswrap.o" > failed with exit status 40 Try again. Hopefully the patch in CVS fixes this problem. It turns out that -D/-U options to the xlf compiler are not for defining cpp macros... Pearu From fonnesbeck at mac.com Wed May 26 14:06:53 2004 From: fonnesbeck at mac.com (Christopher Fonnesbeck) Date: Wed, 26 May 2004 14:06:53 -0400 Subject: [SciPy-user] build errors on OSX using IBM XL FORTRAN In-Reply-To: References: Message-ID: <77DE7F33-AF3F-11D8-AD48-000A956FDAC0@mac.com> On May 26, 2004, at 1:32 PM, Pearu Peterson wrote: > > Try again. Hopefully the patch in CVS fixes this problem. It turns out > that -D/-U options to xlf compiler are not for defining cpp macros... > > Pearu > This appears to have worked ... thanks. C. -- Christopher J. Fonnesbeck ( c h r i s @ f o n n e s b e c k . o r g ) Georgia Cooperative Fish & Wildlife Research Unit, University of Georgia "I don't remember any kind of heaviness ruining my time at Yale." George W. Bush, on the civil rights movement -- Putting http://wecanstopspam.org in your email helps it pass through overzealous spam filters. From oliphant at ee.byu.edu Wed May 26 14:33:33 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Wed, 26 May 2004 12:33:33 -0600 Subject: [SciPy-user] Data convolution. In-Reply-To: References: Message-ID: <40B4E2FD.9040104@ee.byu.edu> Andrea Riciputi wrote: > Hi, > I need to perform convolution on a set of two dimensional data resulting > from a numerical simulation. Since my numerical simulation is written in > Python and C (for CPU intensive task), I'm wondering if scipy is able to > fit my needs. I've read the tutorial but it's quite terse about the > topic, further I've found a reference to FFT section that has not been written. > Yes, you can do convolution.

  from scipy import *
  info(signal.convolve)

should give more information. But... this is really for convolution where one or both of the inputs are small, for 2-D FIR filtering (say less than 32x32 -- not really sure of the cutoff size). Convolution of large data sets can be done with FFT's (but there will be boundary wrapping issues that you will need to address --- or ignore depending on what you are doing). To get something to compare with,

  c = signal.convolve(a,b)

will do the job.
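For instance, a small self-contained version of that call (illustrative only -- the array sizes, the 5x5 averaging kernel, and the 'same' mode are arbitrary choices, not from the thread):

  from scipy import *

  # Smooth a small 2-D array with a 5x5 moving-average kernel.
  a = ones((64, 64), Float)          # stand-in for the 2-D simulation data
  b = ones((5, 5), Float) / 25.      # normalized averaging kernel
  c = signal.convolve(a, b, 'same')  # third argument selects the output size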
An optional third input will control the size of the resulting output. Post if you have further questions. -Travis O. From ariciputi at pito.com Wed May 26 18:18:30 2004 From: ariciputi at pito.com (Andrea Riciputi) Date: Thu, 27 May 2004 00:18:30 +0200 Subject: [SciPy-user] Data convolution. In-Reply-To: <40B4E2FD.9040104@ee.byu.edu> References: <40B4E2FD.9040104@ee.byu.edu> Message-ID: <9E3C17E8-AF62-11D8-BF3D-000A95C0BC68@pito.com> On 26 May 2004, at 20:33, Travis E. Oliphant wrote: > Yes, you can do convolution. > > from scipy import * > > info(signal.convolve) > > [snip] Thanks, but my data set are typically 1024x1024 so I think I need FFT. Nevertheless I'll look into scipy for testing and verifying my own code. Cheers, Andrea. From oliphant at ee.byu.edu Wed May 26 19:01:24 2004 From: oliphant at ee.byu.edu (Travis E. Oliphant) Date: Wed, 26 May 2004 17:01:24 -0600 Subject: [SciPy-user] Data convolution. In-Reply-To: <9E3C17E8-AF62-11D8-BF3D-000A95C0BC68@pito.com> References: <40B4E2FD.9040104@ee.byu.edu> <9E3C17E8-AF62-11D8-BF3D-000A95C0BC68@pito.com> Message-ID: <40B521C4.6070306@ee.byu.edu> Andrea Riciputi wrote: > On 26 May 2004, at 20:33, Travis E. Oliphant wrote: > >> Yes, you can do convolution. >> >> from scipy import * >> >> info(signal.convolve) >> >> [snip] > > > Thanks, but my data set are typically 1024x1024 so I think I need FFT. > Nevertheless I'll look into scipy for testing and verifying my own code. > info(fft) will give you loads of information on how to use fft's to do convolution. It could be as simple as ifft2(fft2(a)*fft2(b)) depending on your boundary issues. -Travis O. > Cheers, > Andrea. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From nwagner at mecha.uni-stuttgart.de Thu May 27 03:49:29 2004 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 27 May 2004 09:49:29 +0200 Subject: [SciPy-user] Rootfinding procedures for Vector-valued functions In-Reply-To: References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> <40B4BD94.1090207@mecha.uni-stuttgart.de> Message-ID: <40B59D89.8000303@mecha.uni-stuttgart.de> Pearu Peterson wrote: > > On Wed, 26 May 2004, Nils Wagner wrote: > > >>Hopefully my last question with respect to this topic ... >>Can I apply brentq, brenth in case of complex-valued functions ? > > > No. These functions can only deal with real-valued functions with one > real argument. > > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Is it planned to upgrade the functions with respect to complex-valued functions and vector-valued functions respectively ? Nils From stephen.walton at csun.edu Thu May 27 17:45:45 2004 From: stephen.walton at csun.edu (Stephen Walton) Date: Thu, 27 May 2004 14:45:45 -0700 Subject: [SciPy-user] Data convolution. In-Reply-To: <40B521C4.6070306@ee.byu.edu> References: <40B4E2FD.9040104@ee.byu.edu> <9E3C17E8-AF62-11D8-BF3D-000A95C0BC68@pito.com> <40B521C4.6070306@ee.byu.edu> Message-ID: <1085694345.6683.12.camel@s123n051.sfo.csun.edu> On Wed, 2004-05-26 at 16:01, Travis E. Oliphant wrote: > Andrea Riciputi wrote: > > > > Thanks, but my data set are typically 1024x1024 so I think I need FFT. > > Nevertheless I'll look into scipy for testing and verifying my own code. 
> > It could be as simple as > > ifft2(fft2(a)*fft2(b)) I've got quite a bit of experience doing this with astronomical images. Travis is right about boundary issues; you almost certainly want to extend your images to twice their original size in each direction (four times their original area) with the mean of your images. In my case, when I do this with full disk images of the Sun, I extend the image with a model aureole to avoid edge effects. This may be overkill. All my code for doing this is in Fortran and built on top of the PORT library FFT routines, so you probably don't want a copy. -- Stephen Walton Dept. of Physics & Astronomy, Cal State Northridge From cookedm at physics.mcmaster.ca Thu May 27 23:24:34 2004 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 27 May 2004 23:24:34 -0400 Subject: [SciPy-user] Rootfinding procedures for Vector-valued functions In-Reply-To: <40B59D89.8000303@mecha.uni-stuttgart.de> References: <40B47843.2000005@mecha.uni-stuttgart.de> <40B4AF24.7020603@mecha.uni-stuttgart.de> <40B4BD94.1090207@mecha.uni-stuttgart.de> <40B59D89.8000303@mecha.uni-stuttgart.de> Message-ID: <20040528032434.GA21308@arbutus.physics.mcmaster.ca> On Thu, May 27, 2004 at 09:49:29AM +0200, Nils Wagner wrote: > Pearu Peterson wrote: > > > >On Wed, 26 May 2004, Nils Wagner wrote: > > > > > >>Hopefully my last question with respect to this topic ... > >>Can I apply brentq, brenth in case of complex-valued functions ? > > > > > >No. These functions can only deal with real-valued functions with one > >real argument. > > > >Pearu > > Is it planned to upgrade the functions with respect to complex-valued > functions and vector-valued functions respectively ? How? The Brent routines are one-dimensional zero finders. Multidimensional (which includes complex) zero finders are much different beasts from one-dimensional routines -- in general, you don't have a bracket around the zero. You want scipy.optimize.fsolve for multidimensional problems. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From lanceboyle at cwazy.co.uk Fri May 28 00:46:41 2004 From: lanceboyle at cwazy.co.uk (Lance Boyle) Date: Thu, 27 May 2004 21:46:41 -0700 Subject: [SciPy-user] Data convolution. In-Reply-To: <1085694345.6683.12.camel@s123n051.sfo.csun.edu> References: <40B4E2FD.9040104@ee.byu.edu> <9E3C17E8-AF62-11D8-BF3D-000A95C0BC68@pito.com> <40B521C4.6070306@ee.byu.edu> <1085694345.6683.12.camel@s123n051.sfo.csun.edu> Message-ID: <031E40B9-B062-11D8-B39D-003065F93FF0@cwazy.co.uk> When doing (linear) convolutions (of any dimension) with FFTs, does SciPy take care of aliasing in the "other" domain, usually time domain and spatial domain? This is done by zero-padding and with the so-called overlap-add and overlap-save methods. This is a different issue from edge effects (as I understand the earlier posters) which are Gibbs phenomena. Jerry On May 27, 2004, at 2:45 PM, Stephen Walton wrote: > On Wed, 2004-05-26 at 16:01, Travis E. Oliphant wrote: >> Andrea Riciputi wrote: >>> >>> Thanks, but my data set are typically 1024x1024 so I think I need >>> FFT. >>> Nevertheless I'll look into scipy for testing and verifying my own >>> code. >>> > >> It could be as simple as >> >> ifft2(fft2(a)*fft2(b)) > > I've got quite a bit of experience doing this with astronomical images.
> Travis is right about boundary issues; you almost certainly want to > extend your images to twice their original size in each direction (four > times their original area) with the mean of your images. In my case, > when I do this with full disk images of the Sun, I extend the image > with > a model aureole to avoid edge effects. This may be overkill. > > All my code for doing this is in Fortran and built on top of the PORT > library FFT routines, so you probably don't want a copy. > > -- > Stephen Walton > Dept. of Physics & Astronomy, Cal State Northridge > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From oliphant at ee.byu.edu Fri May 28 13:49:27 2004 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 28 May 2004 11:49:27 -0600 Subject: [SciPy-user] Data convolution. In-Reply-To: <031E40B9-B062-11D8-B39D-003065F93FF0@cwazy.co.uk> References: <40B4E2FD.9040104@ee.byu.edu> <9E3C17E8-AF62-11D8-BF3D-000A95C0BC68@pito.com> <40B521C4.6070306@ee.byu.edu> <1085694345.6683.12.camel@s123n051.sfo.csun.edu> <031E40B9-B062-11D8-B39D-003065F93FF0@cwazy.co.uk> Message-ID: <40B77BA7.8070901@ee.byu.edu> Lance Boyle wrote: > When doing (linear) convolutions (of any dimension) with FFTs, does > SciPy take care of aliasing in the "other" domain, usually time domain > and spatial domain? This is done by zero-padding and with the > so-called overlap-add and overlap-save methods. This is a different > issue from edge effects (as I understand the earlier posters) which > are Gibbs phenomena. > We could add an fft-based convolution pretty easily (and would probably be worth it). But no, boundary effects of using fft's are different than Gibbs phenomena. You can ameliorate the boundary effects using various schemes as has been alluded to (which one you choose depends on how anxious you are about the boundaries). -Travis O.
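A minimal sketch of the zero-padded FFT convolution discussed in this thread (illustrative only: the array contents and sizes are invented, this is the plain zero-padding scheme rather than overlap-add/overlap-save, and it assumes the top-level fft2/ifft2 names used earlier are available after from scipy import *):

  from scipy import *

  # Zero-pad both inputs so the circular convolution computed by the FFTs
  # equals the linear one, pushing the wrap-around into the padding
  # instead of aliasing back onto the data.
  a = ones((64, 64), Float)        # stand-in for the data
  b = ones((5, 5), Float) / 25.    # stand-in for the kernel

  M = a.shape[0] + b.shape[0] - 1  # padded sizes needed for a full
  N = a.shape[1] + b.shape[1] - 1  # linear convolution

  ap = zeros((M, N), Float)
  ap[:a.shape[0], :a.shape[1]] = a
  bp = zeros((M, N), Float)
  bp[:b.shape[0], :b.shape[1]] = b

  c = ifft2(fft2(ap) * fft2(bp)).real   # full linear convolution, shape (M, N)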